[{"categories":["Java"],"contents":"In this article, we will get familiar with JUnit 5 functional interfaces. JUnit 5 significantly advanced from its predecessors. Features like functional interfaces can greatly simplify our work once we grasp their functionality.\n Example Code This article is accompanied by a working code example on GitHub. Quick Introduction to Java Functional Interfaces Functional interfaces are a fundamental concept in Java functional programming. Java 8 specifically designed them to allow the usage of lambda expressions or method references while processing data streams. In the Java API, specifically in the java.util.function package, you will find a collection of functional interfaces. The main characteristic of a functional interface is that it contains only one abstract method. Although it can have default methods and static methods, these do not count towards the single abstract method requirement. Functional interfaces serve as the targets for lambda expressions or method references.\nJUnit 5 Functional Interfaces JUnit functional interfaces belong to the org.junit.jupiter.api.function package. It defines three functional interfaces: Executable, ThrowingConsumer\u0026lt;T\u0026gt; and ThrowingSupplier\u0026lt;T\u0026gt;. We typically use them with the Assertions utility class.\nKnowing how these interfaces work can greatly improve your testing methods. They make it easier to write and comprehend tests, reducing the amount of repetitive code needed for handling exceptions. 
By using these interfaces, you can describe complex test scenarios more clearly and concisely.\nLet\u0026rsquo;s learn how to use these functional interfaces.\nUsing Executable Executable is a functional interface that enables the implementation of any generic block of code that may potentially throw a Throwable.\nJUnit 5 defines the Executable interface as follows:\n@FunctionalInterface public interface Executable { void execute() throws Throwable; } The Executable interface defines a single method called execute, which does not have any parameters and does not return anything. It can throw different types of exceptions, making it flexible for handling exceptional situations.\nIt is particularly useful for writing tests to validate if specific code paths throw specific exceptions.\nOne common scenario is to use the Executable with the Assertions.assertThrows() method. This method takes an Executable as an argument, executes it, and checks if it throws the expected exception.\nThe Assertions class provides many assertion methods accepting implementations of the Executable interface:\nstatic void assertAll(Executable... 
executables) // other variants of assertAll accepting executables  static void assertDoesNotThrow(Executable executable) // other variants of assertDoesNotThrow accepting executables  static \u0026lt;T extends Throwable\u0026gt; T assertThrows(Class\u0026lt;T\u0026gt; expectedType, Executable executable) // other variants of assertThrows accepting executables  static \u0026lt;T extends Throwable\u0026gt; T assertThrowsExactly(Class\u0026lt;T\u0026gt; expectedType, Executable executable) // other variants of assertThrowsExactly accepting executables  static void assertTimeout(Duration timeout, Executable executable) // other variants of assertTimeout accepting executables  static void assertTimeoutPreemptively(Duration timeout, Executable executable) // other variants of assertTimeoutPreemptively accepting executables Let\u0026rsquo;s learn about these methods one by one.\nUsing Executables in assertAll() The Assertions.assertAll() method asserts that all supplied executables do not throw exceptions.\nLet\u0026rsquo;s first define the executables which we\u0026rsquo;ll use in our examples:\npublic class ExecutableTest { private List\u0026lt;Long\u0026gt; numbers = Arrays.asList(100L, 200L, 50L, 300L); private Executable sorter = () -\u0026gt; { TimeUnit.SECONDS.sleep(2); numbers.sort(Long::compareTo); }; private Executable checkSorting = () -\u0026gt; assertEquals(List.of(50L, 100L, 200L, 300L), numbers); private Executable noChanges = () -\u0026gt; assertEquals(List.of(100L, 200L, 50L, 300L), numbers); // tests } In the ExecutableTest class, we define several tests to demonstrate the usage of the Executable functional interface with JUnit\u0026rsquo;s timeout assertions.\nWe start by initializing a list of Long numbers (numbers) and defining two Executable lambdas: sorter and checkSorting. The sorter lambda simulates a time-consuming operation by sleeping for 2 seconds and then sorting the list. 
The checkSorting lambda verifies that the list is in the correct sort order. Additionally, we define another Executable lambda, noChanges, which checks that the list remains in its initial unsorted state.\nConsider the following example that shows how to use assertAll with an executable:\n@ParameterizedTest @CsvSource({\u0026#34;1,1,2,Hello,H,bye,2,byebye\u0026#34;, \u0026#34;4,5,9,Good,Go,Go,-10,\u0026#34;, \u0026#34;10,21,31,Team,Tea,Stop,-2,\u0026#34;}) void testAssertAllWithExecutable(int num1, int num2, int sum, String input, String prefix, String arg, int count, String result) { assertAll( () -\u0026gt; assertEquals(sum, num1 + num2), () -\u0026gt; assertTrue(input.startsWith(prefix)), () -\u0026gt; { if (count \u0026lt; 0) { assertThrows( IllegalArgumentException.class, () -\u0026gt; { new ArrayList\u0026lt;\u0026gt;(count); }); } else { assertEquals(result, arg.repeat(count)); } }); } In the testAssertAllWithExecutable() method, Assertions.assertAll() groups 3 assertions, each provided as a lambda expression. The first assertion checks if the sum of num1 and num2 equals sum. The second assertion verifies if the string input starts with prefix. Finally, the third assertion checks whether count is less than 0: if so, it asserts that creating an ArrayList with capacity count throws IllegalArgumentException; otherwise, it asserts that the string arg repeated count times equals result.\nUsing Executables in assertDoesNotThrow() The Assertions.assertDoesNotThrow() method asserts that the execution of the supplied executable does not throw any kind of exception. Thus, we can explicitly verify that the logic under test executes without encountering any exception. 
It is a useful assertion method that we can use to test the happy paths.\nHere\u0026rsquo;s a simple example:\n@ParameterizedTest @CsvSource({\u0026#34;one,0,o\u0026#34;, \u0026#34;one,1,n\u0026#34;}) void testAssertDoesNotThrowWithExecutable(String input, int index, char result) { assertDoesNotThrow(() -\u0026gt; assertEquals(input.charAt(index), result)); } The test testAssertDoesNotThrowWithExecutable(), annotated with @ParameterizedTest and @CsvSource, runs with different sets of parameters. The @CsvSource annotation specifies two sets of parameters: (“one”, 0, \u0026lsquo;o\u0026rsquo;) and (“one”, 1, \u0026lsquo;n\u0026rsquo;). For each set of parameters, the test method checks that execution does not throw an exception when verifying that the character at the specified index in the input string matches the expected result.\nUsing Executables in assertThrows() The Assertions.assertThrows() method asserts that the execution of the supplied executable throws an expected exception and returns the exception.\nIf the logic does not throw any exception or throws a different exception, then this method will fail.\nWe can perform additional checks on the exception instance we get in the returned value.\nIt is a useful assertion method we can use to test the failure paths.\nLet\u0026rsquo;s see how we can use assertThrows() in action:\n@Test void testAssertThrowsWithExecutable() { List\u0026lt;String\u0026gt; input = Arrays.asList(\u0026#34;one\u0026#34;, \u0026#34;\u0026#34;, \u0026#34;three\u0026#34;, null, \u0026#34;five\u0026#34;); IllegalArgumentException exception = assertThrows( IllegalArgumentException.class, () -\u0026gt; { for (String value : input) { if (value == null || value.isBlank()) { throw new IllegalArgumentException(\u0026#34;Got invalid value\u0026#34;); } // process values  } }); assertEquals(\u0026#34;Got invalid value\u0026#34;, exception.getMessage()); } The testAssertThrowsWithExecutable() method tests the assertThrows() method with an 
Executable.\nIt begins by creating a list of strings containing the values “one”, “”, “three”, null, and “five”. Using the assertThrows() method, it checks that executing the lambda expression throws an IllegalArgumentException.\nThe lambda iterates through the list, and for each string, it checks if the value is null or blank. If it finds a null or blank value, it throws an IllegalArgumentException with the message “Got invalid value”.\nThe assertion confirms that the exception is thrown and verifies that the exception message matches the expected “Got invalid value”.\nUsing Executables in assertTimeout() The Assertions.assertTimeout() method asserts that execution of the supplied executable completes before the given timeout. The execution can continue even after it exceeds the timeout. The assertion will throw an exception in case it exceeds the timeout duration.\nThis is useful to verify if the execution completes within the expected duration.\nHere is an example showcasing the use of assertTimeout():\n@Test void testAssertTimeoutWithExecutable() { // execution does not complete within expected duration  assertAll( () -\u0026gt; assertThrows( AssertionFailedError.class, () -\u0026gt; assertTimeout(Duration.ofSeconds(1), sorter)), checkSorting); // execution completes within expected duration  assertAll(() -\u0026gt; assertDoesNotThrow( () -\u0026gt; assertTimeout(Duration.ofSeconds(5), sorter)), checkSorting); } The testAssertTimeoutWithExecutable() method demonstrates the usage of assertTimeout() with an Executable.\nIn the first assertAll() block, it checks if sorting the list throws an AssertionFailedError within a 1-second timeout. The sorter executable sleeps for 2 seconds before sorting. Since this exceeds the 1-second timeout, an AssertionFailedError occurs. 
The checkSorting executable confirms the numbers are in ascending order.\nIn the second assertAll() block, it verifies that sorting the list within a 5-second timeout does not throw an exception. The assertTimeout() method still sleeps for 2 seconds before sorting. This time, it falls within the 5-second limit, so there is no exception. The checkSorting executable again confirms the numbers are in ascending order.\nThis shows that the execution continues as expected, but if it takes too long, there is an exception.\nAdditionally, it illustrates combining multiple assertion techniques for verification.\nUsing Executables in assertTimeoutPreemptively() The Assertions.assertTimeoutPreemptively() method asserts that the execution of the supplied executable completes before the given timeout. Furthermore, it aborts the execution of the executable preemptively if it exceeds the timeout.\nLet\u0026rsquo;s see how the preemptive timeout works for the same scenario:\n@Test void testAssertTimeoutPreemptivelyWithExecutable() { // execution does not complete within expected duration  assertAll( () -\u0026gt; assertThrows( AssertionFailedError.class, () -\u0026gt; assertTimeoutPreemptively(Duration.ofSeconds(1), sorter)), noChanges); // execution completes within expected duration  assertAll( () -\u0026gt; assertDoesNotThrow(() -\u0026gt; assertTimeoutPreemptively(Duration.ofSeconds(5), sorter)), checkSorting); } The testAssertTimeoutPreemptivelyWithExecutable() method demonstrates how to use the assertTimeoutPreemptively() method with two executables to verify the sorting of a list of numbers.\nIn the first assertAll() block, the method asserts that the execution throws an AssertionFailedError when sorting the list with a preemptive timeout of 1 second. The assertTimeoutPreemptively() method attempts to sleep for the 2-second delay before sorting the numbers. Since the delay exceeds the 1-second timeout, the assertion fails, and it throws the AssertionFailedError. 
The noChanges executable then verifies that the list remains unchanged, confirming that assertTimeoutPreemptively() preemptively stopped the sorting operation.\nIn the second assertAll() block, the method asserts that we do not get any exception when sorting the list with a preemptive timeout of 5 seconds. The assertTimeoutPreemptively() method again attempts to sleep for the 2-second delay before sorting the numbers. This time, the delay is within the 5-second timeout, so it does not throw any exception. Finally, the checkSorting executable confirms that the execution has sorted the list correctly.\nUsing ThrowingConsumer The ThrowingConsumer interface serves as a functional interface that enables the implementation of a generic block of code capable of consuming an argument and potentially throwing a Throwable. Unlike the Consumer interface, ThrowingConsumer allows for the throwing of any type of exception, including checked exceptions.\nThe ThrowingConsumer interface can be handy in scenarios where we need to test code that might throw checked exceptions. This interface allows us to write more concise and readable tests by handling checked exceptions seamlessly.\nHere are some typical use cases.\nTesting Methods That Throw Checked Exceptions When we have methods that throw checked exceptions, using ThrowingConsumer can simplify our test code. 
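For reference, JUnit 5 declares ThrowingConsumer with a single accept(T) method that may throw any Throwable. The following self-contained sketch re-declares the same shape (so it compiles without JUnit on the classpath) to show why a checked exception that would break a plain java.util.function.Consumer is fine here:

```java
import java.io.IOException;

public class ThrowingConsumerDemo {

    // Same shape as org.junit.jupiter.api.function.ThrowingConsumer,
    // re-declared here so the example runs without JUnit on the classpath.
    @FunctionalInterface
    interface ThrowingConsumer<T> {
        void accept(T t) throws Throwable;
    }

    public static void main(String[] args) throws Throwable {
        // This would NOT compile as a java.util.function.Consumer,
        // because IOException is a checked exception:
        // Consumer<String> c = s -> { throw new IOException(s); };

        // With ThrowingConsumer the same lambda compiles fine:
        ThrowingConsumer<String> validator = s -> {
            if (s.isBlank()) {
                throw new IOException("blank input");
            }
            System.out.println("valid: " + s);
        };

        validator.accept("hello"); // prints "valid: hello"
    }
}
```

Because accept declares throws Throwable, the lambda body can throw checked exceptions directly, which is exactly what lets JUnit's assertion methods accept such lambdas without forcing try-catch wrappers in the test.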
Instead of wrapping our test logic in try-catch blocks, we can use ThrowingConsumer to write clean and straightforward assertions.\nFirst, let\u0026rsquo;s define a validation exception our logic would use:\npublic class ValidationException extends Throwable { public ValidationException(String message) { super(message); } } Now, write a test illustrating the use of ThrowingConsumer:\npublic class ThrowingConsumerTest { @ParameterizedTest @CsvSource({\u0026#34;50,true\u0026#34;, \u0026#34;130,false\u0026#34;, \u0026#34;-30,false\u0026#34;}) void testMethodThatThrowsCheckedException(int percent, boolean valid) { // acceptable percentage range: 0 - 100  ValueRange validPercentageRange = ValueRange.of(0, 100); Function\u0026lt;Integer, String\u0026gt; message = input -\u0026gt; MessageFormat.format( \u0026#34;Percentage {0} should be in range {1}\u0026#34;, input, validPercentageRange.toString()); ThrowingConsumer\u0026lt;Integer\u0026gt; consumer = input -\u0026gt; { if (!validPercentageRange.isValidValue(input)) { throw new ValidationException(message.apply(input)); } }; if (valid) { assertDoesNotThrow(() -\u0026gt; consumer.accept(percent)); } else { assertAll( () -\u0026gt; { ValidationException exception = assertThrows(ValidationException.class, () -\u0026gt; consumer.accept(percent)); assertEquals(exception.getMessage(), message.apply(percent)); }); } } } In this test, we validate percentage values in the range of 0 to 100 using a ThrowingConsumer. The test method testMethodThatThrowsCheckedException() takes an integer percent and a boolean valid as input. We define a ValueRange object to specify the valid range for percentages.\nIf the input percentage is not within the valid range, it throws a ValidationException with an appropriate error message. 
The test covers scenarios for a valid percentage (50), an invalid percentage above the range (130), and an invalid percentage below the range (-30).\nThe assertions verify that the ThrowingConsumer handles both valid and invalid inputs according to the defined percentage range.\nDynamic Tests with ThrowingConsumer JUnit 5 offers a powerful feature called dynamic tests, allowing us to create tests at runtime rather than at compile time. This can be especially useful when we don\u0026rsquo;t know the number of tests or the test data set beforehand.\nA common scenario where dynamic tests are beneficial is when you need to validate a series of inputs and their expected outcomes. ThrowingConsumer allows us to define test logic that can throw checked exceptions.\nLet\u0026rsquo;s define a dynamic test for the percentage validation and verify the results:\n// Helper record to represent a test case record TestCase(int percent, boolean valid) {} @TestFactory Stream\u0026lt;DynamicTest\u0026gt; testDynamicTestsWithThrowingConsumer() { // acceptable percentage range: 0 - 100  ValueRange validPercentageRange = ValueRange.of(0, 100); Function\u0026lt;Integer, String\u0026gt; message = input -\u0026gt; MessageFormat.format( \u0026#34;Percentage {0} should be in range {1}\u0026#34;, input, validPercentageRange.toString()); // Define the ThrowingConsumer that validates the input percentage  ThrowingConsumer\u0026lt;TestCase\u0026gt; consumer = testCase -\u0026gt; { if (!validPercentageRange.isValidValue(testCase.percent)) { throw new ValidationException(message.apply(testCase.percent)); } }; ThrowingConsumer\u0026lt;TestCase\u0026gt; executable = testCase -\u0026gt; { if (testCase.valid) { assertDoesNotThrow(() -\u0026gt; consumer.accept(testCase)); } else { assertAll( () -\u0026gt; { ValidationException exception = assertThrows(ValidationException.class, () -\u0026gt; consumer.accept(testCase)); assertEquals(exception.getMessage(), message.apply(testCase.percent)); }); } }; // 
Test data: an array of test cases with inputs and their validity  Collection\u0026lt;TestCase\u0026gt; testCases = Arrays.asList(new TestCase(50, true), new TestCase(130, false), new TestCase(-30, false)); Function\u0026lt;TestCase, String\u0026gt; displayNameGenerator = testCase -\u0026gt; \u0026#34;Testing percentage: \u0026#34; + testCase.percent; // Generate dynamic tests  return DynamicTest.stream(testCases.stream(), displayNameGenerator, executable); } First, let\u0026rsquo;s understand the dynamic test.\nclass DynamicTest public static \u0026lt;T\u0026gt; Stream\u0026lt;DynamicTest\u0026gt; stream( Iterator\u0026lt;T\u0026gt; inputGenerator, Function\u0026lt;? super T,String\u0026gt; displayNameGenerator, ThrowingConsumer\u0026lt;? super T\u0026gt; testExecutor) // other variants of stream } A DynamicTest is a test case generated at runtime. It is composed of a display name and an Executable. We annotate our test with @TestFactory so that the factory generates instances of DynamicTest.\nThe stream() method generates a stream of dynamic tests based on the given generator and test executor. Use this method when the set of dynamic tests is nondeterministic or when the input comes from an existing Iterator.\nThe inputGenerator generates input values and adds a DynamicTest to the resulting stream for each dynamically generated input value. It uses ThrowingConsumer to validate percentage inputs falling within the range of 0 to 100. It defines dynamic tests using a collection of percentages and a boolean indicating validity, with display names generated based on the percentage values. 
This setup allows for dynamically generating and running tests to ensure the correct handling of both valid and invalid percentage values.\nThrowingConsumer simplifies testing methods that throw checked exceptions, manage resources, validate inputs, handle callbacks, and process complex data by eliminating extensive try-catch blocks in test code.\nUsing ThrowingSupplier ThrowingSupplier is a functional interface that enables the implementation of a generic block of code that returns an object and may throw a Throwable. It is similar to Supplier, except that it can throw any kind of exception, including checked exceptions.\nThe Assertions class has many assertion methods accepting a ThrowingSupplier:\nstatic \u0026lt;T\u0026gt; T assertDoesNotThrow(ThrowingSupplier\u0026lt;T\u0026gt; supplier) // other variants of assertDoesNotThrow accepting supplier  static \u0026lt;T\u0026gt; T assertTimeout(Duration timeout, ThrowingSupplier\u0026lt;T\u0026gt; supplier) // other variants of assertTimeout accepting supplier  static \u0026lt;T\u0026gt; T assertTimeoutPreemptively(Duration timeout, ThrowingSupplier\u0026lt;T\u0026gt; supplier) // other variants of assertTimeoutPreemptively accepting supplier Let\u0026rsquo;s learn about these methods one by one.\nUsing ThrowingSupplier in assertDoesNotThrow() The method assertDoesNotThrow() asserts that the execution of the supplied supplier does not throw any kind of exception.\nIf the assertion passes, it returns the supplier\u0026rsquo;s result. 
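To illustrate how assertDoesNotThrow() hands the supplier's result back, here is a minimal self-contained sketch. The ThrowingSupplier shape matches the JUnit 5 interface, but the assertDoesNotThrow method below is a simplified stand-in written for this example, not JUnit's actual implementation:

```java
import java.io.IOException;

public class ThrowingSupplierDemo {

    // Same shape as org.junit.jupiter.api.function.ThrowingSupplier,
    // re-declared so the example runs without JUnit on the classpath.
    @FunctionalInterface
    interface ThrowingSupplier<T> {
        T get() throws Throwable;
    }

    // Simplified stand-in for Assertions.assertDoesNotThrow(supplier):
    // fail if the supplier throws, otherwise hand its result back.
    static <T> T assertDoesNotThrow(ThrowingSupplier<T> supplier) {
        try {
            return supplier.get();
        } catch (Throwable t) {
            throw new AssertionError("Unexpected exception: " + t, t);
        }
    }

    public static void main(String[] args) {
        ThrowingSupplier<Integer> parser = () -> {
            String value = "42";
            if (value.isBlank()) {
                throw new IOException("no input"); // checked exception allowed
            }
            return Integer.parseInt(value);
        };

        Integer result = assertDoesNotThrow(parser);
        System.out.println(result); // 42
    }
}
```

The key point is the return value: the assertion doubles as a way to obtain the supplier's result for further checks, which plain Executable-based assertions cannot do.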
It is useful for testing happy paths.\nLet\u0026rsquo;s look at the implementation of a test using ThrowingSupplier:\npublic class ThrowingSupplierTest { @ParameterizedTest @CsvSource({\u0026#34;25.0d,5.0d\u0026#34;, \u0026#34;36.0d,6.0d\u0026#34;, \u0026#34;49.0d,7.0d\u0026#34;}) void testDoesNotThrowWithSupplier(double input, double expected) { ThrowingSupplier\u0026lt;Double\u0026gt; findSquareRoot = () -\u0026gt; { if (input \u0026lt; 0) { throw new ValidationException(\u0026#34;Invalid input\u0026#34;); } return Math.sqrt(input); }; assertEquals(expected, assertDoesNotThrow(findSquareRoot)); } } In this parameterized test, we use ThrowingSupplier to check if the code throws exceptions and verify the return value. The test method, testDoesNotThrowWithSupplier(), performs a square root calculation. A negative input results in a ValidationException. Otherwise, it returns the square root. The test verifies that the ThrowingSupplier executes without exceptions and that the return value matches the expected result.\nUsing ThrowingSupplier in assertTimeout() The method assertTimeout() checks that execution of the supplied supplier completes before the given timeout. 
If the running time of the supplier exceeds the timeout, the method will throw an AssertionFailedError.\nLet\u0026rsquo;s now check how to use assertTimeout() with a supplier:\n@Test void testAssertTimeoutWithSupplier() { List\u0026lt;Long\u0026gt; numbers = Arrays.asList(100L, 200L, 50L, 300L); int delay = 2; Consumer\u0026lt;List\u0026lt;Long\u0026gt;\u0026gt; checkSorting = list -\u0026gt; assertEquals(List.of(50L, 100L, 200L, 300L), list); ThrowingSupplier\u0026lt;List\u0026lt;Long\u0026gt;\u0026gt; sorter = () -\u0026gt; { if (numbers == null || numbers.isEmpty() || numbers.contains(null)) { throw new ValidationException(\u0026#34;Invalid input\u0026#34;); } TimeUnit.SECONDS.sleep(delay); return numbers.stream().sorted().toList(); }; // slow execution  assertThrows(AssertionFailedError.class, () -\u0026gt; assertTimeout(Duration.ofSeconds(1), sorter)); // fast execution  assertDoesNotThrow( () -\u0026gt; { List\u0026lt;Long\u0026gt; result = assertTimeout(Duration.ofSeconds(5), sorter); checkSorting.accept(result); }); // reset the number list and verify if the supplier validates it  Collections.fill(numbers, null); ValidationException exception = assertThrows(ValidationException.class, () -\u0026gt; assertTimeout(Duration.ofSeconds(1), sorter)); assertEquals(\u0026#34;Invalid input\u0026#34;, exception.getMessage()); } In this test, we use ThrowingSupplier and assertTimeout() to verify a sorting operation on a list of numbers. The test method testAssertTimeoutWithSupplier() works with a list of Long numbers and introduces a delay to simulate slow execution.\nWe test slow and fast execution scenarios and validate the input using assertThrows() and assertDoesNotThrow(). 
Finally, we demonstrate how to use ThrowingSupplier for operations that may throw checked exceptions and how to verify execution time constraints and input validation using JUnit’s assert methods.\nUsing ThrowingSupplier in assertTimeoutPreemptively() The method assertTimeoutPreemptively() asserts that the execution of the supplied supplier completes before the given timeout. It returns the supplier\u0026rsquo;s result if the assertion passes. If it exceeds the timeout, it will abort the supplier preemptively.\nLet\u0026rsquo;s see an example of assertTimeoutPreemptively() with supplier:\npublic class ThrowingSupplierTest { private List\u0026lt;Long\u0026gt; numbers = Arrays.asList(100L, 200L, 50L, 300L); private Consumer\u0026lt;List\u0026lt;Long\u0026gt;\u0026gt; checkSorting = list -\u0026gt; assertEquals(List.of(50L, 100L, 200L, 300L), list); private ThrowingSupplier\u0026lt;List\u0026lt;Long\u0026gt;\u0026gt; sorter = () -\u0026gt; { if (numbers == null || numbers.isEmpty() || numbers.contains(null)) { throw new ValidationException(\u0026#34;Invalid input\u0026#34;); } TimeUnit.SECONDS.sleep(2); return numbers.stream().sorted().toList(); }; } In this ThrowingSupplierTest class, we define several tests to demonstrate the usage of ThrowingSupplier and JUnit\u0026rsquo;s timeout assertions.\nWe start by initializing a list of Long numbers (numbers) and a Consumer\u0026lt;List\u0026lt;Long\u0026gt;\u0026gt; (checkSorting) that checks if the list is in sorted order. We also define a ThrowingSupplier\u0026lt;List\u0026lt;Long\u0026gt;\u0026gt; named sorter, which sorts the list after a delay of 2 seconds. 
If the list is null, empty, or contains null values, the sorter throws a ValidationException.\nLet\u0026rsquo;s consider a simple example of using a supplier when the test does not throw any exception:\n@ParameterizedTest @CsvSource({\u0026#34;25.0d,5.0d\u0026#34;, \u0026#34;36.0d,6.0d\u0026#34;, \u0026#34;49.0d,7.0d\u0026#34;}) void testDoesNotThrowWithSupplier(double input, double expected) { ThrowingSupplier\u0026lt;Double\u0026gt; findSquareRoot = () -\u0026gt; { if (input \u0026lt; 0) { throw new ValidationException(\u0026#34;Invalid input\u0026#34;); } return Math.sqrt(input); }; assertEquals(expected, assertDoesNotThrow(findSquareRoot)); } In the testDoesNotThrowWithSupplier() method, we use @ParameterizedTest with CsvSource to test the calculation of square roots for different inputs. We define a ThrowingSupplier\u0026lt;Double\u0026gt; named findSquareRoot, which throws a ValidationException for negative inputs. The test uses assertDoesNotThrow to verify that the square root of the input matches the expected value.\nHere is an example showcasing the use of timeout with a supplier:\n@Test void testAssertTimeoutWithSupplier() { // slow execution  assertThrows(AssertionFailedError.class, () -\u0026gt; assertTimeout(Duration.ofSeconds(1), sorter)); // fast execution  assertDoesNotThrow( () -\u0026gt; { List\u0026lt;Long\u0026gt; result = assertTimeout(Duration.ofSeconds(5), sorter); checkSorting.accept(result); }); // reset the number list and verify if the supplier validates it  Collections.fill(numbers, null); ValidationException exception = assertThrows(ValidationException.class, () -\u0026gt; assertTimeout(Duration.ofSeconds(1), sorter)); assertEquals(\u0026#34;Invalid input\u0026#34;, exception.getMessage()); } In the testAssertTimeoutWithSupplier() method, we test the sorting operation with different timeout durations. 
First, we verify that the sorting operation fails to complete within 1 second, using assertThrows() to expect an AssertionFailedError.\nThen, we test the same operation with a 5-second timeout, using assertDoesNotThrow() to ensure it completes successfully, and the checkSorting consumer verifies that the result is in sorted order.\nNext, we reset the numbers list to contain null values and verify that the sorter throws a ValidationException when executed. We use assertThrows() to check that the exception message matches the expected “Invalid input”.\nLet\u0026rsquo;s learn to preemptively time out the execution of a supplier:\n@Test void testAssertTimeoutPreemptivelyWithSupplier() { // slow execution  assertThrows( AssertionFailedError.class, () -\u0026gt; assertTimeoutPreemptively(Duration.ofSeconds(1), sorter)); // fast execution  assertDoesNotThrow( () -\u0026gt; { List\u0026lt;Long\u0026gt; result = assertTimeoutPreemptively(Duration.ofSeconds(5), sorter); checkSorting.accept(result); }); } Finally, in the testAssertTimeoutPreemptivelyWithSupplier() method, we repeat the timeout tests with assertTimeoutPreemptively(). We verify that the sorting operation fails to complete within 1 second, expecting an AssertionFailedError. We then test the same operation with a 5-second timeout to ensure it completes successfully, and the list is in sorted order.\nConclusion In this article, we got familiar with JUnit 5 functional interfaces, focusing on Executable, ThrowingConsumer, and ThrowingSupplier. These interfaces enhance the flexibility and readability of our test code by allowing us to leverage lambda expressions and method references.\nWe started with Executable, which encapsulates code that may throw any Throwable. 
We explored its usage in various JUnit assertions like assertAll, assertTimeout, and assertTimeoutPreemptively, demonstrating how we can use it to group multiple assertions and test time-sensitive operations efficiently.\nNext, we examined ThrowingConsumer\u0026lt;T\u0026gt;, which represents an operation that accepts a single input argument and can throw checked exceptions. This interface is particularly useful for scenarios where we need to validate inputs or perform operations that may result in exceptions. We also explored its integration with dynamic tests, showcasing how it can streamline the creation of complex, parameterized test cases.\nFinally, we looked at ThrowingSupplier\u0026lt;T\u0026gt;, which provides a value and can throw checked exceptions. This interface simplifies the testing of methods that generate values and might throw exceptions. We demonstrated its use in various timeout assertions, illustrating how it can validate the timely execution of operations and the correctness of generated results.\nBy understanding and using these functional interfaces, we can write more concise, expressive, and maintainable test code in JUnit 5, ultimately improving the robustness and reliability of our test suites.\n","date":"July 12, 2024","image":"https://reflectoring.io/images/stock/0041-adapter-1200x628-branded_hudbdb52a7685a8d0e28c5b58dcc10fabe_81226_650x0_resize_q90_box.jpg","permalink":"/junit5-functional-interfaces/","title":"Guide to JUnit 5 Functional Interfaces"},{"categories":["Spring"],"contents":"Spring Security provides a comprehensive set of security features for Java applications, covering authentication, authorization, session management, and protection against common security threats such as CSRF (Cross-Site Request Forgery). The Spring Security framework is highly customizable and allows developers to curate security configurations depending on their application needs. 
It provides a flexible architecture that supports various authentication mechanisms like Basic Authentication, JWT and OAuth.\nSpring Security provides Basic Authentication out of the box. To understand how this works, refer to this article. In this article, we will deep-dive into the working of JWT and how to configure it with Spring Security.\n Example Code This article is accompanied by a working code example on GitHub. What is JWT JWT (JSON Web Token) is a secure means of passing a JSON message between two parties. It is a standard defined in RFC 7519. The information contained in a JWT token can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA.\nFor this article, we will create a JWT token using a secret key and use it to secure our REST endpoints.\nJWT Structure In this section, we will take a look at a sample JWT structure. A JSON Web Token consists of three parts:\n Header Payload Signature  JWT Header The header consists of two parts: the type of the token, i.e. JWT, and the signing algorithm being used, such as HMAC SHA256 or RSA. Sample JSON header:\n{ \u0026#34;alg\u0026#34;: \u0026#34;HS256\u0026#34;, \u0026#34;typ\u0026#34;: \u0026#34;JWT\u0026#34; } This JSON is then Base64url encoded, forming the first part of the JWT token.\nJWT Payload The payload is the body that contains the actual data. The data could be user data or any information that needs to be transmitted securely. This data is also referred to as claims. There are three types of claims: registered, public and private claims.\nRegistered Claims They are a set of predefined, three-character claims as defined in RFC 7519. Some frequently used ones are iss (Issuer Claim), sub (Subject Claim), aud (Audience Claim), exp (Expiration Time Claim), iat (Issued At Time), nbf (Not Before). 
Let\u0026rsquo;s look at each of them in detail:\n iss: This claim is used to specify the issuer of the JWT. It identifies the entity that issued the token, such as the authentication server or identity provider. sub: This claim is used to identify the subject of the JWT, i.e. the user or the entity for which the token was issued. aud: This claim is used to specify the intended audience of the JWT. This is generally used to restrict the token usage to certain services or applications. exp: This claim is used to specify the expiration time of the JWT, after which the token is no longer considered valid. Represented in seconds since the Unix Epoch. iat: Time at which the JWT was issued. Can be used to determine the age of the JWT. Represented in seconds since the Unix Epoch. nbf: Identifies the time before which the JWT cannot be accepted for processing.  To view a full list of registered claims, refer to the IANA JSON Web Token Claims registry. In the following sections, we will look at a few examples of how to use them.\nPublic Claims Unlike registered claims that are reserved and have predefined meanings, these claims can be customized depending on the requirements of the application. Most public claims fall under the below categories:\n User/Client Data: Includes username, clientId, email, address, roles, permissions, scopes, privileges and any user/client related information used for authentication or authorization. Application Data: Includes session details, user preferences (e.g. language preference), application settings or any application-specific data. Security Information: Includes additional security-related information such as keys, certificates, tokens and others.  Private Claims Private claims are custom claims that are specific to a particular organization. They are not standardized by the official JWT specification but are defined by the parties involved in the JWT exchange.\nJWT Claims Recommended Best Practices  Use standard claims defined in the JWT specification whenever possible. 
They are widely recognized and have well-defined meanings. The JWT payload should contain only the minimum required claims, to keep the token small and maintainable. Public claims should have clear and descriptive names. Follow a consistent naming convention to maintain consistency and readability. Avoid including personally identifiable information (PII) to minimize the risk of data exposure. Ensure JWTs are signed with one of the recommended algorithms, specified in the alg header parameter. The none value in alg indicates the JWT is not signed and is not recommended.   JWT Signature To create the signature, we encode the header, encode the payload, and use a secret to sign the elements with the algorithm specified in the header. The resultant token will have three Base64URL-encoded strings separated by dots. A pictorial representation of a JWT is as shown below:\nThe purpose of the signature is to verify that the message wasn\u0026rsquo;t changed along the way. Since the token is signed with a secret key, the recipient can also verify that the sender of the JWT is who it claims to be.\nCommon Use Cases of JWT JWTs are versatile and can be used in a variety of scenarios as discussed below:\n Single Sign-On: JWTs facilitate Single Sign-On (SSO) by allowing user authentication across multiple services or applications. After users log in to one application, they receive a JWT that can be used to log in to other services (that they have access to) without needing to enter/maintain separate login credentials. API Authentication: JWTs are commonly used to authenticate and authorize access to APIs. Clients include the JWT in the Authorization header of an API request to validate their access to the API. The APIs will then decode the JWT to grant or deny access. Stateless Sessions: JWTs help provide stateless session management as the session information is stored in the token itself. 
Information Exchange: Since JWTs are secure and reliable, they can be used to exchange not only user information but any information that needs to be transmitted securely between two parties. Microservices: JWTs are one of the most preferred means of API communication in a microservice ecosystem, as a microservice can independently verify a token without relying on an external authentication server, making it easier to scale.  Caveats of Using JWT Now that we understand the benefits that JWT provides, let\u0026rsquo;s look at the downsides of using JWT. The idea here is for the developer to weigh the options at hand and make an informed decision about using a token-based architecture within the application.\n In cases where JWTs replace sessions, if we end up using a big payload, the JWT token can bloat. On top of that, if we add a cryptographic signature, it can cause overall performance overhead. This would be overkill for storing a simple user session. JWTs expire at certain intervals, after which the token needs to be refreshed and a new token is made available. This is great from a security standpoint, but the expiry time needs to be carefully considered. For instance, an expiry time of 24 hours would be a questionable design choice.  Now that we\u0026rsquo;ve looked at these trade-offs, we will be able to make an informed decision about how and when to use JWTs. In the next section, we\u0026rsquo;ll create a simple JWT token in Java.\nCreating a JWT Token in Java JJWT is the most commonly used library to create JWT tokens in Java and Android. 
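Before reaching for a library, it helps to see that the signing procedure described earlier (encode header, encode payload, sign with a secret) is only a few lines of plain JDK code. The sketch below hand-rolls an HS256 token; the header, payload, and secret are made-up values for illustration, and in real applications we would rely on JJWT instead:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HandRolledJwt {

    // Implements the signature recipe: Base64URL-encode the header,
    // Base64URL-encode the payload, then HMAC-SHA256 sign "header.payload".
    static String sign(String headerJson, String payloadJson, byte[] secret) {
        try {
            Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
            String header = enc.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8));
            String payload = enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            byte[] signature = mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8));
            return header + "." + payload + "." + enc.encodeToString(signature);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Made-up header, payload and secret, purely for illustration
        String jwt = sign("{\"alg\":\"HS256\",\"typ\":\"JWT\"}",
                "{\"sub\":\"libUser\"}",
                "demo-secret".getBytes(StandardCharsets.UTF_8));
        System.out.println(jwt); // three Base64URL segments separated by dots
    }
}
```

The first segment of the printed token decodes back to the header JSON, which is exactly the structure a library like JJWT produces for us.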
We will begin by adding its dependencies to our application.\nConfigure JWT Dependencies Maven dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.jsonwebtoken\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;jjwt-api\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.11.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.jsonwebtoken\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;jjwt-impl\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.11.1\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;runtime\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.jsonwebtoken\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;jjwt-jackson\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.11.1\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;runtime\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; Gradle dependency:\ncompile \u0026#39;io.jsonwebtoken:jjwt-api:0.11.1\u0026#39; runtime \u0026#39;io.jsonwebtoken:jjwt-impl:0.11.1\u0026#39; runtime \u0026#39;io.jsonwebtoken:jjwt-jackson:0.11.1\u0026#39; Our Java application is based on Maven, so we will add the above Maven dependencies to our pom.xml.\nCreating JWT Token We will use the Jwts class from the io.jsonwebtoken package. 
We can specify claims (both registered and public) and other JWT attributes and create a token as below:\npublic static String createJwt() { return Jwts.builder() .claim(\u0026#34;id\u0026#34;, \u0026#34;abc123\u0026#34;) .claim(\u0026#34;role\u0026#34;, \u0026#34;admin\u0026#34;) /*.addClaims(Map.of(\u0026#34;id\u0026#34;, \u0026#34;abc123\u0026#34;, \u0026#34;role\u0026#34;, \u0026#34;admin\u0026#34;))*/ .setIssuer(\u0026#34;TestApplication\u0026#34;) .setIssuedAt(java.util.Date.from(Instant.now())) .setExpiration(Date.from(Instant.now().plus(10, ChronoUnit.MINUTES))) .compact(); } This method creates a JWT token as below:\neyJhbGciOiJub25lIn0.eyJpZCI6ImFiYzEyMyIsInJvbGUiOiJhZG1pbiIsImlzcyI6IlR lc3RBcHBsaWNhdGlvbiIsImlhdCI6MTcxMTY2MTA1MiwiZXhwIjoxNzExNjYxNjUyfQ. Next, let\u0026rsquo;s take a look at the builder methods used to generate the token:\n claim: Allows us to specify any number of custom name-value pair claims. We can also use addClaims method to add a map of claims as an alternative. setIssuer: This method corresponds to the registered claim iss. setIssuedAt: This method corresponds to the registered claim iat. This method takes java.util.Date as a parameter. Here we have set this value to the current instant. setExpiration: This method corresponds to the registered claim exp. This method takes java.util.Date as a parameter. Here we have set this value to 10 minutes from the current instant.  Let\u0026rsquo;s try to decode this JWT using an online JWT Decoder:\nIf we closely look at the header, we see alg:none. This is because we haven\u0026rsquo;t specified any algorithm to be used. 
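We do not actually need an online decoder to see this: each JWT segment is just Base64URL-encoded text, so the header can be inspected with plain JDK classes. A small sketch (the token below is a shortened, made-up example that carries the same alg:none header):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtHeaderDecoder {

    // Decodes the first (header) segment of a JWT back into its JSON form.
    static String decodeHeader(String jwt) {
        String headerSegment = jwt.split("\\.")[0];
        return new String(Base64.getUrlDecoder().decode(headerSegment), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Shortened, made-up token with an unsigned (alg:none) header
        String unsignedToken = "eyJhbGciOiJub25lIn0.eyJpZCI6ImFiYzEyMyJ9.";
        System.out.println(decodeHeader(unsignedToken)); // prints {"alg":"none"}
    }
}
```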
As we have already seen earlier, it is recommended that we use an algorithm to generate the signature.\nSo, let\u0026rsquo;s use the HMAC SHA256 algorithm in our method:\npublic static String createJwt() { // Recommended to be stored in a secrets manager  String secret = \u0026#34;5JzoMbk6E5qIqHSuBTgeQCARtUsxAkBiHwdjXOSW8kWdXzYmP3X51C0\u0026#34;; Key hmacKey = new SecretKeySpec(Base64.getDecoder().decode(secret), SignatureAlgorithm.HS256.getJcaName()); return Jwts.builder() .claim(\u0026#34;id\u0026#34;, \u0026#34;abc123\u0026#34;) .claim(\u0026#34;role\u0026#34;, \u0026#34;admin\u0026#34;) .setIssuer(\u0026#34;TestApplication\u0026#34;) .setIssuedAt(java.util.Date.from(Instant.now())) .setExpiration(Date.from(Instant.now().plus(10, ChronoUnit.MINUTES))) .signWith(hmacKey) .compact(); } The resultant token created looks like this:\neyJhbGciOiJIUzI1NiJ9.eyJpZCI6ImFiYzEyMyIsInJvbGUiOiJhZG1pbiIsImlz cyI6IlRlc3RBcHBsaWNhdGlvbiIsImlhdCI6MTcxMjMyODQzMSwiZXhwIjoxNzEyMzI5MDMxfQ. pj9AvbLtwITqBYazDnaTibCLecM-cQ5RAYw2YYtkyeA Decoding this JWT gives us:\nParsing JWT Token Now that we have created the JWT, let\u0026rsquo;s look at how to parse the token to extract the claims. We can only parse the token if we know the secret key that was used to create the JWT in the first place. The below code can be used to achieve this:\npublic static Jws\u0026lt;Claims\u0026gt; parseJwt(String jwtString) { // Recommended to be stored in a secrets manager  String secret = \u0026#34;5JzoMbk6E5qIqHSuBTgeQCARtUsxAkBiHwdjXOSW8kWdXzYmP3X51C0\u0026#34;; Key hmacKey = new SecretKeySpec(Base64.getDecoder().decode(secret), SignatureAlgorithm.HS256.getJcaName()); Jws\u0026lt;Claims\u0026gt; jwt = Jwts.parserBuilder() .setSigningKey(hmacKey) .build() .parseClaimsJws(jwtString); return jwt; } Here, the method parseJwt takes the JWT token as a String argument. Using the same secret key (used for creating the token), this token can be parsed to retrieve the claims. 
This can be verified using the below test:\n@Test public void testParseJwtClaims() { String jwtToken = JWTCreator.createJwt(); assertNotNull(jwtToken); Jws\u0026lt;Claims\u0026gt; claims = JWTCreator.parseJwt(jwtToken); assertNotNull(claims); Assertions.assertAll( () -\u0026gt; assertNotNull(claims.getSignature()), () -\u0026gt; assertNotNull(claims.getHeader()), () -\u0026gt; assertNotNull(claims.getBody()), () -\u0026gt; assertEquals(\u0026#34;HS256\u0026#34;, claims.getHeader().getAlgorithm()), () -\u0026gt; assertEquals(\u0026#34;abc123\u0026#34;, claims.getBody().get(\u0026#34;id\u0026#34;)), () -\u0026gt; assertEquals(\u0026#34;admin\u0026#34;, claims.getBody().get(\u0026#34;role\u0026#34;)), () -\u0026gt; assertEquals(\u0026#34;TestApplication\u0026#34;, claims.getBody().getIssuer()) ); } For a full list of the available parsing methods, refer to the documentation.\nComparing Basic Authentication and JWT in Spring Security Before we dive into the implementation of JWT in a sample Spring Boot application, let\u0026rsquo;s look at a few points of comparison between Basic Auth and JWT.\n   Comparison By Basic Authentication JWT     Authorization Headers Sample Basic Auth Header: Authorization: Basic xxx. Sample JWT Header: Authorization: Bearer xxx.   Validity and Expiration Basic Authentication credentials are configured once and the same credentials need to be passed with every request. They never expire. With a JWT token, we can set validity/expiry using the exp registered claim, after which the token throws an io.jsonwebtoken.ExpiredJwtException. This makes JWT more secure as the token validity is short. The user would have to resend the request to generate a new token.   Data Basic Authentication is meant to handle only credentials (typically username-password). JWT can include additional information such as id, roles, etc. 
Once the signature is validated, the server can trust the data sent by the client thus avoiding any additional lookups that maybe needed otherwise.    Implementing JWT in a Spring Boot Application Now that we understand JWT better, let\u0026rsquo;s implement it in a simple Spring Boot application. In our pom.xml, let\u0026rsquo;s add the below dependencies:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-web\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-security\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.jsonwebtoken\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;jjwt-api\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.11.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.jsonwebtoken\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;jjwt-impl\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.11.1\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;runtime\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.jsonwebtoken\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;jjwt-jackson\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.11.1\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;runtime\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; We have created a simple spring boot Library application that uses an in-memory H2 database to store data. The application is configured to run on port 8083. 
To run the application:\nmvnw clean verify spring-boot:run (for Windows) ./mvnw clean verify spring-boot:run (for Linux) Intercepting the Spring Security Filter Chain for JWT The application has a REST endpoint /library/books/all to get all the books stored in the DB. If we make this GET call via Postman, we get a 401 Unauthorized error:\nThis is because the spring-boot-starter-security dependency added in our pom.xml automatically brings in Basic authentication for all the endpoints created. Since we haven\u0026rsquo;t specified any credentials in Postman, we get the Unauthorized error. For the purpose of this article, we need to replace Basic Authentication with JWT-based authentication. We know that Spring provides security to our endpoints by triggering a chain of filters that handle authentication and authorization for every request. The UsernamePasswordAuthenticationFilter is responsible for validating the credentials for every request. In order to override this filter, let\u0026rsquo;s create a new Filter called JwtFilter. This filter will extend the OncePerRequestFilter class as we want the filter to be called only once per request:\n@Component @Slf4j public class JwtFilter extends OncePerRequestFilter { private final AuthUserDetailsService userDetailsService; private final JwtHelper jwtHelper; public JwtFilter(AuthUserDetailsService userDetailsService, JwtHelper jwtHelper) { this.userDetailsService = userDetailsService; this.jwtHelper = jwtHelper; } @Override protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException { log.info(\u0026#34;Inside JWT filter\u0026#34;); // Code to validate the Authorization header  } } The JwtHelper class is responsible for creating and validating the token. 
Let\u0026rsquo;s look at how to create a token first:\npublic String createToken(Map\u0026lt;String, Object\u0026gt; claims, String subject) { Date expiryDate = Date.from(Instant.ofEpochMilli(System.currentTimeMillis() + jwtProperties.getValidity())); Key hmacKey = new SecretKeySpec(Base64.getDecoder() .decode(jwtProperties.getSecretKey()), SignatureAlgorithm.HS256.getJcaName()); return Jwts.builder() .setClaims(claims) .setSubject(subject) .setIssuedAt(new Date(System.currentTimeMillis())) .setExpiration(expiryDate) .signWith(hmacKey) .compact(); } The following params are responsible for creating the token:\n claims refers to an empty map. No user-specific claims have been defined for this example. subject refers to the username passed by the user when making the API call to create a token. expiryDate refers to the date after adding \u0026lsquo;x\u0026rsquo; milliseconds to the current date. The value of \u0026lsquo;x\u0026rsquo; is defined in the property jwt.validity. hmacKey refers to the java.security.Key object used to sign the JWT request. For this example, the secret used is defined in the property jwt.secretKey and the HS256 algorithm is used.  This method returns a String token that needs to be passed in the Authorization header with every request. 
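The expiry arithmetic in createToken() is plain JDK date handling. Here is a minimal sketch of the same computation, where the 10-minute validity is an assumed stand-in for the jwt.validity property:

```java
import java.time.Instant;
import java.util.Date;

public class ExpiryCalculator {

    // Mirrors the expiryDate computation: "now" plus the configured validity in ms.
    static Date expiryFrom(long nowMillis, long validityMillis) {
        return Date.from(Instant.ofEpochMilli(nowMillis + validityMillis));
    }

    public static void main(String[] args) {
        long validityMillis = 10 * 60 * 1000; // assumed value of jwt.validity (10 minutes)
        Date expiry = expiryFrom(System.currentTimeMillis(), validityMillis);
        System.out.println("Token would expire at: " + expiry);
        System.out.println("Already expired? " + expiry.before(new Date())); // false for a fresh token
    }
}
```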
Now that we have created a token, let\u0026rsquo;s look at the doFilterInternal method in the JwtFilter class and understand the responsibility of this Filter class:\n@Override protected void doFilterInternal( HttpServletRequest request, HttpServletResponse response, FilterChain filterChain ) throws ServletException, IOException { final String authorizationHeader = request.getHeader(AUTHORIZATION); String jwt = null; String username = null; if (Objects.nonNull(authorizationHeader) \u0026amp;\u0026amp; authorizationHeader.startsWith(\u0026#34;Bearer \u0026#34;)) { jwt = authorizationHeader.substring(7); username = jwtHelper.extractUsername(jwt); } if (Objects.nonNull(username) \u0026amp;\u0026amp; SecurityContextHolder.getContext().getAuthentication() == null) { UserDetails userDetails = this.userDetailsService.loadUserByUsername(username); boolean isTokenValidated = jwtHelper.validateToken(jwt, userDetails); if (isTokenValidated) { UsernamePasswordAuthenticationToken usernamePasswordAuthenticationToken = new UsernamePasswordAuthenticationToken( userDetails, null, userDetails.getAuthorities()); usernamePasswordAuthenticationToken.setDetails( new WebAuthenticationDetailsSource().buildDetails(request)); SecurityContextHolder.getContext().setAuthentication( usernamePasswordAuthenticationToken); } } filterChain.doFilter(request, response); } Step 1. Reads the Authorization header and extracts the JWT string.\nStep 2. Parses the JWT string and extracts the username. We use the io.jsonwebtoken library Jwts.parserBuilder() for this purpose. 
The jwtHelper.extractUsername() looks as below:\npublic String extractUsername(String bearerToken) { return extractClaimBody(bearerToken, Claims::getSubject); } public \u0026lt;T\u0026gt; T extractClaimBody(String bearerToken, Function\u0026lt;Claims, T\u0026gt; claimsResolver) { Jws\u0026lt;Claims\u0026gt; jwsClaims = extractClaims(bearerToken); return claimsResolver.apply(jwsClaims.getBody()); } private Jws\u0026lt;Claims\u0026gt; extractClaims(String bearerToken) { return Jwts.parserBuilder().setSigningKey(jwtProperties.getSecretKey()) .build().parseClaimsJws(bearerToken); } Step 3. Once the username is extracted, we verify whether a valid Authentication object, i.e. a logged-in user, is available using SecurityContextHolder.getContext().getAuthentication(). If not, we use the Spring Security UserDetailsService to load the UserDetails object. For this example, we have created the AuthUserDetailsService class, which returns the UserDetails object.\npublic class AuthUserDetailsService implements UserDetailsService { private final UserProperties userProperties; @Autowired public AuthUserDetailsService(UserProperties userProperties) { this.userProperties = userProperties; } @Override public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException { if (StringUtils.isEmpty(username) || !username.equals(userProperties.getName())) { throw new UsernameNotFoundException( String.format(\u0026#34;User not found, or unauthorized %s\u0026#34;, username)); } return new User(userProperties.getName(), userProperties.getPassword(), new ArrayList\u0026lt;\u0026gt;()); } } The username and password under UserProperties are loaded from application.yml as:\nspring: security: user: name: libUser password: libPassword Step 4. 
Next, the JwtFilter calls the jwtHelper.validateToken() to validate the extracted username and make sure the JWT has not expired.\npublic boolean validateToken(String token, UserDetails userDetails) { final String userName = extractUsername(token); return userName.equals(userDetails.getUsername()) \u0026amp;\u0026amp; !isTokenExpired(token); } private Boolean isTokenExpired(String bearerToken) { return extractExpiry(bearerToken).before(new Date()); } public Date extractExpiry(String bearerToken) { return extractClaimBody(bearerToken, Claims::getExpiration); } Step 5. Once the token is validated, we create an instance of the Authentication object. Here, a UsernamePasswordAuthenticationToken (an implementation of the Authentication interface) is created and set via SecurityContextHolder.getContext().setAuthentication(usernamePasswordAuthenticationToken). This indicates that the user is now authenticated.\nStep 6. Finally, we call filterChain.doFilter(request, response) so that the next filter gets called in the FilterChain.\nWith this, we have successfully created a filter class to validate the token. We will look at exception handling in a later section.\nJWT Token Creation Endpoints In this section, we will create a Controller class with an endpoint that will allow us to create a JWT token string. This token will be set in the Authorization header when we make calls to our Library application. 
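From the client side, passing the token is just a matter of setting the Authorization header. A sketch using the JDK's own HttpClient API (the URL and token value below are placeholders, not values produced by the application):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class BearerRequestExample {

    // Builds a GET request that carries the JWT as a Bearer token.
    static HttpRequest withBearer(String url, String jwt) {
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Bearer " + jwt)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        // Placeholder endpoint and token, for illustration only
        HttpRequest request = withBearer("http://localhost:8083/library/books/all", "placeholder-token");
        System.out.println(request.headers().firstValue("Authorization").orElse("")); // prints Bearer placeholder-token
    }
}
```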
Let\u0026rsquo;s create a TokenController class:\n@RestController public class TokenController { private final TokenService tokenService; public TokenController(TokenService tokenService) { this.tokenService = tokenService; } @PostMapping(\u0026#34;/token/create\u0026#34;) public TokenResponse createToken(@RequestBody TokenRequest tokenRequest) { return tokenService.generateToken(tokenRequest); } } The request body TokenRequest class will accept username and password:\n@Data @NoArgsConstructor @AllArgsConstructor @Builder public class TokenRequest { private String username; private String password; } The TokenService class is responsible to validate the credentials passed in the request body and call jwtHelper.createToken() as defined in the previous section. In order to authenticate the credentials, we need to implement an AuthenticationManager. Let\u0026rsquo;s create a SecurityConfiguration class to define all Spring security related configuration.\n@Configuration @EnableWebSecurity public class SecurityConfiguration { private final JwtFilter jwtFilter; private final AuthUserDetailsService authUserDetailsService; private final JwtAuthenticationEntryPoint jwtAuthenticationEntryPoint; @Autowired public SecurityConfiguration(JwtFilter jwtFilter, AuthUserDetailsService authUserDetailsService, JwtAuthenticationEntryPoint jwtAuthenticationEntryPoint) { this.jwtFilter = jwtFilter; this.authUserDetailsService = authUserDetailsService; this.jwtAuthenticationEntryPoint = jwtAuthenticationEntryPoint; } @Bean public DaoAuthenticationProvider authenticationProvider() { final DaoAuthenticationProvider daoAuthenticationProvider = new DaoAuthenticationProvider(); daoAuthenticationProvider.setUserDetailsService(authUserDetailsService); daoAuthenticationProvider.setPasswordEncoder( PlainTextPasswordEncoder.getInstance()); return daoAuthenticationProvider; } @Bean public AuthenticationManager authenticationManager(HttpSecurity httpSecurity) throws Exception { return 
httpSecurity.getSharedObject(AuthenticationManagerBuilder.class) .authenticationProvider(authenticationProvider()) .build(); } } The AuthenticationManager uses the AuthUserDetailsService which uses the spring.security.user property. Now that we have the AuthenticationManager in place, let\u0026rsquo;s look at how the TokenService is defined:\n@Service public class TokenService { private final AuthenticationManager authenticationManager; private final AuthUserDetailsService userDetailsService; private final JwtHelper jwtHelper; public TokenService(AuthenticationManager authenticationManager, AuthUserDetailsService userDetailsService, JwtHelper jwtHelper) { this.authenticationManager = authenticationManager; this.userDetailsService = userDetailsService; this.jwtHelper = jwtHelper; } public TokenResponse generateToken(TokenRequest tokenRequest) { this.authenticationManager.authenticate( new UsernamePasswordAuthenticationToken( tokenRequest.getUsername(), tokenRequest.getPassword())); final UserDetails userDetails = userDetailsService.loadUserByUsername(tokenRequest.getUsername()); String token = jwtHelper.createToken( Collections.emptyMap(), userDetails.getUsername()); return TokenResponse.builder() .token(token) .build(); } } TokenResponse is the Response object that contains the token string:\n@Data @NoArgsConstructor @AllArgsConstructor @Builder public class TokenResponse { private String token; } With the API now created, let\u0026rsquo;s start our application and try to hit the endpoint using Postman. We see a 401 Unauthorized error as below:\nThe reason is the same as we encountered before. Spring Security secures all endpoints by default. We need a way to exclude only the token endpoint from being secured. 
Also, on startup logs we can see that although we have defined JwtFilter and we expect this filter to override UsernamePasswordAuthenticationFilter, we do not see this filter being wired in the security chain as below:\n2024-05-22 15:41:09.441 INFO 20432 --- [ main] o.s.s.web.DefaultSecurityFilterChain : Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@14d36bb2, org.springframework.security.web.context.request.async. WebAsyncManagerIntegrationFilter@432448, org.springframework.security.web.context.SecurityContextPersistenceFilter@54d46c8, org.springframework.security.web.header.HeaderWriterFilter@c7cf8c4, org.springframework.security.web.csrf.CsrfFilter@17fb5184, org.springframework.security.web.authentication.logout.LogoutFilter@42fa5cb, org.springframework.security.web.authentication. UsernamePasswordAuthenticationFilter@70d7a49b, org.springframework.security.web.authentication.ui. DefaultLoginPageGeneratingFilter@67cd84f9, org.springframework.security.web.authentication.ui. DefaultLogoutPageGeneratingFilter@4452e13c, org.springframework.security.web.authentication.www. BasicAuthenticationFilter@788d9139, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@5c34b0f2, org.springframework.security.web.servletapi. SecurityContextHolderAwareRequestFilter@7dfec0bc, org.springframework.security.web.authentication. 
AnonymousAuthenticationFilter@4d964c9e, org.springframework.security.web.session.SessionManagementFilter@731fae, org.springframework.security.web.access.ExceptionTranslationFilter@66d61298, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@55c20a91] In order to chain the JwtFilter to the other set of filters and to exclude securing the token endpoint, let\u0026rsquo;s create a SecurityFilterChain bean in our SecurityConfiguration class:\n@Bean public SecurityFilterChain configure (HttpSecurity http) throws Exception { return http.csrf().disable() .authorizeRequests() .antMatchers(\u0026#34;/token/*\u0026#34;).permitAll() .anyRequest().authenticated().and() .sessionManagement(session -\u0026gt; session.sessionCreationPolicy(SessionCreationPolicy.STATELESS)) .addFilterBefore(jwtFilter, UsernamePasswordAuthenticationFilter.class) .exceptionHandling(exception -\u0026gt; exception.authenticationEntryPoint(jwtAuthenticationEntryPoint)) .build(); } In this configuration, we are interested in the following:\n antMatchers(\u0026quot;/token/*\u0026quot;).permitAll() - This will allow API endpoints that match the pattern /token/* and exclude them from security. anyRequest().authenticated() - Spring Security will secure all other API requests. addFilterBefore(jwtFilter, UsernamePasswordAuthenticationFilter.class) - This will wire the JwtFilter before UsernamePasswordAuthenticationFilter in the FilterChain. exceptionHandling(exception -\u0026gt; exception.authenticationEntryPoint(jwtAuthenticationEntryPoint) - In case of authentication exception, JwtAuthenticationEntryPoint class will be called. Here we have created a JwtAuthenticationEntryPoint class that implements org.springframework.security.web.AuthenticationEntryPoint in order to handle unauthorized errors gracefully. We will look at handling exceptions in detail in the further sections.  
With these changes, let\u0026rsquo;s restart our application and inspect the logs:\n2024-05-22 16:13:07.803 INFO 16188 --- [ main] o.s.s.web.DefaultSecurityFilterChain : Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@73e25780, org.springframework.security.web.context.request.async. WebAsyncManagerIntegrationFilter@1f4cb17b, org.springframework.security.web.context.SecurityContextPersistenceFilter@b548f51, org.springframework.security.web.header.HeaderWriterFilter@4f9980e1, org.springframework.security.web.authentication.logout.LogoutFilter@6b92a0d1, com.reflectoring.security.filter.JwtFilter@5961e92d, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@56976b8b, org.springframework.security.web.servletapi. SecurityContextHolderAwareRequestFilter@74844216, org.springframework.security.web.authentication. AnonymousAuthenticationFilter@280099a0, org.springframework.security.web.session.SessionManagementFilter@144dc2f7, org.springframework.security.web.access.ExceptionTranslationFilter@7a0f43dc, org.springframework.security.web.access.intercept. FilterSecurityInterceptor@735167e1] We see the JwtFilter being chained which indicates that the Basic auth has now been overridden by token based authentication. Now, let\u0026rsquo;s try to hit the /token/create endpoint again. We see that the endpoint is now able to successfully return the generated token:\nSecuring Library Application Endpoints Now, that we are able to successfully create the token, we need to pass this token to our library application to successfully call /library/books/all. Let\u0026rsquo;s add an Authorization header of type Bearer Token with the generated token value and fire the request. We can now see a 200 OK response as below:\nException Handling with JWT In this section, we will take a look at some commonly encountered exceptions from the io.jsonwebtoken package:\n ExpiredJwtException - The JWT token contains the expired time. 
When the token is parsed, if the expiration time has passed, an ExpiredJwtException is thrown. UnsupportedJwtException - This exception is thrown when a JWT is received in a format that is not expected. The most common cause of this error is when we try to parse a signed JWT with the method Jwts.parserBuilder().setSigningKey(jwtProperties.getSecretKey()) .build().parseClaimsJwt instead of Jwts.parserBuilder().setSigningKey(jwtProperties.getSecretKey()) .build().parseClaimsJws. MalformedJwtException - This exception indicates the JWT is incorrectly constructed. IncorrectClaimException - Indicates that a required claim does not have the expected value. Therefore, the JWT is not valid. MissingClaimException - This exception indicates that a required claim is missing in the JWT, making it invalid.  In general, it is considered a good practice to handle authentication-related exceptions gracefully. In case of basic authentication, Spring Security by default adds the BasicAuthenticationEntryPoint to the security filter chain, which wraps basic-auth-related errors in a 401 Unauthorized response. 
Similarly, in our example we have explicitly created a JwtAuthenticationEntryPoint to handle possible authentication errors such as Spring Security\u0026rsquo;s BadCredentialsException or JJWT\u0026rsquo;s MalformedJwtException:\n@Component @Slf4j public class JwtAuthenticationEntryPoint implements AuthenticationEntryPoint { @Override public void commence(HttpServletRequest request, HttpServletResponse response, AuthenticationException authException) throws IOException, ServletException { Exception exception = (Exception) request.getAttribute(\u0026#34;exception\u0026#34;); response.setStatus(HttpServletResponse.SC_UNAUTHORIZED); response.setContentType(APPLICATION_JSON_VALUE); log.error(\u0026#34;Authentication Exception: {} \u0026#34;, exception, exception); Map\u0026lt;String, Object\u0026gt; data = new HashMap\u0026lt;\u0026gt;(); data.put(\u0026#34;message\u0026#34;, exception != null ? exception.getMessage() : authException.getCause().toString()); OutputStream out = response.getOutputStream(); ObjectMapper mapper = new ObjectMapper(); mapper.writeValue(out, data); out.flush(); } } In our JwtFilter class, we add the exception to the HttpServletRequest exception attribute. 
This allows us to use request.getAttribute(\u0026quot;exception\u0026quot;) and write it to the output stream.\npublic class JwtFilter extends OncePerRequestFilter { @Override protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException { try { //validate token here  } catch (ExpiredJwtException jwtException) { request.setAttribute(\u0026#34;exception\u0026#34;, jwtException); } catch (BadCredentialsException | UnsupportedJwtException | MalformedJwtException e) { log.error(\u0026#34;Filter exception: {}\u0026#34;, e.getMessage()); request.setAttribute(\u0026#34;exception\u0026#34;, e); } filterChain.doFilter(request, response); } } With these changes, we can now see an exception message with 401 Unauthorized responses as below:\nHowever, it is important to note that JwtFilter only gets called for the endpoints that are secured by Spring Security through the security filter chain. In our case, the endpoint is /library/books/all. Since we have excluded the token endpoint /token/create from Spring Security, the exception handling done under JwtAuthenticationEntryPoint will not apply there. For such cases, we will handle exceptions using Spring\u0026rsquo;s global exception handler.\n@ControllerAdvice public class GlobalExceptionHandler { @ExceptionHandler({BadCredentialsException.class}) public ResponseEntity\u0026lt;Object\u0026gt; handleBadCredentialsException(BadCredentialsException exception) { return ResponseEntity .status(HttpStatus.UNAUTHORIZED) .body(exception.getMessage()); } } With this exception handling, exceptions caused by bad credentials will now be answered with a 401 Unauthorized error:\nSwagger Documentation In this section, we\u0026rsquo;ll look at how to configure OpenAPI for JWT. 
We will add the below Maven dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springdoc\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;springdoc-openapi-ui\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.7.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Next, let\u0026rsquo;s add the below configuration:\n@OpenAPIDefinition( info = @Info( title = \u0026#34;Library application\u0026#34;, description = \u0026#34;Get all library books\u0026#34;, version = \u0026#34;1.0.0\u0026#34;, license = @License( name = \u0026#34;Apache 2.0\u0026#34;, url = \u0026#34;http://www.apache.org/licenses/LICENSE-2.0\u0026#34; )), security = { @SecurityRequirement( name = \u0026#34;bearerAuth\u0026#34; ) } ) @SecurityScheme( name = \u0026#34;bearerAuth\u0026#34;, description = \u0026#34;JWT Authorization\u0026#34;, scheme = \u0026#34;bearer\u0026#34;, type = SecuritySchemeType.HTTP, bearerFormat = \u0026#34;JWT\u0026#34;, in = SecuritySchemeIn.HEADER ) public class OpenApiConfig { } Here, security is described using one or more @SecurityScheme. The type defined here SecuritySchemeType.HTTP applies to both basic auth and JWT. The other attributes like scheme and bearerFormat depend on this type attribute. After defining the security schemes, we can apply them to the whole application or individual operations by adding the security section on the root level or operation level. In our example, all API operations will use the bearer token authentication scheme. 
For more information on configuring multiple security schemes and applying a different scheme at the API level, refer to the Springdoc documentation.\nNext, let\u0026rsquo;s add some basic Swagger annotations to our controller classes in order to add descriptions to the API operations.\n@RestController @Tag(name = \u0026#34;Library Controller\u0026#34;, description = \u0026#34;Get library books\u0026#34;) public class BookController { } @RestController @Tag(name = \u0026#34;Create Token\u0026#34;, description = \u0026#34;Create Token\u0026#34;) public class TokenController { } Also, we will use the below property to override the URL where Springdoc\u0026rsquo;s Swagger UI loads.\nspringdoc: swagger-ui: path: /swagger-ui With this configuration, Swagger UI will now be available at http://localhost:8083/swagger-ui/index.html\nLet\u0026rsquo;s try to run the application and load the Swagger page at the mentioned URL. When we try to hit the endpoint, we see this:\nThis is because all endpoints in the application are automatically secured. We need a way to explicitly exclude the Swagger endpoints from being secured. 
We can do this by adding the WebSecurityCustomizer bean and excluding the Swagger endpoints in our SecurityConfiguration class.\n@Bean public WebSecurityCustomizer webSecurityCustomizer() { return web -\u0026gt; web.ignoring().antMatchers( ArrayUtils.addAll(buildExemptedRoutes())); } private String[] buildExemptedRoutes() { return new String[] {\u0026#34;/swagger-ui/**\u0026#34;,\u0026#34;/v3/api-docs/**\u0026#34;}; } Now, when we run the application, the Swagger page will load as below:\nSince we have only one security scheme, let\u0026rsquo;s add the JWT token via the Authorize button at the top of the Swagger page:\nWith the bearer token set, let\u0026rsquo;s try to hit the /library/books/all endpoint:\nWith this, we have successfully configured Swagger for our application.\nAdding Spring Security Tests In our example, we need to write one test for our token endpoint and another for our library application.\nLet\u0026rsquo;s add some required properties for our tests along with an in-memory database to work with real data. 
Test application.yml:\nspring: security: user: name: libUser password: libPassword datasource: driver-class-name: org.hsqldb.jdbc.JDBCDriver url: jdbc:hsqldb:mem:testdb;DB_CLOSE_DELAY=-1 username: sa password: jwt: secretKey: 5JzoMbk6E5qIqHSuBTgeQCARtUsxAkBiHwdjXOSW8kWdXzYmP3X51C0 validity: 600000 Next, let\u0026rsquo;s write tests to verify our token endpoint:\n@SpringBootTest @AutoConfigureMockMvc public class TokenControllerTest { @Autowired private MockMvc mvc; @Test public void shouldNotAllowAccessToUnauthenticatedUsers() throws Exception { TokenRequest request = TokenRequest.builder() .username(\u0026#34;testUser\u0026#34;) .password(\u0026#34;testPassword\u0026#34;) .build(); mvc.perform(MockMvcRequestBuilders.post(\u0026#34;/token/create\u0026#34;) .contentType(MediaType.APPLICATION_JSON) .content(new ObjectMapper().writeValueAsString(request))) .andExpect(status().isUnauthorized()); } @Test public void shouldGenerateAuthToken() throws Exception { TokenRequest request = TokenRequest.builder() .username(\u0026#34;libUser\u0026#34;) .password(\u0026#34;libPassword\u0026#34;) .build(); mvc.perform(MockMvcRequestBuilders.post(\u0026#34;/token/create\u0026#34;) .contentType(MediaType.APPLICATION_JSON) .content(new ObjectMapper().writeValueAsString(request))) .andExpect(status().isOk()); } } Here, we use MockMvc (auto-configured via @AutoConfigureMockMvc) to verify that our TokenController endpoint works as expected in both positive and negative scenarios.\nSimilarly, our BookControllerTest will look like this:\n@SpringBootTest @AutoConfigureMockMvc @SqlGroup({ @Sql(value = \u0026#34;classpath:init/first.sql\u0026#34;, executionPhase = BEFORE_TEST_METHOD), @Sql(value = \u0026#34;classpath:init/second.sql\u0026#34;, executionPhase = BEFORE_TEST_METHOD) }) public class BookControllerTest { @Autowired private MockMvc mockMvc; @Test void failsAsBearerTokenNotSet() throws Exception { mockMvc.perform(get(\u0026#34;/library/books/all\u0026#34;)) .andDo(print()) .andExpect(status().isUnauthorized()); } 
@Test void testWithValidBearerToken() throws Exception { TokenRequest request = TokenRequest.builder() .username(\u0026#34;libUser\u0026#34;) .password(\u0026#34;libPassword\u0026#34;) .build(); MvcResult mvcResult = mockMvc.perform( MockMvcRequestBuilders.post(\u0026#34;/token/create\u0026#34;) .contentType(MediaType.APPLICATION_JSON) .content(new ObjectMapper().writeValueAsString(request))) .andExpect(status().isOk()).andReturn(); String resultStr = mvcResult.getResponse().getContentAsString(); TokenResponse token = new ObjectMapper().readValue( resultStr, TokenResponse.class); mockMvc.perform(get(\u0026#34;/library/books/all\u0026#34;) .header(\u0026#34;Authorization\u0026#34;, \u0026#34;Bearer \u0026#34; + token.getToken())) .andDo(print()) .andExpect(status().isOk()) .andExpect(jsonPath(\u0026#34;$\u0026#34;, hasSize(5))); } @Test void testWithInvalidBearerToken() throws Exception { mockMvc.perform(get(\u0026#34;/library/books/all\u0026#34;) .header(\u0026#34;Authorization\u0026#34;, \u0026#34;Bearer 123\u0026#34;)) .andDo(print()) .andExpect(status().isUnauthorized()); } } To test the application endpoints, we use Spring\u0026rsquo;s MockMvc and load the in-memory database with data using sample SQL scripts. For this, we use the @SqlGroup and @Sql annotations, and place the insert scripts within the /resources/init folder.\nTo verify the successful run of testWithValidBearerToken(), we first call the /token/create endpoint using MockMvc, extract the token from the response, and set it in the Authorization header of the subsequent call to /library/books/all.\nConclusion In summary, JWT authentication is one step ahead of Spring\u0026rsquo;s Basic authentication in terms of security. It is one of the most sought-after means of authentication and authorization. 
In this article, we explored some best practices and advantages of using JWT, and looked at configuring a simple Spring Boot application to use JWT for security.\n","date":"June 19, 2024","image":"https://reflectoring.io/images/stock/0101-keylock-1200x628-branded_hu54aa4efa315910c5671932665107f87d_212538_650x0_resize_q90_box.jpg","permalink":"/spring-security-jwt/","title":"Getting Started with Spring Security and JWT"},{"categories":["Node"],"contents":"In this step-by-step guide, we\u0026rsquo;ll create, publish, and manage an NPM package using TypeScript for better code readability and scalability. We\u0026rsquo;ll write test cases with Jest and automate our NPM package versioning and publishing process using Changesets and GitHub Actions.\nAn NPM package allows for the encapsulation of reusable code, simplifies project development, and promotes collaboration by sharing useful libraries with the community. This accelerates the development process and gives us the option to keep this package private or make it public (open-source) for others to use.\nPrerequisites We\u0026rsquo;ll need the following:\n Node.js installed on our computer. Basic knowledge of TypeScript. GitHub Account. NPM Account.  In this post, we will create an NPM package to validate user inputs such as emails, mobile numbers, and social media links. Instead of rewriting these validation functions for each project, our package can be installed across projects to simplify and standardize the validation process.\nLet\u0026rsquo;s jump right into it:\n Example Code This article is accompanied by a working code example on GitHub. Step 1: Setting up Node.js Application Begin by creating an empty folder named validate-npm-pc.\nThis folder will serve as the name of our NPM package. 
It\u0026rsquo;s important to choose a unique name since the NPM registry hosts a vast number of packages, each requiring a distinct name for successful publication.\nTo initialize Node.js in the project, run the following command in the terminal:\nnpm init -y This command generates a package.json file, which holds essential metadata about our project, including details of the dependencies and version information.\nAgain in the terminal, we will execute the following:\nmkdir -p \\  .github/workflows \\  src \\  __tests__ touch \\  src/index.ts \\  src/validate.ts \\  __tests__/validate.ts \\  .github/workflows/release.yml \\  .gitignore These commands create all the necessary files and folders required in the package.\nInstall all the dependencies needed in the package by running the following command:\nnpm install \\  typescript \\  jest \\  ts-jest \\  @types/jest \\  @changesets/cli \\  --save-dev Here\u0026rsquo;s a brief overview of what each dependency does:\n typescript: Enables static typing in our code. jest: A JavaScript testing framework. ts-jest: A TypeScript preprocessor with source map support for Jest. @types/jest: Provides TypeScript type definitions for Jest. @changesets/cli: A command-line tool for managing versioning and changelogs in a monorepo setup. Changesets automates the NPM versioning process for our package.  NPM versioning follows the Semantic Versioning (SemVer) convention, which consists of three numbers separated by periods: MAJOR.MINOR.PATCH (for example \u0026ldquo;1.2.3\u0026rdquo;).\nAccording to SemVer, we are to:\n Increment the PATCH number for backward-compatible bug fixes. Increment the MINOR number for added functionality in a backward-compatible manner. Increment the MAJOR number for significant changes or incompatible API changes.  
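The SemVer rules above are mechanical enough to express in code. The following sketch is purely illustrative (it is not part of the package we are building; the npm version command and Changesets apply this logic for us):

```typescript
// Bump a MAJOR.MINOR.PATCH version string according to SemVer:
// a "major" bump resets minor and patch, a "minor" bump resets patch.
export function bump(version: string, level: "major" | "minor" | "patch"): string {
  const [major, minor, patch] = version.split(".").map(Number);
  switch (level) {
    case "major":
      return `${major + 1}.0.0`;
    case "minor":
      return `${major}.${minor + 1}.0`;
    case "patch":
      return `${major}.${minor}.${patch + 1}`;
  }
}

console.log(bump("1.2.3", "patch")); // "1.2.4"
console.log(bump("1.2.3", "minor")); // "1.3.0"
console.log(bump("1.2.3", "major")); // "2.0.0"
```

When we later run npm version or publish through Changesets, the version field in package.json is rewritten following exactly these rules.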
These versions should be updated appropriately whenever the NPM package is modified, since the same package version number cannot be published twice on the NPM registry.\nNext, we will update our package.json file with important properties and script commands.\nHere\u0026rsquo;s what our NPM package.json file should look like now:\n{ \u0026#34;name\u0026#34;: \u0026#34;validate-npm-pc\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;1.0.0\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;A comprehensive library for validating user inputs including emails, mobile numbers, and social media links.\u0026#34;, \u0026#34;main\u0026#34;: \u0026#34;dist/cjs/index.js\u0026#34;, \u0026#34;module\u0026#34;: \u0026#34;dist/esm/index.js\u0026#34;, \u0026#34;types\u0026#34;: \u0026#34;dist/types/index.d.ts\u0026#34;, \u0026#34;files\u0026#34;: [ \u0026#34;/dist\u0026#34; ], \u0026#34;scripts\u0026#34;: { \u0026#34;build\u0026#34;: \u0026#34;tsc --project tsconfig.json \u0026amp;\u0026amp; tsc --project tsconfig.cjs.json\u0026#34;, \u0026#34;release\u0026#34;: \u0026#34;npm run build \u0026amp;\u0026amp; changeset publish\u0026#34;, \u0026#34;test\u0026#34;: \u0026#34;jest\u0026#34; }, \u0026#34;keywords\u0026#34;: [ \u0026#34;validate\u0026#34; ], \u0026#34;author\u0026#34;: \u0026#34;ajibadde\u0026#34;, \u0026#34;license\u0026#34;: \u0026#34;ISC\u0026#34;, \u0026#34;devDependencies\u0026#34;: { \u0026#34;@changesets/cli\u0026#34;: \u0026#34;^2.27.1\u0026#34;, \u0026#34;@types/jest\u0026#34;: \u0026#34;^29.5.12\u0026#34;, \u0026#34;jest\u0026#34;: \u0026#34;^29.7.0\u0026#34;, \u0026#34;ts-jest\u0026#34;: \u0026#34;^29.1.2\u0026#34;, \u0026#34;typescript\u0026#34;: \u0026#34;^5.4.5\u0026#34; } } Our package.json now has these additional fields:\n main: This specifies the entry point for CommonJS users. When someone uses require(\u0026quot;validate-npm-pc\u0026quot;), Node.js will look for dist/cjs/index.js. module: This specifies the entry point for ES module users. 
When someone uses import from \u0026quot;validate-npm-pc\u0026quot;, tools like Webpack or Rollup will look for dist/esm/index.js. types: This specifies the location of the TypeScript declaration file. This helps TypeScript understand the types when our package is used. files: An array specifying which files should be included when our package is published. keywords: An array of keywords to enhance our package\u0026rsquo;s searchability on the NPM website. scripts: Define commands for building, releasing, and testing the project.  build: This script compiles TypeScript files into JavaScript using tsc. It runs two separate builds using the TypeScript configuration files we will be creating (tsconfig.json and tsconfig.cjs.json). release: This script first runs the build script, and if successful, it then publishes the changes using changeset publish. test: This runs our project tests using Jest.  Next, update the .gitignore to exclude unnecessary files from being included in our GitHub repository.\nTo do this, copy and paste the following into the .gitignore file:\ndist node_modules Step 2: Initializing Helper Packages Before proceeding with our package development, we need to initialize the necessary helper dependencies for our NPM package. We\u0026rsquo;ll be setting up TypeScript, Jest, and Changesets in our project.\nInitializing TypeScript We will configure TypeScript to compile our code to output both ES modules (ESM) and CommonJS modules (CJS). To achieve this, we will create two tsconfig.json files: one for ES modules and another for CommonJS modules.\nFirst, initialize TypeScript in the project by running the following command:\nnpx tsc --init This command generates our initial tsconfig.json file, which contains TypeScript configuration options. 
We\u0026rsquo;ll modify this configuration to enable publishing our package using ES modules.\nReplace the content of the tsconfig.json file with the following:\n{ \u0026#34;compilerOptions\u0026#34;: { \u0026#34;target\u0026#34;: \u0026#34;es2015\u0026#34;, \u0026#34;module\u0026#34;: \u0026#34;ESNext\u0026#34;, \u0026#34;declaration\u0026#34;: true, \u0026#34;outDir\u0026#34;: \u0026#34;./dist/esm\u0026#34;, \u0026#34;strict\u0026#34;: true, \u0026#34;esModuleInterop\u0026#34;: true, \u0026#34;forceConsistentCasingInFileNames\u0026#34;: true, \u0026#34;skipLibCheck\u0026#34;: true }, \u0026#34;include\u0026#34;: [\u0026#34;src/**/*\u0026#34;], \u0026#34;exclude\u0026#34;: [\u0026#34;node_modules\u0026#34;, \u0026#34;**/__tests__/*\u0026#34;] } Next, create a tsconfig.cjs.json file, then copy and paste the following:\n{ \u0026#34;extends\u0026#34;: \u0026#34;./tsconfig.json\u0026#34;, \u0026#34;compilerOptions\u0026#34;: { \u0026#34;module\u0026#34;: \u0026#34;CommonJS\u0026#34;, \u0026#34;outDir\u0026#34;: \u0026#34;./dist/cjs\u0026#34; } } In the tsconfig.cjs.json file we extended the settings from our initial tsconfig.json file, changing only the module system to CommonJS and setting the output directory to ./dist/cjs.\nWith the above configurations, TypeScript is set up to compile our package. 
The output will be organized into the ./dist directory, with ESM files located in ./dist/esm and CJS files in ./dist/cjs.\nInitializing Jest To set up unit tests for our package using Jest, we will create and configure a Jest configuration file by running the following command:\ntouch jest.config.mjs Then copy and paste the following configuration into the jest.config.mjs file:\nconst config = { moduleFileExtensions: [\u0026#34;ts\u0026#34;, \u0026#34;tsx\u0026#34;, \u0026#34;js\u0026#34;], preset: \u0026#34;ts-jest\u0026#34;, }; export default config; With the above configuration, Jest is now configured to work with TypeScript files, leveraging the ts-jest preset to compile TypeScript code during testing. Jest will recognize files with the specified extensions and execute tests accordingly.\nInitializing Changeset To simplify our NPM versioning process, we will leverage the changeset CLI dependency.\nchangeset monitors and automates version increments, ensuring precise updates following each change. It maintains a comprehensive record of changes made to our package, facilitating transparency and accountability in version management.\nTo initialize the changeset in our application, run:\nnpx changeset init This command generates a .changeset folder containing a README.md and a config.json file.\nBy default, the access setting in the config.json file is set to restricted. 
To publish our package with public access, update the content of the Changeset config.json file with the following:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://unpkg.com/@changesets/config@2.3.1/schema.json\u0026#34;, \u0026#34;changelog\u0026#34;: \u0026#34;@changesets/cli/changelog\u0026#34;, \u0026#34;commit\u0026#34;: false, \u0026#34;fixed\u0026#34;: [], \u0026#34;linked\u0026#34;: [], \u0026#34;access\u0026#34;: \u0026#34;public\u0026#34;, \u0026#34;baseBranch\u0026#34;: \u0026#34;main\u0026#34;, \u0026#34;updateInternalDependencies\u0026#34;: \u0026#34;patch\u0026#34;, \u0026#34;ignore\u0026#34;: [] } With these settings, our package is configured for public access and ready for versioning and publishing.\nStep 3: Writing Our Package Function Now, let\u0026rsquo;s proceed to the development of our package code. We\u0026rsquo;ll organize our package logic and functions as follows: the core validation functionalities will reside within the src/validate.ts file, while src/index.ts will serve as the main entry point, exporting all the functions of our module.\nIn the src/validate.ts file, copy and paste the following:\n/** * Validates a mobile number, ensuring it starts with a \u0026#34;+\u0026#34; sign * and contains only digits, with a maximum length of 15 characters. * @param {string} mobileNumber * @returns {boolean} * @example * validMobileNo(\u0026#34;+23470646932\u0026#34;) // Output: true */ export const validMobileNo = (mobileNumber: string): boolean =\u0026gt; { if (mobileNumber.charAt(0) === \u0026#34;+\u0026#34;) { const numberWithoutPlus = mobileNumber.slice(1); if (!isNaN(Number(numberWithoutPlus))) return numberWithoutPlus.length \u0026lt;= 15; } return false; }; /** * Validates an email address using a regular expression. 
* @param {string} email * @returns {boolean} * @example * validEmail(\u0026#34;example@mail.com\u0026#34;) // Output: true */ export const validEmail = (email: string): boolean =\u0026gt; { const emailRegex: RegExp = /^[^\\s@]+@[^\\s@]+\\.[^\\s@]+$/; return emailRegex.test(email); }; /** * Validates a social media URL for Facebook or Twitter. * @param {string} url * @returns {boolean} * @example * validSocial(\u0026#34;https://www.facebook.com/example\u0026#34;) // Output: true * validSocial(\u0026#34;https://www.twitter.com/example\u0026#34;) // Output: true */ export const validSocial = (url: string): boolean =\u0026gt; { const socialRegexMap: Map\u0026lt;string, RegExp\u0026gt; = new Map([ [\u0026#34;facebook\u0026#34;, /^(https?:\\/\\/)?(www\\.)?facebook.com\\/[a-zA-Z0-9._-]+\\/?$/], [\u0026#34;twitter\u0026#34;, /^(https?:\\/\\/)?(www\\.)?twitter.com\\/[a-zA-Z0-9_]+\\/?$/], // Add more social platforms\u0026#39; regex patterns here  ]); return Array.from(socialRegexMap.values()).some(regex =\u0026gt; regex.test(url)); }; In the above code snippet, we created three methods to validate users' input for emails, mobile numbers, and social media links using regular expressions.\nFinally, within the src/index.ts file, let\u0026rsquo;s import and re-export all our API methods as follows:\nimport { validEmail, validMobileNo, validSocial } from \u0026#34;./validate\u0026#34;; export { validEmail, validMobileNo, validSocial }; Step 4: Writing Tests To prevent avoidable bugs and errors, it is important to write tests for our package functions.\nTo do this, we\u0026rsquo;ll copy and paste the following into the __tests__/validate.ts file:\nimport { validMobileNo, validEmail, validSocial } from \u0026#34;../src/index\u0026#34;; describe(\u0026#34;validMobileNo\u0026#34;, () =\u0026gt; { test(\u0026#34;Valid mobile number with + sign and 15 digits\u0026#34;, () =\u0026gt; { expect(validMobileNo(\u0026#34;+234706469321234\u0026#34;)).toBe(true); }); 
test(\u0026#34;Invalid mobile number without + sign\u0026#34;, () =\u0026gt; { expect(validMobileNo(\u0026#34;234706469321234\u0026#34;)).toBe(false); }); test(\u0026#34;Invalid mobile number with more than 15 digits\u0026#34;, () =\u0026gt; { expect(validMobileNo(\u0026#34;+23470646932123456\u0026#34;)).toBe(false); }); }); describe(\u0026#34;validEmail\u0026#34;, () =\u0026gt; { test(\u0026#34;Valid email address\u0026#34;, () =\u0026gt; { expect(validEmail(\u0026#34;example@mail.com\u0026#34;)).toBe(true); }); test(\u0026#39;Invalid email address without \u0026#34;@\u0026#34; symbol\u0026#39;, () =\u0026gt; { expect(validEmail(\u0026#34;examplemail.com\u0026#34;)).toBe(false); }); test(\u0026#34;Invalid email address without domain\u0026#34;, () =\u0026gt; { expect(validEmail(\u0026#34;example@mail\u0026#34;)).toBe(false); }); }); describe(\u0026#34;validSocial\u0026#34;, () =\u0026gt; { test(\u0026#34;Valid Facebook URL\u0026#34;, () =\u0026gt; { expect(validSocial(\u0026#34;https://www.facebook.com/example\u0026#34;)).toBe(true); }); test(\u0026#34;Valid Twitter URL\u0026#34;, () =\u0026gt; { expect(validSocial(\u0026#34;https://www.twitter.com/example\u0026#34;)).toBe(true); }); test(\u0026#34;Invalid URL\u0026#34;, () =\u0026gt; { expect(validSocial(\u0026#34;https://www.invalid.com/example\u0026#34;)).toBe(false); }); }); This code defines a series of test cases to verify the functionality of our validation functions.\nTo execute these test cases in the terminal, run:\nnpm run test This will search for test files within our project directory and execute the test cases found in them.\nStep 5: Publishing the Package to NPM It is often recommended to compile TypeScript code before sharing it on npm for improved performance and compatibility across multiple JavaScript environments. 
Because our code is written in TypeScript, we will compile it into JavaScript before publishing.\nTo do this, we\u0026rsquo;ll run the build command in our package.json file:\nnpm run build This command reads TypeScript files (with a .ts extension) and compiles them into JavaScript files (with a .js extension). It automatically creates a ./dist folder housing the compiled JavaScript output of our TypeScript code, in both the ESM and CJS versions. Once the compilation is complete, our package is ready for publishing. We will publish our package manually from the terminal.\nManually publishing our first NPM package allows us to thoroughly understand each step of the process. By doing this, we gain insight into how NPM works, from creating an account to logging in and publishing a package.\nTo publish an NPM package from the terminal, ensure you have an NPM account.\nNext, log in to the NPM registry from the terminal by running:\nnpm login This will prompt us to enter our credentials, which will log us into our NPM account.\nAfter a successful login, our terminal should indicate that we are logged in on https://registry.npmjs.org/.\nWe are now ready to publish our package. To publish, run the npm publish command with the --access=public flag. Scoped packages are published as private by default; this flag makes our package publicly accessible.\nTo publish our package, run:\nnpm publish --access=public There we go! 
We successfully created and published an NPM package.\nNote: If you receive a 403 Forbidden error on the first publish attempt, it is likely because you haven\u0026rsquo;t yet verified your email address on npmjs.com, or because you are attempting to publish the same version of a package twice.\nWe can now view our published package on the NPM registry.\nUpdating a Published NPM Package Next, here are the steps to manually update our published NPM package after modifications have been made:\nVersioning After modifying the package, head to the package.json file and update the version number appropriately. We can do this manually or by using NPM\u0026rsquo;s version command:\nnpm version patch This command will automatically increment the patch version number.\nDepending on the significance of our changes, we can use npm version [major|minor|patch] to indicate the level of version change needed.\nPublishing the Update Once the changes and the version number update have been made, publish the update to NPM using:\nnpm publish This will publish our new changes to the NPM registry. By following these steps, we can ensure that our package updates are properly versioned and made available for installation through the NPM registry.\nStep 6: Automating NPM Publishing on GitHub Using Changesets We\u0026rsquo;ve built an awesome NPM library and can\u0026rsquo;t wait to start using it and sharing it with the world. However, manually publishing and updating this library can quickly become a hassle, especially if we\u0026rsquo;re open to receiving contributions from others. To streamline this, let\u0026rsquo;s automate our package publishing process using GitHub Actions and the Changesets action.\nWe\u0026rsquo;ll start by pushing our package code to a new GitHub repository if it hasn\u0026rsquo;t been done already. 
This ensures that our package is ready for integration with Changesets and GitHub Actions.\nOnce a changeset is created and merged into the GitHub main branch, our package will be automatically published to NPM. This eliminates the need for manual deployment and guarantees that our package remains consistently up to date. Additionally, all records of publishing are kept for reference.\nHere are the steps to automate our NPM package publishing process:\nGenerating and Adding the NPM Token to GitHub Secrets The NPM token is necessary for publishing packages to NPM via GitHub, enabling us to bypass the need to log in to the NPM registry manually.\nHere\u0026rsquo;s how to generate an NPM token: head to npmjs.com, navigate to your profile, and select Access Tokens. Then click Generate new token (Classic Token).\nClick the Generate Token button, and copy the generated token.\nNext, we will use GitHub secrets to protect this generated token within our project\u0026rsquo;s repository, as it is sensitive and must be kept secure.\nTo do this, head to GitHub and go to the project\u0026rsquo;s repository. Navigate to Settings -\u0026gt; Secrets and variables -\u0026gt; Actions, and select New repository secret to add our generated NPM token as a secret as shown below:\nBy storing the token as a GitHub secret, we ensure it is securely managed and can be used safely in our workflow.\nUpdating Repository Action Settings: Changeset auto-creates a new PR to publish our changes to NPM. However, by default, GitHub Actions cannot create PRs. 
To enable this functionality, we need to update our GitHub repository\u0026rsquo;s action settings.\nNavigate to repository Settings -\u0026gt; Actions -\u0026gt; General settings.\nThen enable \u0026ldquo;Read and write permissions\u0026rdquo; to grant GitHub Actions the necessary access to read from and write to the repository.\nBy granting this permission, Changeset will be able to create PRs, facilitating the automation of our package publishing process.\nWriting Our GitHub Actions Workflow Next, let\u0026rsquo;s write our action workflow.\nCopy and paste the following into the .github/workflows/release.yml file:\nname: Release on: push: branches: - main jobs: release: name: Release runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - uses: actions/setup-node@v3 with: node-version: 18 - run: npm install - run: npm run test - name: Publish to npm id: changesets uses: changesets/action@v1 with: publish: npm run release env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} NPM_TOKEN: ${{ secrets.NPM_TOKEN }} The above workflow is triggered on any push to the main branch. It uses Changesets to automate the process of versioning and publishing our package to NPM.\nIn our action\u0026rsquo;s environment variables, we use the saved NPM_TOKEN and a GITHUB_TOKEN, which is provided by GitHub. If you don\u0026rsquo;t have a GitHub token, you can follow this guide to create one. Don\u0026rsquo;t forget to grant read and write permissions in GitHub Actions.\nThis workflow automates the versioning and publishing process of our package. 
When a new PR or push occurs on the main branch, it automatically creates a PR named Version Packages containing the required changes to publish the package.\nVersioning and Pushing New Updates We are now ready to push our package to GitHub.\nTo do this, create a feature branch:\ngit checkout -b feature/test-changeset Next, we will make our desired changes or updates to the package code.\nWhenever we make changes that alter the package\u0026rsquo;s functionality, we need to create a new changeset before pushing or creating a pull request.\nCreate a new changeset by running:\nnpx changeset This command will prompt us to choose the type of version change [patch|minor|major] and provide a description of the changes. The description will be included in the changelog upon release. After completing the prompts, a new markdown file will be created in the .changeset folder, documenting the changes made. This file is essential for tracking the changes and versioning.\nNote: To remind contributors (and ourselves) to add a changeset to PRs, install the Changeset bot from the GitHub Marketplace. This bot will remind all contributors to include a changeset whenever they create a PR.\nNext, commit and push the changeset and new updates to the GitHub repository by running the following commands:\ngit add . git commit -m \u0026quot;feature/test-changeset: testing\u0026quot; git push -u origin feature/test-changeset Our changes are on GitHub; now create a new Pull Request (PR)!\nThe Changeset bot has already acknowledged that we\u0026rsquo;ve added our changeset file. If we hadn\u0026rsquo;t, it would have sent out a notice.\nApprove and merge the PR; this will integrate our changes into the main branch and trigger our GitHub Actions workflow.\nIf the action runs successfully, it creates a new PR named Version Packages!\nUpon merging the Version Packages PR, the publish script executes, and our updated package version is published to NPM.\nNow we are all set! 
Our NPM package is up to date, and its update and versioning process is fully automated.\nStep 7: Use the Package We can now install our published package in any project of choice:\nnpm install validate-npm-pc In this snippet, we can see our validate-npm-pc package in action, validating user inputs like a charm.\nConclusion Creating and publishing an NPM package is a powerful way to contribute to open-source and enhance code reusability. We covered the essentials, from initializing our package to publishing it on NPM, and using changesets for versioning and automated releases. For more details, explore the NPM and changeset documentation. Happy coding!\n","date":"June 16, 2024","image":"https://reflectoring.io/images/stock/0137-speed-1200x628-branded_hub713cc45004fb4a228379981531d1996_109522_650x0_resize_q90_box.jpg","permalink":"/create-and-publish-npm-package/","title":"Creating and Publishing an NPM Package with Automated Versioning and Deployment"},{"categories":["AWS","Spring Boot","Java"],"contents":"When building web applications that involve file uploads or downloads, a common approach is to have the files pass through an application server. However, this can lead to increased load on the server, consuming valuable computing resources, and potentially impacting performance. A more efficient solution is to offload file transfers to the client (web browsers, desktop/mobile applications) using Presigned URLs.\nPresigned URLs are time-limited URLs that allow clients temporary access to upload or download objects directly to or from the storage solution. These URLs are generated with a specified expiration time, after which they are no longer accessible.\nThe storage solution we\u0026rsquo;ll use in this article is Amazon S3 (Simple Storage Service), provided by AWS. However, it\u0026rsquo;s worth noting that the concept of Presigned URLs is not limited to AWS. 
It can also be implemented with other cloud storage services like Google Cloud Storage, DigitalOcean Spaces, etc.\nIn this article, we\u0026rsquo;ll discuss how to generate Presigned URLs in a Spring Boot application to delegate the responsibility of uploading/downloading files to the client. We\u0026rsquo;ll be using Spring Cloud AWS to communicate with Amazon S3 and develop a service class that provides methods for generating Presigned URLs.\nThese URLs will allow the client applications to securely upload and download objects to/from a provisioned S3 bucket. We\u0026rsquo;ll also test our developed Presigned URL functionality using LocalStack and Testcontainers.\n Example Code This article is accompanied by a working code example on GitHub. Use Cases and Benefits of Presigned URLs Before diving into the implementation, let\u0026rsquo;s further discuss the use cases and advantages of using Presigned URLs to offload file transfers from our application servers:\n  Large File Downloads: When we run an entertainment or e-learning platform that serves video courses to users, instead of serving the large video files from our application server, we can generate Presigned URLs for each video file and offload the responsibility of downloading/streaming the video directly from S3 to the client.\nTo secure this architecture, before we generate the Presigned URLs, our server can validate/authenticate the user requesting the video content.\nBy implementing Presigned URLs on applications that serve a high volume of content from S3, we reduce the load on our server(s), improve performance, and make our architecture scalable.\n  Uploading User-Generated Content: In scenarios where our application requires users to upload files such as profile pictures, documents, or other media content, instead of having the files pass through our application server, we can generate a Presigned URL with the necessary permissions and provide it to the client.\nThis approach not only 
reduces the load on our application server but also simplifies the upload process. The client can initiate the file upload directly to S3, eliminating the need for temporary storage on our server and the additional step of forwarding the file to S3.\n  Now that we understand the use cases for which we can implement Presigned URLs and their benefits, let\u0026rsquo;s proceed with the implementation.\nConfigurations We\u0026rsquo;ll be using Spring Cloud AWS to connect to and interact with our provisioned S3 bucket. While interacting with S3 directly through the AWS SDK for Java is possible, it often leads to verbose configuration classes and boilerplate code.\nSpring Cloud AWS simplifies this integration by providing a layer of abstraction over the official SDK, making it easier to interact with services like S3.\nThe main dependency that we need is spring-cloud-aws-starter-s3, which contains all S3-related classes needed by our application.\nWe will also make use of the Spring Cloud AWS BOM (Bill of Materials) to manage the version of the S3 starter in our project. The BOM ensures version compatibility between the declared dependencies, avoids conflicts, and makes it easier to update versions in the future.\nHere is what our pom.xml file looks like:\n\u0026lt;properties\u0026gt; \u0026lt;spring.cloud.version\u0026gt;3.1.1\u0026lt;/spring.cloud.version\u0026gt; \u0026lt;/properties\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;!-- Other project dependencies... 
--\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-aws-starter-s3\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-aws\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${spring.cloud.version}\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; Now, the only thing left in order to allow Spring Cloud AWS to establish a connection with the Amazon S3 service is to define the necessary configuration properties in our application.yaml file:\nspring: cloud: aws: credentials: access-key: ${AWS_ACCESS_KEY} secret-key: ${AWS_SECRET_KEY} s3: region: ${AWS_S3_REGION} Spring Cloud AWS will automatically create the necessary configuration beans using the above-defined properties, allowing us to interact with the S3 service in our application.\nS3 Bucket Name and Presigned URL Validity To perform operations against a provisioned S3 bucket, we need to provide its name. To generate Presigned URLs, we need to provide a validity duration. 
We\u0026rsquo;ll store these properties in our application.yaml file and make use of @ConfigurationProperties to map the values to a POJO:\n@Getter @Setter @Validated @ConfigurationProperties(prefix = \u0026#34;io.reflectoring.aws.s3\u0026#34;) public class AwsS3BucketProperties { @NotBlank(message = \u0026#34;S3 bucket name must be configured\u0026#34;) private String bucketName; @Valid private PresignedUrl presignedUrl = new PresignedUrl(); @Getter @Setter @Validated public class PresignedUrl { @NotNull(message = \u0026#34;S3 presigned URL validity must be specified\u0026#34;) @Positive(message = \u0026#34;S3 presigned URL validity must be a positive value\u0026#34;) private Integer validity; } public Duration getPresignedUrlValidity() { var urlValidity = this.presignedUrl.validity; return Duration.ofSeconds(urlValidity); } } We\u0026rsquo;ve added validation annotations to ensure that both the bucket name and Presigned URL validity are configured correctly. If any of the validations fail, it will result in the Spring Application Context failing to start up. This allows us to conform to the fail fast principle.\nWhen generating Presigned URLs, the validity duration needs to be provided as an instance of the Duration class. 
To facilitate this, we\u0026rsquo;ve also added a getPresignedUrlValidity() method in our class that\u0026rsquo;ll be invoked by our service layer.\nBelow is a snippet of our application.yaml file, which defines the required properties that will be automatically mapped to our AwsS3BucketProperties class defined above:\nio: reflectoring: aws: s3: bucket-name: ${AWS_S3_BUCKET_NAME} presigned-url: validity: ${AWS_S3_PRESIGNED_URL_VALIDITY} This setup allows us to externalize the bucket name and the validity duration of the Presigned URL attributes and easily access it in our code.\nFor the sake of demonstration, the defined configuration assumes that the application will be operating against a single S3 bucket and the defined validity will be applicable for both PUT and GET Presigned URLs. Should this not align with your application\u0026rsquo;s requirements, then the AwsS3BucketProperties class can be modified accordingly.\nI\u0026rsquo;d recommend keeping the validity of both PUT and GET Presigned URLs separate to enjoy more flexibility, and also would like to emphasize that the validity duration of the Presigned URLs should be kept as short as possible, especially for upload operations, to protect against unauthorized access.\nGenerating Presigned URLs Now that we have our configurations set up, we\u0026rsquo;ll proceed to develop our service class that generates Presigned URLs for uploading and downloading objects:\n@Service @RequiredArgsConstructor @EnableConfigurationProperties(AwsS3BucketProperties.class) public class StorageService { private final S3Template s3Template; private final AwsS3BucketProperties awsS3BucketProperties; public URL generateViewablePresignedUrl(String objectKey) { var bucketName = awsS3BucketProperties.getBucketName(); var urlValidity = awsS3BucketProperties.getPresignedUrlValidity(); return s3Template.createSignedGetURL(bucketName, objectKey, urlValidity); } public URL generateUploadablePresignedUrl(String objectKey) { var bucketName = 
awsS3BucketProperties.getBucketName(); var urlValidity = awsS3BucketProperties.getPresignedUrlValidity(); return s3Template.createSignedPutURL(bucketName, objectKey, urlValidity); } } We have used the S3Template class provided by Spring Cloud AWS in our service layer, which offers a high-level abstraction over the S3Presigner class from the official AWS SDK.\nWhile it\u0026rsquo;s possible to use the S3Presigner class directly, S3Template reduces boilerplate code and simplifies the generation of Presigned URLs by offering convenient, Spring-friendly methods.\nWe also make use of our custom AwsS3BucketProperties class to reference the S3 bucket name and the Presigned URL validity duration defined in our application.yaml file.\nIt\u0026rsquo;s important to ensure that the controller API endpoints that consume our service class are secured and preferably rate-limited. Before our application generates a Presigned URL, the requesting user\u0026rsquo;s identity and authority should be validated to prevent unauthorized access.\nRequired IAM Permissions To have our service layer generate Presigned URLs correctly, the IAM user whose security credentials we have configured must have the s3:GetObject and s3:PutObject permissions.\nHere is what our policy should look like:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:PutObject\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;arn:aws:s3:::bucket-name/*\u0026#34; } ] } The above IAM policy conforms to the least privilege principle by granting only the necessary permissions required for our service layer to generate Presigned URLs. 
We also specify the bucket ARN in the Resource field, further limiting the scope of the IAM policy to work with a single bucket that is provisioned for our application.\nEnabling Cross-Origin Resource Sharing (CORS) There\u0026rsquo;s one last thing we need to configure before our Presigned URLs can be invoked from our client applications.\nWhen invoking Presigned URLs from a client such as a single page application (SPA), we\u0026rsquo;ll encounter an error in our browser console:\nAccess to XMLHttpRequest at \u0026#39;https://..presigned-url\u0026#39; from origin \u0026#39;http://ourdomain.com\u0026#39; has been blocked by CORS policy: No \u0026#39;Access-Control-Allow-Origin\u0026#39; header is present on the requested resource. Ah, the infamous CORS error! 😩\nWe encounter this error because web browsers implement the Same-Origin Policy mechanism by default, which restricts our client application from interacting with a resource from another origin (S3 bucket in this context).\nIn order to solve this, we\u0026rsquo;ll need to add a CORS configuration to our provisioned S3 bucket:\n[ { \u0026#34;AllowedMethods\u0026#34;: [ \u0026#34;GET\u0026#34;, \u0026#34;PUT\u0026#34; ], \u0026#34;AllowedOrigins\u0026#34;: [ \u0026#34;http://localhost:8081\u0026#34; ], \u0026#34;AllowedHeaders\u0026#34;: [], \u0026#34;ExposeHeaders\u0026#34;: [], \u0026#34;MaxAgeSeconds\u0026#34;: 3000 } ] The above configuration allows our client application, hosted at the configured origin http://localhost:8081, to send HTTP GET and PUT requests to access our provisioned S3 bucket.\nIn our configuration, we follow the least privilege principle similar to our IAM policy and grant access only to the necessary origin and methods required by our application. 
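Once the CORS configuration is in place, a client can send the file bytes straight to S3 with an HTTP PUT against the presigned URL. As a minimal, hypothetical sketch (our own illustration, not part of the article's example code; the class name, helper names, and URL are made up), here is how a non-browser Java client using the JDK's built-in HttpClient could perform such an upload:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client-side helper that uploads content directly to S3
// via a presigned PUT URL obtained from our backend.
public class PresignedUrlUploader {

    // Builds the PUT request targeting the presigned URL. Kept separate
    // from sending so the request can be inspected without network access.
    // Note: any headers sent here must match what was signed into the URL.
    static HttpRequest buildUploadRequest(String presignedUrl, byte[] content) {
        return HttpRequest.newBuilder()
                .uri(URI.create(presignedUrl))
                .PUT(HttpRequest.BodyPublishers.ofByteArray(content))
                .build();
    }

    // Sends the request; S3 responds with 200 OK on a successful upload.
    static int upload(String presignedUrl, byte[] content) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<Void> response = client.send(
                buildUploadRequest(presignedUrl, content),
                HttpResponse.BodyHandlers.discarding());
        return response.statusCode();
    }
}
```

A browser client would do the equivalent with fetch or XMLHttpRequest, which is exactly where the CORS rules discussed here come into play; a plain backend-to-backend client like the one above is not subject to the Same-Origin Policy.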
An overly permissive CORS configuration that uses a wildcard * for all properties should be avoided, as it can introduce security vulnerabilities.\nIntegration Testing with LocalStack and Testcontainers Before concluding this article, we need to ensure that our configurations and service layer work correctly and are able to generate legitimate Presigned URLs. We\u0026rsquo;ll be making use of LocalStack and Testcontainers to do this, but first let\u0026rsquo;s look at what these two tools are:\n LocalStack : is a cloud service emulator that enables local development and testing of AWS services, without the need for connecting to a remote cloud provider. We\u0026rsquo;ll be provisioning the required S3 bucket inside this emulator. Testcontainers : is a library that provides lightweight, throwaway instances of Docker containers for integration testing. We\u0026rsquo;ll be starting a LocalStack container via this library.  The prerequisite for running the LocalStack emulator via Testcontainers is, as you’ve guessed, an up-and-running Docker instance. 
We need to ensure this prerequisite is met when running the test suite either locally or when using a CI/CD pipeline.\nLet’s start by declaring the required test dependencies in our pom.xml:\n\u0026lt;!-- Test dependencies --\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-test\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.testcontainers\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;localstack\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; The declared spring-boot-starter-test gives us the basic testing toolbox, as it transitively includes JUnit, AssertJ, and other utility libraries that we will need for writing assertions and running our tests.\nAnd the org.testcontainers:localstack dependency will allow us to run the LocalStack emulator inside a disposable Docker container, ensuring an isolated environment for our integration test.\nProvisioning S3 Bucket Using Init Hooks In order to upload and download objects from an S3 bucket via Presigned URLs, we need\u0026hellip; an S3 bucket. (big brain stuff 🧠)\nLocalStack gives us the ability to create required AWS resources when the container is started via Initialization Hooks. We\u0026rsquo;ll be creating a bash script init-s3-bucket.sh for this purpose inside our src/test/resources folder:\n#!/bin/bash bucket_name=\u0026#34;reflectoring-bucket\u0026#34; awslocal s3api create-bucket --bucket $bucket_name echo \u0026#34;S3 bucket \u0026#39;$bucket_name\u0026#39; created successfully\u0026#34; echo \u0026#34;Executed init-s3-bucket.sh\u0026#34; The script creates an S3 bucket with the name reflectoring-bucket. 
We\u0026rsquo;ll copy this script to the path /etc/localstack/init/ready.d inside the LocalStack container for execution in our integration test class.\nStarting LocalStack via Testcontainers At the time of this writing, the latest version of the LocalStack image is 3.4, we\u0026rsquo;ll be using this version in our integration test class:\n@SpringBootTest class StorageServiceIT { private static final LocalStackContainer localStackContainer; // Bucket name as configured in src/test/resources/init-s3-bucket.sh  private static final String BUCKET_NAME = \u0026#34;reflectoring-bucket\u0026#34;; private static final Integer PRESIGNED_URL_VALIDITY = randomValiditySeconds(); static { localStackContainer = new LocalStackContainer(DockerImageName.parse(\u0026#34;localstack/localstack:3.4\u0026#34;)) .withCopyFileToContainer(MountableFile.forClasspathResource(\u0026#34;init-s3-bucket.sh\u0026#34;, 0744), \u0026#34;/etc/localstack/init/ready.d/init-s3-bucket.sh\u0026#34;) .withServices(Service.S3) .waitingFor(Wait.forLogMessage(\u0026#34;.*Executed init-s3-bucket.sh.*\u0026#34;, 1)); localStackContainer.start(); } @DynamicPropertySource static void properties(DynamicPropertyRegistry registry) { // spring cloud aws properties  registry.add(\u0026#34;spring.cloud.aws.credentials.access-key\u0026#34;, localStackContainer::getAccessKey); registry.add(\u0026#34;spring.cloud.aws.credentials.secret-key\u0026#34;, localStackContainer::getSecretKey); registry.add(\u0026#34;spring.cloud.aws.s3.region\u0026#34;, localStackContainer::getRegion); registry.add(\u0026#34;spring.cloud.aws.s3.endpoint\u0026#34;, localStackContainer::getEndpoint); // custom properties  registry.add(\u0026#34;io.reflectoring.aws.s3.bucket-name\u0026#34;, () -\u0026gt; BUCKET_NAME); registry.add(\u0026#34;io.reflectoring.aws.s3.presigned-url.validity\u0026#34;, () -\u0026gt; PRESIGNED_URL_VALIDITY); } private static int randomValiditySeconds() { return ThreadLocalRandom.current().nextInt(5, 11); } } 
That\u0026rsquo;s a lot of setup code 😥, let\u0026rsquo;s break it down. In our integration test class StorageServiceIT, we do the following:\n Start a new instance of the LocalStack container and enable the S3 service. Copy our bash script init-s3-bucket.sh into the container to ensure bucket creation. Configure a strategy to wait for the log \u0026quot;Executed init-s3-bucket.sh\u0026quot; to be printed, as defined in our init script. Configure a small random Presigned URL validity using randomValiditySeconds(). Dynamically define the AWS configuration properties needed by our applications to create the required S3-related beans using @DynamicPropertySource.  Our @DynamicPropertySource code block declares an additional spring.cloud.aws.s3.endpoint property, which is not present in the main application.yaml file.\nThis property is necessary when connecting to the LocalStack container\u0026rsquo;s S3 bucket, reflectoring-bucket, as it requires a specific endpoint URL. However, when connecting to an actual AWS S3 bucket, specifying an endpoint URL is not required. 
AWS automatically uses the default endpoint for each service in the configured region.\nThe LocalStack container will be automatically destroyed post test suite execution, hence we do not need to worry about manual cleanups.\nWith this setup, our application will use the started LocalStack container for all interactions with AWS cloud during the execution of our integration test, providing an isolated and ephemeral testing environment.\nTesting the Service Layer With the LocalStack container set up successfully via Testcontainers, we can now write test cases to ensure our service layer generates legitimate Presigned URLs that can be used to upload and download objects to/from the provisioned S3 bucket:\n@SpringBootTest class StorageServiceIT { @Autowired private S3Template s3Template; @Autowired private StorageService storageService; // LocalStack setup as seen above  @Test void shouldGeneratePresignedUrlToFetchStoredObjectFromBucket() { // Prepare test file and upload to S3 Bucket  var key = RandomString.make(10) + \u0026#34;.txt\u0026#34;; var fileContent = RandomString.make(50); var fileToUpload = createTextFile(key, fileContent); storageService.save(fileToUpload); // Invoke method under test  var presignedUrl = storageService.generateViewablePresignedUrl(key); // Perform a GET request to the presigned URL  var restClient = RestClient.builder().build(); var responseBody = restClient.method(HttpMethod.GET) .uri(URI.create(presignedUrl.toExternalForm())) .retrieve() .body(byte[].class); // verify the retrieved content matches the expected file content.  
var retrievedContent = new String(responseBody, StandardCharsets.UTF_8); assertThat(fileContent).isEqualTo(retrievedContent); } private MultipartFile createTextFile(String fileName, String content) { var fileContentBytes = content.getBytes(); var inputStream = new ByteArrayInputStream(fileContentBytes); return new MockMultipartFile(fileName, fileName, \u0026#34;text/plain\u0026#34;, inputStream); } } In our initial test case, we verify that our StorageService class successfully generates a Presigned URL that can be used to download an object from the provisioned S3 bucket.\nWe begin by preparing a file with random content and name and save it to our S3 bucket. Then we invoke the generateViewablePresignedUrl method exposed by our service layer with the corresponding random file key.\nFinally, we perform an HTTP GET request on the generated Presigned URL and assert that the API response matches the saved file\u0026rsquo;s content.\nNow, to validate the functionality of uploading an object through the generated Presigned URL:\n@Test void shouldGeneratePresignedUrlForUploadingObjectToBucket() { // Prepare test file to upload  var key = RandomString.make(10) + \u0026#34;.txt\u0026#34;; var fileContent = RandomString.make(50); var fileToUpload = createTextFile(key, fileContent); // Invoke method under test  var presignedUrl = storageService.generateUploadablePresignedUrl(key); // Upload the test file using the presigned URL  var restClient = RestClient.builder().build(); var response = restClient.method(HttpMethod.PUT) .uri(URI.create(presignedUrl.toExternalForm())) .body(fileToUpload.getBytes()) .retrieve() .toBodilessEntity(); assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK); // Verify that the file is saved successfully in S3 bucket  var isFileSaved = s3Template.objectExists(BUCKET_NAME, key); assertThat(isFileSaved).isTrue(); } In the above test case, we again create a test file with a random key and content. 
We invoke the generateUploadablePresignedUrl method of our service layer with the corresponding random file key to generate the Presigned URL.\nWe perform an HTTP PUT request on the generated Presigned URL and send the contents of the test file in the request body.\nFinally, we make use of S3Template to assert that the file is indeed saved in our S3 bucket successfully.\nBy executing the above integration test cases, we successfully validate that our service layer generates valid Presigned URLs for both uploading and downloading objects to/from the provisioned S3 bucket.\nConclusion In this article, we explored how to generate Presigned URLs in a Spring Boot application to offload file transfers from the application server to the client.\nWe used Spring Cloud AWS to communicate with our Amazon S3 bucket and reduced boilerplate configuration code.\nWe discussed the benefits and use cases of using Presigned URLs, such as handling large file downloads and user-generated content uploads. We walked through the necessary configurations and developed a service class that generates Presigned URLs for uploading and downloading objects to/from an S3 bucket.\nThroughout the article, we\u0026rsquo;ve discussed various security considerations and best practices to strengthen our architecture\u0026rsquo;s security and mitigate risks, such as keeping the validity of the Presigned URL short, securing the controller API endpoints that generate Presigned URLs, and following the least privilege principle when defining the IAM policy and CORS configuration.\nWe also tested our implementation using LocalStack and Testcontainers to ensure the developed functionality works as expected.\nThe source code demonstrated throughout this article is available on GitHub. 
I would highly encourage you to explore the codebase and set it up locally.\n","date":"June 15, 2024","image":"https://reflectoring.io/images/stock/0139-stamped-envelope-1200x628-branded_hu4301089be0b3f5191ac0f2f8e5ac18ff_121314_650x0_resize_q90_box.jpg","permalink":"/offloading-file-transfers-with-amazon-s3-presigned-urls-in-spring-boot/","title":"Offloading File Transfers with Amazon S3 Presigned URLs in Spring Boot"},{"categories":["Java"],"contents":"Introduction to Functional Programming Functional programming is a paradigm that focuses on the use of functions to create clear and concise code. Instead of modifying data and maintaining state like in traditional imperative programming, functional programming treats functions as first-class citizens. That makes it possible to assign them to variables, pass as arguments, and return from other functions. This approach can make code easier to understand and reason about.\n Example Code This article is accompanied by a working code example on GitHub. History of Functional Programming Lambda calculus, also known as λ-calculus, is a formal system in mathematical logic used to express computation through function abstraction and application with variable binding and substitution. Alonzo Church, a mathematician, introduced it in the 1930s as part of his mathematical foundations research.\nJohn McCarthy created Lisp, the initial high-level functional programming language, in the late 1950s at Massachusetts Institute of Technology (MIT) for the IBM 700/7000 series of scientific computers. Lisp introduced many concepts central to functional programming, like first-class functions and recursion. Lisp influenced many modern functional languages.\nMoving forward, in the 1970s, languages like ML (Meta Language) and Scheme built on these ideas, introducing features like type inference and lazy evaluation. 
Haskell, another pivotal language introduced in the 1990s, brought pure functional programming to the forefront with its strong emphasis on immutability and function composition.\nThese languages laid the groundwork for many features we see in Java 8. Inspired by these pioneers, Java adopted functional programming principles to enhance its expressiveness and conciseness. Lambda expressions, method references, and a rich set of functional interfaces like Function, Predicate, and Consumer are now part of Java’s repertoire.\nCompetition from Scala and Python Scala and Python are indeed strong competitors in the functional programming space.\nScala combines object-oriented and functional programming paradigms. It gained traction among Java developers looking for more powerful abstractions. Its compatibility with the JVM and expressive syntax made it a compelling alternative.\nPython’s simplicity, readability, and support for functional programming through features like list comprehensions, lambda functions, and higher-order functions made it a favorite for many developers, especially in the fields of data science and web development.\nBy integrating functional programming features, Java aimed to provide its existing user base with modern tools without needing to switch languages, ensuring it remained a versatile and powerful choice for a wide range of applications.\nFunctional Programming in Java In recent years, functional programming has gained popularity due to its ability to help manage complexity, especially in large-scale applications. It emphasizes immutability, avoiding side effects, and working with data in a more predictable and modular way. This makes it easier to test and maintain code.\nJava, traditionally an object-oriented language, adopted functional programming features in Java 8. 
The following factors triggered this move:\n  Simplifying Code: Functional programming can reduce boilerplate code and make code more concise, leading to easier maintenance and better readability.\n  Concurrency and Parallelism: Functional programming works well with modern multicore architectures, enabling efficient parallel processing without worrying about shared state or side effects.\n  Expressiveness and Flexibility: By embracing functional interfaces and lambda expressions, Java gained a more expressive syntax, allowing us to write flexible and adaptable code.\n  Functional programming in Java revolves around several key concepts and idioms:\n  Lambda Expressions: Use these compact functions wherever we need to provide a functional interface. They help reduce boilerplate code.\n  Method References: These are a shorthand way to refer to methods, making code even more concise and readable.\n  Functional Interfaces: These are interfaces with a single abstract method, making them perfect for lambda expressions and method references. Common examples include Predicate, Function, Consumer, Supplier, and the operator variants UnaryOperator and BinaryOperator.\n  Advantages and Disadvantages of Functional Programming Functional programming in Java brings many advantages but also has its share of disadvantages and challenges.\nOne of the key benefits of functional programming is that it improves code readability. Functional code tends to be concise, thanks to lambda expressions and method references, leading to reduced boilerplate and easier code maintenance. This focus on immutability—where data structures remain unchanged after creation—helps to reduce side effects and prevents bugs caused by unexpected changes in state.\nAnother advantage is its compatibility with concurrency and parallelism. Since functional programming promotes immutability, operations can run in parallel without the usual risks of data inconsistency or race conditions. 
This results in code that\u0026rsquo;s naturally better suited for multithreaded environments.\nFunctional programming also promotes modularity and reusability. With functions being first-class citizens, we create small, reusable components, leading to cleaner, more maintainable code. The abstraction inherent in functional programming reduces overall complexity, allowing us to focus on the essential logic without worrying about implementation details.\nHowever, these advantages come with potential drawbacks. The learning curve for functional programming can be steep, especially for those of us accustomed to imperative or object-oriented paradigms. Concepts like higher-order functions and immutability might require a significant mindset shift.\nPerformance overheads are another concern, particularly due to frequent object creation and additional function calls in functional programming. This could impact performance in resource-constrained environments. Debugging functional code can also be challenging due to the abstractions involved, and understanding complex lambda expressions might require a deeper understanding of functional concepts.\nCompatibility issues may arise when integrating with legacy systems or libraries that aren\u0026rsquo;t designed for functional programming. Finally, functional programming\u0026rsquo;s focus on immutability and side-effect-free functions may reduce flexibility in scenarios that require mutability or complex object manipulations.\nUltimately, while functional programming offers significant benefits like improved readability and easier concurrency, it also comes with challenges. We need to consider both the advantages and disadvantages to determine how functional programming fits into our Java applications.\nUnderstanding Functional Interfaces The @FunctionalInterface annotation in Java is a special marker that makes an interface a functional interface.
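As a quick sketch of the annotation in action (the Greeter interface is made up for illustration), an interface stays functional as long as it declares exactly one abstract method; default and static methods do not count:

```java
@FunctionalInterface
interface Greeter {
    // The single abstract method (SAM) that a lambda implements.
    String greet(String name);

    // Default and static methods are allowed; they don't count as abstract.
    default String greetWorld() {
        return greet("World");
    }

    static Greeter polite() {
        return name -> "Good day, " + name + ".";
    }

    // Uncommenting a second abstract method breaks the contract,
    // and the compiler rejects the interface:
    // String farewell(String name);
}

public class AnnotationSketch {
    public static void main(String[] args) {
        Greeter greeter = name -> "Hello, " + name + "!";
        System.out.println(greeter.greetWorld()); // prints Hello, World!
    }
}
```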
A functional interface is an interface with a single abstract method (SAM). That makes it possible to use it as a target for lambda expressions or method references.\nThis annotation serves as a way to document our intention for the interface and provides a layer of protection against accidental changes. By using @FunctionalInterface, we indicate that the interface should maintain its single-method structure. If we add more abstract methods, the compiler will generate an error, ensuring the functional interface\u0026rsquo;s integrity.\nFunctional interfaces are central to Java\u0026rsquo;s support for functional programming. They allow us to write cleaner, more concise code by using lambda expressions, reducing boilerplate code, and promoting reusability. Common examples of functional interfaces include Predicate, Consumer, Function, and Supplier.\nUsing the @FunctionalInterface annotation isn\u0026rsquo;t strictly necessary. Any interface with a single abstract method is inherently a functional interface. But it\u0026rsquo;s a good practice. It improves code readability, enforces constraints, and helps others to understand our intentions, contributing to better maintainability and consistency in our codebase.\nCreating Custom Functional Interfaces We now know that a functional interface in Java is an interface with a single abstract method.\nLet\u0026rsquo;s consider a simple calculator example that takes two integers and returns the result of an arithmetic operation. To implement this, we can define a functional interface called ArithmeticOperation, which has a single method to perform the operation.\nHere\u0026rsquo;s the definition of the functional interface:\n@FunctionalInterface interface ArithmeticOperation { int operate(int a, int b); } Consider the ArithmeticOperation interface, marked with @FunctionalInterface. 
This annotation makes it clear that the interface is functional, emphasizing that it should only contain one abstract method.\nThe ArithmeticOperation interface defines a single method, operate(), that takes two integers and returns an integer result.\nWith this functional interface, we create different arithmetic operations, like addition, subtraction, multiplication, and division, using lambda expressions.\nLet\u0026rsquo;s build a basic calculator with this setup:\n@Test void operate() { // Define operations  ArithmeticOperation add = (a, b) -\u0026gt; a + b; ArithmeticOperation subtract = (a, b) -\u0026gt; a - b; ArithmeticOperation multiply = (a, b) -\u0026gt; a * b; ArithmeticOperation divide = (a, b) -\u0026gt; a / b; // Verify results  assertEquals(15, add.operate(10, 5)); assertEquals(5, subtract.operate(10, 5)); assertEquals(50, multiply.operate(10, 5)); assertEquals(2, divide.operate(10, 5)); } The test operate() verifies that the defined arithmetic operations produce the expected results. Using the ArithmeticOperation functional interface, it begins by generating lambda expressions for the four fundamental arithmetic operations of addition, subtraction, multiplication, and division. After that, it uses assertions to confirm that the results of these operations on the integers 10 and 5 match the expected values.\nBuilt-in Functional Interfaces Here\u0026rsquo;s an overview of some of the most common built-in functional interfaces in Java 8, along with their typical use cases and examples:\n   Functional Interface Description Example Use Cases     Predicate\u0026lt;T\u0026gt; Represents a function that takes an input of type T and returns a boolean. Commonly used for filtering and conditional checks.
Checking if a number is even; filtering a list of strings based on length; validating user inputs   Function\u0026lt;T, R\u0026gt; Represents a function that takes an input of type T and returns a result of type R. Often used for transformation or mapping operations. Converting a string to uppercase; mapping employee objects to their salaries; parsing a string to an integer   Consumer\u0026lt;T\u0026gt; Represents a function that takes an input of type T and performs an action, without returning a result. Ideal for side-effect operations like printing or logging. Printing a list of numbers; logging user actions; updating object properties   Supplier\u0026lt;T\u0026gt; Represents a function that provides a value of type T without taking any arguments. Useful for lazy initialization and deferred computation. Generating random numbers; providing default values; creating new object instances   UnaryOperator\u0026lt;T\u0026gt; Represents a function that takes an input of type T and returns a result of the same type. Often used for simple transformations or operations. Negating a number; reversing a string; incrementing a value   BinaryOperator\u0026lt;T\u0026gt; Represents a function that takes two inputs of type T and returns a result of the same type. Useful for combining or reducing operations. Adding two numbers; concatenating strings; finding the maximum of two values    These built-in functional interfaces in Java 8 provide a foundation for functional programming, enabling us to work with lambda expressions and streamline code. Due to their versatility, we can use them in a wide range of applications, from data transformation to filtering and beyond.\nLambda Expressions Explained Lambda expressions are a key feature of Java 8, allowing us to create compact, anonymous functions in a clear and concise manner.
They are a cornerstone of functional programming in Java and provide a concise way to implement functional interfaces.\nThe general syntax of a lambda expression is as follows:\n(parameters) -\u0026gt; { body } Parameters represent a comma-separated list of input parameters to the lambda function. If there\u0026rsquo;s only one parameter, we can omit the parentheses. The arrow operator separates the parameters from the body of the lambda expression. Finally, the body contains the function logic. If there\u0026rsquo;s only one statement, we can omit the braces. Typically, the logic in the body will be concise. But it can be complex, multiline logic when required.\nExample of a concise lambda expression:\nFunction\u0026lt;String, String\u0026gt; toUpper = s -\u0026gt; s == null ? null : s.toUpperCase(); Example of a complex, multiline lambda expression:\nIntToLongFunction factorial = n -\u0026gt; { long result = 1L; for (int i = 1; i \u0026lt;= n; i++) { result *= i; } return result; }; We can use lambda expressions to create anonymous functions. That allows us to write inline logic without the need for additional class definitions. We can use such anonymous functions anywhere a functional interface is expected.\nInner Workings of Lambda Expressions Have you ever wondered what a lambda expression looks like in Java code and inside the JVM? It\u0026rsquo;s quite fascinating! In Java, we have two types of values: primitive types and object references. Now, lambdas are definitely not primitive types, which means they must be something else. Well, a lambda expression is actually a special kind of expression that returns an object reference. Isn\u0026rsquo;t that intriguing?\nLet\u0026rsquo;s decode it.
We start by writing a lambda expression in our source code.\nFor example:\npublic class Lambda { LongFunction\u0026lt;Double\u0026gt; squareArea = side -\u0026gt; (double) (side * side); } When we compile it and check its bytecode using the javap command:\njavap -c -p Lambda.class Compiled from \u0026#34;Lambda.java\u0026#34; public class Lambda { java.util.function.LongFunction\u0026lt;java.lang.Double\u0026gt; squareArea; public Lambda(); Code: 0: aload_0 1: invokespecial #1 // Method java/lang/Object.\u0026#34;\u0026lt;init\u0026gt;\u0026#34;:()V 4: aload_0 5: invokedynamic #7,0//InvokeDynamic #0:apply:()Ljava/util/function/LongFunction; 10: putfield #11 // Field squareArea:Ljava/util/function/LongFunction; 13: return private static java.lang.Double lambda$new$0(long); Code: 0: lload_0 1: lload_0 2: lmul 3: l2d 4: invokestatic #17 // Method java/lang/Double.valueOf:(D)Ljava/lang/Double; 7: areturn } Did you notice that the bytecode starts with an invokedynamic call? Imagine it as a call to a special factory method. This factory method returns an instance of a type that implements the target functional interface, here LongFunction. The compiler does not define the specific type in the bytecode, and knowing the exact type is not important: the JVM generates it at runtime when needed, not during compilation.\n  Compilation: When we compile the code, the Java compiler transforms the lambda expression into a form that the Java Virtual Machine (JVM) can understand. Instead of generating a new anonymous inner class, the compiler uses the invokedynamic instruction, introduced in Java 7.\n  InvokeDynamic: The invokedynamic bytecode instruction supports dynamic languages on the JVM. For lambdas, it allows the JVM to defer the decision of how to create the lambda instance until runtime. This provides more flexibility and efficiency compared to traditional anonymous inner classes.\n  Lambda Metafactory: When the runtime encounters the invokedynamic instruction, it calls a special method called LambdaMetafactory.metafactory().
This method is responsible for creating the actual implementation of the lambda expression. The JVM uses this metafactory method to generate a lightweight class or method handle that represents the lambda.\n  Instance Creation: The LambdaMetafactory dynamically creates an instance of the lambda expression. This instance is typically a singleton if the lambda is stateless (i.e., it doesn\u0026rsquo;t capture any variables from the enclosing scope). If the lambda captures variables, it creates a new instance with those captured values.\n  Execution: It executes the lambda expression as if it were an instance of an anonymous inner class implementing the functional interface. The JVM ensures that the lambda conforms to the expected functional interface\u0026rsquo;s single abstract method.\n   \nHere are a few examples demonstrating how to use lambda expressions without relying on built-in functional interfaces:\nExample 1: Implementing a Custom Functional Interface We have already seen a custom functional interface for arithmetic operations:\ninterface ArithmeticOperation { int operate(int a, int b); } We create lambda expressions to implement this interface:\nArithmeticOperation add = (a, b) -\u0026gt; a + b; ArithmeticOperation subtract = (a, b) -\u0026gt; a - b; Example 2: Anonymous Comparator It is not mandatory to define a custom functional interface and then use it to declare lambdas:\nList\u0026lt;String\u0026gt; words = Arrays.asList(\u0026#34;apple\u0026#34;, \u0026#34;banana\u0026#34;, \u0026#34;cherry\u0026#34;); Collections.sort(words, (s1, s2) -\u0026gt; Integer.compare(s1.length(), s2.length())); In this example, we created an anonymous comparator to sort a list of strings by length.\nExample 3: Runnable for a Thread We can also use lambda expressions to create a Runnable for threads:\nThread thread = new Thread(() -\u0026gt; { System.out.println(\u0026#34;Running in a lambda!\u0026#34;); }); thread.start(); This example demonstrates how we create an
executable task using a lambda.\nThese examples demonstrate how we can use lambda expressions to define simple, concise functions without explicitly creating additional classes. They are powerful tools for streamlining code and making functional programming in Java more accessible and expressive.\nLambda Expressions and Var We cannot declare a lambda expression itself with var because lambdas require an explicit target type. The following assignment will fail:\nvar addAsVar = (a, b) -\u0026gt; a + b; It gives the error: Cannot infer type: lambda expression requires an explicit target type.\nThe code is incorrect because we cannot use var to infer the type of a lambda expression itself. We can use var only for local variable type inference, not for lambda expressions or method return types.\nLet\u0026rsquo;s now see how we can use var in lambda expressions:\nArithmeticOperation add = (var a, var b) -\u0026gt; a + b; The lambda expression (var a, var b) -\u0026gt; a + b defines a lambda that takes two parameters a and b, both using the var keyword to indicate that the compiler should infer their types. This lambda performs addition on the two parameters.\nWe can also use bean validation annotations:\nArithmeticOperation addNullSafe = (@NotNull var a, @NotNull var b) -\u0026gt; a + b; Similar to the previous example, this lambda also takes two parameters with var. Additionally, it uses the @NotNull annotation from the Bean Validation library, declaring that the parameters a and b must not be null.\nMethod References Method references are a shorthand way to refer to existing methods by their name. Instead of using lambda expressions, we can use method references to write code that is more concise and easier to read. Method references let us pass executable logic.
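To make the shorthand concrete, here is a small sketch (the sample values are illustrative) showing a lambda that merely delegates to an existing method, and the method reference that replaces it:

```java
import java.util.List;
import java.util.function.Function;

public class MethodRefSketch {
    public static void main(String[] args) {
        // A lambda that only forwards its argument to an existing method...
        Function<String, Integer> viaLambda = s -> Integer.parseInt(s);
        // ...can be written as a method reference to that method.
        Function<String, Integer> viaReference = Integer::parseInt;

        List<String> inputs = List.of("1", "2", "3");
        // Both forms behave identically.
        int sumLambda = inputs.stream().map(viaLambda).mapToInt(Integer::intValue).sum();
        int sumReference = inputs.stream().map(viaReference).mapToInt(Integer::intValue).sum();
        System.out.println(sumLambda + " " + sumReference); // prints 6 6
    }
}
```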
Such deferred method invocation makes them ideal for functional programming scenarios and stream processing.\nJava 8 provides four types of method references:\n Reference to a Static Method Reference to an Instance Method of a Particular Type Reference to an Instance Method of an Arbitrary Object of a Particular Type Reference to a Constructor  Let\u0026rsquo;s learn about them.\nReference to a Static Method A static method reference refers to a static method in a class. It uses the class name followed by :: and the method name:\nContainingClass::staticMethodName Let\u0026rsquo;s see an example of static reference:\npublic class MethodReferenceTest { @Test void staticMethodReference() { List\u0026lt;Integer\u0026gt; numbers = List.of(1, -2, 3, -4, 5); List\u0026lt;Integer\u0026gt; positiveNumbers = numbers.stream().map(Math::abs).toList(); positiveNumbers.forEach(number -\u0026gt; Assertions.assertTrue(number \u0026gt; 0)); } } The test staticMethodReference in the MethodReferenceTest class verifies the use of a static method reference. It creates a list of integers, numbers, containing both positive and negative values. Using a stream, it applies the Math::abs method reference to convert each number to its absolute value, resulting in a new list, positiveNumbers. The test then checks that each element in positiveNumbers is positive.\nReference to an Instance Method of a Particular Type This type of method reference refers to an instance method of a specific type.\nThere are two primary syntaxes for referencing instance methods: using a containing class or using a specific object instance.\nUsing a Containing Class:\nContainingClass::instanceMethodName The ContainingClass::instanceMethodName syntax denotes an instance method belonging to a particular class. This method reference is not for a specific object instance but rather signifies that any object of that class can use the method. 
We commonly use it in stream operations, where we know the object instance at runtime.\nFor example, we can use String::toLowerCase to refer to the toLowerCase() method on any String object. Use it in a stream operation like .map(String::toLowerCase) to apply it to each string in the stream.\nContaining class instance method reference example:\n@Test void containingClassInstanceMethodReference() { List\u0026lt;String\u0026gt; numbers = List.of(\u0026#34;One\u0026#34;, \u0026#34;Two\u0026#34;, \u0026#34;Three\u0026#34;); List\u0026lt;Integer\u0026gt; numberChars = numbers.stream().map(String::length).toList(); numberChars.forEach(length -\u0026gt; Assertions.assertTrue(length \u0026gt; 0)); } The containingClassInstanceMethodReference test verifies the use of an instance method reference. It creates a list of strings, numbers, containing \u0026ldquo;One\u0026rdquo;, \u0026ldquo;Two\u0026rdquo;, and \u0026ldquo;Three\u0026rdquo;. Using a stream, it applies the String::length method reference to convert each string into its length, resulting in a new list, numberChars. The test checks that each element in numberChars is greater than zero, ensuring that all strings have a positive length.\nUsing a Specific Object:\ncontainingObject::instanceMethodName The syntax containingObject::instanceMethodName refers to an instance method of a specific object. It binds this method reference to a particular object, allowing us to call its method directly when needed.\nFor example, if we have an instance str of String, we can refer to its length() method with str::length. This approach is useful when we need to use a specific object\u0026rsquo;s method in a lambda expression or a stream operation.\nNow let\u0026rsquo;s see how to use containing object method reference:\n// Custom comparator class StringNumberComparator implements Comparator\u0026lt;String\u0026gt; { @Override public int compare(String o1, String o2) { if (o1 == null) { return o2 == null ? 
0 : 1; } else if (o2 == null) { return -1; } return o1.compareTo(o2); } } @Test void containingObjectInstanceMethodReference() { List\u0026lt;String\u0026gt; numbers = List.of(\u0026#34;One\u0026#34;, \u0026#34;Two\u0026#34;, \u0026#34;Three\u0026#34;); StringNumberComparator comparator = new StringNumberComparator(); List\u0026lt;String\u0026gt; sorted = numbers.stream().sorted(comparator::compare).toList(); List\u0026lt;String\u0026gt; expected = List.of(\u0026#34;One\u0026#34;, \u0026#34;Three\u0026#34;, \u0026#34;Two\u0026#34;); Assertions.assertEquals(expected, sorted); } The code snippet sorts a list of strings using an instance method reference. The StringNumberComparator class defines a comparison logic for strings. The comparator::compare is a method reference that references the compare method of the StringNumberComparator instance. It passes method reference to sorted(), allowing the stream to sort the numbers list according to the specified comparison logic. The test checks if the sorted list matches the expected order.\nComparison of two syntaxes: Both syntaxes are useful in different scenarios. The class-based method reference is more flexible, allowing us to reference methods without tying them to a specific object. The object-based method reference, on the other hand, is helpful when we want to use a method tied to a specific object instance. 
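The contrast between the two syntaxes can be sketched as follows (the sample strings are illustrative): the class-based form leaves the receiver open for the stream to supply, while the object-based form binds the reference to one particular instance:

```java
import java.util.Comparator;
import java.util.List;

public class InstanceRefSketch {
    public static void main(String[] args) {
        List<String> words = List.of("pear", "fig", "banana");

        // Class-based: String::compareToIgnoreCase names no receiver;
        // the stream supplies both arguments at runtime.
        List<String> sortedByClassRef = words.stream()
                .sorted(String::compareToIgnoreCase)
                .toList();

        // Object-based: the reference is bound to this one comparator instance.
        Comparator<String> byLength = Comparator.comparingInt(String::length);
        List<String> sortedByObjectRef = words.stream()
                .sorted(byLength::compare)
                .toList();

        System.out.println(sortedByClassRef);  // prints [banana, fig, pear]
        System.out.println(sortedByObjectRef); // prints [fig, pear, banana]
    }
}
```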
Both approaches provide a more concise way to call instance methods without the need for traditional anonymous classes or explicit lambda expressions.\nReference to an Instance Method of an Arbitrary Object of a Particular Type This type also refers to an instance method, but it determines the exact object at runtime, allowing flexibility when dealing with collections or stream operations:\n@Test void instanceMethodArbitraryObjectParticularType() { List\u0026lt;Number\u0026gt; numbers = List.of(1, 2L, 3.0f, 4.0d); List\u0026lt;Integer\u0026gt; numberIntValues = numbers.stream().map(Number::intValue).toList(); Assertions.assertEquals(List.of(1, 2, 3, 4), numberIntValues); } The instanceMethodArbitraryObjectParticularType test checks the use of a method reference to an instance method of an arbitrary object of a particular type. It creates a list of Number objects (numbers) containing various types of numeric values: an int, a long, a float, and a double.\nUsing a stream, it maps each Number to its integer value using the Number::intValue method reference, resulting in a list of integers (numberIntValues).
The test then compares this list with the expected result.\nReference to a Constructor A constructor reference refers to a class constructor, allowing us to create new instances through a method reference.\nIts syntax is as follows:\nContainingClass::new The ContainingClass::new points to the constructor of a specific class, allowing us to create new instances.\nLet\u0026rsquo;s now see how to use constructor reference:\n@Test void constructorReference() { List\u0026lt;String\u0026gt; numbers = List.of(\u0026#34;1\u0026#34;, \u0026#34;2\u0026#34;, \u0026#34;3\u0026#34;); Map\u0026lt;String, BigInteger\u0026gt; numberMapping = numbers.stream() .map(BigInteger::new) .collect(Collectors.toMap(BigInteger::toString, Function.identity())); Map\u0026lt;String, BigInteger\u0026gt; expected = new HashMap\u0026lt;\u0026gt;() { { put(\u0026#34;1\u0026#34;, BigInteger.valueOf(1)); put(\u0026#34;2\u0026#34;, BigInteger.valueOf(2)); put(\u0026#34;3\u0026#34;, BigInteger.valueOf(3)); } }; Assertions.assertEquals(expected, numberMapping); } The constructorReference test demonstrates the use of a constructor reference in a stream operation. It creates a list of strings (numbers) containing \u0026ldquo;1\u0026rdquo;, \u0026ldquo;2\u0026rdquo;, and \u0026ldquo;3\u0026rdquo;. Using a stream, it maps each string to a BigInteger object by referencing the BigInteger constructor with BigInteger::new.\nThe test then collects the resulting BigInteger objects into a Map, where the keys are the original strings, and the values are the corresponding BigInteger instances. 
It uses Collectors.toMap with a method reference (BigInteger::toString) to create the keys and Function.identity() for the values.\nFinally, the test compares it with an expected map (expected) containing the same key-value pairs.\nLet\u0026rsquo;s summarize the use cases for method references, along with descriptions and examples:\n   Type of Method Reference Description Example     Reference to a Static Method Refers to a static method in a class. This type of method reference uses the class name followed by :: and the method name. Function\u0026lt;Integer, Integer\u0026gt; square = MathOperations::square;   Reference to an Instance Method of a Particular Object Refers to an instance method of a specific object. The instance must be explicitly defined before using the method reference. Supplier\u0026lt;String\u0026gt; getMessage = stringUtils::getMessage;   Reference to an Instance Method of an Arbitrary Object of a Particular Type Refers to an instance method of an arbitrary object of a specific type. We commonly use this type in stream operations, where Java determines the object type at runtime. List\u0026lt;String\u0026gt; uppercasedWords = words.stream()\n.map(String::toUpperCase)\n.collect(Collectors.toList());   Reference to a Constructor Refers to a class constructor, allowing us to create new instances. This type is useful when we need to create objects without explicitly calling a constructor. Supplier\u0026lt;Car\u0026gt; carSupplier = Car::new;    Predicates Predicates are functional interfaces in Java that represent boolean-valued functions of a single argument. They are commonly used for filtering, testing, and conditional operations.\nThe Predicate functional interface is part of the java.util.function package and defines a functional method test(T t) that returns a boolean.
It also provides default methods that allow combining two predicates:\n@FunctionalInterface public interface Predicate\u0026lt;T\u0026gt; { boolean test(T t); // default methods } The test() method evaluates the predicate on the input argument and determines whether it satisfies the condition defined by the predicate.\nWe often use predicates with the stream() API for filtering elements based on certain conditions. Pass them as arguments to methods like filter() to specify the criteria for selecting elements from a collection.\nLet\u0026rsquo;s see filtering in action:\npublic class PredicateTest { @Test void testFiltering() { List\u0026lt;Integer\u0026gt; numbers = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10); Predicate\u0026lt;Integer\u0026gt; isEven = num -\u0026gt; num % 2 == 0; List\u0026lt;Integer\u0026gt; actual = numbers.stream().filter(isEven).toList(); List\u0026lt;Integer\u0026gt; expected = List.of(2, 4, 6, 8, 10); Assertions.assertEquals(expected, actual); } } In the test testFiltering() method, first we populate a list of integers. Then we define a predicate isEven to check if a number is even. Using stream() and filter() methods, we filter the list to contain only even numbers. 
Finally, we compare the filtered list to the expected list.\nCombining Predicates We can combine predicates using logical operators such as and(), or(), negate(), and not() to create complex conditions.\nLet\u0026rsquo;s see how to combine the predicates:\n@Test void testPredicate() { List\u0026lt;Integer\u0026gt; numbers = List.of(-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5); Predicate\u0026lt;Integer\u0026gt; isZero = num -\u0026gt; num == 0; Predicate\u0026lt;Integer\u0026gt; isPositive = num -\u0026gt; num \u0026gt; 0; Predicate\u0026lt;Integer\u0026gt; isNegative = num -\u0026gt; num \u0026lt; 0; Predicate\u0026lt;Integer\u0026gt; isOdd = num -\u0026gt; num % 2 == 1; Predicate\u0026lt;Integer\u0026gt; isPositiveOrZero = isPositive.or(isZero); Predicate\u0026lt;Integer\u0026gt; isPositiveAndOdd = isPositive.and(isOdd); Predicate\u0026lt;Integer\u0026gt; isNotPositive = Predicate.not(isPositive); Predicate\u0026lt;Integer\u0026gt; isNotZero = isZero.negate(); Predicate\u0026lt;Integer\u0026gt; isAlsoZero = isPositive.negate().and(isNegative.negate()); // check zero or greater  Assertions.assertEquals(List.of(0, 1, 2, 3, 4, 5), numbers.stream().filter(isPositiveOrZero).toList()); // check greater than zero and odd  Assertions.assertEquals(List.of(1, 3, 5), numbers.stream().filter(isPositiveAndOdd).toList()); // check not positive (zero or negative)  Assertions.assertEquals(List.of(-5, -4, -3, -2, -1, 0), numbers.stream().filter(isNotPositive).toList()); // check not zero  Assertions.assertEquals(List.of(-5, -4, -3, -2, -1, 1, 2, 3, 4, 5), numbers.stream().filter(isNotZero).toList()); // check neither positive nor negative  Assertions.assertEquals(numbers.stream().filter(isZero).toList(), numbers.stream().filter(isAlsoZero).toList()); } In this test, we have combined predicates to filter a list of numbers. isPositiveOrZero combines predicates for positive numbers or zero. isPositiveAndOdd combines predicates for positive and odd numbers.
isNotPositive negates the predicate for positive numbers. isNotZero negates the predicate for zero. isAlsoZero shows us how to chain predicates. We apply each combined predicate to the list, and verify the expected results.\nPredicate Evaluation Order When we combine multiple predicates, we need to pay attention to the order in which the predicates are evaluated.\nConsider the following example:\nPredicate\u0026lt;Integer\u0026gt; isPositive = x -\u0026gt; x \u0026gt; 0; Predicate\u0026lt;Integer\u0026gt; isDiv3 = x -\u0026gt; x % 3 == 0; Predicate\u0026lt;Integer\u0026gt; isOdd = x -\u0026gt; x % 2 != 0; The following table explains the evaluation order.\n   Predicate Expression Evaluation Order Description     Predicate\u0026lt;Integer\u0026gt; test1 = isPositive.or(isDiv3).and(isOdd) (isPositive or isDiv3), then result and isOdd First checks if the number is positive or divisible by 3, then checks if the result is odd.   Predicate\u0026lt;Integer\u0026gt; test2 = isPositive.and(isOdd).or(isDiv3) (isPositive and isOdd), then result or isDiv3 First checks if the number is positive and odd, then checks if the result is true or if the number is divisible by 3.   Predicate\u0026lt;Integer\u0026gt; test3 = (isPositive.and(isOdd)).or(isDiv3) (isPositive and isOdd), then result or isDiv3 Same as test2, checks if the number is positive and odd, then checks if the result is true or if the number is divisible by 3.   Predicate\u0026lt;Integer\u0026gt; test4 = isPositive.and((isOdd).or(isDiv3)) (isOdd or isDiv3), then isPositive and result First checks if the number is odd or divisible by 3, then checks if the number is positive and the previous result is true.    Let\u0026rsquo;s take a deep dive into the evaluation order of the first test: Predicate\u0026lt;Integer\u0026gt; test1: isPositive.or(isDiv3).and(isOdd);\nFirst, it evaluates the isPositive or isDiv3 condition. If isPositive is true, then the result is true. If isPositive is false, proceed to check isDiv3. Then, if isDiv3 is true, the result is true.
Finally, if both isPositive and isDiv3 are false, the result is false.\nNext, take the result from the first step and evaluate it with isOdd. If the result from the first step is true, check the isOdd condition. If the result from the first step is false, the final result is false.\nSimilarly, it evaluates the order for other predicates given in the table above.\nBiPredicates The BiPredicate\u0026lt;T, U\u0026gt; takes two arguments of types T and U and returns a boolean result. It\u0026rsquo;s common to use them for testing conditions involving two parameters. For instance, we use BiPredicate to check if one value is greater than the other or if two objects satisfy a specific relationship. We may validate if a person\u0026rsquo;s age and income meet certain eligibility criteria for a financial service.\nBiPredicate defines a test() method with two arguments, and it returns a boolean. It also provides default methods that allow combining two predicates:\n@FunctionalInterface public interface BiPredicate\u0026lt;T, U\u0026gt; { boolean test(T t, U u); // default methods } Let\u0026rsquo;s now learn how to use the BiPredicate:\npublic class PredicateTest { // C = Carpenter, W = Welder  private Object[][] workers = {{\u0026#34;C\u0026#34;, 24}, {\u0026#34;W\u0026#34;, 32}, {\u0026#34;C\u0026#34;, 35}, {\u0026#34;W\u0026#34;, 40}, {\u0026#34;C\u0026#34;, 50}, {\u0026#34;W\u0026#34;, 44}, {\u0026#34;C\u0026#34;, 30}}; @Test void testBiPredicate() { BiPredicate\u0026lt;String, Integer\u0026gt; juniorCarpenterCheck = (worker, age) -\u0026gt; \u0026#34;C\u0026#34;.equals(worker) \u0026amp;\u0026amp; (age \u0026gt;= 18 \u0026amp;\u0026amp; age \u0026lt;= 40); BiPredicate\u0026lt;String, Integer\u0026gt; juniorWelderCheck = (worker, age) -\u0026gt; \u0026#34;W\u0026#34;.equals(worker) \u0026amp;\u0026amp; (age \u0026gt;= 18 \u0026amp;\u0026amp; age \u0026lt;= 40); long juniorCarpenterCount = Arrays.stream(workers).filter(person -\u0026gt; juniorCarpenterCheck.test((String) 
person[0], (Integer) person[1])).count(); Assertions.assertEquals(3L, juniorCarpenterCount); long juniorWelderCount = Arrays.stream(workers).filter(person -\u0026gt; juniorWelderCheck.test((String) person[0], (Integer) person[1])).count(); Assertions.assertEquals(2L, juniorWelderCount); } } In the test, first, we defined an array of workers with their respective ages. We have created two BiPredicate instances: juniorCarpenterCheck and juniorWelderCheck. These predicates evaluate if a worker is within a certain age range (18 to 40) based on their occupation (Carpenter or Welder). Then we use these predicates to filter the array of workers using the test() method. Finally, we count the workers meeting the criteria for junior carpenters and junior welders and verify if they match the expected counts.\nNow let\u0026rsquo;s learn to use the default methods used to combine and negate:\n@Test void testBiPredicateDefaultMethods() { // junior carpenters  BiPredicate\u0026lt;String, Integer\u0026gt; juniorCarpenterCheck = (worker, age) -\u0026gt; \u0026#34;C\u0026#34;.equals(worker) \u0026amp;\u0026amp; (age \u0026gt;= 18 \u0026amp;\u0026amp; age \u0026lt;= 40); // groomed carpenters  BiPredicate\u0026lt;String, Integer\u0026gt; groomedCarpenterCheck = (worker, age) -\u0026gt; \u0026#34;C\u0026#34;.equals(worker) \u0026amp;\u0026amp; (age \u0026gt;= 30 \u0026amp;\u0026amp; age \u0026lt;= 40); // all carpenters  BiPredicate\u0026lt;String, Integer\u0026gt; allCarpenterCheck = (worker, age) -\u0026gt; \u0026#34;C\u0026#34;.equals(worker) \u0026amp;\u0026amp; (age \u0026gt;= 18); // junior welders  BiPredicate\u0026lt;String, Integer\u0026gt; juniorWelderCheck = (worker, age) -\u0026gt; \u0026#34;W\u0026#34;.equals(worker) \u0026amp;\u0026amp; (age \u0026gt;= 18 \u0026amp;\u0026amp; age \u0026lt;= 40); // junior workers  BiPredicate\u0026lt;String, Integer\u0026gt; juniorWorkerCheck = juniorCarpenterCheck.or(juniorWelderCheck); // junior groomed carpenters  
BiPredicate\u0026lt;String, Integer\u0026gt; juniorGroomedCarpenterCheck = juniorCarpenterCheck.and(groomedCarpenterCheck); // all welders  BiPredicate\u0026lt;String, Integer\u0026gt; allWelderCheck = allCarpenterCheck.negate(); // test or()  long juniorWorkerCount = Arrays.stream(workers).filter(person -\u0026gt; juniorWorkerCheck .test((String) person[0], (Integer) person[1])) .count(); Assertions.assertEquals(5L, juniorWorkerCount); // test and()  long juniorGroomedCarpenterCount = Arrays.stream(workers).filter(person -\u0026gt; juniorGroomedCarpenterCheck .test((String) person[0], (Integer) person[1])).count(); Assertions.assertEquals(2L, juniorGroomedCarpenterCount); // test negate()  long allWelderCount = Arrays.stream(workers).filter(person -\u0026gt; allWelderCheck .test((String) person[0], (Integer) person[1])) .count(); Assertions.assertEquals(3L, allWelderCount); } The test demonstrates default methods in BiPredicate. It defines predicates for various worker conditions, like junior carpenters and welders. Using default methods or(), and(), and negate(), it creates new predicates for combinations like all junior workers, groomed carpenters, and non-carpenters. We apply these predicates to filter workers, and verify the counts. This showcases how default methods enhance the functionality of BiPredicate by enabling logical operations like OR, AND, and negation.\nIntPredicate IntPredicate represents a predicate (boolean-valued function) that takes a single integer argument and returns a boolean result.\n@FunctionalInterface public interface IntPredicate { boolean test(int value); // default methods } This is the int-consuming primitive type specialization of Predicate.\nUse IntPredicate to filter collections of primitive integer values or evaluate conditions based on integer inputs. 
It provides several default methods for composing predicates, including and(), or(), and negate(), allowing for logical combinations of predicates.\nHere\u0026rsquo;s a simple example:\n@Test void testIntPredicate() { IntPredicate isZero = num -\u0026gt; num == 0; IntPredicate isPositive = num -\u0026gt; num \u0026gt; 0; IntPredicate isNegative = num -\u0026gt; num \u0026lt; 0; IntPredicate isOdd = num -\u0026gt; num % 2 != 0; IntPredicate isPositiveOrZero = isPositive.or(isZero); IntPredicate isPositiveAndOdd = isPositive.and(isOdd); IntPredicate isNotZero = isZero.negate(); IntPredicate isAlsoZero = isPositive.negate().and(isNegative.negate()); // check zero or greater  Assertions.assertArrayEquals(new int[] {0, 1, 2, 3, 4, 5}, IntStream.range(-5, 6).filter(isPositiveOrZero).toArray()); // check greater than zero and odd  Assertions.assertArrayEquals(new int[] {1, 3, 5}, IntStream.range(-5, 6).filter(isPositiveAndOdd).toArray()); // check not zero  Assertions.assertArrayEquals(new int[] {-5, -4, -3, -2, -1, 1, 2, 3, 4, 5}, IntStream.range(-5, 6).filter(isNotZero).toArray()); // check neither positive nor negative  Assertions.assertArrayEquals( IntStream.range(-5, 6).filter(isZero).toArray(), IntStream.range(-5, 6).filter(isAlsoZero).toArray()); } The testIntPredicate() method demonstrates various scenarios using IntPredicate. Predicates like isZero, isPositive, and isNegative check specific conditions on integers. Note that isOdd uses num % 2 != 0 rather than num % 2 == 1, because Java\u0026rsquo;s remainder operator yields -1 for negative odd numbers. Combined predicates like isPositiveOrZero and isPositiveAndOdd perform logical operations. Tests verify filtering of integer ranges based on these predicates, ensuring correct outcomes for conditions like zero or greater, greater than zero and odd, not zero, and neither positive nor negative. Each assertion validates the filtering results against expected integer arrays, covering a wide range of scenarios.\nLike IntPredicate, we also have LongPredicate and DoublePredicate.
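For instance, a LongPredicate can filter a LongStream and a DoublePredicate can filter a DoubleStream, without boxing the primitive values (a minimal sketch; the isEven and isAffordable names are our own):

```java
import java.util.function.DoublePredicate;
import java.util.function.LongPredicate;
import java.util.stream.DoubleStream;
import java.util.stream.LongStream;

public class PrimitivePredicates {
    public static void main(String[] args) {
        // LongPredicate: keep only even long values
        LongPredicate isEven = value -> value % 2 == 0;
        long evenCount = LongStream.rangeClosed(1, 10).filter(isEven).count();
        System.out.println(evenCount); // 5

        // DoublePredicate: keep prices under a threshold
        DoublePredicate isAffordable = price -> price < 50.0;
        double[] affordable = DoubleStream.of(19.99, 75.0, 42.5).filter(isAffordable).toArray();
        System.out.println(affordable.length); // 2
    }
}
```

Both interfaces also provide the same and(), or(), and negate() default methods as IntPredicate.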
These can be used to handle long and double values.\nFunctions The Function functional interface in Java represents a single-valued function that takes one argument and produces a result. It\u0026rsquo;s part of the java.util.function package.\nThe Function Interface and Its Variants The Function interface contains a single abstract method called apply(), which takes an argument of type T and returns a result of type R.\n@FunctionalInterface public interface Function\u0026lt;T, R\u0026gt; { R apply(T t); // default methods } This interface enables developers to define and use functions that transform input values into output values, facilitating various data processing tasks. With Function we create reusable and composable transformations, making code more concise and expressive. We widely use it for mapping, filtering, and transforming data streams.\nFunction interface has several variants like BiFunction, IntFunction, and more. We\u0026rsquo;ll also learn about them in sections to follow.\nLet\u0026rsquo;s witness the power of Function in action:\n@Test void simpleFunction() { Function\u0026lt;String, String\u0026gt; toUpper = s -\u0026gt; s == null ? null : s.toUpperCase(); Assertions.assertEquals(\u0026#34;JOY\u0026#34;, toUpper.apply(\u0026#34;joy\u0026#34;)); Assertions.assertNull(toUpper.apply(null)); } The test applies a Function to convert a string to uppercase. It asserts the converted value and also checks for null input handling.\nFunction Composition Function composition is a process of combining multiple functions to create a new function. The compose() method in the Function interface combines two functions by applying the argument function first and then the caller function. Conversely, the andThen() method applies the caller function first and then the argument function.\nFor example, if we have two functions: one to convert a string to upper case and another to remove vowels from it, we can compose them using compose() or andThen(). 
If we use compose(), the argument function runs first: the vowels are removed and then the string is converted to uppercase. Conversely, if we use andThen(), the caller function runs first: the string is converted to uppercase and then the vowels are removed.\nLet\u0026rsquo;s verify function composition:\n@Test void functionComposition() { Function\u0026lt;String, String\u0026gt; toUpper = s -\u0026gt; s == null ? null : s.toUpperCase(); Function\u0026lt;String, String\u0026gt; replaceVowels = s -\u0026gt; s == null ? null : s.replace(\u0026#34;A\u0026#34;, \u0026#34;\u0026#34;) .replace(\u0026#34;E\u0026#34;, \u0026#34;\u0026#34;) .replace(\u0026#34;I\u0026#34;, \u0026#34;\u0026#34;) .replace(\u0026#34;O\u0026#34;, \u0026#34;\u0026#34;) .replace(\u0026#34;U\u0026#34;, \u0026#34;\u0026#34;); Assertions.assertEquals(\u0026#34;APPLE\u0026#34;, toUpper.compose(replaceVowels).apply(\u0026#34;apple\u0026#34;)); Assertions.assertEquals(\u0026#34;PPL\u0026#34;, toUpper.andThen(replaceVowels).apply(\u0026#34;apple\u0026#34;)); } In the functionComposition test, we compose two functions to manipulate a string. The first function converts the string to uppercase, while the second one removes uppercase vowels. Using compose(), it first removes vowels and then converts to uppercase; since replaceVowels only removes uppercase vowels, the lowercase input \u0026#34;apple\u0026#34; passes through unchanged and the result is \u0026#34;APPLE\u0026#34;. Using andThen(), it first converts to uppercase and then removes vowels, giving \u0026#34;PPL\u0026#34;. We verify the results using assertions.\nBiFunction The BiFunction interface represents a function that accepts two arguments and produces a result. It\u0026rsquo;s similar to the Function interface, but it operates on two input parameters instead of one:\n@FunctionalInterface public interface BiFunction\u0026lt;T, U, R\u0026gt; { R apply(T t, U u); // default methods } This is the specialized version of Function with two arguments.
It is a functional interface that defines the apply(Object, Object) functional method.\nFor example, suppose we have a BiFunction that takes two integers as input and returns the bigger number.\nLet\u0026rsquo;s define it and test the results:\n@Test void biFunction() { BiFunction\u0026lt;Integer, Integer, Integer\u0026gt; bigger = (first, second) -\u0026gt; first \u0026gt; second ? first : second; Function\u0026lt;Integer, Integer\u0026gt; square = number -\u0026gt; number * number; Assertions.assertEquals(10, bigger.apply(4, 10)); Assertions.assertEquals(100, bigger.andThen(square).apply(4, 10)); } The BiFunction interface combines two input values and produces a result. In this test, bigger selects the larger of two integers. square then calculates the square of a number. The result of bigger is passed to square, which squares the larger integer.\nIntFunction The IntFunction interface represents a function that takes an integer as input and produces a result of any type.\n@FunctionalInterface public interface IntFunction\u0026lt;R\u0026gt; { R apply(int value); } This represents the int-consuming specialization for Function. It is a functional interface with a functional method named apply(int).\nWe can define custom logic based on integer inputs and return values of any type, making it versatile for various use cases in Java programming.\nLet\u0026rsquo;s witness the IntFunction in action:\n@Test void intFunction() { IntFunction\u0026lt;Integer\u0026gt; square = number -\u0026gt; number * number; Assertions.assertEquals(100, square.apply(10)); } The test applies an IntFunction to compute the square of an integer.
It ensures that the square function correctly calculates the square of the input integer.\nSimilarly, we have LongFunction and DoubleFunction that accept long and double arguments respectively.\nIntToDoubleFunction The IntToDoubleFunction interface represents a function that accepts an int-valued argument and produces a double-valued result.\n@FunctionalInterface public interface IntToDoubleFunction { double applyAsDouble(int value); } This is the specialized int-to-double conversion for the Function interface. It is a functional interface with a method called applyAsDouble(int).\nLet\u0026rsquo;s explore the implementation of IntToDoubleFunction:\n@Test void intToDoubleFunction() { int principalAmount = 1000; // Initial investment amount  double interestRate = 0.05; // Annual interest rate (5%)  IntToDoubleFunction accruedInterest = principal -\u0026gt; principal * interestRate; Assertions.assertEquals(50.0, accruedInterest.applyAsDouble(principalAmount)); } In this example, IntToDoubleFunction is used to define a function accruedInterest that calculates the interest accrued based on the principal amount provided as an integer input. Then the test verifies the calculated interest.\nSimilarly, we have IntToLongFunction, LongToIntFunction, LongToDoubleFunction, DoubleToIntFunction and DoubleToLongFunction to map the input to respective result types.\nFunctions and Stream Operations Functional interfaces like IntToDoubleFunction and IntToLongFunction are particularly useful when working with streams of primitive data types. For instance, if we have a stream of integers, and we need to perform operations that require converting those integers to doubles or longs, we can use these functional interfaces within stream operations like mapToInt, mapToDouble, and mapToLong.
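For instance, IntStream.mapToDouble() accepts an IntToDoubleFunction, so the accrued-interest calculation can run over a whole stream of principal amounts (a minimal sketch, reusing the flat 5% rate from the example above):

```java
import java.util.function.IntToDoubleFunction;
import java.util.stream.IntStream;

public class StreamMapping {
    public static void main(String[] args) {
        // assumption: the same flat 5% annual rate as in the earlier example
        IntToDoubleFunction accruedInterest = principal -> principal * 0.05;

        // mapToDouble turns an IntStream into a DoubleStream without boxing
        double totalInterest = IntStream.of(1000, 2000, 3000)
            .mapToDouble(accruedInterest)
            .sum();
        System.out.println(totalInterest); // 300.0
    }
}
```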
This allows us to efficiently perform transformations on stream elements without the overhead of autoboxing and unboxing.\n ToIntFunction The ToIntFunction interface represents a function that produces an int-valued result.\n@FunctionalInterface public interface ToIntFunction\u0026lt;T\u0026gt; { int applyAsInt(T t); } This is the integer-producing primitive specialization for the Function interface. It provides a template for functions that take an argument and return a result. Its specialization, applyAsInt(Object), is a functional method specifically designed to produce an integer result. Its purpose is to allow for operations on data that return a primitive integer, thereby improving performance by avoiding unnecessary object wrappers. This specialization is an essential tool in functional programming paradigms within Java, allowing developers to write cleaner and more efficient code.\nLet\u0026rsquo;s see how we can use the ToIntFunction in action:\n@Test void toIntFunction() { ToIntFunction\u0026lt;String\u0026gt; charCount = input -\u0026gt; input == null ? 0 : input.trim().length(); Assertions.assertEquals(0, charCount.applyAsInt(null)); Assertions.assertEquals(0, charCount.applyAsInt(\u0026#34;\u0026#34;)); Assertions.assertEquals(3, charCount.applyAsInt(\u0026#34;JOY\u0026#34;)); } This test counts the characters in a string using a function. It verifies the character count for null, empty string, and \u0026ldquo;JOY\u0026rdquo;, expecting 0, 0, and 3, respectively. 
The function handles null inputs gracefully, returning 0, and trims white space before counting characters.\nSimilarly, we have ToLongFunction and ToDoubleFunction that produce long and double results respectively.\nToIntBiFunction The ToIntBiFunction interface represents a function that accepts two arguments and produces an int-valued result.\n@FunctionalInterface public interface ToIntBiFunction\u0026lt;T, U\u0026gt; { int applyAsInt(T t, U u); } This is the int-producing primitive specialization of BiFunction: a functional interface with a single abstract method called applyAsInt(), which takes two input parameters of types T and U and returns an int.\nLet\u0026rsquo;s discover how to use ToIntBiFunction:\n@Test void toIntBiFunction() { // discount on product  ToIntBiFunction\u0026lt;String, Integer\u0026gt; discount = (season, quantity) -\u0026gt; \u0026#34;WINTER\u0026#34;.equals(season) || quantity \u0026gt; 100 ? 40 : 10; Assertions.assertEquals(40, discount.applyAsInt(\u0026#34;WINTER\u0026#34;, 50)); Assertions.assertEquals(40, discount.applyAsInt(\u0026#34;SUMMER\u0026#34;, 150)); Assertions.assertEquals(10, discount.applyAsInt(\u0026#34;FALL\u0026#34;, 50)); } This test calculates discounts based on the season and quantity. If it\u0026rsquo;s winter or the quantity exceeds 100, we apply a 40% discount, otherwise, it\u0026rsquo;s 10%. The test validates discounts for winter with 50 items, summer with 150 items, and fall with 50 items, expecting 40, 40, and 10, respectively.\nSimilarly, we have ToLongBiFunction and ToDoubleBiFunction that produce long and double results respectively.\nOperators We\u0026rsquo;ll now explore operators, fundamental functional interfaces in Java. We commonly use operators to perform operations on data, such as mathematical calculations, comparisons, or logical operations. Furthermore, we use operators to transform or manipulate data in our programs.
These interfaces provide a way to encapsulate these operations, making our code more concise and readable. Whether it\u0026rsquo;s adding numbers, checking for equality, or combining conditions, operators play a crucial role in various programming scenarios, offering flexibility and efficiency in our code.\nLet\u0026rsquo;s learn about unary and binary operators.\nUnaryOperator The UnaryOperator interface represents an operation on a single operand that produces a result of the same type as its operand.\n@FunctionalInterface public interface UnaryOperator\u0026lt;T\u0026gt; extends Function\u0026lt;T, T\u0026gt; { // helper methods } This is a specialization of Function for the case where the operand and result are of the same type. This is a functional interface whose functional method is apply(Object).\nLet\u0026rsquo;s check out an example of UnaryOperator:\npublic class OperatorTest { @Test void unaryOperator() { UnaryOperator\u0026lt;String\u0026gt; trim = value -\u0026gt; value == null ? null : value.trim(); UnaryOperator\u0026lt;String\u0026gt; upperCase = value -\u0026gt; value == null ? null : value.toUpperCase(); Function\u0026lt;String, String\u0026gt; transform = trim.andThen(upperCase); Assertions.assertEquals(\u0026#34;joy\u0026#34;, trim.apply(\u0026#34; joy \u0026#34;)); Assertions.assertEquals(\u0026#34; JOY \u0026#34;, upperCase.apply(\u0026#34; joy \u0026#34;)); Assertions.assertEquals(\u0026#34;JOY\u0026#34;, transform.apply(\u0026#34; joy \u0026#34;)); } } In the OperatorTest, unary operators trim and convert strings. The transform function combines them, trimming white space and converting to uppercase. 
Tests verify individual and combined functionalities.\nIntUnaryOperator The IntUnaryOperator interface represents an operation on a single int-valued operand that produces an int-valued result.\n@FunctionalInterface public interface IntUnaryOperator { int applyAsInt(int operand); // helper methods } This represents the primitive type specialization of UnaryOperator for integers. It\u0026rsquo;s a functional interface featuring a method named applyAsInt(int).\nLet\u0026rsquo;s learn how to use the IntUnaryOperator:\n@Test void intUnaryOperator() { // formula y = x^2 + 2x + 1  IntUnaryOperator formula = x -\u0026gt; (x * x) + (2 * x) + 1; Assertions.assertEquals(36, formula.applyAsInt(5)); IntStream input = IntStream.of(2, 3, 4); int[] result = input.map(formula).toArray(); Assertions.assertArrayEquals(new int[] {9, 16, 25}, result); // the population doubling every 3 years, one fifth migrate and 10% mortality  IntUnaryOperator growth = number -\u0026gt; number * 2; IntUnaryOperator migration = number -\u0026gt; number * 4 / 5; IntUnaryOperator mortality = number -\u0026gt; number * 9 / 10; IntUnaryOperator population = growth.andThen(migration).andThen(mortality); Assertions.assertEquals(1440000, population.applyAsInt(1000000)); } This test defines an IntUnaryOperator to calculate a quadratic formula, then applies it to an array. It also models population growth, migration, and mortality rates, calculating the population size.\nSimilarly, we have LongUnaryOperator and DoubleUnaryOperator that produce long and double results respectively.\nBinaryOperator The BinaryOperator interface represents an operation upon two operands of the same type, producing a result of the same type as the operands.\n@FunctionalInterface public interface BinaryOperator\u0026lt;T\u0026gt; extends BiFunction\u0026lt;T,T,T\u0026gt; { // helper methods } BinaryOperator is a specialization of BiFunction for the case where the operands and the result are all of the same type.
It has a functional method called apply() that takes two objects as input and produces an object of the same type as the operands.\nLet\u0026rsquo;s try out BinaryOperator:\n@Test void binaryOperator() { LongUnaryOperator factorial = n -\u0026gt; { long result = 1L; for (int i = 1; i \u0026lt;= n; i++) { result *= i; } return result; }; // Calculate permutations  BinaryOperator\u0026lt;Long\u0026gt; npr = (n, r) -\u0026gt; factorial.applyAsLong(n) / factorial.applyAsLong(n - r); // Verify permutations  // 3P2: the number of permutations of 2 that can be achieved from a choice of 3.  Long result3P2 = npr.apply(3L, 2L); Assertions.assertEquals(6L, result3P2); // Add two prices  BinaryOperator\u0026lt;Double\u0026gt; addPrices = Double::sum; // Apply discount  UnaryOperator\u0026lt;Double\u0026gt; applyDiscount = total -\u0026gt; total * 0.9; // 10% discount  // Apply tax  UnaryOperator\u0026lt;Double\u0026gt; applyTax = total -\u0026gt; total * 1.07; // 7% tax  // Composing the operation  BiFunction\u0026lt;Double, Double, Double\u0026gt; finalCost = addPrices.andThen(applyDiscount).andThen(applyTax); // Prices of two items  double item1 = 50.0; double item2 = 100.0; // Calculate cost  double cost = finalCost.apply(item1, item2); // Verify the calculated cost  Assertions.assertEquals(144.45D, cost, 0.01); } In this test, we define a factorial function and use it to compute permutations (nPr). For pricing, we combine BinaryOperator\u0026lt;Double\u0026gt; for summing prices with UnaryOperator\u0026lt;Double\u0026gt; for applying discount and tax, and then validate the cost calculations.\nIntBinaryOperator The IntBinaryOperator interface represents an operation upon two int-valued operands and produces an int-valued result.\n@FunctionalInterface public interface IntBinaryOperator { int applyAsInt(int left, int right); } This is the primitive type specialization of BinaryOperator for numbers. 
It\u0026rsquo;s a special type of interface that has a functional method called applyAsInt(), which takes two numbers as input and returns an integer.\nHere\u0026rsquo;s an example of how to use the IntBinaryOperator:\n@Test void intBinaryOperator() { IntBinaryOperator add = Integer::sum; Assertions.assertEquals(10, add.applyAsInt(4, 6)); IntStream input = IntStream.of(2, 3, 4); OptionalInt result = input.reduce(add); Assertions.assertEquals(OptionalInt.of(9), result); } In this test, we use IntBinaryOperator to sum two integers. We use it to add two numbers and apply it to a stream to sum all elements. We validate both operations. The reduce() method with IntBinaryOperator is useful for operations like summing, finding the maximum or minimum, or other cumulative operations on stream elements.\nSimilarly, we have LongBinaryOperator and DoubleBinaryOperator that produce long and double results respectively.\nConsumers A Consumer is a functional interface that represents an operation that accepts a single input argument and returns no result. It is part of the java.util.function package. Unlike most other functional interfaces, we use it to perform side-effect operations on an input, such as printing, modifying state, or storing values.\nConsumer The Consumer interface represents an operation that accepts a single input argument and returns no result:\n@FunctionalInterface public interface Consumer\u0026lt;T\u0026gt; { void accept(T t); // default methods } Consumer is a unique functional interface that stands out from the rest because it operates through side effects. It performs actions rather than returning a value. The functional method of Consumer is accept(Object), which allows it to accept an object and perform some operation on it.\nConsumers are particularly useful in functional programming and stream processing, where we perform operations on elements of collections or streams in a concise and readable manner.
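For instance, Iterable.forEach() accepts a Consumer, so the traversal is handled for us and only the action needs to be spelled out (a minimal sketch; the logName and log names are our own):

```java
import java.util.List;
import java.util.function.Consumer;

public class ConsumerForEach {
    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        // the Consumer describes the action; forEach supplies the iteration
        Consumer<String> logName = name -> log.append(name).append(';');
        List.of("Ann", "Bob", "Cid").forEach(logName);
        System.out.println(log); // Ann;Bob;Cid;
    }
}
```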
They enable us to focus on the action to be performed rather than the iteration logic.\nExample showcasing use of Consumer:\n@Test void consumer() { Consumer\u0026lt;List\u0026lt;String\u0026gt;\u0026gt; trim = strings -\u0026gt; { if (strings != null) { strings.replaceAll(s -\u0026gt; s == null ? null : s.trim()); } }; Consumer\u0026lt;List\u0026lt;String\u0026gt;\u0026gt; upperCase = strings -\u0026gt; { if (strings != null) { strings.replaceAll(s -\u0026gt; s == null ? null : s.toUpperCase()); } }; List\u0026lt;String\u0026gt; input = null; input = Arrays.asList(null, \u0026#34;\u0026#34;, \u0026#34; Joy\u0026#34;, \u0026#34; Joy \u0026#34;, \u0026#34;Joy \u0026#34;, \u0026#34;Joy\u0026#34;); trim.accept(input); Assertions.assertEquals(Arrays.asList(null, \u0026#34;\u0026#34;, \u0026#34;Joy\u0026#34;, \u0026#34;Joy\u0026#34;, \u0026#34;Joy\u0026#34;, \u0026#34;Joy\u0026#34;), input); input = Arrays.asList(null, \u0026#34;\u0026#34;, \u0026#34; Joy\u0026#34;, \u0026#34; Joy \u0026#34;, \u0026#34;Joy \u0026#34;, \u0026#34;Joy\u0026#34;); trim.andThen(upperCase).accept(input); Assertions.assertEquals(Arrays.asList(null, \u0026#34;\u0026#34;, \u0026#34;JOY\u0026#34;, \u0026#34;JOY\u0026#34;, \u0026#34;JOY\u0026#34;, \u0026#34;JOY\u0026#34;), input); } The test demonstrates the use of the Consumer interface to perform operations on a list of strings. The consumer trim trims white space from each string and the consumer upperCase converts them to uppercase. It shows the composition of consumers using andThen to chain operations.\nBiConsumer The BiConsumer represents an operation that accepts two input arguments and returns no result.\n@FunctionalInterface public interface BiConsumer\u0026lt;T, U\u0026gt; { void accept(T t, U u); // default methods } This is the special version of Consumer that takes two arguments. Unlike other functional interfaces, BiConsumer results in side effects. 
It is a functional interface with a functional method called accept(Object, Object).\nWe\u0026rsquo;re going to figure out how to utilize BiConsumer in the following example:\n@Test void biConsumer() { BiConsumer\u0026lt;List\u0026lt;Double\u0026gt;, Double\u0026gt; discountRule = (prices, discount) -\u0026gt; { if (prices != null \u0026amp;\u0026amp; discount != null) { prices.replaceAll(price -\u0026gt; price * discount); } }; BiConsumer\u0026lt;List\u0026lt;Double\u0026gt;, Double\u0026gt; bulkDiscountRule = (prices, discount) -\u0026gt; { if (prices != null \u0026amp;\u0026amp; discount != null \u0026amp;\u0026amp; prices.size() \u0026gt; 2) { // 20% discount when the cart has more than 2 items  prices.replaceAll(price -\u0026gt; price * 0.80); } }; double discount = 0.90; // 10% discount  List\u0026lt;Double\u0026gt; prices = null; prices = Arrays.asList(20.0, 30.0, 100.0); discountRule.accept(prices, discount); Assertions.assertEquals(Arrays.asList(18.0, 27.0, 90.0), prices); prices = Arrays.asList(20.0, 30.0, 100.0); discountRule.andThen(bulkDiscountRule).accept(prices, discount); Assertions.assertEquals(Arrays.asList(14.4, 21.6, 72.0), prices); } This test demonstrates the use of the BiConsumer interface to apply discounts to a list of prices. The BiConsumer applies a standard discount and a bulk discount if there are more than two items in the list.\nNext, we\u0026rsquo;ll explore various specializations of consumers and provide examples to illustrate their use cases.\nIntConsumer The IntConsumer interface represents an operation that accepts a single int-valued argument and returns no result.\n@FunctionalInterface public interface IntConsumer { void accept(int value); // default methods } IntConsumer is a specialized type of Consumer for integers. Unlike most other functional interfaces, IntConsumer produces side effects.
It is a functional interface with a method called accept(int).\nHere is an illustration of how to use the IntConsumer interface:\n@ParameterizedTest @CsvSource({ \u0026#34;15,Turning off AC.\u0026#34;, \u0026#34;22,---\u0026#34;, \u0026#34;25,Turning on AC.\u0026#34;, \u0026#34;52,Alert! Temperature not safe for humans.\u0026#34; }) void intConsumer(int temperature, String expected) { AtomicReference\u0026lt;String\u0026gt; message = new AtomicReference\u0026lt;\u0026gt;(); IntConsumer temperatureSensor = t -\u0026gt; { message.set(\u0026#34;---\u0026#34;); if (t \u0026lt;= 20) { message.set(\u0026#34;Turning off AC.\u0026#34;); } else if (t \u0026gt;= 24 \u0026amp;\u0026amp; t \u0026lt;= 50) { message.set(\u0026#34;Turning on AC.\u0026#34;); } else if (t \u0026gt; 50) { message.set(\u0026#34;Alert! Temperature not safe for humans.\u0026#34;); } }; temperatureSensor.accept(temperature); Assertions.assertEquals(expected, message.toString()); } This test verifies an IntConsumer handling temperature sensor responses. Depending on the temperature, it sets a message indicating if the AC should be turned off, turned on, or if we need an alert. The @ParameterizedTest runs multiple scenarios, checking the expected message for each temperature input.\nSimilarly, we have LongConsumer and DoubleConsumer that consume long and double inputs respectively.\nObjIntConsumer The ObjIntConsumer interface represents an operation that accepts an object-valued and an int-valued argument, and returns no result.\n@FunctionalInterface public interface ObjIntConsumer\u0026lt;T\u0026gt; { void accept(T t, int value); } The ObjIntConsumer interface is a special type of BiConsumer. Unlike most other functional interfaces, ObjIntConsumer is expected to operate via side effects.
Its functional method is accept(Object, int).\nLet\u0026rsquo;s now check how to use ObjIntConsumer:\n@Test void objIntConsumer() { AtomicReference\u0026lt;String\u0026gt; result = new AtomicReference\u0026lt;\u0026gt;(); ObjIntConsumer\u0026lt;String\u0026gt; trim = (input, len) -\u0026gt; { if (input != null \u0026amp;\u0026amp; input.length() \u0026gt; len) { result.set(input.substring(0, len)); } }; trim.accept(\u0026#34;123456789\u0026#34;, 3); Assertions.assertEquals(\u0026#34;123\u0026#34;, result.get()); } The test applies an ObjIntConsumer to trim a string if its length exceeds a given limit. It asserts the trimmed string.\nSimilarly, we have ObjLongConsumer and ObjDoubleConsumer that consume long and double inputs respectively.\nSuppliers The Supplier functional interface represents a supplier of results. Unlike other functional interfaces like Function or Consumer, the Supplier doesn\u0026rsquo;t accept any arguments. Instead, it provides a result of a specified type when called. This makes it particularly useful in scenarios where we need to generate or supply values without any input.\nWe commonly use suppliers for lazy evaluation to enhance performance by postponing expensive computations until necessary. We can use suppliers in factory methods to create new object instances, in dependency injection frameworks, or to encapsulate object creation logic. Suppliers also retrieve cached values, generate missing values, and store them in the cache. Additionally, suppliers provide default configurations, fallback values, or mock data for testing isolated components.\nSupplier Supplier represents a supplier of results.\n@FunctionalInterface public interface Supplier\u0026lt;T\u0026gt; { T get(); } Each time we invoke a supplier, it may return a distinct result or a predefined result.
This is a functional interface whose functional method is get().\nLet\u0026rsquo;s consider a simple example where we generate a random number:\npublic class SupplierTest { @Test void supplier() { // Supply random numbers  Supplier\u0026lt;Integer\u0026gt; randomNumberSupplier = () -\u0026gt; new Random().nextInt(100); int result = randomNumberSupplier.get(); Assertions.assertTrue(result \u0026gt;= 0 \u0026amp;\u0026amp; result \u0026lt; 100); } } In this test, randomNumberSupplier generates a random number between 0 and 99. The test verifies that the generated number is within the expected range.\nLazy Initialization Traditionally, we populate the needed data first and then pass it to the processing logic. With suppliers, that is no longer necessary. We can defer data creation to the point where it is actually needed, because a supplier produces its value only when we call its get() method. Conditional logic may mean the input is never used at all, and preparing it can be costly, e.g., opening a file or a network connection. In such cases, deferring the work to a supplier avoids the eager preparation of costly inputs entirely.\n IntSupplier IntSupplier represents a supplier of int-valued results.\n@FunctionalInterface public interface IntSupplier { int getAsInt(); } This specialized version of Supplier produces int values. It offers the flexibility to return a distinct result for each invocation. As a functional interface, it provides the getAsInt() functional method as its core functionality.\nHere is an example showcasing the use of IntSupplier:\n@Test void intSupplier() { IntSupplier nextWinner = () -\u0026gt; new Random().nextInt(100, 200); int result = nextWinner.getAsInt(); Assertions.assertTrue(result \u0026gt;= 100 \u0026amp;\u0026amp; result \u0026lt; 200); } In this test, nextWinner generates a random number between 100 and 199.
The test verifies that the generated number is within this range by asserting the result is at least 100 and less than 200.\nSimilarly, we have LongSupplier, DoubleSupplier and BooleanSupplier that produce long, double and boolean results respectively.\nBooleanSupplier Use Cases While it\u0026rsquo;s true that a boolean value can only be true or false, a BooleanSupplier can be useful in scenarios where the boolean value needs to be determined dynamically based on some conditions or external factors. Here are a few practical use cases:\n Feature Flags: In applications with feature toggles, use a BooleanSupplier to check whether a feature is on or off. Conditional Execution: Use it to decide whether to execute certain logic based on dynamic conditions. Health Checks: In microservices, determine the health status of a service or component using it. Security: It can check if a user has the necessary permissions to access a resource or perform an action.   Conclusion In this article, we learned about functional interfaces and how functional programming and lambda expressions bring a new level of elegance and efficiency to our code. We began by understanding the core concept of functional programming, where functions are first-class citizens, allowing us to pass and return them just like any other variable.\nThen we took a deep dive into Function interfaces, which enable us to create concise and powerful transformations of data. Method references provided a shorthand for lambda expressions, making our code even cleaner and more readable.\nPredicates, as powerful boolean-valued functions, helped us filter and match conditions seamlessly. We then moved on to operators, which perform operations on data, and consumers, which act on data without returning any result.
This is particularly useful for processing lists and other collections in a streamlined manner.\nLastly, we explored suppliers, which generate data on demand, perfect for scenarios requiring dynamic data creation, such as random number generation or data sampling.\nEach of these functional interfaces has shown us how to write more modular, reusable, and expressive code. By leveraging these idioms, we\u0026rsquo;ve learned to tackle complex tasks with simpler, more readable solutions. Embracing these concepts helps us become more effective Java developers, capable of crafting elegant and efficient code.\nHappy coding! 🚀\n","date":"June 11, 2024","image":"https://reflectoring.io/images/stock/0088-jigsaw-1200x628-branded_hu5d0fbb80fd5a577c9426d368c189788e_197833_650x0_resize_q90_box.jpg","permalink":"/one-stop-guide-to-java-functional-interfaces/","title":"One Stop Guide to Java Functional Interfaces"},{"categories":["Java"],"contents":"Configuring Apache HttpClient is essential for tailoring its behavior to meet specific requirements and optimize performance. From setting connection timeouts to defining proxy settings, configuration options allow developers to fine-tune the client\u0026rsquo;s behavior according to the needs of their application. In this section, we will explore various configuration options available in Apache HttpClient, covering aspects such as connection management, request customization, authentication, and error handling. 
Understanding how to configure the client effectively empowers developers to build robust and efficient HTTP communication within their applications.\nThe \u0026ldquo;Create an HTTP Client with Apache HttpClient\u0026rdquo; Series This article is the second part of a series:\n Introduction to Apache HttpClient Apache HttpClient Configuration Classic APIs Offered by Apache HttpClient Async APIs Offered by Apache HttpClient Reactive APIs Offered by Apache HttpClient   Example Code This article is accompanied by a working code example on GitHub. Let\u0026rsquo;s now learn commonly used options to configure Apache HttpClient for web communication.\nHttpClient Client Connection Management Connection management in Apache HttpClient refers to the management of underlying connections to remote servers. Efficient connection management is crucial for optimizing performance and resource utilization. Apache HttpClient provides various options for configuring connection management.\nPoolingHttpClientConnectionManager manages a pool of client connections and is able to service connection requests from multiple execution threads. It pools connections on a per-route basis. It maintains a maximum limit of connections per route and in total. By default, it creates up to 2 concurrent connections per given route and up to 20 connections in total.
For real-world applications, we can increase these limits if needed.\nThis example shows how the connection pool parameters can be adjusted:\npublic CloseableHttpClient getPooledCloseableHttpClient( String host, int port, int maxTotalConnections, int defaultMaxPerRoute, long requestTimeoutMillis, long responseTimeoutMillis, long connectionKeepAliveMillis ) { PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager(); connectionManager.setMaxTotal(maxTotalConnections); connectionManager.setDefaultMaxPerRoute(defaultMaxPerRoute); Timeout requestTimeout = Timeout.ofMilliseconds(requestTimeoutMillis); Timeout responseTimeout = Timeout.ofMilliseconds(responseTimeoutMillis); TimeValue connectionKeepAlive = TimeValue.ofMilliseconds(connectionKeepAliveMillis); RequestConfig requestConfig = RequestConfig.custom() .setConnectionRequestTimeout(requestTimeout) .setResponseTimeout(responseTimeout) .setConnectionKeepAlive(connectionKeepAlive) .build(); HttpHost httpHost = new HttpHost(host, port); connectionManager.setMaxPerRoute(new HttpRoute(httpHost), 50); return HttpClients.custom() .setDefaultRequestConfig(requestConfig) .setConnectionManager(connectionManager) .build(); } This code snippet creates a customized CloseableHttpClient with specific connection pool properties. It first initializes a PoolingHttpClientConnectionManager and configures the maximum total connections and default connections per route. Then, it sets timeout values for connection requests and responses, as well as the duration for keeping connections alive.\nThe RequestConfig is configured with the specified timeout values and connection keep-alive duration. 
It then creates an HttpHost based on the provided host and port.\nFinally, it sets the maximum connections per route for the specified host and port, and builds the CloseableHttpClient with the custom request configuration and connection manager.\nThis customized CloseableHttpClient is useful for controlling connection pooling behavior, managing timeouts, and optimizing resource utilization in HTTP communication.\nConnection Pooling Apache HttpClient utilizes connection pooling to reuse existing connections instead of establishing a new connection for each request. This minimizes the overhead of creating and closing connections, resulting in improved performance.\nMax Connections Developers can specify the maximum number of connections allowed per route or per client. This prevents resource exhaustion and ensures that the client operates within specified limits.\nConnection Timeout It defines the maximum time allowed for establishing a connection with the server. Setting an appropriate connection timeout prevents the client from waiting indefinitely for a connection to be established.\nSocket Timeout Socket timeout specifies the maximum time allowed for data transfer between the client and the server. It prevents the client from blocking indefinitely if the server is unresponsive or the network is slow.\nConnection Keep-Alive Keep-Alive is a mechanism that allows multiple requests to be sent over the same TCP connection, thus reducing the overhead of establishing new connections.
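A typical keep-alive decision honors the server's Keep-Alive response header hint when present and falls back to a client-side default otherwise. Here is a library-free sketch of that decision logic; the class and method names are our own for illustration, not part of the Apache HttpClient API:

```java
import java.util.Locale;

public class KeepAlivePolicy {

    /**
     * Returns the keep-alive duration in milliseconds: the server's
     * "Keep-Alive: timeout=<seconds>" hint when present, else a default.
     */
    public static long keepAliveMillis(String keepAliveHeader, long defaultMillis) {
        if (keepAliveHeader != null) {
            for (String part : keepAliveHeader.split(",")) {
                String[] pair = part.trim().split("=", 2);
                if (pair.length == 2
                        && pair[0].trim().toLowerCase(Locale.ROOT).equals("timeout")) {
                    try {
                        return Long.parseLong(pair[1].trim()) * 1000L;
                    } catch (NumberFormatException ignored) {
                        // Malformed hint: fall through to the default.
                    }
                }
            }
        }
        return defaultMillis;
    }

    public static void main(String[] args) {
        // Server suggested 5 seconds; we honor it.
        System.out.println(keepAliveMillis("timeout=5, max=100", 30_000L)); // 5000
        // No header: fall back to the client-side default.
        System.out.println(keepAliveMillis(null, 30_000L)); // 30000
    }
}
```

In Apache HttpClient itself, the equivalent fallback default is what we set via RequestConfig's setConnectionKeepAlive(), as shown in the pool configuration example earlier.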
Apache HttpClient supports Keep-Alive by default, but developers can configure its behavior according to their requirements.\nBy understanding and configuring connection management settings, developers can optimize resource utilization, improve performance, and ensure the robustness of their HTTP communication.\nCaching Configuration The caching HttpClient inherits all configuration options and parameters of the default non-caching implementation (this includes setting options like timeouts and connection pool sizes). For caching-specific configurations, you can provide a CacheConfig instance to customize behavior.\nHttpClient Caching Mechanism The HttpClient Cache module integrates caching functionality into HttpClient, mimicking a browser cache for HTTP/1.1 compliance. It seamlessly replaces the default client, serving cached requests when possible. It follows the Chain of Responsibility pattern, ensuring transparent client-server interaction. Not only that, but it handles cache validation using conditional GETs and Cache-Control extensions. The module adheres to HTTP protocol standards, providing transparent caching proxy capabilities. It corrects the requests for protocol compliance, and invalidates the cache entries accordingly. It serves the cached responses directly if valid, revalidates if necessary, or fetches from the origin server. Furthermore, it examines responses for cacheability, storing them if applicable, or returning them directly if they are too large to cache. The caching mechanism operates within the request execution pipeline, augmenting HttpClient\u0026rsquo;s functionality without altering its core implementation.\nThe caching HttpClient\u0026rsquo;s default implementation stores cache entries and responses in JVM memory, prioritizing performance. For applications needing larger caches or persistence, options like EhCache or memcached are available, allowing disk storage or external process storage.
Alternatively, custom storage backends can be implemented via the HttpCacheStorage interface, ensuring HTTP/1.1 compliance while tailoring storage to specific needs. Multi-tier caching hierarchies are achievable, combining different storage methods like in-memory and disk or remote storage, akin to virtual memory systems. This flexibility enables tailored caching solutions to suit diverse application requirements.\n First, we need to add the maven dependency for Apache HttpClient cache:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.httpcomponents.client5\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;httpclient5-cache\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.3.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Here is an example to build a client supporting cache:\nCacheConfig cacheConfig = CacheConfig.custom() .setMaxCacheEntries(maxCacheEntries) .setMaxObjectSize(maxObjectSize) .build(); CachingHttpClientBuilder builder = CachingHttpClients.custom(); CloseableHttpClient client = builder.setCacheConfig(cacheConfig).build(); This code snippet first creates a CacheConfig object using the CacheConfig.custom() builder method. 
This configures parameters such as the maximum number of cache entries (maxCacheEntries) and the maximum size of cached objects (maxObjectSize).\nNext, it creates a CachingHttpClientBuilder and sets the CacheConfig object on the builder using builder.setCacheConfig(cacheConfig).\nFinally, it builds the CloseableHttpClient by calling builder.build(), which creates an HTTP client with caching enabled according to the specified configuration.\nThis setup enables caching of HTTP responses, which can improve performance by serving cached responses for repeated requests, reducing the need for repeated network requests and server processing.\nConfiguring Request Interceptors A custom request interceptor in Apache HttpClient allows developers to intercept outgoing HTTP requests before they are sent to the server. Typically, a custom request interceptor implements the HttpRequestInterceptor interface and overrides the process() method. This interceptor can modify the request headers, request parameters, or even the entire request body based on specific requirements. For example, it can add authentication headers, logging headers, or handle request retries.\nLet\u0026rsquo;s implement a request interceptor:\npublic class CustomHttpRequestInterceptor implements HttpRequestInterceptor { @Override public void process(HttpRequest request, EntityDetails entity, HttpContext context) throws HttpException, IOException { request.setHeader(\u0026#34;x-request-id\u0026#34;, UUID.randomUUID().toString()); request.setHeader(\u0026#34;x-api-key\u0026#34;, \u0026#34;secret-key\u0026#34;); } } The CustomHttpRequestInterceptor implements the HttpRequestInterceptor interface provided by Apache HttpClient. This interceptor intercepts outgoing HTTP requests before they are sent to the server. In the process() method the interceptor modifies the request object by adding custom headers. In this specific implementation, it sets the x-request-id header to a randomly generated UUID string.
We commonly use such a header to uniquely identify each request, which can be helpful for tracking and debugging purposes. Then it sets the x-api-key header to a predefined secret key. This header would be used for authentication or authorization purposes, allowing the server to verify the identity of the client making the request.\nOverall, this interceptor enhances outgoing HTTP requests by adding custom headers, which can serve various purposes such as request identification, security, or API key authentication.\nNow let\u0026rsquo;s build an HTTP client using this request interceptor:\nHttpRequestInterceptor interceptor = new CustomHttpRequestInterceptor(); HttpClientBuilder builder = HttpClients.custom(); CloseableHttpClient client = builder.addRequestInterceptorFirst(interceptor).build(); In this code snippet, we first create an instance of CustomHttpRequestInterceptor to customize outgoing HTTP requests. Then we build the client using HttpClientBuilder.\nThen we call the addRequestInterceptorFirst() method, passing the interceptor object as an argument. This method adds the interceptor as the first request interceptor in the chain of interceptors. It also has a method addRequestInterceptorLast() to add the interceptor at the end.\nAdding the interceptor first ensures that the custom headers set by the CustomHttpRequestInterceptor will be included in all outgoing HTTP requests made by the HttpClient instance. Finally, it calls the build() method to create the CloseableHttpClient instance with the configured request interceptor.\nConfiguring Response Interceptors A custom response interceptor intercepts and processes HTTP responses received from the server before they are returned to the client. It allows developers to customize and modify the response data based on specific requirements.\nTypically, a custom response interceptor implements the HttpResponseInterceptor interface and overrides the process() method.
Inside this method, developers can access and manipulate the HTTP response object, such as modifying headers, inspecting status codes, or extracting content.\nCustom response interceptors are useful for tasks like logging responses, handling error conditions, extracting specific information from responses, or performing additional processing before passing the response back to the client code. They provide flexibility and extensibility to tailor the behavior of HTTP responses according to application needs.\nLet\u0026rsquo;s implement a response interceptor:\npublic class CustomHttpResponseInterceptor implements HttpResponseInterceptor { @Override public void process(HttpResponse response, EntityDetails entity, HttpContext context) throws HttpException, IOException { log.debug(\u0026#34;Got {} response from server.\u0026#34;, response.getCode()); } } The CustomHttpResponseInterceptor implements the HttpResponseInterceptor interface provided by Apache HttpClient. This interceptor intercepts incoming HTTP responses before they are returned to the client. In the process() method the interceptor logs the status code of the response object.\nNow let\u0026rsquo;s build an HTTP client using this response interceptor:\nHttpResponseInterceptor interceptor = new CustomHttpResponseInterceptor(); HttpClientBuilder builder = HttpClients.custom(); CloseableHttpClient client = builder.addResponseInterceptorFirst(interceptor).build(); In this code snippet, we first create an instance of CustomHttpResponseInterceptor to handle incoming HTTP responses. Then we build the client using HttpClientBuilder.\nThen we call the addResponseInterceptorFirst() method, passing the interceptor object as an argument. This method adds the interceptor as the first response interceptor in the chain of interceptors.
It also has a method addResponseInterceptorLast() to add the interceptor at the end.\nBy adding the interceptor first, it ensures that the response code is logged before we manipulate the response further. Finally, it calls the build() method to create the CloseableHttpClient instance with the configured response interceptor.\nConfiguring Execution Interceptors An execution interceptor allows developers to intercept the execution of HTTP requests and responses. It can intercept various stages of the request execution, such as before sending the request, after receiving the response, or when an exception occurs during execution. Execution interceptors can be used for tasks like logging, caching, manipulating requests and responses, or error handling.\nExecChain and Scope In Apache HttpClient 5, ExecChain and ExecChain.Scope play a key role in request execution and interception.\nExecChain represents the execution chain for processing HTTP requests and responses. It defines the core method proceed(), which is responsible for executing the request and returning the response. This interface allows for interception and modification of the request and response at various stages of execution.\nExecChain.Scope, on the other hand, represents the scope within which the request executes. It provides contextual information about the execution environment, such as the target host and the request configuration.
This scope helps in determining the context of request execution, allowing interceptors and handlers to make informed decisions based on the execution context.\n Let\u0026rsquo;s implement an execution chain interceptor:\npublic class CustomHttpExecutionInterceptor implements ExecChainHandler { @Override public ClassicHttpResponse execute( ClassicHttpRequest classicHttpRequest, ExecChain.Scope scope, ExecChain execChain ) throws IOException, HttpException { try { classicHttpRequest.setHeader(\u0026#34;x-request-id\u0026#34;, UUID.randomUUID().toString()); classicHttpRequest.setHeader(\u0026#34;x-api-key\u0026#34;, \u0026#34;secret-key\u0026#34;); ClassicHttpResponse response = execChain.proceed(classicHttpRequest, scope); log.debug(\u0026#34;Got {} response from server.\u0026#34;, response.getCode()); return response; } catch (IOException | HttpException ex) { String msg = \u0026#34;Failed to execute request.\u0026#34;; log.error(msg, ex); throw new RequestProcessingException(msg, ex); } } } The provided code defines a custom HTTP execution interceptor named CustomHttpExecutionInterceptor, implementing the ExecChainHandler interface. This interceptor intercepts the execution of HTTP requests.\nWithin the execute() method, the interceptor first sets custom headers (x-request-id and x-api-key) on the intercepted HTTP request.\nNext, the interceptor proceeds with the execution of the request by calling execChain.proceed(classicHttpRequest, scope), which delegates the request execution to the next handler in the execution chain.\nUpon receiving the response from the server, the interceptor logs the status code of the response. This logging statement provides visibility into the response received from the server.\nIf an IOException or HttpException occurs during the execution of the request or response handling, the interceptor catches these exceptions.
It logs an error message indicating the failure and wraps the exception in a RequestProcessingException, which is then thrown to indicate the failure of the request execution.\nNow let\u0026rsquo;s build an HTTP client using this execution interceptor:\nExecChainHandler interceptor = new CustomHttpExecutionInterceptor(); HttpClientBuilder builder = HttpClients.custom(); CloseableHttpClient client = builder.addExecInterceptorFirst(\u0026#34;customExecInterceptor\u0026#34;, interceptor).build(); In this code snippet, we first create an instance of CustomHttpExecutionInterceptor to intercept request execution. Then we build the client using HttpClientBuilder.\nThen we call the addExecInterceptorFirst() method, passing a name for the interceptor and the interceptor object as arguments. This method adds the interceptor as the first execution interceptor in the chain of interceptors. The builder also has a method addExecInterceptorLast() to add the interceptor at the end, and methods addExecInterceptorBefore() and addExecInterceptorAfter() to add the interceptor before and after an existing interceptor respectively.\nBy adding the interceptor first, it ensures that the interceptor gets a chance to perform its logic ahead of other interceptors.
Finally, it calls the build() method to create the CloseableHttpClient instance with the configured execution interceptor.\nConclusion In this part of the article series, we explored the configuration aspects of Apache HttpClient, focusing on connection management, caching, and interceptor setup.\nFirst, we learned about connection management, discussing the customization of connection pools, timeouts, and keep-alive settings to optimize HTTP request handling and resource utilization.\nNext, we examined how to configure caching in Apache HttpClient, enabling the caching of HTTP responses to improve performance and reduce network overhead.\nFinally, we explored interceptor configuration, including the implementation of custom request and response interceptors to modify HTTP requests and responses at various stages of execution, providing flexibility for logging, header manipulation, and centralized exception handling.\n","date":"May 29, 2024","image":"https://reflectoring.io/images/stock/0125-tools-1200x628-branded_hu82ff8da5122675223ceb88a08f293300_139357_650x0_resize_q90_box.jpg","permalink":"/apache-http-client-config/","title":"Apache HttpClient Configuration"},{"categories":["Java"],"contents":"In this article, we are going to learn about the async APIs offered by Apache HttpClient. We are going to explore the different ways Apache HttpClient enables developers to send and receive data over the internet in asynchronous mode. From simple GET requests to complex multipart POST requests, we\u0026rsquo;ll cover it all with real-world examples.
So get ready to learn to implement HTTP interactions with Apache HttpClient!\nThe \u0026ldquo;Create an HTTP Client with Apache HttpClient\u0026rdquo; Series This article is the fourth part of a series:\n Introduction to Apache HttpClient Apache HttpClient Configuration Classic APIs Offered by Apache HttpClient Async APIs Offered by Apache HttpClient Reactive APIs Offered by Apache HttpClient   Example Code This article is accompanied by a working code example on GitHub. Let\u0026rsquo;s now learn how to use Apache HttpClient for web communication. We have grouped the examples under the following categories of APIs: classic, async, and reactive. In this article, we will learn about the async APIs offered by Apache HttpClient.\nReqres Fake Data CRUD API We are going to use Reqres API Server to test different HTTP methods. It is a free online API that can be used for testing and prototyping. It provides a variety of endpoints that can be used to test different HTTP methods. The Reqres API is a good choice for testing CRUD operations because it supports all the HTTP methods that CRUD allows.\n HttpClient (Async APIs) In this section of examples, we are going to learn how to use the HttpAsyncClient for sending requests and consuming responses in asynchronous mode. The client code will wait until it receives a response from the server without blocking the current thread.\nHTTP and CRUD Operations CRUD operations refer to Create, Read, Update, and Delete actions performed on data. In the context of HTTP endpoints for a /users resource:\n Create: Use HTTP POST to add a new user: POST /users Read: Use HTTP GET to retrieve user data: GET /users/{userId} for a specific user or GET /users?page=1 for a list of users with pagination. Update: Use HTTP PUT or PATCH to modify user data: PUT /users/{userId} Delete: Use HTTP DELETE to remove a user: DELETE /users/{userId}   When Should We Use HttpAsyncClient? 
Apache\u0026rsquo;s HttpAsyncClient is an HTTP client that enables non-blocking and parallel processing of long-lasting HTTP calls. This library incorporates a non-blocking IO model, allowing multiple requests to be active simultaneously without the need for additional background threads. By leveraging this approach, HttpAsyncClient offers significant performance benefits over blocking HTTP clients, particularly when dealing with high-volume, long-running HTTP requests. Additionally, this library provides a robust and flexible interface for building HTTP clients, making it an ideal choice for developers looking to optimize their asynchronous HTTP processing workflows.\nAsynchronous HTTP clients have a thread pool to handle responses, with explicit timeouts for idle, TTL, and request. For low workloads, synchronous HTTP clients perform better with dedicated threads per connection. For higher throughput, non-blocking IO (NIO) clients are more effective.\nBasic Asynchronous HTTP Request / Response Exchange Let\u0026rsquo;s now understand how to send a simple HTTP request asynchronously.\nIO Reactor HttpAsyncClient uses IO Reactor to exchange messages asynchronously. HttpCore NIO is a system that uses the Reactor pattern, created by Doug Lea. Its purpose is to react to I/O events and to send event notifications to individual I/O sessions. The idea behind the I/O Reactor pattern is to avoid having one thread per connection, which is the case with the classic blocking I/O model.\nThe Apache HttpClient\u0026rsquo;s IOReactor interface represents an abstract object that implements the Reactor pattern. I/O reactors use a few dispatch threads (usually one) to send I/O event notifications to a much greater number of I/O sessions or connections (often several thousand). It is recommended to have one dispatch thread per CPU core.\n Let\u0026rsquo;s now implement the logic to call the endpoints asynchronously. 
Here is the helper class that has methods to start and stop the async client and methods to execute HTTP requests:\npublic class UserAsyncHttpRequestHelper extends BaseHttpRequestHelper { private CloseableHttpAsyncClient httpClient; /** Starts http async client. */ public void startHttpAsyncClient() { if (httpClient == null) { try { PoolingAsyncClientConnectionManager cm = PoolingAsyncClientConnectionManagerBuilder.create().build(); IOReactorConfig ioReactorConfig = IOReactorConfig.custom().setSoTimeout(Timeout.ofSeconds(5)).build(); httpClient = HttpAsyncClients.custom() .setIOReactorConfig(ioReactorConfig) .setConnectionManager(cm) .build(); httpClient.start(); } catch (Exception e) { // handle exception  } } } /** Stop http async client. */ public void stopHttpAsyncClient() { if (httpClient != null) { log.info(\u0026#34;Shutting down.\u0026#34;); httpClient.close(CloseMode.GRACEFUL); httpClient = null; } } // Helper methods to execute HTTP requests. } We use CloseableHttpAsyncClient to execute HTTP requests. In this implementation, it is set up once. In the startHttpAsyncClient() method, we first build the connection manager. Then we configure the IO reactor, and build and start the async client.\nThe method stopHttpAsyncClient() stops the client gracefully.\nNow let\u0026rsquo;s understand why we need to start and stop the HTTP async client. We did not need to do so for the classic HTTP client.\nThe need to start and stop Apache HttpAsyncClient, but not the classic HttpClient, stems primarily from their underlying architectures and usage scenarios.\nApache HttpAsyncClient is designed for asynchronous, non-blocking HTTP communication. It operates based on an event-driven model, sends requests asynchronously, and processes responses in a non-blocking manner.
This asynchronous nature requires explicit management of the client\u0026rsquo;s life cycle, including starting and stopping it, to control the execution of asynchronous tasks and resources.\nOn the other hand, the classic HttpClient operates synchronously by default. It sends HTTP requests and blocks until it receives a response, making it straightforward to use without the need for explicit start and stop operations. Each request in the classic HttpClient is executed synchronously, and there\u0026rsquo;s no ongoing asynchronous activity that needs to be managed.\nWe are going to use the execute() method of HttpAsyncClient:\npublic \u0026lt;T\u0026gt; Future\u0026lt;T\u0026gt; execute(AsyncRequestProducer requestProducer, AsyncResponseConsumer\u0026lt;T\u0026gt; responseConsumer, FutureCallback\u0026lt;T\u0026gt; callback) Let\u0026rsquo;s now learn how to do it. Here is the implementation of a custom callback. We can also implement it inline using an anonymous class:\npublic class SimpleHttpResponseCallback implements FutureCallback\u0026lt;SimpleHttpResponse\u0026gt; { /** The Http get request. */ SimpleHttpRequest httpRequest; /** The Error message. */ String errorMessage; public SimpleHttpResponseCallback(SimpleHttpRequest httpRequest, String errorMessage) { this.httpRequest = httpRequest; this.errorMessage = errorMessage; } @Override public void completed(SimpleHttpResponse response) { log.debug(httpRequest + \u0026#34;-\u0026gt;\u0026#34; + new StatusLine(response)); log.debug(\u0026#34;Got response: {}\u0026#34;, response.getBody()); } @Override public void failed(Exception ex) { log.error(httpRequest + \u0026#34;-\u0026gt;\u0026#34; + ex); throw new RequestProcessingException(errorMessage, ex); } @Override public void cancelled() { log.debug(httpRequest + \u0026#34; cancelled\u0026#34;); } } We have overridden the life cycle methods of the FutureCallback interface.
Furthermore, we have also defined the response type SimpleHttpResponse that it will receive when the HTTP request call completes. When the call fails, we opt to raise an exception in the implementation of the failed() method.\nNow let\u0026rsquo;s see how to use this custom callback:\npublic Map\u0026lt;Long, String\u0026gt; getUserWithCallback(List\u0026lt;Long\u0026gt; userIdList, int delayInSec) throws RequestProcessingException { Map\u0026lt;Long, String\u0026gt; userResponseMap = new HashMap\u0026lt;\u0026gt;(); Map\u0026lt;Long, Future\u0026lt;SimpleHttpResponse\u0026gt;\u0026gt; futuresMap = new HashMap\u0026lt;\u0026gt;(); for (Long userId : userIdList) { try { // Create request  HttpHost httpHost = HttpHost.create(\u0026#34;https://reqres.in\u0026#34;); URI uri = new URIBuilder(\u0026#34;/api/users/\u0026#34; + userId + \u0026#34;?delay=\u0026#34; + delayInSec).build(); SimpleHttpRequest httpGetRequest = SimpleRequestBuilder.get().setHttpHost(httpHost) .setPath(uri.toString()).build(); // log request  Future\u0026lt;SimpleHttpResponse\u0026gt; future = httpClient.execute( SimpleRequestProducer.create(httpGetRequest), SimpleResponseConsumer.create(), new SimpleHttpResponseCallback( httpGetRequest, MessageFormat.format(\u0026#34;Failed to get user for ID: {0}\u0026#34;, userId))); futuresMap.put(userId, future); } catch (Exception e) { userResponseMap.put(userId, \u0026#34;Failed to get user for ID: \u0026#34; + userId); } } The code snippet aims to retrieve user data for a list of user IDs asynchronously using Apache HttpAsyncClient. It starts by ensuring that the HttpAsyncClient is initialized. It then initializes data structures to store user responses and futures for asynchronous HTTP requests.\nFor each user ID in the provided list, it constructs a GET request with a specified delay parameter and executes it asynchronously. It stores response futures in a map for later retrieval.
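The snippet above collects the futures but does not show how they are drained afterwards. Here is a library-free sketch of that draining step, using plain java.util.concurrent types in place of Future\u0026lt;SimpleHttpResponse\u0026gt;; the class and method names are our own for illustration, not from the article's code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FutureDrainExample {

    /**
     * Blocks on each future (with a timeout) and collects its result,
     * recording a failure message for requests that errored out.
     */
    public static Map<Long, String> drain(Map<Long, Future<String>> futuresMap) {
        Map<Long, String> responses = new HashMap<>();
        for (Map.Entry<Long, Future<String>> entry : futuresMap.entrySet()) {
            try {
                responses.put(entry.getKey(), entry.getValue().get(10, TimeUnit.SECONDS));
            } catch (Exception e) {
                responses.put(entry.getKey(), "Failed to get user for ID: " + entry.getKey());
            }
        }
        return responses;
    }

    public static void main(String[] args) {
        // Stand-ins for the futures returned by httpClient.execute().
        Map<Long, Future<String>> futures = new HashMap<>();
        futures.put(1L, CompletableFuture.completedFuture("{\"id\":1}"));
        CompletableFuture<String> failed = new CompletableFuture<>();
        failed.completeExceptionally(new RuntimeException("boom"));
        futures.put(2L, failed);

        Map<Long, String> responses = drain(futures);
        System.out.println(responses.get(1L)); // {"id":1}
        System.out.println(responses.get(2L)); // Failed to get user for ID: 2
    }
}
```

Because the requests complete in any order, draining the futures one by one still yields a complete result map regardless of completion order.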
It logs any exceptions that occur during request execution and adds corresponding error messages to the response map.\nNote that we have added a delay to the GET endpoint. It simulates a delayed server operation. The HTTP client sends the request, one after the other, without waiting for the response. We can verify it by checking the logs:\nStarted HTTP async client. Executing GET request: https://reqres.in/api/users/1 on host https://reqres.in Executing GET request: https://reqres.in/api/users/2 on host https://reqres.in ... Executing GET request: https://reqres.in/api/users/10 on host https://reqres.in Got 10 futures. GET https://reqres.in/api/users/1-\u0026gt;HTTP/1.1 200 OK GET https://reqres.in/api/users/2-\u0026gt;HTTP/1.1 200 OK ... GET https://reqres.in/api/users/10-\u0026gt;HTTP/1.1 200 OK It will send the requests in the order of the IDs in the list. However, the requests may complete in any order. So our implementation should be agnostic to the order of request completion.\nNow let\u0026rsquo;s verify the implementation using a unit test:\nclass UserAsyncHttpRequestHelperTests extends BaseAsyncExampleTests { private UserAsyncHttpRequestHelper userHttpRequestHelper = new UserAsyncHttpRequestHelper(); private Condition\u0026lt;String\u0026gt; getUserErrorCheck = new Condition\u0026lt;String\u0026gt;(\u0026#34;Check failure response.\u0026#34;) { @Override public boolean matches(String value) { // value should not be null  // value should not be a failure message  return value != null \u0026amp;\u0026amp; (!value.startsWith(\u0026#34;Failed to get user\u0026#34;) || value.equals(\u0026#34;Server does not support HTTP/2 multiplexing.\u0026#34;)); } }; /** Tests get user. 
*/ @Test void getUserWithCallback() { try { userHttpRequestHelper.startHttpAsyncClient(); // Send 10 requests in parallel  // call the delayed endpoint  List\u0026lt;Long\u0026gt; userIdList = List.of(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L); Map\u0026lt;Long, String\u0026gt; responseBodyMap = userHttpRequestHelper.getUserWithCallback(userIdList, 3); // verify  assertThat(responseBodyMap) .hasSameSizeAs(userIdList) .doesNotContainKey(null) .doesNotContainValue(null) .hasValueSatisfying(getUserErrorCheck); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } finally { userHttpRequestHelper.stopHttpAsyncClient(); } } } Here, we verify fetching user data asynchronously with Apache HttpAsyncClient. First, we initialize the client and send 10 parallel requests to a delayed endpoint, each with a unique user ID and a 3-second delay, and we store the responses in a map. After receiving all responses, we validate their correctness: ensuring the map matches the user IDs, contains no null key-value pairs, and no responses indicating failure. If an exception occurs, the test fails with an error message. Finally, we stop the client.\nAsynchronous Content Stream HTTP Request / Response Exchange Let\u0026rsquo;s now understand how to handle content stream HTTP requests asynchronously. We extend AbstractCharResponseConsumer to implement the content consumer. AbstractCharResponseConsumer is a base class that developers can extend to create custom response consumers for handling character-based content streams. This class specifically handles scenarios where the HTTP response entity contains character data, such as text-based content like HTML, JSON, or XML.\nWhen extending AbstractCharResponseConsumer, we typically override methods as follows. First, start() marks the beginning of the response stream. We perform any initialization or setup tasks required for processing the incoming character data stream. Then we have the data() method.
The Apache client calls it repeatedly to process the content received from the server. We implement logic to read and process the character data in chunks as it becomes available from the response stream. Finally, the client calls buildResult() when the response stream ends. Here we perform any cleanup or finalization tasks, such as closing resources or finalizing the processing of the received content. For error handling, we override the failed() method.\nContent Streaming User Scenarios In scenarios where large volumes of data need to be processed in real-time or near-real-time, asynchronous streaming with Apache HttpAsyncClient can be beneficial. For example, in a big data analytics platform, data streams from various sources such as sensors, logs, or social media feeds can be asynchronously streamed to a central processing system for analysis and insights generation.\nIoT devices often generate continuous streams of data that need to be transmitted and processed efficiently. We can use Apache HttpAsyncClient\u0026rsquo;s asynchronous streaming feature to handle such data streams from IoT devices. For instance, in a smart city deployment, sensor data from various devices like traffic cameras, environmental sensors, and smart meters can be asynchronously streamed to a central server for real-time monitoring and analysis.\nOTT platforms deliver streaming media content such as videos, audio, and live broadcasts over the internet. We can use Apache HttpAsyncClient\u0026rsquo;s asynchronous streaming capability to handle the transmission of media streams between servers and client applications.
For example, in a video streaming service, video content can be asynchronously streamed from content servers to end-user devices, ensuring smooth playback and minimal buffering delays.\n Here\u0026rsquo;s the implementation of the response consumer:\npublic class SimpleCharResponseConsumer extends AbstractCharResponseConsumer\u0026lt;SimpleHttpResponse\u0026gt; { // fields  // constructor  @Override protected void start(HttpResponse httpResponse, ContentType contentType) throws HttpException, IOException { responseBuilder.setLength(0); } @Override protected SimpleHttpResponse buildResult() throws IOException { return SimpleHttpResponse.create(HttpStatus.SC_OK, responseBuilder.toString()); } @Override protected void data(CharBuffer src, boolean endOfStream) throws IOException { while (src.hasRemaining()) { responseBuilder.append(src.get()); } if (endOfStream) { log.debug(responseBuilder.toString()); } } @Override public void failed(Exception ex) { throw new RequestProcessingException(errorMessage, ex); } // other overridden methods } We process character-based HTTP responses asynchronously. Extending AbstractCharResponseConsumer, we override methods to handle the response stream. start() resets the buffer that accumulates the content. data() appends received data to a StringBuilder. buildResult() constructs a SimpleHttpResponse with the HTTP status code and the accumulated content.
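The chunk-accumulation logic inside data() can be exercised in isolation with plain java.nio types. This standalone sketch (the class name CharChunkAccumulator is illustrative, not part of the article's code) shows why the consumer is insensitive to how the transport slices the payload:

```java
import java.nio.CharBuffer;

public class CharChunkAccumulator {

    private final StringBuilder responseBuilder = new StringBuilder();

    // Mirrors the data() callback: drain whatever the current chunk holds.
    public void data(CharBuffer src, boolean endOfStream) {
        while (src.hasRemaining()) {
            responseBuilder.append(src.get());
        }
        if (endOfStream) {
            // In the real consumer, buildResult() fires after this point.
        }
    }

    // Mirrors buildResult(): the full body is only meaningful at end of stream.
    public String buildResult() {
        return responseBuilder.toString();
    }

    public static void main(String[] args) {
        CharChunkAccumulator consumer = new CharChunkAccumulator();
        // The transport may deliver the payload in arbitrary chunk sizes.
        consumer.data(CharBuffer.wrap("{\"id\":"), false);
        consumer.data(CharBuffer.wrap("1}"), true);
        System.out.println(consumer.buildResult()); // {"id":1}
    }
}
```

Whatever chunk boundaries the server or network produces, the accumulated result is identical.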
On failure, failed() logs errors and throws a RequestProcessingException.\nNow let\u0026rsquo;s test this functionality:\nclass UserAsyncHttpRequestHelperTests extends BaseAsyncExampleTests { private UserAsyncHttpRequestHelper userHttpRequestHelper = new UserAsyncHttpRequestHelper(); private Condition\u0026lt;String\u0026gt; getUserErrorCheck = new Condition\u0026lt;String\u0026gt;(\u0026#34;Check failure response.\u0026#34;) { @Override public boolean matches(String value) { // value should not be null  // value should not be failure message  return value != null \u0026amp;\u0026amp; !value.startsWith(\u0026#34;Failed to get user\u0026#34;); } }; @Test void getUserWithStream() { try { userHttpRequestHelper.startHttpAsyncClient(); // Send 10 requests in parallel  // call the delayed endpoint  List\u0026lt;Long\u0026gt; userIdList = List.of(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L); Map\u0026lt;Long, String\u0026gt; responseBodyMap = userHttpRequestHelper.getUserWithStreams(userIdList, 3); // verify  assertThat(responseBodyMap) .hasSameSizeAs(userIdList) .doesNotContainKey(null) .doesNotContainValue(null) .hasValueSatisfying(getUserErrorCheck); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } finally { userHttpRequestHelper.stopHttpAsyncClient(); } } } The getUserWithStream() test method in the UserAsyncHttpRequestHelperTests class verifies the functionality of retrieving user data asynchronously using streams.\nFirst, we start the HTTP asynchronous client using userHttpRequestHelper.startHttpAsyncClient().\nThen, we prepare a list of user IDs and call the method getUserWithStreams() from the UserAsyncHttpRequestHelper class, passing the list of user IDs and a delay value of 3 seconds.\nThe method sends HTTP requests in parallel for each user ID, fetching user data from the delayed endpoint. It returns a map containing the response bodies for each user ID.\nFinally, the test verifies the correctness of the responses. 
It ensures that the response map has the same size as the list of user IDs. The map does not contain null keys or values. Furthermore, the map satisfies the predefined condition getUserErrorCheck, which checks that the response does not contain a failure message.\nIf any exception occurs during the execution of the test, the test fails with an error message indicating the failure to execute the HTTP request. Finally, we stop the HTTP asynchronous client using userHttpRequestHelper.stopHttpAsyncClient().\nPipelined HTTP Request / Response Exchange HTTP pipelining is a technique that allows a client to send multiple HTTP requests to a server without waiting for a response. The server, in turn, must respond to all the requests in the same order they were received. This technique is a way to improve the performance of HTTP/1.1 connections.\nWhen a client makes an HTTP request, it has to wait for the server to respond before sending another request. This waiting time can be significant, especially on high-latency networks. HTTP pipelining allows a client to send multiple requests at once, without waiting for the server to respond. By doing this, the client can make better use of the connection and reduce overall loading times.\nIt\u0026rsquo;s worth noting that HTTP pipelining is not supported by all servers, so it\u0026rsquo;s not always a reliable way to improve performance. Additionally, if there is an error in one of the requests, the entire pipeline will fail, and the client will need to resend all the requests.\nPipelining can also improve performance by packing multiple HTTP requests into a single TCP message. This can help to reduce the overhead of the connection and improve the overall speed of the transfer. 
However, we don\u0026rsquo;t use this technique widely, as it can be challenging to implement correctly and may lead to compatibility issues with some servers.\nNow let\u0026rsquo;s understand how to pipeline requests using Apache HttpClient:\npublic class CustomHttpResponseCallback implements FutureCallback\u0026lt;SimpleHttpResponse\u0026gt; { // fields  // constructor  @Override public void completed(SimpleHttpResponse response) { latch.countDown(); } @Override public void failed(Exception ex) { latch.countDown(); throw new RequestProcessingException(errorMessage, ex); } @Override public void cancelled() { latch.countDown(); } } We have overridden the life cycle methods of FutureCallback. We have also specified the response type SimpleHttpResponse that it will receive when the HTTP request call completes. When the call fails, we opt to raise an exception in failed().\nNow let\u0026rsquo;s see how to use this custom callback:\npublic Map\u0026lt;Long, String\u0026gt; getUserWithPipelining( MinimalHttpAsyncClient minimalHttpClient, List\u0026lt;Long\u0026gt; userIdList, int delayInSec, String scheme, String hostname) throws RequestProcessingException { return getUserWithParallelRequests(minimalHttpClient, userIdList, delayInSec, scheme, hostname); } private Map\u0026lt;Long, String\u0026gt; getUserWithParallelRequests( MinimalHttpAsyncClient minimalHttpClient, List\u0026lt;Long\u0026gt; userIdList, int delayInSec, String scheme, String hostname) throws RequestProcessingException { Map\u0026lt;Long, String\u0026gt; userResponseMap = new HashMap\u0026lt;\u0026gt;(); Map\u0026lt;Long, Future\u0026lt;SimpleHttpResponse\u0026gt;\u0026gt; futuresMap = new HashMap\u0026lt;\u0026gt;(); AsyncClientEndpoint endpoint = null; Long userId = null; try { HttpHost httpHost = new HttpHost(scheme, hostname); Future\u0026lt;AsyncClientEndpoint\u0026gt; leaseFuture = minimalHttpClient.lease(httpHost, null); endpoint = leaseFuture.get(30, TimeUnit.SECONDS); CountDownLatch latch = new CountDownLatch(userIdList.size()); for (Long currentUserId : userIdList) { userId = currentUserId; Future\u0026lt;SimpleHttpResponse\u0026gt; future = executeRequest(minimalHttpClient, delayInSec, userId, httpHost, latch); futuresMap.put(userId, future); } latch.await(); } catch (Exception e) { // handle exception  userResponseMap.put(userId, e.getMessage()); } finally { // release resources  } handleFutureResults(futuresMap, userResponseMap); return userResponseMap; } private Future\u0026lt;SimpleHttpResponse\u0026gt; executeRequest( MinimalHttpAsyncClient minimalHttpClient, int delayInSec, Long userId, HttpHost httpHost, CountDownLatch latch) throws URISyntaxException { // Create request  URI uri = new URIBuilder(\u0026#34;/api/users/\u0026#34; + userId + \u0026#34;?delay=\u0026#34; + delayInSec).build(); SimpleHttpRequest httpGetRequest = SimpleRequestBuilder.get().setHttpHost(httpHost).setPath(uri.getPath()).build(); log.debug( \u0026#34;Executing {} request: {} on host {}\u0026#34;, httpGetRequest.getMethod(), httpGetRequest.getUri(), httpHost); Future\u0026lt;SimpleHttpResponse\u0026gt; future = minimalHttpClient.execute( SimpleRequestProducer.create(httpGetRequest), SimpleResponseConsumer.create(), new CustomHttpResponseCallback( httpGetRequest, MessageFormat.format(\u0026#34;Failed to get user for ID: {0}\u0026#34;, userId), latch)); return future; } private void handleFutureResults( Map\u0026lt;Long, Future\u0026lt;SimpleHttpResponse\u0026gt;\u0026gt; futuresMap, Map\u0026lt;Long, String\u0026gt; userResponseMap) { for (Map.Entry\u0026lt;Long, Future\u0026lt;SimpleHttpResponse\u0026gt;\u0026gt; futureEntry : futuresMap.entrySet()) { Long currentUserId = futureEntry.getKey(); try { userResponseMap.put(currentUserId, futureEntry.getValue().get().getBodyText()); } catch (Exception e) { // prepare error message  userResponseMap.put(currentUserId, message); } } } This code retrieves user data asynchronously using pipelining.
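The latch coordination used above can be sketched with plain JDK concurrency types. This is an illustrative stand-in (the class name LatchCoordinationSketch and the canned responses are assumptions): every callback outcome must count the latch down exactly once, or the awaiting thread never resumes.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LatchCoordinationSketch {

    public static Map<Long, String> awaitAll(List<Long> userIdList) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(userIdList.size());
        Map<Long, String> responses = new ConcurrentHashMap<>();
        ExecutorService executor = Executors.newFixedThreadPool(4);
        try {
            for (Long userId : userIdList) {
                executor.submit(() -> {
                    try {
                        // completed(), failed(), and cancelled() would each land here.
                        responses.put(userId, "response-" + userId);
                    } finally {
                        // Count down in finally so that even a failing callback
                        // releases the waiting thread.
                        latch.countDown();
                    }
                });
            }
            latch.await(); // block until all callbacks have fired
        } finally {
            executor.shutdown();
        }
        return responses;
    }

    public static void main(String[] args) throws InterruptedException {
        Map<Long, String> result = awaitAll(List.of(1L, 2L, 3L));
        System.out.println(result.size()); // 3
    }
}
```

This is why CustomHttpResponseCallback counts the latch down in completed(), failed(), and cancelled() alike.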
It sends parallel requests to the server for each user ID. The method getUserWithPipelining() orchestrates this process, while getUserWithParallelRequests() handles the actual request execution. It processes each request asynchronously and stores the responses in a map. If an error occurs, it logs the error and adds an appropriate message to the response map. Finally, the method returns the map containing user responses.\nThe ConnectionClosedException For Unsupported HTTP/2 Async Features It may be noted that not all servers support HTTP/2 features like multiplexing. In that case, Apache HttpAsyncClient multiplexer encounters ConnectionClosedException with the message \u0026ldquo;Frame size exceeds maximum\u0026rdquo; when executing requests with an enclosed message body and the remote endpoint having negotiated a maximum frame size larger than the protocol default (16 KB).\n Now let\u0026rsquo;s understand how to call this functionality.\nFirst, let\u0026rsquo;s understand the operations to start and stop the client for HTTP/1:\npublic class UserAsyncHttpRequestHelper extends BaseHttpRequestHelper { private MinimalHttpAsyncClient minimalHttp1Client; // Starts minimal http 1 async client.  public MinimalHttpAsyncClient startMinimalHttp1AsyncClient() { if (minimalHttp1Client == null) { minimalHttp1Client = startMinimalHttpAsyncClient(HttpVersionPolicy.FORCE_HTTP_1); } return minimalHttp1Client; } // Starts minimal HTTP async client.  
private MinimalHttpAsyncClient startMinimalHttpAsyncClient( HttpVersionPolicy httpVersionPolicy ) { try { MinimalHttpAsyncClient minimalHttpClient = HttpAsyncClients.createMinimal( H2Config.DEFAULT, Http1Config.DEFAULT, IOReactorConfig.DEFAULT, PoolingAsyncClientConnectionManagerBuilder.create() .setTlsStrategy(getTlsStrategy()) .setDefaultTlsConfig( TlsConfig.custom().setVersionPolicy(httpVersionPolicy).build()) .build()); minimalHttpClient.start(); log.debug(\u0026#34;Started minimal HTTP async client for {}.\u0026#34;, httpVersionPolicy); return minimalHttpClient; } catch (Exception e) { String errorMsg = \u0026#34;Failed to start minimal HTTP async client.\u0026#34;; log.error(errorMsg, e); throw new RuntimeException(errorMsg, e); } } // Stops minimal http async client.  public void stopMinimalHttpAsyncClient(MinimalHttpAsyncClient minimalHttpClient) { if (minimalHttpClient != null) { log.info(\u0026#34;Shutting down minimal http async client.\u0026#34;); minimalHttpClient.close(CloseMode.GRACEFUL); minimalHttpClient = null; } } } The UserAsyncHttpRequestHelper class facilitates the management of a minimal HTTP asynchronous client for making requests. It contains methods to start and stop the client.\nThe startMinimalHttp1AsyncClient() method initiates the minimal HTTP/1 async client if it hasn\u0026rsquo;t been started already. It checks if the client is null, and if so, it starts the client with HTTP/1 enforced as the HTTP version policy. It then returns the initialized client.\nThe startMinimalHttpAsyncClient() method is a private helper method responsible for initializing the minimal HTTP async client. It creates a MinimalHttpAsyncClient instance with default configurations such as HTTP/2, HTTP/1, I/O reactor, and connection manager settings. It starts the client, and if successful, it logs the event and returns the initialized client. If an exception occurs during initialization, it logs an error message and throws a runtime exception.\nThe stopMinimalHttpAsyncClient() method gracefully stops the minimal HTTP async client.
It takes the client as an argument, checks if it\u0026rsquo;s not null, shuts down the client gracefully, logs the shutdown event, and sets the client reference to null.\nThese methods provide a convenient way to manage the life cycle of the minimal HTTP async client, ensuring proper initialization and shutdown procedures.\nHere\u0026rsquo;s the test to execute the pipelined HTTP requests:\n@Test void getUserWithPipelining() { MinimalHttpAsyncClient minimalHttpAsyncClient = null; try { minimalHttpAsyncClient = userHttpRequestHelper.startMinimalHttp1AsyncClient(); // Send 10 requests in parallel  // call the delayed endpoint  List\u0026lt;Long\u0026gt; userIdList = List.of(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L); Map\u0026lt;Long, String\u0026gt; responseBodyMap = userHttpRequestHelper.getUserWithPipelining( minimalHttpAsyncClient, userIdList, 3, \u0026#34;https\u0026#34;, \u0026#34;reqres.in\u0026#34;); // verify  assertThat(responseBodyMap) .hasSameSizeAs(userIdList) .doesNotContainKey(null) .doesNotContainValue(null) .hasValueSatisfying(getUserErrorCheck); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } finally { userHttpRequestHelper.stopMinimalHttpAsyncClient(minimalHttpAsyncClient); } } In the getUserWithPipelining() test method, an instance of MinimalHttpAsyncClient is initialized to null. The method starts by attempting to start a minimal HTTP/1 asynchronous client using the startMinimalHttp1AsyncClient() method of the userHttpRequestHelper object, and assigns this client to the minimalHttpAsyncClient variable.\nIt creates a list of user IDs (userIdList). It then invokes the getUserWithPipelining() method on the userHttpRequestHelper object, passing the minimalHttpAsyncClient, the userIdList, a delay of 3 seconds, and the scheme and hostname of the target server (\u0026ldquo;https\u0026rdquo; and \u0026ldquo;reqres.in\u0026rdquo; respectively).
This method orchestrates the parallel execution of pipelined requests to the specified endpoints.\nAfter executing all the requests, the method retrieves the response body for each request and populates a map (responseBodyMap) with the user ID as the key and the response body as the value.\nThe test then verifies the correctness of the responses by asserting that the responseBodyMap has the same size as the userIdList, does not contain any null keys or values, and satisfies the getUserErrorCheck condition.\nIf any exception occurs during the execution of the HTTP requests, the test fails with an appropriate error message. Finally, the stopMinimalHttpAsyncClient method is called to stop and release resources associated with the minimalHttpAsyncClient.\nMultiplexed HTTP Request / Response Exchange In HTTP/2, multiplexing enables a web server connection to handle multiple requests and responses simultaneously, leading to improved efficiency and resource utilization. Unlike HTTP/1.1, where the client had to wait for a response before sending the next request, HTTP/2 allows for parallel processing. This means that resources can load concurrently, preventing one resource from blocking others. By using a single TCP connection to transmit multiple data streams, HTTP/2 eliminates the need to establish new connections for each request, resulting in faster loading times. Inspired by Google\u0026rsquo;s SPDY protocol, HTTP/2 enhances web page performance by compressing, multiplexing, and prioritizing HTTP requests, making pages load much faster than with HTTP/1.1.\nThere is little difference between the way pipelined and multiplexed HTTP requests are processed.
In a pipelined exchange, we enforce the HTTP/1 version policy, whereas in a multiplexed exchange we enforce HTTP/2.\nHere\u0026rsquo;s the implementation for client setup for multiplexed exchange:\npublic class UserAsyncHttpRequestHelper extends BaseHttpRequestHelper { private MinimalHttpAsyncClient minimalHttp2Client; public MinimalHttpAsyncClient startMinimalHttp2AsyncClient() { if (minimalHttp2Client == null) { minimalHttp2Client = startMinimalHttpAsyncClient(HttpVersionPolicy.FORCE_HTTP_2); } return minimalHttp2Client; } } We have already seen the method startMinimalHttpAsyncClient(). We pass HttpVersionPolicy.FORCE_HTTP_2 to start the client for multiplexed exchanges.\nAnd here is the logic to call request processing with multiplexing:\npublic Map\u0026lt;Long, String\u0026gt; getUserWithMultiplexing( MinimalHttpAsyncClient minimalHttpClient, List\u0026lt;Long\u0026gt; userIdList, int delayInSec, String scheme, String hostname) throws RequestProcessingException { return getUserWithParallelRequests(minimalHttpClient, userIdList, delayInSec, scheme, hostname); } Here\u0026rsquo;s the test to verify this functionality:\n@Test void getUserWithMultiplexing() { MinimalHttpAsyncClient minimalHttpAsyncClient = null; try { minimalHttpAsyncClient = userHttpRequestHelper.startMinimalHttp2AsyncClient(); // Send 10 requests in parallel  // call the delayed endpoint  List\u0026lt;Long\u0026gt; userIdList = List.of(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L); Map\u0026lt;Long, String\u0026gt; responseBodyMap = userHttpRequestHelper.getUserWithMultiplexing( minimalHttpAsyncClient, userIdList, 3, \u0026#34;https\u0026#34;, \u0026#34;reqres.in\u0026#34;); // verify  assertThat(responseBodyMap) .hasSameSizeAs(userIdList) .doesNotContainKey(null) .doesNotContainValue(null) .hasValueSatisfying(getUserErrorCheck); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } finally {
userHttpRequestHelper.stopMinimalHttpAsyncClient(minimalHttpAsyncClient); } } We first attempt to start a minimal HTTP/2 asynchronous client using the startMinimalHttp2AsyncClient() method of the userHttpRequestHelper object, and assign this client to the minimalHttpAsyncClient variable.\nThen we populate the list of user IDs (userIdList). Then we invoke the getUserWithMultiplexing() method on the userHttpRequestHelper object, passing the minimalHttpAsyncClient, the userIdList, a delay of 3 seconds, and the scheme and host name of the target server (\u0026ldquo;https\u0026rdquo; and \u0026ldquo;reqres.in\u0026rdquo; respectively). This method orchestrates the parallel execution of multiplexed requests to the specified endpoints.\nOnce it executes the requests, the method retrieves the response body for each request and populates a map (responseBodyMap) with the user ID as the key and the response body as the value.\nThe test then verifies the correctness of the responses by asserting that the responseBodyMap has the same size as the userIdList, does not contain any null keys or values, and satisfies the getUserErrorCheck condition.\nIf any exception occurs during the execution of the HTTP requests, the test fails with an appropriate error message. Finally, we call the stopMinimalHttpAsyncClient() method to stop and release resources associated with the minimalHttpAsyncClient.\nPipelining vs Multiplexing In HTTP/1.1 pipelining, requests must still wait for their turn, and the server must return responses in the exact order the requests were sent, which can cause delays known as head-of-line blocking.\nHowever, HTTP/2 improves on this by dividing response data into smaller chunks and returning them in an interleaved manner. This prevents any single request from blocking others, resulting in faster loading times.\nIt\u0026rsquo;s important to note that HTTP/1.1 pipelining never became widely used due to limited browser and server support.
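As a point of comparison only, the JDK's built-in java.net.http client (not the Apache client discussed in this article) expresses the same choice of protocol version through its builder. A minimal sketch, with the class name VersionPolicySketch being illustrative:

```java
import java.net.http.HttpClient;

public class VersionPolicySketch {

    // Builds a client pinned to the given protocol version, roughly analogous
    // to Apache's HttpVersionPolicy.FORCE_HTTP_1 / FORCE_HTTP_2.
    public static HttpClient forVersion(HttpClient.Version version) {
        return HttpClient.newBuilder().version(version).build();
    }

    public static void main(String[] args) {
        System.out.println(forVersion(HttpClient.Version.HTTP_1_1).version()); // HTTP_1_1
        // Note: for the JDK client, HTTP_2 is a preference, not a guarantee;
        // it negotiates down to HTTP/1.1 if the server cannot speak HTTP/2.
        System.out.println(forVersion(HttpClient.Version.HTTP_2).version()); // HTTP_2
    }
}
```

The design point is the same in both libraries: the version policy is fixed per client instance, which is why the article keeps separate minimal clients for the pipelined (HTTP/1) and multiplexed (HTTP/2) scenarios.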
For more details, visit HTTP Pipelining, HTTP/2 and Multiplexing.\nWhile HTTP/1.1 pipelining and HTTP/2 offer similar performance benefits in theory, HTTP/2 is favored for its more extensive features and broader support.\n Request Execution Interceptors Request and response interceptors in Apache HttpAsyncClient allow developers to intercept and modify requests and responses before they are sent or received by the client.\nHttpRequestInterceptor is an interface used to intercept and modify HTTP requests before they are sent to the server. It has the method:\nvoid process(HttpRequest request, EntityDetails entity, HttpContext context) It provides a mechanism to add custom headers, modify request parameters, or perform any other preprocessing tasks on the request.\nAsyncExecChainHandler is an interface used to intercept and process requests and responses as they pass through the execution chain of the HTTP async client. It has the method:\nvoid execute(HttpRequest httpRequest, AsyncEntityProducer asyncEntityProducer, AsyncExecChain.Scope scope, AsyncExecChain asyncExecChain, AsyncExecCallback asyncExecCallback) throws HttpException, IOException It allows developers to perform custom actions such as logging, error handling, creating mock responses, or modifying the behavior of the client based on the response received from the server.\nThese interceptors are useful in various cross-cutting scenarios, such as:\nIntercept requests and responses to log information such as request parameters, response status codes, or response bodies for debugging or auditing purposes. Add authentication tokens or credentials to outgoing requests before sending them to the server. Intercept responses to handle errors or exceptions gracefully and take appropriate actions based on the response received from the server.
Modify requests to add custom headers, parameters, or payloads before sending them to the server.\nNow let\u0026rsquo;s understand one of these scenarios with an example. Let\u0026rsquo;s learn how to create a mock response:\npublic class UserResponseAsyncExecChainHandler implements AsyncExecChainHandler { @Override public void execute(HttpRequest httpRequest, AsyncEntityProducer asyncEntityProducer, AsyncExecChain.Scope scope, AsyncExecChain asyncExecChain, AsyncExecCallback asyncExecCallback ) throws HttpException, IOException { try { boolean requestHandled = false; if (httpRequest.containsHeader(\u0026#34;x-base-number\u0026#34;) \u0026amp;\u0026amp; httpRequest.containsHeader(\u0026#34;x-req-exec-number\u0026#34;)) { String path = httpRequest.getPath(); if (StringUtils.startsWith(path, \u0026#34;/api/users/\u0026#34;)) { requestHandled = handleUserRequest(httpRequest, asyncExecCallback); } } if (!requestHandled) { asyncExecChain.proceed(httpRequest, asyncEntityProducer, scope, asyncExecCallback); } } catch (IOException | HttpException ex) { String msg = \u0026#34;Failed to execute request.\u0026#34;; log.error(msg, ex); throw new RequestProcessingException(msg, ex); } } private boolean handleUserRequest(HttpRequest httpRequest, AsyncExecCallback asyncExecCallback) throws HttpException, IOException { boolean requestHandled = false; Header baseNumberHeader = httpRequest.getFirstHeader(\u0026#34;x-base-number\u0026#34;); String baseNumberStr = baseNumberHeader.getValue(); int baseNumber = Integer.parseInt(baseNumberStr); Header reqExecNumberHeader = httpRequest.getFirstHeader(\u0026#34;x-req-exec-number\u0026#34;); String reqExecNumberStr = reqExecNumberHeader.getValue(); int reqExecNumber = Integer.parseInt(reqExecNumberStr); // check if request execution number is a multiple of the base number  if (reqExecNumber % baseNumber == 0) { String reasonPhrase = \u0026#34;Multiple of \u0026#34; + baseNumber; HttpResponse response = new BasicHttpResponse(HttpStatus.SC_OK, reasonPhrase);
ByteBuffer content = ByteBuffer.wrap(reasonPhrase.getBytes(StandardCharsets.US_ASCII)); BasicEntityDetails entityDetails = new BasicEntityDetails(content.remaining(), ContentType.TEXT_PLAIN); AsyncDataConsumer asyncDataConsumer = asyncExecCallback.handleResponse(response, entityDetails); asyncDataConsumer.consume(content); asyncDataConsumer.streamEnd(null); requestHandled = true; } return requestHandled; } } It overrides the default behavior of handling HTTP requests in the asynchronous execution chain. It checks if the request contains specific headers (x-base-number and x-req-exec-number) and if the request path starts with \u0026ldquo;/api/users/\u0026rdquo;. If it meets these conditions, it extracts the values of these headers and parses them into integers. Then, it checks if the reqExecNumber is a multiple of the baseNumber. If so, it creates a custom response with the status code HTTP OK (200) and a reason phrase indicating that it\u0026rsquo;s a multiple of the base number. Otherwise, it proceeds with the execution chain to handle the request normally. 
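The header-driven decision in handleUserRequest() boils down to a small pure function, sketched here with plain JDK types (the class name MockResponseDecision and the method mockBodyFor() are illustrative, not part of the article's code; the header values arrive as strings, exactly as in the interceptor):

```java
import java.util.Optional;

public class MockResponseDecision {

    // Produce a mock body only when the request execution number is a
    // multiple of the base number; otherwise the caller proceeds with the
    // real execution chain.
    public static Optional<String> mockBodyFor(String baseNumberHeader, String reqExecNumberHeader) {
        int baseNumber = Integer.parseInt(baseNumberHeader);
        int reqExecNumber = Integer.parseInt(reqExecNumberHeader);
        if (reqExecNumber % baseNumber == 0) {
            return Optional.of("Multiple of " + baseNumber);
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(mockBodyFor("3", "6").orElse("pass-through")); // Multiple of 3
        System.out.println(mockBodyFor("3", "4").orElse("pass-through")); // pass-through
    }
}
```

Keeping the decision logic this small is what makes the interceptor easy to test independently of the transport machinery around it.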
Finally, it handles any exceptions that occur during the execution process.\nNow let\u0026rsquo;s prepare a client and configure it to use an interceptor:\npublic CloseableHttpAsyncClient startHttpAsyncInterceptingClient() { try { if (httpAsyncInterceptingClient == null) { PoolingAsyncClientConnectionManager cm = PoolingAsyncClientConnectionManagerBuilder.create() .setTlsStrategy(getTlsStrategy()) .build(); IOReactorConfig ioReactorConfig = IOReactorConfig.custom().setSoTimeout(Timeout.ofSeconds(5)).build(); httpAsyncInterceptingClient = HttpAsyncClients.custom() .setIOReactorConfig(ioReactorConfig) .setConnectionManager(cm) .addExecInterceptorFirst(\u0026#34;custom\u0026#34;, new UserResponseAsyncExecChainHandler()) .build(); httpAsyncInterceptingClient.start(); log.debug(\u0026#34;Started HTTP async client with request interceptors.\u0026#34;); } return httpAsyncInterceptingClient; } catch (Exception e) { String errorMsg = \u0026#34;Failed to start HTTP async client.\u0026#34;; log.error(errorMsg, e); throw new RuntimeException(errorMsg, e); } } It initializes and returns an HTTP asynchronous client with request interceptors. It first checks whether the client is already initialized. If not, it creates a pooling asynchronous client connection manager with a specified TLS strategy. Then, it configures an I/O reactor with a socket timeout of 5 seconds. Next, it creates the HTTP asynchronous client, adds a custom execution interceptor named \u0026ldquo;custom\u0026rdquo; (which is an instance of UserResponseAsyncExecChainHandler) as the first interceptor, and sets the connection manager and I/O reactor configuration.
Finally, it starts the client and logs the action.\nNow let\u0026rsquo;s see the scenario of executing an HTTP request and its interception:\npublic Map\u0026lt;Integer, String\u0026gt; executeRequestsWithInterceptors( CloseableHttpAsyncClient closeableHttpAsyncClient, Long userId, int count, int baseNumber) throws RequestProcessingException { Map\u0026lt;Integer, String\u0026gt; userResponseMap = new HashMap\u0026lt;\u0026gt;(); Map\u0026lt;Integer, Future\u0026lt;SimpleHttpResponse\u0026gt;\u0026gt; futuresMap = new LinkedHashMap\u0026lt;\u0026gt;(); try { HttpHost httpHost = HttpHost.create(\u0026#34;https://reqres.in\u0026#34;); URI uri = new URIBuilder(\u0026#34;/api/users/\u0026#34; + userId).build(); String path = uri.getPath(); SimpleHttpRequest httpGetRequest = SimpleRequestBuilder.get() .setHttpHost(httpHost) .setPath(path) .addHeader(\u0026#34;x-base-number\u0026#34;, String.valueOf(baseNumber)) .build(); for (int i = 0; i \u0026lt; count; i++) { try { Future\u0026lt;SimpleHttpResponse\u0026gt; future = null; future = executeInterceptorRequest(closeableHttpAsyncClient, httpGetRequest, i, httpHost); futuresMap.put(i, future); } catch (RequestProcessingException e) { userResponseMap.put(i, e.getMessage()); } } } catch (Exception e) { String message = MessageFormat.format(\u0026#34;Failed to get user for ID: {0}\u0026#34;, userId); log.error(message, e); throw new RequestProcessingException(message, e); } handleInterceptorFutureResults(futuresMap, userResponseMap); return userResponseMap; } private Future\u0026lt;SimpleHttpResponse\u0026gt; executeInterceptorRequest( CloseableHttpAsyncClient closeableHttpAsyncClient, SimpleHttpRequest httpGetRequest, int i, HttpHost httpHost) throws URISyntaxException { // Update request  httpGetRequest.removeHeaders(\u0026#34;x-req-exec-number\u0026#34;); httpGetRequest.addHeader(\u0026#34;x-req-exec-number\u0026#34;, String.valueOf(i)); log.debug( \u0026#34;Executing {} request: {} on host {}\u0026#34;, 
httpGetRequest.getMethod(), httpGetRequest.getUri(), httpHost); return closeableHttpAsyncClient.execute( httpGetRequest, new SimpleHttpResponseCallback(httpGetRequest, \u0026#34;\u0026#34;)); } private void handleInterceptorFutureResults( Map\u0026lt;Integer, Future\u0026lt;SimpleHttpResponse\u0026gt;\u0026gt; futuresMap, Map\u0026lt;Integer, String\u0026gt; userResponseMap) { log.debug(\u0026#34;Got {} futures.\u0026#34;, futuresMap.size()); for (Map.Entry\u0026lt;Integer, Future\u0026lt;SimpleHttpResponse\u0026gt;\u0026gt; futureEntry : futuresMap.entrySet()) { Integer currentRequestId = futureEntry.getKey(); try { userResponseMap.put(currentRequestId, futureEntry.getValue().get().getBodyText()); } catch (Exception e) { String message = MessageFormat.format(\u0026#34;Failed to get user for request id: {0}\u0026#34;, currentRequestId); log.error(message, e); userResponseMap.put(currentRequestId, message); } } } It sends multiple asynchronous HTTP requests with interceptors applied. It initializes a map to store the responses and a map for the futures of each request. Then, it constructs a request with a specified base number and user ID. For each request, it updates the request with the current request ID, executes the request asynchronously using the provided HTTP client, and adds the future to the map. If an exception occurs during execution, it logs the error message. After executing all requests, it retrieves the responses from the futures and populates the response map. 
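That collection step — blocking on each future and turning failures into error messages instead of aborting the whole batch — can be sketched with the JDK's CompletableFuture (the article's code uses Apache's Future; the class and method names below are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Sketch: collect per-request futures into a response map; a failed
// future contributes an error message rather than failing the batch.
public class FutureResultsSketch {
    static Map<Integer, String> collect(Map<Integer, CompletableFuture<String>> futures) {
        Map<Integer, String> responses = new LinkedHashMap<>();
        for (Map.Entry<Integer, CompletableFuture<String>> e : futures.entrySet()) {
            try {
                responses.put(e.getKey(), e.getValue().join()); // blocks per future
            } catch (Exception ex) {
                responses.put(e.getKey(), "Failed to get user for request id: " + e.getKey());
            }
        }
        return responses;
    }

    public static void main(String[] args) {
        Map<Integer, CompletableFuture<String>> futures = new LinkedHashMap<>();
        futures.put(0, CompletableFuture.completedFuture("ok-0"));
        futures.put(1, CompletableFuture.failedFuture(new RuntimeException("boom")));
        Map<Integer, String> out = collect(futures);
        if (!out.get(0).equals("ok-0")) throw new AssertionError();
        if (!out.get(1).startsWith("Failed")) throw new AssertionError();
        System.out.println(out);
    }
}
```

The same shape applies regardless of the future type: iterate in request order, block on each result, and record either the body or an error entry.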
Finally, it returns the map containing the request IDs and corresponding responses.\nFinally, let\u0026rsquo;s test our logic:\n@Test void getUserWithInterceptors() { try (CloseableHttpAsyncClient closeableHttpAsyncClient = userHttpRequestHelper.startHttpAsyncInterceptingClient()) { int baseNumber = 3; int requestExecCount = 5; Map\u0026lt;Integer, String\u0026gt; responseBodyMap = userHttpRequestHelper.executeRequestsWithInterceptors( closeableHttpAsyncClient, 1L, requestExecCount, baseNumber); // verify  assertThat(responseBodyMap) .hasSize(requestExecCount) .doesNotContainKey(null) .doesNotContainValue(null) .hasValueSatisfying(getUserErrorCheck); String expectedResponse = \u0026#34;Multiple of \u0026#34; + baseNumber; for (Integer i : responseBodyMap.keySet()) { if (i % baseNumber == 0) { assertThat(responseBodyMap).containsEntry(i, expectedResponse); } } } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } } We execute asynchronous HTTP requests with interceptors applied. First, we start a new closeable HTTP async client with interceptors enabled using the startHttpAsyncInterceptingClient() method. Then, we define parameters like the base number and request execution count and invoke the executeRequestsWithInterceptors() method to send multiple requests asynchronously. After receiving the responses, we verify the size and content of the response map, ensuring that all responses are valid. Finally, we check if the responses contain the expected response for requests where the request ID is a multiple of the base number.\nConclusion In this article, we got familiar with the async APIs of Apache HttpClient, and we explored a multitude of essential functionalities vital for interacting with web servers. We learned its key functionalities including basic request processing, content streaming, pipelining, and multiplexing. 
We learned how to use interceptors to customize request and response processing, enhancing flexibility and control. Overall, the Apache HTTP Async Client is suitable for situations requiring efficient, non-blocking HTTP communication, offering a wide range of features to meet diverse requirements in modern web development.\n","date":"May 29, 2024","image":"https://reflectoring.io/images/stock/0075-envelopes-1200x628-branded_hu2f9dd448936f3159981d5b962b2c979c_136735_650x0_resize_q90_box.jpg","permalink":"/apache-http-client-async-apis/","title":"Async APIs Offered by Apache HttpClient"},{"categories":["Java"],"contents":"In this article, we are going to learn about the classic APIs offered by Apache HttpClient. We are going to explore the different ways Apache HttpClient helps us to send and receive data over the internet in classic (synchronous) mode. From simple GET requests to complex multipart POST requests, we\u0026rsquo;ll cover it all with real-world examples. So get ready to learn how to implement HTTP interactions with Apache HttpClient!\nThe \u0026ldquo;Create an HTTP Client with Apache HttpClient\u0026rdquo; Series This article is the third part of a series:\n Introduction to Apache HttpClient Apache HttpClient Configuration Classic APIs Offered by Apache HttpClient Async APIs Offered by Apache HttpClient Reactive APIs Offered by Apache HttpClient   Example Code This article is accompanied by a working code example on GitHub. We have grouped the examples under following categories of APIs: classic, async, and reactive. In this article, we will learn about the classic APIs offered by Apache HttpClient.\nReqres Fake Data CRUD API We are going to use Reqres API Server to test different HTTP methods. It is a free online API that can be used for testing and prototyping. It provides a variety of endpoints that can be used to test different HTTP methods. 
The Reqres API is a good choice for testing CRUD operations because it supports all the HTTP methods that CRUD allows.\n HttpClient (Classic APIs) In this section of examples we are going to learn how to use HttpClient for sending requests and consuming responses in synchronous mode. The client code will wait until it receives a response from the server.\nHTTP and CRUD Operations CRUD operations refer to Create, Read, Update, and Delete actions performed on data. In the context of HTTP endpoints for a /users resource:\n Create: Use HTTP POST to add a new user: POST /users Read: Use HTTP GET to retrieve user data: GET /users/{userId} for a specific user or GET /users?page=1 for a list of users with pagination. Update: Use HTTP PUT or PATCH to modify user data: PUT /users/{userId} Delete: Use HTTP DELETE to remove a user: DELETE /users/{userId}   Now let\u0026rsquo;s learn to process HTTP responses using a response handler.\nThe motivation behind using a response handler in Apache HttpClient is to provide a structured and reusable way to process HTTP responses.\nResponse handlers encapsulate the logic for extracting data from HTTP responses, allowing developers to define how to handle different types of responses in a modular and consistent manner.\nBy using response handlers, developers can centralize error handling, data extraction, and resource cleanup, resulting in cleaner and more maintainable code.\nAdditionally, response handlers promote code reusability, as the same handler can be used across multiple HTTP requests with similar response processing requirements.\nOverall, response handlers enhance the flexibility, readability, and maintainability of code that interacts with HTTP responses using Apache HttpClient.\nOverview of Executing and Testing HTTP Methods Before we start going through the code snippet, let\u0026rsquo;s understand the general structure of the logic to execute HTTP methods and unit test to verify the logic. 
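Stripped of library specifics, the response-handler idea described above is just a function from a raw response to a typed result. A minimal sketch with hypothetical types (not the Apache API) shows why one request-executing method can serve many result types:

```java
// Sketch: a response handler maps a raw response to a typed result,
// so the caller decides what to extract (body, status code, ...).
public class HandlerSketch {
    record RawResponse(int code, String body) {}

    @FunctionalInterface
    interface ResponseHandler<T> {
        T handle(RawResponse response);
    }

    // Stand-in for HttpClient.execute(host, request, handler).
    static <T> T execute(RawResponse simulated, ResponseHandler<T> handler) {
        return handler.handle(simulated);
    }

    public static void main(String[] args) {
        RawResponse resp = new RawResponse(200, "{\"id\":1}");
        // One handler extracts the body, another only the status code.
        String body = execute(resp, RawResponse::body);
        Integer code = execute(resp, RawResponse::code);
        System.out.println(code + " " + body);
    }
}
```

Because the handler owns the extraction logic, error handling and parsing live in one reusable place instead of being repeated at every call site.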
Here is the sample code to execute an HTTP method:\npublic class UserSimpleHttpRequestHelper extends BaseHttpRequestHelper { public String executeHttpMethod(Map\u0026lt;String, String\u0026gt; optionalRequestParameters) throws RequestProcessingException { try (CloseableHttpClient httpClient = HttpClients.createDefault()) { // Create request  HttpHost httpHost = HttpHost.create(\u0026#34;https://reqres.in\u0026#34;); // Populate NameValuePair list from optionalRequestParameters  // Populate URI  // Populate HTTP request  // Create a response handler  BasicHttpClientResponseHandler handler = new BasicHttpClientResponseHandler(); String responseBody = httpClient.execute(httpHost, httpRequest, handler); return responseBody; } catch (Exception e) { throw new RequestProcessingException(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } } } We define a class UserSimpleHttpRequestHelper that extends BaseHttpRequestHelper. It contains a method executeHttpMethod() that takes optional request parameters as input and returns the response body as a string.\nInside the method, we create an HTTP client using HttpClients.createDefault(). Then we create an HTTP host object representing the target host. Next, we prepare the HTTP request by populating parameters such as name-value pairs, URI, and HTTP method.\nAfter preparing the request, we create a response handler of type BasicHttpClientResponseHandler to handle the response. Finally, we execute the HTTP request using the HTTP client, passing the host, request, and handler, and returns the response body as a string. If any exception occurs during this process, we throw a RequestProcessingException with an appropriate error message.\nHere is a test case to verify this functionality:\npublic class UserSimpleHttpRequestHelperTests extends BaseClassicExampleTests { private UserSimpleHttpRequestHelper userHttpRequestHelper = new UserSimpleHttpRequestHelper(); /** Execute HTTP request. 
*/ @Test void executeHttpMethod() { try { // prepare optional request parameters  Map\u0026lt;String, String\u0026gt; params = Map.of(\u0026#34;page\u0026#34;, \u0026#34;1\u0026#34;); // execute  String responseBody = userHttpRequestHelper.executeHttpMethod(params); // verify  assertThat(responseBody).isNotEmpty(); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } } } Inside the test method, we first prepare the optional request parameters, creating a map containing key-value pairs. These parameters might include details such as the page number for pagination.\nThen, we invoke the executeHttpMethod() method of the UserSimpleHttpRequestHelper, passing the prepared parameters. This method executes an HTTP request using the Apache HttpClient and returns the response body as a string.\nAfter executing the HTTP request, the test verifies the response body. It asserts that the response body is not empty, ensuring that the HTTP request was successful and returned some data.\nIf any exception occurs during the execution of the test, the test fails and provides details about the failure, including the exception message. It properly reports any errors encountered during the test execution.\nHTTP Methods Used to Create Records There\u0026rsquo;s one CRUD method to create records: POST.\nExecuting an HTTP POST Request to Create a New Record We use HTTP POST to create a new user. 
We need to provide the details needed to create a new user.\nHere\u0026rsquo;s the code to create a new record:\npublic String createUser( String firstName, String lastName, String email, String avatar ) throws RequestProcessingException { try (CloseableHttpClient httpClient = HttpClients.createDefault()) { // Create request  List\u0026lt;NameValuePair\u0026gt; formParams = new ArrayList\u0026lt;NameValuePair\u0026gt;(); formParams.add(new BasicNameValuePair(\u0026#34;first_name\u0026#34;, firstName)); formParams.add(new BasicNameValuePair(\u0026#34;last_name\u0026#34;, lastName)); formParams.add(new BasicNameValuePair(\u0026#34;email\u0026#34;, email)); formParams.add(new BasicNameValuePair(\u0026#34;avatar\u0026#34;, avatar)); try (UrlEncodedFormEntity entity = new UrlEncodedFormEntity(formParams, StandardCharsets.UTF_8)) { HttpHost httpHost = HttpHost.create(\u0026#34;https://reqres.in\u0026#34;); URI uri = new URIBuilder(\u0026#34;/api/users/\u0026#34;).build(); HttpPost httpPostRequest = new HttpPost(uri); httpPostRequest.setEntity(entity); // Create a response handler  BasicHttpClientResponseHandler handler = new BasicHttpClientResponseHandler(); String responseBody = httpClient.execute(httpHost, httpPostRequest, handler); return responseBody; } catch (Exception e) { throw new RequestProcessingException(\u0026#34;Failed to create user.\u0026#34;, e); } } The example illustrates a method for creating a new user by sending an HTTP POST request to the specified endpoint. We construct a list of form parameters containing the user\u0026rsquo;s details such as first name, last name, email, and avatar. 
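The UrlEncodedFormEntity serializes those name/value pairs into an application/x-www-form-urlencoded body. The encoding itself can be sketched with just java.net.URLEncoder (the helper name is hypothetical):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: build an application/x-www-form-urlencoded request body
// from name/value pairs, as UrlEncodedFormEntity does internally.
public class FormBodySketch {
    static String encode(Map<String, String> params) {
        return params.entrySet().stream()
            .map(e -> URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8)
                + "=" + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
            .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("first_name", "Dummy First");
        params.put("email", "dummy@example.com");
        System.out.println(encode(params));
        // first_name=Dummy+First&email=dummy%40example.com
    }
}
```

Note how spaces become `+` and `@` becomes `%40`; the entity also sets the matching Content-Type header for us.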
Then we call the execute() method and receive a response body containing the created user\u0026rsquo;s data.\nAnd here\u0026rsquo;s the test:\n@Test void executePostRequest() { try { // execute  String createdUser = userHttpRequestHelper.createUser( \u0026#34;DummyFirst\u0026#34;, \u0026#34;DummyLast\u0026#34;, \u0026#34;DummyEmail@example.com\u0026#34;, \u0026#34;DummyAvatar\u0026#34;); // verify  assertThat(createdUser).isNotEmpty(); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } } The unit test verifies the functionality of the createUser() method. It calls the createUser() method with dummy user details (first name, last name, email, and avatar). The response represents the created user\u0026rsquo;s data. Using assertions, the test verifies the response.\nHTTP Methods Used to Read Records The HTTP methods to read records are GET, HEAD, OPTIONS, and TRACE.\nExecuting an HTTP GET Request to Get Paginated Records We use an HTTP GET request to retrieve a single record as well as records in bulk. Furthermore, we can use pagination to split requests that return large responses into multiple requests.\nPagination, Its Advantages, Disadvantages, and Complexities Pagination in HTTP request processing involves dividing large sets of data into smaller, manageable pages. Clients specify the page they want using parameters like page=1. The server processes the request, retrieves the relevant page of data, and returns it to the client, enabling efficient data retrieval and presentation. Advantages of pagination include improved performance, reduced server load, enhanced user experience, and efficient handling of large datasets.\nPagination in HTTP REST calls can cause complexities on both the server and client sides. 
Server-side complexities include additional logic for managing paginated data, increased resource usage for deep pagination, potential data consistency issues due to changing underlying data, and scalability challenges in distributed systems.\nOn the client side, complexities arise from managing the pagination state, handling additional network overhead due to more HTTP requests, ensuring a smooth user experience with pagination controls, and managing errors during pagination. These factors can impact performance, user experience, and scalability, requiring careful design and error handling on both the server and client sides.\n Let\u0026rsquo;s implement a paginated HTTP GET request using a response handler:\npublic class UserSimpleHttpRequestHelper extends BaseHttpRequestHelper { public String getPaginatedUsers(Map\u0026lt;String, String\u0026gt; requestParameters) throws RequestProcessingException { try (CloseableHttpClient httpClient = HttpClients.createDefault()) { // Create request  HttpHost httpHost = HttpHost.create(\u0026#34;https://reqres.in\u0026#34;); List\u0026lt;NameValuePair\u0026gt; nameValuePairs = requestParameters.entrySet().stream() .map(entry -\u0026gt; new BasicNameValuePair(entry.getKey(), entry.getValue())) .map(entry -\u0026gt; (NameValuePair) entry) .toList(); URI uri = new URIBuilder(\u0026#34;/api/users/\u0026#34;).addParameters(nameValuePairs).build(); HttpGet httpGetRequest = new HttpGet(uri); // Create a response handler  BasicHttpClientResponseHandler handler = new BasicHttpClientResponseHandler(); String responseBody = httpClient.execute(httpHost, httpGetRequest, handler); return responseBody; } catch (Exception e) { throw new RequestProcessingException(\u0026#34;Failed to get paginated users.\u0026#34;, e); } } } The code defines the getPaginatedUsers() method to retrieve a list of users from an external API, specified by the request parameters map. The requestParameters are mapped into a list of NameValuePairs. 
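The effect of adding those pairs to the URI can be sketched in plain Java (a hypothetical helper, not the Apache URIBuilder):

```java
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: turn a parameter map into the query string that
// URIBuilder.addParameters() would append to the request path.
public class PagedUriSketch {
    static String withQuery(String path, Map<String, String> params) {
        if (params.isEmpty()) return path;
        String query = params.entrySet().stream()
            .map(e -> e.getKey() + "=" + e.getValue())
            .collect(Collectors.joining("&"));
        return path + "?" + query;
    }

    public static void main(String[] args) {
        System.out.println(withQuery("/api/users/", Map.of("page", "1")));
        // /api/users/?page=1
    }
}
```

A request for the next page only changes the parameter map (`page=2`), not the request-building code.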
Then we create an HttpGet instance, representing the GET request, and call HttpClient\u0026rsquo;s execute() method. It stores the response body returned by the server in the responseBody variable.\nHere is a test case to verify this functionality:\npublic class UserSimpleHttpRequestHelperTests extends BaseClassicExampleTests { private UserSimpleHttpRequestHelper userHttpRequestHelper = new UserSimpleHttpRequestHelper(); /** Execute get paginated request. */ @Test void executeGetPaginatedRequest() { try { // prepare  Map\u0026lt;String, String\u0026gt; params = Map.of(\u0026#34;page\u0026#34;, \u0026#34;1\u0026#34;); // execute  String responseBody = userHttpRequestHelper.getPaginatedUsers(params); // verify  assertThat(responseBody).isNotEmpty(); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } } } In the test method executeGetPaginatedRequest(), we populate the request parameter (page=1), execute an HTTP GET request to retrieve the first page of paginated users, and verify the response.\nExecuting an HTTP GET Request to Get a Specific Record Let\u0026rsquo;s execute an HTTP GET request to get a specific user record using a response handler:\npublic class UserSimpleHttpRequestHelper extends BaseHttpRequestHelper { /** Gets user for given user id. 
*/ public String getUser(long userId) throws RequestProcessingException { try (CloseableHttpClient httpClient = HttpClients.createDefault()) { // Create request  HttpHost httpHost = HttpHost.create(\u0026#34;https://reqres.in\u0026#34;); HttpGet httpGetRequest = new HttpGet(new URIBuilder(\u0026#34;/api/users/\u0026#34; + userId).build()); // Create a response handler  BasicHttpClientResponseHandler handler = new BasicHttpClientResponseHandler(); String responseBody = httpClient.execute(httpHost, httpGetRequest, handler); return responseBody; } catch (Exception e) { throw new RequestProcessingException( MessageFormat.format(\u0026#34;Failed to get user for ID: {0}\u0026#34;, userId), e); } } } In this example, the getUser() method retrieves a user by its ID. As in the getPaginatedUsers() example, we create an HttpGet request object, an HttpHost object, and a response handler. Then we call the execute() method on the client and obtain the response in string form.\nHere\u0026rsquo;s a test case that verifies retrieving a specific record:\n/** Execute get specific request. */ @Test void executeGetSpecificRequest() { try { // prepare  long userId = 2L; // execute  String existingUser = userHttpRequestHelper.getUser(userId); // verify  assertThat(existingUser).isNotEmpty(); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } } In this example, we call the getUser() method to get the user with a specific ID and then check the response.\nExecuting an HTTP HEAD Request to Get the Status of a Record The HEAD method in HTTP can request information about a document without retrieving the document itself. It is similar to GET, but it does not receive the response body. It\u0026rsquo;s used for caching, resource existence, modification checks, and link validation. 
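As an aside, the JDK's own java.net.http client builds a HEAD request in much the same spirit as Apache's HttpHead; constructing the request does not touch the network:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch: building (not sending) a HEAD request with the JDK's
// java.net.http API; the method name and empty body mirror HttpHead.
public class HeadRequestSketch {
    static HttpRequest headRequest(String url) {
        return HttpRequest.newBuilder(URI.create(url))
            .method("HEAD", HttpRequest.BodyPublishers.noBody())
            .build();
    }

    public static void main(String[] args) {
        HttpRequest req = headRequest("https://reqres.in/api/users/2");
        System.out.println(req.method() + " " + req.uri());
        // HEAD https://reqres.in/api/users/2
    }
}
```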
Faster than GET, it saves bandwidth by omitting response data, making it ideal for resource checks and link validation, optimizing network efficiency.\nHere is the code to execute HTTP HEAD request to get the status of a specific user record using a response handler:\npublic Integer getUserStatus(long userId) throws RequestProcessingException { try (CloseableHttpClient httpClient = HttpClients.createDefault()) { // Create request  HttpHost httpHost = HttpHost.create(\u0026#34;https://reqres.in\u0026#34;); URI uri = new URIBuilder(\u0026#34;/api/users/\u0026#34; + userId).build(); HttpHead httpHeadRequest = new HttpHead(uri); // Create a response handler, lambda to implement  // HttpClientResponseHandler::handleResponse(ClassicHttpResponse response)  HttpClientResponseHandler\u0026lt;Integer\u0026gt; handler = HttpResponse::getCode; Integer code = httpClient.execute(httpHost, httpHeadRequest, handler); log.info(\u0026#34;Got response status code: {}\u0026#34;, code); return code; } catch (Exception e) { throw new RequestProcessingException( MessageFormat.format(\u0026#34;Failed to get user for ID: {0}\u0026#34;, userId), e); } } In this example, we send a HEAD request to the user endpoint to retrieve the status code of an HTTP request without fetching the response body.\nTest for this functionality:\n/** Execute get specific request. */ @Test void executeUserStatus() { try { // prepare  long userId = 2L; // execute  Integer userStatus = userHttpRequestHelper.getUserStatus(userId); // verify  assertThat(userStatus).isEqualTo(HttpStatus.SC_OK); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } } This test method verifies the status returned by the HEAD method for a user. First, it prepares the user ID to be used in the request. Then, it executes the getUserStatus() method from the UserSimpleHttpRequestHelper class to fetch the status code for the specified user ID. 
Finally, it verifies that the obtained user status is equal to HttpStatus.SC_OK (200), indicating a successful request.\nExecuting an HTTP OPTIONS Request to Find out Request Methods Allowed by Server The HTTP OPTIONS method describes the communication options available for a target resource, such as an API endpoint. We can use OPTIONS to find out which HTTP methods the server supports.\nHere\u0026rsquo;s a command-line example to execute it:\ncurl https://reqres.in -X OPTIONS -i We can also find out the allowed methods for a specific URI path:\ncurl https://reqres.in/api/users/ -X OPTIONS -i We get a response from the server as below:\nHTTP/2 204 date: Sat, 24 Feb 2024 05:02:34 GMT report-to: { \u0026#34;group\u0026#34;: \u0026#34;heroku-nel\u0026#34;, \u0026#34;max_age\u0026#34;: 3600, \u0026#34;endpoints\u0026#34;: [{ \u0026#34;url\u0026#34;: \u0026#34;https://nel.heroku.com/reports ?ts=1708750954\u0026amp;sid=c4c9725f-1ab0-44d8-820f-430df2718e11 \u0026amp;s=Yy4ohRwVOHU%2F%2FK7CXkQCt4qraPmzmqEwLt50qhzv1jg%3D\u0026#34; } ] } reporting-endpoints: heroku-nel=https://nel.heroku.com/reports ?ts=1708750954\u0026amp;sid=c4c9725f-1ab0-44d8-820f-430df2718e11 \u0026amp;s=Yy4ohRwVOHU%2F%2FK7CXkQCt4qraPmzmqEwLt50qhzv1jg%3D nel: { \u0026#34;report_to\u0026#34;: \u0026#34;heroku-nel\u0026#34;, \u0026#34;max_age\u0026#34;: 3600, \u0026#34;success_fraction\u0026#34;: 0.005, \u0026#34;failure_fraction\u0026#34;: 0.05, \u0026#34;response_headers\u0026#34;: [\u0026#34;Via\u0026#34;] } x-powered-by: Express access-control-allow-origin: * access-control-allow-methods: GET,HEAD,PUT,PATCH,POST,DELETE vary: Access-Control-Request-Headers via: 1.1 vegur cf-cache-status: DYNAMIC server: cloudflare cf-ray: 85a52838ff1f2e32-BOM In this command output, the line access-control-allow-methods: GET,HEAD,PUT,PATCH,POST,DELETE tells us all HTTP methods allowed by the server.\nThe response headers will include the necessary information. 
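The comma-separated access-control-allow-methods value seen in the output above is easy to split into the individual allowed methods; a small sketch:

```java
import java.util.Arrays;
import java.util.List;

// Sketch: split an access-control-allow-methods (or Allow) header
// value into the individual HTTP methods the server permits.
public class AllowedMethodsSketch {
    static List<String> parse(String headerValue) {
        return Arrays.stream(headerValue.split(","))
            .map(String::trim)
            .filter(s -> !s.isEmpty())
            .toList();
    }

    public static void main(String[] args) {
        List<String> methods = parse("GET,HEAD,PUT,PATCH,POST,DELETE");
        System.out.println(methods.contains("DELETE")); // true
    }
}
```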
The Allow or access-control-allow-methods header indicates the HTTP methods supported for the requested resource.\nHTTP OPTIONS Facts We use the OPTION method to make a preflight request to the server. A preflight request is a request we send to the server to determine if the server allows the actual request. The server will respond to the preflight request with a list of the HTTP methods it allows. The browser will then send the actual request if the requested method is in the list. The server also includes a message that indicates the allowed origin, methods, and headers.\nWe need header Access-Control-Allow-Methods for cross-origin resource sharing (CORS). CORS is a security mechanism that prevents websites from accessing resources from other domains.\nThe Access-Control-Allow-Methods header tells the browser the list of allowed HTTP methods when accessing the resource.\n Here\u0026rsquo;s how we can send an OPTIONS request using HTTP client:\npublic Map\u0026lt;String, String\u0026gt; executeOptions() throws RequestProcessingException { try (CloseableHttpClient httpClient = HttpClients.createDefault()) { HttpHost httpHost = HttpHost.create(\u0026#34;https://reqres.in\u0026#34;); URI uri = new URIBuilder(\u0026#34;/api/users/\u0026#34;).build(); HttpOptions httpOptionsRequest = new HttpOptions(uri); // Create a response handler, lambda to implement  // HttpClientResponseHandler::handleResponse(ClassicHttpResponse response)  HttpClientResponseHandler\u0026lt;Map\u0026lt;String, String\u0026gt;\u0026gt; handler = response -\u0026gt; StreamSupport.stream( Spliterators.spliteratorUnknownSize( response.headerIterator(), Spliterator.ORDERED), false) .collect(Collectors.toMap(Header::getName, Header::getValue)); return httpClient.execute(httpHost, httpOptionsRequest, handler); } catch (Exception e) { throw new RequestProcessingException(\u0026#34;Failed to execute the request.\u0026#34;, e); } } In this example, we populate the HttpOptions request and call the 
HttpClient.execute() method. The handler processes the response from the server and returns the resulting map of headers to the caller.\nLet\u0026rsquo;s now test the OPTIONS request:\n@Test void executeOptions() { try { // execute  Map\u0026lt;String, String\u0026gt; headers = userHttpRequestHelper.executeOptions(); assertThat(headers.keySet()) .as(\u0026#34;Headers do not contain allow header\u0026#34;) .containsAnyOf(\u0026#34;Allow\u0026#34;, \u0026#34;Access-Control-Allow-Methods\u0026#34;); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } } The test calls executeOptions() to perform the OPTIONS request and retrieve the headers from the server. Then it verifies that the keys of the \u0026lsquo;headers\u0026rsquo; map contain at least one of the expected headers (\u0026lsquo;Allow\u0026rsquo; or \u0026lsquo;Access-Control-Allow-Methods\u0026rsquo;).\nExecuting an HTTP TRACE Request to Perform Diagnosis The HTTP TRACE method performs a message loop-back test along the path to the target resource, providing a useful debugging mechanism. However, it is advised not to use this method as it can open the gates to intruders.\nThe Vulnerability of TRACE As warned by OWASP in the documentation on Test HTTP Methods the TRACE method, or TRACK in Microsoft\u0026rsquo;s systems, makes the server repeat what it receives in a request. This caused a problem known as Cross-Site Tracing (XST) in 2003, allowing access to cookies marked with the HttpOnly flag. Browsers and plugins have blocked TRACE for years, so this problem is no longer a risk. However, if a server still allows TRACE, it might indicate security weaknesses.\n HTTP Methods Used to Update Records The CRUD methods to update records are PUT and PATCH.\nExecuting an HTTP PUT Request to Update an Existing Record We use HTTP PUT to update an existing user. 
We need to provide the details needed to update the user.\nImplementation for updating an existing user:\npublic String updateUser( long userId, String firstName, String lastName, String email, String avatar ) throws RequestProcessingException { try (CloseableHttpClient httpClient = HttpClients.createDefault()) { // Update request  List\u0026lt;NameValuePair\u0026gt; formParams = new ArrayList\u0026lt;NameValuePair\u0026gt;(); formParams.add(new BasicNameValuePair(\u0026#34;first_name\u0026#34;, firstName)); formParams.add(new BasicNameValuePair(\u0026#34;last_name\u0026#34;, lastName)); formParams.add(new BasicNameValuePair(\u0026#34;email\u0026#34;, email)); formParams.add(new BasicNameValuePair(\u0026#34;avatar\u0026#34;, avatar)); try (UrlEncodedFormEntity entity = new UrlEncodedFormEntity(formParams, StandardCharsets.UTF_8)) { HttpHost httpHost = HttpHost.create(\u0026#34;https://reqres.in\u0026#34;); URI uri = new URIBuilder(\u0026#34;/api/users/\u0026#34; + userId).build(); HttpPut httpPutRequest = new HttpPut(uri); httpPutRequest.setEntity(entity); // Create a response handler  BasicHttpClientResponseHandler handler = new BasicHttpClientResponseHandler(); String responseBody = httpClient.execute(httpHost, httpPutRequest, handler); return responseBody; } } catch (Exception e) { throw new RequestProcessingException(\u0026#34;Failed to update user.\u0026#34;, e); } } The example above shows how to update a user\u0026rsquo;s information via an HTTP PUT request. The method constructs the update request by creating a list of NameValuePair objects containing the user\u0026rsquo;s updated details (first name, last name, email, and avatar). Then we send a request to the specified user\u0026rsquo;s endpoint (/api/users/{userId}). 
The response body from the server, indicating the success or failure of the update operation, is captured and returned as a string.\nLet\u0026rsquo;s test the update user workflow:\n@Test void executePutRequest() { try { // prepare  int userId = 2; // execute  String updatedUser = userHttpRequestHelper.updateUser( userId, \u0026#34;UpdatedDummyFirst\u0026#34;, \u0026#34;UpdatedDummyLast\u0026#34;, \u0026#34;UpdatedDummyEmail@example.com\u0026#34;, \u0026#34;UpdatedDummyAvatar\u0026#34;); // verify  assertThat(updatedUser).isNotEmpty(); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } } In this example, we execute an HTTP PUT request to update a user\u0026rsquo;s information. The method first prepares the necessary parameters for the update operation, including the user\u0026rsquo;s ID and the updated details (first name, last name, email, and avatar). It then invokes the updateUser() method of the userHttpRequestHelper object, passing these parameters. The method captures the response from the server, indicating the success or failure of the update operation, and asserts that the response body is not empty to verify the update\u0026rsquo;s success.\nExecuting an HTTP PATCH Request to Partially Update an Existing Record We use HTTP PATCH to update an existing user partially. 
We need to provide the details needed to update the user.\nLogic to update an existing user partially:\npublic String patchUser(long userId, String firstName, String lastName) throws RequestProcessingException { try (CloseableHttpClient httpClient = HttpClients.createDefault()) { // Update request  List\u0026lt;NameValuePair\u0026gt; formParams = new ArrayList\u0026lt;NameValuePair\u0026gt;(); formParams.add(new BasicNameValuePair(\u0026#34;first_name\u0026#34;, firstName)); formParams.add(new BasicNameValuePair(\u0026#34;last_name\u0026#34;, lastName)); try (UrlEncodedFormEntity entity = new UrlEncodedFormEntity(formParams, StandardCharsets.UTF_8)) { HttpHost httpHost = HttpHost.create(\u0026#34;https://reqres.in\u0026#34;); URI uri = new URIBuilder(\u0026#34;/api/users/\u0026#34; + userId).build(); HttpPatch httpPatchRequest = new HttpPatch(uri); httpPatchRequest.setEntity(entity); // Create a response handler  BasicHttpClientResponseHandler handler = new BasicHttpClientResponseHandler(); String responseBody = httpClient.execute(httpHost, httpPatchRequest, handler); return responseBody; } } catch (Exception e) { throw new RequestProcessingException(\u0026#34;Failed to patch user.\u0026#34;, e); } } The example above shows how to update a user\u0026rsquo;s information via an HTTP PATCH request. The method constructs the patch request by creating a list of NameValuePair objects containing a few of the user\u0026rsquo;s updated details (first name and last name). Then we send the request to the specified user\u0026rsquo;s endpoint (/api/users/{userId}). 
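Conceptually, the server applies a PATCH by merging only the supplied fields into the stored record, leaving the rest untouched; a minimal sketch of that merge (the server-side behavior is assumed here, not taken from the Reqres implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: server-side view of a PATCH - only the fields present in
// the patch overwrite the stored record; all other fields are kept.
public class PatchMergeSketch {
    static Map<String, String> merge(Map<String, String> stored, Map<String, String> patch) {
        Map<String, String> merged = new HashMap<>(stored);
        merged.putAll(patch);
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> stored = Map.of(
            "first_name", "Janet", "last_name", "Weaver", "email", "janet@example.com");
        Map<String, String> patched = merge(stored, Map.of("first_name", "UpdatedDummyFirst"));
        System.out.println(patched.get("first_name") + " " + patched.get("email"));
        // UpdatedDummyFirst janet@example.com
    }
}
```

This is also the practical difference from PUT, which replaces the whole record and therefore requires every field in the request.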
The response body from the server, indicating the success or failure of the update operation, is captured and returned as a string.\nTest to verify patch request:\n@Test void executePatchRequest() { try { // prepare  int userId = 2; // execute  String patchedUser = userHttpRequestHelper.patchUser( userId, \u0026#34;UpdatedDummyFirst\u0026#34;, \u0026#34;UpdatedDummyLast\u0026#34;); // verify  assertThat(patchedUser).isNotEmpty(); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } } In this example, we execute an HTTP PATCH request to partially update a user\u0026rsquo;s information. It first prepares the necessary parameters for the update operation, including the user\u0026rsquo;s ID and some of the user\u0026rsquo;s details (first name and last name). It then invokes the patchUser(), passing these parameters. The method captures the response from the server, indicating the success or failure of the update operation, and asserts that the response body is not empty to verify the patch\u0026rsquo;s success.\nHTTP Methods Used to Delete Records There\u0026rsquo;s one CRUD method to delete a record: DELETE.\nExecuting an HTTP DELETE Request to Delete an Existing Record We use HTTP DELETE to delete an existing user. 
We need the user ID to delete the user.\nLet\u0026rsquo;s implement delete user logic:\npublic void deleteUser(long userId) throws RequestProcessingException { try (CloseableHttpClient httpClient = HttpClients.createDefault()) { HttpHost httpHost = HttpHost.create(\u0026#34;https://reqres.in\u0026#34;); URI uri = new URIBuilder(\u0026#34;/api/users/\u0026#34; + userId).build(); HttpDelete httpDeleteRequest = new HttpDelete(uri); // Create a response handler  BasicHttpClientResponseHandler handler = new BasicHttpClientResponseHandler(); String responseBody = httpClient.execute(httpHost, httpDeleteRequest, handler); } catch (Exception e) { throw new RequestProcessingException(\u0026#34;Failed to delete user.\u0026#34;, e); } } The example demonstrates how to implement an HTTP DELETE request to delete an existing user. It constructs the URI for the delete request and calls execute(), passing the HttpDelete request and a response handler. The captured response body carries no meaningful content for a successful delete.\nTest case verifying delete functionality:\n@Test void executeDeleteRequest() { try { // prepare  int userId = 2; // execute  userHttpRequestHelper.deleteUser(userId); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } } The provided test aims to verify the functionality of the deleteUser() method. It prepares by specifying the user ID of the user to be deleted, in this case, userId = 2. It then executes the deleteUser() method, passing the userId as an argument.\nUsing User-Defined Type in Request Processing So far we have used built-in Java types like String and Integer in requests and responses. But we are not limited to using those built-in types.\nUser-Defined Request and Response We can use Plain Old Java Objects (POJOs) in requests sent using HttpClient execute(). However, we typically do not directly use a POJO as the request entity. 
Instead, we convert the POJO into a format that can be sent over HTTP, such as JSON or XML, and then include that data in the request entity.\nThe HttpEntity interface represents an entity in an HTTP message, but it typically encapsulates raw data, such as text, binary content, or form parameters. While we cannot directly use a POJO as an HttpEntity, we can serialize the POJO into a suitable format and then create an HttpEntity instance from that serialized data.\nFor example, if we want to send a POJO as JSON in an HTTP request, we would first serialize the POJO into a JSON string, and then create a StringEntity instance with that JSON string as the content.\nCustom HTTP Response Handler The same idea applies in the other direction: here\u0026rsquo;s an example using Jackson\u0026rsquo;s ObjectMapper to deserialize a JSON response into a POJO via a custom response handler:\n/** Generic HttpClientResponseHandler */ public class DataObjectResponseHandler\u0026lt;T\u0026gt; extends AbstractHttpClientResponseHandler\u0026lt;T\u0026gt; { private ObjectMapper objectMapper = new ObjectMapper(); @NonNull private Class\u0026lt;T\u0026gt; realType; public DataObjectResponseHandler(@NonNull Class\u0026lt;T\u0026gt; realType) { this.realType = realType; } @Override public T handleEntity(HttpEntity httpEntity) throws IOException { try { return objectMapper.readValue(EntityUtils.toString(httpEntity), realType); } catch (ParseException e) { throw new ClientProtocolException(e); } } } // Get user using custom HttpClientResponseHandler public class UserTypeHttpRequestHelper extends BaseHttpRequestHelper { public User getUser(long userId) throws RequestProcessingException { try (CloseableHttpClient httpClient = HttpClients.createDefault()) { // Create request  HttpHost httpHost = userRequestProcessingUtils.getApiHost(); URI uri = userRequestProcessingUtils.prepareUsersApiUri(userId); HttpGet httpGetRequest = new HttpGet(uri); // Create a response handler  HttpClientResponseHandler\u0026lt;User\u0026gt; handler = new 
DataObjectResponseHandler\u0026lt;\u0026gt;(User.class); return httpClient.execute(httpHost, httpGetRequest, handler); } catch (Exception e) { throw new RequestProcessingException( MessageFormat.format(\u0026#34;Failed to get user for ID: {0}\u0026#34;, userId), e); } } } The DataObjectResponseHandler is a generic HTTP response handler that deserializes JSON into a specified POJO type using the Jackson ObjectMapper. It converts the HTTP response entity to a JSON string using EntityUtils.toString(), then deserializes it into a POJO of the given type. This design reduces code duplication, enhancing reusability and maintainability.\nThe UserTypeHttpRequestHelper class has a method getUser() that retrieves a user from the server using an HttpGet request. The custom DataObjectResponseHandler processes the response, deserializing the server\u0026rsquo;s JSON into a User object. We catch any errors during execution and rethrow them as a RequestProcessingException.\nTest case to get a user:\n@Test void executeGetUser() { try { // prepare  long userId = 2L; // execute  User existingUser = userHttpRequestHelper.getUser(userId); // verify  ThrowingConsumer\u0026lt;User\u0026gt; responseRequirements = user -\u0026gt; { assertThat(user).as(\u0026#34;Fetched user cannot be null.\u0026#34;).isNotNull(); assertThat(user.getId()).as(\u0026#34;ID should match the requested user ID.\u0026#34;) .isEqualTo(userId); assertThat(user.getFirstName()).as(\u0026#34;First name cannot be empty.\u0026#34;) .isNotEmpty(); assertThat(user.getLastName()).as(\u0026#34;Last name cannot be empty.\u0026#34;) .isNotEmpty(); assertThat(user.getAvatar()).as(\u0026#34;Avatar cannot be null.\u0026#34;).isNotNull(); }; assertThat(existingUser).satisfies(responseRequirements); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } } It prepares by defining the userId variable, executes the method using the userHttpRequestHelper, and verifies the response received from the 
server. If exceptions occur, the test fails with an error message.\nChoosing User-Defined Type Vs Built-in Type Typed classes offer advantages such as enhanced type safety, allowing for better code readability and preventing type-related errors. They also facilitate better code organization and maintainability by encapsulating related functionality within specific classes. However, they may introduce complexity and require additional effort for implementation. In contrast, built-in types like String offer simplicity and ease of use but may lack the specific functionality and type safety provided by custom-typed classes. The choice between typed classes and built-in types depends on factors such as project requirements, complexity, and maintainability concerns.\n Conclusion In this article we got familiar with the classic APIs of Apache HttpClient, and we explored a multitude of essential functionalities vital for interacting with web servers. From fetching paginated records to pinpointing specific data, and from determining server statuses to manipulating records, we learned a comprehensive array of HTTP methods. Understanding these capabilities equips us with the tools needed to navigate and interact with web resources efficiently and effectively. With this knowledge, our applications can communicate seamlessly with web servers, ensuring smooth data exchanges and seamless user experiences.\n","date":"May 29, 2024","image":"https://reflectoring.io/images/stock/0077-request-response-1200x628-branded_hub0acddd9d3251f270c0c84786c3942f9_709974_650x0_resize_q90_box.jpg","permalink":"/apache-http-client-classic-apis/","title":"Classic APIs Offered by Apache HttpClient"},{"categories":["Java"],"contents":"In this article series, we\u0026rsquo;re going to explore Apache HTTPClient APIs. We\u0026rsquo;ll get familiar with the different ways Apache HttpClient enables developers to send and receive data over the internet. 
From simple GET requests to complex multipart POST requests, we\u0026rsquo;ll cover it all with real-world examples.\nSo get ready to learn web communication with Apache HttpClient!\nThe \u0026ldquo;Create an HTTP Client with Apache HttpClient\u0026rdquo; Series This article is the first part of a series:\n Introduction to Apache HttpClient Apache HttpClient Configuration Classic APIs Offered by Apache HttpClient Async APIs Offered by Apache HttpClient Reactive APIs Offered by Apache HttpClient  Why Should We Care About HTTP Clients? Have you ever wondered how your favorite apps seamlessly fetch data from the internet or communicate with servers behind the scenes? That\u0026rsquo;s where HTTP clients come into play — they\u0026rsquo;re the silent heroes of web communication, doing the heavy lifting, so you don\u0026rsquo;t have to.\nImagine you\u0026rsquo;re using a weather app to check the forecast for the day. Behind the scenes, the app sends an HTTP request to a weather service\u0026rsquo;s server, asking for the latest weather data. The server processes the request, gathers the relevant information, and sends back an HTTP response with the forecast. All of this happens in the blink of an eye, thanks to the magic of HTTP clients.\nHTTP clients are like digital messengers, facilitating communication between client software and web servers across the internet. They handle all the details of making a connection to the server, sending HTTP requests, and processing responses, so you can focus on building great software without getting bogged down in the complexities of web communication.\nSo why should you care about HTTP clients? Well, imagine if every time you wanted to fetch data from a web server or interact with a web service, you had to manually craft and send HTTP requests, then parse and handle the responses — it would be a nightmare! 
HTTP clients automate all of that for you, making it easy to send and receive data over the web with just a few lines of code.\nWhen it comes to developing a mobile app, a web service, or anything in between, HTTP clients play a crucial role in facilitating interaction with remote resources on the internet. Therefore, it is important to acknowledge their significance when building software that requires web communication.\nExamples of HTTP Clients There are many Java HTTP clients available. Check this article on Comparison of Java HTTP Clients for more details.\n Brief Overview of the Apache HttpClient Apache HttpClient is a robust Java library popular for its handling of HTTP requests and responses. Its open-source nature and adherence to modern HTTP standards contribute to its popularity among developers.\nKey features include support for various authentication mechanisms and connection pooling, enhancing performance by reusing connections. It also facilitates request and response interception, allowing for easy modification or inspection of data.\nNotably, Apache HttpClient is known for its reliability and resilience, making it ideal for critical applications. Its extensive functionality, including support for multiple HTTP methods and advanced handling capabilities, caters to diverse needs in the HTTP ecosystem.\nThe library\u0026rsquo;s flexibility and extensibility enable customization to specific requirements, while its supportive community ensures continuous development and maintenance. With a commitment to backward compatibility, it facilitates seamless upgrades, ensuring long-term applicability and ease of use. Overall, Apache HttpClient stands as a mature and reliable choice for Java developers handling HTTP interactions.\nGetting Familiar With Useful Terms of the Apache HttpClient In the domain of Apache HttpClient, several terms are essential for comprehending the functionality of this robust tool. At its core lies the HttpClient. 
It comes in two versions — the classic HttpClient and the async HttpAsyncClient. CloseableHttpClient is an abstract class implementing the HttpClient interface. The library provides MinimalHttpClient that extends it. The HttpClient is a vital component that manages connections to HTTP servers. Think of it as the communication manager, ensuring seamless and secure data exchanges between your application and web resources.\nCloseableHttpClient provides full control over resources and ensures proper closure of connections after use. It supports connection pooling and resource management, making it suitable for long-lived applications.\nMinimalHttpClient is a minimal implementation of CloseableHttpClient. Apache optimizes this client for HTTP/1.1 message transport and does not support advanced HTTP protocol functionality such as request execution via a proxy, state management, authentication, and request redirects.\nNow let\u0026rsquo;s check the async client. HttpAsyncClient is an asynchronous HTTP client in Apache HttpComponents, designed for non-blocking I/O operations, making it suitable for high-performance, scalable applications with many concurrent requests.\nCloseableHttpAsyncClient is an abstract class. It implements HttpAsyncClient, providing a convenient way to manage the life cycle of the asynchronous HTTP client, allowing for graceful shutdown.\nMinimalHttpAsyncClient is a minimal implementation of CloseableHttpAsyncClient. Apache optimizes this client for HTTP/1.1 and HTTP/2 message transport and does not support advanced HTTP protocol functionality such as request execution via a proxy, state management, authentication, and request redirects.\nAs your application interacts with remote resources on the internet, it encounters HttpResponse, a capsule of information that carries the outcome of each interaction. 
This response conveys the server\u0026rsquo;s message, whether it signifies success, error, or redirection.\nHttpResponse comes with its counterpart, CloseableHttpResponse. It not only conveys the server\u0026rsquo;s response but also gracefully closes connections after use, preventing resource leaks and enhancing performance. Isn\u0026rsquo;t that a nice-to-have feature?\nThen we also have Headers, tiny snippets of metadata that accompany every HTTP request and response. These headers contain valuable details like content type, encoding, and authentication tokens, facilitating the exchange of data between client and server.\nWe use HttpHost to encapsulate the server\u0026rsquo;s hostname and port number, acting as a navigational aid for our HTTP requests.\nImplementing web interactions would be incomplete without encountering HttpEntity, the carrier that transports data to and from servers. Whether it\u0026rsquo;s text, binary, or streaming content, HttpEntity offers a unified interface for managing data payloads effortlessly.\nWe would come across a variety of HTTP methods, each serving a distinct purpose. From HttpGet for retrieving data to HttpPost for creating new resources, and HttpPut for updating existing ones, these methods empower us to engage with web resources effectively.\nIn upcoming articles in this series, we\u0026rsquo;re going to learn how to implement our web interactions using these terms.\nConclusion Apache HttpClient simplifies HTTP communication in Java applications. With intuitive APIs, it enables developers to perform various HTTP operations, including GET, POST, PUT, DELETE, and more. Offering flexibility and robustness, it facilitates seamless integration with web services, making it ideal for building web applications, RESTful APIs, and microservices. Whether fetching data from external APIs or interacting with web resources, Apache HttpClient provides a reliable solution for handling HTTP requests and responses efficiently. 
Its extensive features, along with easy-to-use interfaces, make it a preferred choice for developers seeking a powerful and versatile HTTP client library in their Java projects.\nApache HttpClient offers classic (synchronous or blocking), asynchronous and reactive APIs. In the upcoming articles of this series, we will learn about these APIs.\n","date":"May 29, 2024","image":"https://reflectoring.io/images/stock/0063-interface-1200x628-branded_hu8c3a5b7a897a90fddea1af1e185fffb6_93041_650x0_resize_q90_box.jpg","permalink":"/create-a-http-client-with-apache-http-client/","title":"Create a HTTP Client with Apache HttpClient"},{"categories":["Java"],"contents":"In this article, we are going to learn about reactive APIs offered by Apache HttpClient APIs. We are going to explore how to use reactive, full-duplex HTTP/1.1 message exchange using RxJava and Apache HttpClient. So get ready to learn to implement HTTP interactions with Apache HttpClient!\nThe \u0026ldquo;Create an HTTP Client with Apache HttpClient\u0026rdquo; Series This article is the fifth part of a series:\n Introduction to Apache HttpClient Apache HttpClient Configuration Classic APIs Offered by Apache HttpClient Async APIs Offered by Apache HttpClient Reactive APIs Offered by Apache HttpClient   Example Code This article is accompanied by a working code example on GitHub. Let\u0026rsquo;s now learn how to use Apache HttpClient for web communication. We have grouped the examples under the following categories of APIs: classic, async, and reactive. In this article we will learn about the reactive APIs offered by Apache HttpClient.\nReqres Fake Data CRUD API We are going to use Reqres API Server to test different HTTP methods. It is a free online API that can be used for testing and prototyping. It provides a variety of endpoints that can be used to test different HTTP methods. 
The Reqres API is a good choice for testing CORS because it supports all the HTTP methods that are allowed by CORS.\n HttpClient (Reactive APIs) In this section of examples, we are going to learn how to use HttpAsyncClient in combination with RxJava for sending reactive, full-duplex HTTP/1.1 message exchange.\nHTTP and CRUD Operations CRUD operations refer to Create, Read, Update, and Delete actions performed on data. In the context of HTTP endpoints for a /users resource:\n Create: Use HTTP POST to add a new user. Example URL: POST /users Read: Use HTTP GET to retrieve user data. Example URL: GET /users/{userId} for a specific user or GET /users?page=1 for a list of users with pagination. Update: Use HTTP PUT or PATCH to modify user data. Example URL: PUT /users/{userId} Delete: Use HTTP DELETE to remove a user. Example URL: DELETE /users/{userId}   Basic Reactive HTTP Request / Response Exchange Let\u0026rsquo;s look at an example of how to send a simple HTTP reactive request.\nReactive Java Programming and RxJava Reactive Java Programming, also known as ReactiveX or Reactive Extensions, is an approach to programming that emphasizes asynchronous and event-driven processing. It enables developers to write code that reacts to changes or events in the system, rather than relying on traditional imperative programming paradigms.\nRxJava, a library for reactive programming in Java, implements the principles of ReactiveX. It provides a powerful toolkit for composing asynchronous and event-based programs using observable sequences. These sequences represent streams of data or events that can be manipulated and transformed using a wide range of operators.\nRxJava allows developers to write concise and expressive code by leveraging operators like map, filter, and reduce to perform common data transformations. 
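To make the operator style concrete, here is a minimal, self-contained sketch (assuming RxJava 3 on the classpath; no HTTP involved) that composes filter, map, and reduce on an Observable:

```java
import io.reactivex.rxjava3.core.Observable;
import java.util.List;

public class RxOperatorsSketch {

    // Keep even numbers, scale them, and sum the results.
    static int transform(List<Integer> input) {
        return Observable.fromIterable(input)
                .filter(n -> n % 2 == 0)  // keeps 2 and 4
                .map(n -> n * 10)         // emits 20, 40
                .reduce(0, Integer::sum)  // folds to 60
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(transform(List.of(1, 2, 3, 4, 5))); // prints 60
    }
}
```

The same composition style carries over directly to the HTTP examples later in the article, where the stream elements are response chunks rather than numbers.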
It also provides features for error handling, backpressure handling, and concurrency control, making it suitable for building responsive and resilient applications.\n Project Setup We need to set up following Maven dependencies:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.httpcomponents.core5\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;httpcore5-reactive\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.2.4\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.reactivex.rxjava3\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;rxjava\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;3.1.8\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Implementing the Reactive Request Processing In the following example we\u0026rsquo;ll implement a helper class that has methods to start and stop the async client and methods to execute HTTP requests:\npublic class UserAsyncHttpRequestHelper extends BaseHttpRequestHelper { private MinimalHttpAsyncClient minimalHttp1Client; private MinimalHttpAsyncClient minimalHttp2Client; // methods to start and stop the http clients  public User createUserWithReactiveProcessing( MinimalHttpAsyncClient minimalHttpClient, String userName, String userJob, String scheme, String hostname) throws RequestProcessingException { try { // Prepare request payload  HttpHost httpHost = new HttpHost(scheme, hostname); URI uri = new URIBuilder(httpHost.toURI() + \u0026#34;/api/users/\u0026#34;).build(); String payloadStr = preparePayload(userName, userJob); ReactiveResponseConsumer consumer = new ReactiveResponseConsumer(); // execute the request  Future\u0026lt;Void\u0026gt; requestFuture = executeRequest(minimalHttpClient, consumer, uri, payloadStr); // Print headers  Message\u0026lt;HttpResponse, Publisher\u0026lt;ByteBuffer\u0026gt;\u0026gt; streamingResponse = consumer.getResponseFuture().get(); printHeaders(streamingResponse); // Prepare result  
return prepareResult(streamingResponse, requestFuture); } catch (Exception e) { String errorMessage = \u0026#34;Failed to create user. Error: \u0026#34; + e.getMessage(); throw new RequestProcessingException(errorMessage, e); } } private String preparePayload(String userName, String userJob) throws JsonProcessingException { Map\u0026lt;String, String\u0026gt; payload = new HashMap\u0026lt;\u0026gt;(); payload.put(\u0026#34;name\u0026#34;, userName); payload.put(\u0026#34;job\u0026#34;, userJob); return OBJECT_MAPPER.writeValueAsString(payload); } private Future\u0026lt;Void\u0026gt; executeRequest( MinimalHttpAsyncClient minimalHttpClient, ReactiveResponseConsumer consumer, URI uri, String payloadStr) { byte[] bs = payloadStr.getBytes(StandardCharsets.UTF_8); ReactiveEntityProducer reactiveEntityProducer = new ReactiveEntityProducer(Flowable.just(ByteBuffer.wrap(bs)), bs.length, ContentType.TEXT_PLAIN, null); return minimalHttpClient.execute( new BasicRequestProducer(\u0026#34;POST\u0026#34;, uri, reactiveEntityProducer), consumer, null); } private void printHeaders( Message\u0026lt;HttpResponse, Publisher\u0026lt;ByteBuffer\u0026gt;\u0026gt; streamingResponse) { log.debug(\u0026#34;Head: {}\u0026#34;, streamingResponse.getHead()); for (Header header : streamingResponse.getHead().getHeaders()) { log.debug(\u0026#34;Header : {}\u0026#34;, header); } } private User prepareResult( Message\u0026lt;HttpResponse, Publisher\u0026lt;ByteBuffer\u0026gt;\u0026gt; streamingResponse, Future\u0026lt;Void\u0026gt; requestFuture) throws InterruptedException, ExecutionException, TimeoutException, JsonProcessingException { StringBuilder result = new StringBuilder(); Observable.fromPublisher(streamingResponse.getBody()) .map( byteBuffer -\u0026gt; { byte[] bytes = new byte[byteBuffer.remaining()]; byteBuffer.get(bytes); return new String(bytes); }) .materialize() .forEach( stringNotification -\u0026gt; { String value = stringNotification.getValue(); if (value != null) { 
result.append(value); } }); requestFuture.get(1, TimeUnit.MINUTES); return OBJECT_MAPPER.readerFor(User.class).readValue(result.toString()); } } This code creates a user using reactive processing with Apache HttpClient\u0026rsquo;s minimal reactive component and RxJava. It constructs an HTTP POST request with user data and sends it asynchronously. Upon receiving the response, it reads the response body as a stream of bytes and converts it into a string. Then, it deserializes the string into a User object using Jackson\u0026rsquo;s ObjectMapper.\nThe process starts by constructing the request payload and setting up the request entity. It then executes the HTTP request asynchronously and processes the response using a reactive approach. It converts the response body into a stream of byte buffers. Then it transforms the buffers into a stream of strings using RxJava. Finally, it collects the string stream and uses the result to deserialize the user object.\nIf there are any exceptions during this process, it catches such exceptions and wraps those in a RequestProcessingException. Overall, this approach leverages reactive programming to handle HTTP requests and responses asynchronously, providing better scalability and responsiveness.\nThe code sample demonstrates how to use notable classes and methods from Apache reactive APIs:\nThe Reactive Streams Specification is a standard for processing asynchronous data using streaming with non-blocking backpressure. ReactiveEntityProducer is an AsyncEntityProducer that subscribes to a Publisher instance, as defined by the Reactive Streams specification. It is responsible for producing HTTP request entity content reactively. It accepts a Flowable\u0026lt;ByteBuffer\u0026gt; stream of data chunks and converts it into an HTTP request entity. 
In the code sample, it is used to create the request entity from the payload data (payloadStr).\nBasicRequestProducer is a basic implementation of AsyncRequestProducer that produces one fixed request and relies on an AsyncEntityProducer to generate a request entity stream. It constructs an HTTP request with the specified method, URI, and request entity. In the code, it creates a POST request with the URI obtained from the provided scheme and hostname.\nReactiveResponseConsumer is an AsyncResponseConsumer that publishes the response body through a Publisher, as defined by the Reactive Streams specification. The response represents a Message consisting of an HttpResponse representing the headers and a Publisher representing the response body as an asynchronous stream of ByteBuffer instances. It is designed to consume the HTTP response asynchronously. It processes the response stream reactively and provides access to the response body as a Publisher\u0026lt;ByteBuffer\u0026gt;. In the code, it is used to consume the HTTP response asynchronously.\nMessage represents a generic message consisting of both a head (metadata) and a body (payload). In the code sample, it\u0026rsquo;s used as the return type of the getResponseFuture() method of ReactiveResponseConsumer, providing access to the HTTP response\u0026rsquo;s head and body.\nPublisher is a provider of a potentially unbounded number of sequenced elements, publishing them according to the demand received from its Subscriber(s). A Publisher can serve multiple Subscribers subscribed through subscribe(Subscriber) dynamically at various points in time. 
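A Publisher can be bridged into RxJava types. Here is a minimal sketch (assuming RxJava 3, whose Flowable implements the Reactive Streams Publisher interface; no HTTP is involved) of the same fold that prepareResult() performs on the response-body Publisher:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.core.Observable;
import org.reactivestreams.Publisher;

public class PublisherBridgeSketch {

    // Concatenate the items emitted by any Publisher, similar to how
    // prepareResult() folds response chunks into one string.
    static String join(Publisher<String> publisher) {
        return Observable.fromPublisher(publisher)
                .reduce("", (acc, chunk) -> acc + chunk)
                .blockingGet();
    }

    public static void main(String[] args) {
        // Flowable implements Publisher, so it can stand in for the
        // response-body Publisher from ReactiveResponseConsumer.
        Publisher<String> body = Flowable.just("{\"id\":", "\"42\"}");
        System.out.println(join(body)); // prints {"id":"42"}
    }
}
```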
The Publisher is used to publish data asynchronously, and in the code, it represents the body of the HTTP response, providing a stream of byte buffers.\nRxJava Classes Now let\u0026rsquo;s get familiar with the noteworthy RxJava classes.\nThe Observable class is the non-backpressured, optionally multivalued base reactive class that offers factory methods, intermediate operators and the ability to consume synchronous and/or asynchronous reactive data flows. Its fromPublisher() method converts an arbitrary reactive stream Publisher into an Observable. Its map() method returns an Observable that applies a specified function to each item emitted by the current Observable and emits the results of these function applications. Furthermore, the materialize() method returns an Observable that wraps all the emissions and notifications from the current Observable in Notification objects marked with their original types.\nThe Flowable class, which implements the Reactive Streams Publisher pattern, offers factory methods, intermediate operators, and the ability to consume reactive data flows. Reactive streams operate with Publishers, which Flowable implements. 
Many operators therefore accept general Publishers directly and allow direct interoperation with other reactive streams implementations.\nTesting the Reactive Request Processing Now let\u0026rsquo;s test out the reactive functionality:\n@Test void createUserWithReactiveProcessing() { MinimalHttpAsyncClient minimalHttpAsyncClient = null; try { minimalHttpAsyncClient = userHttpRequestHelper.startMinimalHttp1AsyncClient(); User responseBody = userHttpRequestHelper.createUserWithReactiveProcessing( minimalHttpAsyncClient, \u0026#34;RxMan\u0026#34;, \u0026#34;Manager\u0026#34;, \u0026#34;https\u0026#34;, \u0026#34;reqres.in\u0026#34;); // verify  assertThat(responseBody).extracting(\u0026#34;id\u0026#34;, \u0026#34;createdAt\u0026#34;).isNotNull(); } catch (Exception e) { Assertions.fail(\u0026#34;Failed to execute HTTP request.\u0026#34;, e); } finally { userHttpRequestHelper.stopMinimalHttpAsyncClient(minimalHttpAsyncClient); } } This test validates the functionality of creating a user with reactive processing using the Apache HttpClient.\nIt starts by declaring a MinimalHttpAsyncClient reference and obtaining a started client from startMinimalHttp1AsyncClient(). Then, it attempts to create a user with the specified name and job role using reactive processing through the createUserWithReactiveProcessing() method of the userHttpRequestHelper.\nAfter executing the request, it verifies the response by asserting that the response body contains non-null values for the user\u0026rsquo;s ID and creation timestamp.\nIf any exception occurs during the execution of the test, it fails with an appropriate error message. Finally, it ensures that the MinimalHttpAsyncClient is stopped regardless of the test outcome.\nComparing Async and Reactive APIs Finally, let\u0026rsquo;s compare the reactive APIs with the async APIs and understand when to use each.\nApache HttpClient provides two powerful paradigms for handling HTTP requests: Async APIs and Reactive APIs. 
Both styles offer non-blocking operations, but they differ in their design, usage patterns, and underlying concepts. Let\u0026rsquo;s compare these two approaches.\nAsync APIs The Async APIs allow us to send and receive HTTP requests asynchronously. Apache built them on top of Java\u0026rsquo;s Future and CompletableFuture classes. We use them to execute HTTP requests concurrently without blocking the main thread.\nAsync APIs have the following key features. First, they are callback-based. They use callbacks to handle responses once they are available. It is easier to integrate them into existing codebases that are already using Future and CompletableFuture. Furthermore, they allow more control over individual request handling, such as custom timeout settings and retry logic.\nFor example, we would use them to execute multiple HTTP requests concurrently to fetch data from different services and aggregate the results.\nReactive APIs The Reactive APIs follow the principles of reactive programming. They implement the Reactive Streams specification, typically involving frameworks like RxJava or Reactor. They are ideal for applications that need to handle large volumes of data streams or require high responsiveness and scalability.\nReactive APIs have the following key features. They are event-driven. They use an event-driven model to process HTTP responses as they arrive. Furthermore, they support backpressure handling. That in turn allows consumers to process data at their own pace without being overwhelmed. Last but not least, they offer composability. 
Composing allows for more complex data processing pipelines using reactive operators (e.g., map, flatMap).\nFor example, we would use a reactive approach to build a real-time data processing application that continuously receives and processes data from multiple sources.\nComparison\n| Aspect | Async APIs | Reactive APIs |\n| --- | --- | --- |\n| Programming Model | Future-based, callback-driven | Reactive Streams, event-driven |\n| Concurrency | Easy to manage with CompletableFuture | Inherent support for handling asynchronous data streams |\n| Scalability | Suitable for moderate concurrency | Highly scalable, suitable for high-throughput scenarios |\n| Backpressure | Not inherently supported | Built-in backpressure support |\n| Integration | Seamless with existing CompletableFuture codebases | Ideal for applications using reactive frameworks |\n| Complexity | Simpler for straightforward async tasks | More complex but powerful for advanced use cases |\nChoosing the Right API Use async APIs to make concurrent HTTP requests with simpler control over futures and callbacks. It\u0026rsquo;s a good fit for applications that are already leveraging CompletableFuture. On the other hand, use reactive APIs to build a highly responsive, scalable application that needs to process streams of data efficiently. It\u0026rsquo;s particularly suitable if we\u0026rsquo;re already using a reactive programming framework like Reactor or RxJava.\nBy understanding the differences and strengths of Async and Reactive APIs, we can choose the most appropriate approach for the application\u0026rsquo;s needs, ensuring efficient and effective handling of HTTP requests with Apache HttpClient.\nConclusion In this article, we got familiar with the integration of Apache reactive HTTP client with RxJava for reactive streams processing. We learned how to leverage reactive programming paradigms for handling HTTP requests and responses asynchronously. 
By combining Apache\u0026rsquo;s reactive stream client with RxJava\u0026rsquo;s powerful capabilities, developers can create efficient and scalable applications.\nWe learned the usage of reactive entities like ReactiveEntityProducer and ReactiveResponseConsumer, along with RxJava\u0026rsquo;s Observable and Flowable, to perform asynchronous data processing. We now better understand the benefits of reactive streams processing, such as improved responsiveness and resource utilization, and saw practical examples demonstrating the integration of Apache HTTP client and RxJava.\n","date":"May 29, 2024","image":"https://reflectoring.io/images/stock/0120-data-stream-1200x628-branded_hu1a8be14cb26cc63e1ae5be2e641a079f_478220_650x0_resize_q90_box.jpg","permalink":"/apache-http-client-reactive-apis/","title":"Reactive APIs Offered by Apache HttpClient"},{"categories":["AWS","Spring Boot","Java"],"contents":"In modern web applications, storing and retrieving files has become a common requirement. Whether it is user uploaded content like images and documents or application generated logs and reports, having a reliable and scalable storage solution is crucial.\nOne such solution provided by AWS is Amazon S3 (Simple Storage Service), which is a widely used, highly scalable, and durable object storage service.\nWhile interacting with the S3 service directly through the AWS SDK for Java is possible, it often leads to verbose configuration classes and boilerplate code. But fortunately, the Spring Cloud AWS project simplifies this integration by providing a layer of abstraction over the official SDK, making it easier to interact with services like S3.\nIn this article, we will explore how to leverage Spring Cloud AWS to easily integrate Amazon S3 in our Spring Boot application. We\u0026rsquo;ll go through the required dependencies, configurations, and IAM policy in order to interact with our provisioned S3 bucket. 
We will use this to build our service layer that performs basic S3 operations like uploading, fetching, and deleting files.\nFinally, to validate our application\u0026rsquo;s interaction with the AWS S3 service, we will write integration tests using LocalStack and Testcontainers.\n Example Code This article is accompanied by a working code example on GitHub. Configurations The main dependency that we will need is spring-cloud-aws-starter-s3, which contains all the S3-related classes needed by our application.\nWe will also make use of the Spring Cloud AWS BOM (Bill of Materials) to manage the version of the S3 starter in our project. The BOM ensures version compatibility between the declared dependencies, avoids conflicts, and makes it easier to update versions in the future.\nHere is what our pom.xml file looks like:\n\u0026lt;properties\u0026gt; \u0026lt;spring.cloud.version\u0026gt;3.1.1\u0026lt;/spring.cloud.version\u0026gt; \u0026lt;/properties\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;!-- Other project dependencies... 
--\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-aws-starter-s3\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-aws\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${spring.cloud.version}\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; Now, the only thing left in order to allow Spring Cloud AWS to establish a connection with the AWS S3 service, is to define the necessary configuration properties in our application.yaml file:\nspring: cloud: aws: credentials: access-key: ${AWS_ACCESS_KEY} secret-key: ${AWS_SECRET_KEY} s3: region: ${AWS_S3_REGION} Spring Cloud AWS will automatically create the necessary configuration beans using the above defined properties, allowing us to interact with the S3 service in our application.\nS3 Bucket Name To perform operations against a provisioned S3 bucket, we need to provide its name. We will store this property in our project’s application.yaml file and make use of @ConfigurationProperties to map the value to a POJO, which our service layer will reference when interacting with S3:\n@Getter @Setter @Validated @ConfigurationProperties(prefix = \u0026#34;io.reflectoring.aws.s3\u0026#34;) public class AwsS3BucketProperties { @NotBlank(message = \u0026#34;S3 bucket name must be configured\u0026#34;) private String bucketName; } We have also added the @NotBlank annotation to validate that the bucket name is configured when the application starts. 
If the corresponding value is not provided, it will result in the Spring Application Context failing to start up.\nBelow is a snippet of our application.yaml file where we have defined the required property which will be automatically mapped to the above defined class:\nio: reflectoring: aws: s3: bucket-name: ${AWS_S3_BUCKET_NAME} This setup allows us to externalize the bucket name attribute and easily access it in our code. The created class AwsS3BucketProperties can be extended later on, if additional S3 related attributes are needed by our application.\nInteracting with the S3 Bucket Now that we have our configurations set up, we will create a service class that will interact with our provisioned S3 bucket and expose the following functionalities:\n Storing a file in the S3 bucket Retrieving a file from the S3 bucket Deleting a file from the S3 bucket  @Service @RequiredArgsConstructor @EnableConfigurationProperties(AwsS3BucketProperties.class) public class StorageService { private final S3Template s3Template; private final AwsS3BucketProperties awsS3BucketProperties; public void save(MultipartFile file) { var objectKey = file.getOriginalFilename(); var bucketName = awsS3BucketProperties.getBucketName(); s3Template.upload(bucketName, objectKey, file.getInputStream()); } public S3Resource retrieve(String objectKey) { var bucketName = awsS3BucketProperties.getBucketName(); return s3Template.download(bucketName, objectKey); } public void delete(String objectKey) { var bucketName = awsS3BucketProperties.getBucketName(); s3Template.deleteObject(bucketName, objectKey); } } We have used the S3Template class provided by Spring Cloud AWS in our service layer. 
S3Template is a high level abstraction over the S3Client class provided by the AWS SDK.\nWhile it is possible to use the S3Client directly, S3Template reduces boilerplate code and simplifies interaction with S3 by offering convenient, Spring-friendly methods for common S3 operations.\nWe also make use of our custom AwsS3BucketProperties class which we had created earlier, to reference the S3 bucket name defined in our application.yaml file.\nRequired IAM Permissions To have our service layer operate normally, the IAM user whose security credentials we have configured must have the necessary permissions of s3:GetObject, s3:PutObject and s3:DeleteObject.\nHere is what our policy should look like:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:PutObject\u0026#34;, \u0026#34;s3:DeleteObject\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;arn:aws:s3:::bucket-name/*\u0026#34; } ] } The above IAM policy conforms to the least privilege principle, by granting only the necessary permissions required for our service layer to operate correctly. We also specify the bucket ARN in the Resource field, further limiting the scope of the IAM policy to work with a single bucket that is provisioned for our application.\nValidating Bucket Existence During Startup If no S3 bucket exists in our AWS account corresponding to the configured bucket name in our application.yaml file, the service layer we have created will encounter exceptions at runtime when attempting to interact with the S3 service. 
This can lead to unexpected application behavior and a poor user experience.\nTo address this issue, we will leverage the Bean Validation API and create a custom constraint to validate the existence of the configured S3 bucket during application startup, ensuring that our application fails fast if the bucket does not exist, rather than encountering runtime exceptions later on:\n@RequiredArgsConstructor public class BucketExistenceValidator implements ConstraintValidator\u0026lt;BucketExists, String\u0026gt; { private final S3Template s3Template; @Override public boolean isValid(String bucketName, ConstraintValidatorContext context) { return s3Template.bucketExists(bucketName); } } Our validation class BucketExistenceValidator implements the ConstraintValidator interface and injects an instance of the S3Template class. We override the isValid method and use the convenient bucketExists functionality provided by the injected S3Template instance to validate the existence of the bucket.\nNext, we will create our custom constraint annotation:\n@Documented @Target(ElementType.FIELD) @Retention(RetentionPolicy.RUNTIME) @Constraint(validatedBy = BucketExistenceValidator.class) public @interface BucketExists { String message() default \u0026#34;No bucket exists with the configured name.\u0026#34;; Class\u0026lt;?\u0026gt;[] groups() default {}; Class\u0026lt;? extends Payload\u0026gt;[] payload() default {}; } The @BucketExists annotation is meta-annotated with @Constraint, which specifies the validator class BucketExistenceValidator that we created earlier to perform the validation logic. 
The annotation also defines a default error message that will be logged in case of validation failure.\nNow, with our custom constraint created, we can annotate the bucketName field in our AwsS3BucketProperties class with our custom annotation @BucketExists:\n@BucketExists @NotBlank(message = \u0026#34;S3 bucket name must be configured\u0026#34;) private String bucketName; If the bucket with the configured name does not exist, the application context will fail to start, and we will see an error message in the console similar to:\n*************************** APPLICATION FAILED TO START *************************** Description: Binding to target org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under \u0026#39;io.reflectoring.aws.s3\u0026#39; to io.reflectoring.configuration.AwsS3BucketProperties failed:  Property: io.reflectoring.aws.s3.bucketName Value: \u0026#34;non-existent-bucket-name\u0026#34; Origin: class path resource [application.yaml] - 14:24 Reason: No bucket exists with configured name. Action: Update your application\u0026#39;s configuration To finish our implementation, we need to add an additional statement to our IAM policy, one which allows permission to perform the s3:ListBucket action:\n{ \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;s3:ListBucket\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;arn:aws:s3:::*\u0026#34; } The above IAM statement is necessary for us to execute the s3Template.bucketExists() method in our custom validation class.\nBy validating the existence of the configured S3 bucket at startup, we ensure that our application fails fast and provides clear feedback when an S3 bucket does not exist corresponding to the configured name. This approach helps maintain a more stable and predictable application behavior.\nIntegration Testing We cannot conclude this article without testing the code we have written so far. 
We need to ensure that our configurations and service layer work correctly. We will be making use of LocalStack and Testcontainers, but first let’s look at what these two tools are:\n LocalStack: a cloud service emulator that enables local development and testing of AWS services, without the need to connect to a remote cloud provider. We\u0026rsquo;ll be provisioning the required S3 bucket inside this emulator. Testcontainers: a library that provides lightweight, throwaway instances of Docker containers for integration testing. We will start a LocalStack container via this library.  The prerequisite for running the LocalStack emulator via Testcontainers is, as you might have guessed, an up-and-running Docker instance. We need to ensure this prerequisite is met when running the test suite, either locally or in a CI/CD pipeline.\nLet’s start by declaring the required test dependencies in our pom.xml:\n\u0026lt;!-- Test dependencies --\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-test\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.testcontainers\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;localstack\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; The declared spring-boot-starter-test gives us the basic testing toolbox, as it transitively includes JUnit, AssertJ and other utility libraries that we will need for writing assertions and running our tests.\nThe org.testcontainers:localstack dependency will allow us to run the LocalStack emulator inside a disposable Docker container, ensuring an isolated environment for our integration test.\nProvisioning S3 Bucket Using Init Hooks LocalStack gives us the ability to create required AWS 
resources when the container is started via Initialization Hooks. We will be creating a bash script init-s3-bucket.sh for this purpose inside our src/test/resources folder:\n#!/bin/bash bucket_name=\u0026#34;reflectoring-bucket\u0026#34; awslocal s3api create-bucket --bucket $bucket_name echo \u0026#34;S3 bucket \u0026#39;$bucket_name\u0026#39; created successfully\u0026#34; echo \u0026#34;Executed init-s3-bucket.sh\u0026#34; The script creates an S3 bucket with name reflectoring-bucket. We will copy this script to the path /etc/localstack/init/ready.d inside the LocalStack container for execution in our integration test class.\nStarting LocalStack via Testcontainers At the time of this writing, the latest version of the LocalStack image is 3.4, we will be using this version in our integration test class:\n@SpringBootTest class StorageServiceIT { private static final LocalStackContainer localStackContainer; // Bucket name as configured in src/test/resources/init-s3-bucket.sh  private static final String BUCKET_NAME = \u0026#34;reflectoring-bucket\u0026#34;; static { localStackContainer = new LocalStackContainer(DockerImageName.parse(\u0026#34;localstack/localstack:3.4\u0026#34;)) .withCopyFileToContainer(MountableFile.forClasspathResource(\u0026#34;init-s3-bucket.sh\u0026#34;, 0744), \u0026#34;/etc/localstack/init/ready.d/init-s3-bucket.sh\u0026#34;) .withServices(Service.S3) .waitingFor(Wait.forLogMessage(\u0026#34;.*Executed init-s3-bucket.sh.*\u0026#34;, 1)); localStackContainer.start(); } @DynamicPropertySource static void properties(DynamicPropertyRegistry registry) { // spring cloud aws properties  registry.add(\u0026#34;spring.cloud.aws.credentials.access-key\u0026#34;, localStackContainer::getAccessKey); registry.add(\u0026#34;spring.cloud.aws.credentials.secret-key\u0026#34;, localStackContainer::getSecretKey); registry.add(\u0026#34;spring.cloud.aws.s3.region\u0026#34;, localStackContainer::getRegion); 
registry.add(\u0026#34;spring.cloud.aws.s3.endpoint\u0026#34;, localStackContainer::getEndpoint); // custom properties  registry.add(\u0026#34;io.reflectoring.aws.s3.bucket-name\u0026#34;, () -\u0026gt; BUCKET_NAME); } } In our integration test class StorageServiceIT, we do the following:\n Start a new instance of the LocalStack container and enable the S3 service. Copy our bash script init-s3-bucket.sh into the container to ensure bucket creation. Configure a strategy to wait for the log \u0026quot;Executed init-s3-bucket.sh\u0026quot; to be printed, as defined in our init script. Dynamically define the AWS configuration properties needed by our application in order to create the required S3 related beans using @DynamicPropertySource.  Our @DynamicPropertySource code block declares an additional spring.cloud.aws.s3.endpoint property, which is not present in the main application.yaml file.\nThis property is necessary when connecting to the LocalStack container\u0026rsquo;s S3 bucket, reflectoring-bucket, as it requires a specific endpoint URL. However, when connecting to an actual AWS S3 bucket, specifying an endpoint URL is not required. 
AWS automatically uses the default endpoint for each service in the configured region.\nThis LocalStack container will be automatically destroyed post test suite execution, hence we do not need to worry about manual cleanups.\nWith this setup, our application will use the started LocalStack container for all interactions with AWS cloud during the execution of our integration test, providing an isolated and ephemeral testing environment.\nTesting the Service Layer With the LocalStack container set up successfully via Testcontainers, we can now write test cases to ensure our service layer works as expected and interacts with the provisioned S3 bucket correctly:\n@SpringBootTest class StorageServiceIT { @Autowired private S3Template s3Template; @Autowired private StorageService storageService; // LocalStack setup as seen above  @Test void shouldSaveFileSuccessfullyToBucket() { // Prepare test file to upload  var key = RandomString.make(10) + \u0026#34;.txt\u0026#34;; var fileContent = RandomString.make(50); var fileToUpload = createTextFile(key, fileContent); // Invoke method under test  storageService.save(fileToUpload); // Verify that the file is saved successfully in S3 bucket  var isFileSaved = s3Template.objectExists(BUCKET_NAME, key); assertThat(isFileSaved).isTrue(); } private MultipartFile createTextFile(String fileName, String content) { var fileContentBytes = content.getBytes(); var inputStream = new ByteArrayInputStream(fileContentBytes); return new MockMultipartFile(fileName, fileName, \u0026#34;text/plain\u0026#34;, inputStream); } } In our initial test case, we verify that the StorageService class can successfully upload a file to the provisioned S3 bucket.\nWe begin by preparing a file with random content and name and pass this test file to the save() method exposed by our service layer.\nFinally, we make use of S3Template to assert that the file is indeed saved in the S3 bucket.\nNow, to validate the functionality of fetching a saved file:\n@Test void 
shouldFetchSavedFileSuccessfullyFromBucket() { // Prepare test file and upload to S3 Bucket  var key = RandomString.make(10) + \u0026#34;.txt\u0026#34;; var fileContent = RandomString.make(50); var fileToUpload = createTextFile(key, fileContent); storageService.save(fileToUpload); // Invoke method under test  var retrievedObject = storageService.retrieve(key); // Read the retrieved content and assert integrity  var retrievedContent = readFile(retrievedObject.getContentAsByteArray()); assertThat(retrievedContent).isEqualTo(fileContent); } private String readFile(byte[] bytes) { var inputStreamReader = new InputStreamReader(new ByteArrayInputStream(bytes)); return new BufferedReader(inputStreamReader).lines().collect(Collectors.joining(\u0026#34;\\n\u0026#34;)); } We begin by saving a test file to the S3 bucket. Then, we invoke the retrieve() method of our service layer with the corresponding random file key. We read the content of the retrieved file and assert that it matches with the original file content.\nFinally, let\u0026rsquo;s conclude by testing our delete functionality:\n@Test void shouldDeleteFileFromBucketSuccessfully() { // Prepare test file and upload to S3 Bucket  var key = RandomString.make(10) + \u0026#34;.txt\u0026#34;; var fileContent = RandomString.make(50); var fileToUpload = createTextFile(key, fileContent); storageService.save(fileToUpload); // Verify that the file is saved successfully in S3 bucket  var isFileSaved = s3Template.objectExists(BUCKET_NAME, key); assertThat(isFileSaved).isTrue(); // Invoke method under test  storageService.delete(key); // Verify that file is deleted from the S3 bucket  isFileSaved = s3Template.objectExists(BUCKET_NAME, key); assertThat(isFileSaved).isFalse(); } In this test case, we again create a test file and upload it to our S3 bucket. We verify that the file is successfully saved using S3Template. 
Then, we invoke the delete() method of our service layer with the generated file key.\nTo verify that the file is indeed deleted from our bucket, we again use the S3Template instance to assert that the file is no longer present in our bucket.\nBy executing the above integration test cases, we simulate different interactions with our S3 bucket and ensure that our service layer works as expected.\nConclusion In this article, we explored how to integrate the AWS S3 service in a Spring Boot application using Spring Cloud AWS.\nWe started by adding the necessary dependencies and configurations to establish a connection with the S3 service. Then, we used the auto configuration feature of Spring Cloud AWS to create a service class that performs basic S3 operations of uploading, retrieving, and deleting files.\nWe also discussed the required IAM permissions, and enhanced our application\u0026rsquo;s behaviour by validating the existence of the configured S3 bucket at application startup using a custom validation annotation.\nFinally, to ensure our application works and interacts with the provisioned S3 bucket correctly, we wrote a few integration tests using LocalStack and Testcontainers.\nThe source code demonstrated throughout this article is available on Github. I would highly encourage you to explore the codebase and set it up locally.\n","date":"May 27, 2024","image":"https://reflectoring.io/images/stock/0138-bucket-alternative-1200x628-branded_hu53ed73a1525f7cae8069f95328ca02fc_99732_650x0_resize_q90_box.jpg","permalink":"/integrating-amazon-s3-with-spring-boot-using-spring-cloud-aws/","title":"Integrating Amazon S3 with Spring Boot Using Spring Cloud AWS"},{"categories":["Kotlin"],"contents":"In the realm of object-oriented programming (OOP), Kotlin stands out as an expressive language that seamlessly integrates modern features with a concise syntax. Inheritance, polymorphism and encapsulation play a crucial role in object-oriented code. 
In this blog post, we\u0026rsquo;ll delve into these concepts in the context of Kotlin, exploring how they enhance code reusability, flexibility, and security.\nObject-Oriented Programming Object-oriented programming (OOP) is a programming paradigm that organizes software design around the concept of objects, which can be thought of as instances of classes. A class is a blueprint for creating objects, and it defines a set of attributes and the methods (functions) that operate on these attributes.\nInheritance in Kotlin Inheritance is a core concept of OOP, allowing one class to inherit properties and behaviors from another. Kotlin supports single inheritance of classes, while a class may additionally implement multiple interfaces. Let\u0026rsquo;s consider a scenario where we have a base class Vehicle:\nopen class Vehicle(val brand: String, val model: String) { fun start() { println(\u0026#34;The $brand $model is starting.\u0026#34;) } fun stop() { println(\u0026#34;The $brand $model has stopped.\u0026#34;) } } In Kotlin, the open keyword plays a crucial role in class and function inheritance. By default, all classes in Kotlin are \u0026ldquo;closed\u0026rdquo; for inheritance, which means they cannot be subclassed. This design choice enhances the safety and integrity of your code by preventing unintended modifications through inheritance.\nWhen we want a class or function to be inheritable, we need to explicitly mark it with the open keyword. 
Now, we can create a derived class Car that inherits from the Vehicle class:\nclass Car(brand: String, model: String, val color: String) : Vehicle(brand, model) { fun drive() { println(\u0026#34;The $color $brand $model is on the move.\u0026#34;) } } Here, the Car class inherits the start() and stop() methods from the Vehicle class, showcasing the simplicity and effectiveness of inheritance in Kotlin.\nPolymorphism in Kotlin Polymorphism, a Greek term meaning \u0026ldquo;many forms,\u0026rdquo; enables a single interface to represent different types. Kotlin supports polymorphism through interfaces and abstract classes. Let\u0026rsquo;s extend our example by introducing an interface Drivable:\ninterface Drivable { fun drive() } Now, we can modify the Car class to implement the Drivable interface:\nclass Car(brand: String, model: String, val color: String) : Vehicle(brand, model), Drivable { override fun drive() { println(\u0026#34;The $color $brand $model is smoothly cruising.\u0026#34;) } } With this implementation, a Car object can now be treated as a Drivable, allowing for more flexibility in our code. Polymorphism facilitates code extensibility and maintenance by decoupling the implementation details from the interfaces.\nLet\u0026rsquo;s show an example of polymorphism using an abstract class:\nabstract class Shape { // Define an abstract method `area()` that must be overridden in subclasses  abstract fun area(): Double // A non-abstract method to print the area  fun printArea() { println(\u0026#34;The area is: ${area()}\u0026#34;) } } class Circle(private val radius: Double) : Shape() { override fun area(): Double { return Math.PI * radius * radius } } class Rectangle(private val width: Double, private val height: Double) : Shape() { override fun area(): Double { return width * height } } In the example above, we define an abstract class Shape with an abstract method area(). 
The classes Circle and Rectangle inherit from the abstract class Shape and provide their own implementations for the area() method.\nEncapsulation in Kotlin Encapsulation involves bundling data and the methods that operate on that data within a single unit, known as a class. This concept ensures that the internal workings of a class are hidden from the outside world, promoting data integrity and security. In Kotlin, encapsulation is achieved through access modifiers such as private, protected, internal and public.\nLet us briefly go over these modifiers:\nprivate: When we mark a top-level declaration (such as a class, function, or property) as private, it is accessible only within the file in which it is declared; a private member of a class is accessible only within that class. This is the most restrictive visibility modifier.\nprotected: The protected modifier is similar to private, but it also allows subclasses to access the declaration. This means that the declaration is accessible within its own class and by subclasses. For example:\nopen class Base { protected fun protectedFunction() { // This function can be accessed within this class and subclasses  } } class Derived : Base() { fun useProtectedFunction() { protectedFunction() // Allowed because Derived is a subclass of Base  } } internal: The internal modifier restricts access to declarations within the same module (a module is a set of Kotlin files compiled together, such as a library or an application). Anything marked as internal is visible to other code in the same module but not to code in other modules.\npublic: This is the default visibility in Kotlin. When a declaration is marked as public (or if no visibility modifier is specified), it is accessible from any other code. 
In most cases, you won\u0026rsquo;t need to explicitly use the public modifier, as it\u0026rsquo;s the default.\nLet\u0026rsquo;s modify our Vehicle class to encapsulate its properties:\nopen class Vehicle(private val brand: String, private val model: String) { fun start() { println(\u0026#34;The $brand $model is starting.\u0026#34;) } fun stop() { println(\u0026#34;The $brand $model has stopped.\u0026#34;) } fun getBrandModel(): String { return \u0026#34;$brand $model\u0026#34; } } In this example, the brand and model properties are marked as private, restricting their access to within the Vehicle class. The getBrandModel() method acts as a getter, allowing controlled access to the encapsulated data.\nConclusion In this exploration of inheritance, polymorphism and encapsulation in Kotlin, we\u0026rsquo;ve witnessed how these OOP principles contribute to code organization, reusability and flexibility. By leveraging these principles, developers can create robust and extensible codebases, fostering a modular and collaborative development environment.\n","date":"May 12, 2024","image":"https://reflectoring.io/images/stock/0065-java-1200x628-branded_hu49f406cdc895c98f15314e0c34cfd114_116403_650x0_resize_q90_box.jpg","permalink":"/kotlin-object-oriented-programming/","title":"Inheritance, Polymorphism, and Encapsulation in Kotlin"},{"categories":["AWS","Spring Boot","Java"],"contents":"In an event-driven architecture where multiple microservices need to communicate with each other, the publisher-subscriber pattern provides an asynchronous communication model to achieve this. 
It enables us to design a loosely coupled architecture that is easy to extend and scale.\nIn this article, we will be looking at how we can use AWS SNS and SQS services to implement the publisher-subscriber pattern in Spring Boot based microservices.\nWe will configure a microservice to act as a publisher and send messages to an SNS topic, and another to act as a subscriber which consumes messages from an SQS queue subscribed to that topic:\nDecoupling with SNS: Advantages over Direct Messaging Queues Before we begin implementing our microservices, I want to explain the decision to have an SNS topic in front of an SQS queue, rather than directly using an SQS queue in both microservices.\nTraditional messaging queues like SQS, Kafka, or RabbitMQ allow asynchronous communication as well, wherein the publisher publishes the payload required by the listener of the queue. This facilitates point-to-point communication where the publisher is aware of the existence and identity of the subscriber.\nIn contrast, the pub/sub pattern facilitated by SNS allows for a more loosely coupled approach. SNS acts as a middleware between the parties, allowing them to evolve independently. Using this pattern, the publisher is not concerned about who the payload is intended for, that allows it to remain unchanged in the event where multiple new subscribers are added to receive the same payload.\n Example Code This article is accompanied by a working code example on GitHub. Publisher Microservice Now that we have understood the \u0026ldquo;Why\u0026rdquo; of our topic, we will proceed with creating our publisher microservice.\nThe microservice will simulate a user management service, where a single API endpoint is exposed to create a user record. 
Once this API is invoked, the service publishes a trimmed-down version of the API request to the SNS topic user-account-created signifying successful account creation.\nSpring Cloud AWS We will be using Spring Cloud AWS to establish a connection and interact with the SNS service, rather than using the SNS SDK provided by AWS directly. Spring Cloud AWS is a wrapper around the official AWS SDKs, which significantly simplifies configuration and provides simple methods to interact with AWS services.\nThe main dependency that we will need is spring-cloud-aws-starter-sns, which contains all SNS related classes needed by our application.\nWe will also make use of Spring Cloud AWS BOM (Bill of Materials) to manage the versions of the Spring Cloud AWS dependencies in our project. The BOM ensures version compatibility between the declared dependencies, avoids conflicts, and makes it easier to update versions in the future.\nHere is how our pom.xml would look like:\n\u0026lt;properties\u0026gt; \u0026lt;spring.cloud.version\u0026gt;3.1.1\u0026lt;/spring.cloud.version\u0026gt; \u0026lt;/properties\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;!-- other project dependencies --\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-aws-starter-sns\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-aws\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${spring.cloud.version}\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; The only thing left for us to 
establish a connection with the AWS SNS service is to define the necessary configuration properties in our application.yaml file:\nspring: cloud: aws: credentials: access-key: ${AWS_ACCESS_KEY} secret-key: ${AWS_SECRET_KEY} sns: region: ${AWS_SNS_REGION} Spring Cloud AWS will automatically create the necessary configuration beans using the above-defined properties, allowing us to interact with the SNS service in our application.\nConfiguring SNS Topic ARN The recommended approach to interacting with an SNS topic is through its Amazon Resource Name (ARN). We will store this property in our project\u0026rsquo;s application.yaml file and make use of @ConfigurationProperties to map the defined ARN to a POJO, which our application will reference while publishing messages to SNS:\n@Getter @Setter @Validated @ConfigurationProperties(prefix = \u0026#34;io.reflectoring.aws.sns\u0026#34;) public class AwsSnsTopicProperties { @NotBlank(message = \u0026#34;SNS topic ARN must be configured\u0026#34;) private String topicArn; } We have also added the @NotBlank annotation to validate that the ARN value is configured when the application starts.
If the corresponding value is not provided, it will result in the Spring Application Context failing to start up.\nBelow is a snippet of our application.yaml file where we have defined the required property, which will be automatically mapped to the above-defined class:\nio: reflectoring: aws: sns: topic-arn: ${AWS_SNS_TOPIC_ARN} Required IAM Permissions To publish messages to our SNS topic, the IAM user whose security credentials have been configured in our publisher microservice must have the sns:Publish permission.\nHere is what our policy should look like:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;sns:Publish\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;sns-topic-arn\u0026#34; } ] } It is worth noting that Spring Cloud AWS also allows us to specify the SNS topic name instead of the full ARN. In such cases, the sns:CreateTopic permission needs to be attached to the IAM policy as well, to allow the library to fetch the ARN of the topic. However, I do not recommend this approach, since the library would create a new topic if one with the configured name doesn\u0026rsquo;t already exist.
Moreover, resource creation should not be done in our Spring Boot microservices.\nPublishing Messages to the SNS Topic Now that we are done with the SNS-related configurations, we will create a service method that accepts a DTO containing user creation details and publishes a message to the SNS topic:\n@Slf4j @Service @RequiredArgsConstructor @EnableConfigurationProperties(AwsSnsTopicProperties.class) public class UserService { private final SnsTemplate snsTemplate; private final AwsSnsTopicProperties awsSnsTopicProperties; public void create(UserCreationRequestDto userCreationRequest) { // save user record in database or other business logic  var topicArn = awsSnsTopicProperties.getTopicArn(); var payload = removePassword(userCreationRequest); snsTemplate.convertAndSend(topicArn, payload); log.info(\u0026#34;Successfully published message to topic ARN: {}\u0026#34;, topicArn); } // Rest of the service class implementation } We have used the SnsTemplate class provided by Spring Cloud AWS to publish a message to the SNS topic in our service layer.
We also make use of our custom AwsSnsTopicProperties class to reference the SNS topic ARN defined in our active application.yaml file.\nTo finish the implementation of our publisher microservice user-management-service, we will expose an API endpoint on top of our service layer method:\n@RestController @RequiredArgsConstructor @RequestMapping(\u0026#34;/api/v1/users\u0026#34;) public class UserController { private final UserService userService; @PostMapping(consumes = MediaType.APPLICATION_JSON_VALUE) public ResponseEntity\u0026lt;HttpStatus\u0026gt; createUser(@Valid @RequestBody UserCreationRequestDto userCreationRequest) { userService.create(userCreationRequest); return ResponseEntity.status(HttpStatus.CREATED).build(); } } We can now test our publisher microservice by making a POST request to the exposed API endpoint with a sample payload:\ncurl -X POST http://localhost:8080/api/v1/users \\  -H \u0026#34;Content-Type: application/json\u0026#34; \\  -d \u0026#39;{ \u0026#34;name\u0026#34;: \u0026#34;Hardik Singh Behl\u0026#34;, \u0026#34;emailId\u0026#34;: \u0026#34;behl@reflectoring.io\u0026#34;, \u0026#34;password\u0026#34;: \u0026#34;somethingSecure\u0026#34; }\u0026#39; If everything is configured correctly, we should see a log message in the console indicating that the service layer was invoked and the message was successfully published to our SNS topic:\nSuccessfully published message to topic ARN: \u0026lt;ARN-value-here\u0026gt; Subscriber Microservice Now that we have our publisher microservice up and running, let\u0026rsquo;s shift our focus to developing the second component of our architecture: the subscriber microservice.\nFor our use case, the subscriber microservice will simulate a notification dispatcher service that sends out account creation confirmation emails to users. 
It will listen for messages on an SQS queue dispatch-email-notification and perform the email dispatch logic, which for the sake of demonstration will be a simple log statement. (I wish everything was this easy 😆)\nSQS Queue Configuration Similar to the publisher microservice, we will be using Spring Cloud AWS to connect to and poll messages from our SQS queue. We will take advantage of the library\u0026rsquo;s automatic deserialization and message deletion features to simplify our implementation.\nThe only change needed in our pom.xml file is to include the SQS starter dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-aws-starter-sqs\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Similarly, in our application.yaml file, we need to define the necessary configuration properties required by Spring Cloud AWS to establish a connection and interact with the SQS service:\nspring: cloud: aws: credentials: access-key: ${AWS_ACCESS_KEY} secret-key: ${AWS_SECRET_KEY} sqs: region: ${AWS_SQS_REGION} And just like that, we have successfully given our application the ability to poll messages from our SQS queue. 
With the addition of the above configuration properties, Spring Cloud AWS will automatically create the necessary SQS-related beans required by our application.\nConsuming Messages from an SQS Queue The recommended attribute to use when interacting with a provisioned SQS queue is the queue URL, which we will be configuring in our application.yaml file:\nio: reflectoring: aws: sqs: queue-url: ${AWS_SQS_QUEUE_URL} We will now use the @SqsListener annotation provided by Spring Cloud AWS on a method in a @Component class to listen to messages received on the queue and process them as required:\n@Slf4j @Component public class EmailNotificationListener { @SqsListener(\u0026#34;${io.reflectoring.aws.sqs.queue-url}\u0026#34;) public void listen(UserCreatedEventDto userCreatedEvent) { log.info(\u0026#34;Dispatching account creation email to {} on {}\u0026#34;, userCreatedEvent.getName(), userCreatedEvent.getEmailId()); // business logic to send email  } } In our listener, we have referenced the queue URL defined in our application.yaml file using the property placeholder (${…​}) capability in the @SqsListener annotation. This is why we did not create a corresponding @ConfigurationProperties class for it.\nThe payload received by the SQS queue will be automatically deserialized into a UserCreatedEventDto object, which we have declared as a method argument.\nOnce the listen method in our EmailNotificationListener class has executed successfully, i.e., completed without any exceptions, Spring Cloud AWS will automatically delete the processed message from the queue to prevent the same message from being processed again.\nRaw Message Delivery and @SnsNotificationMessage When an SQS queue subscribed to an SNS topic receives a message, the message contains not only the actual payload but also various metadata.
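To make this concrete, here is a small, self-contained sketch of how the actual payload sits nested inside the Message field of an SNS notification envelope. The field names follow the documented SNS notification format, but the envelope shown is abbreviated (real envelopes carry more fields such as MessageId and Signature), the values are made up, and the naive regex-based extraction is purely illustrative — real code should rely on raw message delivery, @SnsNotificationMessage, or a proper JSON parser:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SnsEnvelopeDemo {

    // Naive extraction of a top-level string field from a flat JSON object.
    // Illustrative only: it breaks on nested/escaped quotes; use a JSON library in real code.
    static String extractField(String json, String field) {
        Matcher m = Pattern.compile("\"" + field + "\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Abbreviated envelope as delivered to SQS when raw message delivery is OFF:
        // the business payload is a string nested inside the "Message" field.
        String envelope = "{"
                + "\"Type\":\"Notification\","
                + "\"TopicArn\":\"arn:aws:sns:us-east-1:000000000000:user-account-created\","
                + "\"Message\":\"hello\","
                + "\"Timestamp\":\"2024-05-03T00:00:00.000Z\""
                + "}";

        // A consumer expecting only the business payload receives this whole envelope.
        System.out.println(extractField(envelope, "Type"));    // Notification
        System.out.println(extractField(envelope, "Message")); // hello
    }
}
```

With raw message delivery enabled, only the value of Message ("hello" above) would arrive at the queue.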
This additional metadata can cause automatic message deserialization to fail.\nOne way to resolve this issue is to enable the raw message delivery attribute on our active subscription. When enabled, all the metadata is stripped from the message, and only the actual payload is delivered as is.\nAnother approach that allows us to deserialize the entire SNS payload without enabling the raw message delivery attribute is to use the @SnsNotificationMessage annotation on the method parameter:\n@SqsListener(\u0026#34;${io.reflectoring.aws.sqs.queue-url}\u0026#34;) public void listen(@SnsNotificationMessage UserCreatedEventDto userCreatedEvent) { // processing logic  } In the above code, the @SnsNotificationMessage annotation automatically extracts the payload from the SNS message and deserializes it into a UserCreatedEventDto object.\nThe message format used, based on whether this attribute is enabled or not, can be viewed in this reference document.\nRequired IAM Permissions To have our subscriber microservice operate normally, the IAM user whose security credentials we have configured must have the sqs:ReceiveMessage and sqs:DeleteMessage permissions.\nHere is what our policy should look like:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;sqs:ReceiveMessage\u0026#34;, \u0026#34;sqs:DeleteMessage\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;sqs-queue-arn\u0026#34; } ] } Spring Cloud AWS also allows us to specify the SQS queue name instead of the queue URL. In such cases, the read-only sqs:GetQueueAttributes and sqs:GetQueueUrl permissions need to be attached to the IAM policy as well.\nSince the additional permissions needed are read-only, there is no harm in configuring the queue name and allowing the library to fetch the URL instead.
However, I would still prefer to use the queue URL directly, since it leads to a faster application startup and avoids unnecessary calls to the AWS cloud.\nSubscribing SQS Queue to an SNS Topic Now that we have both of our microservices set up, there\u0026rsquo;s one final piece of the puzzle to connect: subscribing our SQS queue to our SNS topic. This will allow the messages published to the SNS topic user-account-created to automatically be forwarded to the SQS queue dispatch-email-notification for consumption by our subscriber microservice.\nTo create the subscription between the two services, refer to the official documentation guide.\nResource-Based Policy Once our subscription has been created, we need to grant our SNS topic permission to send messages to our SQS queue. This permission needs to be added to our queue\u0026rsquo;s resource-based policy (Access policy).\nYou might wonder why this is necessary when we have already granted the required IAM permissions to our microservices. The answer lies in the way AWS services communicate with each other. IAM permissions control what actions an IAM user can perform on an AWS resource, while resource-based policies determine what actions another AWS service can perform on it. Resource-based policies are attached to an AWS resource (SQS in this context).\nIn our case, we need to create a resource-based policy on the SQS queue to allow the SNS topic to send messages to it.
Without this policy, even though our microservices have the necessary IAM permissions, the SNS topic will not be able to forward messages to the SQS queue.\nHere is what our SQS resource policy should look like:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Principal\u0026#34;: { \u0026#34;Service\u0026#34;: \u0026#34;sns.amazonaws.com\u0026#34; }, \u0026#34;Action\u0026#34;: \u0026#34;sqs:SendMessage\u0026#34;, \u0026#34;Resource\u0026#34;: \u0026#34;sqs-queue-arn\u0026#34;, \u0026#34;Condition\u0026#34;: { \u0026#34;ArnEquals\u0026#34;: { \u0026#34;aws:SourceArn\u0026#34;: \u0026#34;sns-topic-arn\u0026#34; } } } ] } In this policy, we are granting the SNS service (sns.amazonaws.com) permission to perform the sqs:SendMessage action on our SQS queue. We specify the ARNs of our SQS queue and SNS topic in the Resource and Condition fields, respectively, to ensure that only messages from our specific topic are allowed.\nOnce this resource-based policy is attached to our SQS queue, the SNS topic will be able to forward messages to it, finally completing the setup of our publisher-subscriber architecture.\nEncryption at Rest Using KMS When dealing with sensitive data, it is recommended to ensure that the data is encrypted not only in transit but also at rest. Encryption at rest not only enhances the security of our architecture but also makes our lives easier when going through HIPAA and PCI-DSS audits.\nWhile SNS and SQS are primarily used for message delivery (data in transit), the messages themselves are stored temporarily in these services until they are successfully delivered or processed. This temporary storage period is considered \u0026ldquo;data at rest\u0026rdquo;.
Additionally, if our subscriber microservice is down or is unable to poll the SQS queue, the messages will remain in the queue until the microservice is operational again.\nBy enabling encryption at rest, we safeguard the confidentiality of our data throughout its lifecycle, including the intermediary stages of message delivery and temporary storage.\nIn this section, we will discuss the necessary steps to integrate our architecture with AWS KMS and ensure data in our SNS topic and SQS queue is always encrypted.\nTo encrypt data at rest, we start by creating a custom symmetric AWS KMS key. Once the custom key is created, we need to enable encryption on both our SNS topic and SQS queue by configuring them to use our newly created KMS key.\nAfter enabling encryption, our developed publisher-subscriber flow will\u0026hellip; drumroll please \u0026hellip; stop working! 😭 This is because our SNS topic and SQS queue do not have the required permissions to perform encryption and decryption operations using our custom KMS key; additionally, our publisher microservice now lacks the necessary IAM permissions to encrypt the data before publishing it to our SNS topic.\nTo resolve the above issues, we need to update our KMS key policy (resource-based policy) to include the following statements that grant SNS and SQS the necessary permissions to interact with our custom KMS key:\n[ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Principal\u0026#34;: { \u0026#34;Service\u0026#34;: \u0026#34;sqs.amazonaws.com\u0026#34; }, \u0026#34;Action\u0026#34;: [ \u0026#34;kms:GenerateDataKey\u0026#34;, \u0026#34;kms:Decrypt\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;kms-key-arn\u0026#34;, \u0026#34;Condition\u0026#34;: { \u0026#34;ArnEquals\u0026#34;: { \u0026#34;aws:SourceArn\u0026#34;: \u0026#34;sqs-queue-arn\u0026#34; } } }, { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Principal\u0026#34;: { \u0026#34;Service\u0026#34;: 
\u0026#34;sns.amazonaws.com\u0026#34; }, \u0026#34;Action\u0026#34;: [ \u0026#34;kms:GenerateDataKey\u0026#34;, \u0026#34;kms:Decrypt\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;kms-key-arn\u0026#34;, \u0026#34;Condition\u0026#34;: { \u0026#34;ArnEquals\u0026#34;: { \u0026#34;aws:SourceArn\u0026#34;: \u0026#34;sns-topic-arn\u0026#34; } } } ] The above policy statements allow SNS and SQS to use the kms:GenerateDataKey and kms:Decrypt actions on our custom KMS key. The Condition block ensures that only our specific SNS topic and SQS queue are granted these permissions, conforming to the principle of least privilege.\nAdditionally, we need to attach the following IAM statement to the policy of the IAM user whose security credentials have been configured in our publisher microservice:\n{ \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;kms:GenerateDataKey\u0026#34;, \u0026#34;kms:Decrypt\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;kms-key-arn\u0026#34; } The above IAM statement allows our publisher microservice to use our custom KMS key, enabling it to encrypt the data before publishing it to the SNS topic.\nBy configuring encryption at rest using AWS KMS, we have further enhanced our architecture by adding an extra layer of security to it.\nValidating Pub/Sub Functionality with LocalStack and Testcontainers Before concluding this article, we will test the publisher-subscriber flow that we have implemented so far with an integration test. We will be making use of LocalStack and Testcontainers. Before we begin, let\u0026rsquo;s look at what these two tools are:\n LocalStack : is a cloud service emulator that enables local development and testing of AWS services, without the need for connecting to a remote cloud provider. We\u0026rsquo;ll be provisioning the required SNS topic and SQS queue inside this emulator.
Testcontainers : is a library that provides lightweight, throwaway instances of Docker containers for integration testing. We will be starting a LocalStack container via this library.  The prerequisite for running the LocalStack emulator via Testcontainers is, as you might have guessed, an up-and-running Docker instance. We need to ensure this prerequisite is met when running the test suite either locally or when using a CI/CD pipeline.\nDependencies Let’s start by declaring the required test dependencies in our pom.xml:\n\u0026lt;!-- Test dependencies --\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-test\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.testcontainers\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;localstack\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.awaitility\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;awaitility\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; The declared spring-boot-starter-test gives us the basic testing toolbox as it transitively includes JUnit, MockMvc, and other utility libraries that we will require for writing assertions and running our tests.\nThe org.testcontainers:localstack dependency will allow us to run the LocalStack emulator inside a disposable Docker container, ensuring an isolated environment for our integration test.\nFinally, awaitility will help us validate the integrity of our asynchronous system.\nCreating AWS Resources Using Init Hooks LocalStack gives us the ability to create the required AWS resources when the container is started via Initialization Hooks.
We will be creating a bash script provision-resources.sh for this purpose inside our src/test/resources folder:\n#!/bin/bash topic_name=\u0026#34;user-account-created\u0026#34; queue_name=\u0026#34;dispatch-email-notification\u0026#34; sns_arn_prefix=\u0026#34;arn:aws:sns:us-east-1:000000000000\u0026#34; sqs_arn_prefix=\u0026#34;arn:aws:sqs:us-east-1:000000000000\u0026#34; awslocal sns create-topic --name $topic_name echo \u0026#34;SNS topic \u0026#39;$topic_name\u0026#39; created successfully\u0026#34; awslocal sqs create-queue --queue-name $queue_name echo \u0026#34;SQS queue \u0026#39;$queue_name\u0026#39; created successfully\u0026#34; awslocal sns subscribe --topic-arn \u0026#34;$sns_arn_prefix:$topic_name\u0026#34; --protocol sqs --notification-endpoint \u0026#34;$sqs_arn_prefix:$queue_name\u0026#34; echo \u0026#34;Subscribed SQS queue \u0026#39;$queue_name\u0026#39; to SNS topic \u0026#39;$topic_name\u0026#39; successfully\u0026#34; echo \u0026#34;Successfully provisioned resources\u0026#34; The script creates an SNS topic with the name user-account-created and an SQS queue named dispatch-email-notification. After creating these resources, it subscribes the queue to the created SNS topic. 
We will copy this script to the path /etc/localstack/init/ready.d in the LocalStack container for execution in our integration test class.\nStarting LocalStack via Testcontainers We will be using version 3.3 of the LocalStack image in our integration test class:\n@SpringBootTest class PubSubIT { private static final LocalStackContainer localStackContainer; // as configured in initialization hook script \u0026#39;provision-resources.sh\u0026#39; in src/test/resources  private static final String TOPIC_ARN = \u0026#34;arn:aws:sns:us-east-1:000000000000:user-account-created\u0026#34;; private static final String QUEUE_URL = \u0026#34;http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/dispatch-email-notification\u0026#34;; static { localStackContainer = new LocalStackContainer(DockerImageName.parse(\u0026#34;localstack/localstack:3.3\u0026#34;)) .withCopyFileToContainer(MountableFile.forClasspathResource(\u0026#34;provision-resources.sh\u0026#34;, 0744), \u0026#34;/etc/localstack/init/ready.d/provision-resources.sh\u0026#34;) .withServices(Service.SNS, Service.SQS) .waitingFor(Wait.forLogMessage(\u0026#34;.*Successfully provisioned resources.*\u0026#34;, 1)); localStackContainer.start(); } @DynamicPropertySource static void properties(DynamicPropertyRegistry registry) { registry.add(\u0026#34;spring.cloud.aws.credentials.access-key\u0026#34;, localStackContainer::getAccessKey); registry.add(\u0026#34;spring.cloud.aws.credentials.secret-key\u0026#34;, localStackContainer::getSecretKey); registry.add(\u0026#34;spring.cloud.aws.sns.region\u0026#34;, localStackContainer::getRegion); registry.add(\u0026#34;spring.cloud.aws.sns.endpoint\u0026#34;, localStackContainer::getEndpoint); registry.add(\u0026#34;io.reflectoring.aws.sns.topic-arn\u0026#34;, () -\u0026gt; TOPIC_ARN); registry.add(\u0026#34;spring.cloud.aws.sqs.region\u0026#34;, localStackContainer::getRegion); 
registry.add(\u0026#34;spring.cloud.aws.sqs.endpoint\u0026#34;, localStackContainer::getEndpoint); registry.add(\u0026#34;io.reflectoring.aws.sqs.queue-url\u0026#34;, () -\u0026gt; QUEUE_URL);\t} } In our integration test class PubSubIT, we do the following:\n Start a new instance of the LocalStack container and enable the required services of SNS and SQS. Copy our bash script provision-resources.sh into the container to ensure AWS resource creation. Configure a strategy to wait for the log \u0026quot;Successfully provisioned resources\u0026quot; to be printed, as defined in our init script. Dynamically define the AWS configuration properties needed by our applications in order to create the required SNS and SQS-related beans using @DynamicPropertySource.  With this setup, our applications will use the started LocalStack container for all interactions with AWS cloud during the execution of our integration test, providing an isolated and ephemeral testing environment.\nTest Case Now that we have configured the LocalStack container successfully via Testcontainers, we can test our publisher-subscriber functionality:\n@SpringBootTest @AutoConfigureMockMvc @ExtendWith(OutputCaptureExtension.class) class PubSubIT { @Autowired private MockMvc mockMvc; // LocalStack setup as seen above \t@Test @SneakyThrows void test(CapturedOutput output) { // prepare API request body to create user  var name = RandomString.make(); var emailId = RandomString.make() + \u0026#34;@reflectoring.io\u0026#34;; var password = RandomString.make(); var userCreationRequestBody = String.format(\u0026#34;\u0026#34;\u0026#34; { \u0026#34;name\u0026#34; : \u0026#34;%s\u0026#34;, \u0026#34;emailId\u0026#34; : \u0026#34;%s\u0026#34;, \u0026#34;password\u0026#34; : \u0026#34;%s\u0026#34; } \u0026#34;\u0026#34;\u0026#34;, name, emailId, password); // execute API request to create user  var userCreationApiPath = \u0026#34;/api/v1/users\u0026#34;; mockMvc.perform(post(userCreationApiPath) 
.contentType(MediaType.APPLICATION_JSON) .content(userCreationRequestBody)) .andExpect(status().isCreated()); // assert that message has been published to SNS topic  var expectedPublisherLog = String.format(\u0026#34;Successfully published message to topic ARN: %s\u0026#34;, TOPIC_ARN); Awaitility.await().atMost(1, TimeUnit.SECONDS).until(() -\u0026gt; output.getAll().contains(expectedPublisherLog)); // assert that message has been received by the SQS queue  var expectedSubscriberLog = String.format(\u0026#34;Dispatching account creation email to %s on %s\u0026#34;, name, emailId); Awaitility.await().atMost(1, TimeUnit.SECONDS).until(() -\u0026gt; output.getAll().contains(expectedSubscriberLog)); } } By executing the above test case, we simulate the complete flow of our publisher-subscriber architecture.\nUsing MockMvc, we invoke the user creation API endpoint exposed by our publisher microservice. We then use the CapturedOutput instance provided by the OutputCaptureExtension to assert that the expected logs are generated by both the publisher and subscriber microservices, confirming that the message has been successfully published to the SNS topic and consumed from the SQS queue.\nWith this integration test in place, we have confidently validated the functionality of our publisher-subscriber architecture.\nConclusion In this article, we explored how to implement the publisher-subscriber pattern in Spring Boot microservices using AWS SNS and SQS services.\nThroughout the implementation, we made use of Spring Cloud AWS to simplify the configurations required to interact with AWS services. We also discussed the necessary IAM and resource policies required by our loosely coupled architecture to function seamlessly.\nThe source code demonstrated throughout this article is available on GitHub. 
The codebase is built as a Maven multi-module project and has been integrated with LocalStack and Docker Compose to enable local development without the need for provisioning real AWS services. I would highly encourage you to explore the codebase and set it up locally.\n","date":"May 3, 2024","image":"https://reflectoring.io/images/stock/0112-ide-1200x628-branded_hu3b7dcb6bd35b7043d8f1c81be3dcbca2_169620_650x0_resize_q90_box.jpg","permalink":"/publisher-subscriber-pattern-using-aws-sns-and-sqs-in-spring-boot/","title":"Publisher-Subscriber Pattern Using AWS SNS and SQS in Spring Boot"},{"categories":["Node"],"contents":"Endpoints or APIs that perform complex computations and handle large amounts of data face several performance and responsiveness challenges. This occurs because each request initiates a computation or data retrieval process from scratch, which can take time. As a result, users and services that use our application might experience slower performance. An effective solution to this problem is to implement a caching mechanism.\nCaching is a popular technique for improving application performance. It can mean the difference between a frustrating and an enjoyable user experience.\nCaching allows us to temporarily store frequently used data rather than repeatedly computing it. This enables quick retrieval without rerunning computations or database searches, significantly improving application performance.\nIn this article, we\u0026rsquo;ll look at what caching is, when we need it in our application, and how to incorporate it into a Node.js application using the Redis database.\nPrerequisites To follow along with this article, you will require:\n Some experience with JavaScript and Node.js. Node.js version 18 or newer installed on your computer. Redis installed on your computer.   Example Code This article is accompanied by a working code example on GitHub. 
Cache A cache serves as temporary storage where copies of response data are stored to expedite loading times and enhance an application\u0026rsquo;s responsiveness. Data stored in a cache is organized using key-value pairs; each piece of data is linked to a unique key.\nA unique cache key can be generated using various components from the client\u0026rsquo;s request, such as the URL, query parameters, request body, headers, method, etc.\nBy using relevant request components as our cache key, we guarantee that our cached data can be distinguished based on the specifics of each request. This approach prevents the delivery of inaccurate or outdated data from the cache, ensuring that our cached responses are tailored to fulfill the specific requirements of each request.\nWhen Do We Need to Cache? Deciding when and how to use caching in an application can be hard because it depends on many things like data access patterns, performance requirements, and how big we want our application to grow. Various strategies exist for storing and using cache within an application. Let us briefly explore a few commonly used types and their use cases.\nClient-side Caching Client-side caching means storing data on the user\u0026rsquo;s device. Developers employ storage mechanisms such as the browser\u0026rsquo;s cache, local storage, session storage, IndexedDB, or third-party solutions for this purpose.\nClient-side caching lends itself well to the following scenarios:\n Frequent Page Visits: Use client-side caching for frequently accessed data or resources to speed up page loading for returning visitors by serving cached content, avoiding repeated server requests. This includes storing web pages, images, stylesheets, and scripts on the user\u0026rsquo;s device. Reduce Server Calls: Employ client-side caching to minimize the time between client and server interactions, resulting in faster response times and reduced backend server load. 
This improves scalability and lowers server traffic. Non-Sensitive Data: Consider the sensitivity of the cached data; avoid caching sensitive or confidential information on the front end to mitigate security risks associated with less secure client devices. Offline Access: Utilize browser storage for offline access to cached data, enabling uninterrupted usage of the application even without a network connection. This provides seamless offline experiences without relying on constant connectivity. Reduced Server Load: Cache resources like images, videos, and static files on the client side to lighten the server\u0026rsquo;s load. Bandwidth Conservation: Conserve bandwidth by caching resources on the client side, reducing the need for repeated content downloads from the server. This is particularly advantageous for mobile users or those with limited data plans, minimizing data usage and accelerating content loading. Enhanced User Experience: Enhance user experience by leveraging client-side caching to achieve faster page loads and reduced latency. Users perceive the application as more responsive and reliable, leading to higher satisfaction and engagement. Personalization and Customization: Store user-specific data, preferences, and settings locally on the user\u0026rsquo;s device with client-side caching. This enables a personalized experience where users can access their customized settings without server requests each time.  Server-Side Caching Server-side caching temporarily saves frequently accessed data on the server, which speeds up load times for clients. It involves caching database queries, web pages, API responses, and other frequently used data. 
This caching is typically performed within the application or web server, utilizing methods such as in-memory caching, file-based caching, or third-party systems like Redis.\nServer-side caching is beneficial in the following scenarios:\n Processing Location: Use server-side caching when most data processing occurs on the server, reducing the load on backend servers. Resource Intensive: Employ server-side caching for resource-intensive tasks or complex computations, centralizing caching and easing backend server loads. Scalability: Opt for server-side caching to manage caches centrally and scale horizontally by adding more database nodes. Utilize features like expiration time and pub/sub mechanisms for cache invalidation. Sensitive Data: Assess the sensitivity of cached data. Server-side caching offers greater security. If caching sensitive data is necessary, consider implementing encryption and access controls to secure the data stored in the cache database, regardless of caching location. Frequently Accessed Data: Improve performance by caching frequently accessed data or computation results on the server, enhancing application responsiveness. Database Query Results: Cache database query results on the server to lighten the database server load and speed up subsequent requests. Session Data: Enhance session management by caching session data on the server, reducing database or external storage access overhead. Personalization and User Data: Cache personalized and user-specific content on the server to boost application responsiveness for authenticated users. High Traffic Peaks: Use server-side caching during traffic spikes to serve cached content directly, relieving application servers and ensuring scalability while averting performance degradation or downtime.  CDNs (Content Delivery Networks) CDNs are networks of servers distributed across multiple locations throughout the world, known as points of presence (PoPs). 
Their primary responsibility is to store and distribute content to users from servers near their geographic location.\nCDNs cache static and dynamic content, such as web pages, photos, videos, scripts, and API responses. CDNs increase website speed, reduce delays, and improve the overall user experience by storing content on strategically positioned servers throughout the world.\nCDNs work well in the following scenarios:\n Global Audience: Use CDNs if our website or app is used by people in different parts of the world. CDNs speed up content delivery and reduce delays for users in different regions. High Traffic Peaks: CDNs are useful during busy times like product launches or events. They distribute content efficiently and ease the load on our main servers. Static Content Delivery: CDNs are great for delivering static content like images, videos, and scripts. By using CDN servers, we lighten the load on our main server and improve site performance. Dynamic Content Acceleration: Some CDNs speed up dynamic content like personalized pages and API responses. This boosts performance for dynamic websites and apps. Improved Website Performance: CDNs use techniques like caching and optimized routing to deliver content faster. This reduces page load times and improves user experience. Redundancy and Failover: CDNs spread content across multiple servers and data centers, ensuring availability even if some servers fail. Security: CDNs offer security features like DDoS protection and encryption. They can protect our site from cyber attacks by routing traffic through their servers. Streaming Media Delivery: CDNs are ideal for streaming media like live video and audio. They cache and distribute content to edge servers for smooth streaming. SEO Benefits: Faster websites rank better in search engines. CDNs can improve our site\u0026rsquo;s speed, potentially attracting more organic traffic.  
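Both browser caches and CDNs decide what to cache largely from standard HTTP caching headers, so a server can opt routes into client-side and CDN caching by setting Cache-Control on its responses. The sketch below is illustrative only: buildCacheHeaders() is a hypothetical helper, not part of any library, and the Express usage in the comment assumes an Express-style res object.

```javascript
// Hypothetical helper that builds a Cache-Control header.
// "public" allows shared caches (CDNs, proxies) to store the response;
// "private" restricts caching to the end user's browser.
function buildCacheHeaders({ isPublic = true, maxAgeSeconds = 3600 } = {}) {
  const scope = isPublic ? "public" : "private";
  return { "Cache-Control": `${scope}, max-age=${maxAgeSeconds}` };
}

// In an Express-style route handler, usage might look like:
// res.set(buildCacheHeaders({ maxAgeSeconds: 86400 })).json(products);

console.log(buildCacheHeaders());
// → { 'Cache-Control': 'public, max-age=3600' }
```

With `public, max-age=3600`, both the visitor's browser and any CDN edge server in front of the application may serve the cached copy for up to an hour without contacting the origin.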
In summary, it is vital to assess these factors and pick the approach that fits our application\u0026rsquo;s needs and structure.\nTo further understand how to use cache in an application, we\u0026rsquo;ll look at how to implement server-side caching in our application.\nServer-Side Caching with Node.js In this section, we will configure server-side caching with Node.js and a Redis database. Before we do this, let\u0026rsquo;s first understand Redis and how to efficiently store and retrieve data for caching using Redis.\nRedis Redis (Remote Dictionary Server) is an open-source, in-memory data structure store, used as a distributed, in-memory key-value database, cache, and message broker, with optional durability. Redis employs a key-value pair structure to organize and manage data, where each piece of data is associated with a unique key for retrieval or manipulation. It supports various abstract data structures like strings, lists, maps, sets, sorted sets, HyperLogLogs, bitmaps, streams, and spatial indexes.\nRedis is ideal for scenarios requiring fast data retrieval and delivery. It excels in caching, session storage, message brokering, event streaming, real-time features, and more.\nHow to Store and Retrieve Cache Data in Redis Redis provides us with various data types for storing and retrieving data like Strings, JSON, Lists, Sets, and Hashes.\nWe will concentrate on the Redis string data type. It is the go-to choice for caching, providing a versatile solution for various use cases. 
Redis strings provide strong caching capabilities, ranging from simple key-value caching to more complex cases that need expiration, atomic operations, and persistence.\nTo store a string in Redis, use the set() method:\n// Set a string value for the key await redis.set(\u0026#39;key\u0026#39;, \u0026#39;Value\u0026#39;); // Set the value of the key with options await redis.set(\u0026#39;key\u0026#39;, \u0026#39;value\u0026#39;, { EX: 10, NX: true }); To retrieve stored string data, use the get() method:\nconst value = await redis.get(\u0026#39;key\u0026#39;); console.log(value); // value To delete stored string data, use the del() method:\n// Removes the specified key await redis.del(\u0026#39;key\u0026#39;); Redis Strings are the fundamental Redis data type, used for storing sequences of bytes such as text, serialized JSON objects, HTML snippets, and binary arrays. They support Time To Live (TTL), enabling us to set an expiration time for each key-value pair. This makes them ideal for implementing time-based caching strategies, automatically invalidating cached data after a certain period.\nBelow is an example demonstrating how to store and retrieve a JSON string using Redis strings, suitable for caching API responses:\nconst bikeData = JSON.stringify({ id: 1, color: \u0026#34;red\u0026#34; }); await redisClient.set(\u0026#34;bike:1\u0026#34;, bikeData, {EX: 10}); const value = await redisClient.get(\u0026#34;bike:1\u0026#34;); console.log(JSON.parse(value)); // { id: 1, color: \u0026#34;red\u0026#34; } In the above code snippet, we first stringify the JSON data before saving it as a string, ensuring our JSON data is stored in the correct format. When retrieving the data, we parse it back into its original JSON format.\nSetting up Caching with Redis in a Node.js Application Next, let\u0026rsquo;s proceed to set up caching with Redis in a Node.js application.\nTo begin, open a terminal in a directory of your choice. 
Then run the following command. This will create a new folder for our demo application and initialize Node.js:\nmkdir nodecache-app cd nodecache-app npm init -y Next, to generate the necessary folders and files for our application, run:\nmkdir controllers middlewares \u0026amp;\u0026amp; \\  touch controllers/product.js middlewares/redis.js index.js Our server setup will reside in the index.js file, while our Redis caching configuration will be implemented as middleware in our application.\nMiddleware in Node.js refers to a series of functions executed sequentially during HTTP request processing. These functions have access to the request and response objects, along with a special next function that passes control to the subsequent middleware.\nMiddleware commonly handles tasks like authentication, logging, data validation, caching, and rate limiting.\nBy utilizing Redis for caching and encapsulating caching logic in a middleware function, we can easily control which routes to cache. This enables us to leverage caching benefits without over-engineering our codebase.\nOur cache configuration resides in the redis.js file, provided as a middleware.\nOnce defined, we can apply this middleware selectively to frequently accessed, read-heavy, or computation-intensive routes.\nNext, execute the following command to install the required dependencies for our application:\nnpm install express object-hash redis Here\u0026rsquo;s a brief overview of each dependency:\n  express: This library facilitates the creation of REST APIs and route management in our application.\n  redis: The Node.js client for Redis, the in-memory data structure store serving as the database for our cache.\n  object-hash: This library generates consistent and reliable hashes from objects and values. 
We\u0026rsquo;ll use object-hash to create our cache key.\n  Here\u0026rsquo;s how our project directory will look so far:\n├── controllers/ │ └── product.js ├── middlewares/ │ └── redis.js ├── node_modules/ ├── index.js ├── package-lock.json ├── package.json To begin developing our application logic, we\u0026rsquo;ll create a product controller for the application.\nCopy and paste the following code into the controllers/product.js file:\nconst productController = { getproducts: async (req, res) =\u0026gt; { // emulating data store delay time to retrieve product data  await new Promise(resolve =\u0026gt; setTimeout(resolve, 750)); const products = [ { id: 1, name: \u0026#34;Desk Bed\u0026#34;, price: 854.44 }, { id: 2, name: \u0026#34;Shelf Table\u0026#34;, price: 357.08 }, { id: 3, name: \u0026#34;Couch Lamp\u0026#34;, price: 594.53 }, { id: 4, name: \u0026#34;Bed Couch\u0026#34;, price: 309.62 }, { id: 5, name: \u0026#34;Desk Shelf\u0026#34;, price: 116.39 }, { id: 6, name: \u0026#34;Couch Lamp\u0026#34;, price: 405.03 }, { id: 7, name: \u0026#34;Rug Chair\u0026#34;, price: 47.77 }, { id: 8, name: \u0026#34;Sofa Shelf\u0026#34;, price: 359.85 }, { id: 9, name: \u0026#34;Desk Table\u0026#34;, price: 823.21 }, { id: 10, name: \u0026#34;Table Shelf\u0026#34;, price: 758.91 }, ]; res.json({ products }); }, }; module.exports = { productController }; In the code above, we\u0026rsquo;ve defined a controller named productController with a single method getproducts(), responsible for handling requests to retrieve all available product data.\nWe also included a simulated delay of 750 milliseconds using setTimeout(). 
This delay mirrors the time it takes to retrieve our product list from the application\u0026rsquo;s data store, simulating a database query or heavy computation while retrieving all products.\nEach time this controller is invoked, we\u0026rsquo;ll consistently experience a delayed response due to this retrieval delay.\nNow, let\u0026rsquo;s proceed with setting up our caching process, which will allow us to bypass the delay, ensuring users don\u0026rsquo;t have to wait for too long every time the product controller route is called.\nTo integrate caching into our application, we have to initialize the Redis client and then create our caching helper middleware.\nTo do this, copy and paste the following code in the middlewares/redis.js file:\nconst { createClient } = require(\u0026#34;redis\u0026#34;); const hash = require(\u0026#34;object-hash\u0026#34;); let redisClient; async function initializeRedisClient() { try { redisClient = createClient(); await redisClient.connect(); console.log(\u0026#34;Redis Connected Successfully\u0026#34;); } catch (e) { console.error(`Redis connection failed with error:`); console.error(e); } } function generateCacheKey(req, method = \u0026#34;GET\u0026#34;) { const type = method.toUpperCase(); // build a custom object to use as a part of our Redis key  const reqDataToHash = { query: req.query, }; return `${type}-${req.path}/${hash.sha1(reqDataToHash)}`; } function cacheMiddleware( options = { EX: 10800, // 3h  }, ) { return async (req, res, next) =\u0026gt; { if (redisClient?.isOpen) { const key = generateCacheKey(req, req.method); // if cached data is found, retrieve it  const cachedValue = await redisClient.get(key); if (cachedValue) { return res.json(JSON.parse(cachedValue)); } else { const oldSend = res.send; // When the middleware function cacheMiddleware is executed, it replaces the res.send function with a custom function.  
res.send = async function saveCache(data) { res.send = oldSend; // cache the response only if it is successful  if (res.statusCode \u0026gt;= 200 \u0026amp;\u0026amp; res.statusCode \u0026lt; 300) { await redisClient.set(key, data, options); } return res.send(data); }; // continue to the controller function  next(); } } else { next(); } }; } function invalidateCacheMiddleware(req, res, next) { // Invalidate the cache for the cache key  const key = generateCacheKey(req); redisClient.del(key); next(); } module.exports = { initializeRedisClient, cacheMiddleware, invalidateCacheMiddleware, }; The code snippet above contains the following methods:\n  initializeRedisClient(): Responsible for setting up our Redis client. It creates a client instance and connects to the Redis server.\n  generateCacheKey(): This method generates a unique cache key based on our request object and HTTP method. Using the object-hash library, it hashes our request query parameters, which comes in handy if our query parameters change or are rearranged regularly.\n  cacheMiddleware(): Our cacheMiddleware() function is declared to accept an optional options object, which defaults to { EX: 10800 } (indicating a cache expiration time of 3 hours). This function returns another function that acts as middleware.\nInside the middleware function, req, res, and next are directly passed as parameters by the Express.js framework when the middleware is invoked.\nThese parameters represent the request object req, the response object res, and next to continue to the next middleware in the chain.\nWithin the middleware function, our logic for caching response data is implemented. We first check if the Redis client (redisClient) is available and open. If the client is not available or its connection is not open, we simply call next() to continue to the next middleware in the chain without performing caching.\nIf Redis is available, we generate a cache key based on the request. 
If cached data is available for the generated key, it is retrieved using redisClient.get(key), immediately sent back to the client using res.json(), and the middleware exits.\nIf no cached data is found, res.send is temporarily replaced with a custom function, saveCache(), which intercepts the response data before it is sent to the client.\n  invalidateCacheMiddleware(): This middleware function is used for invalidating cache entries. It generates a cache key based on the request and deletes the corresponding cache entry from Redis.\nThere are several ways we can invalidate our cache data. Depending on the caching strategy and requirements of our application, we can use time-based expiration (TTL), manual invalidation, versioning, event-driven invalidation, or a combination of these techniques to achieve optimal cache management.\n  The above functions enable caching functionality within our application. 
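One of the strategies listed above, versioning, can be sketched in a few lines: instead of deleting every cached entry that depends on a resource, we bump a version counter that forms part of the cache key, so stale entries are simply never looked up again and eventually expire via their TTL. This is an illustrative sketch only; a plain Map stands in for Redis, and the function names are hypothetical, not part of our middleware.

```javascript
// In-memory stand-ins for Redis: one map for cached values,
// one for per-resource version counters.
const cache = new Map();
const versions = new Map();

// The current version is part of the key, e.g. "v2:products:/api/v1/products"
function versionedKey(resource, path) {
  const v = versions.get(resource) ?? 1;
  return `v${v}:${resource}:${path}`;
}

// "Invalidate" by bumping the version; old entries become unreachable
function invalidate(resource) {
  versions.set(resource, (versions.get(resource) ?? 1) + 1);
}

cache.set(versionedKey("products", "/api/v1/products"), "[...cached response...]");
console.log(cache.has(versionedKey("products", "/api/v1/products"))); // → true

invalidate("products"); // e.g. after a POST that changes product data
console.log(cache.has(versionedKey("products", "/api/v1/products"))); // → false
```

The trade-off is that superseded entries linger until their TTL expires, which is why versioning is usually combined with time-based expiration rather than used on its own.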
Exporting these functions makes them available for use throughout our application, providing flexibility in caching strategies and management.\nNext, we\u0026rsquo;ll set up our basic application settings and start our application by heading to the index.js file.\nHere\u0026rsquo;s the code to copy and paste:\nconst express = require(\u0026#34;express\u0026#34;); const { initializeRedisClient, cacheMiddleware, invalidateCacheMiddleware, } = require(\u0026#34;./middlewares/redis\u0026#34;); const { productController } = require(\u0026#34;./controllers/product\u0026#34;); const app = express(); app.use(express.json()); // connect to Redis initializeRedisClient(); // register an endpoint app.get( \u0026#34;/api/v1/products\u0026#34;, cacheMiddleware({ EX: 3600, // 1h  }), productController.getproducts ); app.post(\u0026#34;/api/v1/products\u0026#34;, invalidateCacheMiddleware, (req, res) =\u0026gt; { // Implement your logic to update data in Application data store  res.json({ message: \u0026#34;Product data updated successfully\u0026#34; }); }); // start the server const port = 7000; app.listen(port, () =\u0026gt; { console.log(`Server is running on port: http://localhost:${port}`); }); Here\u0026rsquo;s what\u0026rsquo;s happening in the code.\nWe are setting up a basic Express server and routes.\nWe called the initializeRedisClient() function to set up and connect to our Redis database. This function initializes the Redis client, allowing our application to interact with Redis for caching purposes.\nThen we use the cacheMiddleware() function as a middleware to cache responses for our GET endpoint /api/v1/products. We specify an expiration time of 3600 seconds (1 hour) for our cached data.\nThe invalidateCacheMiddleware() function is used as middleware to invalidate cached data when the POST endpoint /api/v1/products is called, before the request is processed. 
This ensures that stale data is not passed to the client.\nOur application is configured to listen on port 7000.\nWith this setup, our application is ready to cache responses and invalidate cache entries as needed.\nLet\u0026rsquo;s proceed to test our application\u0026rsquo;s caching.\nTesting To test the caching system, ensure that your Redis server is started locally. We now have a working application. We can start it by running the following command:\nnode index.js Our demo server should now be listening at port 7000.\nMake a GET HTTP request to http://localhost:7000/api/v1/products. This will trigger our caching logic, and the API response will be stored in the Redis database.\nInitially, we will notice a delay due to the request processing time. However, subsequent requests to the same endpoint will be significantly faster as the response is retrieved from the cache.\nFor example, our initial request took 781ms to process, and subsequent requests returned in as little as 8ms, demonstrating the efficiency of caching.\nConclusion In conclusion, implementing caching in an application is undeniably essential for optimizing performance and delivering a satisfying user experience. However, the journey doesn\u0026rsquo;t end with merely enabling caching. Choosing the right caching strategy, or even blending multiple strategies, requires careful consideration and expertise.\nIt is important to perform rigorous load testing on our endpoints to guarantee they can manage expected traffic and scale properly. 
By constantly assessing and optimizing our caching technique, we can ensure that our application remains performant and responsive, giving users an amazing experience they will value.\n","date":"April 20, 2024","image":"https://reflectoring.io/images/stock/0137-speed-1200x628-branded_hub713cc45004fb4a228379981531d1996_109522_650x0_resize_q90_box.jpg","permalink":"/node-js-cache/","title":"Optimizing Node.js Application Performance with Caching"},{"categories":["Kotlin"],"contents":"Bubble Sort, a basic yet instructive sorting algorithm, takes us back to the fundamentals of sorting. In this tutorial, we\u0026rsquo;ll look at the Kotlin implementation of Bubble Sort, understanding its simplicity and exploring its limitations. While not the most efficient sorting algorithm, Bubble Sort serves as an essential stepping stone for grasping fundamental sorting concepts.\nBubble Sort Implementation fun bubbleSort(arr: IntArray) { val n = arr.size for (i in 0 until n - 1) { for (j in 0 until n - i - 1) { if (arr[j] \u0026gt; arr[j + 1]) { val temp = arr[j] arr[j] = arr[j + 1] arr[j + 1] = temp } } } } fun main() { val array = intArrayOf(64, 34, 25, 12, 22, 11, 90) println(\u0026#34;Original Array: ${array.joinToString(\u0026#34;, \u0026#34;)}\u0026#34;) bubbleSort(array) println(\u0026#34;Sorted Array: ${array.joinToString(\u0026#34;, \u0026#34;)}\u0026#34;) } In this code:\n arr is the input array that needs to be sorted. n is the length of the array.  The sorting process involves two nested loops. The outer loop (i) iterates through each element of the array, and the inner loop (j) iterates from 0 to n - i - 1. This nested loop structure is fundamental to the Bubble Sort algorithm. Within these loops, the function checks whether the current element arr[j] is greater than the next element arr[j + 1]. If this condition is true, the elements are swapped. 
This swapping mechanism ensures that the largest element \u0026ldquo;bubbles\u0026rdquo; to the end of the array during each pass through the loops. The entire process is repeated until the entire array is sorted in ascending order.\nThe main() function serves to demonstrate the application of the bubbleSort() function. It initializes an array with a set of unsorted values, providing it as input for the sorting algorithm. After printing the original array, the function calls bubbleSort() to sort the array in-place. Finally, the sorted array is printed, allowing us to observe the transformation of the initial unsorted state to the sorted state as a result of the Bubble Sort algorithm. This structure provides a clear and concise way to visualize and understand the working of Bubble Sort within the Kotlin programming language.\nBubble Sort Complexity Bubble Sort\u0026rsquo;s simplicity comes at the cost of efficiency.\nWorst-Case Time Complexity In the worst-case scenario, when the array is in reverse order, Bubble Sort\u0026rsquo;s time complexity is O(n²). The algorithm\u0026rsquo;s need for multiple passes through the entire array makes it impractical for large datasets.\nAverage-Case Time Complexity On average, Bubble Sort still exhibits a time complexity of O(n²). Its nature of indiscriminate element comparisons and swaps results in quadratic time complexity as the input size increases.\nBest-Case Time Complexity The best-case scenario occurs when the array is already sorted. An optimized variant that stops as soon as a pass performs no swaps achieves O(n), since a single full pass suffices to confirm the order. The basic implementation shown here always runs every pass, however, so it remains O(n²) even on sorted input, making Bubble Sort less efficient compared to other algorithms designed for pre-sorted or partially sorted data.\nConclusion In conclusion, Bubble Sort provides a foundational understanding of sorting but falls short when efficiency is crucial. Its quadratic time complexity makes it unsuitable for large datasets. 
While valuable for educational purposes, practical sorting scenarios often demand more efficient algorithms like QuickSort or MergeSort. Exploring Bubble Sort sets the stage for comprehending the trade-offs and optimizations employed in advanced sorting algorithms.\n","date":"April 14, 2024","image":"https://reflectoring.io/images/stock/0135-sorted-1200x628-branded_hu18fbb96cfe480b8ec8f03f2a6dbe633b_279812_650x0_resize_q90_box.jpg","permalink":"/bubble-sort-in-kotlin/","title":"Bubble Sort in Kotlin"},{"categories":["Kotlin"],"contents":"Sorting is a fundamental operation in computer science, and Quick Sort stands out as one of the most efficient sorting algorithms. In this blog, we will explore the Quick Sort algorithm in Kotlin, understanding its principles, implementation and performance characteristics. Quick Sort\u0026rsquo;s elegance lies in its divide-and-conquer strategy, making it a go-to choice for efficient sorting in various applications.\nQuick Sort Overview Quick Sort follows the divide-and-conquer paradigm, breaking down the sorting process into three main steps: partitioning, sorting and combining.\nPartitioning The algorithm selects a pivot element from the array and rearranges the array elements so that elements smaller than the pivot element are on the left, and elements greater are on the right. The pivot element is now in its final sorted position.\nSorting The algorithm now recursively applies the same process to the sub-arrays on the left and right of the initial pivot element. 
Each recursive call involves selecting a new pivot element and partitioning the sub-array around it.\nCombining The sorted sub-arrays are combined to produce the final sorted array.\nImplementation in Kotlin Let\u0026rsquo;s take a look at an implementation in Kotlin:\nfun quickSort(arr: IntArray, low: Int, high: Int) { if (low \u0026lt; high) { val pivotIndex = partition(arr, low, high) quickSort(arr, low, pivotIndex - 1) quickSort(arr, pivotIndex + 1, high) } } fun partition(arr: IntArray, low: Int, high: Int): Int { val pivot = arr[high] var i = low - 1 for (j in low until high) { if (arr[j] \u0026lt;= pivot) { i++ swap(arr, i, j) } } swap(arr, i + 1, high) return i + 1 } fun swap(arr: IntArray, i: Int, j: Int) { val temp = arr[i] arr[i] = arr[j] arr[j] = temp } fun main() { val array = intArrayOf(64, 34, 25, 12, 22, 11, 90) println(\u0026#34;Original Array: ${array.joinToString(\u0026#34;, \u0026#34;)}\u0026#34;) quickSort(array, 0, array.size - 1) println(\u0026#34;Sorted Array: ${array.joinToString(\u0026#34;, \u0026#34;)}\u0026#34;) } In this code, the quickSort() function recursively divides the input array into sub-arrays and sorts them. It checks if the range specified by the parameters low and high is valid (i.e., low is less than high). If so, it determines the pivot element\u0026rsquo;s final position using the partition function and then recursively applies the quickSort() function to the sub-arrays on the left and right of the pivot. The partition() function plays a crucial role by selecting a pivot element (in this case, the last element) and rearranging the array such that elements smaller than the pivot are on the left and those greater are on the right. The function returns the index where the pivot element is now in its sorted position. 
The swap() function facilitates the swapping of elements within the array.\nThe main() function showcases the algorithm by initializing an array with unsorted values, printing the original array, calling the quickSort() function to sort the array and finally printing the sorted array.\nOverall, the code elegantly demonstrates the divide-and-conquer strategy of Quick Sort, providing an efficient solution for sorting arrays in Kotlin.\nQuick Sort Complexity Time Complexity On average, Quick Sort achieves an O(n log n) time complexity, making it highly efficient. In the worst-case scenario, when a poor pivot choice consistently leads to unbalanced partitions, the time complexity degrades to O(n²). However, such cases are rare in practice.\nSpace Complexity Quick Sort is an in-place sorting algorithm, meaning it doesn\u0026rsquo;t require additional memory proportional to the input size, apart from the recursion stack, which is O(log n) on average.\nConclusion In conclusion, Quick Sort stands as a powerful sorting algorithm with impressive average-case performance. Its divide-and-conquer strategy, combined with efficient in-place sorting, makes it a preferred choice for applications demanding fast and reliable sorting. By understanding the intricacies of the algorithm, developers can appreciate the balance it strikes between simplicity and efficiency. Incorporating Quick Sort into our toolkit empowers us to handle sorting tasks with elegance and speed, a crucial skill in the realm of algorithmic problem-solving.\n","date":"April 14, 2024","image":"https://reflectoring.io/images/stock/0135-sorted-1200x628-branded_hu18fbb96cfe480b8ec8f03f2a6dbe633b_279812_650x0_resize_q90_box.jpg","permalink":"/quick-sort-in-kotlin/","title":"Quick Sort in Kotlin"},{"categories":["Kotlin"],"contents":"Sorting is a fundamental operation in computer science, and there are various algorithms to achieve it. One such simple yet effective algorithm is Selection Sort. 
In this blog post, we\u0026rsquo;ll explore the Selection Sort algorithm, its implementation in Kotlin, and analyze its time complexity.\nSelection Sort Overview Selection Sort is a comparison-based sorting algorithm that works by dividing the input array into two parts: the sorted and the unsorted sub-arrays. The algorithm repeatedly finds the minimum element from the unsorted sub-array and swaps it with the first element of the unsorted subarray. This process is repeated until the entire array is sorted.\nImplementation in Kotlin Let\u0026rsquo;s implement Selection Sort in Kotlin:\nfun selectionSort(arr: IntArray) { val n = arr.size for (i in 0 until n - 1) { var minIndex = i for (j in i + 1 until n) { if (arr[j] \u0026lt; arr[minIndex]) { minIndex = j } } // Swap the found minimum element with the first element  val temp = arr[minIndex] arr[minIndex] = arr[i] arr[i] = temp } } fun main() { val array = intArrayOf(64, 25, 12, 22, 11) println(\u0026#34;Original Array: ${array.joinToString(\u0026#34;, \u0026#34;)}\u0026#34;) selectionSort(array) println(\u0026#34;Sorted Array: ${array.joinToString(\u0026#34;, \u0026#34;)}\u0026#34;) } In this implementation, the selectionSort() function takes an array of integers and sorts it in ascending order using the Selection Sort algorithm.\nAnalysis of Time Complexity Selection Sort has a time complexity of O(n²) where n is the number of elements in the array. This makes it inefficient for large datasets. The algorithm performs poorly compared to more advanced sorting algorithms like Merge Sort or Quick Sort.\nConclusion While Selection Sort is a simple algorithm to understand and implement, it is not the most efficient sorting algorithm for large datasets. In real-world scenarios, it\u0026rsquo;s often better to use more optimized algorithms for sorting. 
Nevertheless, learning and implementing Selection Sort can be a valuable exercise in understanding sorting algorithms and their performance characteristics.\nIn summary, we\u0026rsquo;ve covered the basics of Selection Sort, implemented it in Kotlin, and discussed its time complexity. I hope this blog post helps you gain a better understanding of Selection Sort and its role in sorting algorithms. Happy coding!\n","date":"April 14, 2024","image":"https://reflectoring.io/images/stock/0135-sorted-1200x628-branded_hu18fbb96cfe480b8ec8f03f2a6dbe633b_279812_650x0_resize_q90_box.jpg","permalink":"/selection-sort-in-kotlin/","title":"Selection Sort in Kotlin"},{"categories":["Kotlin"],"contents":"One of the standout features that sets Kotlin apart is its robust approach to null safety. Null safety is a critical aspect of programming languages, aiming to eliminate the notorious null pointer exceptions that often plague developers. In this blog post, we will delve into the core concepts of null safety in Kotlin, exploring nullable types, non-null types, safe calls, the Elvis operator, the !! operator, and the safe cast operator as?.\nNullable Types and Non-Null Types In Kotlin, every variable has a type, and these types can be categorized into nullable and non-null types. A nullable type is denoted by appending a question mark (?) to the type declaration. For example, String? represents a nullable String, meaning it can either hold a valid string or be null.\nOn the other hand, a non-null type, without the question mark, signifies that the variable cannot hold null values. For instance, String denotes a non-null string type. This clear distinction allows Kotlin to enforce null safety at compile-time, reducing the likelihood of null pointer exceptions during runtime.\nNull safety is a crucial feature in programming languages, and Kotlin addresses it comprehensively for several reasons:\n  Null safety contributes to the overall reliability and stability of the codebase. 
By explicitly specifying whether a variable can be null or not, Kotlin encourages developers to handle null cases more consciously. This explicitness results in code that is less error-prone and easier to reason about.\n  Null safety in Kotlin aims to eradicate NPEs (null pointer exceptions) by enforcing a clear distinction between nullable and non-nullable types. This distinction helps developers catch potential nullability issues at compile-time, preventing unexpected crashes during runtime.\n  Kotlin\u0026rsquo;s null safety features are primarily enforced at compile-time. This means that potential nullability issues are identified and flagged by the compiler before the code is executed. This proactive approach helps catch errors early in the development process, reducing the likelihood of bugs in production.\nSafe Calls Operator ?. fun main() { val nullableString: String? = \u0026#34;Hello, Kotlin!\u0026#34; val length: Int? = nullableString?.length println(\u0026#34;Length: $length\u0026#34;) } In this Kotlin code, we begin by declaring a nullable string variable named nullableString, initialized with the value \u0026ldquo;Hello, Kotlin!\u0026rdquo;. The type is explicitly set as String?, indicating that it can hold either a valid string or a null value. Subsequently, another variable named length is declared as an optional integer (Int?). We access the length property of nullableString using the safe call operator ?.. If nullableString is not null, the length is retrieved. If nullableString is null, length is assigned a null value.\nIf we tried this code without the safe call operator (i.e. nullableString.length), the compiler would not allow this and complain that nullableString could be null.\nThe Elvis Operator ?: While safe calls provide a graceful way to handle nullability, there are scenarios where we might want to provide a default value or perform an alternative action when the variable is null. 
This is where the Elvis operator comes into play.\nfun main() { val nullableString: String? = null val length: Int = nullableString?.length ?: 0 println(\u0026#34;Length: $length\u0026#34;) } In this example, if nullableString is null, the expression evaluates to 0 because we added ?: 0 to the end. Otherwise, it returns the length of the string. The Elvis operator simplifies the code and makes it more readable by providing a default value in case of null.\nThe Not Null Assertion Operator !! While Kotlin encourages null safety, there might be situations where you, as a developer, are certain that a nullable variable is non-null at a particular point in your code. In such cases, you can use the not-null assertion operator to tell the compiler that you are taking responsibility for the null check.\nfun main() { val nullableString: String? = \u0026#34;Hello, Kotlin!\u0026#34; val length: Int = nullableString!!.length println(\u0026#34;Length: $length\u0026#34;) } Using !! essentially asserts to the compiler that nullableString is non-null, and it proceeds with the operation. However, if nullableString is null, a NullPointerException will be thrown at runtime. Therefore, it should be used judiciously and only when you are confident about the non-null status.\nSafe Cast Operator as? The safe cast operator in Kotlin provides a safe way to attempt type casting without risking a ClassCastException. The operator is written in infix position (value as? Type), which makes for readable syntax.\nConsider the following example:\nfun main() { val y: Any? = \u0026#34;Hello, Kotlin!\u0026#34; val x: String? = y as? String if (x != null) { println(\u0026#34;Casting successful. x is a String: $x\u0026#34;) } else { println(\u0026#34;Casting failed. x is null.\u0026#34;) } } In this example, if y is indeed a String, the safe cast succeeds, and x contains the casted value. If y is null or not of type String, x is assigned null. 
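The safe cast also pairs naturally with the Elvis operator to supply a fallback in a single expression. Here is a small sketch (the sample values are made up for illustration):

```kotlin
fun main() {
    val values: List<Any?> = listOf("Kotlin", 42, null)
    for (value in values) {
        // as? yields null for null or non-String values, and ?: supplies a fallback
        val text = value as? String ?: "not a string"
        println(text)
    }
}
```

Running this prints "Kotlin" for the first element and "not a string" for the other two.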
This way, we can gracefully handle different cases without encountering runtime exceptions.\nConclusion Null safety is a fundamental aspect of Kotlin that significantly enhances the reliability and robustness of code. By incorporating nullable types, non-null types, safe calls, the Elvis operator, the !! operator, and the safe cast operator as?, Kotlin empowers developers to write code that is less prone to null pointer exceptions.\nEmbracing these null safety features not only improves the overall quality of code but also contributes to a smoother development experience. As Kotlin continues to evolve, its emphasis on null safety remains a key factor in its appeal to developers seeking a modern and pragmatic programming language.\n","date":"March 2, 2024","image":"https://reflectoring.io/images/stock/0136-ring-1200x628-branded_hu7d18c73725f2e79d14bd6345257ba04a_57870_650x0_resize_q90_box.jpg","permalink":"/kotlin-null-safety/","title":"Understanding Null Safety in Kotlin"},{"categories":["Kotlin"],"contents":"Sorting is a fundamental operation that plays a crucial role in various applications. Among the many sorting algorithms, merge sort stands out for its efficiency and simplicity. In this blog post, we will delve into the details of merge sort and implement it in Kotlin.\nMerge Sort Algorithm Merge Sort is a popular sorting algorithm that follows the divide and conquer paradigm. It was developed by John von Neumann in 1945. The basic idea behind Merge Sort is to divide the array into two halves, recursively sort each half and then merge the sorted halves to produce a sorted array.\nHere are the main steps of the Merge Sort algorithm:\n Divide: Divide the unsorted array into two halves until each sub-array contains only one element. Conquer: Recursively sort each sub-array. Merge: Merge the sorted sub-arrays to produce a single sorted array.  The merging process is a crucial step in Merge Sort. 
It involves comparing elements from the two sorted sub-arrays and merging them into a new sorted array.\nKotlin Implementation Now, let\u0026rsquo;s dive into the implementation of Merge Sort in Kotlin. We\u0026rsquo;ll start by defining a function for the merging process:\nfun merge(left: IntArray, right: IntArray): IntArray { var i = 0 var j = 0 val merged = IntArray(left.size + right.size) for (k in 0 until merged.size) { when { i \u0026gt;= left.size -\u0026gt; merged[k] = right[j++] j \u0026gt;= right.size -\u0026gt; merged[k] = left[i++] left[i] \u0026lt;= right[j] -\u0026gt; merged[k] = left[i++] else -\u0026gt; merged[k] = right[j++] } } return merged } In this function, we compare elements from the left and right subarrays, merging them into a single sorted array.\nNow, let\u0026rsquo;s implement the recursive Merge Sort function:\nfun mergeSort(arr: IntArray): IntArray { if (arr.size \u0026lt;= 1) return arr val mid = arr.size / 2 val left = arr.copyOfRange(0, mid) val right = arr.copyOfRange(mid, arr.size) return merge(mergeSort(left), mergeSort(right)) } In this code, the mergeSort function recursively divides the array into halves and calls itself until the base case is reached when the array size is 1 or empty. Then, it merges the sorted subarrays using the previously defined merge function.\nTesting the Merge Sort Implementation Let\u0026rsquo;s test our merge sort implementation with a sample array:\nfun main() { val unsortedArray = intArrayOf(64, 34, 25, 12, 22, 11, 90) val sortedArray = mergeSort(unsortedArray) println(\u0026#34;Original Array: ${unsortedArray.joinToString()}\u0026#34;) println(\u0026#34;Sorted Array: ${sortedArray.joinToString()}\u0026#34;) } This program initializes an array, performs the merge sort and prints both the original and sorted arrays.\nAnalysis of Merge Sort Algorithm Merge Sort is a sorting algorithm that follows the divide-and-conquer paradigm. 
Let\u0026rsquo;s analyze its key aspects.\nTime Complexity Merge Sort guarantees a consistent time complexity of O(n log n) for the worst, average and best cases. This efficiency is achieved by dividing the array into halves and recursively sorting them before merging, resulting in a logarithmic depth and linear work at each level.\nDivide Phase Halving the array repeatedly yields a recursion tree of depth O(log n), because the array is continually divided until each subarray contains only one element.\nMerge Phase Merging two sorted arrays of size n/2 takes O(n) time. Since there are log n levels in the recursive tree, the total merging time is O(n log n).\nThe overall time complexity is dominated by the merging phase, making merge sort particularly efficient for large datasets. It outperforms algorithms with higher time complexities, such as Bubble Sort or Insertion Sort.\nSpace Complexity Merge Sort has a space complexity of O(n) due to the need for additional space to store the temporary merged arrays during the merging phase. Each recursive call creates new subarrays, and the merging process involves creating a new array that stores the sorted elements.\nTemporary Arrays During the merging phase, temporary arrays are created to store the sorted subarrays. The size of these arrays is proportional to the size of the input.\nRecursive Stack The recursive calls contribute to the space complexity. In the worst case, the maximum depth of the recursion tree is log n, which determines the space required for the function call stack. Despite the additional space requirements, merge sort\u0026rsquo;s stability, predictable performance and ease of parallelization make it a viable choice in scenarios where memory usage is not a critical concern.\nStability and Parallelization Merge sort is a stable sorting algorithm, meaning that equal elements maintain their relative order in the sorted output. 
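A quick sketch makes stability concrete. Since stability is only observable with keyed elements (our IntArray version above has none), this example uses hypothetical name/score data and the standard library's sortedBy, which is documented as a stable sort:

```kotlin
// Stability demo with hypothetical keyed data: entries that share a
// score keep their original relative order after sorting.
data class Entry(val name: String, val score: Int)

fun main() {
    val entries = listOf(
        Entry("alice", 2),
        Entry("bob", 1),
        Entry("carol", 2), // same score as alice
    )
    // sortedBy is stable, so alice still precedes carol in the result.
    val sorted = entries.sortedBy { it.score }
    println(sorted.map { it.name }) // [bob, alice, carol]
}
```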
This stability is essential in applications where the original order of equal elements should be preserved.\nAdditionally, merge sort is inherently parallelizable. The divide-and-conquer nature of the algorithm allows for straightforward parallel implementations. Each subarray can be sorted independently and the merging process can be parallelized leading to potential performance gains on multi-core architectures.\nConclusion Merge Sort is a highly efficient and predictable sorting algorithm with a consistent time complexity of O(n log n). Its stability and parallelizability make it a popular choice in various applications, especially when dealing with large datasets. While it incurs a space overhead due to the need for temporary arrays, the trade-off in terms of time complexity and reliability often justifies its use in practical scenarios.\n","date":"February 20, 2024","image":"https://reflectoring.io/images/stock/0022-sorting-1200x628-branded_hu1c1aaf0d36678df59bf1aa24b7e4f082_180085_650x0_resize_q90_box.jpg","permalink":"/kotlin-merge-sort/","title":"Merge Sort in Kotlin"},{"categories":["Kotlin"],"contents":"One of Kotlin\u0026rsquo;s standout features is extension functions, a mechanism that empowers developers to enhance existing classes without modifying their source code. In this blog post, we will explore Kotlin extension functions, understanding their syntax, exploring use cases and recognizing their impact on code maintainability and readability.\nUnderstanding Extension Functions At its core, an extension function is a way to augment a class with new functionality without inheriting from it. In Kotlin, extension functions are defined outside the class they extend and are called as if they were regular member functions. 
This provides a seamless way to add utility methods to existing classes, promoting code reuse and maintainability.\nLet\u0026rsquo;s delve into the syntax of extension functions by creating a simple example:\n// Define an extension function for Int class fun Int.addition(other: Int): Int { return this + other } fun main() { val result = 5.addition(3) println(\u0026#34;Result of addition: $result\u0026#34;) } In this example, addition is an extension function for the Int class. It takes another Int as a parameter and returns the result of the addition. In the main function, we use this extension function to perform addition on the Int values 5 and 3.\nAdvantages of Extension Functions Code Organization and Readability Extension functions contribute to better code organization by grouping related functionality together. This is particularly beneficial when working with large codebases, as it helps maintain a clear and modular structure. By encapsulating related operations in extension functions, developers can quickly locate and understand the purpose of specific functionality.\nConsider a scenario where we need to manipulate dates in various ways throughout our codebase. Instead of scattering date-related functions across different classes or files, we can gather extension functions for the Date class in a single DateUtils file. This centralizes date-related operations, improving code readability and maintainability. Note that java.util.Date has no plusDays method, so we define that as an extension function too.\nfun Date.isFutureDate(): Boolean { return this \u0026gt; Date() } fun Date.isPastDate(): Boolean { return this \u0026lt; Date() } fun Date.plusDays(days: Int): Date { return Date(time + days * 86_400_000L) } // Usage val today = Date() val futureDate = today.plusDays(7) println(futureDate.isFutureDate()) // Output: true println(futureDate.isPastDate()) // Output: false  Enhanced API Design Extension functions contribute to a more fluent and expressive API design. They allow developers to add domain-specific methods to classes, making the codebase more intuitive and readable. 
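To make the idea concrete before turning to real APIs, here is a hypothetical domain-specific extension (the percentOf name and the discount scenario are invented for illustration):

```kotlin
import java.math.BigDecimal

// Hypothetical domain-specific extension: expressing a percentage
// calculation as a readable, fluent method on BigDecimal.
fun BigDecimal.percentOf(total: BigDecimal): BigDecimal =
    this.multiply(total).divide(BigDecimal(100))

fun main() {
    // "20 percent of 250" reads almost like the business requirement itself.
    val discount = BigDecimal(20).percentOf(BigDecimal(250))
    println(discount) // 50
}
```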
This is particularly advantageous when working with third-party libraries or APIs as extension functions enable developers to adapt and extend functionality seamlessly.\nConsider a scenario where we\u0026rsquo;re working with the Android framework and want to format a timestamp as a user-friendly string. Instead of creating a utility class with static methods, we can extend the Long class directly, enhancing the readability of our code.\nfun Long.toFormattedDateString(): String { val dateFormat = SimpleDateFormat(\u0026#34;MMM dd, yyyy\u0026#34;, Locale.getDefault()) return dateFormat.format(Date(this)) } // Usage val timestamp = System.currentTimeMillis() val formattedDate = timestamp.toFormattedDateString() println(formattedDate) // Output: Dec 16, 2023  Interoperability Kotlin\u0026rsquo;s interoperability with Java is a key strength of the language. Extension functions play a crucial role in enhancing the interoperability between Kotlin and Java code. When we extend a Java class in Kotlin, the extension functions seamlessly become part of the class\u0026rsquo;s API making it easier for Kotlin developers to work with Java libraries.\nConsider a scenario where you\u0026rsquo;re dealing with Java\u0026rsquo;s List interface and want to filter its elements based on a predicate. We can create an extension function in Kotlin to add this functionality:\nfun \u0026lt;T\u0026gt; List\u0026lt;T\u0026gt;.filterCustom(predicate: (T) -\u0026gt; Boolean): List\u0026lt;T\u0026gt; { return this.filter(predicate) } // Usage val numbers = listOf(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) val evenNumbers = numbers.filterCustom { it % 2 == 0 } println(evenNumbers) // Output: [2, 4, 6, 8, 10] This seamless integration of extension functions across language boundaries enhances the collaboration between Kotlin and Java components in a codebase.\nExtension Functions Use Cases String Manipulation Extension functions are excellent for enhancing the functionality of the String class. 
For instance, we can create an extension function to capitalize the first letter of a string:\nfun String.capitalizeFirstLetter(): String { return if (isNotEmpty()) { this[0].uppercaseChar() + substring(1) } else { this } } // Usage val originalString = \u0026#34;hello, world!\u0026#34; val modifiedString = originalString.capitalizeFirstLetter() println(modifiedString) // Output: Hello, world!  This extension function, capitalizeFirstLetter, enhances the functionality of the String class by capitalizing the first letter of a given string. It leaves the rest of the string untouched and handles the empty string safely.\nCollections Operations Simplify common operations on collections by creating extension functions.\nHere\u0026rsquo;s an example that calculates the average of a list of numbers:\nfun List\u0026lt;Int\u0026gt;.average(): Double { return if (isNotEmpty()) { sum().toDouble() / size } else { 0.0 } } // Usage val numbers = listOf(1, 2, 3, 4, 5) val avg = numbers.average() println(avg) // Output: 3.0 The extension function average simplifies the process of calculating the average of a list of integers. It adds a concise and reusable method to the List\u0026lt;Int\u0026gt; class, promoting cleaner code. (Note that the Kotlin standard library already ships an average() for numeric collections; we define our own here purely for illustration.)\nView-related Operations in Android In Android development, extension functions can be valuable for simplifying common operations on View objects. Consider the following example, which shows how to hide and show a View:\nfun View.hide() { visibility = View.GONE } fun View.show() { visibility = View.VISIBLE } // Usage val myView = findViewById\u0026lt;View\u0026gt;(R.id.myView) myView.hide() These extension functions, hide and show, provide a convenient way to manipulate the visibility of a View. They enhance the readability of code when dealing with UI elements.\nValidation Functions Create extension functions to validate input data. 
For instance, we can validate an email address format:\nfun String.isValidEmail(): Boolean { val emailRegex = \u0026#34;^[A-Za-z](.*)([@]{1})(.{1,})(\\\\.)(.{1,})\u0026#34; return matches(emailRegex.toRegex()) } // Usage val email = \u0026#34;user@example.com\u0026#34; println(email.isValidEmail()) // Output: true The extension function isValidEmail adds a validation check to the String class for email addresses. This promotes the reuse of the validation logic and keeps it closely tied to the data it validates.\nFile and Directory Operations Simplify file and directory operations using extension functions. Here\u0026rsquo;s an example that reads the contents of a file:\nfun File.readText(): String { return readText(Charsets.UTF_8) } // Usage val file = File(\u0026#34;example.txt\u0026#34;) val fileContents = file.readText() println(fileContents) The extension function readText simplifies reading the contents of a file by extending the functionality of the File class. It provides a more concise and expressive way to handle file operations.\nNetwork Request Handling Simplify network request handling by adding extension functions to classes like Response:\nfun Response\u0026lt;*\u0026gt;.isSuccessful(): Boolean { return isSuccessful } // Usage val response = //... (retrofit or okhttp response) if (response.isSuccessful()) { // Handle successful response } else { // Handle error } The extension function isSuccessful simplifies network request handling by providing a convenient method to check whether a network response is successful. It improves the readability of code dealing with network requests.\nBest Practices for Using Extension Functions While extension functions offer a powerful tool for enhancing code, it\u0026rsquo;s essential to follow some best practices to ensure their effective and maintainable usage.\nAvoid Overuse While extension functions can greatly improve code readability and organization, excessive use can lead to confusion and code bloat. 
It\u0026rsquo;s crucial to strike a balance and reserve extension functions for cases where they genuinely improve the API and maintainability of the code.\nBe Mindful of Scope Extension functions are powerful tools but it\u0026rsquo;s crucial to be mindful of their scope. The scope of an extension function is tied to the package in which it is defined. When we import some extension functions, those functions become accessible throughout the whole file where the import statement is declared.\nConsider the following example:\n// File: StringUtil.kt package com.example.util fun String.customExtensionFunction(): String { return this.toUpperCase() } In the above code, we\u0026rsquo;ve defined an extension function customExtensionFunction for the String class inside the com.example.util package.\nNow, let\u0026rsquo;s use this extension function in another file:\n// File: Main.kt import com.example.util.customExtensionFunction fun main() { val myString = \u0026#34;hello, world!\u0026#34; val result = myString.customExtensionFunction() println(result) } In the Main.kt file, we import the customExtensionFunction from the com.example.util package and apply it to a string. The extension function is in scope wherever the function is imported.\nPrioritize Clarity Over Cleverness When defining extension functions, prioritize clarity and readability over cleverness. While concise and expressive code is desirable, it should not compromise the ability of other developers (or your future self) to understand the code easily. Choose function names that clearly convey their purpose.\nLeverage Extension Properties In addition to functions, Kotlin also supports extension properties. Extension properties allow us to add new properties to existing classes. 
Used judiciously, extension properties can complement extension functions, providing a comprehensive and cohesive augmentation of class functionality.\nval String.isPalindrome: Boolean get() = this == this.reversed() // Usage val palindromeString = \u0026#34;level\u0026#34; println(palindromeString.isPalindrome) // Output: true Conclusion By providing a mechanism to extend existing classes without modifying their source code, extension functions empower developers to create modular, maintainable, and interoperable code. Whether enhancing APIs, improving code organization, or facilitating interoperability with Java, extension functions are a versatile tool in the Kotlin developer\u0026rsquo;s toolkit. As you embark on your Kotlin journey, consider the judicious use of extension functions to unlock the full potential of this dynamic and elegant language.\n","date":"February 13, 2024","image":"https://reflectoring.io/images/stock/0112-ide-1200x628-branded_hu3b7dcb6bd35b7043d8f1c81be3dcbca2_169620_650x0_resize_q90_box.jpg","permalink":"/extension%20functions%20in%20kotlin/","title":"Extension Functions in Kotlin"},{"categories":["Java"],"contents":"Java Records introduce a simple syntax for creating data-centric classes, making our code more concise, expressive, and maintainable.\nIn this guide, we\u0026rsquo;ll explore the key concepts and practical applications of Java Records, providing a step-by-step guide to creating records and sharing best practices for using them effectively in projects. We\u0026rsquo;ll cover everything from defining components and constructors to accessing components, overriding methods, and implementing additional methods. 
Along the way, we\u0026rsquo;ll compare Java Records with traditional Java classes, highlighting the reduction in boilerplate code and showcasing the power and efficiency of records.\n Out of clutter, find simplicity \u0026mdash; Albert Einstein.\n Java records offer a cleaner, hassle-free way to handle our data. Say goodbye to verbose classes and hello to a world where our code expresses intentions with crystal clarity. Java records are like the wizard\u0026rsquo;s tool, magically generating essential methods, leaving us with more time to focus on the business logic! 🎩✨\n Example Code This article is accompanied by a working code example on GitHub. Get Familiar With Java Records In the constantly changing world of Java programming, simplicity is the most desired approach. Java records transform the way we manage data classes. It simplifies the way we implement the data classes by taking out the verbosity.\nWhat are Java Records? Java records are a special kind of class introduced in Java 14. They are tailored for simplicity, designed to encapsulate data without the clutter of boilerplate code. With a concise syntax, records enable us to create immutable data holders effortlessly.\nComparison to Lombok and Kotlin Lombok: Lombok simplifies Java development by reducing boilerplate code. It offers annotations like @Data for automatic generation of methods, akin to Java records. While providing flexibility, Lombok requires external dependencies and may lack the language-level support and immutability guarantees inherent in Java records.\nKotlin: Kotlin, as a modern, concise language for the JVM, incorporates data classes that share similarities with Java records. Kotlin\u0026rsquo;s data classes automatically generate equals(), hashCode(), and toString() methods, enhancing developer productivity. 
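For reference, a minimal Kotlin data class sketch showing these generated methods in action:

```kotlin
// A minimal Kotlin data class, the closest analogue to a Java record.
// equals(), hashCode(), toString(), and copy() are generated for us.
data class Person(val name: String, val age: Int)

fun main() {
    val person = Person("Alice", 30)
    println(person)                        // Person(name=Alice, age=30)
    println(person == Person("Alice", 30)) // true: structural equality
    println(person.copy(age = 31))         // Person(name=Alice, age=31)
}
```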
Unlike Java records, Kotlin\u0026rsquo;s data classes support default parameter values and additional features within a concise syntax, providing expressive and powerful data modeling capabilities.\n Why Records? Traditionally, Java classes representing data structures contained repetitive code for methods such as equals(), hashCode(), toString(), getter/setter methods, and public constructors. This resulted in bloated classes, making the codebase more difficult to read, understand, maintain, and extend. Java records were introduced as a solution, simplifying the creation of data-centric classes and addressing the issues of verbosity and redundancy.\nLet\u0026rsquo;s look at some of the benefits of using Java records.\nSimplicity One of the key advantages of records is their ability to create simple data objects without the need for writing extensive boilerplate code. With records, we can define the structure of our data concisely and straightforwardly.\nImmutability by Default Java records are immutable by default, meaning that once we create a record, we cannot modify its fields. This immutability ensures data integrity and makes it easier to reason about the state of the objects.\nA Note on Immutability Records ensure the immutability of their components by making all components provided in the constructor final. However, if a record contains a mutable object, for example an ArrayList, the object itself can still be modified. Furthermore, records can contain mutable static fields.\n Conciseness and Readability Java records offer a more concise and readable syntax compared to traditional Java classes. The streamlined syntax of records allows us to define our data fields and their types in a compact and visually appealing way, making it easier to understand and maintain our code.\nIn summary, Java records provide a powerful and efficient way to create data-centric classes. 
They simplify the process of defining and working with data objects, promote immutability, and enhance code readability. By leveraging the benefits of records, we can write cleaner and more maintainable code in our Java projects.\nThe Evolution of Java Records  Java 14 (March 2020): Records made their first appearance as a preview feature, giving us a glimpse into the exciting future of simplified Java programming. Java 15 (September 2020): Records remained in preview, evolving and improving thanks to valuable feedback from the community. Java 16 (March 2021): Records were officially established as a standard feature, indicating their readiness to shine in Java applications.  With Java records, we can experience a shift from complexity to clarity in Java programming.\nIn the following sections, we will dive into the details of Java records, including their syntax, use cases, and how they can bring elegance to our code.\nNotes on Preview Enablement And Why We Need To Know It  For Java versions before 14, records are not available as a language feature. For Java versions 14 and 15, we need to use the --enable-preview flag to use records. For Java versions 16 and above, records are a standard feature, and the --enable-preview flag is not required for using records.  Preview features are introduced in a specific Java version but are not considered part of the official language specification until they are finalized in a subsequent release.\nHere\u0026rsquo;s how we can use --enable-preview:\nCompilation:\njavac --enable-preview --release 14 SomeFile.java Execution:\njava --enable-preview SomeClass In the above commands:\n --enable-preview enables preview features during compilation and execution. --release 14 specifies the Java version we are targeting. Replace 14 with the appropriate version number if we are using a different version of Java.  
It is important to note that when using preview features, we might encounter changes or improvements in subsequent Java versions, and the syntax or behavior of these features could be refined before they become permanent parts of the language.\n👉 Always check the official Java documentation and release notes for the specific version we are using to understand the details of the preview features and any changes made in subsequent releases.\nThe configurations for IDEs depend on the IDE we are using.\n👉 Refer to the IDE documentation for specific details on code generation and other developer tools provided for implementing records.\n Let\u0026rsquo;s explore the challenges we face and how Java records can help solve them. We\u0026rsquo;ll also take a closer look at the syntax of Java records to gain a better understanding.\nChallenges Faced by Developers Creating and maintaining data-centric classes in Java traditionally required substantial effort.\nTraditional Java Results in Verbosity Overload Traditional Java classes required us to write verbose code for basic functionalities like constructors, getters, setters, equals(), hashCode(), and toString(). This verbosity cluttered the code, making it difficult to write, read and understand.\n// Verbose Java class public class Person { private String name; private int age; public Person(String name, int age) { this.name = name; this.age = age; } @Override public boolean equals(Object obj) { // Manually written equals() method  // ...  } @Override public int hashCode() { // Manually written hashCode() method  // ...  } @Override public String toString() { // Manually written toString() method  // ...  } // getters and setters } Developers Need to Write Boilerplate Code Writing repetitive boilerplate code was not only time-consuming but also error-prone. 
Any change in the class structure meant manually updating multiple methods, leading to maintenance nightmares.\nDevelopers Need to Implement Immutability Ensuring immutability, a key aspect of data integrity, demanded additional effort. We had to meticulously design classes to make them immutable, often leading to complex and convoluted code.\nHow Do Java Records Address These Issues? Java Records, with their concise syntax, automate essential method generation, eliminating boilerplate code. They prioritize immutability by default, ensuring data integrity without manual enforcement. Automatic generation of methods like equals(), hashCode(), and toString() simplifies development, enhancing code readability and maintainability. Classes become concise, improving clarity and facilitating quick comprehension.\nSyntax of Java Records Let\u0026rsquo;s dive into the syntax of Java records, covering its key elements, various types of constructors, methods, and how records compare with traditional Java classes:\npublic record Person(String name, int age) { // Constructor and methods are automatically generated. } In this example, Person is a Java record with two components: name (a String) and age (an int). The record keyword signifies the creation of a record class.\nA record acquires these members automatically:\n A private final field for each of its components A public read accessor method for each component with the same name and type of the component; in this example, these methods are Person::name() and Person::age() A public constructor whose signature is derived from the record components list. The constructor initializes each private field from the corresponding argument. 
Implementations of the equals() and hashCode() methods, which specify that two records are equal if they are of the same type and their corresponding record components are equal An implementation of the toString() method that includes the string representation of all the record\u0026rsquo;s components, with their names  Records extend java.lang.Record, are final, and cannot be extended.\nHere\u0026rsquo;s a breakdown of the essential components.\nTypes of Constructors Canonical Constructor When we declare the record, the canonical constructor is automatically generated behind the scenes:\npublic record Person(String name, int age) { // autogenerated code  // private final String name;  // private final int age;  // public Person(String name, int age) {  // this.name = name;  // this.age = age;  // } } public Person(String name, int age) is the canonical constructor (all-arguments constructor) generated for us.\nCompact Constructor A compact constructor enables developers to add custom logic during object initialization. This constructor is explicitly declared within the record and can perform additional operations beyond the simple initialization.\nHowever, unlike a regular class constructor, a compact constructor doesn\u0026rsquo;t declare a formal parameter list; the record\u0026rsquo;s components serve as its implicit parameters.\nBy providing an opportunity to perform custom operations during initialization, developers can ensure that their objects are correctly and completely initialized before they are used. 
For example, we can implement data validation rules before initializing the fields:\npublic record Person(String name, int age) { // compact constructor defined by the developer  // Remember that it is not like a standard constructor  // We do not declare the constructor arguments  // Just add a body with custom logic  public Person { // do not accept bad input  if (age \u0026lt; 0) { throw new IllegalArgumentException(\u0026#34;Age must not be negative.\u0026#34;); } } } // Usage Person person = new Person(\u0026#34;Alice\u0026#34;, 30); // Valid Person invalidPerson = new Person(\u0026#34;Bob\u0026#34;, -5); // Throws IllegalArgumentException Here, the record Person has components name and age. Its compact constructor checks whether the age is negative and, if so, throws an IllegalArgumentException.\nDefault Constructor Records can have a default constructor that initializes all components to their default values. This must delegate, directly or indirectly, to the canonical constructor:\npublic record Person(String name, int age) { public Person() { // we must call the canonical constructor  this(\u0026#34;Foo\u0026#34;, 50); } } // Usage Person person = new Person(); // Components initialized to \u0026#34;Foo\u0026#34; and 50 In this example, the record ends up with the autogenerated canonical constructor plus our explicit no-arguments constructor:\n// autogenerated canonical constructor public Person(String name, int age) { this.name = name; this.age = age; } public Person() { this(\u0026#34;Foo\u0026#34;, 50); } Custom Constructor We can create custom constructors with a subset of parameters, allowing flexibility in object creation. 
This must delegate, directly or indirectly, to the canonical constructor:\npublic record Person(String name, int age) { public Person { if (name == null) { throw new IllegalArgumentException(\u0026#34;Name cannot be null.\u0026#34;); } } public Person(int age) { this(\u0026#34;Bob\u0026#34;, age); } } // Usage Person person = new Person(\u0026#34;Alice\u0026#34;, 30); // Valid Person unknownPerson = new Person(25); // Uses custom constructor with default name \u0026#34;Bob\u0026#34; In this example, we can create a custom constructor that only takes in the age parameter. This means that when we create an instance of the Person class, we only need to provide the age value. The other properties will be set to their default values.\nThis feature is especially useful when dealing with large and complex classes that have many properties. It allows developers to create objects quickly and efficiently without having to specify all the properties every time.\nHow to Access Components? Components in a record are implicitly final, making them immutable. Records automatically generate getters for components, providing read-only access:\nPerson person = new Person(\u0026#34;Alice\u0026#34;, 30); String name = person.name(); // Getter for \u0026#39;name\u0026#39; int age = person.age(); // Getter for \u0026#39;age\u0026#39; When working with records in Java, it is important to understand how to access their components. Components are the individual fields or properties that make up a record. For example, in the code above, the Person record has two components: name and age.\nTo access the components of a record, we first need to create an instance of the record using the new keyword. In this case, we are creating a new Person record and assigning it to a variable called person. 
The values Alice and 30 are passed as arguments to the Person constructor, which initializes the name and age components of the record.\nOnce we have an instance of the record, we can access its components using getters. Getters are methods that are automatically generated by the Java compiler for each component of a record. In the code above, we are using the name() and age() getters to retrieve the values of the name and age components, respectively. As these are read-only properties, there are no setters available for name and age.\nHow to Override Methods? Records automatically generate methods like equals(), hashCode(), and toString() based on their components. We can customize these methods if needed:\npublic record Person(String name, int age) { // Automatically generated methods can be overridden  @Override public String toString() { return String.format(\u0026#34;Person{name=\u0026#39;%s\u0026#39;, age=%d}\u0026#34;, name, age); } } // Usage Person person = new Person(\u0026#34;Alice\u0026#34;, 30); System.out.println(person); // Output: Person{name=\u0026#39;Alice\u0026#39;, age=30} How to Delegate Methods? Record methods can delegate behavior to other methods within the record or external methods. This enables code reuse and modular design:\npublic record Person(String name, int age) { public Person withName(String name) { return new Person(name, age); } } // Usage Person person = new Person(\u0026#34;Alice\u0026#34;, 50); // Person[name=Alice, age=50] Person newPerson = person.withName(\u0026#34;Tom\u0026#34;); // Person[name=Tom, age=50] How to Implement Methods? 
We can add additional methods to records, allowing them to encapsulate behavior related to the data:\npublic record Person(String name, int age) { public boolean isAdult() { return age \u0026gt;= 18; } } // Usage Person person = new Person(\u0026#34;Alice\u0026#34;, 30); System.out.println(person.isAdult()); // Output: true One-to-one Comparison Let\u0026rsquo;s compare a record with a traditional Java class to highlight the reduction in boilerplate code:\n// Traditional Java class public class Person { private String name; private int age; public Person(String name, int age) { this.name = name; this.age = age; } public String getName() { return name; } public int getAge() { return age; } @Override public boolean equals(Object obj) { // Manually written equals() method  // ...  } @Override public int hashCode() { // Manually written hashCode() method  // ...  } @Override public String toString() { // Manually written toString() method  // ...  } } The equivalent Java record achieves the same functionality with significantly less code:\n// Java record public record Person(String name, int age) {} Java records not only enhance readability and maintainability but also promote a more expressive and concise coding style, allowing us to focus on the core logic of our applications.\nMaintain the Overall Immutability of the Record If a record contains non-primitive or mutable components, ensure that these components are also immutable or use defensive copies to maintain the overall immutability of the record:\npublic record Address(String city, List\u0026lt;String\u0026gt; streets) { // Automatically generated methods ensure immutability, but caution with mutable components } List\u0026lt;String\u0026gt; streets = new ArrayList\u0026lt;\u0026gt;(); streets.add(\u0026#34;Street 1\u0026#34;); streets.add(\u0026#34;Street 2\u0026#34;); Address address = new Address(\u0026#34;City\u0026#34;, List.copyOf(streets)); streets.add(\u0026#34;Street 3\u0026#34;); 
System.out.println(address.streets()); // Output: [Street 1, Street 2] What Are the Best Practices for Using Records? Let\u0026rsquo;s explore best practices for using Java records effectively in projects, guidelines for maintaining code consistency and readability, and practical applications and conclusions.\nChoose Appropriate Use Cases Choosing the appropriate use cases for Java records is paramount. Utilize records for simple data carriers where immutability and value semantics are crucial. However, refrain from employing records for classes with complex business logic or abundant behavior:\n Use Case 1: Configuration Settings: Use records to represent configuration settings for an application. These settings are typically immutable, and their value semantics are crucial for consistent behavior across the application\u0026rsquo;s lifecycle. Use Case 2: DTOs in RESTful Services: Apply records for Data Transfer Objects (DTOs) in RESTful services. DTOs often serve as simple data carriers between the client and server, emphasizing immutability to prevent unintended modifications during data transmission.  Avoid Overcomplication Avoid overcomplicating records. Resist the temptation to overload them with unnecessary methods. Embrace simplicity, letting records manage fundamental operations like equals(), hashCode(), and toString():\n Use Case 1: Domain Entities: Employ records to model simple domain entities where basic CRUD operations suffice. Avoid cluttering these entities with unnecessary methods, letting records automatically handle essential operations for improved code maintainability. Use Case 2: Event Payloads: Use records to represent event payloads in an event-driven architecture. Overcomplicating these payloads with excessive methods is unnecessary; records naturally handle equality and string representation, simplifying event handling.  Ensure Immutability Ensuring immutability is key to leveraging the benefits of Java records. 
Keep components final to enforce immutability. In cases where a component is mutable, such as a collection, create defensive copies during construction to maintain the desired immutability:\n Use Case 1: Currency Conversion Rates: Represent currency conversion rates using records. As these rates are immutable once defined, marking the components as final ensures that the rates remain constant throughout the application\u0026rsquo;s execution. Use Case 2: Configuration Properties: Utilize records for storing configuration properties, such as database connection details. Immutability is crucial to prevent unintended modifications to these properties, ensuring the stability of the application\u0026rsquo;s configuration.  Avoid Business Logic in Records Steer clear of embedding business logic within records. Maintain the focus of records on data representation and delegate business logic to separate classes. This approach enhances maintainability and adheres to the principle of separation of concerns:\n Use Case 1: Employee Information: Model employee information using records but avoid incorporating complex business logic. Instead, delegate tasks like salary calculations or performance assessments to separate classes, adhering to the best practice of separation of concerns. Use Case 2: Sensor Readings: Use records to represent sensor readings in an Internet of Things (IoT) application. Keep records focused on data representation, and delegate any complex processing or analysis of sensor data to dedicated classes.  Maintain Readability To maintain readability, adhere to consistent naming conventions for both record classes and their components. Meaningful names for records and components contribute to code clarity, making it easier for developers to understand the structure and purpose of the classes:\n Use Case 1: User Preferences: Apply records to store user preferences in a system. 
Consistent naming conventions for record classes and components enhance code readability, making it easier for developers to understand and modify user preference-related code. Use Case 2: Geographic Coordinates: Represent geographic coordinates using records. Meaningful names for record classes and components contribute to the clarity of the code, allowing developers to quickly comprehend the structure and purpose of the geographic coordinate representation.  Ways to Master Java Records Let\u0026rsquo;s Recap the Key Concepts We learnt that Java records streamline data class creation, reducing redundant code. They prioritize immutability and value semantics, ensuring that instances of records represent fixed data values. With records, developers can focus on defining the structure and properties of their data without being burdened by repetitive code.\nThen we understood the automatic method generation offered by Java records. Java records automatically generate essential methods such as equals(), hashCode(), and toString(). This automation guarantees consistent behavior across different record instances, enhancing code reliability and reducing the likelihood of errors. By providing these methods out of the box, records simplify the development process and enable developers to work more efficiently.\nFinally, we covered the best practices for effective use of Java records. Adhering to best practices is essential for leveraging the benefits of Java records effectively. This includes selecting appropriate use cases where immutability and value semantics are crucial, such as simple data carriers. Additionally, maintaining readability through consistent naming conventions and avoiding overcomplication ensures that records remain manageable and easy to understand throughout the development lifecycle. 
Following these practices fosters a more robust and maintainable codebase.\nNext Steps We Should Take Here are a few steps you can take to master the Java record concepts:\n Apply the Concepts Learned: Do not just stop at understanding — implement records in projects. Create immutable, clean, and readable data classes. Experiment and Refine: Play with different scenarios and refine understanding. Experimentation leads to mastery. Share and Collaborate: Share knowledge with peers, and collaborate on projects to learn from real-world use cases. Refer to the Official Java Documentation: Dive deep into the official documentation to explore advanced topics and nuances. Read the JDK Enhancement Proposal for Java Records: Refer to the official proposal to learn more about the enhancements to the JDK for Java records.  Conclusion In this article, we learnt how Java records make data class construction easier by reducing repetitive code and automating the generation of methods such as equals(), hashCode(), and toString(). Following best practices promotes effective use, focusing on appropriate scenarios and preserving code readability. Adopting records results in more efficient and manageable Java codebases.\nEquipped with the knowledge of Java records, go, create, innovate, and let the code shape the future!\nHappy coding! 🚀\n","date":"February 11, 2024","image":"https://reflectoring.io/images/stock/0069-testcontainers-1200x628-branded_hu3b772680baa3d43165c81d2cadb1c4a7_781128_650x0_resize_q90_box.jpg","permalink":"/beginner-friendly-guide-to-java-records/","title":"Use Cases for Java Records"},{"categories":["Spring"],"contents":"The FasterXML Jackson library is the most popular JSON parser in Java. Spring internally uses this API for JSON parsing. For details on other commonly used Jackson annotations, refer to this article. Also, you can deep dive into another useful Jackson annotation @JsonView. In this article, we will look at how to use the @JsonCreator annotation. 
Subsequently, we will also take a look at a specific use case of using this annotation in the context of a Spring Boot application.\n Example Code This article is accompanied by a working code example on GitHub. What is @JsonCreator The @JsonCreator annotation is a part of the Jackson API that helps in deserialization. Deserialization is the process of converting a JSON string into a Java object. This is especially useful when we have multiple constructors/static factory methods for object creation. With the @JsonCreator annotation, we can specify which constructor/static factory method to use during the deserialization process.\nWorking With @JsonCreator Annotation In this section, we\u0026rsquo;ll look at a few use cases of how this annotation works.\nDeserializing Immutable Objects Java encourages creating immutable objects since they are thread-safe and easy to maintain. To get a better understanding of how to use immutable objects in Java, refer to this article. Let\u0026rsquo;s try to deserialize an immutable object UserData, which is defined as below:\n@JsonIgnoreProperties(ignoreUnknown=true) public class UserData { private final long id; private final String firstName; private final String lastName; private LocalDate createdDate; public UserData(long id, String firstName, String lastName) { this.id = id; this.firstName = firstName; this.lastName = lastName; this.createdDate = LocalDate.now(); } public UserData(long id, String firstName, String lastName, LocalDate createdDate) { this.id = id; this.firstName = firstName; this.lastName = lastName; this.createdDate = createdDate; } // Getters here... 
} Next, let\u0026rsquo;s try to deserialize the object with Jackson\u0026rsquo;s ObjectMapper class:\n@Test public void deserializeImmutableObjects() throws JsonProcessingException { String userData = objectMapper.writeValueAsString(MockedUsersUtility.getMockedUserData()); System.out.println(\u0026#34;USER: \u0026#34; + userData); UserData user = objectMapper.readValue(userData, UserData.class); assertNotNull(user); } In the above example, the writeValueAsString() method serializes the UserData object to a String. The readValue() method is responsible for deserializing the String back to a UserData object. When we run the test above, we see this error:\ncom.fasterxml.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of `com.reflectoring.userdetails.model.UserData` (no Creators, like default constructor, exist): cannot deserialize from Object value (no delegate- or property-based Creator) at [Source: (String)\u0026#34;{\u0026#34;id\u0026#34;:100,\u0026#34;firstName\u0026#34;:\u0026#34;Ranjani\u0026#34;,\u0026#34;lastName\u0026#34;:\u0026#34;Harish\u0026#34;, createdDate\u0026#34;:\u0026#34;2024-01-16\u0026#34;}\u0026#34;; line: 1, column: 2] The error message explicitly states that the test could not run successfully as there was an error during deserialization. This is because the ObjectMapper class, by default, looks for a no-arg constructor to set the values. Since our Java class is immutable, it has neither a no-arg constructor nor setter methods to set values. Also, we see that the UserData class has multiple constructors. Therefore, we would need a way to instruct the ObjectMapper class to use the correct constructor for deserialization. 
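The \u0026ldquo;no Creators, like default constructor, exist\u0026rdquo; part of the message corresponds to something we can observe with plain reflection: an immutable class simply declares no zero-argument constructor for Jackson to instantiate. A stdlib-only sketch (the UserData here is a trimmed-down, hypothetical stand-in for the class above):

```java
// Stdlib-only illustration: Jackson's default path needs a no-arg
// constructor to instantiate the object before populating it, and an
// immutable class like this trimmed-down UserData stand-in has none.
class UserData {
    private final long id;
    private final String firstName;

    UserData(long id, String firstName) {
        this.id = id;
        this.firstName = firstName;
    }
}

class CreatorCheck {
    // Returns true if the type declares a zero-argument constructor.
    static boolean hasNoArgConstructor(Class<?> type) {
        try {
            type.getDeclaredConstructor();
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasNoArgConstructor(UserData.class));      // false
        System.out.println(hasNoArgConstructor(StringBuilder.class)); // true
    }
}
```

This is essentially the discovery step that fails before Jackson even looks at the JSON; @JsonCreator replaces it by pointing Jackson at a usable constructor.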
Let\u0026rsquo;s modify our code to use the @JsonCreator annotation as follows:\n@JsonCreator(mode = JsonCreator.Mode.PROPERTIES) public UserData(@JsonProperty(\u0026#34;id\u0026#34;) long id, @JsonProperty(\u0026#34;firstName\u0026#34;) String firstName, @JsonProperty(\u0026#34;lastName\u0026#34;) String lastName) { this.id = id; this.firstName = firstName; this.lastName = lastName; this.createdDate = LocalDate.now(); } Now, when we run our test again, we can see that the deserialization is successful. Let\u0026rsquo;s look at the additional annotations we\u0026rsquo;ve added to get this working.\n By applying the @JsonCreator annotation to the constructor, the Jackson deserializer knows which constructor needs to be used. JsonCreator.Mode.PROPERTIES indicates that the creator should match the incoming object with the constructor arguments. This is the most commonly used JsonCreator mode. We annotate all the constructor arguments with @JsonProperty for the creator to map the arguments.  Understanding All Available JsonCreator Modes We can pass one of these four values as parameters to this annotation:\n JsonCreator.Mode.PROPERTIES : This is the most commonly used mode where every constructor/factory argument is annotated with @JsonProperty to indicate the name of the property to bind to. JsonCreator.Mode.DELEGATING : Single-argument constructor/factory method without JsonProperty annotation for the argument. Here, Jackson first binds JSON into type of the argument, and then calls the creator. Most commonly, we want to use this option in conjunction with JsonValue (used for serialization). JsonCreator.Mode.DEFAULT : If we do not choose any mode or the DEFAULT mode, Jackson decides internally which of the PROPERTIES / DELEGATING modes are applied. JsonCreator.Mode.DISABLED : This mode indicates the creator method is not to be used.  
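To make the difference between the PROPERTIES and DELEGATING modes concrete without Jackson on the classpath, here is a plain-Java analogy (assuming Java 16+; the factory names are made up for illustration): a PROPERTIES-style creator receives each JSON field as its own named argument, while a DELEGATING-style creator receives the whole JSON object already bound to a single intermediate value, such as a Map:

```java
import java.util.Map;

// Plain-Java analogy for the two main creator modes (factory names are
// illustrative; no Jackson involved):
record User(long id, String firstName) {

    // PROPERTIES style: each JSON field arrives as its own named argument.
    static User fromProperties(long id, String firstName) {
        return new User(id, firstName);
    }

    // DELEGATING style: the whole JSON object is first bound to a single
    // intermediate value (here a Map), then handed to a one-argument creator.
    static User fromMap(Map<String, String> raw) {
        return new User(Long.parseLong(raw.get("id")), raw.get("firstName"));
    }
}

class ModeDemo {
    public static void main(String[] args) {
        User a = User.fromProperties(100L, "Ranjani");
        User b = User.fromMap(Map.of("id", "100", "firstName", "Ranjani"));
        System.out.println(a.equals(b)); // prints "true"
    }
}
```

Both routes end at the same object; the modes only differ in how the incoming JSON is matched to the creator's parameters.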
In the further sections, we will take a look at examples and how to use them effectively.\nAdditional Use Cases Let\u0026rsquo;s look at a few scenarios that will help us understand how and when to use the @JsonCreator annotation.\nSingle @JsonCreator in a Class To understand this, let\u0026rsquo;s first add @JsonCreator annotation with the same mode to two constructors in the same class as below:\n@JsonCreator(mode = JsonCreator.Mode.PROPERTIES) public UserData(@JsonProperty(\u0026#34;id\u0026#34;) long id, @JsonProperty(\u0026#34;firstName\u0026#34;) String firstName, @JsonProperty(\u0026#34;lastName\u0026#34;) String lastName) { this.id = id; this.firstName = firstName; this.lastName = lastName; this.createdDate = LocalDate.now(); } @JsonCreator(mode = JsonCreator.Mode.PROPERTIES) public UserData(@JsonProperty(\u0026#34;id\u0026#34;) long id, @JsonProperty(\u0026#34;firstName\u0026#34;) String firstName, @JsonProperty(\u0026#34;lastName\u0026#34;) String lastName, @JsonProperty(\u0026#34;createdDate\u0026#34;) LocalDate createdDate) { this.id = id; this.firstName = firstName; this.lastName = lastName; this.createdDate = createdDate; } When we try to run our test, it will fail with an error:\ncom.fasterxml.jackson.databind.exc.InvalidDefinitionException: Conflicting property-based creators: already had explicitly marked creator [constructor for `com.reflectoring.userdetails.model.UserData` (3 args), annotations: {interface com.fasterxml.jackson.annotation.JsonCreator= @com.fasterxml.jackson.annotation.JsonCreator(mode=PROPERTIES)}, encountered another: [constructor for `com.reflectoring.userdetails.model.UserData` (4 args), annotations: {interface com.fasterxml.jackson.annotation.JsonCreator= @com.fasterxml.jackson.annotation.JsonCreator(mode=PROPERTIES)} at [Source: (String)\u0026#34;{\u0026#34;id\u0026#34;:100,\u0026#34;firstName\u0026#34;:\u0026#34;Ranjani\u0026#34;,\u0026#34;lastName\u0026#34;:\u0026#34;Harish\u0026#34; 
,\u0026#34;createdDate\u0026#34;:\u0026#34;2024-01-16\u0026#34;}\u0026#34;; line: 1, column: 1] As we can see, the error mentions \u0026ldquo;Conflicting property-based creators\u0026rdquo;. That indicates that the Jackson deserializer could not resolve the constructor to be used during deserialization, as we have annotated both constructors with @JsonCreator. When we remove one of them, the test runs successfully.\nUsing @JsonProperty with @JsonCreator for the PROPERTIES Mode To understand this, let\u0026rsquo;s remove the @JsonProperty annotations from the constructor arguments:\n@JsonCreator(mode = JsonCreator.Mode.PROPERTIES) public UserData(long id, String firstName, String lastName) { this.id = id; this.firstName = firstName; this.lastName = lastName; this.createdDate = LocalDate.now(); } When we run our test, we see this error:\ncom.fasterxml.jackson.databind.exc.InvalidDefinitionException: Invalid type definition for type `com.reflectoring.userdetails.model.UserData`: Argument #0 of constructor [constructor for `com.reflectoring.userdetails.model.UserData` (3 args), annotations: {interface com.fasterxml.jackson.annotation.JsonCreator= @com.fasterxml.jackson.annotation.JsonCreator(mode=PROPERTIES)} has no property name (and is not Injectable): can not use as property-based Creator at [Source: (String)\u0026#34;{\u0026#34;id\u0026#34;:100,\u0026#34;firstName\u0026#34;:\u0026#34;Ranjani\u0026#34;,\u0026#34;lastName\u0026#34;:\u0026#34;Harish\u0026#34; ,\u0026#34;createdDate\u0026#34;:\u0026#34;2024-01-16\u0026#34;}\u0026#34;; line: 1, column: 1] This indicates that adding the @JsonProperty annotation is mandatory in this mode. As we can see, the JSON property names and the deserialized object\u0026rsquo;s property names are exactly the same. In such cases, there is an alternative way where we can skip the @JsonProperty annotation. 
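The alternative relies on constructor parameter names being retained in the bytecode, which javac only does when invoked with the -parameters flag; Jackson\u0026rsquo;s ParameterNamesModule then reads them via java.lang.reflect.Parameter. A quick stdlib probe (class name illustrative) shows whether names survived compilation:

```java
import java.lang.reflect.Parameter;

// Probe whether constructor parameter names were retained in the bytecode.
// Without `javac -parameters`, names come back as synthetic arg0, arg1, ...
// and isNamePresent() returns false.
class NamedArgs {
    NamedArgs(String firstName, int age) {}

    // Helper used by the probe; returns -1 if the constructor is missing.
    static int ctorParamCount() {
        try {
            return NamedArgs.class
                    .getDeclaredConstructor(String.class, int.class)
                    .getParameterCount();
        } catch (NoSuchMethodException e) {
            return -1;
        }
    }

    public static void main(String[] args) throws NoSuchMethodException {
        for (Parameter p : NamedArgs.class
                .getDeclaredConstructor(String.class, int.class)
                .getParameters()) {
            System.out.println(p.getName() + " present=" + p.isNamePresent());
        }
    }
}
```

If the probe prints arg0/arg1 with present=false, the module has nothing to work with and we are back to needing @JsonProperty.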
Let\u0026rsquo;s modify our ObjectMapper bean:\n@Bean public ObjectMapper objectMapper() { ObjectMapper mapper = new ObjectMapper(); mapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false); mapper.registerModule(new JavaTimeModule()); mapper.registerModule(new ParameterNamesModule()); return mapper; } If we run our tests now, we can see that the test passes even without the use of @JsonProperty. This is because we registered the Jackson Parameters module by adding mapper.registerModule(new ParameterNamesModule()). This is a Jackson module that allows accessing parameter names without explicitly specifying the @JsonProperty annotation.\nAdditional Notes Another option is to register the Jdk8Module which includes the ParameterNamesModule along with other modules. Refer to the documentation for its usage.\n Apply @JsonCreator to Static Factory Methods Another way of creating immutable Java objects is via static factory methods. We can apply the @JsonCreator annotation to static factory methods as well:\n@JsonCreator(mode = JsonCreator.Mode.PROPERTIES) public static UserData getUserData(long id, String firstName, String lastName) { return new UserData(id, firstName, lastName, LocalDate.now()); } Here, since we\u0026rsquo;ve registered the ParameterNamesModule, we need not add the @JsonProperty annotation.\nUse a DELEGATING JsonCreator Mode Let\u0026rsquo;s see how to deserialize an object using the DELEGATING JsonCreator mode:\n@JsonCreator(mode = JsonCreator.Mode.DELEGATING) public UserData(Map\u0026lt;String,String\u0026gt; map) throws JsonProcessingException { this.id = Long.parseLong(map.get(\u0026#34;id\u0026#34;)); this.firstName = map.get(\u0026#34;firstName\u0026#34;); this.lastName = map.get(\u0026#34;lastName\u0026#34;); } When we pass a serialized Map object to the ObjectMapper class, it will automatically make use of the DELEGATING mode to create the UserData object:\npublic static Map\u0026lt;String, String\u0026gt; getMockedUserDataMap() { 
return Map.of(\u0026#34;id\u0026#34;, \u0026#34;100\u0026#34;, \u0026#34;firstName\u0026#34;,\u0026#34;Ranjani\u0026#34;, \u0026#34;lastName\u0026#34;,\u0026#34;Harish\u0026#34;); } @Test public void jsonCreatorWithDelegatingMode3() throws JsonProcessingException { String userDataJson = objectMapper.writeValueAsString(getMockedUserDataMap()); assertNotNull(userDataJson); UserData data = objectMapper.readValue(userDataJson, UserData.class); } Now that we understand how to use the @JsonCreator annotation in Java, let\u0026rsquo;s look at a specific use case where this annotation is required in a Spring Boot application.\nUsing @JsonCreator In a Spring Boot Application Let\u0026rsquo;s create a basic Spring Boot application with REST endpoints that support pagination. Pagination is the concept of dividing a large number of records into parts/slices. It is particularly useful when we have to create REST endpoints to be consumed by a front-end application. The Spring paging framework converts this data for us, which makes retrieving data easier. This sample application only demonstrates the usage of @JsonCreator. To understand how pagination works in Spring Boot, refer to this article.\nThis sample User Application uses an H2 database to store and retrieve data. The application is configured to run on port 8083, so let\u0026rsquo;s start our application first:\nmvnw clean verify spring-boot:run (for Windows) ./mvnw clean verify spring-boot:run (for Linux) Create Paginated Data Using Spring REST Let\u0026rsquo;s understand how @JsonCreator can be used to create paginated data in Spring. For this purpose, we will first create a GET endpoint that returns a paged object. 
Here, we have converted the List\u0026lt;User\u0026gt; returned from the DB into a paged object:\n@GetMapping(\u0026#34;/userdetails/page\u0026#34;) public ResponseEntity\u0026lt;Page\u0026lt;UserData\u0026gt;\u0026gt; getPagedUser( @RequestParam(defaultValue = \u0026#34;0\u0026#34;) int page, @RequestParam(defaultValue = \u0026#34;20\u0026#34;) int size) { List\u0026lt;UserData\u0026gt; usersList = userService.getUsers(); // First let\u0026#39;s split the List depending on the pagesize  int totalCount = usersList.size(); int startIndex = page * size; int endIndex = Math.min(startIndex + size, totalCount); List\u0026lt;UserData\u0026gt; pageContent = usersList.subList(startIndex, endIndex); Page\u0026lt;UserData\u0026gt; employeeDtos = new PageImpl\u0026lt;\u0026gt;(pageContent, PageRequest.of(page, size), totalCount); return ResponseEntity.ok() .body(employeeDtos); } Here, the Page class is an interface in the org.springframework.data.domain package. When we make a GET request to the endpoint http://localhost:8083/data/userdetails/page, we see the JSON response as below:\n{ \u0026#34;content\u0026#34;: [ { \u0026#34;id\u0026#34;: 1000, \u0026#34;firstName\u0026#34;: \u0026#34;Abel\u0026#34;, \u0026#34;lastName\u0026#34;: \u0026#34;Doe\u0026#34;, \u0026#34;createdDate\u0026#34;: \u0026#34;2024-01-26\u0026#34; }, { \u0026#34;id\u0026#34;: 1001, \u0026#34;firstName\u0026#34;: \u0026#34;Abuela\u0026#34;, \u0026#34;lastName\u0026#34;: \u0026#34;Marc\u0026#34;, \u0026#34;createdDate\u0026#34;: \u0026#34;2024-01-26\u0026#34; } // 18 more elements here  ], \u0026#34;pageable\u0026#34;: { \u0026#34;sort\u0026#34;: { \u0026#34;empty\u0026#34;: true, \u0026#34;sorted\u0026#34;: false, \u0026#34;unsorted\u0026#34;: true }, \u0026#34;offset\u0026#34;: 0, \u0026#34;pageNumber\u0026#34;: 0, \u0026#34;pageSize\u0026#34;: 20, \u0026#34;paged\u0026#34;: true, \u0026#34;unpaged\u0026#34;: false }, \u0026#34;last\u0026#34;: false, \u0026#34;totalElements\u0026#34;: 45, 
\u0026#34;totalPages\u0026#34;: 3, \u0026#34;size\u0026#34;: 20, \u0026#34;number\u0026#34;: 0 // More paged elements } The endpoint returned us some pagination metadata like the totalElements, totalPages, sort, size. Here, we see that the application has a total of 45 records, which is divided into 3 pages where each page has a maximum of 20 records. This JSON response gives us the first 20 elements from the List. As we can see, we were able to create paginated data successfully. In the next section, let\u0026rsquo;s look at how to test this GET endpoint.\nTesting Paginated Data using TestRestTemplate In this section, let\u0026rsquo;s write a Spring Boot test to understand how @JsonCreator helps us with deserialization when we call the GET endpoint using TestRestTemplate. Here, the Spring Boot test uses the same H2 database to retrieve data:\n@Test void givenGetData_whenRestTemplateExchange_thenReturnsPageOfUser() { ResponseEntity\u0026lt;Page\u0026lt;UserData\u0026gt;\u0026gt; responseEntity = restTemplate.exchange( \u0026#34;http://localhost:\u0026#34; + port + \u0026#34;/data/userdetails/page\u0026#34;, HttpMethod.GET, null, new ParameterizedTypeReference\u0026lt;Page\u0026lt;UserData\u0026gt;\u0026gt;() { }); assertEquals(200, responseEntity.getStatusCodeValue()); Page\u0026lt;UserData\u0026gt; restPage = responseEntity.getBody(); assertNotNull(restPage); assertEquals(45, restPage.getTotalElements()); assertEquals(20, restPage.getSize()); } When we run this test, we see an error as below:\norg.springframework.http.converter.HttpMessageConversionException: Type definition error: [simple type, class org.springframework.data.domain.Page]; nested exception is com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of `org.springframework.data.domain.Page` (no Creators, like default constructor, exist): abstract types either need to be mapped to concrete types, have custom deserializer, or contain additional type information at 
[Source: (org.springframework.util.StreamUtils$NonClosingInputStream); line: 1, column: 1] The error indicates that the response couldn\u0026rsquo;t be mapped to concrete types.\nNext, let\u0026rsquo;s update the test to use PageImpl which is a concrete implementation of the Page interface as below:\n@Test void givenGetData_whenRestTemplateExchange_thenReturnsPageOfUser() { addDataToDB(); ResponseEntity\u0026lt;PageImpl\u0026lt;UserData\u0026gt;\u0026gt; responseEntity = restTemplate.exchange( \u0026#34;http://localhost:\u0026#34; + port + \u0026#34;/data/userdetails/page\u0026#34;, HttpMethod.GET, null, new ParameterizedTypeReference\u0026lt;PageImpl\u0026lt;UserData\u0026gt;\u0026gt;() { }); assertEquals(200, responseEntity.getStatusCodeValue()); PageImpl\u0026lt;UserData\u0026gt; restPage = responseEntity.getBody(); assertNotNull(restPage); assertEquals(45, restPage.getTotalElements()); } When we run this test, we see this error now:\norg.springframework.http.converter.HttpMessageConversionException: Type definition error: [simple type, class org.springframework.data.domain.PageImpl]; nested exception is com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of `org.springframework.data.domain.PageImpl` (no Creators, like default constructor, exist): cannot deserialize from Object value (no delegate- or property-based Creator) at [Source: (org.springframework.util.StreamUtils$NonClosingInputStream); line: 1] The Jackson API was not able to map the pagination metadata into the PageImpl class. 
This is because the PageImpl class provided by Spring Data does not provide a constructor that Jackson can use to directly map the pagination metadata.\n@JsonCreator to the Rescue To resolve this issue, let\u0026rsquo;s create a class that extends PageImpl, use the @JsonCreator annotation, and explicitly specify the mapping:\n@JsonIgnoreProperties(ignoreUnknown = true) public class RestPageImpl\u0026lt;T\u0026gt; extends PageImpl\u0026lt;T\u0026gt; { @JsonCreator(mode = JsonCreator.Mode.PROPERTIES) public RestPageImpl(@JsonProperty(\u0026#34;content\u0026#34;) List\u0026lt;T\u0026gt; content, @JsonProperty(\u0026#34;number\u0026#34;) int number, @JsonProperty(\u0026#34;size\u0026#34;) int size, @JsonProperty(\u0026#34;totalElements\u0026#34;) Long totalElements, @JsonProperty(\u0026#34;pageable\u0026#34;) JsonNode pageable, @JsonProperty(\u0026#34;last\u0026#34;) boolean last, @JsonProperty(\u0026#34;totalPages\u0026#34;) int totalPages, @JsonProperty(\u0026#34;sort\u0026#34;) JsonNode sort, @JsonProperty(\u0026#34;numberOfElements\u0026#34;) int numberOfElements) { super(content, PageRequest.of(number, numberOfElements), totalElements); } } Let\u0026rsquo;s update the test to make use of the RestPageImpl class and rerun it:\n@Test void givenGetData_whenRestTemplateExchange_thenReturnsPageOfUser() { ResponseEntity\u0026lt;RestPageImpl\u0026lt;UserData\u0026gt;\u0026gt; responseEntity = restTemplate.exchange( \u0026#34;http://localhost:\u0026#34; + port + \u0026#34;/data/userdetails/page\u0026#34;, HttpMethod.GET, null, new ParameterizedTypeReference\u0026lt;RestPageImpl\u0026lt;UserData\u0026gt;\u0026gt;() { }); assertEquals(200, responseEntity.getStatusCodeValue()); RestPageImpl\u0026lt;UserData\u0026gt; restPage = responseEntity.getBody(); assertNotNull(restPage); assertEquals(45, restPage.getTotalElements()); assertEquals(20, restPage.getSize()); } Now, the test runs successfully.\nAddressing Similar Scenarios Another scenario where we would need to 
follow a similar approach is when we use a Spring client such as RestTemplate to consume an API that returns a paged response. In such a case, we can use the @JsonCreator annotation explained in the example above to help Jackson deserialize the response.\nConclusion In this article, we took a closer look at how to make use of the @JsonCreator annotation, including some examples. Also, we created a simple Spring Boot application that uses pagination to demonstrate how this annotation comes in handy during the deserialization process.\n","date":"February 9, 2024","image":"https://reflectoring.io/images/stock/0112-decision-1200x628-branded_hu7f90dfae195e1917856533d5015220f4_81515_650x0_resize_q90_box.jpg","permalink":"/spring-jsoncreator/","title":"Deserialize with Jackson's @JsonCreator in a Spring Boot Application"},{"categories":["Node"],"contents":"Google Cloud Pub/Sub employs a publish-subscribe model, streamlining modern software communication and reshaping how information is shared. Publishers send messages to topics, and subscribers interested in those messages can retrieve them flexibly and asynchronously. This approach redefines the rules of engagement for both microservices and monolith applications.\nIn this article, we\u0026rsquo;ll explore the workings of Google Cloud Pub/Sub, delve into common use cases, and demonstrate how to seamlessly integrate Google Pub/Sub with Node.js for real-world scenarios.\nPrerequisites To follow this tutorial, ensure you have the following:\n Basic knowledge of JavaScript and Node.js. A Google account An API Client (e.g. Postman) Ngrok installed on your computer.   Example Code This article is accompanied by a working code example on GitHub. What Is Google Cloud Pub/Sub? Google Cloud Pub/Sub is a scalable message queuing service that ensures asynchronous and reliable messaging between various applications and services. It offers flexibility, supporting one-to-many, many-to-one, and many-to-many communication patterns. 
This service is designed to track changes in different applications and communicate these updates to diverse systems in real-time.\nHow Google Cloud Pub/Sub Works Consider two servers, server A and server B, requiring communication. Traditionally, server A directly notifies server B of changes using synchronous HTTP calls. However, this method has drawbacks: if server B is busy or slow, server A faces problems with communication.\nTo tackle these issues, we can switch to asynchronous communication using a Pub/Sub pattern. Pub/Sub allows systems to work independently, solving issues like unavailability and queuing. It also introduces flexibility and scalability to our systems.\nUsing Pub/Sub, server A (publisher) publishes events related to changes, categorizing them as a topic, and server B (subscriber) subscribes to these events. The publisher-subscription model allows messages to be disseminated to all subscriptions associated with a specific topic.\nThis approach provides flexibility to subscribers like server B, who can choose to pull messages at their convenience or have messages pushed to a specified endpoint. The push mechanism ensures proactive message delivery, creating a dynamic and adaptable communication framework for diverse application needs.\nFor a deeper understanding of Pub/Sub, let\u0026rsquo;s explore some of its key terminology and common use cases.\nGoogle Cloud Pub/Sub Terminology  Publisher: These are services that add new messages to a topic. Topics can have a single or multiple publishers Message: This is the data exchanged between services. It can be stored as a text or byte string, offering flexibility in formatting. Messages usually indicate events, such as data modification or action completion. Topic: This is like a message folder. We can add messages to or read messages from it. We can have multiple topics, and each topic is for a different type of message. 
Messages are put into specific topics, and we choose which topic we want messages from. This helps specify the messages we get. We can make as many topics as we want. Subscription: This is the act of subscribing to a topic. By subscribing to a topic, we express interest in receiving messages published on that specific topic. Each subscription is associated with a particular topic. Creating a subscription is a key step in the Google Pub/Sub messaging system to establish the link between a subscriber and the messages published on a topic. Receiving Messages: Subscribers receive messages from subscribed topics based on the subscription settings. There are two ways to receive messages:  Pull: Subscribers manually pull the latest messages through a direct request. Push: Subscribers request that all new messages be pushed to a provided endpoint URL.   Acknowledgment (Ack): Subscribers play a vital role by acknowledging each received message. This acknowledgment ensures that the same messages won\u0026rsquo;t be sent (Push) or read (Pull) repeatedly. If a message is not acknowledged, it triggers the assumption that it needs to be resent. Acknowledgment is a vital step in maintaining the efficiency of the messaging system, preventing redundant message delivery. Subscribers: These are services designed to receive messages from one or more topics. Subscribers have the flexibility to subscribe to multiple topics, providing a versatile and comprehensive method for message reception.  Common Pub/Sub Use Cases:  Real-time Event Processing: Monitor and react in real-time to user interactions, system malfunctions, and business events. Parallel Processing: Efficiently distribute and manage numerous tasks concurrently for improved performance. Tracking Database Changes: Keep a watchful eye on database changes and respond to them instantly for timely and effective updates.  
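To make the acknowledgment semantics described above concrete, here is a tiny in-memory simulation (illustrative only, not the real @google-cloud/pubsub client, which we set up later in this article): messages that are never acknowledged remain pending and are delivered again on the next pull.

```javascript
// Tiny in-memory simulation of Pub/Sub ack semantics (illustrative sketch,
// not the real Google Cloud client). Unacked messages are redelivered.
class FakeSubscription {
  constructor() {
    this.pending = [];
  }
  publish(data) {
    this.pending.push({ data, acked: false });
  }
  // "Pull": hand back every message that has not been acknowledged yet.
  pull() {
    return this.pending.filter(m => !m.acked);
  }
  ack(message) {
    message.acked = true;
  }
}

const sub = new FakeSubscription();
sub.publish("user created: alice");
sub.publish("user created: bob");

// First pull delivers both messages; we acknowledge only the first one.
const firstPull = sub.pull();
sub.ack(firstPull[0]);

// A second pull redelivers only the message that was never acked.
const redelivered = sub.pull();
console.log(redelivered.map(m => m.data)); // only the unacknowledged message remains
```

The real service behaves analogously: until a subscriber calls ack() on a received message (or the acknowledgment deadline passes), Pub/Sub considers it undelivered and will send it again.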
Next, let\u0026rsquo;s start the process of setting up our Google Cloud Pub/Sub for integration into a Node.js app.\nSetting up a Google Cloud Pub/Sub in Our Node.js Application. To begin using Google Cloud Pub/Sub, we must first configure and create a Google Cloud Pub/Sub instance. Here are the steps to do this:\n Log in to the Google Cloud Console   After logging in, locate the Pub/Sub section in the left-hand menu or use the search bar.   Next, we will create a new project. A project provides a logical grouping of resources, making it easier to manage and organize Google Cloud resources. Click on the CREATE PROJECT button to create one.  Name the project nodejs-pub-sub and click the CREATE button to create a new project.\n To establish our channel for publishing and subscribing to messages, we will start by creating our topic. Return to the Pub/Sub page, and click on the CREATE TOPIC button. This action prompts a configuration interface where we can define crucial parameters for our topic.   Input the topic name user_creation and click the CREATE button.   This action will create our user_creation topic and a default subscription named user_creation-sub will be automatically generated since we left the Add default subscription box checked.   Subscriptions define how messages are delivered to subscribers, and specifying the delivery type is crucial for receiving real-time updates. Next, we will create two subscriptions with delivery types pull and push for our subscribers to connect with a topic.  To create these custom subscriptions, click on the CREATE SUBSCRIPTION button and create a pull or push subscription following the steps in the upcoming sections.\nCreate a Pull Subscription  Subscription ID: email_subscription_pull Specify the Topic Name Select the delivery type as pull Click on the CREATE button at the bottom of the page to initiate the subscription creation process. 
Create a Push Subscription  Subscription ID: email_subscription_push Specify the Topic Name Select the delivery type as push. Specify the live Endpoint URL for Pub/Sub to notify our subscriber service about new messages. Click on the CREATE button at the bottom of the page to finalize the subscription setup.  The HTTPS_LIVE_URL parameter used above signifies the HTTPS host URL where the subscriber service is hosted. For push subscriptions, Google Cloud Pub/Sub requires all endpoints to be deployed on HTTPS.\nNext, to connect to Google Cloud Pub/Sub from a Node.js application, authentication credentials are necessary. This typically involves obtaining a service account key. Let\u0026rsquo;s proceed by setting up our service account.\nSetting up a Service Account for Google Cloud Pub/Sub This involves configuring a service account with Pub/Sub access on the Google Cloud Console. A service account acts as a means for our application to authenticate itself with Google Cloud services. Once created, we can download its key in JSON format. The service account key contains essential information for our application to prove its identity and gain access to the Pub/Sub functionalities.\nFollow these steps to set up a service account:\n Navigate to the Pub/Sub Section and enable the Pub/Sub API for our project Next, in the Google Cloud Console, go to the IAM \u0026amp; Admin section using the search bar. Click on Service accounts   Then click the Create Service Account button.   Enter a service account name nodejs_app-pub-sub and a description, then click on the CREATE AND CONTINUE button.   Next, to give us full access to topics and subscriptions, filter and assign the role Pub/Sub Admin to our service account nodejs_app-pub-sub. After that, click the Continue button.    We can skip the Grant users access to this service account option since we are not giving access to other users or groups in this article. 
Finally, click on the Done button.\n  This should redirect us to the Service accounts page.\n    Next, click to open the newly created service account and locate the key section.\n  Click on Add Key, then select Create new Key, choose the JSON option, and download the JSON file. This file is essential for authentication within our Node.js project directory for our Pub/Sub setup.\n  Great! We\u0026rsquo;re ready to begin integrating Google Cloud Pub/Sub into our application.\nHow to Integrate Google Pub/Sub in Node.js In our Node.js application, we will use Pub/Sub to handle sending data from a User profile creation service to an Email subscription service in real-time. Instead of basic HTTP, Google Pub/Sub connects the User Service and Email Service in real-time, preventing data loss during service downtime and ensuring scalability and flexibility when adding new services.\nHere\u0026rsquo;s the process: When a new user is created, the User service announces it through a Pub/Sub topic. The Email service listens to this topic and promptly receives the data input.\nTo implement this, we\u0026rsquo;ll create the logic for our User service (publisher) in user-pub.js and the Email service (subscriber) in email-sub.js. Each service will have its own routes and controllers, but both will share the same pub-sub configuration.\nLet\u0026rsquo;s kick off by setting up our Node.js application. Copy and paste the following commands into your terminal:\nmkdir nodejs-pub-sub cd nodejs-pub-sub npm init -y Above, we created a new folder for our application and initialized a Node.js project within it.\nNow, let\u0026rsquo;s install the required packages:\nnpm install @google-cloud/pubsub express Here, @google-cloud/pubsub manages Pub/Sub functionality, serving as a fully managed real-time messaging service for sending and receiving messages between applications. 
Meanwhile, express is a Node.js framework designed to streamline API development.\nNext, to create the needed folders and files for our application, run the following:\nmkdir src src/routes \\  src/controllers \\  src/helper touch src/user-pub.js \\  src/email-sub.js \\  src/routes/email.js \\  src/routes/user.js \\  src/controllers/emailController.js \\  src/controllers/userController.js \\  src/helper/pub-sub-config.js Our file structure is nearly complete. Lastly, move the service account key we downloaded earlier into the src/helper folder.\nOur application\u0026rsquo;s file structure should now look like this:\nNext up, we will start by writing our Pub/Sub helper functions. Copy and paste the following code into the src/helper/pub-sub-config.js file:\nconst { PubSub } = require(\u0026#34;@google-cloud/pubsub\u0026#34;); const path = require(\u0026#34;path\u0026#34;); const keyFilePath = path.join(__dirname, \u0026#34;nodejs-pub-sub.json\u0026#34;); const projectId = \u0026#34;nodejs-pub-sub\u0026#34;; // Create an instance of PubSub with the project ID and service account key const pubSubClient = new PubSub({ projectId, keyFilename: keyFilePath, }); const publishMessage = async (topicName, payload) =\u0026gt; { const dataBuffer = Buffer.from(JSON.stringify(payload)); try { const messageId = await pubSubClient .topic(topicName) .publishMessage({ data: dataBuffer }); console.log(`Message ${messageId} published.`); return messageId; } catch (error) { console.error(`Received error while publishing: ${error.message}`); } }; const listenForPullMessages = async (subscriptionName, timeout) =\u0026gt; { const subscription = pubSubClient.subscription(subscriptionName); let messageCount = 0; let data = []; const messageHandler = message =\u0026gt; { const jsonData = JSON.parse(message.data); data.push({ id: message.id, attributes: message.attributes, ...jsonData, }); messageCount += 1; message.ack(); }; subscription.on(\u0026#34;message\u0026#34;, messageHandler); setTimeout(() =\u0026gt; { 
console.log(\u0026#34;Message Pulled: \\n\u0026#34;, data); console.log(`${messageCount} message(s) received.`); subscription.removeListener(\u0026#34;message\u0026#34;, messageHandler); }, timeout * 1000); }; const listenForPushMessages = payload =\u0026gt; { const message = Buffer.from(payload, \u0026#34;base64\u0026#34;).toString(\u0026#34;utf-8\u0026#34;); let parsedMessage = JSON.parse(message); console.log(\u0026#34;Message Pushed: \\n\u0026#34;, parsedMessage); return parsedMessage; }; module.exports = { publishMessage, listenForPullMessages, listenForPushMessages, }; The above snippet is where we define all functions that will allow us to carry out all of our Pub/Sub-related tasks, where:\n publishMessage: This function takes two parameters, topicName and payload. It serializes the JSON payload into a buffer and publishes it to the specified topic upon execution. listenForPullMessages: This subscriber function pulls messages broadcasted to a subscription associated with a topic. When called, this function listens for the given timeout (in seconds) to messages distributed by the publisher. listenForPushMessages: This function receives a message from a configured subscriber endpoint and parses the buffer into JSON format for consumption by individual subscribers.  Now, let\u0026rsquo;s begin creating our Publisher and Subscriber services.\nBuilding the Publisher Service The core logic for our publisher resides in the User service. 
This service accepts user data, creates a user profile, and then uses Pub/Sub to send a message to connected services (email), notifying them of the newly created user.\nCopy and paste the following code into the src/user-pub.js file, which will serve as the entry point for our User service:\nconst express = require(\u0026#34;express\u0026#34;); const app = express(); const userRoute = require(\u0026#34;./routes/user\u0026#34;); const PORT = 3000; app.use(express.urlencoded({ extended: true })); app.use(express.json()); app.use(\u0026#34;/api/user\u0026#34;, userRoute); app.listen(PORT, () =\u0026gt; { console.log(`User service is running at http://localhost:${PORT}`); }); For our User service route, copy and paste the following code into the src/routes/user.js file:\nconst express = require(\u0026#34;express\u0026#34;); const router = express.Router(); const userController = require(\u0026#34;../controllers/userController\u0026#34;); router.get(\u0026#34;/\u0026#34;, userController.welcome); router.post(\u0026#34;/create\u0026#34;, userController.createUser); module.exports = router; Next, in the src/controllers/userController.js file, paste the following code:\nconst { publishMessage } = require(\u0026#34;../helper/pub-sub-config\u0026#34;); const topicName = \u0026#34;user_creation\u0026#34;; const welcome = (req, res) =\u0026gt; { return res.status(200).json({ success: true, message: \u0026#34;Welcome to User Profile Service :)\u0026#34;, }); }; const createUser = async (req, res) =\u0026gt; { let userObj = req.body; // create user profile logic goes here....  let messageId = await publishMessage(topicName, userObj); return res.status(200).json({ success: true, message: `Message ${messageId} published :)`, }); }; module.exports = { welcome, createUser }; In the above code, the welcome function is a simple welcome message for our service, while the createUser function will contain our user profile creation logic. 
It invokes the Pub/Sub function to publish the user data. This publication allows any subscriber subscribed to our user_creation topic to listen and receive the update.\nOur publisher service is ready, and we can begin creating users and publishing their data to our Pub/Sub. Next, let\u0026rsquo;s set up our Email service.\nBuilding the Subscribers The Email service acts as our subscriber.\nTo start creating our subscriber logic, copy and paste the following code into the src/email-sub.js file to set up our email service server entry:\nconst express = require(\u0026#34;express\u0026#34;); const app = express(); const emailRoute = require(\u0026#34;./routes/email\u0026#34;); const PORT = 5000; app.use(express.urlencoded({ extended: true })); app.use(express.json()); app.use(\u0026#34;/api/email\u0026#34;, emailRoute); app.listen(PORT, () =\u0026gt; { console.log(`Email Notification Service is running at http://localhost:${PORT}`); }); For the Email service routes, paste the following in the src/routes/email.js file:\nconst express = require(\u0026#34;express\u0026#34;); const router = express.Router(); const emailController = require(\u0026#34;../controllers/emailController\u0026#34;); router.get(\u0026#34;/\u0026#34;, emailController.welcome); router.post(\u0026#34;/pull\u0026#34;, emailController.pullEmail); router.post(\u0026#34;/push\u0026#34;, emailController.pushEmail); module.exports = router; Next, for our email service configuration, copy and paste the following into the src/controllers/emailController.js:\nconst { listenForPullMessages, listenForPushMessages, } = require(\u0026#34;../helper/pub-sub-config\u0026#34;); const subscriptionName = \u0026#34;email_subscription_pull\u0026#34;; const timeout = 60; const welcome = (req, res) =\u0026gt; { return res.status(200).json({ success: true, message: \u0026#34;Welcome to Email Service :)\u0026#34;, }); }; const pullEmail = async (req, res) =\u0026gt; { try { await listenForPullMessages(subscriptionName, timeout); return 
res.status(200).json({ success: true, message: \u0026#34;Pull message received successfully :)\u0026#34;, }); } catch (error) { return res.status(500).json({ success: false, message: \u0026#34;Couldn\u0026#39;t receive pull message :(\u0026#34;, data: error.message, }); } }; const pushEmail = async (req, res) =\u0026gt; { try { let messageResponse = await listenForPushMessages(req.body.message.data); return res.status(200).json({ success: true, message: \u0026#34;Push Message received successfully :)\u0026#34;, data: messageResponse, }); } catch (error) { return res.status(500).json({ success: false, message: \u0026#34;Couldn\u0026#39;t receive push message :(\u0026#34;, data: error, }); } }; module.exports = { welcome, pullEmail, pushEmail }; In the above code, we have a welcome function sending a simple welcome message for our service. The pullEmail function contains logic to pull published messages from the topic our subscriber is subscribed to. The pushEmail function works as a webhook ready to receive calls from our pub/sub about any new updates or user data.\nNow that our entire setup is complete, let\u0026rsquo;s begin testing.\nTesting Our Services To do this, open two terminals in your project directory to run both our services concurrently.\nTo run the User service (publisher), copy and paste the following command into one of the terminals:\nnode src/user-pub.js The above command will start our User service.\nNext, start the Email (subscriber) service by running the following command in the other terminal.\nnode src/email-sub.js This will start our Email service.\nWe\u0026rsquo;ll start testing by publishing a message through the User Service, then pulling and pushing this published message from our Email service.\nPublish Message To publish a message, initiate a POST request to the User Service route /api/user/create using Postman. 
This process will generate a new user profile, and the user data will be published through Pub/Sub to our topic, allowing it to be accessed by any subscriber.\nNow, we can retrieve the published message in two ways:\nReceive Message Via Pull To verify message publication, our Email service can make an API call to the /api/email/pull route using the pull subscription.\nAfter making this call, check the Email service terminal; we should be able to see the received data in our terminal logs.\nReceiving Message Via Push To receive push messages, we will use an HTTPS endpoint for our webhook. You can achieve this with either a live endpoint or, if using local routes, employ Ngrok to expose them.\nNgrok creates secure tunnels from localhost, making a locally running web service remotely accessible. It\u0026rsquo;s often used during development and testing for exactly this purpose.\nNgrok generates a public URL (e.g. https://random-string.ngrok.io) to forward traffic to our local server. This URL is used to create a PUSH subscriber endpoint for Google Pub/Sub. Consequently, when a message is published, Google Pub/Sub pushes the message data to our subscriber endpoint.\nIf you haven\u0026rsquo;t set up Ngrok on your device yet, click here\nOpen a new terminal to run the Ngrok commands. Copy and paste the following command to confirm that Ngrok is installed:\nngrok --version To expose our Email service port as an HTTPS URL, run the following command, ensuring that the port used matches your Email service port:\nngrok http 5000 Using Ngrok, our server is currently operational on the highlighted HTTPS URL displayed in the image.\nReturn to the Google Cloud Console and update the push subscription webhook endpoint using our Ngrok URL. It will look like this:\nFinally, attempt to create a new User. 
After completing this action, inspect the Email Service terminal log to observe the automatic pushing of published data to our subscriber webhook in real-time.\nConclusion In this tutorial, we\u0026rsquo;ve covered what Google Cloud Pub/Sub is and how to integrate it into a Node.js application. Typically, our code will do a lot more than just print log messages; however, this should be sufficient to kickstart our utilization of Pub/Sub in Node.js applications. For more detailed information on Google Cloud Pub/Sub, refer to the official documentation.\n","date":"January 12, 2024","image":"https://reflectoring.io/images/stock/0076-airmail-1200x628-branded_hu11b26946a4345a7ce4c5465e5e627838_150840_650x0_resize_q90_box.jpg","permalink":"/google-pub-sub-in-node-js/","title":"Publish and Receive Messages with Google Pub/Sub in Node.js"},{"categories":["Kotlin"],"contents":"One of Kotlin\u0026rsquo;s standout features is its robust support for functions, including high-order functions and inline functions. In this blog post, we will delve into the world of functions in Kotlin, exploring the differences between high-order functions and inline functions and understanding when to leverage each for optimal code efficiency.\nHigh-Order Functions High-order functions are a fundamental concept in functional programming and play a crucial role in Kotlin\u0026rsquo;s functional paradigm. A high-order function is a function that takes one or more functions as parameters or returns a function. This ability to treat functions as first-class citizens allows for more modular and flexible code.\nThe primary advantage of high-order functions lies in their ability to promote code reuse. By passing functions as parameters, developers can create generic functions that operate on various data types or behaviors. 
This enhances the readability and maintainability of the code by isolating specific functionalities into separate functions.\nConsider the following example of a simple high-order function in Kotlin:\nfun \u0026lt;T\u0026gt; List\u0026lt;T\u0026gt;.customFilter(predicate: (T) -\u0026gt; Boolean): List\u0026lt;T\u0026gt; { val result = mutableListOf\u0026lt;T\u0026gt;() for (item in this) { if (predicate(item)) { result.add(item) } } return result } fun main() { val numbers = listOf(1, 2, 3, 4, 5, 6) val evenNumbers = numbers.customFilter { it % 2 == 0 } println(evenNumbers) // Output: [2, 4, 6] } In this example, customFilter is a high-order function that takes a predicate function as a parameter. This allows us to filter a list based on various conditions without duplicating filtering logic.\nInline Functions While high-order functions provide modularity and reusability, they may introduce some runtime overhead due to the creation of function objects. This is where inline functions come into play. Inline functions, as the name suggests, are a way to instruct the compiler to replace the function call site with the actual body of the function during compilation. This process eliminates the overhead associated with function calls and can lead to more efficient code execution.\nConsider the following example:\ninline fun executeOperation(a: Int, b: Int, operation: (Int, Int) -\u0026gt; Int): Int { return operation(a, b) } fun main() { val result = executeOperation(5, 3) { x, y -\u0026gt; x + y } println(result) // Output: 8 } In this example, the executeOperation function is declared as inline. When the compiler encounters a call to this function, it replaces the call site with the actual body of the function, avoiding the creation of additional function objects. This can be particularly beneficial in scenarios where performance is a critical factor.\nWhen to use High-Order vs. Inline Functions? 
High-order functions are useful when we want to abstract over actions, parameterize behavior, or create more flexible and reusable code.\nInline functions are used when we want to eliminate the overhead of function calls and improve performance. The inline keyword suggests to the compiler that it should insert the function\u0026rsquo;s code directly at the call site, avoiding the overhead of function invocation.\nEach lambda passed to a higher-order function is compiled into a function object that must be allocated in memory, which adds runtime overhead. We use inline functions to solve this problem: the inline keyword prevents the creation of a new function object for each call to a higher-order function.\nHowever, inline functions might increase the number of cache misses. Inlining might cause an inner loop to span multiple cache lines, and that might cause thrashing of the memory cache.\nArguments for High-Order Functions   Modularity and Reusability: We want to create modular and reusable code by abstracting certain behaviors into functions.\n  Flexibility: We need the flexibility to pass different functions dynamically.\n  Code Readability: We prioritize readability and maintainability, as high-order functions contribute to cleaner and more organized code.\n  Arguments for Inline Functions   Performance: Performance is a critical concern, and we want to eliminate the overhead associated with function calls.\n  Code Size: We aim to reduce the size of our compiled code by inlining small functions.\n  Lambda Expressions: We frequently work with lambda expressions and want to minimize the overhead introduced by function objects.\n  Conclusion Kotlin\u0026rsquo;s support for high-order functions and inline functions provides developers with powerful tools to write expressive and efficient code. High-order functions enhance code modularity and readability, while inline functions offer performance improvements by eliminating the overhead of function calls. 
Choosing between the two depends on the specific needs of our application, striking a balance between readability and performance for optimal code efficiency. As we navigate the world of Kotlin development, understanding when to leverage each type of function will empower us to write clean, maintainable and performant code.\n","date":"January 10, 2024","image":"https://reflectoring.io/images/stock/0134-kotlin-1200x628-branded_hu9055033ca4379bb3460170d98856f4c0_131591_650x0_resize_q90_box.jpg","permalink":"/kotlin-high-order-vs-inline-functions/","title":"High-Order Functions vs. Inline Functions in Kotlin"},{"categories":["Kotlin"],"contents":"A design pattern is a general repeatable solution to a commonly occurring problem in software design. In this blog post, we will delve into various design patterns and explore how they can be effectively implemented in Kotlin.\nAdvantages of Using Design Patterns Reusability Design patterns promote the reuse of proven solutions to common problems. By applying design patterns, we can use established templates to solve recurring design issues, saving time and effort in development.\nMaintainability Design patterns enhance code maintainability by providing a clear and organized structure. When developers are familiar with common design patterns, it becomes easier for them to understand and modify the code, reducing the chances of introducing bugs during maintenance.\nScalability Design patterns contribute to the scalability of a codebase by providing modular and extensible solutions. As our application evolves, we can add new features or modify existing ones without having to overhaul the entire codebase.\nAbstraction and Encapsulation Design patterns often involve abstraction and encapsulation, which help in hiding the complexity of the implementation details. 
This separation allows developers to focus on high-level design decisions without getting bogged down by low-level details.\nFlexibility Design patterns make code more flexible and adaptable to change. When the structure of our software is based on well-established patterns, it becomes easier to introduce new functionality or modify existing behavior without affecting the entire system.\nCode Understandability Design patterns provide a common vocabulary for developers. When a developer sees a particular pattern being used, they can quickly understand the intent and functionality without delving deeply into the implementation details.\nTestability Code that follows design patterns is often more modular and, therefore, more easily testable. This makes it simpler to write unit tests and ensures that changes to one part of the codebase do not inadvertently break other components.\nBuilder Pattern The Builder design pattern is used for constructing complex objects by separating the construction process from the actual representation. 
It is particularly useful when an object has a large number of parameters, and we want to provide a more readable and flexible way to construct it.\nHere\u0026rsquo;s an example of implementing the Builder design pattern in Kotlin:\n// Product class data class Computer( val cpu: String, val ram: String, val storage: String, val graphicsCard: String ) // Concrete builder class class ComputerBuilder { private var cpu: String = \u0026#34;\u0026#34; private var ram: String = \u0026#34;\u0026#34; private var storage: String = \u0026#34;\u0026#34; private var graphicsCard: String = \u0026#34;\u0026#34; fun cpu(cpu: String): ComputerBuilder { this.cpu = cpu return this } fun ram(ram: String): ComputerBuilder { this.ram = ram return this } fun storage(storage: String): ComputerBuilder { this.storage = storage return this } fun graphicsCard(graphicsCard: String): ComputerBuilder { this.graphicsCard = graphicsCard return this } fun build(): Computer { return Computer(cpu, ram, storage, graphicsCard) } } fun main() { // Build the computer with a specific configuration  val builder = ComputerBuilder() val gamingComputer = builder .cpu(\u0026#34;Intel Core i9\u0026#34;) .ram(\u0026#34;32GB DDR4\u0026#34;) .storage(\u0026#34;1TB SSD\u0026#34;) .graphicsCard(\u0026#34;NVIDIA RTX 3080\u0026#34;) .build() } In this code, the Computer class serves as the product to be built, encapsulating attributes like CPU, RAM, storage, and graphics card. The ComputerBuilder class declares a chainable method for configuring each attribute, progressively setting the values until build() produces the final object. In the client code within the main function, a ComputerBuilder instance is utilized to construct a Computer object with a specific configuration by method chaining. 
This approach enhances readability and flexibility, especially when dealing with objects with numerous optional or interchangeable components, as the Builder pattern facilitates a step-by-step construction process.\nNote that the Builder pattern is not as commonly used in Kotlin as it is in Java, for example, because Kotlin provides named parameters, which can be used in a constructor to a very similar effect to a Builder:\nfun main() { // Without using the Builder pattern  val simpleComputer = Computer( cpu = \u0026#34;Intel Core i5\u0026#34;, ram = \u0026#34;16GB DDR4\u0026#34;, storage = \u0026#34;512GB SSD\u0026#34;, graphicsCard = \u0026#34;NVIDIA GTX 1660\u0026#34; ) } Using named parameters in a constructor as in the example above is better for null safety, because it doesn\u0026rsquo;t accept null values and the values do not have to be set to empty strings (\u0026quot;\u0026quot;) as in the Builder example.\nSingleton Pattern The Singleton design pattern ensures that a class has only one instance and provides a global point of access to that instance. Every single place where it is used will make use of the same instance, hence reducing memory usage and ensuring consistency. It is useful when exactly one object is needed to coordinate actions across the system, such as managing a shared resource or controlling a single point of control (e.g., a configuration manager or a logging service). The pattern typically involves a private constructor, a method to access the instance, and lazy initialization to create the instance only when it\u0026rsquo;s first requested.\nIn Kotlin, the Singleton design pattern can be implemented in several ways. Here are two common approaches.\nObject Declaration The most straightforward way to implement a Singleton in Kotlin is by using an object declaration. An object declaration defines a singleton class and creates an instance of it at the same time. 
The instance is created lazily when it\u0026rsquo;s first accessed.\nHere is a code example of using the object declaration method:\nobject MySingleton { // Singleton properties and methods go here  fun doSomething() { println(\u0026#34;Singleton is doing something\u0026#34;) } } To use our singleton:\nMySingleton.doSomething() Companion Object Another approach is to use a companion object within a class. This approach allows us to have more control over the initialization process, and we can use it when we need to perform some additional setup.\nLet\u0026rsquo;s see how we can make use of the companion object method:\nclass MySingleton private constructor() { companion object { private val instance: MySingleton by lazy { MySingleton() } fun getInstance(): MySingleton { return instance } } // Singleton properties and methods go here  fun doSomething() { println(\u0026#34;Singleton is doing something\u0026#34;) } } To use the singleton:\nval singletonInstance = MySingleton.getInstance() singletonInstance.doSomething() By using by lazy, the instance is created only when it\u0026rsquo;s first accessed, making it a lazy-initialized singleton.\nAdapter Pattern The Adapter design pattern allows the interface of an existing class to be used as another interface. 
It is often used to make existing classes work with others without modifying their source code.\nIn Kotlin, we can implement the Adapter pattern using either class-based or object-based adapters.\nHere\u0026rsquo;s an example of the class-based Adapter pattern:\n// Target interface that the client expects interface Printer { fun print() } // Adaptee (the class to be adapted) class ModernPrinter { fun startPrint() { println(\u0026#34;Printing in a modern way\u0026#34;) } } // Class-based Adapter class ModernPrinterAdapter(private val modernPrinter: ModernPrinter) : Printer { override fun print() { modernPrinter.startPrint() } } // Client code fun main() { val modernPrinter = ModernPrinter() val legacyPrinter: Printer = ModernPrinterAdapter(modernPrinter) legacyPrinter.print() } In this example:\n Printer is the target interface that the client expects. ModernPrinter is the class to be adapted (Adaptee). ModernPrinterAdapter is the class-based adapter that adapts the ModernPrinter to the Printer interface.  Decorator Pattern The decorator design pattern allows behavior to be added to an individual object, either statically or dynamically without affecting the behavior of other objects from the same class. 
In Kotlin, we can implement the decorator pattern using interfaces and classes.\nHere\u0026rsquo;s a simple example of the decorator pattern in Kotlin:\n// Component interface interface Car { fun drive() } // Concrete component class BasicCar : Car { override fun drive() { println(\u0026#34;Move from A to B\u0026#34;) } } // Extension function for Car interface fun Car.decorate(initialize: () -\u0026gt; Unit): Car { return object : Car { override fun drive() { initialize() this@decorate.drive() } } } fun main() { // Create a basic car  val myBasicCar: Car = BasicCar() // Decorate it to make it an offroad car  val offroadCar: Car = myBasicCar.decorate { println(\u0026#34;Configure offroad driving mode\u0026#34;) } // Drive the offroad car  offroadCar.drive() } In this example, the decorate extension function is added to the Car interface. This extension function takes a lambda parameter called initialize, which represents the additional behavior to be added. It returns a new instance of Car that incorporates the specified behavior before calling the original drive method.\nIn the main function, the basic car is decorated using the decorate extension function to create an offroad car, and then the offroad car is driven.\nThe output for this code example will be:\nConfigure offroad driving mode Move from A to B Facade Pattern The Facade design pattern provides a simplified interface to a set of interfaces in a subsystem, making it easier to use. It involves creating a class that represents a higher-level, unified interface that makes it easier for clients to interact with a subsystem. This can help simplify the usage of complex systems by providing a single entry point.\nLet\u0026rsquo;s create a simple example of the Facade pattern in Kotlin. 
Consider a subsystem with multiple classes that handle different aspects of a computer system, CPU, Memory, and Hard Drive.\nWe\u0026rsquo;ll create a ComputerFacade class to provide a simple interface for the client to interact with the subsystem:\n// Subsystem classes class CPU { fun processData() { println(\u0026#34;Processing data...\u0026#34;) } } class Memory { fun load() { println(\u0026#34;Loading data into memory...\u0026#34;) } } class HardDrive { fun readData() { println(\u0026#34;Reading data from hard drive...\u0026#34;) } } // Facade class class ComputerFacade( private val cpu: CPU, private val memory: Memory, private val hardDrive: HardDrive ) { fun start() { println(\u0026#34;ComputerFacade starting...\u0026#34;) cpu.processData() memory.load() hardDrive.readData() println(\u0026#34;ComputerFacade started successfully.\u0026#34;) } } // Client code fun main() { // Create subsystem components  val cpu = CPU() val memory = Memory() val hardDrive = HardDrive() // Create facade and pass subsystem components to it  val computerFacade = ComputerFacade(cpu, memory, hardDrive) // Client interacts with the subsystem through the facade  computerFacade.start() } In this example, the ComputerFacade class serves as a simplified interface for starting the computer system. The client interacts with the subsystem (CPU, Memory, and HardDrive) through the ComputerFacade without needing to know the details of each subsystem component.\nBy using the Facade pattern, the complexity of the subsystem is hidden from the client, and the client can interact with the system through a more straightforward and unified interface provided by the facade. This can be especially useful when dealing with large and complex systems.\nObserver Pattern The Observer design pattern is a behavioral design pattern where an object, known as the subject, maintains a list of its dependents, known as observers, that are notified of any state changes. 
This pattern is often used to implement distributed event handling systems.\nHere\u0026rsquo;s a simple example:\n// Define an interface for the observer interface Observer { fun update(value: Int) } // Define a concrete observer that implements the Observer interface class ValueObserver(private val name: String) : Observer { override fun update(value: Int) { println(\u0026#34;$name received value: $value\u0026#34;) } } // Define a subject that emits values and notifies observers class ValueSubject { private val observers = mutableListOf\u0026lt;Observer\u0026gt;() private val coroutineScope = CoroutineScope(Dispatchers.Default) fun addObserver(observer: Observer) { observers.add(observer) } fun removeObserver(observer: Observer) { observers.remove(observer) } private val observable: Flow\u0026lt;Int\u0026gt; = flow { while (true) { emit(Random.nextInt(0..1000)) delay(100) } } fun startObserving() { coroutineScope.launch { observable.collect { value -\u0026gt; notifyObservers(value) } } } private fun notifyObservers(value: Int) { for (observer in observers) { observer.update(value) } } } Note that this example requires imports from kotlinx.coroutines (CoroutineScope, Dispatchers, Flow, flow, launch, delay) and kotlin.random.Random. In summary, this code sets up a system where multiple observers can be attached to a subject ValueSubject. The subject emits random values in a continuous stream, and each attached observer ValueObserver is notified whenever a new value is emitted. The observer then prints a message indicating that it received the new value.\nStrategy Pattern The Strategy design pattern is a behavioral design pattern that defines a family of algorithms, encapsulates each algorithm and makes them interchangeable. 
It allows a client to choose an algorithm from a family of algorithms at runtime without modifying the client code.\nHere\u0026rsquo;s an example:\n// Define the strategy interface interface PaymentStrategy { fun pay(amount: Double) } // Concrete implementation of a payment strategy: Credit Card class CreditCardPaymentStrategy(private val cardNumber: String, private val expiryDate: String, private val cvv: String) : PaymentStrategy { override fun pay(amount: Double) { // Logic for credit card payment  println(\u0026#34;Paid $amount using credit card $cardNumber\u0026#34;) } } // Concrete implementation of a payment strategy: PayPal class PayPalPaymentStrategy(private val email: String) : PaymentStrategy { override fun pay(amount: Double) { // Logic for PayPal payment  println(\u0026#34;Paid $amount using PayPal with email $email\u0026#34;) } } // Context class that uses the strategy class ShoppingCart(private val paymentStrategy: PaymentStrategy) { fun checkout(amount: Double) { paymentStrategy.pay(amount) } } fun main() { // Client code  val creditCardStrategy = CreditCardPaymentStrategy(\u0026#34;1234-5678-9012-3456\u0026#34;, \u0026#34;12/24\u0026#34;, \u0026#34;123\u0026#34;) val payPalStrategy = PayPalPaymentStrategy(\u0026#34;john.doe@example.com\u0026#34;) val shoppingCart1 = ShoppingCart(creditCardStrategy) val shoppingCart2 = ShoppingCart(payPalStrategy) shoppingCart1.checkout(100.0) shoppingCart2.checkout(50.0) } In this example, the PaymentStrategy interface defines the contract for payment strategies, and CreditCardPaymentStrategy and PayPalPaymentStrategy are concrete implementations of the strategy. The ShoppingCart class represents the context that uses the selected payment strategy.\nBy using the Strategy Design Pattern, we can easily add new payment strategies without modifying the existing code. 
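To make this extensibility concrete, here is a minimal, self-contained sketch. The PaymentStrategy interface mirrors the one from the example above; the bank-transfer strategy and its IBAN parameter are hypothetical additions, not part of the original article:

```kotlin
// Strategy interface as in the article's example
interface PaymentStrategy {
    fun pay(amount: Double)
}

// Hypothetical new strategy, added without touching any existing class
class BankTransferPaymentStrategy(private val iban: String) : PaymentStrategy {
    override fun pay(amount: Double) {
        // Logic for bank transfer payment
        println("Paid $amount using bank transfer to $iban")
    }
}

fun main() {
    // The new strategy plugs into any code programmed against PaymentStrategy
    val strategy: PaymentStrategy = BankTransferPaymentStrategy("DE00 1234 5678 9012 3456 00")
    strategy.pay(75.0)
}
```

Because the context depends only on the PaymentStrategy abstraction, the new class can be passed to a ShoppingCart constructor just like the existing strategies.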
We can create new classes that implement the PaymentStrategy interface and use them interchangeably in the ShoppingCart context.\nFactory Design Pattern The Factory Design Pattern is a creational pattern that provides an interface for creating objects in a super class but allows subclasses to alter the type of objects that will be created. This pattern is often used when a class cannot anticipate the class of objects it must create.\nHere\u0026rsquo;s an example of a simple Factory Design Pattern in Kotlin:\n// Product interface interface Product { fun create(): String } // Concrete Product A class ConcreteProductA : Product { override fun create(): String { return \u0026#34;Product A\u0026#34; } } // Concrete Product B class ConcreteProductB : Product { override fun create(): String { return \u0026#34;Product B\u0026#34; } } // Factory interface interface ProductFactory { fun createProduct(): Product } // Concrete Factory A class ConcreteFactoryA : ProductFactory { override fun createProduct(): Product { return ConcreteProductA() } } // Concrete Factory B class ConcreteFactoryB : ProductFactory { override fun createProduct(): Product { return ConcreteProductB() } } // Client code fun main() { val factoryA: ProductFactory = ConcreteFactoryA() val productA: Product = factoryA.createProduct() println(productA.create()) val factoryB: ProductFactory = ConcreteFactoryB() val productB: Product = factoryB.createProduct() println(productB.create()) } In this example, we have a Product interface representing the product to be created. We have two concrete product classes, ConcreteProductA and ConcreteProductB, which implement the Product interface. 
We also have a ProductFactory interface with a method createProduct() and two concrete factory classes, ConcreteFactoryA and ConcreteFactoryB, which implement this interface and return instances of the respective concrete products.\nAbstract Factory Pattern The abstract factory design pattern provides an interface for creating families of related or dependent objects without specifying their concrete classes. This pattern is often used when a system needs to be independent of how its objects are created, composed, and represented, and when the client code should work with multiple families of objects. In Kotlin, we can implement the abstract factory pattern using interfaces, abstract classes, and concrete classes.\nLet\u0026rsquo;s look at a simple example to illustrate the abstract factory pattern:\n// Abstract Product A interface ProductA { fun operationA(): String } // Concrete Product A1 class ConcreteProductA1 : ProductA { override fun operationA(): String { return \u0026#34;Product A1\u0026#34; } } // Concrete Product A2 class ConcreteProductA2 : ProductA { override fun operationA(): String { return \u0026#34;Product A2\u0026#34; } } // Abstract Product B interface ProductB { fun operationB(): String } // Concrete Product B1 class ConcreteProductB1 : ProductB { override fun operationB(): String { return \u0026#34;Product B1\u0026#34; } } // Concrete Product B2 class ConcreteProductB2 : ProductB { override fun operationB(): String { return \u0026#34;Product B2\u0026#34; } } // Abstract Factory interface AbstractFactory { fun createProductA(): ProductA fun createProductB(): ProductB } // Concrete Factory 1 class ConcreteFactory1 : AbstractFactory { override fun createProductA(): ProductA { return ConcreteProductA1() } override fun createProductB(): ProductB { return ConcreteProductB1() } } // Concrete Factory 2 class ConcreteFactory2 : AbstractFactory { override fun createProductA(): ProductA { return ConcreteProductA2() } override fun createProductB(): ProductB { return 
ConcreteProductB2() } } // Client Code fun main() { val factory1: AbstractFactory = ConcreteFactory1() val productA1: ProductA = factory1.createProductA() val productB1: ProductB = factory1.createProductB() println(productA1.operationA()) // Output: Product A1  println(productB1.operationB()) // Output: Product B1  val factory2: AbstractFactory = ConcreteFactory2() val productA2: ProductA = factory2.createProductA() val productB2: ProductB = factory2.createProductB() println(productA2.operationA()) // Output: Product A2  println(productB2.operationB()) // Output: Product B2 } In this example, AbstractFactory declares the creation methods for two types of products ProductA and ProductB. Concrete factories ConcreteFactory1 and ConcreteFactory2 implement these creation methods to produce specific products ConcreteProductA1, ConcreteProductA2, ConcreteProductB1 and ConcreteProductB2. The client code can then use a specific factory to create products without needing to know the concrete classes of those products.\nThis structure allows for easy extension of the system by introducing new products and factories without modifying the existing client code.\nKey Differences Between the Abstract Factory and Factory Patterns  The Factory pattern uses inheritance and relies on subclasses to handle the object creation, allowing a class to delegate the instantiation to its subclasses. The Abstract Factory pattern uses object composition and provides an interface for creating families of related or dependent objects. It involves multiple factory methods, each responsible for creating a different type of object within the family. The Factory pattern creates one product, while the Abstract Factory pattern creates families of related products. In the Factory pattern, the client code uses the concrete creator class and relies on polymorphism to instantiate the product. 
In the Abstract Factory pattern, the client code uses the abstract factory to create families of products, and it\u0026rsquo;s designed to work with multiple families of products.  Conclusion In this article, we learnt what design patterns are, the advantages that they offer in our software development process, and how to implement various design patterns in Kotlin.\n","date":"January 2, 2024","image":"https://reflectoring.io/images/stock/0107-puzzle-1200x628-branded_hu54061c11751e36c4d330c77baa0f8ec2_367477_650x0_resize_q90_box.jpg","permalink":"/kotlin-design-patterns/","title":"Design Patterns in Kotlin"},{"categories":["Kotlin"],"contents":"Kotlin, a modern JVM programming language, brings a range of features to enhance expressiveness and conciseness. Two key constructs in Kotlin that contribute to its versatility are sealed classes and enum classes. In this blog post, we\u0026rsquo;ll delve into the characteristics, use cases, and differences between sealed classes and enum classes.\nSealed Classes Sealed classes in Kotlin offer a powerful tool for defining restricted class hierarchies. When a class is marked as sealed, it means that the class hierarchy is finite and every subclass must be declared within the same file. This restriction allows the compiler to perform exhaustive checks when used in a when expression, ensuring that all possible subclasses are covered.\nA typical use case for sealed classes is modeling hierarchical data structures, such as expressions in a compiler or states in a finite state machine. 
By sealing the class hierarchy, developers can guarantee that they handle all possible cases, making the code more robust and less prone to bugs.\nHere\u0026rsquo;s an example of a sealed class representing mathematical expressions:\nsealed class MathExpression { data class Value(val value: Double) : MathExpression() data class Addition(val left: MathExpression, val right: MathExpression) : MathExpression() data class Subtraction(val left: MathExpression, val right: MathExpression) : MathExpression() object Undefined : MathExpression() } Here\u0026rsquo;s an example of using the MathExpression sealed class in a when expression:\nfun evaluateExpression(expression: MathExpression): Double { return when (expression) { is MathExpression.Value -\u0026gt; expression.value is MathExpression.Addition -\u0026gt; evaluateExpression(expression.left) + evaluateExpression(expression.right) is MathExpression.Subtraction -\u0026gt; evaluateExpression(expression.left) - evaluateExpression(expression.right) MathExpression.Undefined -\u0026gt; Double.NaN // Handle undefined case  } } fun main() { // Example 1: Simple value  val valueExpression = MathExpression.Value(42.0) val result1 = evaluateExpression(valueExpression) println(\u0026#34;Result 1: $result1\u0026#34;) // Output: Result 1: 42.0  // Example 2: Addition  val additionExpression = MathExpression.Addition(MathExpression.Value(10.0), MathExpression.Value(20.0)) val result2 = evaluateExpression(additionExpression) println(\u0026#34;Result 2: $result2\u0026#34;) // Output: Result 2: 30.0  // Example 3: Subtraction  val subtractionExpression = MathExpression.Subtraction(MathExpression.Value(30.0), MathExpression.Value(5.0)) val result3 = evaluateExpression(subtractionExpression) println(\u0026#34;Result 3: $result3\u0026#34;) // Output: Result 3: 25.0  // Example 4: Undefined  val undefinedExpression = MathExpression.Undefined val result4 = evaluateExpression(undefinedExpression) println(\u0026#34;Result 4: 
$result4\u0026#34;) // Output: Result 4: NaN } In this example, the evaluateExpression() function takes a MathExpression as a parameter and uses a when expression to handle different cases.\nEnum Classes Enum classes, short for enumerated classes, are another feature that Kotlin inherits from Java but enhances significantly. Enum classes allow developers to define a fixed set of values, each of which is an instance of the enum class. Enums are particularly useful when modeling a closed set of related constants.\nLet\u0026rsquo;s consider an example of an enum class representing the days of the week:\nenum class DayOfWeek { SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY } Unlike sealed classes, enum classes can\u0026rsquo;t have subclasses, and the set of values is predetermined at compile time. This makes enum classes suitable for scenarios where a predefined set of options is expected, like representing days, months, or error states.\nDifferences and Use Cases While sealed classes and enum classes share some similarities, such as restricting the set of possible values, they serve distinct purposes and are suitable for different scenarios.\n   Sealed classes Enum classes     Sealed classes are ideal for modeling hierarchies where a base class has multiple possible subclasses, providing a structured way to represent complex data structures. Enum classes, on the other hand, are perfect for scenarios where a fixed set of distinct values is needed, such as representing days, colors, or options in a menu.   Sealed classes shine when exhaustive checks are required. The compiler ensures that all possible subclasses are considered when using a sealed class in a when expression, reducing the likelihood of runtime errors. Enum classes, with their fixed set of values, provide a concise way to represent and work with predefined options, making them a natural choice for situations where the set is closed and known in advance.    
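As a small illustration of the exhaustive checks mentioned above, here is a sketch of a when expression over the DayOfWeek enum from the earlier example. The isWeekend helper is a hypothetical addition, not part of the original article:

```kotlin
enum class DayOfWeek {
    SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY
}

// An exhaustive `when` expression needs no `else` branch because every
// constant is covered; adding a new constant to the enum would make this
// a compile-time error until the new case is handled.
fun isWeekend(day: DayOfWeek): Boolean = when (day) {
    DayOfWeek.SATURDAY, DayOfWeek.SUNDAY -> true
    DayOfWeek.MONDAY, DayOfWeek.TUESDAY, DayOfWeek.WEDNESDAY,
    DayOfWeek.THURSDAY, DayOfWeek.FRIDAY -> false
}

fun main() {
    println(isWeekend(DayOfWeek.SATURDAY)) // true
    println(isWeekend(DayOfWeek.MONDAY))   // false
}
```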
Conclusion Sealed classes and enum classes in Kotlin are powerful tools for modeling different types of data structures. Sealed classes are suitable for hierarchies with multiple subclasses, enabling exhaustive checks and enhancing code safety. Enum classes, on the other hand, excel in representing closed sets of values, providing a concise and readable way to work with predefined options. By understanding the strengths and use cases of these constructs, developers can make informed decisions when designing their Kotlin applications, leading to more maintainable and robust code.\n","date":"January 2, 2024","image":"https://reflectoring.io/images/stock/0044-lock-1200x628-branded_hufda82673b597e36c6f6f4e174d972b96_267480_650x0_resize_q90_box.jpg","permalink":"/kotlin-enum-vs-sealed-classes/","title":"Sealed Classes vs Enums in Kotlin"},{"categories":["Java"],"contents":"Introduction Java is the only (mainstream) programming language to implement the concept of checked exceptions. Ever since, checked exceptions have been the subject of controversy. Considered an innovative concept at the time (Java was introduced in 1996), nowadays they are commonly considered bad practice.\nIn this article, I\u0026rsquo;d like to discuss the motivation for unchecked and checked exceptions in Java, their benefits and disadvantages. Unlike many discussions on this topic, I\u0026rsquo;d like to provide a balanced view on the topic, not a mere bashing of the concept of checked exceptions.\nFirst, we\u0026rsquo;ll dive into the motivation for checked and unchecked exceptions in Java. What does James Gosling, the father of Java, say about the topic? Next, we\u0026rsquo;ll have a look at how exceptions work in Java and what are the issues with checked exceptions. We\u0026rsquo;ll also discuss which type of exception should be used when. 
Lastly, we\u0026rsquo;ll look at some common workarounds, like using Lombok\u0026rsquo;s @SneakyThrows.\nHistory of Exceptions in Java and Other Languages Exception handling in software development goes back as far as the introduction of LISP in the 1960\u0026rsquo;s. With exceptions, we can solve several problems that we might encounter in the handling of errors in our program.\nThe main idea behind exceptions is to separate the normal control flow from error handling. Let\u0026rsquo;s look at an example where no exceptions are used:\npublic void handleBookingWithoutExceptions(String customer, String hotel) { if (isValidHotel(hotel)) { int hotelId = getHotelId(hotel); if (sendBookingToHotel(customer, hotelId)) { int bookingId = updateDatabase(customer, hotel); if (bookingId \u0026gt; 0) { if (sendConfirmationMail(customer, hotel, bookingId)) { logger.log(Level.INFO, \u0026#34;Booking confirmed\u0026#34;); } else { logger.log(Level.INFO, \u0026#34;Mail failed\u0026#34;); } } else { logger.log(Level.INFO, \u0026#34;Database couldn\u0026#39;t be updated\u0026#34;); } } else { logger.log(Level.INFO, \u0026#34;Request to hotel failed\u0026#34;); } } else { logger.log(Level.INFO, \u0026#34;Invalid data\u0026#34;); } } The program logic is located in just 5 or so lines of code, the rest is error handling. So instead of focusing on the main flow, the code is cluttered with error checking.\nIf we do not have exceptions available in our language, we can only rely on the return value of a function. 
Let\u0026rsquo;s rewrite our function using exceptions:\npublic void handleBookingWithExceptions(String customer, String hotel) { try { validateHotel(hotel); sendBookingToHotel(customer, getHotelId(hotel)); int bookingId = updateDatabase(customer, hotel); sendConfirmationMail(customer, hotel, bookingId); logger.log(Level.INFO, \u0026#34;Booking confirmed\u0026#34;); } catch(Exception e) { logger.log(Level.INFO, e.getMessage()); } } With this approach, we do not need to check return values, but the control flow is transferred to the catch block. This approach is clearly much more readable. We have two separate flows: a happy flow and an error-handling flow.\nIn addition to readability, exceptions also solve the semipredicate problem. In a nutshell, the semipredicate problem occurs if a return value that indicates an error (or non-existing value) becomes a valid return value. Let\u0026rsquo;s look at a few examples to illustrate the problem:\nint index = \u0026#34;Hello World\u0026#34;.indexOf(\u0026#34;World\u0026#34;); int value = Integer.parseInt(\u0026#34;123\u0026#34;); int freeSeats = getNumberOfAvailableSeatsOfFlight(); The indexOf() method returns -1 if the substring isn\u0026rsquo;t found. Of course, -1 can never be a valid index, so there\u0026rsquo;s no issue here. However, all possible return values of parseInt() are valid integers. That means we do not have a special return value available to indicate an error. The last method, getNumberOfAvailableSeatsOfFlight(), could even lead to a hidden issue. We could define -1 as the return value for an error, or no information available. That might seem reasonable at first glance. However, it might turn out later that a negative number means the number of people on a waiting list. Exceptions would solve this problem more elegantly.\nHow Do Exceptions Work in Java? Before going into a discussion about whether or not to use checked exceptions, let\u0026rsquo;s briefly recap how exceptions work in Java. 
The diagram shows the class hierarchy for exceptions:\nRuntimeException extends Exception, and Error extends Throwable. RuntimeException and Error are so-called unchecked exceptions, meaning that they don\u0026rsquo;t need to be handled by the calling code (i.e. they don\u0026rsquo;t need to be \u0026ldquo;checked\u0026rdquo;). All other classes that extend Throwable (usually via Exception) are checked exceptions, meaning that the compiler expects them to be handled by the calling code (i.e. they must be \u0026ldquo;checked\u0026rdquo;).\nEverything, checked or not, that extends from Throwable can be caught in a catch-block.\nLastly, it\u0026rsquo;s important to note that the concept of checked and unchecked exceptions is a Java compiler feature. The JVM itself doesn\u0026rsquo;t know the difference; all exceptions are unchecked. That\u0026rsquo;s why other JVM languages do not need to implement the feature.\nBefore we start our discussion about whether or not to use checked exceptions, let\u0026rsquo;s briefly recap the difference between the two types of exceptions.\nChecked Exceptions Checked exceptions need to be surrounded by a try-catch block, or the calling method needs to declare the exception in its signature. Since the constructor of the Scanner class throws a FileNotFoundException, which is a checked exception, the following code does not compile:\npublic void readFile(String filename) { Scanner scanner = new Scanner(new File(filename)); } We get a compilation error:\nUnhandled exception: java.io.FileNotFoundException We have two options to fix the problem. 
We can add the exception to the method signature:\npublic void readFile(String filename) throws FileNotFoundException { Scanner scanner = new Scanner(new File(filename)); } Or we can handle the exception in-place with a try-catch block:\npublic void readFile(String filename) { try { Scanner scanner = new Scanner(new File(filename)); } catch (FileNotFoundException e) { // handle exception  } } Unchecked Exceptions In the case of unchecked exceptions, we do not need to do anything. The NumberFormatException, which can be thrown by Integer.parseInt, is a runtime exception, so the following code compiles:\npublic int readNumber(String number) { return Integer.parseInt(callEndpoint(number)); } However, we can still choose to handle the exception, so the following code compiles as well:\npublic int readNumber(String number) { try { return Integer.parseInt(callEndpoint(number)); } catch (NumberFormatException e) { // handle exception  return 0; } } Why Should We Use Checked Exceptions? If we want to understand the motivation behind checked exceptions, we need to look at the history of Java. The language was created with a focus on robustness and networking.\nThe best way of putting it is probably a quote by James Gosling (the creator of Java) himself: \u0026ldquo;You can\u0026rsquo;t accidentally say, \u0026lsquo;I don\u0026rsquo;t care.\u0026rsquo; You have to explicitly say, \u0026lsquo;I don\u0026rsquo;t care.\u0026rsquo;\u0026rdquo; The quote is taken from an interesting interview with James Gosling, where he discusses checked exceptions in great detail. I highly recommend reading it.\nIn the book Masterminds of Programming, James also talks about exceptions: \u0026ldquo;People tend to ignore to check the return code\u0026rdquo;.\nThis again underlines the motivation for checked exceptions. As a general rule, an unchecked exception should occur when the error is due to a programming fault or a faulty input. 
A checked exception should be used if the programmer cannot do anything at the time of writing the code. A good example of the latter case is a networking issue. It\u0026rsquo;s out of the hands of the developer to solve the problem; still, the program should handle the situation appropriately - that could be terminating the program, doing a retry, or simply displaying an error message.\nWhat Are the Issues with Checked Exceptions? Now that we understand the motivation behind checked and unchecked exceptions, let\u0026rsquo;s look at some of the problems that checked exceptions can introduce in our codebase.\nChecked Exceptions Do Not Scale Well One of the main arguments against checked exceptions is code scalability and maintainability. A change in a method\u0026rsquo;s list of exceptions breaks all method calls in the calling chain, starting from the calling method up to the method that eventually implements a try-catch to handle the exception. As an example, let\u0026rsquo;s say we call a method libraryMethod() that is part of an external library:\npublic void retrieveContent() throws IOException { libraryMethod(); } Here, the method libraryMethod() itself is from a dependency, for example, a library that handles REST calls to an external system for us. Its implementation would be:\npublic void libraryMethod() throws IOException { // some code } In the future, we decide to use a new version of the library, or even replace the library with another one. Even though the functionality is similar, the method in the new library throws two exceptions:\npublic void otherSdkCall() throws IOException, MalformedURLException { // call method from SDK } As we have two checked exceptions, the declaration of our method needs to change as well:\npublic void retrieveContent() throws IOException, MalformedURLException { otherSdkCall(); } For a small codebase, this might not be a big deal; for large codebases, however, it would require quite some refactoring. 
Of course, we could also directly handle the exception inside our method:\npublic void retrieveContent() throws IOException { try { otherSdkCall(); } catch (MalformedURLException e) { // do something with the exception  } } With this approach, we introduce an inconsistency in our codebase as we handle one exception immediately and defer the handling of the other.\nException Propagation An argument very similar to scalability is the way checked exceptions propagate through the calling stack. If we follow the \u0026ldquo;throw early, catch late\u0026rdquo; principle, we need to add a throws clause to every calling method (scenario a). As a sketch, with the exception originating in methodC() and finally handled in methodA():\npublic void methodA() { try { methodB(); } catch (IOException e) { // catch late  } } public void methodB() throws IOException { methodC(); } public void methodC() throws IOException { // throw early } Unchecked exceptions (scenario b), on the contrary, only need to be declared where they actually occur and once more in the place where we want to handle them. They nicely propagate through the stack automatically until they reach the place where they are actually handled.\nUnnecessary Dependencies Checked exceptions also introduce dependencies that aren\u0026rsquo;t necessary with unchecked exceptions. Let\u0026rsquo;s look again at scenario (a), where we added the IOException in three different places. If methodA(), methodB(), and methodC() are located in different classes, we\u0026rsquo;ll have a dependency on the exception class in all involved classes. If we had used an unchecked exception, we\u0026rsquo;d only have this dependency in methodA() and methodC(). The class or module containing methodB() doesn\u0026rsquo;t even need to know about the exception.\nLet\u0026rsquo;s illustrate this idea with an example. Imagine you travel back home from vacation. You check out at the reception of the hotel, go to the train station by bus, then transfer trains once, and, back in your hometown, you take another bus to go from the station to your home. Back home, you realize that you left your phone at the hotel. Before you start to unpack, you enter the \u0026ldquo;exception\u0026rdquo; flow, and take the bus and train back to the hotel to get your phone. 
In this case, you do everything you did before in reverse order (like moving up the stack trace when an exception occurs in Java) until you arrive at the hotel. Obviously, neither the bus driver nor the train operator needs to know about the \u0026ldquo;exception\u0026rdquo;; they simply do their job. Only at the reception, the starting point of the \u0026ldquo;travel home\u0026rdquo; flow, do we need to ask if someone has found the phone.\nBad Coding Practices Of course, as professional software developers, we should never choose convenience over good coding practices. However, when it comes to checked exceptions, it can often be tempting to take shortcuts. Typically the idea is to take care of it later. Well, we all know how that ends. Another common statement is \u0026ldquo;I want to write my code for the happy flow, not be bothered with exceptions\u0026rdquo;. There are three such patterns that I\u0026rsquo;ve seen quite frequently.\nThe first one is the catch-all exception:\npublic void retrieveInteger(String endpoint) { try { URL url = new URL(endpoint); int result = Integer.parseInt(callEndpoint(endpoint)); } catch (Exception e) { // do something with the exception  } } Here, we simply catch all possible exceptions instead of handling the different exceptions separately, like this:\npublic void retrieveInteger(String endpoint) { try { URL url = new URL(endpoint); int result = Integer.parseInt(callEndpoint(endpoint)); } catch (MalformedURLException e) { // do something with the exception  } catch (NumberFormatException e) { // do something with the exception  } } Of course, this isn\u0026rsquo;t necessarily a bad practice in general. 
It\u0026rsquo;s an appropriate thing to do if we only want to log the exception, or as a final safety mechanism in a Spring Boot @ExceptionHandler.\nThe second pattern is empty catch blocks:\npublic void myMethod() { try { URL url = new URL(\u0026#34;malformed url\u0026#34;); } catch (MalformedURLException e) {} } This approach obviously circumvents the entire idea of checked exceptions. Also, it completely hides the exception as our program continues without giving us any information about what happened.\nThe third one is to simply print the stack trace and continue as if nothing had happened:\npublic void consumeAndForgetAllExceptions(){ try { // some code that can throw an exception  } catch (Exception ex){ ex.printStackTrace(); } } Additional Code Only to Satisfy the Signature Sometimes we know for sure that an exception cannot be thrown unless we deal with a programming mistake. Let\u0026rsquo;s consider the following example:\npublic void readFromUrl(String endpoint) { try { URL url = new URL(endpoint); } catch (MalformedURLException e) { // do something with the exception  } } MalformedURLException is a checked exception and it\u0026rsquo;s thrown when the given string isn\u0026rsquo;t in a valid URL format. The important thing to note is that the exception is thrown if the URL format is not valid; it does not mean that the URL actually exists and can be reached.\nEven if we validated the format before:\npublic void readFromUrl(@ValidUrl String endpoint) Or if we\u0026rsquo;ve hardcoded it:\npublic static final String endpoint = \u0026#34;http://www.example.com\u0026#34;; The compiler still forces us to handle the exception. 
We need to write two lines of \u0026ldquo;useless\u0026rdquo; code, only because there\u0026rsquo;s a checked exception.\nIf we cannot write code to trigger a certain exception to be thrown, we cannot test for it, hence test coverage will decrease.\nInterestingly, when we want to parse a string to an integer, we are not forced to handle the exception:\nInteger.parseInt(\u0026#34;123\u0026#34;); The parseInt method throws a NumberFormatException, an unchecked exception, if the provided string isn\u0026rsquo;t a valid integer.\nLambdas and Exceptions Checked exceptions also do not always work nicely with lambda expressions. Let\u0026rsquo;s look at an example:\npublic class CheckedExceptions { public static String readFirstLine(String filename) throws FileNotFoundException { Scanner scanner = new Scanner(new File(filename)); return scanner.next(); } public void readFile() { List\u0026lt;String\u0026gt; fileNames = new ArrayList\u0026lt;\u0026gt;(); List\u0026lt;String\u0026gt; lines = fileNames.stream().map(CheckedExceptions::readFirstLine).toList(); } } As our method readFirstLine() throws a checked exception, we\u0026rsquo;ll get a compilation error:\nUnhandled exception: java.io.FileNotFoundException in line 8. If we attempt to correct the code with a surrounding try-catch:\npublic void readFile() { List\u0026lt;String\u0026gt; fileNames = new ArrayList\u0026lt;\u0026gt;(); try { List\u0026lt;String\u0026gt; lines = fileNames.stream() .map(CheckedExceptions::readFirstLine) .toList(); } catch (FileNotFoundException e) { // handle exception  } } We still get a compilation error, because we cannot propagate a checked exception inside the lambda to the outside. 
We have to handle the exception inside the lambda expression and throw a runtime exception:\npublic void readFile() { List\u0026lt;String\u0026gt; fileNames = new ArrayList\u0026lt;\u0026gt;(); List\u0026lt;String\u0026gt; lines = fileNames.stream() .map(filename -\u0026gt; { try{ return readFirstLine(filename); } catch(FileNotFoundException e) { throw new RuntimeException(\u0026#34;File not found\u0026#34;, e); } }).toList(); } Unfortunately, this makes the use of static method references impossible if they throw a checked exception. Alternatively, we could have the lambda expression return a default value that is added to the result:\npublic void readFile() { List\u0026lt;String\u0026gt; fileNames = new ArrayList\u0026lt;\u0026gt;(); List\u0026lt;String\u0026gt; lines = fileNames.stream() .map(filename -\u0026gt; { try{ return readFirstLine(filename); } catch(FileNotFoundException e) { return \u0026#34;default value\u0026#34;; } }).toList(); } However, the code still looks rather cluttered.\nWhat we can do is pass an unchecked exception from inside a lambda and catch it from the calling method:\npublic class UncheckedExceptions { public static int parseValue(String input) throws NumberFormatException { return Integer.parseInt(input); } public void readNumber() { try { List\u0026lt;String\u0026gt; values = new ArrayList\u0026lt;\u0026gt;(); List\u0026lt;Integer\u0026gt; numbers = values.stream() .map(UncheckedExceptions::parseValue) .toList(); } catch(NumberFormatException e) { // handle exception  } } } Here, we need to be aware of a crucial difference between the earlier example with the checked exception and the example with the unchecked exception. In the case of the checked exception, which we handled inside the lambda by returning a default value, the processing of the stream will continue with the next element, whereas in the case of the unchecked exception, the processing will end and no further elements will be processed. 
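To make this difference concrete, here is a small, self-contained sketch (the class and method names are ours, not from the examples above): handling the exception inside the lambda lets the stream continue with a default value, while an unchecked exception that escapes the lambda aborts the whole pipeline.

```java
import java.util.List;

public class StreamExceptionDemo {

    // Exception caught inside the lambda: fall back to a default value,
    // so the stream keeps processing the remaining elements.
    static List<Integer> parseWithDefault(List<String> values) {
        return values.stream()
                .map(v -> {
                    try {
                        return Integer.parseInt(v);
                    } catch (NumberFormatException e) {
                        return 0; // default value, processing continues
                    }
                })
                .toList();
    }

    // Unhandled unchecked exception: the first bad element aborts the pipeline.
    static List<Integer> parseOrFail(List<String> values) {
        return values.stream()
                .map(Integer::parseInt)
                .toList();
    }

    public static void main(String[] args) {
        // prints [1, 0, 3]
        System.out.println(parseWithDefault(List.of("1", "oops", "3")));
        try {
            parseOrFail(List.of("1", "oops", "3"));
        } catch (NumberFormatException e) {
            // the stream stopped at "oops"; no further elements were processed
            System.out.println("stream aborted");
        }
    }
}
```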
Which of the two we want obviously depends on our use case.\nAlternative Ways to Handle Checked Exceptions Wrap a Checked Exception in an Unchecked Exception We can avoid adding a throws clause to all methods up the calling stack by wrapping a checked exception in an unchecked exception. Instead of having our method throw a checked exception:\npublic void myMethod() throws IOException {} We can wrap it in an unchecked exception:\npublic void myMethod(){ try { // some logic  } catch(IOException e) { throw new MyUncheckedException(\u0026#34;A problem occurred\u0026#34;, e); } } Ideally, we apply exception chaining. This ensures that the original exception is not hidden. We can see exception chaining in line 5, where the original exception is passed as a parameter to the new exception. This technique has been possible with almost all core Java exceptions since Java 1.4, when cause chaining was added to Throwable.\nException chaining is a common approach with many popular frameworks like Spring or Hibernate. Both frameworks moved from checked to unchecked exceptions and wrap checked exceptions that are not part of the framework in their own runtime exceptions. A good example is Spring\u0026rsquo;s JdbcTemplate, which translates all JDBC-specific exceptions into unchecked exceptions that are part of the Spring framework.\nLombok @SneakyThrows Project Lombok provides us with an annotation that removes the need for exception chaining. Instead of adding a throws clause to our method:\npublic void beSneaky() throws MalformedURLException { URL url = new URL(\u0026#34;http://test.example.org\u0026#34;); } We can add @SneakyThrows and our code will compile:\n@SneakyThrows public void beSneaky() { URL url = new URL(\u0026#34;http://test.example.org\u0026#34;); } However, it\u0026rsquo;s important to understand that @SneakyThrows does not cause the MalformedURLException to behave exactly like a runtime exception. 
We won\u0026rsquo;t be able to catch it anymore and the following code won\u0026rsquo;t compile:\npublic void callSneaky() { try { beSneaky(); } catch (MalformedURLException e) { // handle exception  } } As @SneakyThrows removes the exception from the method\u0026rsquo;s signature and MalformedURLException is still considered a checked exception, we\u0026rsquo;ll get a compiler error in line 4:\nException \u0026#39;java.net.MalformedURLException\u0026#39; is never thrown in the corresponding try block Performance During my research for this article, I came across a few discussions about the performance of exceptions. Is there a difference in performance between checked and unchecked exceptions? There isn\u0026rsquo;t - checked exceptions are a purely compile-time feature.\nHowever, there\u0026rsquo;s a significant performance difference depending on whether or not we include the full stack trace in the exception:\npublic class MyException extends RuntimeException { public MyException(String message, boolean includeStacktrace) { super(message, null, true, includeStacktrace); } } Here, we add a flag to the constructor of our custom exception. The flag specifies if we want to include the full stack trace or not. Building up the stack trace makes our program slower in case the exception is thrown. So if performance is critical, exclude the trace.\nSome Guidelines How to handle exceptions in our software is an integral part of our craft and highly depends on the specific use case. Before we finish our discussion, here are three high-level guidelines which I believe are (almost) always true.\n Use checked exceptions if it\u0026rsquo;s not a programming mistake and the program can do something useful to recover. Use a runtime exception if it\u0026rsquo;s a programming mistake or if the program cannot do anything to recover. Avoid empty catch blocks.  Conclusion In this article, we\u0026rsquo;ve gained quite some insights into exceptions in Java. 
Why were they introduced into the language, and when should we use checked versus unchecked exceptions? We\u0026rsquo;ve learned about the drawbacks of checked exceptions and why they are nowadays considered bad practice - keeping in mind that there are many exceptions that prove the rule.\n","date":"December 26, 2023","image":"https://reflectoring.io/images/stock/0011-exception-1200x628-branded_hu5c84ec643e645bced334d00cceee0833_119970_650x0_resize_q90_box.jpg","permalink":"/do-not-use-checked-exceptions/","title":"Don't Use Checked Exceptions"},{"categories":["Java"],"contents":""},{"categories":["Kotlin"],"contents":"Sorting refers to the process of arranging elements in a specific order. The order could be ascending or descending based on certain criteria such as numerical or lexicographical order. In this guide, we\u0026rsquo;ll explore various sorting techniques and functions available in Kotlin, unveiling the simplicity and flexibility the Kotlin language offers.\nBasic Sorting in Kotlin Let\u0026rsquo;s take a look at how we can sort lists in Kotlin:\nfun main() { val numbers = listOf(4, 2, 8, 1, 5) val sortedNumbers = numbers.sorted() println(\u0026#34;Sorted numbers: $sortedNumbers\u0026#34;) val descendingNumbers = numbers.sortedDescending() println(\u0026#34;Descending numbers: $descendingNumbers\u0026#34;) } In this code, we first create a list of numbers, sort them in ascending order using sorted() and then sort them in descending order using sortedDescending().\nHere is another example of how we can sort arrays in Kotlin:\nfun main() { val numbersArray = arrayOf(4, 2, 8, 1, 5) val sortedNumbers = numbersArray.sorted() println(\u0026#34;Sorted numbers: $sortedNumbers\u0026#34;) val descendingNumbers = numbersArray.sortedDescending() println(\u0026#34;Descending numbers: $descendingNumbers\u0026#34;) } In our code above, we initialize an array named numbersArray with integers and demonstrate sorting operations. 
First, it employs the sorted() function to create a new list sortedNumbers containing the elements of numbersArray in ascending order (note that sorted() on an array returns a list), which is then printed to the console. Subsequently, the sortedDescending() function is applied to obtain another list, descendingNumbers, with the elements sorted in descending order, and this list is printed as well.\nsortBy The sortedBy function in Kotlin is used to sort a collection, for example a list or an array, based on a specified key or custom sorting criteria (its counterpart sortBy sorts a mutable list in place).\nHere\u0026rsquo;s an example using sortedBy:\ndata class Person(val name: String, val position: Int) fun main() { val people = listOf( Person(\u0026#34;John\u0026#34;, 1), Person(\u0026#34;Doe\u0026#34;, 2), Person(\u0026#34;Mary\u0026#34;, 3) ) // Sorting people by position in ascending order  val sortedPeople = people.sortedBy { it.position } println(\u0026#34;Sorted by position: $sortedPeople\u0026#34;) // Chaining sorts: since Kotlin\u0026#39;s sorts are stable, the last call wins -  // this sorts by name ascending, with the earlier descending-position order only breaking ties  val complexSortedPeople = people.sortedByDescending { it.position }.sortedBy { it.name } println(\u0026#34;Complex sorted by name: $complexSortedPeople\u0026#34;) } In this example, a Person data class is defined with a name and position. The people list is then sorted using the sortedBy function, specifying that the sorting should be based on the position property. The result is a new list sortedPeople, where the people are sorted in ascending order of position. Similarly, sortedByDescending is used to sort in descending order. Note that chaining sorted calls as above makes the last call the primary sort key; to combine several keys, we need a comparator.\nsortWith The sortedWith function in Kotlin allows us to provide a custom comparator to define how elements in a collection should be compared and sorted (sortWith is the in-place variant for mutable lists).\nHere\u0026rsquo;s an example using sortedWith:\ndata class Book(val title: String, val author: String, val publicationYear: Int) fun main() { val books = listOf( Book(\u0026#34;The Great Gatsby\u0026#34;, \u0026#34;F. 
Scott Fitzgerald\u0026#34;, 1925), Book(\u0026#34;To Kill a Mockingbird\u0026#34;, \u0026#34;Harper Lee\u0026#34;, 1960), Book(\u0026#34;1984\u0026#34;, \u0026#34;George Orwell\u0026#34;, 1949), Book(\u0026#34;The Catcher in the Rye\u0026#34;, \u0026#34;J.D. Salinger\u0026#34;, 1951) ) // Sorting books by publication year in ascending order using sortedWith and a custom comparator  val sortedBooks = books.sortedWith(compareBy { it.publicationYear }) println(\u0026#34;Books sorted by publication year: $sortedBooks\u0026#34;) // Sorting books by publication year in descending order and then by title in descending order using sortedWith and a custom comparator  val reverseSortedBooks = books.sortedWith(compareByDescending\u0026lt;Book\u0026gt; { it.publicationYear }.thenByDescending { it.title }) println(\u0026#34;Books reverse sorted by publication year: $reverseSortedBooks\u0026#34;) } In this example, the Book data class represents books with information about their title, author and publicationYear. The sortedWith function is then used with compareBy and compareByDescending to sort the books based on their publication years in both ascending and descending order. This demonstrates how we can apply custom sorting to different types of data.\nConclusion In this article, we went through the basics of sorting in Kotlin and discussed the various functions we can use to sort a list or an array, including the sorted(), sortedBy() and sortedWith() functions.\n","date":"December 26, 2023","image":"https://reflectoring.io/images/stock/0135-sorted-1200x628-branded_hu18fbb96cfe480b8ec8f03f2a6dbe633b_279812_650x0_resize_q90_box.jpg","permalink":"/sorting%20in%20kotlin/","title":"Guide to Sorting in Kotlin"},{"categories":["Node"],"contents":"Traditional web applications primarily used the HTTP request-response model, where clients sent requests to servers, and servers responded with data. However, implementing real-time features like live chat, notifications, collaborative tools, etc., was challenging. 
Developers had to resort to workarounds like long polling (repeatedly sending requests) or plugins such as Flash to achieve real-time communication.\nWebSockets changed the game by enabling constant, low-delay communication between clients and servers, breaking away from the old request-response model.\nSocket.IO was introduced with the aim of simplifying real-time communication between servers and clients on the web. Socket.IO is built on top of WebSockets and allows developers to create real-time applications without worrying about low-level networking details.\nIn this article, we\u0026rsquo;ll explore the concept of using Socket.IO while creating a real-time chat application using Node.js + Socket.IO, which can be connected to any client-side application of our choice.\nPrerequisites Before we begin, please ensure that you have the following setup:\n Node.js installed on your computer. Basic knowledge of JavaScript and Node.js. Integrated Development Environment (IDE) (e.g. Visual Studio Code)   Example Code This article is accompanied by a working code example on GitHub. How Communication Works Using Socket.IO Socket.IO allows servers and clients to communicate in real time. To use Socket.IO, it must be integrated on both the server and the client.\nServer  The server is the central hub responsible for managing the Socket.IO connection with one or more clients. The server can broadcast messages to all clients, specific clients, or exclude the sender. This is useful for group notifications and chat rooms.  Client  Clients, using Socket.IO, connect to the server by specifying the server\u0026rsquo;s address. Once connected, clients can exchange messages instantly with the server.  Bidirectional Communication In this communication flow, clients can emit events to the server and listen for events from the server. 
Likewise, the server can emit events to the clients and listen for events from them, enabling real-time bidirectional communication.\nSocket.IO communication can also extend between servers (server-to-server), which is valuable for microservices and distributed applications that require real-time interactions.\nSetting up Socket.IO in our Application Server Side We will start the integration of Socket.IO into our application from the server side using Node.js.\nTo set up our application server, open a terminal in a directory of your choice. Create a new folder and initialize Node.js in it using the following commands:\nmkdir socket-chat-app cd socket-chat-app npm init -y Next, install the dependencies necessary for setting up our server by running:\nnpm install express socket.io Here\u0026rsquo;s a summary of what each dependency does:\n Express - This is used to create REST APIs and helps manage routes in our application. Socket.IO - A library that enables real-time, bidirectional, and event-based communication between the client and the server.  To establish real-time server communication using Socket.IO, we can create a Socket.IO server instance by utilizing Node.js\u0026rsquo;s built-in HTTP module and Express, as demonstrated in the code snippet below:\nconst express = require(\u0026#34;express\u0026#34;); const Socket = require(\u0026#34;socket.io\u0026#34;); const app = express(); const server = require(\u0026#34;http\u0026#34;).createServer(app); const io = Socket(server, { // options }); io.on(\u0026#34;connection\u0026#34;, socket =\u0026gt; { //... }); server.listen(PORT); Here\u0026rsquo;s how the above code works:\nThe io.on(\u0026quot;connection\u0026quot;, socket =\u0026gt; { /* ... 
*/ }) code subscribes to a \u0026quot;connection\u0026quot; event and waits for clients to connect.\n\u0026quot;connection\u0026quot; is a predefined event in Socket.IO and is triggered when a client connects to the Socket.IO server.\nOur socket argument above provides an object reference to individual client connections and it has various properties and methods.\nNext, we will explore some of the commonly used socket properties and methods, accompanied by example code snippets showing how they can be accessed or used:\nSocket Properties   socket.id: This property contains a unique identifier for the connected client. Each client that connects to the server gets a distinct socket.id.\n// Logging the unique socket.id of the connected client console.log(\u0026#34;Client ID:\u0026#34;, socket.id);   socket.handshake: This is an object containing information about the handshake used to establish the connection, which can include headers, query parameters, and more.\n// Logging the handshake object console.log(\u0026#39;Handshake details:\u0026#39;, socket.handshake);   socket.rooms: A Set of room names that the socket is currently in (in recent Socket.IO versions this includes a room named after the socket\u0026rsquo;s own id). Rooms are used for broadcasting messages to specific groups of clients.\n// Logging the room names our socket is in console.log(\u0026#34;Current rooms:\u0026#34;, socket.rooms);   Custom Properties: We can extend our socket argument properties by adding custom properties to it. These custom properties allow us to store additional information or settings associated with a particular client connection.\n// adding our custom socket property socket.customProperty = \u0026#39;This is a custom property\u0026#39;; // Logging the custom property console.log(\u0026#34;Custom property value:\u0026#34;, socket.customProperty);   Socket Methods  socket.emit(event, data): The socket object allows us to send messages (events) specifically to the client associated with it. We can use socket.emit() to send data to a specified client only. 
// Sending a \u0026#34;welcome\u0026#34; event with data to the client socket.emit(\u0026#39;welcome\u0026#39;, \u0026#39;Hello, client!\u0026#39;);  socket.on(event, callback): We can use this method to listen for events sent from the client. When the client sends an event with the same name, the provided callback function is executed. This is how we handle messages or actions from the client. // Handling a \u0026#34;chatMessage\u0026#34; event from the client socket.on(\u0026#39;chatMessage\u0026#39;, (message) =\u0026gt; { console.log(`Received message from client: ${message}`); });  socket.join(room): Places the socket in a specific room. You can use rooms to send messages to specific groups of clients. // Joining a chat room named \u0026#34;developers\u0026#34; socket.join(\u0026#39;developers\u0026#39;);  socket.leave(room): Removes the socket from a room. // Leaving the chat room named \u0026#34;developers\u0026#34; socket.leave(\u0026#39;developers\u0026#39;);  socket.disconnect(): Forcefully disconnects the client from the server. // Forcefully disconnecting the client socket.disconnect();  socket.to(room).emit(event, data): Sends an event to all clients in a specific room, excluding the sender. // Sending a \u0026#34;notification\u0026#34; event to all other clients in the \u0026#34;developers\u0026#34; chat room socket.to(\u0026#39;developers\u0026#39;).emit(\u0026#39;notification\u0026#39;, \u0026#39;New update!\u0026#39;);  socket.broadcast.emit(event, data): Sends an event to all connected clients except the sender. // Broadcasting a \u0026#34;news\u0026#34; event to all connected clients socket.broadcast.emit(\u0026#39;news\u0026#39;, \u0026#39;Important announcement!\u0026#39;);  socket.broadcast.to(targetSocketId).emit(event, data): Sends an event to a specific client based on their unique socket.id. 
// Sending a private \u0026#34;alert\u0026#34; event to a specific client using their socket.id socket.broadcast.to(targetSocketId).emit(\u0026#39;alert\u0026#39;, \u0026#39;Important message!\u0026#39;);  socket.removeAllListeners([event]): Removes all event listeners from the socket. If the event argument is provided, it removes listeners for the specified event. // Removing all event listeners from the socket socket.removeAllListeners(); // Removing listeners for a specific \u0026#34;chatMessage\u0026#34; event socket.removeAllListeners(\u0026#39;chatMessage\u0026#39;);   Using Socket.IO to Build a Chat Application Now let\u0026rsquo;s go back to building our chat application. We\u0026rsquo;ll introduce Socket.IO into the application by creating a new file called server.js. This is where we\u0026rsquo;ll add all our server\u0026rsquo;s logic.\nCopy and paste the following code into the server.js file:\nconst express = require(\u0026#34;express\u0026#34;); const Socket = require(\u0026#34;socket.io\u0026#34;); const PORT = 5000; const app = express(); const server = require(\u0026#34;http\u0026#34;).createServer(app); const io = Socket(server, { cors: { origin: \u0026#34;*\u0026#34;, methods: [\u0026#34;GET\u0026#34;, \u0026#34;POST\u0026#34;], }, }); const users = []; io.on(\u0026#34;connection\u0026#34;, socket =\u0026gt; { socket.on(\u0026#34;adduser\u0026#34;, username =\u0026gt; { socket.user = username; users.push(username); io.sockets.emit(\u0026#34;users\u0026#34;, users); io.to(socket.id).emit(\u0026#34;private\u0026#34;, { id: socket.id, name: socket.user, msg: \u0026#34;secret message\u0026#34;, }); }); socket.on(\u0026#34;message\u0026#34;, message =\u0026gt; { io.sockets.emit(\u0026#34;message\u0026#34;, { message, user: socket.user, id: socket.id, }); }); socket.on(\u0026#34;disconnect\u0026#34;, () =\u0026gt; { console.log(`user ${socket.user} is disconnected`); if (socket.user) { users.splice(users.indexOf(socket.user), 1); 
io.sockets.emit(\u0026#34;users\u0026#34;, users); console.log(\u0026#34;remaining users:\u0026#34;, users); } }); }); server.listen(PORT, () =\u0026gt; { console.log(\u0026#34;listening on PORT: \u0026#34;, PORT); }); In the above code:\n We imported the necessary dependencies for our application and then created an HTTP server. We initialize our Socket.IO instance using the created HTTP server and configure it with CORS settings, allowing any domain to connect in this example. In a production environment, you should specify the actual origins allowed for security reasons. An empty users array is declared to store the usernames of connected clients. Our server handles real-time events and communication with connected clients by subscribing to an io connection. The .on method is used to listen for events from clients, and the .emit method is used to send events and data to clients. When a client connects to our server:  socket.on(\u0026quot;adduser\u0026quot;) listens for an adduser event emitted from the client when a new user joins the chat. Upon this event, the user is added to the list of users, and both the users list and a private event are emitted back to the client. socket.on(\u0026quot;message\u0026quot;) is employed to handle incoming chat messages from clients. socket.on(\u0026quot;disconnect\u0026quot;) manages disconnections and ensures that disconnected users are removed from the users array.   Finally, the server is set to listen on the specified port 5000.  Our server logic is ready to connect and emit events to available clients.\nClient Side Our server is operational and ready for connections. The choice of client can vary based on our technology stack, but the core principles of socket communication remain consistent. This includes integrating Socket.IO into the client, configuring the connection, and implementing event handlers and emitters.\nIn this article, we\u0026rsquo;ll create our client side using HTML and vanilla JavaScript. 
For framework-specific guidance, refer to dedicated resources such as Vue and React.js.\nTo begin creating our client application, run the following commands in the terminal to create the necessary folder and files for the client:\nmkdir client cd client touch index.html index.js style.css The chat view for our application will be located in our HTML file. Copy and paste the following code into index.html:\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html lang=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;UTF-8\u0026#34; /\u0026gt; \u0026lt;meta name=\u0026#34;viewport\u0026#34; content=\u0026#34;width=device-width, initial-scale=1.0\u0026#34; /\u0026gt; \u0026lt;link rel=\u0026#34;stylesheet\u0026#34; href=\u0026#34;style.css\u0026#34; /\u0026gt; \u0026lt;title\u0026gt;Socket Chat App\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;h1\u0026gt;Socket Chat App\u0026lt;/h1\u0026gt; \u0026lt;div class=\u0026#34;container\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;chatbox\u0026#34;\u0026gt; \u0026lt;ul id=\u0026#34;messagelist\u0026#34;\u0026gt;\u0026lt;/ul\u0026gt; \u0026lt;form class=\u0026#34;Input\u0026#34;\u0026gt; \u0026lt;input type=\u0026#34;text\u0026#34; placeholder=\u0026#34;Type your message ...\u0026#34; /\u0026gt; \u0026lt;button\u0026gt;Send\u0026lt;/button\u0026gt; \u0026lt;/form\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;br /\u0026gt; \u0026lt;div class=\u0026#34;activeusers\u0026#34;\u0026gt; \u0026lt;h2\u0026gt;Active Users\u0026lt;/h2\u0026gt; \u0026lt;ul id=\u0026#34;users\u0026#34;\u0026gt;\u0026lt;/ul\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;script src=\u0026#34;https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.4.1/socket.io.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;index.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Above, we\u0026rsquo;re creating our chat application 
where we can view and send chat messages. Additionally, we\u0026rsquo;re integrating Socket.IO into our application by including the CDN in our HTML script.\nFor styling our application, copy and paste the following code into the style.css file.\n* { padding: 0px; margin: 0px; box-sizing: border-box; font-family: Arial, Helvetica, sans-serif; } h2 { font-weight: 100; } nav { text-align: center; background-color: blueviolet; padding: 10px; color: white; } .container { max-width: 1000px; margin: 100px auto 50px; padding: 20px; } .chatbox { height: 500px; list-style: none; display: flex; flex-flow: column; background: #eee; border-radius: 6px; box-shadow: 1px 0px 10px #eee; } #messagelist { flex: 1; overflow-y: scroll; } #messagelist .private { background: #015e4b; color: #fff; margin-left: auto; } #messagelist li { list-style: none; background: white; max-width: 400px; padding: 10px; margin: 10px; } #messagelist p:first-child { color: #53bdea; } #messagelist .private p:first-child { color: #03c493; } form.Input { display: flex; } form.Input input { flex: 10; padding: 14px 10px; border: none; } form.Input button { padding: 4px; background: teal; border: none; flex: 1; color: white; cursor: pointer; } #users { list-style: none; display: flex; flex-wrap: wrap; height: 100px; overflow-y: scroll; flex-flow: row; padding-top: 20px; } #users li { min-width: 100px; max-height: 20px; border-radius: 10px; background: white; text-align: center; box-shadow: 0px 2px 10px #eee; } To configure our client socket connection and handle listening and emitting events, paste the following in the index.js file:\nconst messageform = document.querySelector(\u0026#34;.chatbox form\u0026#34;); const messageList = document.querySelector(\u0026#34;#messagelist\u0026#34;); const userList = document.querySelector(\u0026#34;ul#users\u0026#34;); const chatboxinput = document.querySelector(\u0026#34;.chatbox input\u0026#34;); const socket = io(\u0026#34;http://localhost:5000\u0026#34;); let users = 
[]; let messages = []; let isUser = \u0026#34;\u0026#34;; socket.on(\u0026#34;message\u0026#34;, message =\u0026gt; { messages.push(message); updateMessages(); }); socket.on(\u0026#34;private\u0026#34;, data =\u0026gt; { isUser = data.name; }); socket.on(\u0026#34;users\u0026#34;, function (_users) { users = _users; updateUsers(); }); messageform.addEventListener(\u0026#34;submit\u0026#34;, messageSubmitHandler); function updateUsers() { userList.textContent = \u0026#34;\u0026#34;; for (let i = 0; i \u0026lt; users.length; i++) { var node = document.createElement(\u0026#34;LI\u0026#34;); var textnode = document.createTextNode(users[i]); node.appendChild(textnode); userList.appendChild(node); } } function updateMessages() { messageList.textContent = \u0026#34;\u0026#34;; for (let i = 0; i \u0026lt; messages.length; i++) { const show = isUser === messages[i].user ? true : false; messageList.innerHTML += `\u0026lt;li class=${show ? \u0026#34;private\u0026#34; : \u0026#34;\u0026#34;}\u0026gt; \u0026lt;p\u0026gt;${messages[i].user}\u0026lt;/p\u0026gt; \u0026lt;p\u0026gt;${messages[i].message}\u0026lt;/p\u0026gt; \u0026lt;/li\u0026gt;`; } } function messageSubmitHandler(e) { e.preventDefault(); let message = chatboxinput.value; socket.emit(\u0026#34;message\u0026#34;, message); chatboxinput.value = \u0026#34;\u0026#34;; } function userAddHandler(user) { const userName = user || `User${Math.floor(Math.random() * 1000000)}`; socket.emit(\u0026#34;adduser\u0026#34;, userName); } userAddHandler(); In the above code, we are setting up our client-side application to communicate with our server using Socket.IO.\nLet\u0026rsquo;s break down what each part of the code does:\n First, we store references to our HTML elements. These references allow us to manipulate these elements from JavaScript. Then we initialize a Socket.IO connection to a server running at http://localhost:5000. The client will use this connection to send and receive real-time messages. 
users and messages arrays are used to store information about connected users and chat messages, isUser is used to store the name of the current user. socket.on(\u0026quot;message\u0026quot;, message): This event listener listens for \u0026ldquo;message\u0026rdquo; events sent by the server. When a \u0026ldquo;message\u0026rdquo; event is received, the message is pushed into the messages array, and the updateMessages function is called to update the chat message display. socket.on(\u0026quot;private\u0026quot;, data): This event listener listens for \u0026ldquo;private\u0026rdquo; events. When a \u0026ldquo;private\u0026rdquo; event is received, the isUser variable is updated with the name of the sender. socket.on(\u0026quot;users\u0026quot;, function (_users)): This event listener listens for users events. When a users event is received, the users array is updated with the user data, and the updateUsers function is called to update the user list display. updateUsers(): This function updates the user list displayed in the HTML. It clears the existing list and iterates through our users array to create list items for each user. updateMessages(): This function updates the chat messages displayed in the HTML. It clears the existing messages and iterates through the messages array to create message elements. Messages from the current user are styled differently. messageSubmitHandler(e): This function is called when the user submits a chat message. It prevents the default form submission behavior, sends the message to the server using Socket.IO, and clears the input field. userAddHandler(user): This function is responsible for adding a user to the chat. If a user is provided as an argument, it uses that name; otherwise, it generates a random username. It then emits an adduser event to the server with the chosen username.  
Testing the Application To test our application, we need to start both the server and client.\nRun the Node.js script to start the Socket.IO server:\nnode server.js Next, open the index.html file in a web browser. It will connect to our Socket.IO server running on http://localhost:5000.\nTo create another client for exchanging chat messages, open a new browser window in incognito mode. This opens a second instance of our application and creates a new client user. Now, both clients can exchange messages.\nConclusion Mastering Socket.IO is a valuable skill for developers. It gives them the ability to handle complex scenarios where instant data exchange is required, allowing them to create high-performance real-time applications and improve user experiences.\nYou can refer to all the source code used in the article on GitHub.\n","date":"December 6, 2023","image":"https://reflectoring.io/images/stock/0134-socket-1200x628-branded_hufdee84e2196402384d88d471f7920ff5_83051_650x0_resize_q90_box.jpg","permalink":"/tutorial-nodejs-socketio/","title":"Understanding Socket.IO: Building a simple real-time chat app with Node.js and Socket.IO"},{"categories":["Spring"],"contents":"At its core, Spring is a Dependency Injection framework. Although it now offers a variety of other features that simplify developers' lives, most of these are built on top of the Dependency Injection framework.\nDependency Injection (DI) is often considered the same as Inversion of Control (IoC). Let\u0026rsquo;s briefly explain and classify the two terms in the context of the Spring Framework.\nInversion of Control The concept of Inversion of Control is to give control over the execution of program code to a framework. This can be done, for example, through a function that we program ourselves and then pass to a framework, which then calls it at the right time. 
We call this function a \u0026ldquo;callback.\u0026rdquo;\nAn example of a callback is a function that should be executed in a server application when a specific URL is called. We program the function, but we do not call it ourselves. Instead, we pass the function to a framework that listens for HTTP requests on a specific port, analyzes the request, and then forwards it to one of the registered callback functions based on specific parameters. The Spring WebMVC project is based on this exact mechanism.\nDependency Injection Dependency Injection is a specific form of Inversion of Control. As the name suggests, Dependency Injection is about dependencies. Class A is dependent on another class B if class A calls a method of B. In program code, this dependency is often expressed as an attribute of type B in class A:\nclass GreetingService { UserDatabase userDatabase = new UserDatabase(); String greet(Integer userId){ User user = userDatabase.findUser(userId); return \u0026#34;Hello \u0026#34; + user.getName(); } } In this example, the GreetingService requires an object of type UserDatabase to do its work. When we instantiate an object of type GreetingService, it automatically creates an object of type UserDatabase.\nThe GreetingService class is responsible for resolving the dependency on UserDatabase. This is problematic for several reasons.\nFirst, this solution creates a very strong coupling between the two classes. GreetingService must know how to create a UserDatabase object. What if creating a UserDatabase object is not that simple? 
To open a database connection, we usually require a few parameters:\nclass GreetingService { UserDatabase userDatabase; public GreetingService( String dbUrl, Integer dbPort ){ this.userDatabase = new UserDatabase(dbUrl, dbPort); } String greet(Integer userId){ User user = userDatabase.findUser(userId); return \u0026#34;Hello \u0026#34; + user.getName(); } } The GreetingService still creates its own instance of type UserDatabase, but now it needs to know which parameters are required for a database connection. The coupling between GreetingService and UserDatabase has just become even stronger. We don\u0026rsquo;t want to see these details in the GreetingService!\nWhat if other classes in our application also need a UserDatabase object? We don\u0026rsquo;t want every class to know how to create a UserDatabase object!\nDue to the strong coupling, details of the UserDatabase class are spread throughout the codebase. A change to UserDatabase would therefore lead to many changes in other parts of the code.\nThis not only makes it difficult to develop application code but also to write tests. If we want to test the GreetingService class, we need the URL and port of a real database in this example. If we pass invalid connection parameters, the greet() method no longer works!\nTo break the strong coupling between classes, we modify the code so that we can \u0026ldquo;inject\u0026rdquo; the dependency into the constructor:\nclass GreetingService { final UserDatabase userDatabase; public GreetingService(UserDatabase userDatabase){ this.userDatabase = userDatabase; } String greet(Integer userId){ User user = userDatabase.findUser(userId); return \u0026#34;Hello \u0026#34; + user.getName(); } } There is still a coupling between GreetingService and UserDatabase, but it is much looser than before because GreetingService no longer needs to know how a UserDatabase object is created. The coupling is reduced to the necessary minimum. 
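With the dependency made injectable, the wiring can be done by hand at application startup. The following is a minimal, self-contained sketch of that manual wiring; the User class and the stubbed findUser() lookup (always returning a user named Alice) are simplifications added here for illustration, not part of the original example:

```java
// Simplified, self-contained stand-ins for the classes from the text.
// The real UserDatabase would open a database connection; this stub
// is a hypothetical simplification.
class User {
    private final String name;

    User(String name) { this.name = name; }

    String getName() { return name; }
}

class UserDatabase {
    User findUser(Integer userId) {
        return new User("Alice"); // stubbed lookup
    }
}

class GreetingService {
    private final UserDatabase userDatabase;

    // The dependency is injected through the constructor.
    GreetingService(UserDatabase userDatabase) {
        this.userDatabase = userDatabase;
    }

    String greet(Integer userId) {
        User user = userDatabase.findUser(userId);
        return "Hello " + user.getName();
    }
}

public class ManualWiring {
    public static void main(String[] args) {
        // The wiring a DI container will later do for us: create the
        // dependency once and pass it to every class that needs it.
        UserDatabase userDatabase = new UserDatabase();
        GreetingService greetingService = new GreetingService(userDatabase);
        System.out.println(greetingService.greet(1)); // prints "Hello Alice"
    }
}
```

In a real application, this hand-written wiring grows with every new class; that growing boilerplate is exactly what a Dependency Injection container takes off our hands.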
This pattern is called \u0026ldquo;constructor injection\u0026rdquo; since we pass the dependencies of a class in the form of constructor parameters.\nIn a test, we can now create a mock object of type UserDatabase (for example, using a mock library like Mockito) and pass it to the GreetingService. Since we control the behavior of the mock, we no longer need a real database connection to test the GreetingService class.\nIn the application code, we instantiate the UserDatabase class only once and pass this instance to all the classes that need it. In other words, we \u0026ldquo;inject\u0026rdquo; the dependency to UserDatabase into the constructors of other classes.\nThis \u0026ldquo;injection\u0026rdquo; of dependencies can become cumbersome in a real application with hundreds of classes because we need to instantiate all classes in the correct order and explicitly program their dependencies. The result is a lot of \u0026ldquo;boilerplate\u0026rdquo; code that often changes and distracts us from the actual development.\nThis is where Dependency Injection comes into play. A Dependency Injection framework like Spring takes on the task of instantiating most of the classes in our application so that we don\u0026rsquo;t have to worry about it anymore. It becomes clear that Dependency Injection is a form of Inversion of Control because we hand over control of object instantiation to the Dependency Injection framework.\nThe division of tasks between Spring and us as developers looks something like this:\n We program the classes GreetingService and UserDatabase. We express the dependency between the two classes through a parameter of type UserDatabase in the constructor of GreetingService. We instruct Spring to take control over the classes GreetingService and UserDatabase. Spring instantiates the classes in the correct order to resolve dependencies and creates an object network with an object for each passed class. 
When we need an object of type GreetingService or UserDatabase, we ask Spring for that object.  In a real application, Spring manages not just two objects but a complex network of hundreds or thousands of objects. This network is referred to as the \u0026ldquo;application context\u0026rdquo; in Spring, as it forms the core of our application.\nIn the next section, we\u0026rsquo;ll discuss how Spring\u0026rsquo;s application context works.\nThe Spring Application Context The application context is the heart of an application that is based on the Spring Framework. It contains all the objects whose control we have delegated to Spring. For this reason, it is sometimes also referred to as the \u0026ldquo;IoC container\u0026rdquo; (IoC = \u0026ldquo;Inversion of Control\u0026rdquo;) or the \u0026ldquo;Spring container\u0026rdquo;.\nThe objects within the application context are called \u0026ldquo;beans.\u0026rdquo; If you are not familiar with the Java world, the term \u0026ldquo;bean\u0026rdquo; might be somewhat confusing. We can derive the word like this:\n Spring is a framework for the Java programming language. Java is the name of an island in Indonesia where coffee is grown, and the type of coffee produced there is also called \u0026ldquo;Java.\u0026rdquo; Coffee is made from coffee beans. In the Java community, it was decided to call certain objects in Java (the programming language) \u0026ldquo;Beans.\u0026rdquo;  It\u0026rsquo;s a bit far-fetched, but the term has become widely adopted, like it or not.\nSo, the application context is essentially a network of Java objects known as \u0026ldquo;beans.\u0026rdquo; Spring instantiates these beans for us and resolves the dependencies between the beans through constructor injection.\nBut how does Spring know which beans it should create and manage within its application context? 
This is where the term \u0026ldquo;configuration\u0026rdquo; comes into play.\nA configuration in the context of Spring is a definition of the beans required for our application. In the simplest case, this is just a list of classes. Spring takes these classes, instantiates them, and includes the resulting objects (beans) in the application context.\nIf the instantiation of the classes is not possible (for example, if a bean constructor expects another bean that is not part of the configuration), Spring halts the creation of the application context with an exception.\nThis is one of the advantages that Spring offers: a faulty configuration usually prevents the application from starting at all, thus avoiding potential runtime issues.\nThere are several ways to create a Spring configuration. In most use cases, it is convenient and practical to program the configuration in Java. However, in cases where the source code should be completely free from dependencies on the Spring Framework, configuring with XML can also be useful.\nConfiguring an Application Context with XML In the early days of Spring, the application context had to be configured with XML. XML configuration allows for complete separation of configuration from the code. The code doesn\u0026rsquo;t need to be aware that it is managed by Spring.\nAn example XML configuration looks like this:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;beans xmlns=\u0026#34;http://www.springframework.org/schema/beans\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://www.springframework.org/schema/beans https://www.springframework.org/schema/beans/spring-beans.xsd\u0026#34;\u0026gt; \u0026lt;bean id=\u0026#34;userDatabase\u0026#34; class=\u0026#34;de.springboot3.xml.UserDatabase\u0026#34;/\u0026gt; \u0026lt;bean id=\u0026#34;greetingService\u0026#34; class=\u0026#34;de.springboot3.xml.GreetingService\u0026#34;\u0026gt; \u0026lt;constructor-arg ref=\u0026#34;userDatabase\u0026#34;/\u0026gt; \u0026lt;/bean\u0026gt; \u0026lt;/beans\u0026gt; In this configuration, the beans userDatabase and greetingService are defined. 
Each bean declaration provides instructions to Spring on how to instantiate that bean.\nThe class UserDatabase has a default constructor without parameters, so it is sufficient to provide Spring with the class name. The class GreetingService has a constructor parameter of type UserDatabase, so we refer to the previously declared userDatabase bean using the ref attribute of a constructor-arg element.\nWith this XML declaration, we can now create an ApplicationContext object:\npublic class XmlConfigMain { public static void main(String[] args) { ApplicationContext applicationContext = new ClassPathXmlApplicationContext( \u0026#34;application-context.xml\u0026#34;); GreetingService greetingService = applicationContext.getBean( GreetingService.class); System.out.println(greetingService.greet(1)); } } We pass the XML configuration to the constructor of ClassPathXmlApplicationContext, and Spring creates an ApplicationContext object for us.\nThis ApplicationContext now serves as our IoC container, and via its getBean() method we can, for example, retrieve the bean of type GreetingService from it.\nWhile the XML configuration in this example appears quite manageable, it can become more extensive in larger applications. For us as Java developers, it would be more convenient to manage such a comprehensive configuration in Java itself and take advantage of the Java compiler and IDE features.\nJava Configuration in Detail XML configuration is now mostly used in exceptional cases and legacy applications, and configuration with Java has become the standard. 
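Before looking at the Java-based configuration, it can help to demystify what the container itself does with such a list of bean definitions. The following toy container is a deliberately naive plain-Java sketch of the idea behind the ApplicationContext and getBean(): reflection over public constructors, with repeated passes until all dependencies are resolved. It is not how Spring is actually implemented, and the User, UserDatabase, and GreetingService classes below are simplified stand-ins:

```java
import java.lang.reflect.Constructor;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// A toy IoC container: it instantiates the registered classes and resolves
// constructor parameters against beans that have already been created.
class ToyContainer {
    private final Map<Class<?>, Object> beans = new HashMap<>();

    ToyContainer(Class<?>... classes) {
        List<Class<?>> remaining = new ArrayList<>(List.of(classes));
        while (!remaining.isEmpty()) {
            boolean progress = false;
            Iterator<Class<?>> it = remaining.iterator();
            while (it.hasNext()) {
                Object bean = tryCreate(it.next());
                if (bean != null) {
                    beans.put(bean.getClass(), bean);
                    it.remove();
                    progress = true;
                }
            }
            // A full pass created no bean: the "configuration" is faulty,
            // so we fail fast, just like Spring does at startup.
            if (!progress) {
                throw new IllegalStateException("Unresolvable dependencies: " + remaining);
            }
        }
    }

    // Returns an instance if some public constructor can be satisfied with
    // already-created beans, or null so we can retry in a later pass.
    private Object tryCreate(Class<?> type) {
        for (Constructor<?> ctor : type.getConstructors()) {
            Class<?>[] paramTypes = ctor.getParameterTypes();
            Object[] args = new Object[paramTypes.length];
            boolean satisfied = true;
            for (int i = 0; i < paramTypes.length; i++) {
                args[i] = beans.get(paramTypes[i]);
                if (args[i] == null) {
                    satisfied = false;
                    break;
                }
            }
            if (satisfied) {
                try {
                    return ctor.newInstance(args);
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        return null;
    }

    <T> T getBean(Class<T> type) {
        return type.cast(beans.get(type));
    }
}

// Simplified stand-ins for the classes from the text.
class User {
    private final String name;

    public User(String name) { this.name = name; }

    public String getName() { return name; }
}

class UserDatabase {
    public UserDatabase() {}

    public User findUser(Integer userId) {
        return new User("Alice"); // stubbed lookup
    }
}

class GreetingService {
    private final UserDatabase userDatabase;

    public GreetingService(UserDatabase userDatabase) {
        this.userDatabase = userDatabase;
    }

    public String greet(Integer userId) {
        return "Hello " + userDatabase.findUser(userId).getName();
    }
}

public class ToyContainerDemo {
    public static void main(String[] args) {
        // The registration order does not matter; the container works out
        // a valid instantiation order itself.
        ToyContainer context = new ToyContainer(GreetingService.class, UserDatabase.class);
        GreetingService greetingService = context.getBean(GreetingService.class);
        System.out.println(greetingService.greet(1)); // prints "Hello Alice"
    }
}
```

The real ApplicationContext does far more (scopes, lifecycle callbacks, proxies, circular-reference handling), but the core contract is the same: hand over a set of classes, get back a network of wired objects.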
Therefore, let\u0026rsquo;s take a closer look at this approach, also known as \u0026ldquo;Java config\u0026rdquo;.\n@Configuration and @Bean The core of a Java configuration is a Java class annotated with the Spring annotation @Configuration:\n@Configuration public class GreetingConfiguration { @Bean UserDatabase userDatabase() { return new UserDatabase(); } @Bean GreetingService greetingService(UserDatabase userDatabase) { return new GreetingService(userDatabase); } } This configuration is equivalent to the XML configuration from the previous section. With the @Configuration annotation, we inform Spring that this class contributes to the application context. Without this annotation, Spring remains inactive.\nA configuration class can declare factory methods like userDatabase() and greetingService(), each creating an object. With the @Bean annotation, we mark such factory methods. Spring finds these methods and calls them to create an ApplicationContext.\nDependencies between beans, such as the dependency of GreetingService on UserDatabase, are resolved through parameters of the factory methods. In our case, Spring first calls the method userDatabase() to create a UserDatabase bean and then passes it to the method greetingService() to create a GreetingService bean.\nUsing the AnnotationConfigApplicationContext class, we can then create an ApplicationContext:\npublic class JavaConfigMain { public static void main(String[] args) { ApplicationContext applicationContext = new AnnotationConfigApplicationContext(GreetingConfiguration.class); GreetingService greetingService = applicationContext.getBean(GreetingService.class); System.out.println(greetingService.greet(1)); } } The constructor of AnnotationConfigApplicationContext allows us to pass multiple configuration classes instead of just one. 
This is helpful for larger applications because we can split the configuration of many beans into multiple configuration classes to maintain clarity.\n@Component and @ComponentScan Configuring hundreds or even thousands of beans via Java for a large application can become tedious. To simplify this, Spring offers the ability to \u0026ldquo;scan\u0026rdquo; for beans in the Java classpath.\nThis scanning is activated using the @ComponentScan annotation:\n@Configuration @ComponentScan(\u0026#34;de.springboot3\u0026#34;) public class GreetingScanConfiguration { } As before, we create a configuration class (annotated with @Configuration). However, instead of defining the beans ourselves as factory methods annotated with the @Bean annotation, we add the new @ComponentScan annotation.\nWith this annotation, we instruct Spring to scan the de.springboot3 package for beans. If the scan finds a class annotated with @Component, it will create a bean from that class (i.e., the class will be instantiated and added to the Spring application context).\nTherefore, we simply annotate all classes for which Spring should create a bean with the @Component annotation:\n@Component public class GreetingService { private final UserDatabase userDatabase; public GreetingService(UserDatabase userDatabase) { this.userDatabase = userDatabase; } } @Component public class UserDatabase { // ... } As before, dependencies between beans are expressed through constructor parameters, and Spring resolves these automatically.\n@Bean vs. @Component The annotations @Bean and @Component express a similar concept: both mark a contribution to the Spring application context. This similarity can be confusing, especially at the beginning.\nThe Java compiler helps here a bit, as the @Bean annotation is only allowed on methods, and the @Component annotation is only allowed on classes. So, we cannot confuse them. 
However, we can still annotate methods and classes that Spring doesn\u0026rsquo;t recognize!\nSpring evaluates the @Bean annotation only within a @Configuration class, and the @Component annotation only on classes found by a component scan.\n Combining @Configuration and @ComponentScan Spring does not dictate how we should configure the beans of our application. We can configure them using XML or Java config or even combine both approaches. We can also combine explicit bean definitions using @Bean methods with a scan using @ComponentScan:\n@Configuration @ComponentScan(\u0026#34;de.springboot3.java.mixed\u0026#34;) class MixedConfiguration { @Bean GreetingService greetingService(UserDatabase userDatabase) { return new GreetingService(userDatabase); } } // no @Component annotation! class GreetingService {...} @Component class UserDatabase {...} In this configuration, Spring creates a bean of type UserDatabase because the class is annotated with @Component, and a @ComponentScan is configured. On the other hand, the bean of type GreetingService is defined through the explicit @Bean-annotated factory method.\nModular Configuration Configuring a larger application with hundreds of beans can quickly become confusing.\nThe explicit configuration using @Bean annotations has the advantage that the configuration of beans is bundled in a few @Configuration classes and is easy to understand.\nThe implicit configuration using @ComponentScan and @Component has the advantage that we don\u0026rsquo;t need to define each bean ourselves, but it can be spread over many @Component annotations and, therefore, over the entire codebase, making it more challenging to grasp.\nA proven principle is to design the Spring configuration along the architecture of the application. Each module of the application should reside in its own package and have its own @Configuration class. 
In this @Configuration class, we can either configure a @ComponentScan for the module package or use explicit configuration using @Bean methods. To bring the modules together into a complete application, we create a parent @Configuration class that defines a @ComponentScan for the main package. This scan will pick up all @Configuration classes in this and the sub-packages.\n What benefits does the Spring Container give us? Now we know that Spring offers us an IoC container that instantiates and manages objects (beans) for us. This saves us from the burden of managing the lifecycle of these objects ourselves.\nBut that\u0026rsquo;s just the beginning. Since Spring has control over all beans, it can perform many other tasks for us. For example, Spring can intercept calls to bean methods to start or commit a database transaction. We can also use Spring as an event bus. We send an event to the Spring container, and Spring forwards the event to all interested beans.\nWe will delve into these and many other features throughout the rest of the book. The foundation of all these features is the Spring programming model, whose core we have already learned in this chapter. This programming model is a combination of annotations, conventions, and interfaces that we can assemble into a complete application.\nEvents Since Spring manages all beans defined by us, the framework can send messages to these beans. We can use this to develop an event mechanism that loosely couples our components. This is just one of the benefits that the Spring container gives us.\nLoose Coupling When a software component requires the functionality of another component, the two components are \u0026ldquo;coupled\u0026rdquo; together.\nThis coupling can vary. For example, a class may call a method of another class to access its functionality. This couples the two classes at compile time. 
We refer to this as \u0026ldquo;strong coupling.\u0026rdquo;\nThe strength of the coupling also depends on how extensive and complicated the signature of the called method is – if it\u0026rsquo;s a simple method without parameters, the coupling is not as strong as if the method expects a series of complex parameters. Once we make a change to the types of these parameters, we have to modify the code of both the calling and the called class.\nCoupling is not inherently bad. Sometimes, two classes need to work very closely together to fulfill a function.\nMost of the time, we want to design our code modularly. When working on a module, we don\u0026rsquo;t want to have to think about all the other modules of the application. This is only possible if the dependencies between the modules are limited.\nLoosely coupled modules allow for parallel development. Each module can be developed by a different developer or even team. This is not possible if the modules are heavily coupled because then the teams would step on each other\u0026rsquo;s feet.\nLet\u0026rsquo;s take the example of a banking application that implements use cases from two different domains. The first module implements the \u0026ldquo;User\u0026rdquo; domain. It manages user data and serves as the single source of truth for this data. Another module implements the \u0026ldquo;Transactions\u0026rdquo; domain. This module implements functions related to money transfers.\nEvery time the \u0026ldquo;Transactions\u0026rdquo; module initiates a transfer, it needs to check whether the user initiating the transfer is locked or not. The transfer is only executed if the user is not locked.\nThe \u0026ldquo;Transactions\u0026rdquo; module could always directly call the \u0026ldquo;User\u0026rdquo; module before performing a transfer and ask if the user is locked or not. However, this would strongly couple the \u0026ldquo;Transactions\u0026rdquo; module to the \u0026ldquo;User\u0026rdquo; module. 
Whenever we make a change to the \u0026ldquo;User\u0026rdquo; module, we may also have to adjust the code in the \u0026ldquo;Transactions\u0026rdquo; module. Additionally, in the future, we may want to extract the \u0026ldquo;Transactions\u0026rdquo; module into its own (micro)service so that it can be released independently of the \u0026ldquo;User\u0026rdquo; module.\nEvents provide a solution to loosely couple both modules. Every time a new user is registered, locked, or unlocked, the \u0026ldquo;User\u0026rdquo; module sends an event. The \u0026ldquo;Transactions\u0026rdquo; module listens to these events and updates its own database with the current status of each respective user. Before performing a transfer, the \u0026ldquo;Transactions\u0026rdquo; module can now check its own database to see if the user is locked or not, instead of having to make a request to the user module. The data storage is completely decoupled from the \u0026ldquo;User\u0026rdquo; module.\nUsing this example, we want to explore how we can implement such an event mechanism with Spring and Spring Boot.\nSending Events The prerequisite for sending and receiving events with Spring is that both the sender and the receiver must be registered as beans in the ApplicationContext.\nWe can define the events themselves as simple Java classes or records. For example, we can write our user events as follows:\npublic record UserCreatedEvent(User user) {} public record UserLockedEvent(int userId) {} public record UserUnlockedEvent(int userId) {} To send such an event, we can use the ApplicationEventPublisher interface. Conveniently, Spring automatically provides a bean that implements this interface in the ApplicationContext. 
So we can simply inject it into our UserService:\n@Component public class UserService { private final ApplicationEventPublisher applicationEventPublisher; public UserService(ApplicationEventPublisher applicationEventPublisher) { this.applicationEventPublisher = applicationEventPublisher; } public void createUser(User user) { // ... business logic omitted  this.applicationEventPublisher.publishEvent(new UserCreatedEvent(user)); } public void lockUser(int userId) { // ... business logic omitted  this.applicationEventPublisher.publishEvent(new UserLockedEvent(userId)); } public void unlockUser(int userId) { // ... business logic omitted  this.applicationEventPublisher.publishEvent(new UserUnlockedEvent(userId)); } } We simply call the publishEvent() method. That\u0026rsquo;s all we need to do to send an event.\nReceiving Events There are several ways to respond to events in Spring. We can implement the ApplicationListener interface or use the @EventListener annotation.\nApplicationListener The conventional way to respond to an event in a Spring application is by implementing the ApplicationListener interface:\n@Component public class UserCreatedEventListener implements ApplicationListener\u0026lt;UserCreatedApplicationEvent\u0026gt; { private static final Logger logger = LoggerFactory.getLogger(UserCreatedEventListener.class); private final TransactionDatabase database; public UserCreatedEventListener(TransactionDatabase database) { this.database = database; } @Override public void onApplicationEvent(UserCreatedApplicationEvent event) { this.database.saveUser(new User(event.getUser().id(), User.UNLOCKED)); logger.info(\u0026#34;received event: {}\u0026#34;, event); } } The UserCreatedEventListener class is part of the \u0026ldquo;Transactions\u0026rdquo; module. 
It has access to an object of type TransactionDatabase, which simulates the module\u0026rsquo;s database.\nWe implement the onApplicationEvent() method, which, in our case, takes an object of type UserCreatedApplicationEvent. The event contains a User object. The listener takes the user\u0026rsquo;s data that the \u0026ldquo;Transactions\u0026rdquo; module needs and saves it in the database. Users are unlocked by default, so we pass the UNLOCKED status.\nWhy do we use the event class UserCreatedApplicationEvent instead of the record UserCreatedEvent we learned about earlier?\nBecause Spring\u0026rsquo;s ApplicationListener can only handle events of type ApplicationEvent. This means that every event we send must inherit from this class:\npublic class UserCreatedApplicationEvent extends ApplicationEvent { private final User user; public UserCreatedApplicationEvent(Object source, User user) { super(source); this.user = user; } public User getUser() { return user; } } As we can see, it is somewhat cumbersome to receive events with an ApplicationListener. On the one hand, our event must inherit from the ApplicationEvent class, and on the other hand, we must implement the ApplicationListener interface, which can only respond to a single type of event. To receive other event types, we would have to write additional ApplicationListeners or implement a large if/else block in the onApplicationEvent() method.\nThe Spring team recognized this and added an annotation-based event mechanism to Spring.\n@EventListener To respond to an event, we can also use the @EventListener annotation. 
If Spring finds this annotation on a method of a bean registered in the ApplicationContext, Spring automatically sends all events of the corresponding type to this method.\n@Component public class UserEventListener { private final TransactionDatabase database; public UserEventListener(TransactionDatabase database) { this.database = database; } @EventListener(UserCreatedEvent.class) public void onUserCreated(UserCreatedEvent event) { this.database.saveUser(new User(event.user().id(), User.UNLOCKED)); } } The UserEventListener class is added to the Spring ApplicationContext using the @Component annotation. The onUserCreated() method is annotated with @EventListener and accepts an object of our simple record type UserCreatedEvent.\nThe event does not have to inherit from the ApplicationEvent type, as in the example with an ApplicationListener! Spring internally wraps our event in an ApplicationEvent, but we don\u0026rsquo;t notice it. We can receive any type as an event, but immutable records are best suited for transporting events.\nSynchronous or asynchronous? If we use events as shown in the examples, our \u0026ldquo;Users\u0026rdquo; and \u0026ldquo;Transactions\u0026rdquo; modules are decoupled at compile time. The only coupling between the modules are the event objects, which both modules must know.\nBut how are our modules coupled at runtime? What happens if our EventListener in the \u0026ldquo;Transactions\u0026rdquo; module produces an exception when receiving an event or takes a long time to process an event? Can and should we react to this in our \u0026ldquo;Users\u0026rdquo; module?\nBy default, the Spring Event mechanism works synchronously, as shown in the following diagram:\nThe UserService sends an event to the ApplicationEventPublisher. This knows which event listeners are interested in the event, and calls them one after the other (in our case, it calls only the UserEventListener). 
Only when the onUserCreated() method has completed its processing does the control flow return to the UserService. The processing takes place synchronously.\nThis means that if the UserEventListener.onUserCreated() method throws an exception, it arrives in the UserService and interrupts the processing there. Or if the method takes a long time, the control flow in the UserService is interrupted for that long.\nSo our modules are not as decoupled as we had hoped! We have to adapt our exception handling and make our code more robust so that it can handle long waiting times if necessary.\nCan we also process events asynchronously to reduce this coupling, as shown in the following sequence diagram?\nThe short answer is \u0026ldquo;yes.\u0026rdquo; We can simply annotate the listener method with the @Async annotation:\n@Component public class UserEventListener { @Async @EventListener(UserCreatedEvent.class) public void onUserCreated(UserCreatedEvent event) { // …  } } This prompts Spring to return control flow directly to the caller and execute the method in a separate thread in the background.\nFor the @Async annotation to take effect, we must first activate it. We do this using the @EnableAsync annotation on one of our @Configuration classes.\nSince events are now processed asynchronously, exceptions in the event listener are no longer forwarded to the sender of the event. Depending on the use case, this may be desired or undesired.\nWe should be aware that decoupling using the @Async annotation only scales to a limited extent. If we process a large number of events, they will accumulate in Spring\u0026rsquo;s internal thread pool and be processed one after the other. In synchronous event processing, scaling problems become apparent more quickly, as the entire processing chain slows down instead of just the event processing.\nSpring Boot\u0026rsquo;s Application Events During the lifecycle of an application, Spring Boot sends some events that we can respond to. 
Most of these events are very technical and are rarely relevant to us as application developers. The following list briefly describes these events in chronological order:\n ApplicationStartingEvent: The application is currently starting, but the Environment and ApplicationContext are not yet available.\n ApplicationEnvironmentPreparedEvent: The application is currently starting, and the Spring Environment is available.\n ApplicationContextInitializedEvent: The application is currently starting, and the Spring ApplicationContext is available.\n ApplicationPreparedEvent: A combination of the previous two events. The ApplicationContext and Environment are available.\n ContextRefreshedEvent: The ApplicationContext has been updated. This happens when the ApplicationContext starts and when it is reloaded (for example, after a configuration change).\n WebServerInitializedEvent: The embedded web server (Tomcat by default) has started.\n ApplicationStartedEvent: The application has started, but no ApplicationRunners and CommandLineRunners have been executed yet.\n ApplicationReadyEvent: The application is fully started.\n ApplicationFailedEvent: The application did not start due to an error.\nThe most relevant event for us is probably the ApplicationReadyEvent, as it is fired when the application is ready to work. We can use it, for example, to activate certain components in our application.\nIf our application processes messages from a queue, for example, we want to make sure that our application is also ready to process these messages. This is the case as soon as we have received the ApplicationReadyEvent.\nWe cannot react to some of the earlier events from the list in the \u0026ldquo;normal\u0026rdquo; way, as they are fired very early. 
If we want to react to these events, we must manually register an ApplicationListener:\n@SpringBootApplication public class EventsApplication { public static void main(String[] args) { SpringApplication springApplication = new SpringApplication(EventsApplication.class); springApplication.addListeners(new MyApplicationListener()); springApplication.run(args); } } ","date":"December 4, 2023","image":"https://reflectoring.io/images/stock/0027-cover-1200x628-branded_hud0e8018d4bb3bffe77108325dc949a45_281256_650x0_resize_q90_box.jpg","permalink":"/spring-basics/","title":"Spring Basics"},{"categories":["Spring"],"contents":"Spring Boot builds on top of the Spring Framework and provides a wealth of additional features and integrations. To simplify somewhat, one could say that the Spring Framework focuses on functions related to the application context, while Spring Boot provides functions that are needed in many applications running in production or that simplify developer life. In this chapter, we will provide an overview of the core functions of Spring Boot, which we will examine in more detail in later chapters.\nBootstrapping The name \u0026ldquo;Spring Boot\u0026rdquo; comes from the core functionality of Spring Boot: bootstrapping. In this case, bootstrapping refers to the process of transforming a pile of source code into an application and making it accessible to users.\nIn the previous chapter, we already saw how we can create and start an application context using Spring (without Spring Boot). But how do we turn this ApplicationContext into a complete application? This is where Spring Boot enters the stage.\nInstead of manually creating an ApplicationContext, we let Spring Boot do this for us. 
We simply provide a main method that is declared as a Spring Boot application:\n@SpringBootApplication public class SpringBootBasicsApplication { public static void main(String[] args) { SpringApplication.run(SpringBootBasicsApplication.class, args); } } In the main() method, we call the static method SpringApplication.run() and pass in a class annotated with @SpringBootApplication. Typically, this is the same class that also contains the main() method, as shown in the example above.\nTo have access to the classes @SpringBootApplication and SpringApplication, we need to declare the org.springframework.boot:spring-boot-starter module as a dependency in our pom.xml or build.gradle file.\nOnce we start the main() method (for example from our IDE), we will see the following log output:\n. ____ _ __ _ _ /\\\\ / ___\u0026#39;_ __ _ _(_)_ __ __ _ \\ \\ \\ \\ ( ( )\\___ | \u0026#39;_ | \u0026#39;_| | \u0026#39;_ \\/ _` | \\ \\ \\ \\ \\\\/ ___)| |_)| | | | | || (_| | ) ) ) ) \u0026#39; |____| .__|_| |_|_| |_\\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v3.0.1) 2023-07-03T11:48:41.624+10:00 INFO 44185 --- [ main] d.s.s.SpringBootBasicsApplication : Starting SpringBootBasicsApplication using Java 18 with PID 44185 2023-07-03T11:48:41.629+10:00 INFO 44185 --- [ main] d.s.s.SpringBootBasicsApplication : No active profile set, falling back to 1 default profile: \u0026#34;default\u0026#34; 2023-07-03T11:48:42.482+10:00 INFO 44185 --- [ main] d.s.s.SpringBootBasicsApplication : Started SpringBootBasicsApplication in 1.288 seconds (process running for 1.87) Process finished with exit code 0 This log excerpt already shows us some interesting things:\n We\u0026rsquo;re using Spring Boot version 3.0.1. We\u0026rsquo;re using Java 18. Spring Boot starts in the default profile because we haven\u0026rsquo;t defined a specific profile. This means that Spring Boot supports the concept of \u0026ldquo;profiles\u0026rdquo;! 
We will learn what exactly a profile is and how to use it in chapter [[22 - Konfiguration]]. It took 1.288 seconds to start the Spring Boot application. Spring Boot has configured logging for us, so our log outputs are formatted with dates, log levels, process ID, thread names, etc.  So, Spring Boot has already done quite a bit of \u0026ldquo;bootstrapping\u0026rdquo; to help us start and configure an application, and we\u0026rsquo;ll learn many more things that Spring Boot can do for us throughout the rest of the book.\nHowever, after the log output from above, our example application exits directly without having done anything and returns an exit code of 0 to the command line, indicating a successful application termination. This isn\u0026rsquo;t very helpful, so let\u0026rsquo;s look at some more Spring Boot features that can help us build a real application.\nInfluencing the ApplicationContext In the \u0026ldquo;Spring Basics\u0026rdquo; chapter, we already saw how to influence an ApplicationContext. Just like in a Spring application, the ApplicationContext is the heart of a Spring Boot application, and we can influence it just as we\u0026rsquo;re accustomed to from Spring (with some extras that we\u0026rsquo;ll learn in the \u0026ldquo;Extending Spring Boot\u0026rdquo; chapter).\nIf we look at the source code of the @SpringBootApplication annotation, we see the following:\n@Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) @Documented @Inherited @SpringBootConfiguration @EnableAutoConfiguration @ComponentScan(excludeFilters = { @Filter(type = FilterType.CUSTOM, classes = TypeExcludeFilter.class), @Filter(type = FilterType.CUSTOM, classes = AutoConfigurationExcludeFilter.class) }) public @interface SpringBootApplication { ... } We recognize the @ComponentScan annotation from the chapter on [[Spring Basics]]! 
Let\u0026rsquo;s continue to look at the source code of the @SpringBootConfiguration annotation:\n@Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) @Documented @Configuration @Indexed public @interface SpringBootConfiguration { ... } Here, we see the @Configuration annotation again!\nAnnotations on annotations are called meta-annotations. When Spring examines class annotations, the framework is smart enough to also evaluate meta-annotations (and meta-meta-annotations, and so on). For us, this means that our main class SpringBootBasicsApplication is annotated with both the @ComponentScan and @Configuration (meta-)annotations.\nSince our main class is annotated with @ComponentScan, Spring automatically evaluates all @Component and @Configuration annotations in the class\u0026rsquo;s package and all packages below it, instantiates objects of the annotated classes, and adds them to the ApplicationContext, as described in the [[Spring Basics]] chapter.\nFurthermore, Spring treats our main class as a @Configuration class, so we can directly add @Bean-annotated factory methods here if we want to.\nTo implement a \u0026ldquo;Hello World,\u0026rdquo; we could extend our application as follows:\n@SpringBootApplication public class SpringBootBasicsApplication { public static void main(String[] args) { SpringApplication.run(SpringBootBasicsApplication.class, args); } @Bean public HelloBean helloBean() { return new HelloBean(); } static class HelloBean { public HelloBean() { System.out.println(\u0026#34;Hello!\u0026#34;); } } } We create a bean of type HelloBean, which outputs a message in its constructor. If we start the application again, we\u0026rsquo;ll see this log output before the application stops.\nSo far, we\u0026rsquo;ve seen how we can use Spring Boot to develop an application that performs a single action (in our case, printing \u0026ldquo;Hello!\u0026rdquo; to the command line). 
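The meta-annotation resolution described above can be demonstrated with plain Java reflection. The following self-contained sketch defines its own annotations (the names only mirror Spring's; Spring's actual lookup is more sophisticated and fully recursive) and checks one level of meta-annotations:

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Demonstrates why a class annotated with a meta-annotated annotation
// is effectively annotated with the meta-annotation, too.
public class MetaAnnotationDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @interface Configuration {}

    // this annotation is itself annotated with @Configuration,
    // analogous to @SpringBootConfiguration in Spring Boot
    @Retention(RetentionPolicy.RUNTIME)
    @Configuration
    @interface SpringBootConfigurationLike {}

    @SpringBootConfigurationLike
    static class MyApp {}

    // naive, one-level meta-annotation lookup
    static boolean hasAnnotation(Class<?> type, Class<? extends Annotation> wanted) {
        for (Annotation direct : type.getAnnotations()) {
            if (direct.annotationType().equals(wanted)) {
                return true;
            }
            for (Annotation meta : direct.annotationType().getAnnotations()) {
                if (meta.annotationType().equals(wanted)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // MyApp carries no direct @Configuration, but the meta-annotation is found
        System.out.println(hasAnnotation(MyApp.class, Configuration.class)); // true
    }
}
```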
In Chapter XX, we will see how we can use this to develop full-fledged command-line programs.\nHowever, many applications we develop nowadays are web applications and not command-line programs, so our next focus will be on Spring Boot\u0026rsquo;s embedded web server features.\nEmbedded Web Server Traditionally, Java-based web applications were deployed on application servers like Tomcat or JBoss. The application was packaged as a WAR (Web Archive) file and copied to the server, which then unpacked the archive, read some metadata from descriptor XML files, and forwarded the users' HTTP requests to the endpoints defined by the application.\nSpring Boot has turned the traditional deployment of web applications on its head. Instead of creating a WAR file and copying it to a server, with Spring Boot, we create a \u0026ldquo;runnable\u0026rdquo; JAR file (also known as a \u0026ldquo;fat JAR\u0026rdquo; or \u0026ldquo;uber JAR\u0026rdquo;) by default. This JAR file contains our application code, all dependencies, and an embedded web server. When we start this JAR file with the command java -jar app.jar, Spring Boot automatically starts the web server, which then listens for HTTP requests and forwards them to our application.\nTo develop a web application, we simply need to add the org.springframework.boot:spring-boot-starter-web module as a dependency in our pom.xml or build.gradle file.\nWhen we start the application, we\u0026rsquo;ll see the following log output in addition to the previous log outputs:\n... 2023-07-05T09:55:58.253+10:00 INFO 87686 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path \u0026#39;\u0026#39; Spring Boot has automatically started a Tomcat server that listens on port 8080 to receive HTTP requests. Our application no longer stops immediately after starting; it now waits for HTTP requests until it\u0026rsquo;s terminated (or crashes). 
It now acts as a fully functional web server!\nThe embedded web server simplifies the deployment of a Spring Boot application. For instance, we can package the JAR file into a Docker image and start it using the java -jar app.jar command within the Docker container. With the profile concept we\u0026rsquo;ve already seen in the log output, Spring Boot also makes it easy to configure the application for different environments (development, staging, production, etc.). To do this, we only need to include a file like application-\u0026lt;profile\u0026gt;.yml in the JAR file (or place it next to the JAR file) and activate the profile. We will learn about this in detail in chapter [[22 - Konfiguration]].\nDependency Management Another core feature of Spring Boot is dependency management.\nIn our application, we often don\u0026rsquo;t want to reinvent the wheel, so we use a large number of libraries defined as dependencies in our pom.xml or build.gradle file. Each of these libraries exists in versions that are compatible with our code (and other libraries), and often in versions that are outdated (or too new) and not compatible with our application. We need to manage the versions of all our dependencies to ensure they remain compatible with each other.\nSpring Boot addresses this. It provides us with a \u0026ldquo;Bill of Materials\u0026rdquo; (BOM) that defines a set of libraries and their versions, ensuring they are always compatible with each other and the current Spring Boot version. This BOM doesn\u0026rsquo;t list all libraries out there, but it covers all the libraries needed by Spring Boot itself or by its official integrations with other products (such as databases).\nTo use the Spring Boot BOM, we simply need to import the org.springframework.boot:spring-boot-dependencies dependency in the same version as Spring Boot into our pom.xml or build.gradle file. If we use the Spring Boot plugin for Maven or Gradle, the plugin does this automatically for us. 
When declaring a dependency on a library, we can omit the version, and it will be automatically loaded in the version defined in the Spring Boot BOM. We will delve into the details of the Maven and Gradle plugins for Spring Boot in the \u0026ldquo;Build Management\u0026rdquo; chapter.\nIntegrations Spring Boot comes with several integrations for commonly used libraries, making configuration easier without requiring us to write the integration code ourselves.\nOne example is database integration. For instance, if we add the dependency org.postgresql:postgresql to our pom.xml or build.gradle file (without specifying a version, as Spring Boot\u0026rsquo;s dependency management automatically uses the compatible version defined in the BOM), Spring Boot recognizes the PostgreSQL driver in the classpath. It then automatically provides a DataSource object to the ApplicationContext, which we can use to access the database. We just need to provide Spring Boot with the database\u0026rsquo;s URL, username, and password by adding configuration parameters in the application.yml file.\nThe DataSource object provided by Spring Boot can be used directly, but it\u0026rsquo;s also used by other integrations. For instance, if we add a dependency to Spring Data JPA or Spring Data JDBC, Spring Boot detects this and provides these integrations with the DataSource. In this case, we don\u0026rsquo;t need to use the DataSource directly; we can use Spring Data\u0026rsquo;s abstractions to access the database, making our life much simpler.\nThis database integration example follows a typical pattern for Spring Boot integrations. Spring Boot detects a dependency on the classpath and automatically configures some beans in the ApplicationContext that can be used by us (or by other integrations). If the integration requires additional configuration parameters, we can provide them in the application.yml file. 
If we need to further customize the beans provided by the integration, we can \u0026ldquo;override\u0026rdquo; the beans in the ApplicationContext by defining them ourselves as @Bean or @Component.\nThe database integration is so commonly needed in applications that Spring Boot includes it at its core. Other integrations (like with a cache or messaging system) are not required in every application, so we need to activate them with so-called \u0026ldquo;starters.\u0026rdquo; For example, if we want to use a Redis cache in our application, we need to add the org.springframework.boot:spring-boot-starter-data-redis dependency, which activates the integration with Redis. A \u0026ldquo;starter\u0026rdquo; is a small library that simplifies starting with a specific feature or integration. We\u0026rsquo;ll explore starters in more detail in the \u0026ldquo;Extending Spring Boot\u0026rdquo; chapter.\nProduction Features In addition to bootstrapping and the embedded web server, Spring Boot offers features that significantly facilitate running an application in production.\nOne of the most important production features is logging. We\u0026rsquo;ve already seen the automatic configuration of log outputs, where dates and thread names appear in each log line, making debugging easier. Spring Boot relies on the de facto standard SLF4J for this, so we only need to define a Logger and use it for log outputs. We\u0026rsquo;ll explore logging in more detail in the Logging chapter.\nWe\u0026rsquo;ve also seen the profile feature. We can define configuration parameters that have their own values for each profile. When starting the application, we can specify which profiles (and thus which configuration values) should be active. This allows us to run the same application in different environments, only needing to adjust the configuration for each environment. This feature is essential in modern software development, as it prevents a whole class of environment-specific bugs. 
We\u0026rsquo;ll cover profiles and configuration parameters in the [[22 - Konfiguration]] chapter.\nWith the \u0026ldquo;actuator\u0026rdquo; module, Spring Boot also provides insights into a running application. Once we add this module as a dependency, various metrics such as memory and processing capacity are exposed through the /actuator endpoint. These metrics can then be queried by observability tools and made available in dashboards, providing constant observability into the production environment.\nThis was just a glimpse of the features that Spring Boot offers. Spring Boot provides numerous other features that make application development and operation easier, which we\u0026rsquo;ll cover in the rest of this book.\n","date":"December 4, 2023","image":"https://reflectoring.io/images/stock/0027-cover-1200x628-branded_hud0e8018d4bb3bffe77108325dc949a45_281256_650x0_resize_q90_box.jpg","permalink":"/spring-boot-basics/","title":"Spring Boot Basics"},{"categories":["Spring"],"contents":"Welcome to the exciting world of Spring\u0026rsquo;s Java Configuration!\nIn this comprehensive guide, we will learn Spring\u0026rsquo;s Java-based configuration. We will get familiar with core annotations like @Bean and @Configuration. We will explore the ways to organize configuration logic, delve into modular setups, and tailor configurations with ease. By the end, we will not only grasp the fundamentals but also be equipped to create well-configured Spring applications.\nLet us dive in and transform our Spring development experience!\n Example Code This article is accompanied by a working code example on GitHub. Understanding Spring Configuration In Spring-based applications, we speak of \u0026ldquo;configuration\u0026rdquo; when we want to describe how our application context is made up of different, potentially related and interacting beans. 
It serves as the blueprint, guiding the behavior of our Spring-powered creations.\nConfiguration in Spring is all about empowering our applications with context, specifying how beans (Spring-managed objects) are created, wired, and used within the application.\nTraditionally, Spring offered XML-based configuration, providing a structured way to define beans and their relationships. As the Spring ecosystem evolved, XML configurations have been replaced by more intuitive, expressive, and Java-centric approaches, simplifying the application configuration.\nSpring Configuration Essential Features Java-based configuration in Spring enables us to use plain Java classes (POJOs) and annotations to configure our application context. With Java-based configuration, we do not need XML files. Instead, we use Java code to define our beans and their relationships.\nIn the upcoming sections, we will learn Spring\u0026rsquo;s Java Configuration, exploring its core concepts, understanding annotations like @Bean and @Configuration, and learning better ways of organizing and managing our Spring applications.\nLet us first recap the essentials: beans, IoC, and DI.\nBeans: The Core of Spring In the world of Spring, beans are the fundamental building blocks of our application. Beans are Java objects managed by the Spring container in what is called the \u0026ldquo;application context\u0026rdquo;. Beans represent the various components of our application, from simple data objects to complex business logic. Spring creates these beans, manages their lifecycle, and wires them together, making our application cohesive and organized.\nInversion of Control (IoC) IoC, one of the main building blocks of Spring, is a concept where the control over the object creation and lifecycle is transferred from our application to the Spring container. 
Spring ensures that our beans are created, configured, and managed without our direct intervention.\nDependency Injection (DI) Dependency Injection is the process where beans receive their dependencies from an external source, typically the Spring container, rather than creating them internally. With DI, our beans are ready to use, making our code cleaner, more modular, and easier to test. Spring handles the \u0026lsquo;injection\u0026rsquo; of dependencies, ensuring our beans work seamlessly together in our application recipe.\n👉 Learn more about beans, IoC, and DI in our article Dependency Injection and Inversion of Control.\nTypes of Spring Configuration Spring offers various types of configuration options, each tailored to specific needs. Let us get familiar with them.\nXML Configuration It is the classic approach. Configure Spring beans and dependencies using XML files. Perfect for legacy systems and scenarios requiring external configuration. We\u0026rsquo;re not going to dive into XML configuration in this article, however.\nJava-Based Configuration  Use annotations like @Component, @Service, and @Repository to define beans. Configure beans with Java classes using @Configuration and @Bean. Combine Java-based configuration with annotation scanning for effortless bean discovery.  Manual Java-based configuration can be used in special scenarios where custom configuration and more control is needed.\nIn the sections to follow, we will learn about customizations offered by Java-based configuration.\nEnter Spring\u0026rsquo;s core concepts: @Bean and @Configuration.\nContainer Configuration vs. Application Configuration In this article, we\u0026rsquo;re talking about the configuration of the Spring container (the \u0026ldquo;application context\u0026rdquo;). 
This configuration is about deciding which beans to include in the container and how to instantiate them.\nAnother aspect of configuration is to configure the way our application behaves by defining application properties in YAML or properties files. While those properties can also influence the way the application context is initialized, we\u0026rsquo;re not looking into application properties in this article.\nTo learn more about configuration using properties read our article Configuring a Spring Boot Module with @ConfigurationProperties.\n Understanding @Bean In Spring, @Bean transforms ordinary methods into Spring-managed beans. These beans are objects managed by the Spring IoC container, ensuring centralized control and easier dependency management. In our Employee Management System, @Bean helps create beans representing employees and departments.\nExamples of using @Bean:\npublic class Employee { // Employee properties and methods } public class Department { // Department properties and methods } @Configuration @ComponentScan(basePackages = \u0026#34;io.reflectoring.javaconfig\u0026#34;) public class JavaConfigAppConfiguration { @Bean(name = \u0026#34;newEmployee\u0026#34;, initMethod = \u0026#34;init\u0026#34;, destroyMethod = \u0026#34;destroy\u0026#34;) @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) @Cacheable(cacheNames = \u0026#34;employeesCache\u0026#34;) public Employee newEmployee(final String firstName, final String lastName) { return Employee.builder().firstName(firstName).lastName(lastName).build(); } @Bean @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) public Department newDepartment(final String deptName) { final Department department = Department.builder().name(deptName).build(); acme().addDepartment(department); return department; } @Bean(name = \u0026#34;founder\u0026#34;) @Scope(ConfigurableBeanFactory.SCOPE_SINGLETON) public Employee founderEmployee() { final Employee founder = newEmployee(\u0026#34;Scott\u0026#34;, 
\u0026#34;Tiger\u0026#34;); founder.setDesignation(\u0026#34;Founder\u0026#34;); founder.setDepartment(coreDepartment()); return founder; } @Bean(name = \u0026#34;core\u0026#34;) @Scope(ConfigurableBeanFactory.SCOPE_SINGLETON) public Department coreDepartment() { return newDepartment(\u0026#34;Core\u0026#34;); } @Bean(name = \u0026#34;acme\u0026#34;) @Scope(ConfigurableBeanFactory.SCOPE_SINGLETON) public Organization acme() { final Organization acmeCo = new Organization(); acmeCo.setName(\u0026#34;Acme Inc\u0026#34;); return acmeCo; } } \nHere, the newEmployee() and founderEmployee() methods create Employee beans, the newDepartment() and coreDepartment() methods create Department beans, and the acme() method creates an Organization bean. Spring now manages these objects, handling their lifecycle and ensuring proper dependencies.\nSpring Bean Lifecycle: A Summary The lifecycle of a Spring bean involves its instantiation, initialization, use, and eventual disposal. When the Spring container starts, it instantiates the beans. Then, it injects the dependencies, calls the initialization methods (if specified), and makes the bean available for use. When the container shuts down, the beans are destroyed, invoking any destruction methods (if defined).\nSpring beans undergo a series of stages known as the bean lifecycle. Understanding these stages is essential for effective bean management.\nHere is a concise summary of the Spring bean lifecycle methods:\n  Instantiation: Beans are created, either through constructor invocation or factory methods.\n  Population of Properties: Dependencies and properties of the bean are set.\n  Awareness: Beans implementing Aware interfaces are notified of the Spring environment and related beans. Examples include BeanNameAware, BeanFactoryAware, and ApplicationContextAware.\n  Initialization: The bean is initialized after its properties are set. This involves calling custom initialization methods specified by the developer. 
If a bean implements the InitializingBean interface, the afterPropertiesSet() method is invoked. Alternatively, you can define a custom initialization method and specify it in the bean configuration.\n  In Use: The bean is now in use, performing its intended functions within the application.\n  Destruction: When the application context is closed or the bean is no longer needed, the destruction phase begins. Beans can implement the DisposableBean interface to define custom cleanup operations in the destroy() method. Alternatively, you can specify a custom destruction method in the bean configuration.\n  Understanding these stages ensures proper initialization and cleanup of Spring beans, facilitating efficient and well-managed Spring applications.\n👉 Learn more about bean lifecycle in our article Hooking Into the Spring Bean Lifecycle .\n Managing Spring Bean Lifecycle with @Bean In the context of the Employee Management System, let us learn about managing the lifecycle of a Spring bean using @Bean methods and related annotations.\n1. Custom Initialization and Destruction Methods:\npublic class Employee { // Bean properties and methods  public void init() { // Custom initialization logic  } public void destroy() { // Custom cleanup logic  } } In our configuration class:\n@Configuration public class JavaConfigAppConfiguration { @Bean(name = \u0026#34;newEmployee\u0026#34;, initMethod = \u0026#34;init\u0026#34;, destroyMethod = \u0026#34;destroy\u0026#34;) @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) @Cacheable(cacheNames = \u0026#34;employeesCache\u0026#34;) public Employee newEmployee(final String firstName, final String lastName) { return Employee.builder().firstName(firstName).lastName(lastName).build(); } } In this example, the newEmployee() method creates an Employee bean. The initMethod attribute specifies a custom initialization method, and the destroyMethod attribute defines a custom cleanup method. 
When the bean is created and destroyed, these methods are invoked, allowing you to handle specific lifecycle events.\n2. Implementing InitializingBean and DisposableBean Interfaces:\npublic class Department implements InitializingBean, DisposableBean { // Bean properties and methods  @Override public void afterPropertiesSet() throws Exception { // Initialization logic  } @Override public void destroy() throws Exception { // Cleanup logic  } } In our configuration class, you can create the bean as usual:\n@Configuration public class JavaConfigAppConfiguration { @Bean @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) public Department newDepartment(final String deptName) { final Department department = Department.builder().name(deptName).build(); acme().addDepartment(department); return department; } } In this approach, the Department class implements the InitializingBean and DisposableBean interfaces, providing afterPropertiesSet() for initialization and destroy() for cleanup. Spring automatically detects these interfaces and calls the appropriate methods during bean lifecycle stages.\nInterface InitializingBean is implemented by beans that need to react once all their properties have been set by a BeanFactory: for example, to perform custom initialization, or merely to check that all mandatory properties have been set.\nInterface DisposableBean is implemented by beans that want to release resources on destruction. A BeanFactory is supposed to invoke the destroy method if it disposes a cached singleton. An application context is supposed to dispose all of its singletons on close.\nThese examples demonstrate how @Bean methods, along with custom initialization and destruction methods or interfaces like InitializingBean and DisposableBean, enable precise control over the lifecycle of Spring beans within our Employee Management System.\nSpecifying Bean Scope The @Scope annotation allows you to define the scope of the bean. 
For instance, @Scope(\u0026quot;prototype\u0026quot;) indicates a new instance of the bean for every request, while the default scope is Singleton, creating a single bean instance per Spring IoC container.\n@Configuration public class EmployeeConfig { @Bean @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) public Employee employee() { return new Employee(); } } Summary of Spring Bean Scopes In Spring, bean scope defines the lifecycle and visibility of a bean instance within the Spring IoC container. Choosing the right scope ensures appropriate management and usage of beans in our application. Here’s a summary of common Spring bean scopes:\n  Singleton Scope: Beans in Singleton scope have a single instance per Spring IoC container. It\u0026rsquo;s the default scope. Singleton beans are created once and shared across the application. Use this scope for stateless beans and heavy objects to conserve resources.\n  Prototype Scope: Beans in Prototype scope have a new instance every time they are requested. Each request for a prototype bean creates a new object. Use this scope for stateful beans or when you need a new instance every time.\n  Request Scope: Beans in Request scope have a single instance per HTTP request. This scope is specific to web applications. Each HTTP request gets a new instance of the bean. Use this for stateful beans in a web context.\n  Session Scope: Beans in Session scope have a single instance per HTTP session. Similar to request scope but persists across multiple requests within the same session. Use this for user-specific stateful beans in a web context.\n  Application Scope: Beans in Application scope have a single instance per ServletContext. This means a single instance shared across all users/sessions in a web application. Use this for global, stateless beans that should be shared by all users.\n  Custom Scope: Besides the standard scopes, Spring allows you to create custom scopes tailored to specific requirements. 
Custom scopes can range from narrow scopes like thread scope to broader scopes based on business logic.\n  Choosing the appropriate scope depends on the specific use case and requirements of our beans. Understanding these scopes ensures that our beans are managed effectively, optimizing resource utilization and ensuring the correct behavior of our Spring application.\n Overriding Default Bean Name By default, the bean name is the same as the method name. However, you can specify a custom name using the name attribute in the @Bean annotation.\n@Configuration public class JavaConfigAppConfiguration { @Bean(name = \u0026#34;founder\u0026#34;) @Scope(ConfigurableBeanFactory.SCOPE_SINGLETON) public Employee founderEmployee() { final Employee founder = newEmployee(\u0026#34;Scott\u0026#34;, \u0026#34;Tiger\u0026#34;); founder.setDesignation(\u0026#34;Founder\u0026#34;); founder.setDepartment(coreDepartment()); return founder; } } In this example, the bean is named founder, allowing specific identification within the application context.\nThe @Qualifier annotation is used to resolve the ambiguity that arises when multiple beans of the same type are present in the Spring container. We can use it to define the name of the bean we want to inject. In this case, we want to inject the bean with the name founder:\npublic class EmployeeController { private final Employee founder; public EmployeeController(@Qualifier(value = \u0026#34;founder\u0026#34;) Employee employee){ this.founder = employee; } } There are two methods in JavaConfigAppConfiguration that configure an Employee bean. We need to use the qualifier to tell Spring which bean we are trying to wire.\nUnderstanding @Bean and its related concepts is pivotal in Spring configuration. 
It not only creates instances but also allows fine-grained control over their lifecycle, scope, and naming conventions, empowering our Employee Management System with Spring-managed components.\nUnderstanding @Configuration @Configuration is an annotation that marks a class as a source of bean definitions. It allows you to create beans using @Bean methods within the class. In our Employee Management System, @Configuration organizes the creation of beans, providing a centralized configuration hub.\n@Configuration public class JavaConfigAppConfiguration { @Bean public Employee employee() { return new Employee(); } @Bean public Department salesDepartment() { return new Department(); } } With @Configuration, we encapsulate our bean definitions, promoting modularity and enhancing maintainability. Spring ensures that these beans are available for injection wherever needed.\nIn the above example, we tell Spring to add a bean of type Employee and a bean of type Department to the container. We can then inject those beans into other beans if needed.\nBuilding Maintainable Applications Maintainability is a key consideration while developing Spring applications. As applications grow, it becomes essential to structure the configuration in a way that\u0026rsquo;s modular and adaptable. Java-based configuration in Spring provides a robust solution, enabling the development of maintainable applications through modular configurations.\nOrganise Configuration Logic  Java Classes as Configuration Units: With Java-based configuration, each Java class can encapsulate a specific set of configuration concerns. For instance, one class can handle data source configuration, another can manage security, and yet another can configure caching mechanisms. Encapsulation and Cohesion: Each class focuses on a particular aspect of the application\u0026rsquo;s configuration, promoting encapsulation and ensuring high cohesion, making the codebase more comprehensible and maintainable.  
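To make the idea of one class per configuration concern concrete, here is a minimal plain-Java sketch. The class and method names are hypothetical, and the Spring annotations are shown only as comments so that the snippet runs without any framework on the classpath; in a real application these would be @Configuration classes with @Bean methods.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each class encapsulates exactly one configuration
// concern, mirroring how separate @Configuration classes would each define
// the beans for their own area of the application.
class DatabaseConfiguration {
    // in Spring this would be a @Bean method
    String dataSourceUrl() {
        return "jdbc:h2:mem:employees";
    }
}

class AppCacheConfiguration {
    // in Spring this would be a @Bean method
    Map<String, Object> employeesCache() {
        return new HashMap<>();
    }
}

class InfrastructureSketch {
    public static void main(String[] args) {
        // Composing the two concerns in one place, analogous to @Import
        DatabaseConfiguration db = new DatabaseConfiguration();
        AppCacheConfiguration cache = new AppCacheConfiguration();
        System.out.println(db.dataSourceUrl());
        System.out.println(cache.employeesCache().isEmpty());
    }
}
```

Because each concern lives in its own class, a test can exercise the database configuration without ever touching the cache configuration, which is exactly the isolation benefit described above.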
Reuse and Compose Configurations  Reusable Configurations: Java configuration classes are highly reusable. A configuration class developed for one module can often be employed in other parts of the application or even in different projects, fostering a culture of code reuse. Composing Configurations: By composing multiple configuration classes together, developers can create complex configurations from simpler, well-defined building blocks. This composition simplifies management and promotes a modular architecture.  @Configuration @Import({DatabaseConfiguration.class, AppCacheConfiguration.class, SecurityConfiguration.class}) public class InfrastructureConfiguration { } In this example:\n@Import Annotation: The InfrastructureConfiguration class uses the @Import annotation to import the configuration classes of individual modules. This annotation allows us to compose configurations by combining multiple configuration classes into a single configuration class.\nBy importing the SecurityConfiguration, DatabaseConfiguration, and AppCacheConfiguration classes into the InfrastructureConfiguration, we create a unified configuration that encompasses all the specific settings for authentication, database interactions, and caching.\nYou can see it in action by running the testImportedConfig() unit test in the example code shared on GitHub.\nTesting and Unit Testing Advantages  Isolated Testing: Each configuration class can be unit-tested in isolation, ensuring that specific functionalities are correctly configured. Mocking Dependencies: In testing, dependencies can be easily mocked, enabling focused testing of individual configuration components without relying on complex setups.  Clear Hierarchical Structure  Hierarchical Structure: Java configuration fosters a clear hierarchical structure within the application, where configurations can be organized based on layers, modules, or features. 
Enhanced Readability: The hierarchical arrangement enhances the readability of the configuration code, making it easier for us to navigate and understand the application\u0026rsquo;s structure.  Java-based configuration in Spring empowers developers to create maintainable applications. By embracing modular configurations, we can build flexible, adaptable systems that respond effectively to changing requirements.\nAutomatic Java Configuration In the world of Spring Boot, @EnableAutoConfiguration is a powerful annotation that simplifies the process of configuring your Spring application. When you annotate your main application class with @EnableAutoConfiguration, you are essentially telling Spring Boot to automatically configure your application based on the libraries, dependencies, and components it finds in the classpath.\nAutomatic Configuration Using @EnableAutoConfiguration  Dependency Analysis: Spring Boot analyzes your classpath to identify the libraries and components you are using. Smart Defaults: It then automatically configures beans, components, and other settings, providing sensible default behaviors.  Spring Boot automatically configures essential components like the web server, database connection, and more based on your classpath and the libraries you include.\nExample: Minimal Application with Auto-Configuration:\n@SpringBootApplication public class JavaConfigApplication { } In this example, @SpringBootApplication includes @EnableAutoConfiguration. This annotation signals Spring Boot to configure the application automatically, setting up defaults based on the classpath.\nCustomize and Override Auto Configurations  Selective Overrides: While Spring Boot provides auto-configuration, you can still customize and override these configurations as needed. Properties-Based Tuning: Properties in application.properties or application.yml can fine-tune auto-configured settings.  You can customize auto-configurations using properties. 
For instance, you can disable a specific auto-configuration by setting a property.\nExample: Disabling a Specific Auto-Configuration:\n# application.properties spring.autoconfigure.exclude=\\ org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration In this example, DataSourceAutoConfiguration is explicitly excluded, meaning Spring Boot won\u0026rsquo;t configure a datasource bean by default.\nThe @OverrideAutoConfiguration annotation can be used to override @EnableAutoConfiguration. It is often used in combination with @ImportAutoConfiguration to limit the auto-configuration classes that are loaded. It is also useful when we run tests and want to control the auto-configured beans.\nSee the following example:\n@RunWith(SpringRunner.class) @SpringBootTest(classes = Application.class, webEnvironment = WebEnvironment.DEFINED_PORT) @ActiveProfiles(\u0026#34;test\u0026#34;) @OverrideAutoConfiguration(enabled = false) public class ExcludeAutoConfigIntegrationTest { } Simplified Application Setup  Reduced Boilerplate: @EnableAutoConfiguration drastically reduces boilerplate code, eliminating the need for explicit configuration for many common scenarios. Rapid Prototyping: It allows developers to quickly prototype and build applications without getting bogged down in intricate configuration details.  With auto-configuration, you can rapidly prototype applications without extensive configuration.\nExample: Creating a REST Controller:\nimport org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RestController; @RestController public class MyController { @GetMapping(\u0026#34;/hello\u0026#34;) public String hello() { return \u0026#34;Hello, Spring Boot!\u0026#34;; } } In this example, a simple REST endpoint is created without explicit configuration. 
Spring Boot\u0026rsquo;s auto-configuration handles the setup, allowing developers to focus on the business logic.\nSpring Boot\u0026rsquo;s auto-configuration simplifies @RestController development. It handles HTTP message conversion, request mapping, and exception handling. Developers can focus on business logic while Spring Boot manages content negotiation and routing, streamlining RESTful API creation with sensible defaults and minimizing manual configuration.\nPerform Conditional Configuration  Conditional Loading: Auto-configuration is conditional; it\u0026rsquo;s applied only if certain conditions are met. @ConditionalOnClass and @ConditionalOnProperty: Conditions can be based on the presence of specific classes or properties, giving you fine-grained control.  Auto-configurations are conditionally applied based on specific conditions.\nExample: Conditional Bean Creation:\nimport org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Conditional; import org.springframework.context.annotation.Configuration; @Configuration public class MyConfiguration { @Bean @Conditional(MyCondition.class) public MyBean myBean() { return new MyBean(); } } In this example, MyBean is created only if the condition specified by MyCondition is met. Conditional annotations are integral to Spring Boot\u0026rsquo;s auto-configuration mechanism.\nSimplified Application Development  Faster Development: With auto-configuration, developers can focus more on business logic and features, accelerating development cycles. Convention Over Configuration: It follows the convention over configuration principle, encouraging consistency across Spring Boot applications.  
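Returning to conditional configuration for a moment: the gating behavior that @Conditional provides can be illustrated with a small framework-free Java sketch. The names here are hypothetical; in real Spring the decision is made by a Condition implementation that the container consults before registering the bean definition.

```java
import java.util.Optional;
import java.util.function.BooleanSupplier;
import java.util.function.Supplier;

// Framework-free sketch of conditional bean creation: the factory is only
// invoked when the condition holds, mirroring how Spring evaluates a
// condition before registering (and ever instantiating) a bean.
class ConditionalSketch {
    static <T> Optional<T> createIf(BooleanSupplier condition, Supplier<T> factory) {
        return condition.getAsBoolean() ? Optional.of(factory.get()) : Optional.empty();
    }

    public static void main(String[] args) {
        // Condition met: the "bean" is created
        Optional<String> present = createIf(() -> true, () -> "myBean");
        // Condition not met: the factory is never called, so no instance exists
        Optional<String> absent = createIf(() -> false, () -> "myBean");
        System.out.println(present.isPresent()); // true
        System.out.println(absent.isPresent());  // false
    }
}
```

The key property the sketch demonstrates is laziness: when the condition fails, the supplier never runs, just as a conditionally excluded bean is never constructed by the container.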
Auto-configuration simplifies the development process and promotes convention over configuration.\nExample: Spring Boot Starter Usage:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-web\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; In this example, including the spring-boot-starter-web dependency automatically configures the application for web development. Spring Boot starters encapsulate auto-configurations, providing an intuitive way to include essential dependencies.\nAuto Configured Spring Boot Starter Packs  Starter Packs: Spring Boot Starter Packs are built around auto-configuration, bundling essential dependencies for specific use cases (e.g., web, data, security). Simplified Integration: Starters handle complex integration details, allowing developers to seamlessly integrate technologies like databases, messaging systems, and security frameworks. Starters handle complex integrations, ensuring seamless setup.  Example: Using Spring Boot Starter for JPA:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-data-jpa\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Including spring-boot-starter-data-jpa simplifies Java Persistence API (JPA) integration. Spring Boot\u0026rsquo;s auto-configuration manages JPA entity managers, data sources, and transaction management.\nConclusion: Effortless Spring Boot Applications  Boosted Productivity: @EnableAutoConfiguration is at the heart of Spring Boot\u0026rsquo;s philosophy, boosting developer productivity by simplifying setup and reducing configuration overhead. Maintenance Ease: Applications configured with auto-configuration are easier to maintain and update, ensuring compatibility with evolving libraries and technologies.  
In essence, @EnableAutoConfiguration encapsulates the spirit of Spring Boot—making Java configuration easier, more streamlined, and immensely developer-friendly. By leveraging this annotation, developers can focus on crafting robust applications while Spring Boot takes care of the intricate details under the hood.\nSummary In the realm of Spring, mastering Java-based configuration is a gateway to flexible and maintainable applications. Through our journey, we\u0026rsquo;ve unlocked the power of @Configuration, @Bean, and @PropertySource, seamlessly integrating external properties and enhancing modularity.\nNext Steps As you embark on your coding odyssey, delve deeper into Spring\u0026rsquo;s official documentation. Explore frameworks and libraries, implement what you\u0026rsquo;ve learned, and test your creations. Experience the joy of transforming concepts into working programs.\n Remember, each line of code is a step towards mastery.\n 🚀 Happy coding!\nHere is an action-packed plan for you:\n Explore Spring\u0026rsquo;s Official Documentation: Dive into Spring\u0026rsquo;s official documentation to grasp core concepts and best practices. Experiment with Various Frameworks: Experiment with Spring Boot and other Spring frameworks to understand their nuances. Master Libraries and Integrations: Explore libraries like Jasypt and integrations like Hibernate and Ehcache for a holistic understanding. Implement Java-Based Configurations: Practice creating @Configuration classes, defining beans with @Bean, and integrating external properties with @PropertySource. Write Working Programs: Apply concepts learned by writing small, working programs to solidify your understanding. Test Your Applications: Embrace testing methodologies like JUnit and AssertJ to validate your Java-based configurations and ensure reliability. Understand Error Handling: Delve into error handling techniques to fortify your applications against unforeseen issues. 
Explore Declarative Approaches: Understand declarative programming paradigms, like Spring\u0026rsquo;s annotations, for concise and readable code. Participate in Community Forums: Engage with the developer community on forums and platforms to learn from real-world experiences and challenges. Continuous Learning and Practice: Keep the momentum going. Stay updated, practice regularly, and challenge yourself with complex scenarios to hone your skills.  ","date":"November 18, 2023","image":"https://reflectoring.io/images/stock/0013-switchboard-1200x628-branded_hu4e75c8ecd0e5246b9132ae3e09f147a6_167298_650x0_resize_q90_box.jpg","permalink":"/beginner-friendly-guide-to-spring-java-config/","title":"All You Need to Know about Spring's Java Config"},{"categories":["Kotlin"],"contents":"Web application development is a critical domain for businesses and developers. Building web applications that are efficient, scalable, and easy to maintain is a challenging task. Enter Ktor, a powerful, asynchronous, and lightweight framework for building web applications and APIs using the Kotlin programming language. Ktor offers a modern approach to web development that has gained significant popularity in recent years.\nWhat is Ktor? Ktor is an open-source framework developed by JetBrains. It is designed to build asynchronous, non-blocking, and high-performance web applications and APIs. What sets Ktor apart is that it is entirely written in Kotlin, which means it leverages Kotlin\u0026rsquo;s expressive and concise syntax while providing all the tools necessary for modern web development.\nKey Features of Ktor Asynchronous and Non-blocking Ktor is asynchronous and non-blocking in nature. It allows applications to handle multiple requests simultaneously, making it a perfect choice for high-traffic applications. This is achieved by leveraging Kotlin\u0026rsquo;s coroutines, which simplifies writing asynchronous code, making it more readable and maintainable. 
Standard blocking applications can also handle multiple requests simultaneously; however, they don\u0026rsquo;t do it as efficiently as non-blocking applications.\nLightweight It provides only the essentials for web development, allowing developers to add components as needed. This minimalist approach results in faster start-up times, lower resource consumption, and more control over the application\u0026rsquo;s architecture.\nRouting Ktor provides a simple yet powerful routing system. Developers can define routes and handle HTTP requests with ease. This is done by specifying the HTTP method, URL path, and a corresponding handler function. The routing system is flexible and can be easily extended to support complex routing scenarios.\nExtensible Developers can integrate additional features and plugins to meet specific requirements. These plugins range from authentication and serialization to database connections, making Ktor a versatile choice for a wide range of web applications.\nKotlin-Native Support Ktor also offers Kotlin-Native support, enabling developers to build web applications that can run on different platforms, including iOS and Android. This versatility makes it a fantastic choice for projects with a mobile component.\nThe Ktor Architecture Ktor follows the concept of Application, Routing, and Call, making it a natural fit for building RESTful web services.\nLet\u0026rsquo;s take a closer look at these components:\nApplication The Application is the top-level component in a Ktor application. It is responsible for managing the entire application\u0026rsquo;s lifecycle, including starting and stopping the server. An application can have multiple modules and plugins that define different parts of the application\u0026rsquo;s behavior.\nRouting Routing is a crucial aspect of any web framework, and Ktor handles it elegantly. Routes define how HTTP requests are processed and which code should be executed for specific endpoints. 
Developers can create complex routing structures that match HTTP methods and URL patterns, making it easy to define the behavior of your application.\nCall A Call represents a single HTTP request and response. It contains all the necessary information about the request, such as headers, parameters, and the request body.\nSetting up a Ktor Application Setting up a Ktor application involves creating a basic project structure, configuring dependencies, and writing our application code.\nBefore setting up a Ktor application, we need to ensure that we have Kotlin and a build tool such as Gradle or Maven installed.\nTo create a new Kotlin project, we\u0026rsquo;ll use the following command:\ngradle init --type kotlin-application To add Ktor dependencies to our project, we\u0026rsquo;re required to add the following in our build.gradle.kts file:\ndependencies { implementation(\u0026#34;io.ktor:ktor-server-netty:1.6.4\u0026#34;) implementation(\u0026#34;io.ktor:ktor-gson:1.6.4\u0026#34;) } Write Ktor Application Code To write Ktor application code, we simply create a Kotlin file such as Application.kt and define our code as:\nfun Application.module() { install(ContentNegotiation) { jackson { } } install(StatusPages) { exception\u0026lt;Throwable\u0026gt; { cause -\u0026gt; call.respond(HttpStatusCode.InternalServerError, cause.localizedMessage) } } routing { get(\u0026#34;/\u0026#34;) { call.respond(\u0026#34;Hello, Ktor!\u0026#34;) } } } fun main() { embeddedServer(Netty, port = 8080, module = Application::module).start(wait = true) } To run this code, we simply use the gradle run command which will start an embedded Netty server on port 8080. 
Once we access our Ktor application at http://localhost:8080, we\u0026rsquo;re going to see the \u0026quot;Hello, Ktor!\u0026quot; message on our webpage.\nRouting in Ktor Routing in the context of web development refers to the process of determining how an HTTP request should be handled and which code or logic should be executed based on the request\u0026rsquo;s URL path and HTTP method (GET, POST, PUT, DELETE, etc.).\nHere\u0026rsquo;s an overview of how routing works in Ktor:\nRoute Definition In Ktor, routes are defined using a declarative style. We specify the HTTP method (e.g., GET, POST) and the URL path to match.\nFor example, we might want to define a route for handling GET requests to the root path (\u0026quot;/\u0026quot;).\nrouting { get(\u0026#34;/\u0026#34;) { // Handle GET request to the root path  } } In this example, the get(\u0026quot;/\u0026quot;) block defines a route that matches GET requests to the root path.\nHandler Function Inside the route definition, we provide a handler function that contains the code to execute when the route is matched. This function typically takes a call parameter, which represents the current HTTP request and response. We can access request parameters, headers, and other data from the call object and send an appropriate response.\nrouting { get(\u0026#34;/\u0026#34;) { call.respondText(\u0026#34;Hello, Ktor!\u0026#34;) } } In this case, the call.respondText function generates a simple text response, \u0026ldquo;Hello, Ktor!\u0026rdquo;\nRoute Hierarchy Ktor allows us to create complex route hierarchies by nesting routes. This is useful for organizing our application\u0026rsquo;s routes logically. 
For instance, we can group related routes together under a common parent route.\nrouting { route(\u0026#34;/api\u0026#34;) { get(\u0026#34;/users\u0026#34;) { // Handle GET request for /api/users  } post(\u0026#34;/users\u0026#34;) { // Handle POST request for /api/users  } } } Here, the /api route contains sub-routes for different user-related actions.\nDynamic Routing Ktor supports dynamic routing by defining route segments that can vary. For example, we can define a route segment as a variable, which allows us to extract values from the URL path and use them in our logic.\nrouting { get(\u0026#34;/user/{id}\u0026#34;) { val userId = call.parameters[\u0026#34;id\u0026#34;] // Use the userId in our logic  } } In this case, the {id} segment is a variable, and we can access the value of id using call.parameters[\u0026ldquo;id\u0026rdquo;].\nRoute Conditions Ktor allows us to set conditions on routes. For instance, we can specify that a route should only match if certain criteria are met, such as checking for specific request headers or parameters.\nrouting { get(\u0026#34;/admin\u0026#34;) { header(\u0026#34;Authorization\u0026#34;) { // Handle GET request to /admin only if the Authorization header is present  } } } This route would only be matched if the Authorization header is present in the request.\nAdding Controllers in Ktor Let\u0026rsquo;s see how we can handle multiple HTTP action requests such as POST, DELETE and GET in Ktor.\nrouting { route(\u0026#34;/blog\u0026#34;) { val blogPosts = mutableListOf\u0026lt;BlogPost\u0026gt;() post { val post = call.receive\u0026lt;BlogPost\u0026gt;() post.id = blogPosts.size blogPosts.add(post) call.respond(\u0026#34;Blog Post Added\u0026#34;) } delete(\u0026#34;/{id}\u0026#34;) { val id = call.parameters[\u0026#34;id\u0026#34;]?.toIntOrNull() if (id != null \u0026amp;\u0026amp; id \u0026gt;= 0 \u0026amp;\u0026amp; id \u0026lt; blogPosts.size) { val deletedPost = blogPosts.removeAt(id) call.respond(deletedPost) } else { 
call.respond(\u0026#34;Invalid ID\u0026#34;) } } get(\u0026#34;/{id}\u0026#34;) { val id = call.parameters[\u0026#34;id\u0026#34;]?.toIntOrNull() if (id != null \u0026amp;\u0026amp; id \u0026gt;= 0 \u0026amp;\u0026amp; id \u0026lt; blogPosts.size) { call.respond(blogPosts[id]) } else { call.respond(\u0026#34;Post not found\u0026#34;) } } get { call.respond(blogPosts) } } } In this code, we set up routes which allow the following operations:\nPOST /blog: This route handles HTTP POST requests to create a new blog post. It expects a JSON payload representing a BlogPost, assigns an ID to the post based on its position in the list, adds it to the list of blogPosts, and responds with \u0026ldquo;Blog Post Added.\u0026rdquo;\nDELETE /blog/{id}: This route handles HTTP DELETE requests to delete a blog post by its ID. It extracts the post ID from the URL, checks if it\u0026rsquo;s valid, and if so, removes the corresponding post from the blogPosts list and responds with the deleted post. If the ID is invalid, it responds with \u0026ldquo;Invalid ID.\u0026rdquo;\nGET /blog/{id}: This route handles HTTP GET requests to retrieve a specific blog post by its ID. It extracts the post ID from the URL, checks if it\u0026rsquo;s valid, and if so, responds with the blog post. If the ID is invalid, it responds with \u0026ldquo;Post not found.\u0026rdquo;\nGET /blog: This route handles HTTP GET requests to retrieve a list of all blog posts. 
It responds with a JSON array containing all the blog posts stored in the blogPosts list.\nKtor vs Other Web Frameworks When comparing Ktor to other web frameworks, it\u0026rsquo;s important to consider the specific requirements and characteristics of our project, as well as our familiarity with the programming language.\nHere\u0026rsquo;s a comparison of Ktor with some other popular web frameworks:\nSpring Boot (Java) Language: Spring Boot is based on Java, while Ktor is built on Kotlin, a more modern and concise language.\nLearning Curve: Spring Boot has a steeper learning curve, especially for beginners, due to its extensive ecosystem and configuration.\nCommunity and Ecosystem: Spring Boot has a large and mature ecosystem with a wide range of libraries and tools. Ktor, being newer, has a smaller but growing community.\nUse Cases: Spring Boot is suitable for large enterprise applications and has extensive support for various enterprise features. Ktor is lightweight and well-suited for microservices and smaller web applications.\nExpress.js (Node.js) Language: Express.js is based on JavaScript/Node.js, while Ktor uses Kotlin.\nConcurrency Model: Ktor provides native support for asynchronous and coroutine-based programming, making it suitable for highly concurrent applications.\nPerformance: Ktor can provide better performance in CPU-bound and I/O-bound tasks due to Kotlin\u0026rsquo;s efficient concurrency model.\nUse Cases: Both can be used for web applications, but Ktor may be a better choice for Kotlin-centric projects or those requiring strong concurrency support.\nDjango (Python) Language: Django is written in Python, whereas Ktor uses Kotlin.\nDevelopment Speed: Django is known for its rapid development capabilities, offering a lot of built-in features. 
Ktor provides more flexibility but may require more code for certain features.\nConclusion In this article, we went through the Ktor framework and learned how we can set it in our project, its key features, how we can write the Ktor application code and various Controllers. We finally compared Ktor to other web frameworks such as Django and Spring Boot.\n","date":"November 18, 2023","image":"https://reflectoring.io/images/stock/0118-keyboard-1200x628-branded_huf25a9b6a90140c9cfeb91e792ab94429_105919_650x0_resize_q90_box.jpg","permalink":"/introduction-to-ktor/","title":"Introduction to Ktor"},{"categories":["Kotlin"],"contents":"Mocking in software development is a technique used to simulate the behavior of external dependencies or components within a system during testing. This approach allows developers to isolate the code under test, controlling the inputs and outputs of these dependencies without invoking the actual components. Mock objects or functions are employed to mimic the expected behavior of real components, ensuring that the focus of the test remains solely on the specific code being examined. This is particularly valuable when working with complex, slow, or unreliable external dependencies.\nMoreover, it\u0026rsquo;s important to introduce Mockk, a specific mocking framework commonly used in Kotlin. Mockk is a robust and flexible mocking library that simplifies the creation and configuration of mock objects, making it easier to isolate the code under test and control interactions during testing. Widely embraced in the Kotlin community, Mockk\u0026rsquo;s user-friendly syntax and powerful mocking capabilities make it a valuable tool for test-driven development and guaranteeing the reliability and correctness of Kotlin applications.\nThe Importance of Effective Testing Bug Detection: Testing helps us to identify and catch bugs and issues in our software early in the development process. 
By isolating and controlling dependencies through mocking, developers can thoroughly test different scenarios and uncover potential problems.\nRegression Prevention: Testing helps us to prevent regressions, where new code changes inadvertently introduce issues in existing functionality. By having a comprehensive suite of tests, we can ensure that existing features continue to work as expected.\nDocumentation: Tests serve as documentation for the expected behavior of the software. They provide clear examples of how different parts of the system should function, making it easier for developers to understand and maintain the codebase.\nRefactoring and Continuous Integration: Effective testing enables developers to confidently refactor code and make improvements without the fear of breaking existing functionality. It also supports continuous integration and deployment practices by ensuring that changes don\u0026rsquo;t introduce defects into the production environment.\nQuality Assurance: Testing and mocking contribute to delivering higher-quality software by reducing the likelihood of defects reaching the end users, which can lead to improved user satisfaction and trust.\nMockk Installation To install the Mockk library in our project, we usually add the following dependencies.\nUsing Gradle Inside the dependencies block, add the following line to include the MockK library as a dependency:\ndependencies { testImplementation \u0026#34;io.mockk:mockk:1.12.0\u0026#34; } Make sure to replace 1.12.0 with the latest version of MockK.\nUsing Maven Inside our pom.xml file, add the following XML to include the MockK library as a dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.mockk\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;mockk\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.12.0\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; Testing With Mockk In this 
example, we\u0026rsquo;ll use a simple class Calculator that depends on a MathService, which we will mock using MockK:\ninterface MathService { fun add(a: Int, b: Int): Int } class Calculator(private val mathService: MathService) { fun addTwoNumbers(a: Int, b: Int): Int { return mathService.add(a, b) } } class CalculatorTest { @Test fun testAddTwoNumbers() { val mathService = mockk\u0026lt;MathService\u0026gt;() every { mathService.add(5, 3) } returns 8 val calculator = Calculator(mathService) val result = calculator.addTwoNumbers(5, 3) verify { mathService.add(5, 3) } assert(result == 8) } } We\u0026rsquo;re testing the addTwoNumbers() method of the Calculator class, which calls the add() method of the MathService. We use MockK to create a mock MathService and configure its behavior to return a specific value when the add() method is called. The test verifies that the add() method was called as expected and asserts the result of the addTwoNumbers() method.\nevery is a function provided by MockK that sets up an expectation for a specific method call on a mock object (mathService in this case). It specifies that when the add method of mathService is called with arguments 5 and 3, it should return 8. This configuration sets the expected behavior of the mock object.\nThe verify function is used to ensure that a specific method call on a mock object occurred. In this case, it checks if the add() method of mathService was called with arguments 5 and 3. If the method was called, the test will pass; otherwise, it will fail.\nIn summary, the every keyword is used to set up the expected behavior of a mock object, specifying what it should return when certain methods are called. The verify keyword, on the other hand, is used to check whether specific method calls on the mock object have occurred during the test. 
We\u0026rsquo;re going to discuss more of these MockK keywords further below.\nMockk Annotations The MockK library provides annotations to simplify the process of creating and managing mock objects in Kotlin. These annotations are particularly helpful when writing unit tests. Here are some key MockK annotations:\n@MockK This annotation is used to declare a property as a mock object. It\u0026rsquo;s typically applied to a property that represents a dependency or collaborator we want to mock.\n@MockK lateinit var mathService: MathService Remember that the property needs to be initialized, typically by calling MockKAnnotations.init(this) in our test setup.\n@RelaxedMockK This annotation is similar to @MockK, but it creates a relaxed mock, which means that by default, the relaxed mock won\u0026rsquo;t throw exceptions if we call methods that haven\u0026rsquo;t been specifically stubbed. This can be useful for testing when we\u0026rsquo;re not concerned about verifying interactions.\n@RelaxedMockK lateinit var relaxedService: SomeService @SpyK The @SpyK annotation is used to create a partial mock, allowing us to use real implementations for some methods of a class while mocking others.\n@SpyK val calculator = Calculator() @UnmockK This annotation is used to unmock a property or object that was previously declared as a mock using @MockK or similar annotations. This is useful when we need to revert a mock back to its original behavior.\n@UnmockK lateinit var unmockedService: SomeService MockK Keywords When using the MockK library for mocking and verifying interactions in Kotlin tests, there are several essential keywords and functions we should be familiar with. Here are some of the most commonly used keywords and functions in MockK:\nmockk(): Creates a mock object of a given class or interface.\nevery{}: Defines a behavior for mock object methods. We can specify what a method should return when invoked.\njustRun{}: Defines a behavior for a method without returning a value. Useful for methods with a Unit return type.\nslot{}: Captures arguments passed to a mocked method. We can use this to verify arguments later.\nverify{}: Verifies that a method was called with specific arguments and a certain number of times.\natLeast(), atMost(), exactly(): Used with verify to specify the number of times a method should be called.\nverifyOrder{}: Verifies the order in which methods were called on mock objects.\nconfirmVerified(): Ensures that all interactions with the mock have been verified. This is useful to prevent false positives in our tests.\nclearMocks(): Used to reset the verification state of one or more mock objects.\nunmockkAll(): Used to unmock all the mock objects created with the mockk() function.\nLet\u0026rsquo;s take a look at code examples of each of these MockK keywords:\nmockk() @Test fun mockkExample() { val mock = mockk\u0026lt;MyClass\u0026gt;() } every @Test fun everyExample() { val mock = mockk\u0026lt;MyClass\u0026gt;() every { mock.doSomething() } returns \u0026#34;Mocked result\u0026#34; } justRun @Test fun justRunExample() { val mock = mockk\u0026lt;MyClass\u0026gt;() justRun { mock.doSomething() } } slot @Test fun slotExample() { val mock = mockk\u0026lt;MyClass\u0026gt;() val capturedArg = slot\u0026lt;String\u0026gt;() every { mock.doSomething(capture(capturedArg)) } just Runs // our test code using the mock object and captured arguments } verify @Test fun verifyExample() { val mock = mockk\u0026lt;MyClass\u0026gt;() verify { mock.doSomething(\u0026#34;Specific Argument\u0026#34;) } } atLeast(),atMost(),exactly() @Test fun atLeastAtMostExactlyExamples() { val mock = mockk\u0026lt;MyClass\u0026gt;() verify(atLeast = 2) { mock.doSomething() } verify(atMost = 3) { mock.doSomething() } verify(exactly = 4) { mock.doSomething() } } verifyOrder @Test fun verifyOrderExample() { val mock = 
mockk\u0026lt;MyClass\u0026gt;() verifyOrder { mock.doSomething() mock.anotherMethod() } } confirmVerified() @Test fun confirmVerifiedExample() { val mock = mockk\u0026lt;MyClass\u0026gt;() // our test code calling the mock object  verify { mock.doSomething() } confirmVerified(mock) } clearMocks() @Test fun clearMocksExample() { val mock1 = mockk\u0026lt;MyClass\u0026gt;() val mock2 = mockk\u0026lt;AnotherClass\u0026gt;() clearMocks(mock1, mock2) } unmockkAll() @Test fun unmockkAllExample() { val mock1 = mockk\u0026lt;MyClass\u0026gt;() val mock2 = mockk\u0026lt;AnotherClass\u0026gt;() unmockkAll() } Combining Mockk With Other Testing Libraries Using JUnit Combining JUnit and MockK is a popular approach for testing Kotlin code. JUnit is a widely used testing framework for Java and Kotlin, while MockK is a mocking library specifically designed for Kotlin. Together, they allow us to write comprehensive unit tests for our Kotlin code with mock objects. Here\u0026rsquo;s how we can use JUnit and MockK for testing Kotlin code:\nclass CalculatorTest { private lateinit var calculator: Calculator private lateinit var mathService: MathService @BeforeEach fun setUp() { mathService = mockk() calculator = Calculator(mathService) } @Test fun testAddTwoNumbers() { every { mathService.add(2, 3) } returns 5 val result = calculator.addTwoNumbers(2, 3) //using JUnit assertions  assert(result == 5) } } In this test class, we\u0026rsquo;ve effectively integrated MockK with JUnit to create a testing environment.\nHere\u0026rsquo;s a breakdown of the key points in this integration:\n@BeforeEach\nThis annotation is provided by JUnit and marks a method (setUp() in this case) that is executed before each test method within the test class. In the setUp() method, we initialize the mathService as a mock and create an instance of the Calculator class, setting the stage for the test.\n@Test\nAnother JUnit annotation, @Test marks a method as a test case. 
In the testAddTwoNumbers() method, we define the expected behavior of the mathService using MockK\u0026rsquo;s every function, stating that when mathService.add(2, 3) is called, it should return 5.\nassert(result == 5)\nHere, we are using JUnit\u0026rsquo;s assertion to check whether the result of calculator.addTwoNumbers(2, 3) matches the expected value, which is 5. If the assertion fails, the test will fail.\nThis combination of JUnit and MockK provides a clear and effective way to structure and run unit tests. JUnit handles the test lifecycle and assertions, while MockK facilitates mocking and defining expected behavior, ensuring that the code under test behaves as intended during the test. This integration streamlines the testing process and helps ensure the correctness of our code.\nUsing Spek Let\u0026rsquo;s take a look at how we can combine MockK with Spek to test the Calculator class.\nHere\u0026rsquo;s how to do it:\nclass CalculatorSpec : Spek({ val mathService by memoized { mockk\u0026lt;MathService\u0026gt;() } val calculator by memoized { Calculator(mathService) } describe(\u0026#34;Calculator\u0026#34;) { it(\u0026#34;should add two numbers correctly\u0026#34;) { every { mathService.add(2, 3) } returns 5 val result = calculator.addTwoNumbers(2, 3) // Verify that the result is as expected  assert(result == 5) } } }) Spek is a behavior-driven development (BDD) testing framework for Kotlin. It provides a way to structure our tests in a natural language format and helps us to organize our test cases into descriptive blocks. It\u0026rsquo;s designed to make our tests more readable and expressive. For example in our code above, we\u0026rsquo;re using Spek to describe the behavior of the Calculator class.\nIn our code above:\nval mathService by memoized { mockk\u0026lt;MathService\u0026gt;() } is used to create a mock object of the MathService class using MockK. 
The memoized feature ensures that the same instance of the mock is reused across all test cases within the same scope.\nval calculator by memoized { Calculator(mathService) } creates a memoized instance of the Calculator class, which we want to test. It takes the mathService mock as a constructor parameter. This setup ensures that the Calculator class uses the mocked mathService during testing.\nThe describe(\u0026quot;Calculator\u0026quot;) { ... } block provided by Spek describes the behavior we want to test.\nThe it(\u0026quot;should add two numbers correctly\u0026quot;) { ... } block defines an individual test case. This specific test case checks whether the addTwoNumbers() method of the Calculator class correctly adds two numbers.\nThe every { mathService.add(2, 3) } returns 5 uses MockK to define the expected behavior of the mathService mock.\nval result = calculator.addTwoNumbers(2, 3) invokes the addTwoNumbers() method of the Calculator class with the given arguments.\nFinally, assert(result == 5) verifies the result of the test. The assert statement checks whether the actual result of calculator.addTwoNumbers(2, 3) is equal to the expected result, which is 5.\nConclusion In this tutorial, we took a look at the MockK library used to test Kotlin code, the various keywords associated with MockK, combining MockK with other testing libraries such as JUnit and Spek, and finally we went through the annotations provided by MockK.\n","date":"November 18, 2023","image":"https://reflectoring.io/images/stock/0096-tools-1200x628-branded_hue8579b2f8c415ef5a524c005489e833a_326215_650x0_resize_q90_box.jpg","permalink":"/introduction-to-Mockk/","title":"Testing with Mockk"},{"categories":["Java"],"contents":"Feign is an open-source Java library that simplifies the process of making web requests. It streamlines the implementation of RESTful web services by providing a higher-level abstraction. 
Feign eliminates the need for boilerplate code, which makes the codebase more readable and maintainable.\nWhat is Feign? Feign is a popular Java HTTP client library that offers several advantages and features, making it a good choice for developers building HTTP-based microservices and applications.\nWhat is a declarative HTTP client? It\u0026rsquo;s a way to make HTTP requests by writing a Java interface. Feign generates the actual implementation behind that interface based on annotations that we provide.\nWhy use Feign? If we have a large set of APIs to call, we don\u0026rsquo;t want to generate the HTTP code by hand or with hard-to-maintain code generation. It would be much easier and more maintainable to describe the API in a simple, small interface and let Feign interpret and implement that interface at runtime.\nWho should use Feign? If we are making HTTP requests in our Java code, and don\u0026rsquo;t want to write boilerplate code, or use libraries like Apache httpclient directly, Feign is a great choice.\n Example Code This article is accompanied by a working code example on GitHub. Creating a Basic Feign Client Step 1: Add Feign Dependency Include Feign library in the Maven pom.xml file as a dependency.\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.github.openfeign\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;feign-core\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;12.5\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Step 2: Define the Client Interface It typically contains the method declarations annotated with Feign annotations.\nWe are going to declare a client interface with a method for each REST endpoint we want to call on a server. These are just declarations. We do not implement those methods. Feign will do that for us. The method signatures should include the HTTP method as well as all required data.\nLet us define an interface to represent calculator service. 
It has simple API methods to perform calculations such as add, subtract, multiply, and divide:\npublic interface CalculatorService { /** * Adds two whole numbers. * * @param firstNumber first whole number * @param secondNumber second whole number * @return sum of two numbers */ @RequestLine(\u0026#34;POST /operations/add?\u0026#34; + \u0026#34;firstNumber={firstNumber}\u0026amp;secondNumber={secondNumber}\u0026#34;) Long add(@Param(\u0026#34;firstNumber\u0026#34;) Long firstNumber, @Param(\u0026#34;secondNumber\u0026#34;) Long secondNumber); /** * Subtracts two whole numbers. * * @param firstNumber first whole number * @param secondNumber second whole number * @return subtraction of two numbers */ @RequestLine(\u0026#34;POST /operations/subtract?\u0026#34; + \u0026#34;firstNumber={firstNumber}\u0026amp;secondNumber={secondNumber}\u0026#34;) Long subtract(@Param(\u0026#34;firstNumber\u0026#34;) Long firstNumber, @Param(\u0026#34;secondNumber\u0026#34;) Long secondNumber); /** * Multiplies two whole numbers. * * @param firstNumber first whole number * @param secondNumber second whole number * @return multiplication of two numbers */ @RequestLine(\u0026#34;POST /operations/multiply?\u0026#34; + \u0026#34;firstNumber={firstNumber}\u0026amp;secondNumber={secondNumber}\u0026#34;) Long multiply(@Param(\u0026#34;firstNumber\u0026#34;) Long firstNumber, @Param(\u0026#34;secondNumber\u0026#34;) Long secondNumber); /** * Divides two whole numbers. * * @param firstNumber first whole number * @param secondNumber second whole number, should not be zero * @return division of two numbers */ @RequestLine(\u0026#34;POST /operations/divide?\u0026#34; + \u0026#34;firstNumber={firstNumber}\u0026amp;secondNumber={secondNumber}\u0026#34;) Long divide(@Param(\u0026#34;firstNumber\u0026#34;) Long firstNumber, @Param(\u0026#34;secondNumber\u0026#34;) Long secondNumber); } @RequestLine defines the HttpMethod and UriTemplate for the request, and @Param defines a template variable. Do not worry. 
We will learn more about the annotations provided by OpenFeign later.\nStep 3: Create a Client Object We use Feign\u0026rsquo;s builder() method to prepare the client:\nfinal CalculatorService target = Feign .builder() .decoder(new JacksonDecoder()) .target(CalculatorService.class, HOST); There are many ways to prepare the client depending on our needs. The code snippet given above is just one of the simple ways to prepare the client. We have registered the decoder used to decode the JSON responses. The decoder can be changed to match the content type of the response returned by the service. We will learn more about decoders later.\nStep 4: Use the Client for API Calls Now let us call the add() method of our client:\nfinal Long result = target.add(firstNumber, secondNumber); We notice that calling a service with the Feign HTTP client is fairly simple compared to other HTTP clients.\nYou can see it in action by running the givenTwoNumbersReturnAddition() unit test in the example code shared on Github.\nNotes on Testing We use WireMock to emulate the service implementation. WireMock is a web service mocking and stubbing tool. It works by emulating a real HTTP server to which the test code can connect as if it were a real online service. It allows for HTTP response stubbing, request verification, proxy/interception, stub recording/playback, and fault injection.\nIt is particularly useful to emulate error scenarios that are difficult to achieve with a real service implementation. With these emulated interactions we can rest assured that when such errors occur, our client error handling logic works as expected.\n Feign Annotations OpenFeign uses a set of annotations for defining HTTP requests and their parameters. Here\u0026rsquo;s a list of commonly used OpenFeign annotations with examples:\n@RequestLine: Specifies the HTTP method and path. Example: @RequestLine(\u0026quot;GET /resource/{id}\u0026quot;)\n@Headers: Specifies HTTP headers for the request. Examples: @Headers(\u0026quot;Authorization: Bearer {token}\u0026quot;), @Headers(\u0026quot;Accept: application/json\u0026quot;)\n@QueryMap: Maps a Map of query parameters to the request. Example: @QueryMap Map\u0026lt;String, Object\u0026gt; queryParams\n@Body: Sends a specific object as the request body. Example: @Body RequestObject requestObject\n@Param: Binds a method argument to a template variable. Example: @Param(\u0026quot;id\u0026quot;) long resourceId\n@Path: Replaces a template variable in the path. Example: @Path(\u0026quot;id\u0026quot;) long resourceId\n@RequestHeader: Adds a header to the request. Example: @RequestHeader(\u0026quot;Authorization\u0026quot;) String authToken\nThese annotations allow us to define and customize an OpenFeign client interface, making it easy to interact with remote services using OpenFeign. We can mix and match these annotations based on our specific API requirements.\nHandling Responses Feign also provides a declarative approach to API integration. Instead of manually writing boilerplate code for handling responses or errors, Feign allows us to define custom handlers and register them with the Feign builder. This not only reduces the amount of code we need to write but also improves readability and maintainability.\nLet us see a decoder example:\nfinal CalculatorService target = Feign.builder() .encoder(new JacksonEncoder()) .decoder(new JacksonDecoder()) .target(CalculatorService.class, HOST); This code snippet demonstrates the creation of a Feign client that uses Jackson for both request encoding and response decoding. Let\u0026rsquo;s break down what these lines do:\n.encoder(new JacksonEncoder()): Here, a JacksonEncoder is set for the Feign client. JacksonEncoder is part of the Feign Jackson module and is used to encode Java objects into JSON format for the HTTP request body. 
This is particularly useful when you need to send objects in the request body.\n.decoder(new JacksonDecoder()): Similarly, a JacksonDecoder is set for the Feign client. JacksonDecoder is responsible for decoding JSON responses from the server into Java objects. It deserializes the JSON response into the corresponding Java objects.\nHandling Errors Error handling is a crucial aspect of building robust and reliable applications, especially when it comes to making remote API calls. Feign offers powerful features that can assist in effectively handling errors.\nFeign gives us more control over handling unexpected responses. We can register a custom ErrorDecoder via the builder.\nfinal CalculatorService target = Feign.builder() .errorDecoder(new CalculatorErrorDecoder()) .target(CalculatorService.class, HOST); Here is an example to show error handling:\npublic class CalculatorErrorDecoder implements ErrorDecoder { private final ErrorDecoder defaultErrorDecoder = new Default(); @Override public Exception decode(String methodKey, Response response) { ExceptionMessage message = null; try (InputStream bodyIs = response.body().asInputStream()) { ObjectMapper mapper = new ObjectMapper(); message = mapper.readValue(bodyIs, ExceptionMessage.class); } catch (IOException e) { return new Exception(e.getMessage()); } final String messageStr = message == null ? \u0026#34;\u0026#34; : message.getMessage(); switch (response.status()) { case 400: return new RuntimeException(messageStr.isEmpty() ? \u0026#34;Bad Request\u0026#34; : messageStr ); case 401: return new RetryableException(response.status(), response.reason(), response.request().httpMethod(), null, response.request()); case 404: return new RuntimeException(messageStr.isEmpty() ? 
\u0026#34;Not found\u0026#34; : messageStr ); default: return defaultErrorDecoder.decode(methodKey, response); } } } All responses with an HTTP status outside the 2xx range, for example HTTP 400, will trigger the ErrorDecoder\u0026rsquo;s decode() method. In this overridden decode() method, we can handle the response, wrap the failure into a custom exception or perform any additional processing.\nWe can even retry the request by throwing a RetryableException. This will invoke the registered Retryer. The Retryer is explained in detail in the Advanced Techniques section.\nYou can see it in action by running the givenNegativeDivisorDivisionReturnsError() test in the example code shared on Github.\nAdvanced Techniques Integrating Encoder/Decoder Encoder and decoder are used to encode/decode the request and response data respectively. We select these depending on the content type of the request and response. For example, Gson or Jackson can be used for JSON data.\nHere is an example showing how to use the Jackson encoder and decoder.\nfinal CalculatorService target = Feign.builder() .encoder(new JacksonEncoder()) .decoder(new JacksonDecoder()) .target(CalculatorService.class, HOST); Changing HTTP Client By default, Feign uses its built-in HTTP client. The motivation behind changing the default HTTP client of Feign to libraries like OkHttp is primarily driven by the need for better performance, improved features, and enhanced compatibility with modern HTTP standards.\nNow let us see how to override the HTTP client.\nfinal CalculatorService target = Feign.builder() .client(new OkHttpClient()) .target(CalculatorService.class, HOST); Configuring a Logger SLF4JModule is used to send Feign\u0026rsquo;s logging to SLF4J. 
With SLF4J, we can easily use a logging backend of our choice (Logback, Log4J, etc.)\nHere is an example of building the client:\nfinal CalculatorService target = Feign.builder() .logger(new Slf4jLogger()) .logLevel(Level.FULL) .target(CalculatorService.class, HOST); To use SLF4J with Feign, add both the SLF4J module and an SLF4J binding of our choice to the classpath. Then, configure Feign to use the Slf4jLogger as shown above.\nConfiguring Request Interceptors Request Interceptors in Feign allow us to customize and manipulate HTTP requests before they are sent to the remote server. They are useful for a variety of purposes, such as adding custom headers, logging, authentication, or request modification.\nHere\u0026rsquo;s why we might want to use Request Interceptors in Feign:\n  Authentication: We can use a Request Interceptor to add authentication tokens or credentials to every request. For example, adding an \u0026ldquo;Authorization\u0026rdquo; header with a JWT token.\n  Logging: Interceptors are helpful for logging incoming and outgoing requests and responses. This can be useful for debugging and monitoring.\n  Request Modification: We can modify the request before it\u0026rsquo;s sent. 
This includes changing headers, query parameters, or even the request body.\n  Rate Limiting: Implementing rate limiting by inspecting the number of requests being made and deciding whether to allow or block a request.\n  Caching: Caching request/response data based on specific criteria.\n  Here is a code snippet to demonstrate how to use request interception:\nstatic class AuthorizationInterceptor implements RequestInterceptor { @Override public void apply(RequestTemplate template) { // Check if token is present, if not, add it  template.header(\u0026#34;Authorization\u0026#34;, \u0026#34;Bearer \u0026#34; + generatedToken); } } public class CalculatorServiceTest { public static void main(String[] args) { final RequestInterceptor interceptor = new AuthorizationInterceptor(); final CalculatorService target = Feign.builder() .requestInterceptor(interceptor) .target(CalculatorService.class, HOST); } } Implement RequestInterceptor and override its apply() method to make any modifications to the request that you require.\nConfiguring Retryer OpenFeign Retryer is a component that allows us to configure how Feign handles retries when a request fails. It can be particularly useful for handling transient failures in network communications. We can specify conditions under which Feign should automatically retry a failed request.\nRetryer Configuration To use a Retryer in OpenFeign, provide an implementation of the Retryer interface. The Retryer interface has two methods:\n  void continueOrPropagate(RetryableException e): Feign calls this method after a failed request. The implementation either returns normally, optionally after waiting for a backoff period, to signal that the request should be retried, or rethrows the RetryableException to propagate the error.\n  Retryer clone(): This method creates a clone of the Retryer instance. Feign clones the retryer for each request so that retry state is not shared between requests.\n  Default Retryer Feign provides a default retryer implementation called Retryer.Default. 
This default retryer is used when we create a Feign client without explicitly specifying a custom retryer.\nIt provides two constructors to create a Retryer object.\nThe first constructor doesn\u0026rsquo;t require any parameters:\npublic Default() { this(100L, TimeUnit.SECONDS.toMillis(1L), 5); } It defines a simple retry strategy with the following characteristics:\n  Max Attempts: It allows a maximum of 5 attempts (the initial request plus retries) for failed requests.\n  Backoff Period: It uses an exponential backoff strategy between retries, starting with a backoff of 100 milliseconds and multiplying the backoff by 1.5 with each subsequent retry, capped at the maximum period of 1 second.\n  Retryable Exceptions: It retries requests if they result in any exceptions that are considered retryable. These typically include network-related exceptions like connection timeouts or socket exceptions.\n  The second constructor requires some parameters. We can use it if the default configuration is not suitable for us.\npublic Default(long period, long maxPeriod, int maxAttempts) // use it to create retryer new Retryer.Default(1, 100, 10); While the default retryer provided by Feign covers many common retry scenarios, there are situations where we might want to define a custom retryer. Here are some motivations for defining a custom retryer:\n  Fine-Grained Control: If we need more control over the default retry behavior, such as specifying a different maximum number of retry attempts or a custom backoff strategy, a custom retryer allows us to tailor the behavior to our specific requirements.\n  Retry Logic: In some cases, we might want to retry requests only for specific response codes or exceptions. 
A custom retryer lets us implement our own logic for determining when a retry should occur.\n  Logging and Metrics: If we want to log or collect metrics related to retry attempts, implementing a custom retryer provides an opportunity to add this functionality.\n  Integration with Circuit Breakers: If we are using circuit breaker patterns in conjunction with Feign, a custom retryer can be integrated with the circuit breaker\u0026rsquo;s state to make more informed decisions about when to retry or when to open the circuit.\n  Non-Standard Retry Strategies: For scenarios that do not fit the standard retry strategies provided by the default retryer, such as rate-limited APIs or APIs with specific retry requirements, we can define a custom retryer tailored to our use case.\n  Here\u0026rsquo;s an example of implementing a custom Retryer in OpenFeign:\n@Slf4j public class CalculatorRetryer implements Retryer { /** * millis to wait between retries */ private final long period; /** * Maximum number of retries */ private final int maxAttempts; private int attempt = 1; public CalculatorRetryer(long period, int maxAttempts) { this.period = period; this.maxAttempts = maxAttempts; } @Override public void continueOrPropagate(RetryableException e) { log.info(\u0026#34;Feign retry attempt {} of {} due to {} \u0026#34;, attempt, maxAttempts, e.getMessage()); if (++attempt \u0026gt; maxAttempts) { throw e; } if (e.status() == 401) { try { Thread.sleep(period); } catch (InterruptedException ex) { throw e; } } else { throw e; } } @Override public Retryer clone() { return this; } public int getRetryAttempts() { return attempt - 1; // Subtract 1 to exclude the initial attempt  } } It specifically retries HTTP 401 errors.\nYou can see it in action by running the givenTwoNumbersAndServerReturningUnauthorizedErrorShouldRetry test in the example code shared on Github.\nTo summarise, the incentive for creating a custom retryer in Feign arises when we require greater control and flexibility over how retries are handled in our HTTP requests. 
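A custom retryer like the one above usually pairs its retry decision with a backoff schedule. The arithmetic behind an exponential backoff with a cap is easy to sketch in isolation; the following plain-Java illustration uses hypothetical values (the multiplier, period, and cap here are illustrative, not Feign internals):

```java
public class BackoffSketch {
    // Simplified sketch of an exponential backoff schedule: start at `period`
    // millis and grow by `multiplier` with each attempt, capped at `maxPeriod`.
    static long nextInterval(long period, long maxPeriod, double multiplier, int attempt) {
        long interval = (long) (period * Math.pow(multiplier, attempt - 1));
        return Math.min(interval, maxPeriod);
    }

    public static void main(String[] args) {
        // With period=100 ms, cap=1000 ms, multiplier=1.5 the schedule grows as
        // 100, 150, 225, 337, 506 ms over the first five attempts.
        for (int attempt = 1; attempt <= 5; attempt++) {
            System.out.println("attempt " + attempt + " waits "
                    + nextInterval(100, 1000, 1.5, attempt) + " ms");
        }
    }
}
```

A real retryer would plug such a schedule into the sleep before signaling Feign to retry.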
When our requirements differ from the behaviour of the default retryer, a custom retryer allows us to modify the retry logic to our specific use case.\nCircuit Breakers Circuit breakers are typically implemented using separate libraries or tools such as Netflix Hystrix, Resilience4j, or Spring Cloud Circuit Breaker.\nWhy Should I use a Circuit Breaker? The primary motivation for using a circuit breaker with Feign is to enhance the resilience of our microservices-based applications. Here are some key reasons:\n  Fault Isolation: Circuit breakers prevent failures in one service from cascading to other services by isolating the failing component.\n  Fail-Fast: When a circuit is open (indicating a failure state), subsequent requests are \u0026ldquo;failed fast\u0026rdquo; without attempting to make calls to a potentially unresponsive or failing service, reducing latency and resource consumption.\n  Graceful Degradation: Circuit breakers allow our application to gracefully degrade when a dependent service is experiencing issues, ensuring that it can continue to provide a reduced set of functionality.\n  Monitoring and Metrics: Circuit breakers provide metrics and monitoring capabilities, allowing us to track the health and performance of our services.\n  Configuring Circuit Breakers HystrixFeign is used to configure circuit breaker support provided by Hystrix.\nHystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services, and 3rd-party libraries in a distributed environment. It helps to stop cascading failure and enable resilience in complex distributed systems where failure is inevitable.\nTo use Hystrix with Feign, we need to add the Hystrix module to classpath. 
And use the HystrixFeign builder as follows:\nfinal CalculatorService target = HystrixFeign.builder() .target(CalculatorService.class, HOST); Let us see how to use a fallback class to handle errors from the service.\nIn Hystrix, a fallback class is an alternative way to define fallback logic for a Hystrix command instead of defining the fallback logic directly within the getFallback method of the Hystrix command class. The fallback class provides a separation of concerns, allowing us to keep our command class focused on the main logic and delegate fallback logic to a separate class. This can improve code organization and maintainability.\nHere is sample code to implement the fallback for CalculatorService.\n@Slf4j public class CalculatorHystrixFallback implements CalculatorService { @Override public Long add(Long firstNumber, Long secondNumber) { log.info(\u0026#34;[Fallback add] Adding {} and {}\u0026#34;, firstNumber, secondNumber); return firstNumber + secondNumber; } @Override public Long subtract(Long firstNumber, Long secondNumber) { return null; } @Override public Long multiply(Long firstNumber, Long secondNumber) { return null; } @Override public Long divide(Long firstNumber, Long secondNumber) { return null; } } To demonstrate the fallback, we have implemented only the add() method. Then we use this fallback while building the client:\nfinal CalculatorHystrixFallback fallback = new CalculatorHystrixFallback(); final CalculatorService target = HystrixFeign.builder() .decoder(new JacksonDecoder()) .target(CalculatorService.class, HOST, fallback); When the add endpoint returns an error or the circuit is open, Hystrix calls the fallback\u0026rsquo;s add() method. 
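Stripped of Hystrix itself, the fallback pattern boils down to: try the primary call, and on failure delegate to the fallback implementation. A minimal plain-Java sketch of that idea (the names here are illustrative, not the Hystrix API):

```java
import java.util.function.BinaryOperator;

public class FallbackSketch {
    // Wraps a primary operation with a fallback: if the primary throws,
    // the fallback supplies the result instead (the core idea behind
    // Hystrix fallbacks, without the circuit-breaker state machine).
    static BinaryOperator<Long> withFallback(BinaryOperator<Long> primary,
                                             BinaryOperator<Long> fallback) {
        return (a, b) -> {
            try {
                return primary.apply(a, b);
            } catch (RuntimeException e) {
                return fallback.apply(a, b);
            }
        };
    }

    public static void main(String[] args) {
        BinaryOperator<Long> remoteAdd = (a, b) -> {
            throw new RuntimeException("service unavailable"); // simulate a failing endpoint
        };
        BinaryOperator<Long> localAdd = Long::sum; // fallback computes locally
        Long result = withFallback(remoteAdd, localAdd).apply(5L, 3L);
        System.out.println(result); // 8, supplied by the fallback
    }
}
```

Hystrix adds failure counting and the open/half-open circuit states on top of this basic delegate-on-failure behavior.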
You can see it in action by running the givenTwoNumbersAndServerReturningServerErrorShouldCircuitBreak test in the example code shared on GitHub.\nYou can learn about circuit breakers in detail by going through our article Implementing a Circuit Breaker with Resilience4j.\nCollecting Metrics Feign does not natively offer a built-in metrics API like some other libraries or frameworks. Metrics related to Feign, such as request duration, error rates, or retries, typically need to be collected and tracked using external libraries or tools. Popular libraries for collecting metrics in Java applications include Micrometer and Dropwizard Metrics.\nHere\u0026rsquo;s how we can use Micrometer, a commonly used library, to collect and report metrics related to Feign calls:\npublic class CalculatorServiceTest { public static void main(String[] args) { final CalculatorService target = Feign.builder() .addCapability(new MicrometerCapability()) .target(CalculatorService.class, HOST); target.add(1L, 2L); // metrics will be available from this point onwards  } } Please note that we would need to add Micrometer as a dependency in our project and configure it properly.\nNext Steps If you are interested in learning more about OpenFeign and trying out its features, we recommend visiting the official OpenFeign website and exploring the documentation. Here\u0026rsquo;s how you can get started:\nStep 1: Visit the Official OpenFeign Website Visit the official OpenFeign website.\nStep 2: Explore the Documentation The OpenFeign documentation provides comprehensive information on how to use and configure the library. You will find examples, guides, and detailed explanations of various features. Make sure to check out the documentation sections that interest you the most:\n Getting Started: This section typically provides a quick overview and setup instructions.
Annotations: Learn about the powerful annotations used in OpenFeign to define HTTP clients. Request Interceptors: Understand how to use request interceptors for customizing requests. Error Handling: Explore error handling strategies in Feign. Configuration: Learn how to configure Feign for different use cases. Advanced Topics: Dive into advanced topics like custom encoders/decoders, retries, and circuit breakers.  Step 3: Try Out Examples As you go through the documentation, try out the provided examples in your development environment. Experiment with different features and configurations to get a hands-on experience with OpenFeign.\nStep 4: Join the Community If you have questions, run into issues, or want to share your experiences, consider joining the OpenFeign community. You can find the community on platforms like GitHub, Stack Overflow, or relevant discussion forums.\nStep 5: Stay Updated Keep an eye on the project\u0026rsquo;s GitHub repository for updates, releases, and new features. OpenFeign is an open-source project, and it may evolve over time with improvements and enhancements.\nBy visiting the OpenFeign official website and exploring its documentation, you\u0026rsquo;ll gain valuable insights into how to use this powerful library for making HTTP requests in your Java applications. It\u0026rsquo;s a great way to enhance your skills and improve your ability to work with remote APIs efficiently.\n","date":"October 14, 2023","image":"https://reflectoring.io/images/stock/0125-tools-1200x628-branded_hu82ff8da5122675223ceb88a08f293300_139357_650x0_resize_q90_box.jpg","permalink":"/create-openfeign-http-client/","title":"Create an HTTP Client with OpenFeign"},{"categories":["Kotlin"],"contents":"Kotest is simply a multi-platform framework used for testing written in Kotlin. 
In this tutorial, we shall cover the following topics related to the Kotest framework: testing with Kotest, its testing styles, grouping Kotest tests with tags, and the lifecycle hooks and assertions supported by Kotest.\nTesting with Kotest We\u0026rsquo;re going to learn how we can test our Kotlin code using Kotest. Before writing the tests, we shall first add the Kotest framework dependencies to our project:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.kotest\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;kotest-runner-junit5-jvm\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.5.4\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.kotest\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;kotest-assertions-core-jvm\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.5.4\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; Using Describe Spec Style To write our tests, we normally create a Kotlin test file ending with the suffix Test.kt. In this example, we\u0026rsquo;ll use the DescribeSpec style to define a suite of tests.\nclass MyTestClass: DescribeSpec() { init { describe(\u0026#34;My test suite\u0026#34;) { it(\u0026#34;should add two numbers\u0026#34;) { val result = 1 + 2 result shouldBe 3 } it(\u0026#34;should concatenate two strings\u0026#34;) { val result = \u0026#34;Hello, \u0026#34; + \u0026#34;World!\u0026#34; result shouldBe \u0026#34;Hello, World!\u0026#34; } } } } In this Kotest test, we first define a test class MyTestClass that uses the Kotest testing framework, specifically the DescribeSpec style. Within the class\u0026rsquo;s initializer block, we describe a test suite titled My test suite that contains two individual tests defined using the it function.
The first test checks whether adding 1 and 2 results in 3, and the second test verifies that concatenating two strings Hello,  and World! results in the string Hello, World!. The shouldBe function is used to assert that the actual result matches the expected value in each test.\nUsing Behavior Spec Style Let\u0026rsquo;s take a look at how we can write tests using the BehaviorSpec style in Kotest (note that when must be wrapped in backticks, since it is a keyword in Kotlin):\nclass MyBehaviorSpec: BehaviorSpec({ given(\u0026#34;a calculator\u0026#34;) { val calculator = Calculator() `when`(\u0026#34;adding two numbers\u0026#34;) { val result = calculator.add(2, 3) then(\u0026#34;it should return the correct sum\u0026#34;) { result shouldBe 5 } } `when`(\u0026#34;subtracting two numbers\u0026#34;) { val result = calculator.subtract(5, 2) then(\u0026#34;it should return the correct difference\u0026#34;) { result shouldBe 3 } } } }) In our example, we have a given block that sets up a context, a `when` block that represents an action, and a then block that contains assertions about the expected behavior.\nUsing Should Spec Style While using the ShouldSpec block, we use the should keyword to define our test cases and describe the expected behavior of our code:\nclass MyShouldSpec: ShouldSpec({ should(\u0026#34;return the correct sum when adding two numbers\u0026#34;) { val result = Calculator().add(2, 3) result shouldBe 5 } should(\u0026#34;return the correct difference when subtracting two numbers\u0026#34;) { val result = Calculator().subtract(5, 2) result shouldBe 3 } }) Using Feature Spec Style We shall use the feature and scenario functions to define features and scenarios respectively.
Our feature represents a higher-level feature, and scenarios describe specific behaviors or test cases within that feature.\nExample code using Feature Spec would be:\nclass MyFeatureSpec : FeatureSpec({ feature(\u0026#34;Calculator\u0026#34;) { scenario(\u0026#34;addition\u0026#34;) { val result = Calculator().add(2, 3) result shouldBe 5 } scenario(\u0026#34;subtraction\u0026#34;) { val result = Calculator().subtract(5, 2) result shouldBe 3 } } feature(\u0026#34;String Manipulation\u0026#34;) { scenario(\u0026#34;concatenation\u0026#34;) { val result = StringUtil.concat(\u0026#34;Hello\u0026#34;, \u0026#34;World\u0026#34;) result shouldBe \u0026#34;HelloWorld\u0026#34; } scenario(\u0026#34;length\u0026#34;) { val result = StringUtil.getLength(\u0026#34;Kotlin\u0026#34;) result shouldBe 6 } } }) In our example, we\u0026rsquo;ve defined two features Calculator and String Manipulation, each containing scenarios that describe specific behaviors. Note that each scenario is itself a test case, so the assertions live directly inside it.\nJUnit vs Kotest Choosing between the Kotest and JUnit frameworks for a Kotlin project depends on our specific project requirements and preferences. Both Kotest and JUnit have their own advantages and may be better suited for different use cases. Here are some reasons why we might consider using Kotest over JUnit in a Kotlin project:\nKotlin native support: Kotest is designed for Kotlin and provides native support for Kotlin features, such as coroutines, property-based testing, and DSLs, making it more natural to work with Kotlin codebases.\nRich assertion library: Kotest comes with a powerful and extensible assertion library that allows developers to write expressive and concise assertions in a Kotlin idiomatic style.
It provides a wide range of assertion functions that can make our tests more readable and maintainable.\nProperty-Based testing: Kotest supports property-based testing, which allows us to define properties that our code should satisfy, and then it generates test cases to check those properties. This can help us discover edge cases and unexpected behavior in our code.\nTest configuration and hooks: Kotest provides flexible ways to configure our test suite and define test lifecycle hooks. We can also set up custom behavior before and after tests, which can be useful for tasks like database setup and teardown.\nConcurrent testing: Kotest offers built-in support for running tests concurrently, which can significantly speed up test execution, especially in projects with a large number of tests.\nTest case nesting: Kotest allows us to nest test cases and groups, which helps us to organize our tests more hierarchically and logically, making it easier to manage complex test suites.\nIntegration with other libraries: Kotest integrates well with other libraries and frameworks commonly used in Kotlin projects, such as MockK for mocking, Koin for dependency injection, and kotlinx.coroutines for coroutine testing.\nKotest Assertions The Kotest framework provides us with several matcher functions that help us write fluent assertions in our tests.
These matchers are designed to help us verify various conditions and expectations in a concise and readable manner.\nHere are some of the commonly used Kotest matchers (note that these are functions, not annotations):\n   Matcher Description     shouldBe Asserts that a value should be the expected value   shouldNotBe Asserts that a value should not be the expected value   shouldBeLessThan Asserts that a value should be less than the max value   shouldBeLessThanOrEqual Asserts that a value should be less than or equal to the max value   shouldBeGreaterThan Asserts that a value should be greater than the min value   shouldBeGreaterThanOrEqual Asserts that a value should be greater than or equal to the min value   shouldBeNull Asserts that a value should be null   shouldNotBeNull Asserts that a value should not be null   shouldBeInstanceOf Asserts that a value should be an instance of a given type, e.g. String::class   shouldNotBeInstanceOf Asserts that a value should not be an instance of a given type, e.g. Int::class   shouldBeOfType Checks if an object is of a specific type and optionally matches its properties   shouldContain Asserts that a collection should contain an element   shouldNotContain Asserts that a collection should not contain an element   shouldHaveSize Asserts that a collection should have the expected size   shouldBeEmpty Asserts that a collection should be empty   shouldNotBeEmpty Asserts that a string should not be empty   shouldStartWith Asserts that a string should start with a given prefix   shouldEndWith Asserts that a string should end with a given suffix   shouldContainSubstring Checks if a string contains a specific substring   shouldThrow Checks if a specific exception is thrown during the execution of a block of code   shouldNotThrow Checks that a block of code does not throw an exception    Lifecycle Hooks In the Kotest framework, we can use lifecycle hooks to perform setup and
teardown operations before and after our tests. Generally, these hooks allow us to prepare our test environment, set up resources, and clean up after our tests have been executed.\nHere are the most commonly used lifecycle hooks in Kotest:\n   Hook Description     beforeSpec This hook runs once before all the tests in a spec   afterSpec This hook runs once after all the tests in a spec   beforeTest This hook runs before each individual test within a spec   afterTest This hook runs after each individual test within a spec   beforeContainer This hook runs before each nested container within a spec   afterContainer This hook runs after each nested container within a spec    Data-Driven Testing in Kotest Data-driven testing is a testing approach where we parameterize our tests with different sets of data, allowing us to run the same test logic with multiple input values to ensure that our code works correctly in various scenarios.\nIn order to achieve data-driven testing in Kotest, we use the withData function.\nLet\u0026rsquo;s see an example of this concept:\ndata class Car(val make: String, val model: String, val year: Int, val expectedPrice: Int) class CarPricingTests : FunSpec({ withData( Car(\u0026#34;Toyota\u0026#34;, \u0026#34;Camry\u0026#34;, 2020, 25000), Car(\u0026#34;Honda\u0026#34;, \u0026#34;Civic\u0026#34;, 2021, 22000), Car(\u0026#34;Ford\u0026#34;, \u0026#34;Focus\u0026#34;, 2019, 18000) ) { (make, model, year, expectedPrice) -\u0026gt; val actualPrice = calculateCarPrice(make, model, year) actualPrice shouldBe expectedPrice } }) fun calculateCarPrice(make: String, model: String, year: Int): Int { return when (make) { \u0026#34;Toyota\u0026#34; -\u0026gt; 25000 \u0026#34;Honda\u0026#34; -\u0026gt; 22000 \u0026#34;Ford\u0026#34; -\u0026gt; 18000 else -\u0026gt; 0 } } In this example, Car is a data class representing the input data for our tests. It includes properties for make, model, year, and expectedPrice.
Inside the CarPricingTests test class, we are using the withData function to define sets of input data as instances of the Car data class. For each set of input data, a test is created. This test invokes the calculateCarPrice function with the provided input values (make, model, and year) and checks if the result matches the expectedPrice.\nGrouping Kotest Tests with Tags Another useful feature of Kotest is its ability to group tests with tags. This allows us to categorize our tests and run specific groups of tests based on these tags. Tags are helpful for organizing our test suite, especially when we have a large number of tests and want to run only a subset of them, such as smoke tests, regression tests, or tests for a specific module of our application.\nTo tag our tests, we can use the Tags annotation provided by Kotest:\n@Tags(\u0026#34;smoke\u0026#34;, \u0026#34;regression\u0026#34;) class MyTestSuite : FunSpec({ test(\u0026#34;Test case 1\u0026#34;) { // Test logic here  } test(\u0026#34;Test case 2\u0026#34;) { // Test logic here  } }) In this example, the MyTestSuite test class is tagged with both smoke and regression tags.\nTo run tests based on our tags, we can pass the kotest.tags system property when executing our test suite:\n./gradlew test -Dkotest.tags=\u0026#34;smoke\u0026#34; Conclusion In this tutorial, we have gone through the Kotest framework, its various testing styles, grouping Kotest tests using tags, and the assertions and lifecycle hooks Kotest supports.\n","date":"October 14, 2023","image":"https://reflectoring.io/images/stock/0112-ide-1200x628-branded_hu3b7dcb6bd35b7043d8f1c81be3dcbca2_169620_650x0_resize_q90_box.jpg","permalink":"/introduction-to-Kotest/","title":"Introduction to Kotest"},{"categories":["Kotlin"],"contents":"In this article, we\u0026rsquo;ll discuss all that KDoc in Kotlin entails. KDoc is the language used to document code written in Kotlin.
KDoc allows us to provide documentation comments for classes, functions, properties, and other elements in our code. It\u0026rsquo;s the counterpart of Javadoc, which is used to document Java code. Essentially, KDoc combines Javadoc\u0026rsquo;s syntax for block tags and Markdown for inline markup.\nKDoc Syntax Like Javadoc, KDoc comments start with /** and end with */.\nLet’s see an example of KDoc:\n/** * Calculates the sum of two numbers. * * @param a The first number. * @param b The second number. * @return The sum of the two numbers. */ fun sum(a: Int, b: Int): Int { return a + b } KDoc in our example is written above our sum function. In this case, KDoc explains what task our function performs and also documents the parameters a and b that the function takes, as well as the expected return value.\nNote that, unlike Javadoc, KDoc supports Markdown syntax.\nKDoc Block Tags Block tags are used to provide documentation for larger sections of code or to describe multi-line content within KDoc. They are usually placed on separate lines.\nThese are the block tags supported by KDoc:\n   Tag Description     @param This tag is used to document a value parameter of a function   @return Used to document the return value of a function   @constructor Used to document the primary constructor of a class   @receiver Documents the receiver of an extension function   @property This tag is used to document the property of a class that has the specified name   @throws, @exception Used to document exceptions that can be thrown by a method   @sample Used to embed the body of a function that has the specified qualified name   @see Used to add a link to a specific class or method   @author Used to specify the author of the element that is being documented   @since Used to specify the version of the software in which the element under documentation was introduced    KDoc does not support the @deprecated tag.
Instead, please use the @Deprecated annotation.\nHere is a code block example with block tags supported by KDoc (note that @see and @sample take class or function names, not URLs):\n/** * A list of movies. * * This class is just a **documentation example**. * * @param T the type of movie in this list. * @property name the name of this movie list. * @constructor creates an empty movie list. * @see Movie * @sample movieListSample */ class MovieList\u0026lt;T\u0026gt;(private val name: String) { private var movies: MutableList\u0026lt;T\u0026gt; = mutableListOf() /** * Adds a [movie] to this list. * @return the new number of movies in the list. */ fun add(movie: T): Int { movies.add(movie) return movies.size } } /** * A movie with a title. * * @property title the title of this movie. * @constructor creates a movie with a title. */ data class Movie(private val title: String) private fun movieListSample() { val movieList = MovieList\u0026lt;Movie\u0026gt;(\u0026#34;My Favorite Movies\u0026#34;) val movieCount = movieList.add(Movie(\u0026#34;Inception\u0026#34;)) } Conclusion In this article, we discussed KDoc, which is the documentation language for Kotlin code. We also went through KDoc\u0026rsquo;s syntax and the various tags it supports.\n","date":"September 23, 2023","image":"https://reflectoring.io/images/stock/0104-on-off-1200x628-branded_hue5392027620fc7728badf521ca949f28_116615_650x0_resize_q90_box.jpg","permalink":"/introduction-to-kDoc/","title":"Introduction to KDoc"},{"categories":["Node"],"contents":"Have you ever wondered how public API platforms, payment services, or popular websites such as Medium, Twitter, and others ensure that their APIs are not overloaded? It’s all thanks to a concept known as rate limiting.\nRate limiting does exactly what the name implies: it limits or regulates the rate at which users or services can access a resource. This strategy is incredibly versatile and can be applied in various scenarios.
It can be used to restrict the number of calls a user can make to an API, the number of blog posts or tweets they can view, or the number of successful transactions they can make within a given time.\nIn this article, we will explore the concept of rate limiting, followed by a step-by-step guide on how to implement a rate limiter in a Node.js application.\nPrerequisites Before we begin, please ensure that you have the following:\n Node.js installed on your computer. Basic knowledge of JavaScript and Node.js. Integrated Development Environment (IDE) (e.g. Visual Studio Code) API testing software (e.g. Postman)   Example Code This article is accompanied by a working code example on GitHub. What Is Rate Limiting? Rate limiting is a strategy for limiting network traffic on a server. It puts a cap on how quickly and frequently a user can interact with a server or resource, preventing overload and abuse. For instance, we might want to set a limit of, say, 25 requests per hour for users. Once these users exceed the set limit during the one-hour time window, any further request made within the window is rejected. The server typically responds with an HTTP 429 status code and an error message indicating that the user has made too many requests and exceeded the maximum request limit.\nWhy Do We Need Rate Limiting?  Preventing abuse: Without rate limiting, a single user or bot could overload the system with excessive requests, causing performance degradation or service downtime. Tailored limits: We can set different rate limits for each pricing plan, allowing flexibility to match user needs. Fair resource allocation: It ensures fair resource distribution, preventing one user from monopolizing server resources at the expense of others. Security against brute-force attacks: It slows down or mitigates brute-force attacks, making unauthorized access attempts more challenging and time-consuming.
Defending against DDoS attacks: Rate limiting helps reduce the impact of DDoS attacks by limiting incoming request volumes. Cost efficiency: In cloud-based environments, rate limiting controls resource consumption, ensuring that no single user or group overuses computing power, ultimately reducing operational costs.  Considerations for Implementing a Rate Limiter Rate limiters enhance the service quality of an application by preventing resource shortages and request floods. This section discusses key considerations for implementing rate limiters in an application.\n Determine the client identity: Before implementing rate limiting, we need to determine the client identity to be rate limited. This can be based on factors like IP address, user account, or API key. In an API, we can establish distinct rate limits for different categories of users. Anonymous users can be identified primarily by their IP addresses, while authenticated users are identified through API keys, IP addresses, or user account information. However, note that relying solely on IP addresses has limitations due to shared IPs and malicious users. It\u0026rsquo;s advisable to combine IP-based rate limiting with other authentication methods for enhanced security. Determine the application\u0026rsquo;s traffic volume limit: Before setting rate limits, it\u0026rsquo;s crucial we have a deep understanding of our application\u0026rsquo;s capacity and performance limits. This knowledge enables us to establish appropriate rate limits that safeguard our server from potential overloads. Choose an appropriate rate-limiting library, or build your own: When it comes to rate limiting, there are two options to consider: using a pre-existing rate-limiting library designed for our programming language or framework, or developing a custom rate limiter tailored to our application\u0026rsquo;s unique needs.
The choice between these options requires a thoughtful evaluation of several critical factors, such as performance, stability, compatibility, customization needs, development effort, and long-term maintenance considerations. Ultimately, the decision should align with the specific requirements and constraints of the application.  Algorithms for Implementing a Rate Limiter When it comes to implementing rate limiting, various algorithms are at our disposal, each tailored to optimize specific aspects of the rate-limiting process. In this section, we will explore some commonly used rate-limiting algorithms:\nToken Bucket Algorithm This algorithm functions as a bucket holding tokens, where the system tracks tokens assigned to each client in its memory. Every incoming client request consumes one token from the bucket. The token represents permission to make one API request.\nWhen the bucket runs out of tokens due to requests, the server stops processing new requests and returns HTTP response code 429, indicating that the maximum request rate has been reached. Requests are only processed when tokens are added back to the bucket.\nThe bucket has a maximum capacity, limiting the number of requests it can handle. The token bucket algorithm allows clients to use tokens as quickly as they want, provided there are enough tokens in the bucket.\nTokens are replenished into the bucket at a fixed, consistent rate representing the allowed request rate for the client.\nAlthough this algorithm offers a consistent and predictable request processing rate, it has drawbacks. Requests may be rejected due to an empty bucket during periods of high traffic. Choosing a sufficiently large bucket size helps to mitigate this issue.\nFixed Window Algorithm The fixed window algorithm specifies a time window in seconds or minutes during which only a fixed number of requests can be sent. For example, we may allow 15 requests per minute.
Once these 15 requests have been processed within a minute, any subsequent requests must wait for the next one-minute interval to begin.\nThe fixed window algorithm is a simple approach, but it has a disadvantage. As the time window closes, requests arriving towards its end can lead to a sudden surge in processing demand, followed by prolonged periods of inactivity. This request pattern can strain resources and undermine the system\u0026rsquo;s efficient operation.\nSliding Window Algorithm To address the limitations of the Fixed Window Algorithm, consider an adaptive approach. The Sliding Window Algorithm continuously adjusts the window size and the request counter in each window.\nIn contrast to the Fixed Window Algorithm\u0026rsquo;s fixed rate, the Sliding Window Algorithm slides the window over time while maintaining a log of request timestamps within this moving timeframe. As time progresses, it gracefully removes requests older than the window\u0026rsquo;s duration. When a new request arrives, the algorithm assesses whether the count of requests within the window exceeds the defined limit.\nThis approach offers flexibility in defining the window duration and is particularly useful for tracking historical request patterns. It dynamically responds to fluctuations in request rates, ensuring that our rate-limiting strategy remains in sync with the evolving demands of our application.\nLeaky Bucket Algorithm The Leaky Bucket algorithm is based on the idea of a bucket that leaks water at a specified rate. Here\u0026rsquo;s how it works:\nThink of each API request as a drop of water that enters this bucket. The bucket has a maximum capacity, defining its limits for incoming requests.\nAs requests flow in, they fill the bucket.
If more requests arrive and the bucket hits its maximum capacity, excess requests are either discarded or rejected.\nThe bucket consistently releases or \u0026ldquo;leaks\u0026rdquo; its contents at regular intervals, controlled by a predefined rate limit configuration. Requests are processed and sent to the API at a fixed rate, matching the bucket\u0026rsquo;s leakage rate.\nIn essence, the Leaky Bucket algorithm guarantees a steady and well-managed request processing rate, even during traffic spikes. It maintains a reliable pace of request handling. However, it treats all requests equally on a first-come, first-served basis. To prioritize requests based on specific criteria, additional mechanisms may need to be implemented.\nIn the next section, we will look at how to implement a rate limiter in our Node.js application, and how to use a rate limiter both globally across all routes and on a specific route.\nHow to Implement Rate Limiting in a Node.js API Here, we’ll make use of the express-rate-limit NPM package. We could, of course, build a custom rate limiter middleware ourselves using one of the above algorithms.\nHowever, the express-rate-limit package simplifies the procedure for adding rate limiting to our demo application, enabling effective resource access management without the need for extensive custom development.\nBy default, express-rate-limit identifies users based on their IP address (req.ip), extracted from the req object. The req object holds essential information about incoming HTTP requests.\nexpress-rate-limit gives us the option to configure our window size and set the maximum number of requests allowed within that window.\nTo implement rate limiting in our Node.js demo application, we can follow the steps outlined below:\nStep 1: Set Up a Basic Node.js Application Open a terminal in a directory of your choice.
We will create a new folder in this directory and initialize Node.js in it using the following command:\nmkdir node-rate-limiter cd node-rate-limiter npm init -y Next, execute the following command to generate the necessary folders and files for our application:\nmkdir middlewares touch app.js middlewares/ratelimit.js Our server setup will live in the app.js file, while the rate-limiting configuration will be introduced into our application as a middleware.\nMiddleware refers to a set of functions in Node.js that are executed sequentially during the processing of an HTTP request. These functions have access to the request object (req), the response object (res), and a special next() function that allows them to pass control to the next middleware in the stack.\nMiddleware functions are commonly used to perform various tasks related to request processing, such as authentication, logging, data validation, and rate limiting.\nThe rate limiter configuration will be provided as middleware from the ratelimit.js file.\nStep 2: Install the Application Dependencies To install the necessary packages for our application, run the following command:\nnpm install express express-rate-limit Where:\n express: is a web application framework for Node.js. It simplifies the process of building robust, scalable, and performant web applications and APIs. express-rate-limit: is a middleware for rate limiting in Express.js applications. It allows us to control the rate at which requests are allowed to our Express routes.
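As noted earlier, instead of using express-rate-limit we could hand-roll a middleware with one of the algorithms described above. Here is a minimal sketch of the fixed window algorithm, keeping per-client counters in an in-memory Map keyed by IP address (createRateLimiter and its options are illustrative names, not part of this project's code):

```javascript
// Minimal fixed-window rate limiter sketch (not production-ready):
// returns an Express-style middleware (req, res, next).
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // client key -> { count, windowStart }

  return function rateLimit(req, res, next) {
    const key = req.ip;
    const now = Date.now();
    const entry = hits.get(key);

    if (!entry || now - entry.windowStart >= windowMs) {
      // First request in a fresh window: reset the counter.
      hits.set(key, { count: 1, windowStart: now });
      return next();
    }

    if (entry.count < max) {
      entry.count += 1;
      return next();
    }

    // Limit exceeded: reply with 429 Too Many Requests.
    res.statusCode = 429;
    res.end("You have exceeded your request limit.");
  };
}

module.exports = createRateLimiter;
```

We could mount it like any middleware, e.g. app.use(createRateLimiter({ windowMs: 60 * 1000, max: 5 })). A production version would also need to evict stale entries and emit the X-RateLimit-* headers that express-rate-limit adds for us.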
Step 3: Starting Our Node.js Server Now, we\u0026rsquo;ll create basic APIs for our application and start the Node.js server.\nFor this, copy and paste the following code into the app.js file:\nconst express = require(\u0026#34;express\u0026#34;); const rateLimitMiddleware = require(\u0026#34;./middlewares/ratelimit\u0026#34;); const app = express(); // A simple API route app.get(\u0026#34;/api/blog\u0026#34;, (req, res) =\u0026gt; { res.send({ success: true, message: \u0026#34;Welcome to our Blog API Rate Limiter Project 🎉\u0026#34;, }); }); app.get(\u0026#34;/api/blog/post\u0026#34;, (req, res) =\u0026gt; { res.send({ success: true, author: \u0026#34;Mike Abdul\u0026#34;, \u0026#34;title\u0026#34;: \u0026#34;Creating NodeJs Rate Limiter\u0026#34;, \u0026#34;post\u0026#34;: \u0026#34;...\u0026#34; }); }); const PORT = process.env.PORT || 5000; app.listen(PORT, () =\u0026gt; { console.log(`Server running on port ${PORT}`); }); With this code, we have successfully set up two API routes /api/blog and /api/blog/post both of which are not rate-limited. We can start the application server by executing the command\nnode app.js When testing the APIs with tools like Postman or a web browser, we will notice that there are currently no limits on the number of calls we can make to these endpoints.\nWhen endpoints are not rate-limited and can be called endlessly, it can lead to issues like heavy resource usage, server timeouts, and unfair resource allocation.\nTo avoid these problems and secure our APIs, it\u0026rsquo;s crucial to implement rate limiting for our endpoints. This ensures that users cannot make an excessive number of requests within a specific time frame. 
To achieve this, we\u0026rsquo;ll create a rate-limiter middleware for our endpoints.\nStep 4: Configure the Rate Limit To configure a rate limiter for our application endpoints, paste the following code into the middlewares/ratelimit.js file:\nconst setRateLimit = require(\u0026#34;express-rate-limit\u0026#34;); // Rate limit middleware const rateLimitMiddleware = setRateLimit({ windowMs: 60 * 1000, max: 5, message: \u0026#34;You have exceeded your 5 requests per minute limit.\u0026#34;, headers: true, }); module.exports = rateLimitMiddleware; In the above code, we are exporting a rateLimitMiddleware function created by calling setRateLimit from the express-rate-limit package.\nThis middleware enforces our rate limit based on the provided options, where:\n windowMs: This is the window (time frame) size in milliseconds. max: The maximum number of requests allowed in the given window size. message: This option is optional; we can customize the error message or use the default message provided by the middleware. headers: The headers option is essential, as it automatically adds crucial HTTP headers to responses. These headers include X-RateLimit-Limit (indicating the rate limit), X-RateLimit-Remaining (showing the remaining requests within the window), and Retry-After (indicating the time to wait before retrying). These headers provide clients with vital rate-limiting information.  In our configuration, we have set our rate limit to allow a maximum of 5 requests per minute. 
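To make the windowMs and max options concrete, here is a simplified fixed-window counter in plain JavaScript. This is an illustrative sketch of the idea only, not the actual express-rate-limit implementation; all names are our own:

```javascript
// Illustrative fixed-window rate limiter: one counter per client key,
// reset whenever the current window of windowMs milliseconds expires.
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // client key -> { count, windowStart }
  return function isAllowed(key, now = Date.now()) {
    const entry = hits.get(key);
    // First request from this client, or the previous window has expired:
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= max; // deny once the per-window budget is spent
  };
}

const allow = createRateLimiter({ windowMs: 60 * 1000, max: 5 });
let allowed = 0;
for (let i = 0; i < 7; i++) {
  if (allow("client-1", 0)) allowed++; // 7 calls inside the same window
}
console.log(allowed); // 5
console.log(allow("client-1", 60 * 1000)); // true (a new window has started)
```

The real middleware keys requests by client IP by default and sends the 429 response itself; this sketch only shows the counting logic behind the two options.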
If an endpoint is called more than 5 times within a minute, further requests will be denied, and our specified message will be sent with a status code of 429, indicating \u0026ldquo;Too Many Requests\u0026rdquo;.\nWe can set up the rate limiter in two ways: we can use it globally for all routes in our application or set it up for a specific route.\nUsing Rate Limiter Globally Across All Routes To apply rate limiters globally in our application\u0026rsquo;s routes, copy and paste the following into the app.js file:\nconst express = require(\u0026#34;express\u0026#34;); const rateLimitMiddleware = require(\u0026#34;./middlewares/ratelimit\u0026#34;); const app = express(); app.use(rateLimitMiddleware); // ... route definitions... In the above code, app.use(rateLimitMiddleware) is used to apply rate-limiting middleware to our application. The app.use() method is an Express.js method used to bind middleware functions to the Express application. When a request is made to the server, it goes through a series of middleware functions, including the one specified here. In this case, rateLimitMiddleware is applied to all routes defined after the app.use() statement.\nWith this, all our application\u0026rsquo;s routes will be rate-limited.\nWe can test this by making requests to all the available endpoints in our application. If any of these endpoints are called more than the specified configuration allows (e.g. more than 5 requests per minute), the requests will be denied, and an error message will be returned.\nUsing Rate Limiter on a Specific Route Above we implemented a rate limit on all available routes in the application. 
We also have the option to implement a rate limiter for a single specific route endpoint.\nTo do that, replace the code in the app.js file with the following:\napp.get(\u0026#34;/api/blog/post\u0026#34;, rateLimitMiddleware, (req, res) =\u0026gt; { res.send({ success: true, author: \u0026#34;Mike Abdul\u0026#34;, \u0026#34;title\u0026#34;: \u0026#34;Creating NodeJs Rate Limiter\u0026#34;, \u0026#34;post\u0026#34;: \u0026#34;...\u0026#34; }); }); In the above code, our /api/blog/post endpoint is configured to undergo a rate-limiting check before its handler function is executed. This rate-limiting middleware assesses incoming requests based on our rate-limiting rules. If a request complies with the rate limit, the handler function responds with JSON data about a blog post. However, if the request exceeds the rate limit, it\u0026rsquo;s rejected, and the middleware returns an error response with our configured message.\nOn the other hand, the /api/blog endpoint isn\u0026rsquo;t subject to any rate-limiting constraints. Therefore, it can be freely called without limitations.\nThis approach allows us to selectively apply rate limiting to specific routes in our application, ensuring that critical endpoints are protected against excessive requests while leaving others unrestricted. We can add multiple rate limit middlewares with different sets of configurations for certain routes.\nConclusion In summary, implementing API rate limiting in Node.js Express applications is crucial for maintaining service stability and security. The express-rate-limit NPM package is suitable for small to medium-sized applications. However, for larger applications, it may not scale well.\nFor large applications, especially those expecting high traffic loads, it\u0026rsquo;s prudent to consider alternative rate-limiting solutions that incorporate external state storage options, like Redis or Memcached. These external databases store and manage rate-limiting data separately from the application itself. 
As a result, rate-limiting becomes more robust and scalable.\nYou can refer to all the source code used in the article on GitHub.\n","date":"September 19, 2023","image":"https://reflectoring.io/images/stock/0044-lock-1200x628-branded_hufda82673b597e36c6f6f4e174d972b96_267480_650x0_resize_q90_box.jpg","permalink":"/tutorial-nodejs-rate-limiter/","title":"How to Implement API Rate Limiting in a Node.js Express Application"},{"categories":["Kotlin"],"contents":"In this tutorial, we are going to learn about Ktlint, a linting tool for Kotlin. Ktlint checks the style of our code and also helps us to format it against some guidelines. This ensures that our code is clean and easy to read, understand, and maintain.\nWhat Is Linting? Linting refers to the process of analyzing our code for any potential errors, typos or defects in formatting. The term \u0026ldquo;linting\u0026rdquo; originally meant \u0026ldquo;removing dust\u0026rdquo; and is now a common term in programming. Linting is done with a lint tool, also called a static code analyzer.\nWhat Can We Do with Ktlint? The following are some of the tasks Ktlint can handle for us:\n Formatting code: We can use Ktlint to reformat our code to meet the coding rules and styles specified for us. Code analysis: This is the tool\u0026rsquo;s main task; we can use Ktlint to check our entire project for code that does not meet the official Kotlin coding conventions and style guides. Error reporting: After analyzing our project, Ktlint also has the capability to report back to us where it found errors in our project.  Benefits of Using Ktlint The following are some of the benefits developers enjoy from using Ktlint:\n Improved readability: By consistently formatting our code, Ktlint makes our code more readable and understandable. This makes it easier for developers to write code since they don\u0026rsquo;t have to struggle with varying indentation, spacing or any formatting issues. 
Saving time in code reviews: Since Ktlint automatically checks and formats our code to our defined rules and guidelines, it reduces the time that would otherwise be spent on manual style review. Consistent codebase: Following our defined guidelines via Ktlint, developers in the same team are able to follow and use the same formatting rules, leading to a uniform and readable codebase. Customizable rules: While the Ktlint tool itself comes with its default standard rules and guidelines, we can customize our own rules that match our preferred style. This flexibility allows us to enforce the specific rules we would like to follow and use. Automated checks: We also have the capability to integrate Ktlint into our Continuous Integration pipeline; this means that every pull request or commit by developers can be automatically checked to determine whether it meets our coding standards, helping us catch issues early in the development process.  Adding Ktlint to a Gradle Project To add Ktlint to a Gradle project, we use the Ktlint Gradle Plugin. This plugin is easy to work with and also provides developers with commands to run in order to check and format code. Here are the steps we\u0026rsquo;ll follow:\nAdding the Ktlint Gradle Plugin To add the Gradle plugin to our project, we add the dependency in our root-level build.gradle file. Let\u0026rsquo;s take a look at how we achieve this:\nplugins { id \u0026#34;org.jlleitschuh.gradle.ktlint\u0026#34; version \u0026#34;11.0.0\u0026#34; } In the above code, we\u0026rsquo;re using version 11.0.0, which we can replace with our version of choice.\nApplying the Ktlint Plugin to Other Modules Let\u0026rsquo;s also add Ktlint to our modules within the same project to ensure that the code in them is also checked and formatted. Note that this is only applicable if our project is a multi-module project. 
To achieve this task, we add the plugin in the allprojects block found in the build.gradle file:\nallprojects { apply plugin: \u0026#34;org.jlleitschuh.gradle.ktlint\u0026#34; } Analyzing and Formatting Code Let\u0026rsquo;s look at which commands the Ktlint plugin provides:\n ./gradlew ktlintCheck: We use this command to tell Ktlint that it should go through our entire project and check which files in our codebase violate the provided guidelines. After checking, this will result in a report indicating the files with errors that need to be corrected; if there are no violations, the check passes. ./gradlew ktlintFormat: This command automatically updates the code to follow the configured coding style.  Adding Ktlint to a Maven Project In order to integrate Ktlint into our Maven project, we add the Maven plugin to our pom.xml file:\n\u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;com.github.shyiko\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;ktlint-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;RELEASE\u0026lt;/version\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;id\u0026gt;format\u0026lt;/id\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;format\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; By default, after we add the Maven plugin to our project, it uses the official standard Kotlin style guide. If we want to modify and create our own rules to use in our project, we can create a ktlint.yml file in the root directory of our project and configure our rules in this file. 
Note that it\u0026rsquo;s advisable to always use the official standard guidelines provided by Kotlin instead of defining our own rules.\nThe file may look like this:\nmax_line_length: 120 indent_size: 4 The rules in this example mean the following:\n max_line_length: This rule specifies the maximum allowed length for a single line of code. If a line of code exceeds 120 characters in length, ktlint will flag this as a violation of our style rules. indent_size: This sets the size of an indentation level. In our case, it\u0026rsquo;s set to 4 spaces, which means that each nested block of code should be indented by 4 spaces.  Finally, after we configure the plugin in our project and add the yml file in which we provided our own style guide, we navigate to our project\u0026rsquo;s root directory and execute the mvn ktlint:format command in our terminal to ensure our code is formatted.\nIntegrate Ktlint with our Maven Build Process To ensure that our project\u0026rsquo;s code is properly formatted during our build process, we add a plugin execution to the verify phase in the pom.xml file of our project:\n\u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;com.github.shyiko\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;ktlint-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;RELEASE\u0026lt;/version\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;id\u0026gt;format\u0026lt;/id\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;format\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;id\u0026gt;verify\u0026lt;/id\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;check\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;/plugin\u0026gt; Conclusion In this tutorial, we have gone through the benefits of using Ktlint and how we can add it to our Gradle and Maven projects.\n","date":"September 4, 
2023","image":"https://reflectoring.io/images/stock/0019-magnifying-glass-1200x628-branded_hudd3c41ec99aefbb7f273ca91d0ef6792_109335_650x0_resize_q90_box.jpg","permalink":"/code-format-with-ktlint/","title":"Code Formatting with Ktlint"},{"categories":["Node"],"contents":"You and your team have spent countless hours meticulously crafting a groundbreaking application that could propel your startup to new heights. Your code is a work of art, and you can\u0026rsquo;t wait to share it with the world. But as you prepare to deploy it to your production environment, disaster strikes! A critical bug emerges, bringing your entire application crashing down. The application hasn\u0026rsquo;t run through a Continuous Integration/Continuous Deployment (CI/CD) pipeline that would have flushed out this bug much earlier.\nThis cautionary tale highlights the vital role that CI/CD plays in the software development lifecycle. CI/CD acts as a resilient safety net, protecting applications from potential catastrophes and ensuring a seamless journey from development to deployment. In this article, we will delve into the concept of CI/CD and its importance. Then, we\u0026rsquo;ll go over how to deploy a Node.js application on an AWS EC2 instance using GitHub Actions for the CI/CD pipeline.\nPrerequisites Before we begin, make sure you have the following:\n Basic knowledge of JavaScript Node and npm installed on your computer Basic understanding of GitHub and a GitHub account AWS Account   Example Code This article is accompanied by a working code example on GitHub. What is CI/CD? CI/CD stands for Continuous Integration and Continuous Deployment/Delivery. It covers a set of strategies that help developers design and deploy software more effectively. Let\u0026rsquo;s break it down in the context of a team working on a software project.\nContinuous Integration (CI) When modifications are made to a code repository (e.g., 
Git), CI inspects each team member\u0026rsquo;s code to ensure smooth compatibility with the existing codebase. It detects new code changes automatically and initiates a build process that includes code compilation, automated testing, and extensive checks and validations. CI establishes a solid foundation for successful software development.\nContinuous Deployment/Delivery (CD) After the code successfully passes all checks and tests in the CI phase, we can now go ahead with the deployment process. Continuous Deployment/Delivery allows organizations to rapidly and efficiently deploy software. Instead of waiting for lengthy release cycles, developers can deploy small, incremental changes to the software as soon as they are ready. This ensures that new features and bug fixes reach users or testing environments as soon as possible.\nImportance of CI/CD  Faster Building: CI/CD automates the build process, reducing manual effort and enabling faster software updates. Reduced Errors: Automated tests in CI/CD detect issues early, ensuring more stable and reliable software. Faster Feedback: CI/CD provides rapid feedback on code changes, boosting developer efficiency. Improved Team Collaboration: CI/CD fosters better collaboration and communication among team members. Reliable Releases: CD automates deployment, ensuring consistent and error-free software releases.  In this post, we\u0026rsquo;ll cover the following:\n Setting up a Node.js application Create an AWS EC2 Instance Create a Node.js GitHub Actions workflow  Connect to AWS EC2 Instance via SSH Download and Configure Git Action Runner Setting up a Node.js application environment on an AWS EC2 instance    Setting up a Node.js Application Here we\u0026rsquo;ll use our Node.js application with Express.js to display a basic HTML page. This application will be the basis for implementing our CI/CD pipeline.\nTo begin building our application, navigate to a desired location in your terminal or command prompt. 
Copy and paste the following command into the terminal:\nmkdir cicd-app cd cicd-app npm init -y The above command will create a new directory and initialize it with Node.js.\nTo install our application dependencies, paste the following command into the terminal:\nnpm install express jest supertest Where:\n Jest: is used for executing automated tests. Supertest: is used for testing HTTP requests in Jest. Express: is a server framework for our application routing.  Run the following command to create all necessary directories and files for the application:\nmkdir src mkdir src/public touch src/public/index.html src/app.js src/app.test.js src/index.js touch .gitignore In this setup:\n app.js: contains the application\u0026rsquo;s endpoint routes and logic. index.js: serves as the entry point to our Node.js server. app.test.js: includes a sample test for our demo application. index.html: serves as our HTML home page. .gitignore: contains a list of files and directories that we want Git to ignore and not include in the version control.  
We can now open the application in our preferred IDE.\nTo begin creating our Node.js application, copy and paste the following code into the index.js file:\nconst app = require(\u0026#34;./app\u0026#34;); const port = process.env.PORT || 3000; app.listen(port, () =\u0026gt; console.log(`Server listening on port ${port}!`)); The code above listens for incoming requests on the specified application port.\nNext, in the app.js file, paste the following code:\nconst express = require(\u0026#34;express\u0026#34;); const path = require(\u0026#34;path\u0026#34;); const app = express(); app.use(express.static(\u0026#34;public\u0026#34;)); app.get(\u0026#34;/test\u0026#34;, (_req, res) =\u0026gt; { res.status(200).send(\u0026#34;Hello world\u0026#34;); }); app.get(\u0026#34;/\u0026#34;, (req, res) =\u0026gt; { res.sendFile(path.join(__dirname, \u0026#34;public\u0026#34;, \u0026#34;index.html\u0026#34;)); }); module.exports = app; This code defines two endpoints for our application. The /test endpoint returns the plain-text message Hello world, while the root path / serves an HTML file from the public folder.\nFor our application\u0026rsquo;s test, copy and paste the following into the app.test.js file:\nconst app = require(\u0026#34;./app\u0026#34;); const supertest = require(\u0026#34;supertest\u0026#34;); const request = supertest(app); describe(\u0026#34;/test endpoint\u0026#34;, () =\u0026gt; { it(\u0026#34;should return a response\u0026#34;, async () =\u0026gt; { const response = await request.get(\u0026#34;/test\u0026#34;); expect(response.status).toBe(200); expect(response.text).toBe(\u0026#34;Hello world\u0026#34;); }); }); Above, we have created a simple test for our application\u0026rsquo;s /test route. 
Now, we can automate our application\u0026rsquo;s test process, which will allow us to run tests automatically and regularly as we make changes to the codebase.\nNext, copy and paste the following into the index.html file.\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;Demo Application Node Web Server\u0026lt;/title\u0026gt; \u0026lt;style\u0026gt; h1 { text-align: center; margin-right: 5px; } body { color: #bcbcce; background-color: #151617; display: flex; justify-content: center; align-items: center; height: 100vh; margin: 0; } \u0026lt;/style\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;h1\u0026gt;Products Page\u0026lt;/h1\u0026gt; \u0026lt;p\u0026gt;version 1.0\u0026lt;/p\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; This file will serve as our home page. It contains a simple layout with a heading and version information.\nNext, let\u0026rsquo;s make some changes to our package.json file. Modify the scripts value to the following:\n\u0026#34;scripts\u0026#34;: { \u0026#34;start\u0026#34;: \u0026#34;node src\u0026#34;, \u0026#34;test\u0026#34;: \u0026#34;jest src/app.test.js\u0026#34; }, With these changes, we can easily start our application from the terminal using the npm start command, and all our application\u0026rsquo;s available tests can be executed using the npm run test command.\nOur demo application is complete and ready to be pushed to GitHub. Start and test it locally at localhost:3000.\nBefore pushing the application to GitHub, it\u0026rsquo;s important to add the node_modules/ directory to the .gitignore file. Doing so will prevent unnecessary uploads of dependencies.\nTo add node_modules/ to the .gitignore file, simply open the .gitignore file in the root directory and add the following line:\nnode_modules/ Once everything is set up create a new GitHub repository. 
Initialize Git in the application\u0026rsquo;s directory, commit the changes and push the code to the remote repository.\nOur newly created GitHub repository should look something like this:\nWe can now proceed to the next step - creating our AWS EC2 instance.\nCreate an AWS EC2 Instance AWS EC2 is a cloud computing service that allows us to launch and manage virtual machines known as instances in the cloud, hence providing flexible and scalable computing resources. It allows us to pay for only resources used, making it cost-effective for a wide range of applications and workloads.\nFollow these simple steps to create an EC2 instance:\n Sign in to AWS Management Console and go to the EC2 dashboard.  Click on the Launch instance button.\n Set up the instance and configure it to meet the needs of our application. Fill in the following information:  Name: node-cicd-app Application and OS Images (Amazon Machine Image): ubuntu    Instance type: t2.micro (Free tier)  Create Key Pair: cicd-key  Click on the Create key pair button. AWS will generate and download a key pair .pem file to our computer. This key pair includes a public key for the EC2 instance and a private key to be kept locally.\nThis key pair allows a secure SSH connection to our EC2 instance. SSH (Secure Shell) is a communication protocol for remote server access and management. The private key ensures encrypted authentication, enhancing security compared to password-based access, and preventing unauthorized entry.\nRemember to keep the private key safe and not share it with others, as it grants access to the EC2 instance.\n Next, click on the Launch instance button to create our EC2 virtual machine.   Next, we need to configure our security groups. Security groups are essential to control inbound traffic to our EC2 instance. 
Security groups act as virtual firewalls, allowing us to specify which ports and protocols are accessible to our instances from different sources (e.g., specific IP addresses, ranges, or the internet).  To set up a security group for our instance:\n Select the newly created instance for which you want to configure the security group. In the tabs below, click Security. Then click on the Security groups link associated with the instance.  In the Inbound rules tab of the security group, click on Edit inbound rules  Add new security rules by specifying the protocol, port range, and source to allow inbound traffic on the necessary ports. Click on the Save rules button to save the security group.  Above, we set up a Custom TCP security group rule for port 3000, allowing access from anywhere. This restricts inbound traffic to the necessary connections, enhancing application security against unauthorized access and potential threats.\nNext, we\u0026rsquo;ll create a GitHub Actions workflow outlining the CI/CD steps to be executed when changes are pushed to our GitHub repository.\nCreate a Node.js GitHub Actions Workflow A GitHub Actions workflow automatically triggers necessary deployment steps on new code pushes or changes. It executes tasks defined in the workflow configuration. GitHub logs the workflow progress for monitoring.\nIn case of an error or failure, a red mark appears in the logs, indicating an issue. Developers can review the log, fix the problem, and push changes to trigger a new workflow. A green check mark confirms a smooth workflow with successful tests and deployment. This visual feedback system ensures our codebase\u0026rsquo;s health and verifies the application\u0026rsquo;s functionality.\nGitHub offers pre-built workflow actions for common problems. For our article, we\u0026rsquo;ll use the \u0026ldquo;Publish Node.js package\u0026rdquo; template, designed for Node.js projects. 
With this action, we can easily install dependencies, run tests, and deploy our Node.js application with minimal configuration.\nTo set up a workflow for our Node.js application, follow these steps:\n Access the GitHub repository where the Node.js application resides. In the repository, navigate to the Actions tab. Search for node.js action workflow. Click on the Configure button.  This will generate a .github/workflows directory to store all our application\u0026rsquo;s workflows. It will also create a .yml file within this directory where we can define our specific workflow configurations.\nReplace the generated .yml file content with the commands below:\nname: Node.js CI/CD on: push: branches: [ \u0026#34;main\u0026#34; ] jobs: build: runs-on: self-hosted strategy: matrix: node-version: [18.x] # See supported Node.js release schedule at https://nodejs.org/en/about/releases/ steps: - uses: actions/checkout@v3 - name: Use Node.js ${{ matrix.node-version }} uses: actions/setup-node@v3 with: node-version: ${{ matrix.node-version }} cache: \u0026#39;npm\u0026#39; - run: npm ci - run: npm test - run: pm2 restart backendserver In the YAML file above:\n Our workflow is named \u0026ldquo;Node.js CI/CD\u0026rdquo;. It triggers when there is a push event to the main branch. The build job is defined to run on a self-hosted runner. A self-hosted runner is a computing environment that allows us to run GitHub Actions workflows on our own infrastructure instead of using GitHub\u0026rsquo;s shared runners. With a self-hosted runner, we have more control over the environment in which our workflows are executed. The steps section lists individual tasks to be executed in sequence. actions/checkout@v3 fetches the source code of our repository into the runner environment. actions/setup-node@v3 simplifies Node.js setup on the runner environment for our workflow. npm ci installs project dependencies. This command performs a clean installation, ensuring consistency for our CI server. 
npm test runs tests for our application. pm2 restart backendserver restarts our server using the PM2 library, which acts as a production process manager. PM2 ensures our Express application runs as a background service and automatically restarts in case of failures or crashes.  The above workflow performs both Continuous Integration (CI) tasks (clean installation, caching, building, testing) and Continuous Deployment (CD) tasks (restarting the server using PM2).\nNow, click the Commit changes button. This will save the modified YAML file to our repository.\nNext, return to the Actions tab on GitHub. Here, we can monitor the workflow in real time and observe logs as each step is executed on the server.\nHowever, it\u0026rsquo;s important to note that the above workflow job will fail because we haven\u0026rsquo;t connected our AWS EC2 instance to the Git repository.\nTo use our GitHub Actions workflow with an AWS EC2 instance, we must establish a connection between the GitHub repository and the AWS EC2 instance. This connection can be achieved by setting up Git Action Runner on the AWS EC2 instance. This Runner acts as a link between the repository and the instance, enabling direct workflow execution.\nTo resolve the failed workflow, we\u0026rsquo;ll connect to our EC2 instance via SSH, locally download and configure the Git Action Runner, and then set up our Node.js application environment on the EC2 instance.\nConnect to AWS EC2 Instance via SSH To install and configure Git Action Runner on our AWS EC2 instance, we start by establishing a local connection to the EC2 instance using the .pem key we previously downloaded. The .pem key serves as the authentication mechanism for securely accessing the EC2 instance through SSH.\nHere are the steps to connect to an EC2 instance via SSH:\n Open a terminal or command prompt on your local machine. Ensure you are in the correct directory where the .pem file is located. 
Next, head to AWS Management Console and open the newly created instance. Click on the Connect button.   Next, copy and run the chmod command shown in A in the terminal; this restricts the .pem file\u0026rsquo;s permissions, since SSH refuses to use a key that is publicly readable.   Run the command in B to connect to the EC2 instance via SSH.  Once we run this command, the terminal will prompt us to accept the authenticity of the host. Type yes and press Enter to proceed.\nUsing SSH, we are now securely connected to our EC2 instance. This secure connection enables us to remotely manage and communicate with our server.\nDownload and Configure Git Action Runner Git Action Runner acts as a link between our GitHub repository and the EC2 instance. This integration allows direct interaction between the two and enables automated build, test, and deployment processes.\nTo download and configure a Git Action Runner on our EC2 instance:\n Go to the GitHub repository and click on Settings. On the left-hand sidebar, click on Actions, then select Runners. On the Runners page, click on the New self-hosted runner button.  
Here, we will choose the self-hosted runner image for our Ubuntu EC2 instance with the operating system set as Linux and architecture as x64.\nThen, step by step, run the following commands in the local SSH terminal:\nNote: While running the commands, it may prompt some setup questions; we can simply press Enter to skip to the default options.\nAfter running the ./run.sh command, if the agent returns a ✅ Connected to GitHub message, it indicates a successful installation.\nNext, we\u0026rsquo;ll install a service to run our runner agent in the background:\nsudo ./svc.sh install sudo ./svc.sh start The above code will start our runner service in the background, making it ready to execute workflows whenever triggered.\nSetting up a Node.js Application Environment on an AWS EC2 Instance We have successfully integrated our application on GitHub with the EC2 instance server using GitHub Actions Runner.\nTo ensure the smooth execution and operation of our Node.js application on the EC2 machine, we need to install essential libraries and dependencies for our application such as Node.js, npm, and PM2.\nTo install Node.js and npm, run the following command in the local SSH terminal:\nsudo apt update curl -sL https://deb.nodesource.com/setup_lts.x | sudo -E bash - sudo apt-get install -y nodejs To install PM2, run the following command:\nsudo npm install -g pm2 PM2 offers various useful commands for monitoring our application server. 
For more information about PM2, check out their documentation.\nOur application is now fully set up and ready to go!\nTo start our application\u0026rsquo;s server, we need to navigate into the application\u0026rsquo;s folder in the EC2 instance.\nTo do this run the following command on the local SSH terminal\ncd ~ cd /home/ubuntu/actions-runner/_work/cicd-app/cicd-app Once we are inside the application\u0026rsquo;s folder, we can start the server in the background using pm2:\npm2 start src/index.js --name=backendserver Using pm2 to start the server with a specified --name enables our Node.js server to be managed as a background service. This means our server will continue running even after we exit the SSH session. Additionally, pm2 provides continuous monitoring and ensures our application remains active and responsive at all times. This is very handy in production environments where we want our program to be available at all times.\nOur Node.js application is now successfully up and running on the EC2 instance, and our CI/CD workflow has been configured.\nThe application will now be running and listening on the specified port 3000.\nTo ensure that the server is functioning correctly, we can easily check it through a web browser. Simply enter the server\u0026rsquo;s URL or IP address followed by the specified port.\nFor example, if our server\u0026rsquo;s IP address is 34.227.158.102, we would enter 34.227.158.102:3000 in the browser\u0026rsquo;s address bar.\nIf all configurations are correct, we\u0026rsquo;ll be greeted with the Products Page version 1.0 of our demo application.\nFinally, we can proceed to test our CI/CD pipeline process. We will create an event that will act as a trigger to initiate a new workflow.\nTo do this, we will make a simple change to our HTML page. Specifically, we\u0026rsquo;ll update it from version 1 to version 2. 
Once this change has been made, we will push the updated code to the GitHub repository where our CI/CD workflow is defined. As soon as the push event is detected, our CI/CD pipeline will automatically kick off and execute the necessary steps to build, test, and deploy our updated application.\nConclusion By implementing this approach, we can automate our entire CI/CD process, which will improve our development workflow, making it efficient, reliable, and scalable. It enables us to confidently build new features, collaborate with the team, and deploy high-quality applications with more speed and ease.\nYou can refer to all the source code used in the article on GitHub.\n","date":"August 15, 2023","image":"https://reflectoring.io/images/stock/0018-cogs-1200x628-branded_huddc0bdf9d6d0f4fdfef3c3a64a742934_149789_650x0_resize_q90_box.jpg","permalink":"/tutorial-cicd-github-actions-pm2-nodejs-aws-ec2/","title":"CI/CD with Node.js and a GitHub Actions Runner Hosted on AWS EC2"},{"categories":["Node"],"contents":"TypeScript is a superset of JavaScript that adds static typing and other features to the language. Its operators are crucial to understanding the language and writing effective code.\nOperators are symbols or keywords in a programming language that perform operations on values, such as arithmetic operations, string concatenation, and comparisons.\nUnderstanding operators in TypeScript is essential because they are fundamental building blocks of the language and are used in almost every programming aspect. By choosing the right operator for the job, you can often simplify your code and make it easier to understand and maintain. In this article, we will explore the most important operators in TypeScript and provide examples of how they can be used in real-world applications to help you write more efficient and readable code.\n Example Code This article is accompanied by a working code example on GitHub. What Operators Are in TypeScript?
In TypeScript, operators are symbols used to perform operations on variables or values. They can be classified into several categories based on their functions.\nConcatenation Operators Concatenation operators in TypeScript are used to combine strings and values. The most common concatenation operator is the plus sign (+). When used with strings, the plus sign combines them into a single string. When used with a string and a value of another data type, the plus sign converts that value to a string and appends it to the end of the string.\nFor example, let\u0026rsquo;s say we have two strings, \u0026ldquo;Hello\u0026rdquo; and \u0026ldquo;World\u0026rdquo;. We can use the concatenation operator to combine them into a single string:\nlet greeting = \u0026#34;Hello\u0026#34; + \u0026#34;World\u0026#34;; console.log(greeting); Output:\nHelloWorld We can also use the concatenation operator to combine a string and a value:\nlet age = 30; let message = \u0026#34;I am \u0026#34; + age + \u0026#34; years old.\u0026#34;; console.log(message); Output:\nI am 30 years old. In this example, the concatenation operator combines the string \u0026ldquo;I am \u0026quot; with the value of the age variable (30) and the string \u0026quot; years old.\u0026rdquo; to create the final message.\nConcatenation operators are useful in situations where we need to build dynamic strings based on values or user input. Using concatenation operators, we can create custom messages or outputs tailored to our specific needs.\nArithmetic Operators Arithmetic operators allow us to perform mathematical operations such as addition, subtraction, multiplication, and division on numerical values (constants and variables).
Let’s take a look at them:\nlet x = 5; let y = 10; console.log(x + y); // Output: 15 console.log(x - y); // Output: -5 console.log(x * y); // Output: 50 console.log(x / y); // Output: 0.5 console.log(y % x); // Output: 0  let z = 3; z++; console.log(z); // Output: 4  let a = 10; a--; console.log(a); // Output: 9   Addition (+): adds two or more values. The addition operator (+) can also perform string concatenation when used with strings. More information is in the Concatenation Operators section.\n  Subtraction (-): subtracts two or more values.\n  Multiplication (*): multiplies two or more values. Keep in mind that TypeScript has a single number type, which represents every numeric value as a double-precision floating-point number; there are no separate integer and float types. Multiplying two whole numbers therefore yields a whole number, and if either operand has a fractional part, the result retains its decimal precision.\n  Division (/): divides two or more values. Because all numbers are floating-point, division never truncates the result: 5 / 2 evaluates to 2.5, not 2.
Dividing a whole number by a fractional number (or vice versa) likewise yields a result that preserves decimal precision.\n  Modulus (%): returns the remainder of a division operation.\n  Increment (++): increases the value of the variable by one.\n  Decrement (--): decreases the value of the variable by one.\n  If used with non-number variables, with the exception of the + operator, TypeScript\u0026rsquo;s compiler will not allow these operations and will raise a compilation error.\n  Relational Operators Relational operators are used to compare two values and determine their relationship. Let’s take a look at some relational operators commonly used in TypeScript:\nlet x = 10; let y = 5; console.log(x == y); // false console.log(x === \u0026#34;10\u0026#34;); // false (different data types) console.log(x != y); // true console.log(x !== \u0026#34;10\u0026#34;); // true (different data types) console.log(x \u0026gt; y); // true console.log(x \u0026lt; y); // false console.log(x \u0026gt;= y); // true console.log(x \u0026lt;= y); // false Note that when the compiler knows x is a number, comparisons such as x === \u0026#34;10\u0026#34; are rejected at compile time because the types have no overlap; the results shown reflect the underlying JavaScript runtime semantics.\n  Equality Operator (==): This operator compares two values but doesn\u0026rsquo;t consider their data types. If the values are equal, it returns true. Otherwise, it returns false. When comparing non-number variables with the Equality operator (==), it will check their data type. If the non-number variables have different data types, it will attempt to convert them to a common type. For example, if one variable is a string and the other is a number, it will try to convert the string to a number before performing the comparison. The result of the comparison will be true if the converted values are equal, and false otherwise.
It\u0026rsquo;s worth noting that using the Equality operator (==) for non-numerical variables can lead to unexpected behaviour due to type coercion, and it\u0026rsquo;s generally recommended to use the Strict Equality operator (===) for more predictable comparisons.\n  Strict Equality Operator (===): This operator compares two values for equality, and it considers their data types. If the values are equal in value and type, it returns true. Otherwise, it returns false. When comparing non-numerical variables, it performs a strict comparison without type conversion. If the variables have different data types, the comparison returns false. It returns true only when the variables have the same data type and value.\n  Inequality Operator (!=): This operator compares two values for inequality. If the values are not equal, it returns true. Otherwise, it returns false.\n  Strict Inequality Operator (!==): This operator compares two values for inequality, and considers their data types. It returns true if the values are not equal in value or type. Otherwise, it returns false.\n  Greater Than Operator (\u0026gt;): This operator checks if the left operand is greater than the right operand. If it is, it returns true. Otherwise, it returns false. When used with strings, it performs a lexicographical comparison. It checks if the left operand appears after the right operand in lexicographical order. For example, \u0026quot;apple\u0026quot; \u0026gt; \u0026quot;banana\u0026quot; would return false since \u0026ldquo;apple\u0026rdquo; comes before \u0026ldquo;banana\u0026rdquo; in lexicographical order.\n  Less Than Operator (\u0026lt;): This operator checks if the left operand is less than the right operand. If it is, it returns true. Otherwise, it returns false. When used with strings, it checks if the left operand appears before the right operand in lexicographical order. 
For example, \u0026quot;apple\u0026quot; \u0026lt; \u0026quot;banana\u0026quot; would return true since \u0026ldquo;apple\u0026rdquo; comes before \u0026ldquo;banana\u0026rdquo; in lexicographical order.\n  Greater Than or Equal To Operator (\u0026gt;=): This operator checks if the left operand is greater than or equal to the right operand. If it is true, it returns true. Otherwise, it returns false. When used with strings, it checks if the left operand is greater than or equal to the right operand. For example, \u0026quot;apple\u0026quot; \u0026gt;= \u0026quot;banana\u0026quot; would return false since \u0026ldquo;apple\u0026rdquo; is not greater than or equal to \u0026ldquo;banana\u0026rdquo;.\n  Less Than or Equal To Operator (\u0026lt;=): This operator checks if the left operand is less than or equal to the right operand. If it is true, it returns true. Otherwise, it returns false. When used with strings, it checks if the left operand is smaller than or equal to the right operand. For example, \u0026quot;apple\u0026quot; \u0026lt;= \u0026quot;banana\u0026quot; would return true since \u0026ldquo;apple\u0026rdquo; is less than \u0026ldquo;banana\u0026rdquo;.\n  Difference Between Equality and Strict Equality Operator When it comes to comparing values, it is essential to understand the difference between the equality (==) and strict equality (===) operators. The equality operator only compares the values of the operands, while the strict equality operator compares both the values and types of the operands. 
Let’s take a look at this code:\nlet x = 10; let y = 5; let a = \u0026#34;apple\u0026#34;; let b = \u0026#34;banana\u0026#34;; console.log(x == y); // false console.log(x == 10); // true console.log(x === y); // false (same type but different value) console.log(x === \u0026#34;10\u0026#34;); // false (different data types) console.log(x === 10); // true (same data types and value) console.log(a == b); // false console.log(a == \u0026#34;apple\u0026#34;); // true console.log(a === b); // false (same type but different value) console.log(a === \u0026#34;apple\u0026#34;); // true (same data types and value) console.log(a === 10); // false (different data types and value)   In the given example, the expression x == y returns false because the values of x and y are not equal. The expression x == 10 returns true because x is equal to 10. The expression x === y returns false because x and y are of the same type but of different values. The expression x === '10' returns false because x is a number and '10' is a string (i.e. they are of different types). Finally, the expression x === 10 returns true because both x and 10 are of the same data type (number) and have the same value.  Logical Operators Logical operators in TypeScript allow you to perform logical operations on boolean values. Let\u0026rsquo;s take a look at some Logical Operators:\nlet x = 5; let y = 10; let z = 15; console.log(x \u0026lt; y \u0026amp;\u0026amp; y \u0026lt; z); // true console.log(x \u0026gt; y || y \u0026lt; z); // true console.log(!(x \u0026gt; y)); // true AND (\u0026amp;\u0026amp;) Operator: The AND \u0026amp;\u0026amp; operator returns true if both operands are true.\nOR (||) Operator: The OR || operator returns true if at least one of the operands is true.\nNOT (!) Operator: The NOT ! operator negates a boolean value, converting true to false and vice versa.\nBitwise Operators Bitwise operators in TypeScript operate on binary representations of numbers.
They are useful for performing operations at the binary level, manipulating individual bits, working with flags, or optimizing certain algorithms that require low-level bit manipulations. Understanding their behaviour and how to use them correctly can be valuable when working with binary data or implementing certain advanced programming techniques.\nSome of the common bitwise operators include AND (\u0026amp;), OR (|), XOR (^), left shift (\u0026lt;\u0026lt;), right shift (\u0026gt;\u0026gt;), and complement (~). These operators are particularly useful when working with flags, binary data, or performance optimizations. Let’s take a look at them (console.log prints the decimal value; the binary forms are shown for illustration):\nlet a = 5; // 0101 in binary let b = 3; // 0011 in binary  console.log(a \u0026amp; b); // 1 (0001 in binary, AND) console.log(a | b); // 7 (0111 in binary, OR) console.log(a ^ b); // 6 (0110 in binary, XOR) console.log(a \u0026lt;\u0026lt; 1); // 10 (1010 in binary, Left Shift) console.log(a \u0026gt;\u0026gt; 1); // 2 (0010 in binary, Right Shift) console.log(~a); // -6 (Complement) AND (\u0026amp;): The AND operator, represented by the symbol \u0026amp;, performs a logical AND operation between corresponding bits of two numbers. The result is a new number where each bit is set to 1 only if both bits in the same position of the operands are 1.\nOR (|): The OR operator, represented by the symbol |, performs a logical OR operation between corresponding bits of two numbers. The result is a new number where each bit is set to 1 if at least one of the bits in the same position of the operands is 1.\nXOR (^): The XOR operator, represented by the symbol ^, performs a logical exclusive OR operation between corresponding bits of two numbers. The result is a new number where each bit is set to 1 only if one of the bits in the same position of the operands is 1, but not both.\nLeft Shift (\u0026lt;\u0026lt;): The left shift operator, represented by the symbol \u0026lt;\u0026lt;, shifts the binary bits of a number to the left by a specified number of positions.
The leftmost bits are discarded, and new zeros are added on the right side. Each shift to the left doubles the value of the number.\nRight Shift (\u0026gt;\u0026gt;): The right shift operator, represented by the symbol \u0026gt;\u0026gt;, shifts the binary bits of a number to the right by a specified number of positions. The rightmost bits are discarded, and new zeros are added on the left side. For positive numbers, each shift to the right halves the value of the number, discarding any remainder.\nComplement (~): The complement operator, represented by the symbol ~, performs a bitwise NOT operation on a number, flipping all its bits. This operator effectively changes each 0 to 1 and each 1 to 0. Because numbers are stored in two\u0026rsquo;s complement representation, ~n is equal to -(n + 1); for example, ~5 evaluates to -6.\nConclusion By mastering the usage of these operators and applying best practices, you can enhance your TypeScript programming skills and develop more effective solutions. Whether you\u0026rsquo;re working on small-scale projects or large-scale applications, a solid understanding of operators will contribute to your success in writing maintainable and performant code.\nYou can find all the code used in this article on GitHub.\n","date":"August 1, 2023","image":"https://reflectoring.io/images/stock/0065-java-1200x628-branded_hu49f406cdc895c98f15314e0c34cfd114_116403_650x0_resize_q90_box.jpg","permalink":"/typescript-operators/","title":"Operators in TypeScript"},{"categories":["Software Craft"],"contents":"\u0026ldquo;Spaghetti code\u0026rdquo; has long been an issue in the field of software development. Many developers have discovered the difficulty of deciphering complex tangles of code, which leads to increased delays and frustration for everyone involved.\nThankfully, there is a potential solution to this programming pitfall that can ensure project success: writing clean code. This approach doesn’t just involve producing code that machines can understand, but also creating an easily understandable, modifiable, and maintainable codebase for human collaborators as well.
Clean coding demands consistent style choices and purposeful naming practices, as well as an emphasis on simplicity.\nWriting clean code goes beyond mere good practice. It is an integral component of software development that should reflect its collaborative nature. With that in mind, now is the time for us all to transform how we write code, moving from convoluted and unclear structures towards clean ones that are easily manageable by any teammate or maintainer.\nThe Importance of Writing Clean Code Developers are the backbone of the software industry, responsible for the code that powers much of our modern lives. Writing clean code is essential to that work and pivotal in developing usable software. Clean code refers to using code as a form of communication, not just between the programmer and computer, but among collaborating developers as well. Writing clean code improves the readability of codebases and allows developers to readily understand and modify them.\nThis is why software companies always need to dedicate a good portion of their budget to cleaning up code (or creating clean code in the first place), whether developers are hired as employees or on a contractual basis. On average, companies can expect to pay a backend developer between $60 and $100 hourly. With today\u0026rsquo;s fast-paced software development cycles, hiring developers to maintain and update a codebase continuously is vital to a project’s long-term success.\nCode Should Always Be Readable An effective yet straightforward strategy for making code easy to read is maintaining an intuitive structure and organization from the start. Grouping similar functions or classes enables developers to easily see where any potential flow problems may exist within their codebase.\nAn effective piece of code should provide an engaging narrative, from initialization through output. Furthermore, using whitespace correctly can dramatically enhance readability.
In contrast, dense blocks of code may be difficult for readers to navigate, but using appropriate line breaks and indentation can help direct their focus and highlight its overall structure.\nFurthermore, following principles such as \u0026ldquo;one idea per line\u0026rdquo; and \u0026ldquo;one action per statement\u0026rdquo; will make code much simpler to comprehend.\nCoding Consistency Is Essential Consistency in coding is all about maintaining a uniform style, structure, and formatting across all parts of your codebase. This aspect of software development is significant because it enhances readability, enabling developers to understand and adapt to the code more efficiently.\nFor example, consistently using camelCase for variable names in JavaScript, or adhering to specific indentation rules in Python, are some of the ways to achieve coding consistency. Similarly, in object-oriented programming, consistently placing the \u0026lsquo;public\u0026rsquo; methods before the \u0026lsquo;private\u0026rsquo; ones can also contribute to this consistency.\nConsistency in coding improves readability and allows developers to better understand and adapt to your workflow. If coding styles fluctuate across codebases, readers must constantly adjust, which can be mentally draining.\nConversely, clean, consistent code allows readers to make assumptions based on earlier code they have encountered, making it easier for them to quickly comprehend newer sections of code. This is especially valuable in larger projects or open-source initiatives with contributors from varying backgrounds and levels of expertise.\nConsistency of coding style serves as a unifying language, helping reduce differences among individual programmers' habits and preferences.
Therefore, consistency requires not just individual developers to write clean code but also team-wide adherence to agreed-upon coding standards and practices.\nAlways Select Meaningful Names Names that communicate their intended use in code are key components of clean programming. By reading a name alone, a developer should be able to readily understand what the function or variable does or represents in their application.\nRather than using names like x or y, try using names like index or length; similarly, functions with specific names like ‘calculate_average’ or ‘print_report’ are preferable over vague ones such as ‘do_stuff’. Using appropriate names can eliminate the need for lengthy explanation comments while making your code self-documenting.\nMaintaining Simplicity for Programmers Coders should strive for simplicity when creating their code and architecting software systems. A complex architecture with too many interdependent parts may make committing changes difficult and testing costly. Instead, creating a modular framework where components interact predictably is preferred.\nModular design enables components to be developed, tested, and debugged separately, improving maintainability while making software more adaptable to changes. Furthermore, simpler architectures tend to be better at accommodating changes overall.\nAs requirements evolve, adding, deleting, or altering features within an easily managed and intuitively structured system becomes simpler. Thus, simplicity not only matters on an individual function-by-function level but is equally essential at the system architecture level.\nApply Comments Strategically Comments play an essential role in enhancing the understandability of your code, especially for the more intricate or subtle sections. 
When used judiciously, comments provide additional context, elucidate non-obvious logic, or indicate implications tied to specific sections of code.\nHowever, it is vital to avoid superfluous or repetitive comments that merely echo what the code is already clearly demonstrating. Such comments can potentially clutter an otherwise well-organized codebase.\nFor instance, comments like \u0026ldquo;incrementing the counter\u0026rdquo; do not offer any significant insight into the code\u0026rsquo;s functionality. They simply generate noise, distracting from the overall readability of the code.\nTo mitigate this, developers should leverage the concept of \u0026ldquo;self-documenting code\u0026rdquo;. This approach involves naming variables, methods, and functions in a descriptive manner that makes their purpose apparent. When done correctly, self-documenting code minimizes the need for explicit comments because the code speaks for itself. For example, instead of relying on a comment to explain what a variable holds, a properly named variable like \u0026ldquo;totalEmployees\u0026rdquo; provides an immediate understanding of its use, thereby making the codebase more efficient and readable.\nError Handling and Testing Solutions Proper error handling is an integral part of clean coding that should not be underestimated or glossed over with generic messages. Errors should be meticulously logged and managed to provide informative log entries that facilitate the diagnosis of potential issues. It\u0026rsquo;s important to maintain error messages as static as possible, so that, in case of an error, searching for the message (or error code) in the logs leads directly to the responsible code. This process becomes less straightforward if error messages are dynamically concatenated at runtime.\nThe inclusion of automated tests is a hallmark of ideal codebases. 
These tests verify that the code functions as intended and prevent unwanted changes or regressions when modifications are made. In fact, well-crafted tests can serve as excellent examples of how the code works, somewhat echoing the role of comments mentioned earlier.\nJust as properly named variables and methods can render some comments unnecessary through self-documenting code, well-structured tests can effectively illustrate the expected behavior of the code. By clearly demonstrating what the output should be for given inputs, they can reduce the need for additional explanatory comments. Therefore, in a sense, well-written tests can also contribute to making the code \u0026ldquo;self-documenting\u0026rdquo;.\nRefactoring and Code Reviews Coding requires continuous learning and improvement, with code reviews and refactoring serving as opportunities for both.\nThrough refactoring, developers learn to detect code smells (patterns that indicate potential errors) and improve their ability to write clean code from the outset. Code reviews facilitate collaborative learning environments between developers; they allow them to draw upon each other\u0026rsquo;s strengths, spot any mistakes made during implementation, and collectively enhance the quality of the core codebase.\nHowever, we must approach these processes with an attitude of growth in mind. Code reviews shouldn\u0026rsquo;t serve as platforms for criticism but should provide constructive feedback instead. Similarly, refactoring shouldn\u0026rsquo;t be seen as admitting past errors but as part of an iterative software development process.\nRefactoring and constructive code reviews allow teams to maintain clean codebases as their knowledge expands and requirements change over time, growing and adapting alongside them.\nConclusion Writing clean code is more than an admirable skill. It’s fundamental for sustainable software development.\nClean code emphasizes readability and consistency for everyone involved.
Meaningful naming conventions, simple implementation methods, effective use of comments, robust error handling capabilities, and regular refactoring each contribute to creating a codebase that\u0026rsquo;s easier to maintain and reuse, both for current developers and for those inheriting it in the future.\nWhile it may require more effort and time initially, the long-term benefits of maintainability, efficiency, and scalability significantly outweigh these costs. By carefully following these principles, developers and teams can create better software while creating a more collaborative and productive working environment.\n","date":"July 2, 2023","image":"https://reflectoring.io/images/stock/0131-tetris-1200x628-branded_hu7ebcfba89977913066c0e0a1cad91228_251546_650x0_resize_q90_box.jpg","permalink":"/clean-code/","title":"The Art of Writing Clean Code: A Key to Maintainable Software"},{"categories":["Java"],"contents":"ArchUnit is a Java library to validate your software architecture. The library is well described in its documentation, and as its fluent API is pure Java, it\u0026rsquo;s easy to explore using the code completion in the IDE.\nIn this article, we won\u0026rsquo;t repeat the user guide, but we\u0026rsquo;ll look at what we can achieve with ArchUnit and discuss reasons why that can be useful. We\u0026rsquo;ll also look at some usages which are not directly related to the architecture of our codebase, but are useful to prevent common errors (for example, how to prevent calling a certain constructor of a class).\nThere\u0026rsquo;s a dedicated article that explains how ArchUnit can be used in combination with Spring Boot: Clean Architecture Boundaries with Spring Boot and ArchUnit.\n Example Code This article is accompanied by a working code example on GitHub. The code examples and the code in the repository use Maven as a build tool and JUnit as the testing framework.
The only exception is the code examples for using ArchUnit with Scala.\nWhy Is Testing Your Architecture Important? The architecture of software changes over time and that\u0026rsquo;s a perfectly valid process. Therefore, architecture tests will also change. So why validate it at all? The most obvious reason is to prevent unintended changes. Using an IDE, that can happen too easily: you start typing the name of a class and the import is added automatically. Of course, it\u0026rsquo;s fine to change an ArchUnit test when it fails. Doing that forces us to think thoroughly about the change we make.\nMany of us developers have been in situations where the rationale behind the software architecture was not obvious. If we create a test with a descriptive name, we create a nice piece of documentation for the future.\nThere are more reasons why we want to validate our architecture.\n A good architecture ensures separation of concerns, which simplifies code changes and unit testing. Fewer dependencies in the codebase make refactoring and splitting up the codebase easier. Respecting naming conventions makes the code easier to read and understand. A clean architecture can facilitate secure code.  Example: Data Encapsulation Let\u0026rsquo;s discuss in more detail how clean architecture can improve security. Here\u0026rsquo;s a practical example of how data encapsulation can prevent data exposure and how validating dependencies can help us.\nHere\u0026rsquo;s a simple REST API that returns employee data:\npublic record Employee(long id, String name, boolean active) { } public class EmployeeController { @GET() @Path(\u0026#34;/employees\u0026#34;) public Employee getEmployee() { EmployeeService service = new EmployeeService(); return service.getEmployee(); } } Easy enough. However, let\u0026rsquo;s say, at a later point in time, we add one more attribute to our employee entity:\npublic record Employee(long id, String name, boolean active, int salary) { } What will happen?
As our API operates directly on the employee class, we\u0026rsquo;ll expose the newly added attribute in the API. That could be the desired behavior in some situations; however, we might also expose new attributes involuntarily. The salary of an employee might be confidential, and by adding it to the record, we expose that information. Therefore, it\u0026rsquo;s usually better to have separate classes for internal use and the API:\npublic record EmployeeResponse(long id, String name, boolean active) { } with a mapping in the service class:\npublic class EmployeeService { public EmployeeResponse getEmployee() { EmployeeDao employeeDao = new EmployeeDao(); Employee employee = employeeDao.findEmployee(); return new EmployeeResponse( employee.id(), employee.name(), employee.active() ); } } which we then use in the controller:\npublic class EmployeeController { @GET() @Path(\u0026#34;/employees\u0026#34;) public EmployeeResponse getEmployee() { EmployeeService service = new EmployeeService(); return service.getEmployee(); } } The following image visualizes the difference between the two approaches:\n(1) Shows the architecture without and (2) with a service layer. To keep the architecture clean, the API layer should only access the service layer, and the service layer should only access the domain layer. We should avoid direct access from the API to the domain layer.\nBasic ArchUnit Example Let\u0026rsquo;s look at how we can use ArchUnit to create a test for the above example. For that, we\u0026rsquo;ll create a project with the following structure:\nOur goal is to implement a test that verifies that the API layer does not access the domain layer.
First, we add the ArchUnit dependency to our project:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.tngtech.archunit\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;archunit-junit5\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0.1\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.junit.jupiter\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;junit-jupiter-engine\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.8.1\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; Then we create a unit test with an ArchUnit rule that implements the dependency check:\n@Test public void myLayerAccessTest() { JavaClasses importedClasses = new ClassFileImporter() .importPackages(\u0026#34;io.reflectoring.archunit.api\u0026#34;); ArchRule apiRule= noClasses() .should() .accessClassesThat() .resideInAPackage(\u0026#34;io.reflectoring.archunit.persistence\u0026#34;); apiRule.check(importedClasses); } This test imports all classes in the io.reflectoring.archunit.api package and verifies that there\u0026rsquo;s no dependency on any class in the persistence package. Let\u0026rsquo;s see what happens if we introduce such a dependency:\n@GET() @Path(\u0026#34;/employees\u0026#34;) public EmployeeResponse getEmployee() { EmployeeDao dao = new EmployeeDao(); Employee employee = dao.findEmployee(); return new EmployeeResponse( employee.id(), employee.name(), employee.active() ); } With this code, we access the persistence layer directly in the controller class. 
As a result, the test will fail with an assertion error, informing us about the access violation:\njava.lang.AssertionError: Architecture Violation [Priority: MEDIUM] - Rule \u0026#39;no classes should access classes that reside in a package \u0026#39;io.reflectoring.archunit.persistence\u0026#39;\u0026#39; was violated (2 times): Method \u0026lt;io.reflectoring.archunit.api.EmployeeController.getEmployee()\u0026gt; calls constructor \u0026lt;io.reflectoring.archunit.persistence.EmployeeDao.\u0026lt;init\u0026gt;()\u0026gt; in (EmployeeController.java:15) Method \u0026lt;io.reflectoring.archunit.api.EmployeeController.getEmployee()\u0026gt; calls method \u0026lt;io.reflectoring.archunit.persistence.EmployeeDao.findEmployee()\u0026gt; in (EmployeeController.java:16) The example shows how easy it is to use ArchUnit in a Java project. Before we look at more examples, let\u0026rsquo;s discuss why architecture violations are typically introduced in projects over time.\nReasons for Architecture Erosion Over Time There are many reasons why developers start to deviate from the initial design choices, coding best practices, or testing practices. One of the most common reasons is probably time pressure. As this is rather straightforward, we\u0026rsquo;ll look at some other reasons in more detail.\nArchitecture Awareness At the start of a software project, we usually take certain design choices and organize the code in methods, classes, packages, modules, and layers. Each of these has its specific purpose and a clear boundary. The data access layer, for example, should have the sole responsibility to retrieve persisted data. It should not provide an API endpoint or map data to an external format like JSON or XML.\nWe also make choices on certain implementation details like inheritance (for example, every DAO class should implement an interface), or how to handle date and time in the code.\nLet\u0026rsquo;s look at a simple example.
Instead of:\nLocalDateTime localDate = LocalDateTime.now(); we might want to use:\nLocalDateTime localDate = LocalDateTime.now(clock); When we decide to use the latter way of instantiating our object, we\u0026rsquo;ll probably remember the reason for a while. However, after some time we might forget. Also, other developers who join the project might unintentionally deviate from the original choice.\nWith ArchUnit, we can add a test that will fail if the static factory method now is used without the parameter:\n@Test public void instantiateLocalDateTimeWithClock() { JavaClasses importedClasses = new ClassFileImporter() .importPackages(\u0026#34;io.reflectoring.archunit\u0026#34;); ArchRule rule = noClasses().should() .callMethod(LocalDateTime.class, \u0026#34;now\u0026#34;); rule.check(importedClasses); } Such a test reminds the developer to use the parameter and remain consistent within the codebase. It also documents the reason to use (or not to use) a specific method.\nThis is a good example of how we can use ArchUnit to document the intended architecture in the form of unit tests. Of course, we can change things when we see the need for it. There might be good reasons to deviate from a certain pattern. However, using tests as documentation will remind us to think about why we want to deviate.\nArchUnit Examples Most examples of how to use ArchUnit describe checks on dependencies between classes and packages. That\u0026rsquo;s, however, not the only use case.
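As a side note on the clock-based instantiation above: the reason it is worth enforcing is that a Clock can be fixed in tests, which makes time-dependent code deterministic. A minimal, JDK-only sketch (the class name and the time value are illustrative, not taken from the article's codebase):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class ClockDemo {

    // Production code receives the Clock as a parameter instead of
    // calling the parameterless LocalDateTime.now()
    static LocalDateTime currentTime(Clock clock) {
        return LocalDateTime.now(clock);
    }

    public static void main(String[] args) {
        // In a test, a fixed clock makes "now" deterministic
        Clock fixed = Clock.fixed(Instant.parse("2023-06-25T10:15:30Z"), ZoneOffset.UTC);
        System.out.println(currentTime(fixed)); // always 2023-06-25T10:15:30
    }
}
```

In production code, the Clock would typically be Clock.systemDefaultZone(), wired in through dependency injection.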
Let\u0026rsquo;s have a look at three other types of checks that we can create.\nDeprecated Code @Test public void doNotCallDeprecatedMethodsFromTheProject() { JavaClasses importedClasses = new ClassFileImporter() .importPackages(\u0026#34;io.reflectoring.archunit\u0026#34;); ArchRule rule = noClasses().should() .dependOnClassesThat() .areAnnotatedWith(Deprecated.class); rule.check(importedClasses); } public void referenceDeprecatedClass() { Dep dep = new Dep(); } @Deprecated public class Dep { } With this test, we can check if we still depend on any deprecated classes. This check can be very useful in refactoring projects where we want to upgrade the version of libraries.\nBigDecimal Another nice use case is to prevent the use of a specific constructor of a class. Why would we want to do this in a test? IDEs usually show us a warning (including an explanation), and code quality tools such as SonarQube can be configured to detect these cases as well.\nWith ArchUnit, however, we can achieve this in a unit test, which makes our intention to exclude a certain constructor clear. It reminds us that we really do not want to use a specific method or constructor call, and we\u0026rsquo;ll get a failed test instead of only a warning.
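To make the motivation concrete before we look at the rule: the BigDecimal constructor that takes a double captures the binary floating-point value exactly, which is usually not the decimal value we wrote down. A small JDK-only sketch:

```java
import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // The double constructor preserves the binary representation of 0.1,
        // which is not exactly the decimal 0.1
        BigDecimal fromDouble = new BigDecimal(0.1);

        // The String constructor (or BigDecimal.valueOf) yields the decimal we expect
        BigDecimal fromString = new BigDecimal("0.1");

        System.out.println(fromDouble); // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(fromString); // 0.1
        System.out.println(fromDouble.equals(fromString)); // false
    }
}
```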
Another benefit is that, with ArchUnit, we can introduce custom checks.\nAs an example, let\u0026rsquo;s see how we can prevent calling one of the constructors of the BigDecimal class:\n@Test public void doNotCallConstructor() { JavaClasses importedClasses = new ClassFileImporter() .importPackages(\u0026#34;io.reflectoring.archunit\u0026#34;); ArchRule rule = noClasses().should() .callConstructor(BigDecimal.class, double.class); rule.check(importedClasses); } This test will fail if we call the BigDecimal constructor that accepts a double value as a parameter:\npublic void thisMethodCallsTheWrongBigDecimalConstructor() { BigDecimal value = new BigDecimal(123.0); } The test will pass if we use the constructor that accepts a string instead:\nBigDecimal value = new BigDecimal(\u0026#34;123.0\u0026#34;); Validating Unit Tests Another interesting use case for ArchUnit is to test the structure of unit tests themselves. Let\u0026rsquo;s look at the following two tests:\n@Test public void aTestWithAnAssertion() { String expected = \u0026#34;chocolate\u0026#34;; String actual = \u0026#34;chocolate\u0026#34;; assertEquals(expected, actual); } @Test public void aTestWithoutAnAssertion() { String expected = \u0026#34;chocolate\u0026#34;; String actual = \u0026#34;chocolate\u0026#34;; expected.equals(actual); } The first test contains an assertion, while the second doesn\u0026rsquo;t. Obviously, such a test isn\u0026rsquo;t useful at all.
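To see why the second test is useless: the boolean returned by equals is simply discarded, so the method can never fail, no matter what the values are. A plain-Java sketch of the same effect (the values are illustrative):

```java
public class MissingAssertionDemo {

    // Mimics aTestWithoutAnAssertion: the comparison result is computed and thrown away
    static boolean runWithoutAssertion() {
        String expected = "chocolate";
        String actual = "vanilla";
        expected.equals(actual); // returns false, but nobody checks the result
        return true;             // the method still completes "successfully"
    }

    public static void main(String[] args) {
        System.out.println(runWithoutAssertion()); // true, despite the mismatch
    }
}
```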
With ArchUnit, we can go ahead and create the following rule:\npublic ArchCondition\u0026lt;JavaMethod\u0026gt; callAnAssertion = new ArchCondition\u0026lt;\u0026gt;(\u0026#34;a unit test should assert something\u0026#34;) { @Override public void check(JavaMethod item, ConditionEvents events) { for (JavaMethodCall call : item.getMethodCallsFromSelf()) { if((call.getTargetOwner().getPackageName().equals( org.junit.jupiter.api.Assertions.class.getPackageName()) \u0026amp;\u0026amp; call.getTargetOwner().getName().equals( org.junit.jupiter.api.Assertions.class.getName())) || (call.getTargetOwner().getName().equals( com.tngtech.archunit.lang.ArchRule.class.getName()) \u0026amp;\u0026amp; call.getName().equals(\u0026#34;check\u0026#34;)) ) { return; } } events.add(SimpleConditionEvent.violated( item, item.getDescription() + \u0026#34; does not assert anything.\u0026#34;) ); } }; @ArchTest public void testMethodsShouldAssertSomething(JavaClasses classes) { ArchRule testMethodRule = methods().that().areAnnotatedWith(Test.class) .should(callAnAssertion); testMethodRule.check(classes); } With this test, we make sure that all our unit tests have at least one assertion.\nSharing Tests Between Projects ArchUnit tests are a good example of unit tests that can be shared between projects. We usually write unit tests to test classes and methods within the same codebase. For example, we would have the following test in the library that implements the ArrayList class:\n@Test public void testArrayList() { List list = new ArrayList(); list.add(\u0026#34;My item\u0026#34;); assertEquals(1, list.size()); } We would not have this test in a project that only uses the library, and therefore we do not need to share it with other projects.\nArchUnit tests on the other hand test the structure and architecture of a project.
A rule like\nArchRule interfaceName = classes().that().areInterfaces() .should().haveNameMatching(\u0026#34;I.*\u0026#34;); is a generic rule that can be reused in many projects. This approach is useful to maintain consistency between projects within an organization. Especially with the shift from monolithic applications to microservices, sharing ArchUnit tests can be very useful.\nAs ArchUnit tests are pure Java, we can use any approach of sharing tests between projects. Let\u0026rsquo;s briefly look at two ways of doing that.\nSharing as a Maven Dependency One way of making tests available to another project is to bundle them in a dedicated project and add it as a dependency to the project where we want to reuse the tests. As an example, let\u0026rsquo;s create a class with one ArchUnit test:\npublic class ArchUnitCommonTest { @ArchTest public static final ArchRule bigDecimalRule = noClasses() .should() .callConstructor(BigDecimal.class, double.class); } which we add under the main Java root folder src/main/java/com/example (make sure not to add it under /src/test/java).\nWe can define the name of our dependency in the pom file:\n\u0026lt;groupId\u0026gt;org.example\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;BundledArchitectureTests\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0-SNAPSHOT\u0026lt;/version\u0026gt; And include the tests in any other project:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.example\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;BundledArchitectureTests\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; Now, we can use the tests in the following way:\n@AnalyzeClasses(packages = \u0026#34;com.example\u0026#34;) public class CommonTests { @ArchTest static final ArchTests commonRules = ArchTests.in(ArchUnitCommonTest.class); } The @ArchTest annotation on the rule will run the test on
all classes included by @AnalyzeClasses.\nArchUnit Maven Plugin There\u0026rsquo;s a nice ArchUnit Maven plugin that can run ArchUnit tests included via a dependency directly on our project. The advantage over the approach above is that we do not need to add any unit tests explicitly; it\u0026rsquo;s all handled in the Maven pom file.\nThe plugin also comes with bundled tests that can be reused and are a good inspiration to create your own tests.\nFor example, we can include\n\u0026lt;rule\u0026gt;com.societegenerale.commons.plugin.rules.NoJavaUtilDateRuleTest\u0026lt;/rule\u0026gt; to make sure we do not use the java.util.Date class in our project.\nIntroducing ArchUnit to an Existing Project ArchUnit tests can easily be added to an existing codebase. We only need to add the dependency to our project and start to write the tests. If we do so, we not only ensure that future code complies with our architectural design, but we can check if the existing code does so, too! Introducing ArchUnit to an existing project can help you to really understand the architecture and find flaws in the current design.\nWhile adding tests to your projects, you might encounter many violations which you want to fix later. ArchUnit provides a nice feature for this case: FreezeRules.\nFrozen rules will be reported as passed, but the violations will be stored in a violation store. Every time the test is run, the store is updated. The resulting text file can be used to monitor the progress of passing tests.
We can also implement a custom violation store, for example to save the result in a database (by implementing com.tngtech.archunit.library.freeze.ViolationStore).\nLet\u0026rsquo;s look at an example:\n@Test public void freezingRules() { JavaClasses importedClasses = new ClassFileImporter() .importPackages(\u0026#34;io.reflectoring.archunit\u0026#34;); ArchRule rule = methods().that() .areAnnotatedWith(Test.class) .should().haveFullNameNotMatching(\u0026#34;.*\\\\d+.*\u0026#34;); FreezingArchRule.freeze(rule).check(importedClasses); } This test will report a failure only once and persist the result (the default file for that is archunit_store). For this to work, we need to set the property freeze.store.default.allowStoreCreation=true in a property file called archunit.properties.\nSuccessive runs will only report new failures.\nHere\u0026rsquo;s an example of how rule violations are stored:\nMethod \u0026lt;io.reflectoring.archunit.ArchUnitTest.someArchitectureRule2()\u0026gt; has full name matching \u0026#39;.*\\d+.*\u0026#39; in (ArchUnitTest.java:28) Method \u0026lt;io.reflectoring.archunit.ArchUnitTest.violatedRule1()\u0026gt; has full name matching \u0026#39;.*\\d+.*\u0026#39; in (ArchUnitTest.java:50) If we want to fix already reported violations, we can remove the file or remove FreezingArchRule from our test.\nArchUnit and Other JVM Languages As we\u0026rsquo;ve already seen, ArchUnit analyzes bytecode. That means we can - in principle - use ArchUnit for any JVM language like Kotlin, Scala, or Groovy. However, it\u0026rsquo;s not always possible to easily test for language-specific features of languages other than Java. If we want to write tests for a particular JVM language, it comes in handy to know how language features are compiled to bytecode.
Let\u0026rsquo;s look at some examples of using ArchUnit with Scala.\nThe following code snippet shows a simple test, which passes when run:\nclass ArchUnitTest { @Test def verifyTheAccessModifierOfMethods(): Unit = { val importedClasses = new ClassFileImporter() .importPackages(\u0026#34;io.reflectoring\u0026#34;) val rule : ArchRule = methods.should.haveModifier(JavaModifier.PUBLIC) rule.check(importedClasses) } } The following test, however, will fail. That\u0026rsquo;s because a trait is compiled to a public, abstract class:\n@Deprecated private trait myTrait { } class ArchUnitTest { @Test def verifyTheAccessModifierOfTraits(): Unit = { val importedClasses = new ClassFileImporter() .importPackages(\u0026#34;io.reflectoring\u0026#34;) val rule : ArchRule = classes().that() .areAnnotatedWith(classOf[Deprecated]) .should.haveModifier(JavaModifier.PRIVATE) rule.check(importedClasses) } } Another example of a Scala-specific language feature that cannot be tested out of the box is Scala objects and companion objects.\nThis works:\nval rule: ArchRule = classes().that() .areInterfaces() .should.haveNameMatching(\u0026#34;I.*\u0026#34;) However, the following methods are not provided by ArchUnit:\nval objectsRule: ArchRule = classes().that() .areObjects().should.haveNameMatching(\u0026#34;I.*\u0026#34;) val companionRule: ArchRule = classes().that() .areCompanionObjects().should.haveNameMatching(\u0026#34;CO.*\u0026#34;) Despite these limitations, ArchUnit can be used with other JVM languages, and if we are aware of some pitfalls, it is a good choice for all JVM languages.
This is the benefit of analyzing the bytecode.\nLimitations As ArchUnit analyzes the generated bytecode, we cannot write tests for language features that are not reflected in the bytecode. The first two of the following rules work, while the third one does not even compile, because there is no class literal for a parameterized type like List\u0026lt;String\u0026gt;:\nArchRule listParameterTypeRule = methods().should() .haveRawParameterTypes(List.class); ArchRule listReturnTypeRule = methods().should() .haveRawReturnType(List.class); ArchRule stringListReturnTypeRule = methods().should() .haveRawReturnType(List\u0026lt;String\u0026gt;.class); Caching ArchUnit analyzes all classes that are imported by the ClassFileImporter. The scanning of all classes can take quite some time (especially for larger projects) and is repeated for every test when we explicitly include the import in every test:\nJavaClasses importedClasses = new ClassFileImporter().importPackages(\u0026#34;io.reflectoring.archunit\u0026#34;); If we import classes using @AnalyzeClasses and annotate our tests with @ArchTest instead of @Test:\n@AnalyzeClasses(packages = \u0026#34;io.reflectoring.archunit\u0026#34;) public class ArchUnitCachedTest { @ArchTest public void doNotCallDeprecatedMethodsFromTheProject(JavaClasses classes) { ArchRule rule = noClasses().should() .dependOnClassesThat().areAnnotatedWith(Deprecated.class); rule.check(classes); } @ArchTest public void doNotCallConstructorCached(JavaClasses classes) { ArchRule rule = noClasses().should() .callConstructor(BigDecimal.class, double.class); rule.check(classes); } } then ArchUnit will cache the imported classes and reuse them for different tests. The screenshots below show an example of two test runs.
The first image shows the timings without caching, the second one with caching:\nThe second image shows that the tests execute much faster when the classes that were imported in the first test are reused.\nConclusion With ArchUnit, we can test and document the architecture of our codebase with a clean, lightweight, and pure Java library.\nIt\u0026rsquo;s easy to integrate ArchUnit tests into existing projects, and doing so is a good exercise for gaining a solid understanding of the design of an existing codebase.\nThe effort and risk of getting started with ArchUnit in your (existing) project are very low, and I highly recommend trying out this little library!\n","date":"June 25, 2023","image":"https://reflectoring.io/images/stock/0010-gray-lego-1200x628-branded_hu463ec94a0ba62d37586d8dede4e932b0_190778_650x0_resize_q90_box.jpg","permalink":"/enforce-architecture-with-arch-unit/","title":"Enforcing Your Architecture with ArchUnit"},{"categories":["Node"],"contents":"Node.js is a popular server-side runtime engine based on JavaScript to build and run web applications. Organizing our source code right from the start is a crucial initial step for building large applications.\nOtherwise, the code soon becomes unwieldy and very hard to maintain. Node.js does not have any prescriptive framework for organizing code. So let us look at some commonly used patterns of organizing the source code in a Node.js application.\nLeveraging Node.js Modules as the Unit of Organizing Code Modules are the fundamental construct for organizing code in Node.js. A module in Node.js is a standalone set of potentially reusable functions and variables. They are imported by other applications or modules which need to use the functions defined in the imported modules.\nThis approach makes it easier to reuse code and maintain consistency across our application. We should follow the DRY principle when defining modules: whenever we see a possibility of code reuse, we should package that code in a module.
The module can be scoped to our application or could be made public.\nExporting Blocks of Reusable Code We specify the functions and variables to be exposed by a module using module.exports.\nThis is an example of a module: orderInquiryController.js:\nconst getOrders = ((req, res) =\u0026gt; { res.json(orders) }) const getOrderByID = ((req, res) =\u0026gt; { const orderID = Number(req.params.orderID) const order = orders.find( order =\u0026gt; order.orderID === orderID) if (!order) { return res.status(404).send(\u0026#39;Order not found\u0026#39;) } res.json(order) }) const getOrderStatus = ((req, res) =\u0026gt; { const orderID = Number(req.params.orderID) const order = orders.find( order =\u0026gt; order.orderID === orderID) if (!order) { return res.status(404).send(\u0026#39;Order not found\u0026#39;) } res.json(order.status) }) module.exports = { getOrders, getOrderByID, getOrderStatus } In this example, we are exporting three functions: getOrders, getOrderByID, and getOrderStatus. Other applications or modules can use these functions by importing the module as explained in the next section.\nImporting Blocks of Reusable Code We can import one or more modules into other modules or applications which want to use the functions defined in those modules.\nLet us import the module created in the previous section in another module: orderRoutes.js by using the require function:\nconst express = require(\u0026#39;express\u0026#39;) const router = express.Router() // Import the orderInquiryController module const { getOrders, getOrderByID, getOrderStatus } = require(\u0026#39;../controllers/orderInquiryController.js\u0026#39;) router.get(\u0026#39;/\u0026#39;, getOrders) router.get(\u0026#39;/:orderID\u0026#39;, getOrderByID) router.get(\u0026#39;/:orderID/status\u0026#39;, getOrderStatus) In this code snippet, we have imported the module: orderInquiryController.
We have used a relative path: ../controllers/orderInquiryController.js to specify the location of the module.\nWe can also publish modules in a shared module registry, and other applications or modules can use them by installing them from the shared module registry using the npm package manager. These installed modules reside in the node_modules folder.\nApplying the Principle of Separation of Concerns for Organizing Code Separation of concerns is a principle of software design used to break down an application into independent units with minimal overlap between the functions of the individual units. In Node.js, we can separate our code into different files and directories based on their functionality.\nFor example, we can keep all our controllers in a controllers directory, and all our routes in a routes directory. This approach makes it easier to locate specific pieces of logic in a huge codebase, thereby making the code readable and maintainable.\nThis is an example of grouping files and folders using the principle of Separation of Concerns by roles:\n│ ├── app.js │ ├── controllers │ │ ├── inquiryController.js │ │ └── updateController.js │ ├── dbaccessors │ │ └── dataAccessor.js │ ├── models │ │ └── order.js │ ├── routes │ │ └── routes.js │ └── services │ └── inquiryService.js As we can see, the controller files: inquiryController.js and updateController.js are in one folder: controllers. Similarly, we have created folders for putting other types of files like routes, models, services, and dbaccessors.\nThis method of grouping by roles should be used for smaller codebases, typically in a granular microservice built around one feature or domain.\nFor larger codebases with multiple features or domains, we should organize the code by features rather than by roles, as explained in the next section.\nSeparation of Concerns by Features for Organizing Code Some Node.js applications could also be composed of multiple features or domains.
For example, an e-commerce application could have features: orders, account, inventory, warehouse, etc. Each feature will have a set of APIs which we will build by using a distinct set of controllers and routes.\nFor these applications, we should organize the code by features to make it more readable.\nThis is an example of organizing the code of a project by features: accounts and orders.\n│ ├── app.js │ ├── accounts │ │ ├── controllers │ │ │ └── accountController.js │ │ └── routes │ │ ├── accountRoutes.js │ │ ├── catalogRoutes.js │ └── orders │ ├── controllers │ │ ├── orderInquiryController.js │ │ └── orderUpdateController.js │ ├── dbaccessors │ │ └── orderDataAccessor.js │ ├── models │ │ └── order.js │ ├── routes │ │ └── orderRoutes.js │ └── services │ └── orderInquiryService.js Here the files for the features: accounts and orders are placed under folders named: accounts and orders. Under each feature, we have organized the files by the roles like controllers, and routes.\nThis type of organization makes it easier to locate the code for a particular feature. For example, if we need to check the request handler for the orders API, we can go into the orders folder and look for the controllers kept in that folder.\nUsing Separate Folders for APIs and Views The express framework in Node.js allows us to integrate template engines for rendering HTML pages. 
Whenever we use template engines, it helps to have separate folders for views and APIs:\n│ ├── app.js │ ├── apis │ │ ├── accounts │ │ │ ├── controllers │ │ │ │ └── accountController.js │ │ │ └── routes │ │ │ ├── accountRoutes.js │ │ │ ├── catalogRoutes.js │ │ └── orders │ │ ├── controllers │ │ │ ├── orderInquiryController.js │ │ │ └── orderUpdateController.js │ │ ├── dbaccessors │ │ │ └── orderDataAccessor.js │ │ ├── models │ │ │ └── order.js │ │ ├── routes │ │ │ └── orderRoutes.js │ │ └── services │ │ └── orderInquiryService.js │ ├── views Using Separate Folders For Modules of Supported Version of API Whenever we are supporting multiple versions of APIs we should have separate folders for the modules of each version. In this example, we have two versions: v1 and v2:\n│ ├── app.js │ ├── apis │ │ ├── accounts │ │ │ ├──v1 │ │ │ │ ├── controllers │ │ │ │ │ └── accountController.js │ │ │ │ └── services │ │ │ │ └── accountInquiryService.js │ │ │ └──v2 │ │ │ ├── controllers │ │ │ │ └── accountController.js │ │ │ └── services │ │ │ └── accountInquiryService.js │ │ └── routes │ │ └── accountRoutes.js │ │ └── orders │ │ ├── controllers The controller and service modules of version1 are placed under the folder: v1 and the corresponding modules of version2 are placed under the folder: v2.\nPlacing All Configurations in a Config Folder Configurations help to prevent hard coding and make it easy to set up the system for different environments. Files with modules containing configurations should be under a folder: config so that it is easy to find and adjust the configuration values in one place.\n│ ├── app.js │ ├── apis │ │ ├── accounts │ │ │ ├── controllers . . . . │ │ └── orders │ │ ├── controllers . . 
│ ├── config \u0026lt;- Place all config files under this folder ├── dbconfig.test.js └── dbConfig.dev.js Separate Helpers Folder for Third-party Integration and Common Reusable Code We always have code that is common to all features, for example, integration with third-party APIs from the cloud, database connectivity information, utilities like masking information, etc.\nThese modules should be kept in a separate folder: helpers:\n│ ├── app.js │ ├── apis │ │ ├── accounts │ │ │ ├── controllers │ │ │ │ └── accountController.js │ │ │ └── routes │ │ │ ├── accountRoutes.js │ │ │ └── catalogRoutes.js │ │ └── orders │ │ ├── controllers │ │ │ ├── orderInquiryController.js │ │ │ └── orderUpdateController.js │ │ ├── dbaccessors │ │ │ └── orderDataAccessor.js │ │ ├── models │ │ │ └── order.js │ │ ├── routes │ │ │ └── orderRoutes.js │ │ └── services │ │ └── orderInquiryService.js │ ├── helpers \u0026lt;- Store code reusable across the project here │ │ ├── awsServices.js │ │ └── jwtService.js In this example, we have put the modules for connecting to the AWS cloud and utilities for JWT tokens under the helpers folder. If we have too many such files, we can further group them under specialized sub-folders such as integration, authentication, signing, etc.\nSeparate Folder for Tests for each Feature Beyond verifying actual and expected results, tests also provide useful information about how the functions exported by the module can be used by the consuming applications. For this reason, test files for modules should be kept under the folder for modules as shown in this example:\n│ ├── app.js │ ├── apis │ │ ├── accounts . . . │ │ └── orders │ │ ├── controllers . . . │ │ └── orders.spec.js \u0026lt;- Module specific tests │ ├── tests \u0026lt;- Common Tests │ │ ├── orders │ │ │ │ └── order_placement.spec.js │ │ ├── accounts │ │ │ │ └── account_open.spec.js │ In this project, the test file for the modules under the orders folder is kept in the same folder.
Additional test files are kept in a separate test folder.\nGrouping All Shell Scripts in a Separate Folder for Scripts We often use scripts for configuring the runtime environment and dependent systems. Examples of configuration scripts are database initialization scripts, setting up values of environment variables, etc. All such scripts should be in a separate folder: scripts\n│ ├── app.js │ ├── apis │ │ ├── accounts │ │ │ ├── controllers │ │ │ │ └── accountController.js │ │ │ ├── routes . . . │ ├── scripts \u0026lt;- All the scripts are kept here │ │ ├── setup_server.js │ │ └── setup_db.js │ In this folder structure, we have stored the scripts for setting up the server: setup_server.js and the database: setup_db.js under the folder: scripts.\nEnforcing Code Quality with Linters A linter is a tool that analyzes our code and checks for syntax errors, coding style, and other issues. We should use a linter to maintain consistent quality of code across our entire codebase. Some popular linters for Node.js are ESLint and JSHint.\nPeriodic Reorganizing of Code We should revisit the organization of code periodically because the assumptions and demands on the codebase keep changing as an application evolves to fulfill business needs. Some examples of these changes are the introduction of new features requiring the use of a new flavor of a database, and integration with external APIs.\nUsing a Consistent Naming Convention Apart from the rules around organizing code, we should also use a consistent naming convention for our files, folders, and functions. Consistent naming helps to increase the readability of our code. We can use a variety of naming conventions, like camelCase, PascalCase, and snake_case. Irrespective of our choice, we should ensure that the naming is consistent across our entire codebase.\nConclusion Organizing code in a Node.js application is crucial for improving the readability, maintainability, and extendability of our code.
Here are the main techniques for code organization:\n Modules are the fundamental unit of organizing code in Node.js. Modules are imported by other applications or modules which need to use the functions defined in the imported modules. We apply the principle of Separation of Concerns for organizing code. For small projects like granular microservices built around one feature or domain, we should organize by roles like controllers, routes, etc. For bigger projects with multiple features or domains, we should organize by features and then by roles. Whenever we are supporting multiple versions of APIs, we should have separate folders for the modules of each version. Files with modules containing configurations should be under a folder: config so that it is easy to find and adjust the configuration values in one place. Whenever we use template engines, it helps to have separate folders for views and APIs. We should revisit the organization of our code periodically because the assumptions and demands on the codebase keep changing as an application evolves to fulfill business needs. We should also use a consistent naming convention for our files, folders, and functions. Consistent naming helps to increase the readability of our code.  ","date":"May 17, 2023","image":"https://reflectoring.io/images/stock/0117-queue-1200x628-branded_hu88ffcb943027ab1241b6b9f65033c311_123865_650x0_resize_q90_box.jpg","permalink":"/organize-code-with-nodejs-tutorial/","title":"Organizing Code in Node.js Application"},{"categories":["Spring"],"contents":"Choosing a backend and frontend stack for web apps can be a daunting task, as there are numerous options available for backend (Node.js, Ruby, Python, C#, Go, etc) and frontend (Angular, React, Vue, Swift, etc) development.
With this many options, it can be challenging to determine which technology stack will be the best fit for our application.\nFactors like performance, speed, scalability, and the availability of skilled developers must be considered while choosing a technology stack. In this article, we’ll look at why Spring Boot and ReactJs can be a perfect duo for building full-stack web applications and also walk through the process of creating a Spring Boot backend application and integrating it with a React frontend application.\n Example Code This article is accompanied by a working code example on GitHub. Prerequisite: The following knowledge and tools are required to get started with this tutorial:\n Basic knowledge of JavaScript and the React library Basic knowledge of Java and Spring Boot Basic knowledge of MongoDB database clusters  Tools Required  Download and install Node.js. Download and install the Java OpenJDK (at least version 8). Download and install the IntelliJ IDEA IDE for Spring Boot app development. Download and install the Visual Studio Code IDE for ReactJs app development.  Benefits of Using Spring Boot with ReactJs Spring Boot and ReactJs offer multiple benefits when building fullstack web applications:\n High performance and scalability: They are a powerful duo for high performance and scalability. Spring Boot\u0026rsquo;s lightweight container is ideal for deploying and running applications, while ReactJs excels at rendering complex user interfaces efficiently. Robust backend: Spring Boot is ideal for developing enterprise-level applications as it offers a powerful and scalable backend for building APIs and microservices. It has extensive support for various data sources and allows easy integration with other projects, making it simpler to build microservices-based architectures. Efficient frontend development: ReactJs simplifies frontend development by utilizing a component-based architecture, which allows for code reusability.
This leads to faster development, easier maintenance, and improved user experience. Easy integration: ReactJs can consume RESTful APIs from a Spring Boot backend using HTTP libraries like Axios, fetch, or superagent, which simplifies data communication. Large community: They both have large and active developer communities that provide useful resources, support, and up-to-date information.  Alright, let\u0026rsquo;s roll up our sleeves and have some fun building a full-stack application using Spring Boot and ReactJs.\nHere is a schema architecture of the application we will be building:\nTo make things more interesting, we will create a register table that tracks the number of published posts for all publishers in an organization. We can easily Create, Read, Update, or Delete a publisher\u0026rsquo;s data right from the table. To fetch data from our Spring Boot backend and present it on a ReactJs frontend, we will utilize the Axios library for making API requests.\nLet\u0026rsquo;s start by setting up the backend and then integrating it into a frontend application.\nSetting up Spring Boot Development Environment Spring Boot is an opinionated web framework that allows us to build faster by hiding configuration and customization options at the outset.\nThis means that as developers, we only need to think about the logic our application uses, rather than worrying about the underlying architecture and boilerplate setup code that would normally need to be written. Spring Boot provides a number of pre-configured templates and components that allow developers to quickly and easily get applications up and running.\nThe first step in building our Spring Boot endpoints is to initialize the project.\nTo do this, go to Spring Initializr at start.spring.io and fill out the initializer form as follows:\nAs seen above, we:\n Picked Maven as our application\u0026rsquo;s build automation tool. Selected Java as the programming language. Selected the Spring Boot version. 
Filled in the project metadata details. Selected Jar as the project packaging format. Selected Java version 8. In the right column, we selected the dependencies required by our application by clicking on the ADD DEPENDENCIES... button.  The dependencies we will be using are:\n Lombok: This is a Java library used to reduce boilerplate code. It lets us use annotations, and it generates the boilerplate code when our code is compiled. Spring Web: lets us build Spring-based web applications with minimal configuration by adding all required dependencies to our project, such as the Apache Tomcat web server, Jackson, spring-mvc, etc. Spring Data MongoDB: used to access data from our MongoDB Atlas cluster.  Once all of the above settings have been entered into the Initializr tool, we can proceed by clicking on the GENERATE button.\nThis will generate and download the Spring Boot project to our computer.\nFor our Spring Boot development, we will be using IntelliJ IDEA, a widely used and user-friendly integrated development environment (IDE) for Java.\nNext, unzip the downloaded file from the download path and then open the publisher_register folder in your IDE.\nGive the IDE some time to resolve and download all our app dependencies.\nOnce the setup process is complete, we can move on to the next step. Our focus will be on the src folder.\nTo proceed, let\u0026rsquo;s set up a MongoDB database for our application. It can be installed locally on our machine or deployed to a cloud provider such as AWS or Google Cloud via MongoDB Atlas. For this article, we will use the MongoDB Atlas cloud service.\nSetting up MongoDB Atlas Here is a step-by-step process for creating a MongoDB Atlas cluster:\n Sign up for a MongoDB Atlas account. Click on the Build a Database button and choose a free plan. Create a username and password. Add the IP address 0.0.0.0; by using this, we can conveniently connect to our project\u0026rsquo;s clusters from anywhere (convenient for a tutorial, but too permissive for production). Click on the Finish and Close button. 
In the Database section, click on the Connect button. Under Connect to your application, select Drivers, then copy the connection string URI.  The connection URI should look like this:\nmongodb+srv://\u0026lt;username\u0026gt;:\u0026lt;password\u0026gt;@cluster0.tpabvhf.mongodb.net\nIf you need help setting up a MongoDB Atlas cluster, you can follow this more detailed guide here.\nTo use MongoDB in our application, we have to store our database URI in the application.properties file.\nThe application.properties file is a Spring Boot configuration file that stores key-value pairs of application settings. It is usually found in the src/main/resources directory and is used to configure many application properties like database connection, server port, logging, security, and so on.\nTo set up our Spring Boot application settings, add the following code to the application.properties file:\nspring.data.mongodb.database=publisher_register spring.data.mongodb.uri= #Paste MongoDB URI here server.port=8000 Above, we named our database publisher_register, configured the app to use our MongoDB URI, and set the server port to 8000. Don\u0026rsquo;t forget to copy and paste your MongoDB URI.\nOur application is now ready to connect to our database using the MongoDB URI and the Spring Data MongoDB dependency.\nTo make our codebase more manageable and organized, we\u0026rsquo;ll split our application setup into different sections: Repository, Model, and Controller.\nThe Repository section will manage database interactions and queries, while the Model section will define the application\u0026rsquo;s data structures. 
Lastly, the Controller section will manage request and response handling.\nStructuring our Spring Boot Application To organize our application\u0026rsquo;s code and separate concerns, we will create three new folders (packages) and corresponding files.\nCreate the following in io.reflectoring.publisher_register directory located in the src/main/java folder.\n controller folder with a PublisherController class file. model folder with a Publisher class file. repository folder with a PublisherRepository interface file.  Our application structure will now look like this:\nThe model folder containing our Publisher class file is where we are defining the data model for our Publisher object.\nPaste the following code in the model/Publisher file:\npackage io.reflectoring.publisher_register.model; import lombok.AllArgsConstructor; import lombok.Data; import lombok.NoArgsConstructor; import lombok.ToString; import org.springframework.data.annotation.Id; import org.springframework.data.mongodb.core.mapping.Document; @Data @AllArgsConstructor @NoArgsConstructor @ToString @Document(collection = \u0026#34;Publisher\u0026#34;) public class Publisher { @Id private String id; private String name; private String email; private Integer published; } In the above code, the @Data annotation generates boilerplate code for Java classes such as getters, setters, equals(), hashCode(), and a toString() method.\n@AllArgsConstructor automatically generates a constructor with arguments for all non-final fields in a class, and @NoArgsConstructor generates a constructor with no arguments. 
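As an illustration, here is a hand-written sketch of roughly the code these Lombok annotations save us from writing, for a trimmed, hypothetical two-field version of Publisher (the real class also has id and email, and Lombok's actual generated code differs in detail):

```java
// Hand-written sketch of roughly what Lombok generates for a class like
//   @Data @AllArgsConstructor @NoArgsConstructor
//   class Publisher { String name; Integer published; }
// Trimmed to two fields for brevity; this is an illustration, not Lombok's exact output.
public class PublisherSketch {
    private String name;
    private Integer published;

    // @NoArgsConstructor equivalent
    public PublisherSketch() { }

    // @AllArgsConstructor equivalent
    public PublisherSketch(String name, Integer published) {
        this.name = name;
        this.published = published;
    }

    // @Data generates getters and setters for every field...
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Integer getPublished() { return published; }
    public void setPublished(Integer published) { this.published = published; }

    // ...plus equals() and hashCode() over all fields...
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof PublisherSketch)) return false;
        PublisherSketch other = (PublisherSketch) o;
        return java.util.Objects.equals(name, other.name)
                && java.util.Objects.equals(published, other.published);
    }

    @Override
    public int hashCode() {
        return java.util.Objects.hash(name, published);
    }

    // ...and a toString() (also what @ToString produces)
    @Override
    public String toString() {
        return "PublisherSketch(name=" + name + ", published=" + published + ")";
    }
}
```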
This means that when you create an object of that class, you don\u0026rsquo;t need to provide any arguments to initialize the object.\n@ToString generates a toString() method for the annotated class, which returns a string that represents the state of the object.\nThe @Document annotation in Spring Data MongoDB indicates that a class is a domain object that should be persisted in a MongoDB collection.\nThe @Id annotation marks the field that should be used as the identifier for the document.\nNext, the repository folder contains a PublisherRepository interface that defines the necessary database operations such as saving, updating, and deleting publishers for our Publisher model. It also has a @Repository annotation, indicating that it is a repository class.\nThe PublisherRepository is an interface that extends Spring Data MongoDB\u0026rsquo;s MongoRepository interface, which provides out-of-the-box methods for common database operations.\nThese methods can be used as they are or customized based on specific requirements.\nIn the repository/PublisherRepository file, paste the following code:\npackage io.reflectoring.publisher_register.repository; import io.reflectoring.publisher_register.model.Publisher; import org.springframework.data.mongodb.repository.MongoRepository; import org.springframework.stereotype.Repository; @Repository public interface PublisherRepository extends MongoRepository\u0026lt;Publisher, String\u0026gt; { } Finally, the PublisherController class in our controller folder is in charge of handling incoming endpoint requests and responding to them. 
This section contains functionality for dealing with REST API queries relating to our Publisher model.\nTo create our application controller, paste the following code into the controller/PublisherController file:\npackage io.reflectoring.publisher_register.controller; import io.reflectoring.publisher_register.model.Publisher; import io.reflectoring.publisher_register.repository.PublisherRepository; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.*; import java.util.List; import java.util.Optional; @RestController @CrossOrigin @RequestMapping(\u0026#34;/publisher\u0026#34;) public class PublisherController { @Autowired private PublisherRepository publisherRepository; @PostMapping(\u0026#34;/create\u0026#34;) public Publisher create(@RequestBody Publisher publisher){ return publisherRepository.save(publisher); } @GetMapping(\u0026#34;/all\u0026#34;) public List\u0026lt;Publisher\u0026gt; getAllAuthors() { return publisherRepository.findAll(); } @GetMapping(\u0026#34;/{id}\u0026#34;) public Optional\u0026lt;Publisher\u0026gt; findOneById(@PathVariable String id) { return publisherRepository.findById(id); } @PutMapping(\u0026#34;/update\u0026#34;) public Publisher update(@RequestBody Publisher publisher){ return publisherRepository.save(publisher); } @DeleteMapping(\u0026#34;/delete/{id}\u0026#34;) public void deleteById(@PathVariable String id){ publisherRepository.deleteById(id); } } In the above code, we are using various annotations like, @RestController which informs Spring that this class will handle REST requests, while the @RequestMapping annotation defines the URL path to which our endpoint request will be mapped, here we are mapping our request to /publisher.\nThe @Autowired annotation is used to inject dependencies into the class, such as the PublisherRepository object which handles the business logic of the application.\nThe @CrossOrigin annotation in Spring Boot provides a convenient and 
flexible way to configure Cross-Origin Resource Sharing (CORS) in our application. By specifying allowed origins, headers, and methods, we can control which external domains are allowed to access our API endpoints. It can be used at both the class and method levels to fine-tune the CORS configuration for our application.\nHowever, it is important to keep in mind the security implications of allowing cross-origin requests. It is recommended to only allow access from trusted sources and to restrict the allowed headers and methods to those that are necessary for the application to function correctly. By doing so, we reduce our application\u0026rsquo;s exposure to malicious cross-origin requests, such as those used in cross-site request forgery (CSRF) attacks.\nOverall, the PublisherController class file is the entry point for incoming endpoint requests and is responsible for returning the appropriate HTTP response back to the client.\nOur backend server application is now good to go and ready for use! 🚀\nTo start our Spring Boot application, hit the green Run button at the top right corner of your IDE.\nOur server now listens on http://localhost:8000/publisher.\nSetting up React Frontend Client ReactJs is a widely used JavaScript library that enables the creation of Single Page Web Applications with dynamic and interactive UIs. It emphasizes the development of reusable UI components that can handle changing data over time, making it a great choice for building engaging user interfaces.\nA React application is typically composed of multiple components, each with its own logic and controls. The component-based approach makes it easy to maintain and scale the codebase in large projects.\nBefore creating our React project, make sure you have Node.js installed on your machine. 
We\u0026rsquo;ll be using the Node Package Manager (npm) to create our project.\nCreate React Project To develop our React application, we will be switching to VS Code, which is a popular and highly customizable IDE. VS Code has a lot of built-in features and extensions that are specifically designed to make React development easier and more efficient.\nThese include features such as syntax highlighting, code completion, debugging, and hot reloading, as well as extensions for linting, formatting, and testing.\nTo create a new React application, open your terminal or command prompt and run the following command:\nnpx create-react-app publisher-register-ui The create-react-app command is a standard command for creating a new React project.\npublisher-register-ui is the project name. We can replace it with any desired name, as long as it is a valid (lowercase) npm package name.\nThe publisher-register-ui application can now be opened in the VS Code IDE.\nNext, run the following command in the terminal/command prompt to start the React application:\nnpm start After running the previous command, our React development server will be started and the application will be served on port 3000. The React project comes with an auto-reload feature, meaning that any changes made to the code will be automatically compiled and the page will be reloaded upon saving.\nThis is a common feature in modern JavaScript libraries and frameworks that enhances the development experience.\nStructuring our React Application: To start structuring our React application, we need to install all necessary dependencies.\nReact runs purely on the client side and does not ship with an HTTP client of its own for talking to external APIs or other sources of data. To enable this functionality, we can use the browser\u0026rsquo;s fetch API or install a third-party library like Axios.\nAxios is a popular JavaScript library that provides an easy-to-use interface for making HTTP requests from the client side. 
It is highly configurable, and it supports various request methods, such as GET, POST, PUT, DELETE, and more. With Axios, we can easily fetch data from external APIs and update our application\u0026rsquo;s state accordingly.\nAdditionally, we will install Bootstrap, a popular CSS and JavaScript library that provides a collection of components, such as buttons, forms, modals, and more, to easily style our React components.\nRun the following command in the React application\u0026rsquo;s terminal:\nnpm install axios bootstrap The above command installs the Axios and Bootstrap dependencies into our application. All dependency files are saved in the node_modules folder.\nTo structure our React project, run the following command to create the necessary folders and files for the application:\nmkdir src/api src/components touch src/api/axiosConfig.js src/components/PublisherCrud.jsx src/components/PublisherList.jsx Our React application structure will now look like this:\nThe src folder is where we will write our code. With the command above, we created the component files PublisherCrud.jsx and PublisherList.jsx.\nJSX is a syntax extension to JavaScript, used by React for creating user interfaces. It allows developers to write HTML-like syntax directly in their JavaScript code, making it easier to visualize and manipulate the UI elements.\nApp.js is the root component of our React application and the default landing page where we define, pass, and render all our application components; App.css holds its styles.\nWe also created an api folder in the src directory; it contains an axiosConfig.js file. 
This is where our application will make all API calls.\nIn the axiosConfig.js file we will create a connection to our Spring Boot endpoints.\nTo achieve this, paste the following code in the axiosConfig.js file:\nimport axios from \u0026#34;axios\u0026#34;; export default axios.create({ baseURL: \u0026#34;http://localhost:8000/publisher\u0026#34;, }); In the above code, we are using the Axios library\u0026rsquo;s .create() method.\nThis method allows us to set default values for headers, timeouts, interceptors, and other properties that will be applied to all requests made by that instance. It is useful when we have to make multiple requests to the same API or when we need to customize the request behavior for a particular endpoint.\nWith this, we can easily call our Spring Boot endpoints from the frontend code without needing to repeatedly specify our full backend URL. Read this article here to learn more about the Axios library.\nNext, let\u0026rsquo;s begin creating our UI components.\nCreating Components in React React is component-based: we can create reusable pieces of UI called components. For example, we are building a publisher registration website and we want to display a table that shows information about publishers who have registered. Instead of creating separate tables for each publisher, we can create a single table component and pass different properties for each publisher, such as name, email, and the number of published posts. This way, we can represent hundreds of publishers with just one block of code, making our development process more efficient.\nFunctional components in React are JavaScript functions that receive an optional object of properties (props) and return HTML-like markup (JSX) that describes the user interface. Hooks are functions that enable us to use state and other React features in functional components without writing a class.\nSome popular hooks in React include useState, useEffect, useContext, and useReducer. 
They enable us to manage state, trigger re-renders, hook into component lifecycle methods, and perform actions like fetching data from APIs. Check out this link to learn more about hooks.\nTo create our application component. We will update both files in our components folder.\nIn src/components/PublisherCrud.jsx file, paste the following code:\nimport { useState } from \u0026#34;react\u0026#34;; import api from \u0026#34;../api/axiosConfig\u0026#34;; import PublisherList from \u0026#34;./PublisherList\u0026#34;; const PublisherCrud = ({ load, publishers }) =\u0026gt; { /* state definition */ const [id, setId] = useState(\u0026#34;\u0026#34;); const [name, setName] = useState(\u0026#34;\u0026#34;); const [email, setEmail] = useState(\u0026#34;\u0026#34;); const [published, setPublished] = useState(\u0026#34;\u0026#34;); /* being handlers */ async function save(event) { event.preventDefault(); await api.post(\u0026#34;/create\u0026#34;, { name: name, email: email, published: published, }); alert(\u0026#34;Publisher Record Saved\u0026#34;); // reset state  setId(\u0026#34;\u0026#34;); setName(\u0026#34;\u0026#34;); setEmail(\u0026#34;\u0026#34;); setPublished(\u0026#34;\u0026#34;); load(); } async function editEmployee(publishers) { setName(publishers.name); setEmail(publishers.email); setPublished(publishers.published); setId(publishers.id); } async function deleteEmployee(id) { await api.delete(\u0026#34;/delete/\u0026#34; + id); alert(\u0026#34;Publisher Details Deleted Successfully\u0026#34;); load(); } async function update(event) { event.preventDefault(); if (!id) return alert(\u0026#34;Publisher Details No Found\u0026#34;); await api.put(\u0026#34;/update\u0026#34;, { id: id, name: name, email: email, published: published, }); alert(\u0026#34;Publisher Details Updated\u0026#34;); // reset state  setId(\u0026#34;\u0026#34;); setName(\u0026#34;\u0026#34;); setEmail(\u0026#34;\u0026#34;); setPublished(\u0026#34;\u0026#34;); load(); } /* end handlers */ /* jsx */ 
return ( \u0026lt;div className=\u0026#34;container mt-4\u0026#34;\u0026gt; \u0026lt;form\u0026gt; \u0026lt;div className=\u0026#34;form-group my-2\u0026#34;\u0026gt; \u0026lt;input type=\u0026#34;text\u0026#34; className=\u0026#34;form-control\u0026#34; hidden value={id} onChange={e =\u0026gt; setId(e.target.value)} /\u0026gt; \u0026lt;label\u0026gt;Name\u0026lt;/label\u0026gt; \u0026lt;input type=\u0026#34;text\u0026#34; className=\u0026#34;form-control\u0026#34; value={name} onChange={e =\u0026gt; setName(e.target.value)} /\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div className=\u0026#34;form-group mb-2\u0026#34;\u0026gt; \u0026lt;label\u0026gt;Email\u0026lt;/label\u0026gt; \u0026lt;input type=\u0026#34;text\u0026#34; className=\u0026#34;form-control\u0026#34; value={email} onChange={e =\u0026gt; setEmail(e.target.value)} /\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div className=\u0026#34;row\u0026#34;\u0026gt; \u0026lt;div className=\u0026#34;col-4\u0026#34;\u0026gt; \u0026lt;label\u0026gt;Published\u0026lt;/label\u0026gt; \u0026lt;input type=\u0026#34;text\u0026#34; className=\u0026#34;form-control\u0026#34; value={published} placeholder=\u0026#34;Published Post(s)\u0026#34; onChange={e =\u0026gt; setPublished(e.target.value)} /\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div\u0026gt; \u0026lt;button className=\u0026#34;btn btn-primary m-4\u0026#34; onClick={save}\u0026gt; Register \u0026lt;/button\u0026gt; \u0026lt;button className=\u0026#34;btn btn-warning m-4\u0026#34; onClick={update}\u0026gt; Update \u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/form\u0026gt; \u0026lt;PublisherList publishers={publishers} editEmployee={editEmployee} deleteEmployee={deleteEmployee} /\u0026gt; \u0026lt;/div\u0026gt; ); }; export default PublisherCrud; There’s a lot to unpack here, let\u0026rsquo;s break it down section by section to better understand what\u0026rsquo;s happening.\nIn state definition section, here we use React\u0026rsquo;s 
useState hook for state management. This hook accepts an initial state value and returns an array with the current state value and a function for updating the state. When the state changes, the component re-renders with the new state value.\nThe handlers section within our PublisherCrud component sets up functions to handle the fetching of our API data and saving, editing, deleting of our table data, and resetting of our application\u0026rsquo;s state.\nAnd finally, our jsx section, gives a clear idea of what the component is going to render to the DOM. You can learn more about it here. In our components JSX we are making use of bootstrap classes for styling.\nLastly, we are passing the PublisherList component with a set of props, which are optional object properties that can be passed down from parent components to child components.\nTo specify props, we set them as attributes on the component where it is to be used. Here, we are passing state values and handler functions as props to the child component PublisherList.\nNext, paste the following in PublisherList.jsx.\nimport React from \u0026#34;react\u0026#34;; const PublisherList = ({ publishers, editEmployee, deleteEmployee }) =\u0026gt; { return ( \u0026lt;table className=\u0026#34;table table-hover mt-3\u0026#34; align=\u0026#34;center\u0026#34;\u0026gt; \u0026lt;thead className=\u0026#34;thead-light\u0026#34;\u0026gt; \u0026lt;tr\u0026gt; \u0026lt;th scope=\u0026#34;col\u0026#34;\u0026gt;Nº\u0026lt;/th\u0026gt; \u0026lt;th scope=\u0026#34;col\u0026#34;\u0026gt;Name\u0026lt;/th\u0026gt; \u0026lt;th scope=\u0026#34;col\u0026#34;\u0026gt;Email\u0026lt;/th\u0026gt; \u0026lt;th scope=\u0026#34;col\u0026#34;\u0026gt;Published\u0026lt;/th\u0026gt; \u0026lt;th scope=\u0026#34;col\u0026#34;\u0026gt;Option\u0026lt;/th\u0026gt; \u0026lt;/tr\u0026gt; \u0026lt;/thead\u0026gt; {publishers.map((employee, index) =\u0026gt; { return ( \u0026lt;tbody key={employee.id}\u0026gt; \u0026lt;tr\u0026gt; \u0026lt;th 
scope=\u0026#34;row\u0026#34;\u0026gt;{index + 1} \u0026lt;/th\u0026gt; \u0026lt;td\u0026gt;{employee.name}\u0026lt;/td\u0026gt; \u0026lt;td\u0026gt;{employee.email}\u0026lt;/td\u0026gt; \u0026lt;td\u0026gt;{employee.published}\u0026lt;/td\u0026gt; \u0026lt;td\u0026gt; \u0026lt;button type=\u0026#34;button\u0026#34; className=\u0026#34;btn btn-warning\u0026#34; onClick={() =\u0026gt; editEmployee(employee)} \u0026gt; Edit \u0026lt;/button\u0026gt; \u0026lt;button type=\u0026#34;button\u0026#34; className=\u0026#34;btn btn-danger mx-2\u0026#34; onClick={() =\u0026gt; deleteEmployee(employee.id)} \u0026gt; Delete \u0026lt;/button\u0026gt; \u0026lt;/td\u0026gt; \u0026lt;/tr\u0026gt; \u0026lt;/tbody\u0026gt; ); })} \u0026lt;/table\u0026gt; ); }; export default PublisherList; In the PublisherList component above, we are destructuring the passed props and using Bootstrap classes to display a table with the list of all saved publishers.\nNext in React, the App.js file is the root component of our application, and it is responsible for rendering and displaying all other components in the application.\nTo render our components, paste the following code in the App.js file:\nimport \u0026#34;bootstrap/dist/css/bootstrap.css\u0026#34;; import api from \u0026#34;./api/axiosConfig\u0026#34;; import { useEffect, useState } from \u0026#34;react\u0026#34;; import \u0026#34;./App.css\u0026#34;; import PublisherCrud from \u0026#34;./components/PublisherCrud\u0026#34;; function App() { const [publishers, setPublishers] = useState([]); /* manage side effects */ useEffect(() =\u0026gt; { (async () =\u0026gt; await load())(); }, []); async function load() { const result = await api.get(\u0026#34;/all\u0026#34;); setPublishers(result.data); } return ( \u0026lt;div\u0026gt; \u0026lt;h1 className=\u0026#34;text-center\u0026#34;\u0026gt;List Of Publisher\u0026lt;/h1\u0026gt; \u0026lt;PublisherCrud load={load} publishers={publishers} /\u0026gt; \u0026lt;/div\u0026gt; ); } export default App; 
In the code snippet above, the bootstrap dependency and application components were imported into our root component App.js.\nThe useEffect hook here is used to run side effects in our App.js. These side effects include operations that change the state of the application, such as fetching data from an API, updating the DOM, setting up event listeners, and more.\nThe load function is responsible for calling the backend API to fetch all publishers in the database.\nIn the JSX section, we passed our PublisherCrud component with load function and publishers list as props. After saving the code, the application will be updated automatically.\nWe can open our application on any browser of choice at http://localhost:3000.\nConclusion By combining the strengths of Spring Boot and React, we can create a responsive, scalable, and modern web application. With careful planning, project structuring, and integration, we can provide a seamless user experience.\nYou can refer to all the source code used in the article on Github.\n","date":"May 8, 2023","image":"https://reflectoring.io/images/stock/0130-spring-boot-and-reactjs_hu3eabe02b6bb03731629f95bc76450cbd_250928_650x0_resize_q90_box.jpg","permalink":"/build-responsive-web-apps-with-springboot-and-react-tutorial/","title":"How to Build Responsive Web Apps with Spring Boot and React: A Step-by-Step Guide"},{"categories":["AWS","Spring Boot"],"contents":"The primary purpose of logging in applications is to debug and trace one or more root causes of an unexpected behavior. We take various approaches to logging from putting ad-hoc print statements to embedding sophisticated logging libraries in our code.\nIrrespective of which approach we take, a log without a consistent structure with context information is difficult to search for and locate the root cause of problems.\nAmazon CloudWatch is a managed monitoring and logging service which is used as centralized log storage. 
It can also run queries on structured logs to extract valuable information.\nIn this article, we will understand:\n how to produce structured logs from applications, with an example of producing structured logs from a Spring Boot application how to ingest those structured logs in Amazon CloudWatch how to run queries on the ingested structured logs to extract useful insights into the application   Example Code This article is accompanied by a working code example on GitHub. What is Structured Logging? Before going further, let us understand structured logging in a bit more detail.\nStructured logging is writing logs in a consistent format that allows them to be treated as data rather than text. Instead of just logging a line of text, we log a structured object, most often as JSON.\nThe JSON object is composed of fields that can give contextual information about the log event, for example:\n the application name the class or method name from where the log was produced the invoker of the method the date and time of the logging event  The JSON object may also include the request and response payload in case of API or method calls and optionally the stacktrace in case of errors.\nThis structured format of logs helps us to search by applying filter, sort, and limit operations on different fields in the structure to gain useful insights about our application.\nHere is an example of a structured log:\n{ \u0026#34;instant\u0026#34;: { \u0026#34;epochSecond\u0026#34;: 1682426514, \u0026#34;nanoOfSecond\u0026#34;: 223252000 }, \u0026#34;thread\u0026#34;: \u0026#34;http-nio-8080-exec-6\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;ERROR\u0026#34;, \u0026#34;loggerName\u0026#34;: \u0026#34;***.services.AccountService\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;Account not found:: 5678000\u0026#34;, \u0026#34;endOfBatch\u0026#34;: false, \u0026#34;loggerFqcn\u0026#34;: \u0026#34;org.apache.logging.log4j.spi.AbstractLogger\u0026#34;, 
\u0026#34;contextMap\u0026#34;: { \u0026#34;accountNo\u0026#34;: \u0026#34;5678000\u0026#34; }, \u0026#34;threadId\u0026#34;: 43, \u0026#34;threadPriority\u0026#34;: 5, \u0026#34;appName\u0026#34;: \u0026#34;AccountsProcessor\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;release1.0\u0026#34; } In this log, we can see several contextual information like the thread identifier, datetime epoch, and application name apart from the log message: Account not found:: 5678000.\nProducing Structured Logs from a Spring Boot Application We produce structured logs in applications most often by using logging libraries in different programming languages.\nLet us use a Spring Boot application for generating structured logs. We can create the initial application setup of our Spring Boot application from the Spring Boot starter  and open it in our favorite IDE.\nWe will use the log4j library to generate structured logs. The snippet of the FileAppender of our Log4j configuration in log4j2.xml looks like this:\n\u0026lt;File name=\u0026#34;FileAppender\u0026#34; fileName=\u0026#34;/home/ec2-user/accountprocessor/logs/accountprocessor-logging-dev.log\u0026#34;\u0026gt; \u0026lt;JsonLayout complete=\u0026#34;false\u0026#34; compact=\u0026#34;true\u0026#34; eventEol=\u0026#34;true\u0026#34; properties=\u0026#34;true\u0026#34; \u0026gt; \u0026lt;KeyValuePair key=\u0026#34;appName\u0026#34; value=\u0026#34;AccountsProcessor\u0026#34; /\u0026gt; \u0026lt;KeyValuePair key=\u0026#34;version\u0026#34; value=\u0026#34;release1.0\u0026#34; /\u0026gt; \u0026lt;KeyValuePair key=\u0026#34;accountNo\u0026#34; value=\u0026#34;${ctx:accountNo}\u0026#34;/\u0026gt; \u0026lt;/JsonLayout\u0026gt; \u0026lt;/File\u0026gt; In this FileAppender we have used JsonLayout to generate the logs in JSON format. We have added additional fields: appName, version, and accountNo to add useful context around the log events.\nWe have also added a sample API to the application to which we will send HTTP GET requests. 
On receiving these requests, our application will use the log4j configuration to produce structured logs.\n@RestController @RequestMapping(\u0026#34;/accounts\u0026#34;) public class AccountInquiryController { private AccountService accountService; private static final Logger LOG = LogManager.getLogger( AccountInquiryController.class); public AccountInquiryController( final AccountService accountService){ this.accountService = accountService; } @GetMapping(\u0026#34;/{accountNo}\u0026#34;) @ResponseBody public AccountDetail getAccountDetails( @PathVariable(\u0026#34;accountNo\u0026#34;) String accountNo) { ThreadContext.put(\u0026#34;accountNo\u0026#34;, accountNo); LOG.info( \u0026#34;fetching account details for account {}\u0026#34;, accountNo); Optional\u0026lt;AccountDetail\u0026gt; accountDetail = accountService.getAccount(accountNo); // Log response from the service class  LOG.info(\u0026#34;Details of account {}\u0026#34;, accountDetail); ThreadContext.clearAll(); return accountDetail.orElse( AccountDetail.builder().build()); } } Here we have added two logger statements to print the HTTP request\u0026rsquo;s path parameter accountNo and the response from the service class.\nWe have also added the accountNo in a ThreadContext so that all the logs in this thread of execution will print the accountNo field. This will allow us to correlate and group requests by the accountNo field.\nWhen we run this application and send some requests to the endpoint http://localhost:8080/accounts/5678888, we can see the logs in the console as well as in a file. 
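Under the hood, ThreadContext behaves like a map bound to the current thread: values put into it at the start of a request are visible to every log statement on that thread until they are cleared. As a rough illustration, here is our own minimal stand-in (not Log4j's actual implementation) modeled with a ThreadLocal:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for Log4j's ThreadContext: each thread gets its own
// key/value map, so context such as accountNo set at the start of a request
// is visible to every log statement on that thread until it is cleared.
public class MiniThreadContext {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) {
        CTX.get().put(key, value);
    }

    public static String get(String key) {
        return CTX.get().get(key);
    }

    public static void clearAll() {
        CTX.remove();
    }
}
```

In the controller above, ThreadContext.put("accountNo", accountNo) at the start of the request and ThreadContext.clearAll() at the end bracket the request, and the ${ctx:accountNo} lookup in the JsonLayout stamps every log event produced in between with the same accountNo.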
In the next section, we will run this application in an Amazon EC2 instance and send the structured logs generated by the application to Amazon CloudWatch.\nCloudWatch Logging Concepts: Log Events, Log Streams, and Log Groups Before sending our logs to Amazon CloudWatch, let us understand how the logs are stored and organized in CloudWatch into Log Streams and Log Groups.\nLog Event: A Log Event is an activity recorded by the application. It contains a timestamp and raw event message encoded in UTF-8.\nLog Streams: A log stream is a sequence of log events emitted by AWS services or any custom application. This is how a set of log streams looks in the AWS management console:\nThis is a snapshot of a log stream containing a sequence of log events.\nLog Groups: Log Groups are a group of Log Streams that share the same retention, monitoring, and access control settings. Each log stream belongs to one log group. A set of log groups in the AWS console is shown here:\nWe can specify the duration for which we want the logs to be retained by specifying retention settings to the log group.\nWe can also assign metric filters to log groups to extract metric observations from ingested log events and transform them into data points in a CloudWatch metric.\nHere we will configure a Spring Boot application to produce structured logs and then send those logs to CloudWatch.\nSending the Logs to Amazon CloudWatch from Amazon EC2 Instance We will next run the Spring Boot application in an EC2 instance and ship our application logs to CloudWatch. 
We use the unified CloudWatch agent to collect logs from Amazon EC2 instances and send them to CloudWatch.\nCreating EC2 Instance and Configuring it to Run the Spring Boot Application We can either create the EC2 instance from the AWS Management Console or any of the Infrastructure as Code tools: Terraform, CloudFormation, or CDK.\nFor the purpose of running our example, Terraform scripts are included in the source code for creating the EC2 instance.\nWe also need to install OpenJDK: an open-source implementation of the Java Platform to run our Spring Boot application. After the EC2 instance starts up, we can use the following script to install OpenJDK on the EC2 instance.\nwget https://download.java.net/***openjdk-20.0.1_linux-x64_bin.tar.gz tar xvf openjdk* export JAVA_HOME=jdk-20.0.1 export PATH=$JAVA_HOME/bin:$PATH export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar We will further need to attach an IAM role with AWS Managed policy CloudWatchAgentServerPolicy to the EC2 instance which allows the EC2 instance to write logs to Amazon CloudWatch.\nRunning the Spring Boot Application After configuring the EC2 instance, we will transfer the Spring Boot application from our local machine to the EC2 instance created in the previous step using SCP(Secure Copy) protocol:\nscp -i tf-key-pair.pem ~/Downloads/accountProcessor/target/accountProcessor-0.0.1-SNAPSHOT.jar ec2-user@3.66.165.62:/home/ec2-user/ In this scp command, we are copying the Spring Boot application jar file: accountProcessor-0.0.1-SNAPSHOT.jar from our local machine to the EC2 instance. 
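Earlier we mentioned attaching an IAM role with the AWS-managed CloudWatchAgentServerPolicy to the EC2 instance. That step can be sketched in Terraform roughly as follows; the resource and role names here are hypothetical placeholders, and only the policy ARN is the real AWS-managed policy:

```hcl
# Role that EC2 can assume, granting CloudWatch agent permissions
resource "aws_iam_role" "ec2_cw_role" {
  name = "ec2-cloudwatch-role" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Attach the AWS-managed policy that allows writing logs and metrics
resource "aws_iam_role_policy_attachment" "cw_agent" {
  role       = aws_iam_role.ec2_cw_role.name
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
}

# Instance profile to associate the role with the EC2 instance
resource "aws_iam_instance_profile" "ec2_profile" {
  name = "ec2-cloudwatch-profile" # hypothetical name
  role = aws_iam_role.ec2_cw_role.name
}
```

The actual Terraform scripts shipped with the article's source code may differ in structure; the key point is the CloudWatchAgentServerPolicy attachment.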
We will then run this Jar file with the command:\njava -jar accountProcessor-0.0.1-SNAPSHOT.jar After the application is started we can see the application logs in the file: accountprocessor-logging-dev.log configured in the FileAppender in the log4j configuration of our application.\nIn the next section, we will configure the CloudWatch agent to read this file and ship the log entries to Amazon CloudWatch.\nInstalling and Configuring the Unified CloudWatch Agent The Unified CloudWatch agent is available as a package in Amazon Linux 2. Let us install the CloudWatch agent by running the yum command:\nsudo yum install amazon-cloudwatch-agent Next, we need to create a configuration file for configuring the CloudWatch agent to collect specific log files from the EC2 instance and send them to CloudWatch.\nThe agent configuration file is a JSON file with three sections: agent, metrics, and logs that specifies the metrics and logs which the agent needs to collect. The logs section specifies what log files are published to CloudWatch Logs.\nSince our Spring Boot application is writing the log files to the path: accountprocessor/logs/accountprocessor-logging-dev.log, we will configure this path in the logs section of our agent configuration file.\nWe can create the agent configuration file by using the agent configuration file wizard or by creating it manually from scratch.\nLet us use the wizard to create the configuration file by starting the agent configuration file wizard using the following command:\nsudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard In our case we will specify the log file path in the wizard:\nIn this snapshot, we can see the file path and the names of the log group specified in the wizard.\nThe configuration file: config.json generated by the wizard looks like this:\n{ \u0026#34;agent\u0026#34;: { \u0026#34;run_as_user\u0026#34;: \u0026#34;ec2-user\u0026#34; }, \u0026#34;logs\u0026#34;: { 
\u0026#34;logs_collected\u0026#34;: { \u0026#34;files\u0026#34;: { \u0026#34;collect_list\u0026#34;: [ { \u0026#34;file_path\u0026#34;: \u0026#34;/home/ec2-user/accountprocessor/logs/accountprocessor-logging-dev.log\u0026#34;, \u0026#34;log_group_name\u0026#34;: \u0026#34;accountprocessor-logging-dev.log\u0026#34;, \u0026#34;log_stream_name\u0026#34;: \u0026#34;{instance_id}\u0026#34;, \u0026#34;retention_in_days\u0026#34;: -1 } ] } } } } We can further modify this file manually to add more file paths.\nAfter configuring the CloudWatch agent let us start the CloudWatch agent by running the command:\nsudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json Once the agent is started, it will start sending the log events to Amazon CloudWatch.\nViewing the Application Logs in Amazon CloudWatch We can now view the logs from our Spring Boot application in the Amazon CloudWatch console:\nAlthough we are logging from a single application here, CloudWatch is commonly used as a log aggregator from multiple source applications or services. This allows us to see the logs from all sources in one place as a single and consistent flow of log events ordered by time.\nRunning Queries on Logs with CloudWatch Log Insights CloudWatch Log Insights provides a User Interface and a powerful purpose-built query language to search through the ingested log data and decipher different signals to monitor our applications.\nHere we are using CloudWatch Log Insights to find the number of errors that occurred in our Spring Boot application in the last 1 hour.\nWe have defined a query with a filter on level = \u0026lsquo;ERROR\u0026rsquo; sorting by timestamp and limiting the results to 20. When we run the query, we get the following results:\nIn the query results, we can see 6 errors from our application in the last 1 hour. 
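The query described above can be sketched in the Log Insights query language like this (assuming the level field from the JsonLayout output shown earlier; @timestamp and @message are fields CloudWatch generates automatically):

```
fields @timestamp, @message
| filter level = 'ERROR'
| sort @timestamp desc
| limit 20
```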
We can define appropriate thresholds on fields like the number of errors within a defined interval to take proactive mitigating actions.\nConclusion Here is a list of the major points for a quick reference:\n Amazon CloudWatch is a managed monitoring and logging service which is used as centralized log storage. Structured logging is a methodology to log information in a consistent format that allows logs to be treated as data rather than text. We produce structured logs in applications most often by using logging libraries in different programming languages. Logs are stored and organized in CloudWatch into Log Streams and Log Groups. A Log Stream is a sequence of log events emitted by AWS services or any custom application. A Log Group is a group of Log Streams that share the same retention, monitoring, and access control settings. We use the unified CloudWatch agent to collect logs from Amazon EC2 instances and send them to CloudWatch. CloudWatch Log Insights provides a user interface and a powerful purpose-built query language to search through log data and decipher different signals to monitor our applications.  You can refer to all the source code used in the article on GitHub.\n","date":"May 8, 2023","image":"https://reflectoring.io/images/stock/0117-queue-1200x628-branded_hu88ffcb943027ab1241b6b9f65033c311_123865_650x0_resize_q90_box.jpg","permalink":"/struct-log-with-cloudwatch-tutorial/","title":"Structured Logging with Spring Boot and Amazon CloudWatch"},{"categories":["Spring"],"contents":"The Jackson API is one of the best JSON parsers in Java. Spring integrates well with Jackson, and with every new Spring release, newer Jackson features get incorporated, making the Spring Jackson support more flexible and powerful. In this article, we will discuss one such annotation, @JsonView, which is supported from Spring version 4.x and above. 
To know more about Jackson improvements in Spring, refer to this blog post.\n Example Code This article is accompanied by a working code example on GitHub. What is @JsonView Often we come across situations where we have a model object containing various fields, and we need to expose different views of the same object depending on the caller. Traditionally, we would create different model objects catering to each of the scenarios. @JsonView is an annotation that is inspired by how database views work. It helps hide fields and create different views of the same model object simplifying the process of exposing only the required fields to the caller.\nSteps to create Json View Step 1: Define the view as a class or interface.\nStep 2: Use the class or interface with @JsonView annotations in models or DTOs\nStep 3: Annotate the controller class methods or @RequestBody params with the view to be used for serializing or deserializing the object.\nIn the further sections, we will take a look at a few examples to understand its usage.\nUse cases for @JsonView Protect sensitive information being exposed public class User { @JsonView(Views.ExternalView) private String name; @JsonView(Views.ExternalView) private String address; @JsonView(Views.ExternalView) private String dob; @JsonView(Views.InternalView) private String loginName; @JsonView(Views.InternalView) private String loginPassword; private String crnNumber; /* More code here */ } public class Views { public static interface ExternalView { } public static interface InternalView extends ExternalView { } } As seen from the example above, Json Views help segregate confidential information from the basic ones by creating separate views within the same model.\nAllows control over the data exposed public class User { @JsonView(Views.UserSummary.class) private String firstName; @JsonView(Views.UserSummary.class) private String lastName; @JsonView(Views.UserSummary.class) private String address; 
@JsonView(Views.UserSummary.class) private String suburb; @JsonView(Views.UserSummary.class) private String mobileNo; @JsonView(Views.UserDetailedSummary.class) private String ssnNumber; @JsonView(Views.UserDetailedSummary.class) private boolean hasBroadband; @JsonView(Views.UserDetailedSummary.class) private String broadbandConnDate; } public class Views { public static interface UserSummary { } public static interface UserDetailedSummary { } } In this example, we have created hierarchical views to have more control over the data serialized. Here the view UserSummary provides basic user details. The view UserDetailedSummary gives a more detailed view.\nSeparate views for HTTP Request Methods public class User { @JsonView(Views.GetView.class) private String loginName; @JsonView(Views.GetView.class) private String firstName; @JsonView(Views.GetView.class) private String lastName; @JsonView(Views.PatchView.class) private String mobileNo; } public class Views { public static interface PatchView { } public static interface GetView extends PatchView { } } For the above views to apply to the HTTP methods, we will define our controller methods as below:\n@RestController public class UserController { @PostMapping(path = \u0026#34;/userdetails\u0026#34;) public ResponseEntity\u0026lt;User\u0026gt; post( @RequestBody @JsonView(value = Views.GetView.class) User user) { return ResponseEntity.status(HttpStatus.CREATED).body(savedUser); } @PatchMapping(path = \u0026#34;/userdetails/{userId}\u0026#34;) public ResponseEntity\u0026lt;?\u0026gt; patch( @RequestBody @JsonView(value = Views.PatchView.class) User user) { return ResponseEntity.status(HttpStatus.NO_CONTENT).build(); } } This configuration will allow only the @JsonView mapped fields to be updated in the POST and PATCH requests respectively. 
Since the GetView extends PatchView, the PATCH request will update only the PatchView defined fields (other fields will be ignored) while the POST request will update both PatchView and GetView defined fields.\nIn the further sections, we will look at a sample Spring Boot application to understand how to use @JsonView in the context of the use cases defined above.\nWhat is Serialization and Deserialization  Serialization is the process of converting an object into a stream of bytes. Deserialization is the process of converting the serialized form of an object back to a copy of the original object.  Serialize and Deserialize objects in a Spring Boot Application Serialization and deserialization form the core of REST APIs. Spring Boot internally uses Jackson\u0026rsquo;s ObjectMapper class to perform serialization and deserialization.\nDeserialization example:\nSerialization example:\nSpring Boot defaults for @JsonView configuration The ObjectMapper class uses MapperFeature.DEFAULT_VIEW_INCLUSION to determine how the JsonView annotation needs to behave. This configuration will determine whether the properties that are not annotated with @JsonView should be included during serialization and deserialization.\nFor the sample application, we will use the Spring Boot version 2.7.5 that internally uses Jackson 2.13. 
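The definitions of serialization and deserialization above can be illustrated with a minimal plain-Java round trip. This is only a sketch of the general concept using the JDK's built-in object streams; Spring Boot itself performs the equivalent in JSON form via Jackson's ObjectMapper, and the User record here is a hypothetical model:

```java
import java.io.*;

public class SerializationDemo {

    // A simple serializable model object (hypothetical, for illustration only)
    record User(String name, int age) implements Serializable {}

    // Serialization: convert an object into a stream of bytes
    static byte[] toBytes(Object obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.toByteArray();
    }

    // Deserialization: convert the bytes back into a copy of the original object
    static Object fromBytes(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        User original = new User("Rob", 42);
        User copy = (User) fromBytes(toBytes(original));
        // The deserialized object is an equal copy, not the same instance
        System.out.println(copy.equals(original) && copy != original);
    }
}
```

The same object-to-bytes-and-back principle applies when Jackson serializes a model to a JSON string and deserializes it again.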
By default, the MapperFeature.DEFAULT_VIEW_INCLUSION is set to true which means when enabling a JSON View, non-annotated fields also get serialized.\nWe can also verify the configuration defaults when the JacksonAutoConfiguration class gets registered using the below test:\n@SpringBootTest public class JacksonAutoConfigTest { private AnnotationConfigApplicationContext context; @Test public void defaultObjectMapperBuilder() throws Exception { this.context.register(JacksonAutoConfiguration.class); this.context.refresh(); Jackson2ObjectMapperBuilder builder = this.context.getBean(Jackson2ObjectMapperBuilder.class); ObjectMapper mapper = builder.build(); assertTrue(MapperFeature.DEFAULT_VIEW_INCLUSION.enabledByDefault()); assertFalse(mapper.getDeserializationConfig().isEnabled( MapperFeature.DEFAULT_VIEW_INCLUSION)); assertFalse(mapper.getSerializationConfig().isEnabled( MapperFeature.DEFAULT_VIEW_INCLUSION)); } } SerializationConfig and DeserializationConfig classes contain baseline configurations for serialization and deserialization processes respectively. When the ObjectMapper\u0026rsquo;s MapperFeature.DEFAULT_VIEW_INCLUSION value is enabled, it automatically applies to both serialization and deserialization process. We can override this value to apply to either serialization only or deserialization only (if required by enabling or disabling the MapperFeature configuration) using SerializationConfig and DeserializationConfig classes respectively. We will take a look at its configuration in the further sections.\nDifference between @JsonView and @JsonIgnore  @JsonIgnoreProperties used at the class level is used to ignore multiple fields during both serialization and deserialization process. @JsonIgnore can be used at getter or setter for a property to ignore the fields during deserialization or serialization respectively. @JsonView is an enhancement over @JsonIgnore since we can selectively decide if a field needs to be ignored or not for a particular API.   
Using @JsonView with Spring Boot Let\u0026rsquo;s look at a sample User application to demonstrate the various ways in which @JsonView annotation can be used. This application is configured to run on port 8083.\nmvnw clean verify spring-boot:run (for Windows) ./mvnw clean verify spring-boot:run (for Linux) Let\u0026rsquo;s first define the views.\npackage com.reflectoring.userdetails.persistence; public class Views { // For external user  public static interface ExternalView { } // For internal user  public static interface InternalView extends ExternalView { } // Basic User Details  public static interface UserSummary { } // Additional User Details  public static interface UserDetailedSummary extends UserSummary { } // Default fields mapped for GET requests  public static interface GetView { } // Allowed fields for PATCH requests  public static interface PatchView { } } In the further sections, we will take a look at how to use @JsonView to cater to each of the usecases we previously looked at.\nLet us define the model UserData class.\npublic class UserData { @JsonView(value = {Views.GetView.class, Views.UserSummary.class, Views.ExternalView.class}) private long id; @JsonView(value = {Views.GetView.class, Views.UserSummary.class, Views.ExternalView.class}) private String firstName; @JsonView(value = {Views.GetView.class, Views.UserSummary.class, Views.ExternalView.class}) private String lastName; @JsonView(value = {Views.GetView.class, Views.UserSummary.class, Views.ExternalView.class}) private String dob; private boolean internalUser; private String additionalData; @JsonView(value = {Views.GetView.class, Views.InternalView.class}) private String loginId; @JsonView(value = {Views.GetView.class, Views.InternalView.class}) private String loginPassword; @JsonView(value = {Views.GetView.class, Views.InternalView.class}) private String ssnNumber; // More fields here  // Code for getters and setters } As seen above, depending on our use case we have defined fields to 
have multiple views. Next, let us take a look at how the controller class will use them.\n@RestController @RequestMapping(\u0026#34;/internal\u0026#34;) public class InternalUserController { @GetMapping(\u0026#34;/users\u0026#34;) @JsonView(Views.GetView.class) public ResponseEntity\u0026lt;List\u0026lt;UserData\u0026gt;\u0026gt; getAllUsers( @RequestParam(required = false) String loginId) { if (Objects.isNull(loginId)) { return ResponseEntity.ok().body(userService.getAllUsers(true)); } else { return ResponseEntity.ok().body(List.of(userService.getUser(loginId))); } } } As seen above, the internal/users API uses the GetView class. Here, since we haven\u0026rsquo;t explicitly autowired the ObjectMapper class, the default configuration will apply, and we get JSON response as below:\nAs seen from the response, the GetView configured fields and the fields that do not have any @JsonView annotation are included.\nprivate boolean internalUser; @JsonIgnore public boolean isInternalUser() { return isInternalUser; } In this example, we do not see the internalUser field in the JSON response since we have added @JsonIgnore to the field getter.\nNow, let us autowire a custom ObjectMapper to explicitly disable MapperFeature.DEFAULT_VIEW_INCLUSION as below:\n@Configuration public class CommonBean { @Bean public ObjectMapper objectMapper() { ObjectMapper mapper = new ObjectMapper(); mapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false); mapper.registerModule(new JavaTimeModule()); mapper.disable(MapperFeature.DEFAULT_VIEW_INCLUSION); return mapper; } } With this configuration, let us execute the same API call again:\nNow, we don\u0026rsquo;t see the additionalData field in the JSON response. Here, mapper.disable(MapperFeature.DEFAULT_VIEW_INCLUSION) applies to both serialization and deserialization process. 
Instead, if we want to disable the inclusion of non-annotated fields only during serialization (leaving deserialization unaffected), we can apply the following configuration. Note that getSerializationConfig().without(...) returns a new immutable config rather than mutating the mapper, so its result must be passed back via setConfig():\n@Configuration public class CommonBean { @Bean public ObjectMapper objectMapper() { ObjectMapper mapper = new ObjectMapper(); mapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false); mapper.registerModule(new JavaTimeModule()); //mapper.disable(MapperFeature.DEFAULT_VIEW_INCLUSION);  mapper.setConfig(mapper.getSerializationConfig().without(MapperFeature.DEFAULT_VIEW_INCLUSION)); //mapper.setConfig(mapper.getDeserializationConfig()  // .without(MapperFeature.DEFAULT_VIEW_INCLUSION));  return mapper; } } Detailed Working of @JsonView Use Cases Allows control over the data exposed Let\u0026rsquo;s define a snippet of our model class definition:\npublic class UserData { @JsonView(value = {Views.GetView.class, Views.UserSummary.class}) private long id; @JsonView(value = {Views.GetView.class, Views.UserSummary.class}) private String firstName; @JsonView(value = {Views.GetView.class, Views.UserSummary.class}) private String lastName; // More UserSummary view fields  @JsonView(Views.UserDetailedSummary.class) private String createdBy; @JsonView(Views.UserDetailedSummary.class) private LocalDate createdDate; @JsonView(Views.UserDetailedSummary.class) private String updatedBy; @JsonView(Views.UserDetailedSummary.class) private LocalDate updatedDate; // More fields } Next, let\u0026rsquo;s take a look at how UserSummary and UserDetailedSummary views are used:\n@RestController @RequestMapping(\u0026#34;/internal\u0026#34;) public class InternalUserController { private static final Logger log = LoggerFactory.getLogger( InternalUserController.class); private final UserService userService; public InternalUserController(UserService userService) { this.userService = userService; } @GetMapping(\u0026#34;/userdetails/all\u0026#34;) @JsonView(Views.UserDetailedSummary.class) public ResponseEntity\u0026lt;UserData\u0026gt; getDetailUsers(@RequestParam String loginId) { 
return ResponseEntity.ok().body(userService.getUser(loginId)); } @GetMapping(\u0026#34;/userdetails\u0026#34;) @JsonView(Views.UserSummary.class) public ResponseEntity\u0026lt;UserData\u0026gt; getUserSummary(@RequestParam String loginId) { return ResponseEntity.ok().body(userService.getUser(loginId)); } } As seen from the JSON responses, /internal/userdetails/all API uses the UserDetailedSummary view to return a detailed response in comparison to /internal/userdetails that uses the UserSummary view having fewer fields.\nSeparate views for HTTP Request Methods We can also create a view that caters to PATCH requests, so that only the defined fields get updated in the downstream system. In our UserData model class, only the below three address fields cater to PatchView:\npublic class UserData { @JsonView(value = {Views.PatchView.class, Views.UserSummary.class}) private String address; @JsonView(value = {Views.PatchView.class, Views.UserSummary.class}) private String suburb; @JsonView(value = {Views.PatchView.class, Views.UserSummary.class}) private String city; } Let\u0026rsquo;s fire a GET request for a user Rob:\nNext, let\u0026rsquo;s make a PATCH request to change Rob\u0026rsquo;s address. 
In this process, let\u0026rsquo;s try to change a few other details too: When the PATCH request is made, we can see that only the PatchView fields have been updated and the other fields ignored.\nFor this PATCH request to apply, we need to add @JsonView along with the @RequestBody parameter:\n@PatchMapping(\u0026#34;/users\u0026#34;) public ResponseEntity\u0026lt;UserData\u0026gt; updateAddress(@RequestParam String loginId, @RequestBody @JsonView(Views.PatchView.class) UserData addressData) { return ResponseEntity.ok().body(userService.updateAddress(loginId, addressData)); } Thus, we can use @JsonView to control which fields need to be updated in our database.\nProtect sensitive information being exposed In our example, we have created different views for internal users (InternalView) and external users (ExternalView) so that confidential details are not exposed to the external users.\npublic class UserData { @JsonView(value = {Views.GetView.class, Views.ExternalView.class}) private long id; @JsonView(value = {Views.GetView.class, Views.ExternalView.class}) private String firstName; @JsonView(value = {Views.GetView.class, Views.ExternalView.class}) private String lastName; // More ExternalView fields  @JsonView(value = {Views.GetView.class, Views.InternalView.class}) private String loginId; @JsonView(value = {Views.GetView.class, Views.InternalView.class}) private String loginPassword; @JsonView(value = {Views.GetView.class, Views.InternalView.class}) private String ssnNumber; // More fields here } When we add those views to our controllers:\n@RestController @RequestMapping(\u0026#34;/internal\u0026#34;) public class InternalUserController { private static final Logger log = LoggerFactory.getLogger( InternalUserController.class); private final UserService userService; public InternalUserController(UserService userService) { this.userService = userService; } @GetMapping(\u0026#34;/users\u0026#34;) @JsonView(Views.InternalView.class) public 
ResponseEntity\u0026lt;List\u0026lt;UserData\u0026gt;\u0026gt; getAllUsers( @RequestParam(required = false) String loginId) { if (Objects.isNull(loginId)) { return ResponseEntity.ok().body(userService.getAllUsers(true)); } else { return ResponseEntity.ok().body(List.of(userService.getUser(loginId))); } } } @RestController @RequestMapping(\u0026#34;/external\u0026#34;) public class ExternalUserController { private static final Logger log = LoggerFactory.getLogger( ExternalUserController.class); private final UserService userService; public ExternalUserController(UserService userService) { this.userService = userService; } @GetMapping(\u0026#34;/users\u0026#34;) @JsonView(Views.ExternalView.class) public ResponseEntity\u0026lt;List\u0026lt;UserData\u0026gt;\u0026gt; getExtUsers( @RequestParam(required = false) String loginId) { if (Objects.isNull(loginId)) { return ResponseEntity.ok().body(userService.getAllUsers(false)); } else { return ResponseEntity.ok().body( List.of(userService.getUser(loginId, false))); } } } The Output JSON response looks like this.\nInternal View:\nExternal View:\nAs seen from the JSON responses, the internal view exposes more user information than the external view.\nTesting with @JsonView With the right ObjectMapper configuration, we can write tests to verify if the objects were serialized and deserialized as expected. 
Let\u0026rsquo;s consider this sample test:\n@SpringBootTest public class JsonViewTest { @Test public void serializeUserSummaryViewTest() throws JsonProcessingException { final UserData mockedUser = MockedUsersUtility.getMockedUserData(); final ObjectMapper objectMapper = new ObjectMapper(); objectMapper.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, false); final String serializedValue = objectMapper .writerWithView(Views.UserSummary.class) .writeValueAsString(mockedUser); final List\u0026lt;String\u0026gt; expectedFields = Arrays.asList( \u0026#34;createdBy\u0026#34;, \u0026#34;createdDate\u0026#34;, \u0026#34;updatedBy\u0026#34;, \u0026#34;updatedDate\u0026#34;, \u0026#34;additionalData\u0026#34;, \u0026#34;loginId\u0026#34;, \u0026#34;loginPassword\u0026#34;, \u0026#34;ssnNumber\u0026#34;); expectedFields.stream().forEach(field -\u0026gt; { assertFalse(serializedValue.contains(field)); }); final ObjectMapper objectMapper1 = new ObjectMapper(); objectMapper1.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, true); final String serializedValue1 = objectMapper1 .writerWithView(Views.UserSummary.class) .writeValueAsString(mockedUser); System.out.println(serializedValue1); assertTrue(serializedValue1.contains(\u0026#34;additionalData\u0026#34;)); } } To test object serialization, we have configured the ObjectMapper to have MapperFeature.DEFAULT_VIEW_INCLUSION set to false. The mockedUser object has all the UserData properties set to mock values. Using the writerWithView(Views.UserSummary.class), we can verify that the object has been serialized to a String for only the properties that are a part of UserSummary view. The same test has been repeated with MapperFeature.DEFAULT_VIEW_INCLUSION set to true. 
Here, we can see that the serialized string contains additionalData(property with no @JsonView annotation).\nNow, let\u0026rsquo;s verify the deserialization process:\n@SpringBootTest public class JsonViewTest { @Test public void deserializeUserSummaryViewTest() throws JsonProcessingException { final ObjectMapper objectMapper = new ObjectMapper(); objectMapper.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, false); final UserData deserializedValue = objectMapper .readerWithView(Views.UserSummary.class) .forType(UserData.class) .readValue(MockedUsersUtility.userDataObjectAsString()); System.out.println( \u0026#34;Deserialize with DEFAULT_VIEW_INCLUSION as false :\u0026#34; + deserializedValue); assertTrue(Objects.isNull(deserializedValue.getCreatedBy())); assertTrue(Objects.isNull(deserializedValue.getCreatedDate())); assertTrue(Objects.isNull(deserializedValue.getUpdatedBy())); assertTrue(Objects.isNull(deserializedValue.getUpdatedDate())); assertTrue(Objects.isNull(deserializedValue.getAdditionalData())); } } Here, we use the readerWithView(Views.UserSummary.class) method of the ObjectMapper to verify that the deserialization from the string representation of json to UserData object contains values only for the view annotated fields.\nConclusion In this article, we took a closer look at @JsonView annotations to understand the flexibility it provides to expose different views. 
This annotation helps us write clean code and have better control over when and how to expose properties during the serialization and deserialization processes.\n","date":"April 26, 2023","image":"https://reflectoring.io/images/stock/0012-pages-1200x628-branded_hufb8ee3f5c23483830eda0bab846d2b56_155969_650x0_resize_q90_box.jpg","permalink":"/jackson-jsonview-tutorial/","title":"Serialize and Deserialize with Jackson's @JsonView in a Spring Boot Application"},{"categories":["Node"],"contents":"REST is a widely used architectural style for client-server communication, but it has limitations when dealing with clients such as web, iOS, Android, smart devices, etc. All of these have varying demands for data granularity, speed, and performance. GraphQL, on the other hand, excels in this area by allowing clients to define the structure of the data to be returned by the server, as well as allowing multiple resource requests in a single query call, which makes it faster and more efficient.\nIt’s like when a teacher keeps a class register with detailed information about each student, such as their name, age, favorite color, etc.\nNow, let’s say we wanted to know just the names of all the students in our class. Without GraphQL, we might have to ask the teacher to read out the whole list of information, including things we don’t need like age and favorite color. That could be slow and confusing.\nBut with GraphQL, we can ask the teacher to just give us the names of all the students. That way, we only get the information we need and it’s much easier to understand. It’s like a magic spell that helps us get exactly what we want, without having to look through lots of extra stuff.\nIn this article, we\u0026rsquo;ll explore how to build a web server with a GraphQL API (powered by Apollo Server), a MongoDB persistence layer, and Node.js.\n Example Code This article is accompanied by a working code example on GitHub. Why GraphQL?  
GraphQL is declarative: The client, not the server, decides the shape of the query response. GraphQL is strongly typed: During development, a GraphQL query can be guaranteed to be valid within a GraphQL type system. This strongly typed schema reduces GraphQL\u0026rsquo;s error rate and adds additional validation, which helps in smooth debugging and easy detection of bugs by client applications. Fetch Only Requested Data: Developers can use GraphQL to serve client-specified queries that return exactly the data needed. This eliminates problems caused by over-fetching (when a response is more verbose and contains more information than was initially requested) and under-fetching (when a request provides less verbose data than expected and is often less useful than required). Versioning is optional: Versioning is unnecessary with GraphQL. The resource URL or address remains unchanged. You can add new fields and deprecate older ones. When querying a deprecated field, the client receives a deprecation warning. Saves Time and Bandwidth: By allowing multiple resource requests to be made in a single query call, GraphQL reduces the number of network round trips to the server, saving time and bandwidth.  When to Use GraphQL? GraphQL is an excellent solution to a unique problem involving the creation and consumption of APIs. It is most effective in the following scenarios:\n When application bandwidth usage matters, such as on mobile phones, smartwatches, and IoT devices. In large-scale applications with complex data requirements, where GraphQL\u0026rsquo;s ability to provide only the data that is needed for each query can greatly improve performance by reducing network overhead. When an application serves multiple clients with different data requirements, where GraphQL\u0026rsquo;s flexible nature makes it easier to manage and maintain a consistent API across different platforms and devices. 
A hybrid pattern where applications access and manage data from multiple sources. For example, imagine a dashboard that displays data from multiple sources, such as logging services, backends for consumption statistics, and third-party analytics tools that capture end-user interactions.  Prerequisites: To follow along, you\u0026rsquo;ll need the following:\n Basic knowledge of JavaScript Node and npm installed on your computer: A fundamental understanding of Node.js is required. A curious mind.  Getting the Project Started We\u0026rsquo;ll be building a student register application that stores student data using GraphQL APIs.\nLet\u0026rsquo;s begin by pasting the following code in the terminal to create a student-register folder and navigate into it:\nmkdir student-register \u0026amp;\u0026amp; cd student-register To initialize Node.js in our application, run the following command:\nnpm init -y Open the project in your preferred IDE.\nFollowing that, we can proceed to install our application\u0026rsquo;s dependencies.\nIn the terminal, run the following code:\nnpm install @apollo/server graphql-tag mongoose Above we are installing:\n @apollo/server: Apollo Server turns HTTP requests and responses into GraphQL operations. It has plugins, extensible support, and other features. In this article we will be using Apollo Server 4. graphql-tag: In Apollo Server v4 the gql template literal tag is no longer exported, so we will use graphql-tag to parse GraphQL query strings into the standard GraphQL AST. mongoose: a MongoDB object modeling tool.  Next, we\u0026rsquo;ll create the directory and files needed for our application. 
To do this, enter the following command into the application terminal:\nmkdir models touch models/Student.js models/typeDefs.js resolvers.js index.js Our application structure would look like this:\n📂 student-register ┣ 📂 models ┣ Student.js ┣ typeDefs.js ┣ 📂 node_modules ┣ index.js ┣ package-lock.json ┣ package.json ┣ resolvers.js The application is structured so that its modules are cleanly separated. The models directory will contain both our database Student model and the GraphQL typeDefs schema file.\nOur GraphQL schema types are defined in the typeDefs.js file, hence the name typeDefs. Every GraphQL server makes use of a type schema. Schemas are collections of type definitions that also specify the exact queries clients can execute.\nLet\u0026rsquo;s begin by setting up our Apollo GraphQL Server and sending a simple greetings message from the application.\nSetting up the Apollo Server Apollo Server is the most commonly used implementation of the GraphQL specification. A query request is made to the Apollo GraphQL Server by a client application. This query will be parsed and validated against a schema defined in the server. If the query passes the schema validation, then an associated resolver function will be executed.\nResolvers contain logic to fetch and process data from an API or a database.\nHere, let\u0026rsquo;s define our server schema. Paste the following code in the models/typeDefs.js file:\nconst gql = require(\u0026#34;graphql-tag\u0026#34;); const typeDefs = gql` type Query { greetings: String } `; module.exports = { typeDefs }; The type Query is the root of the schema. The above code defines a single field greetings of type String. GraphQL supports scalar types like String, Int, Float, Boolean, and ID, so we can use them directly in our schema.\nWe also used graphql-tag, which allows us to write GraphQL queries and mutations as template literals that are then parsed into an abstract syntax tree (AST) representing the query. 
This AST can then be passed to a GraphQL client or server, such as Apollo. It allows us to embed GraphQL queries and mutations directly into our code in a simple and efficient manner.\nAlso, to access typeDefs outside the module, we exported the typeDefs template using module.exports.\nNext, we need to tell the GraphQL server what to retrieve and how to process our query. To do this we will use resolvers.\nResolvers are responsible for populating data into schema fields. They are functions that handle data for each field defined in the schema.\nTo create resolvers for our application, navigate to the resolvers.js file and paste in the following code:\n// GraphQL Resolvers const resolvers = { Query: { greetings: () =\u0026gt; \u0026#34;GraphQL is Awesome\u0026#34;, }, }; module.exports = { resolvers }; In the code above we created a resolvers object containing a function that returns a string when the greetings field is queried.\nResolver functions act as GraphQL query handlers; each must match a field name defined in the schema.\nIn our case, we have one type definition Query, with the field greetings of type String. As a result, we defined a greetings resolver function that returns a string.\nWe\u0026rsquo;ve defined our schema types and resolver. They can now be used to create our ApolloServer instance.\nGo to the index.js file in the root directory. 
Copy and paste the following code there:\nconst { ApolloServer } = require(\u0026#34;@apollo/server\u0026#34;); const { startStandaloneServer } = require(\u0026#34;@apollo/server/standalone\u0026#34;); const { resolvers } = require(\u0026#34;./resolvers.js\u0026#34;); const { typeDefs } = require(\u0026#34;./models/typeDefs.js\u0026#34;); const server = new ApolloServer({ typeDefs, resolvers }); startStandaloneServer(server, { listen: { port: 4000 }, }).then(({ url }) =\u0026gt; { console.log(`Server ready at ${url}`); }); The index.js file is the entry point for our server.\nIn the code above we imported the ApolloServer constructor and created an instance by passing our typeDefs schema and resolvers as parameters.\nThe Apollo instance is then passed to the startStandaloneServer function.\nThis function creates an Express app, then uses the Apollo instance as middleware and prepares our application to handle incoming requests. The startStandaloneServer function returns a Promise containing the URL on which our server is listening.\nRun the following command in the terminal to start the server:\nnode index.js Go to http://localhost:4000 in a browser, and we will see the GraphQL Playground, where we can execute our GraphQL queries:\nIn the query editor, type in the following code:\nquery Query { greetings } Next, hit the ▶️ Query button and we will see our greetings message:\nNext, we\u0026rsquo;ll be adding arguments to our GraphQL query.\nAdding Arguments to GraphQL Query So far, all we have done is return a simple string. 
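Conceptually, what the server just did is look up the requested top-level field in the resolver map and call the matching function. Here is a minimal, dependency-free sketch of that dispatch (the executeQuery helper below is a hypothetical simplification for illustration, not Apollo's API):

```javascript
// Resolver map, shaped like the one in resolvers.js.
const resolvers = {
  Query: {
    greetings: () => "GraphQL is Awesome",
  },
};

// Hypothetical helper mimicking what a GraphQL server does for a flat
// query: call the matching Query resolver for each requested field.
function executeQuery(fields) {
  const result = {};
  for (const field of fields) {
    result[field] = resolvers.Query[field]();
  }
  return result;
}

// Querying { greetings } yields { greetings: "GraphQL is Awesome" }.
console.log(executeQuery(["greetings"]));
```

The real server also parses and validates the query against the schema before calling any resolver, but the field-to-function mapping works just like this lookup.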
Let\u0026rsquo;s upgrade by adding a new field with a name argument.\nThe application will take in a name as an argument and return a welcome message.\nTo do this we need to update our GraphQL schema and resolvers files:\nRevisit the models/typeDefs.js file and update it as follows:\nconst gql = require(\u0026#34;graphql-tag\u0026#34;); const typeDefs = gql` type Query { greetings: String welcome(name: String!): String } `; module.exports = { typeDefs }; Above, we\u0026rsquo;ve added a welcome field. The welcome field accepts a name argument of data type String!, where ! indicates that the argument is non-nullable. The field returns a String value just like our previous greetings field.\nThen, in the resolvers.js file, we\u0026rsquo;ll create a resolver function for the welcome field.\nUpdate the resolvers.js file with the code below:\n// GraphQL Resolvers const resolvers = { Query: { greetings: () =\u0026gt; \u0026#34;GraphQL is Awesome\u0026#34;, welcome: (parent, args) =\u0026gt; `Hello ${args.name}`, }, }; module.exports = { resolvers }; Every GraphQL resolver function accepts four positional arguments: (parent, args, contextValue, info) Learn more about these arguments by clicking here. Our focus will be on the second positional argument, which is the args argument.\nThe args parameter is an object that holds all of the data passed as query arguments.\nFor example, when we execute the query query{ welcome(name: \u0026quot;Peter Hills\u0026quot;) } the args object passed to the welcome resolver is { \u0026quot;name\u0026quot;: \u0026quot;Peter Hills\u0026quot; }.\nNotice that we extract name from args in the welcome resolver function.\nWe can now test our application. 
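Because a resolver is just a function, we can exercise this argument-passing in isolation before starting the server. A small sketch (the query-to-args translation shown in the comment is something the server does for us):

```javascript
// The welcome resolver, as defined in resolvers.js.
const welcome = (parent, args) => `Hello ${args.name}`;

// For the query { welcome(name: "Peter Hills") } the server builds
// args = { name: "Peter Hills" } and invokes the resolver like this:
const message = welcome(null, { name: "Peter Hills" });
console.log(message); // Hello Peter Hills
```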
Execute the command node index.js in the terminal.\nGo to the GraphQL Playground at http://localhost:4000.\nTo test, we can use the GraphQL Playground, which can help us easily generate queries with parameters:\nNext, we can start creating our CRUD APIs.\nCreate CRUD APIs in Apollo (Graphql) Server A GraphQL operation is either a read or a write. A query is used to read or fetch data, while a mutation is used to write or modify data. Mutations modify data in the database and return a value.\nIn this section, we will use GraphQL queries and mutations with a MongoDB database to create, read, update, and delete student data in our application.\nWorking with MongoDB in Apollo (Graphql) Server We will use MongoDB for our database. It can be installed on either a Mac or Windows machine. Here we will be using MongoDB Community Edition 6.0. We recommend installing with Homebrew (on macOS); to do that, run the following in your terminal:\nxcode-select --install # installing XCode tools brew tap mongodb/brew brew update brew install mongodb-community@6.0 Run the following command in the terminal to start MongoDB on macOS (docs):\nbrew services start mongodb-community@6.0 MongoDB will start and be ready to use.\nTo connect to MongoDB from our application, we will use the mongoose dependency previously installed. Here\u0026rsquo;s a breakdown of how Mongoose interacts with MongoDB:\n To use Mongoose, we first need to connect to our MongoDB database using the mongoose.connect() method, which takes a URI string pointing to the database port. After establishing the connection to the MongoDB database, we can define schemas and models. Schemas are blueprints for datatypes used to validate the data received in the request, while models are classes representing a collection of documents in the MongoDB database. Models serve as an interface to interact with the MongoDB collection. 
Mongoose uses the created model to execute the request using its built-in methods like find(), findOne(), updateOne(), findById(), etc.  To establish connection with MongoDB using Mongoose, add the following code to the index.js file:\nconst { ApolloServer } = require(\u0026#34;@apollo/server\u0026#34;); const { startStandaloneServer } = require(\u0026#34;@apollo/server/standalone\u0026#34;); const mongoose = require(\u0026#34;mongoose\u0026#34;); const { resolvers } = require(\u0026#34;./resolvers.js\u0026#34;); const { typeDefs } = require(\u0026#34;./models/typeDefs.js\u0026#34;); const MONGO_URI = \u0026#34;mongodb://localhost:27017/student-register\u0026#34;; // Database connection mongoose .connect(MONGO_URI, { useNewUrlParser: true, useUnifiedTopology: true, }) .then(() =\u0026gt; { console.log(`Db Connected`); }) .catch(err =\u0026gt; { console.log(err.message); }); const server = new ApolloServer({ typeDefs, resolvers }); startStandaloneServer(server, { listen: { port: 4000 }, }).then(({ url }) =\u0026gt; { console.log(`Server ready at ${url}`); }); Above we defined MONGO_URI which points to our MongoDB database. MongoDB by default connects on port 27017. The last part of the MONGO_URI string is our database name. We created a database connection using our MongoDB URL.\nCreating Student Model We can now map to our MongoDB collection by using mongoose to create schema and model. 
To create a Student model for our application, navigate to the models/Student.js file and paste the following code:\nconst mongoose = require(\u0026#34;mongoose\u0026#34;); const Student = mongoose.model(\u0026#34;Student\u0026#34;, { firstName: String, lastName: String, age: Number, }); module.exports = { Student }; Above, we created the Student model, which serves as a blueprint for storing student data in our database.\nFinally, we can begin CRUD operations in our application using GraphQL queries and mutations.\nCreate Student API To create a new student record using GraphQL, we need to add an object type and a Mutation type to our schema.\nUpdate the models/typeDefs.js file:\nconst gql = require(\u0026#34;graphql-tag\u0026#34;); const typeDefs = gql` type Query { hello: String welcome(name: String): String } # Student object type Student { id: ID firstName: String lastName: String age: Int } # Mutation type Mutation { create(firstName: String, lastName: String, age: Int): Student } `; module.exports = { typeDefs }; In the above code, we created a Student object type. An object type is a data type that represents an object; it consists of fields that define the properties of the object. This defines the structure of the data that can be returned in a GraphQL API.\nWe want our Student type to be able to return the id, firstName, lastName, and age.\nMutations are in a separate block in the schema. 
We added a create mutation that takes firstName, lastName, and age arguments and returns the Student object.\nNow, we need to implement a resolver for our create mutation field.\nPaste the following code in the resolvers.js file:\nconst { Student } = require(\u0026#34;./models/Student.js\u0026#34;); const resolvers = { Query: { hello: () =\u0026gt; \u0026#34;GraphQL is Awesome\u0026#34;, welcome: (parent, args) =\u0026gt; `Hello ${args.name}`, }, Mutation: { create: async (parent, args) =\u0026gt; { const { firstName, lastName, age } = args; const newStudent = new Student({ firstName, lastName, age, }); await newStudent.save(); return newStudent; }, }, }; module.exports = { resolvers }; In the resolver, we added a separate Mutation block with a create function. The create function adds and saves a new student to the database.\nCreate a student in GraphQL Playground:\nGet Students Details API We can fetch all students or a single student\u0026rsquo;s details in GraphQL by querying the Student model.\nTo do this we will update our models/typeDefs.js and resolvers.js files:\nmodels/typeDefs.js file:\nconst gql = require(\u0026#34;graphql-tag\u0026#34;); const typeDefs = gql` type Query { hello: String welcome(name: String): String students: [Student] #return array of students student(id: ID): Student #return student by id } type Student { id: ID firstName: String lastName: String age: Int } type Mutation { create(firstName: String, lastName: String, age: Int): Student } `; module.exports = { typeDefs }; In the above code, we are adding two new queries to our schema type Query. 
A students query returns an array of Student elements, and a student query returns a single Student object fetched by id.\nNext, update the resolvers.js file:\nconst { Student } = require(\u0026#34;./models/Student.js\u0026#34;); // GraphQL Resolvers const resolvers = { Query: { hello: () =\u0026gt; \u0026#34;Hello from Reflectoring Blog\u0026#34;, welcome: (parent, args) =\u0026gt; `Hello ${args.name}`, students: async () =\u0026gt; await Student.find({}), student: async (parent, args) =\u0026gt; await Student.findById(args.id), }, Mutation: { create: async (parent, args) =\u0026gt; { const newStudent = new Student({ firstName: args.firstName, lastName: args.lastName, age: args.age, }); await newStudent.save(); return newStudent; }, }, }; module.exports = { resolvers }; Note that the field names passed to the Student constructor must match those defined in the Student model (firstName, lastName, age); otherwise Mongoose will silently drop the values. In the resolver file above we are adding two new functions: a students function to get an array of all students and a student function to return a single student\u0026rsquo;s details.\nWe can now use Playground to query for:\n All students:   One Student:  Update Student Details API Editing or updating data is almost like creating; both are mutation queries.\nThe models/typeDefs.js and resolvers.js files will need to be updated to include an update schema and function.\nTo add the update schema, copy and paste the following code into the models/typeDefs.js file:\nconst gql = require(\u0026#34;graphql-tag\u0026#34;); const typeDefs = gql` type Query { hello: String welcome(name: String): String students: [Student] #return array of students student(id: ID): Student #return student by id } type Student { id: ID firstName: String lastName: String age: Int } type Mutation { create(firstName: String, lastName: String, age: Int): Student update(id: ID, firstName: String, lastName: String, age: Int): Student } `; module.exports = { typeDefs }; In the code block above we added an update type to our type Mutation, which takes an id and the new student data as arguments and returns a Student 
object.\nUpdate the resolvers.js file as follows:\nconst { Student } = require(\u0026#34;./models/Student.js\u0026#34;); // GraphQL Resolvers const resolvers = { Query: { hello: () =\u0026gt; \u0026#34;Hello from Reflectoring Blog\u0026#34;, welcome: (parent, args) =\u0026gt; `Hello ${args.name}`, students: async () =\u0026gt; await Student.find({}), student: async (parent, args) =\u0026gt; await Student.findById(args.id), }, Mutation: { create: async (parent, args) =\u0026gt; { const { firstName, lastName, age } = args; const newStudent = new Student({ firstName, lastName, age, }); await newStudent.save(); return newStudent; }, update: async (parent, args) =\u0026gt; { const { id } = args; const result = await Student.findByIdAndUpdate(id, args, { new: true }); return result; }, }, }; module.exports = { resolvers }; We\u0026rsquo;ve added an update function to our resolvers above. This function looks in the database for a student with the same id as the argument id and updates the student\u0026rsquo;s details. We pass the { new: true } option because findByIdAndUpdate() would otherwise return the document as it was before the update.\nNow we should be able to edit student details; we can use the GraphQL Playground to do this:\nDelete Student Details API Lastly, we are going to attempt deleting students from our database. The delete mutation is similar to the create and update mutations from the previous sections. 
We simply require a mutation that takes the id of the student data to be deleted.\nTo add the delete feature to our application, update the schema in models/typeDefs.js by adding a delete mutation that takes an id argument and returns the student object if successful:\nconst gql = require(\u0026#34;graphql-tag\u0026#34;); const typeDefs = gql` type Query { hello: String welcome(name: String): String students: [Student] #return array of students student(id: ID): Student #return student by id } type Student { id: ID firstName: String lastName: String age: Int } type Mutation { create(firstName: String, lastName: String, age: Int): Student update(id: ID, firstName: String, lastName: String, age: Int): Student delete(id: ID): Student } `; module.exports = { typeDefs }; Update the resolvers.js file to implement the delete resolver function:\nconst { Student } = require(\u0026#34;./models/Student.js\u0026#34;); // GraphQL Resolvers const resolvers = { Query: { hello: () =\u0026gt; \u0026#34;Hello from Reflectoring Blog\u0026#34;, welcome: (parent, args) =\u0026gt; `Hello ${args.name}`, students: async () =\u0026gt; await Student.find({}), student: async (parent, args) =\u0026gt; await Student.findById(args.id), }, Mutation: { create: async (parent, args) =\u0026gt; { const { firstName, lastName, age } = args; const newStudent = new Student({ firstName, lastName, age, }); await newStudent.save(); return newStudent; }, update: async (parent, args) =\u0026gt; { const { id } = args; const updatedStudent = await Student.findByIdAndUpdate(id, args, { new: true }); if (!updatedStudent) { throw new Error(`Student with ID ${id} not found`); } return updatedStudent; }, delete: async (parent, args) =\u0026gt; { const { id } = args; const deletedStudent = await Student.findByIdAndDelete(id); if (!deletedStudent) { throw new Error(`Student with ID ${id} not found`); } return deletedStudent; }, }, }; module.exports = { resolvers }; Grab a student id from the database, then delete the student in the GraphQL 
playground:\nGreat news! Our CRUD APIs on the backend are now operational!\nTo ensure that everything is working properly, double-check that students are being created, deleted, and updated in the database.\nConclusion: Using GraphQL with Node.js lets us create flexible and efficient APIs with a better developer experience and improved performance. Apollo Server simplifies schema creation, resolvers, and request handling. To learn more about Apollo Server, check out the Apollo docs.\nYou can refer to all the source code used in the article on GitHub.\n","date":"March 22, 2023","image":"https://reflectoring.io/images/stock/0129-node-graphql-1200x628-branded_hu946ea48a063c7bb127fbc15b48596a23_203637_650x0_resize_q90_box.jpg","permalink":"/tutorial-graphql-apollo-server-nodejs-mongodb/","title":"Build CRUD APIs Using Apollo Server(Graphql), MongoDB and Node.Js"},{"categories":["Spring"],"contents":"Spring Security is a framework that helps secure enterprise applications. By integrating with Spring MVC, Spring Webflux or Spring Boot, we can create a powerful and highly customizable authentication and access-control framework. In this article, we will explain the core concepts and take a closer look at the default configurations that Spring Security provides and how they work. We will further try to customize them and analyse their impact on a sample Spring Boot application.\n Example Code This article is accompanied by a working code example on GitHub. Creating a Sample Application Let\u0026rsquo;s begin by building a Spring Boot application from scratch and look at how Spring configures and provides security. 
Let\u0026rsquo;s create an application from spring starter and add the minimum required dependencies.\nOnce the project is generated, we will import it into our IDE and configure it to run on port 8083.\nmvnw clean verify spring-boot:run (for Windows) ./mvnw clean verify spring-boot:run (for Linux) On application startup, we should see a login page.\nThe console logs print the default password that was randomly generated as a part of the default security configuration:\nWith the default username user and the default password (from the logs), we should be able to login to the application. We can override these defaults in our application.yml:\nspring: security: user: name: admin password: passw@rd Now, we should be able to login with user admin and password passw@rd.\nStarter dependency versions Here, we have used Spring Boot version 2.7.5. Based on this version, Spring Boot internally resolves Spring Security version as 5.7.4. However, we can override these versions if required in our pom.xml as below:\n\u0026lt;properties\u0026gt; \u0026lt;spring-security.version\u0026gt;5.2.5.RELEASE\u0026lt;/spring-security.version\u0026gt; \u0026lt;/properties\u0026gt;  Understanding the Security Components To understand how the default configuration works, we first need to take a look at the following:\n Servlet Filters Authentication Authorization  Servlet Filters Let\u0026rsquo;s take a closer look at the console logs on application startup. We see that the DefaultSecurityFilterChain triggers a chain of filters before the request reaches the DispatcherServlet. The DispatcherServlet is a key component in the web framework that handles incoming web requests and dispatches them to the appropriate handler for processing.\no.s.s.web.DefaultSecurityFilterChain : Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2fd954f, org.springframework.security.web.context.request.async. 
WebAsyncManagerIntegrationFilter@5731d3a, org.springframework.security.web.context.SecurityContextPersistenceFilter@5626d18c, org.springframework.security.web.header.HeaderWriterFilter@52b3bf03, org.springframework.security.web.csrf.CsrfFilter@30c4e352, org.springframework.security.web.authentication.logout.LogoutFilter@37ad042b, org.springframework.security.web.authentication. UsernamePasswordAuthenticationFilter@1e60b459, org.springframework.security.web.authentication.ui. DefaultLoginPageGeneratingFilter@29b40b3, org.springframework.security.web.authentication.ui. DefaultLogoutPageGeneratingFilter@6a0f2853, org.springframework.security.web.authentication.www. BasicAuthenticationFilter@254449bb, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3dc95b8b, org.springframework.security.web.servletapi. SecurityContextHolderAwareRequestFilter@2d55e826, org.springframework.security.web.authentication. AnonymousAuthenticationFilter@1eff3cfb, org.springframework.security.web.session.SessionManagementFilter@462abec3, org.springframework.security.web.access.ExceptionTranslationFilter@6f8aba08, org.springframework.security.web.access.intercept. FilterSecurityInterceptor@7ce85af2] To understand how the FilterChain works, let\u0026rsquo;s look at the flowchart from the Spring Security documentation\nNow, let\u0026rsquo;s look at the core components that take part in the filter chain:\n DelegatingFilterProxy It is a servlet filter provided by Spring that acts as a bridge between the Servlet container and the Spring Application Context. The DelegatingFilterProxy class is responsible for wiring any class that implements javax.servlet.Filter into the filter chain. FilterChainProxy Spring security internally creates a FilterChainProxy bean named springSecurityFilterChain wrapped in DelegatingFilterProxy. The FilterChainProxy is a filter that chains multiple filters based on the security configuration. 
Thus, the DelegatingFilterProxy delegates the request to the FilterChainProxy, which determines the filters to be invoked. SecurityFilterChain: The security filters in the SecurityFilterChain are beans registered with FilterChainProxy. An application can have multiple SecurityFilterChain instances. FilterChainProxy uses the RequestMatcher interface on HttpServletRequest to determine which SecurityFilterChain needs to be called.  Additional Notes on Spring Security Chain  The default fallback filter chain in a Spring Boot application has a request matcher /**, meaning it will apply to all requests. The default filter chain has a predefined @Order SecurityProperties.BASIC_AUTH_ORDER. We can exclude this complete filter chain by setting security.basic.enabled=false. We can define the ordering of multiple filter chains. For instance, to call a custom filter chain before the default one, we need to set a lower @Order. Example @Order(SecurityProperties.BASIC_AUTH_ORDER - 10). We can plug in a custom filter within the existing filter chain (to be called at all times or for specific URL patterns) using the FilterRegistrationBean or by extending OncePerRequestFilter. For the defined custom filter, if no @Order is specified, it is the last in the security chain. (It has the default order LOWEST_PRECEDENCE.) We can also use the methods addFilterAfter(), addFilterAt() and addFilterBefore() to have more control over the ordering of our defined custom filter.  We will define custom filters and filter chains in the later sections.\n Now that we know that Spring Security provides us with a default filter chain that calls a set of predefined and ordered filters, let\u0026rsquo;s try to briefly understand the roles of a few important ones in the chain.\n org.springframework.security.web.csrf.CsrfFilter : This filter applies CSRF protection by default to all REST endpoints. To learn more about CSRF capabilities in Spring Boot and Spring Security, refer to this article. 
org.springframework.security.web.authentication.logout.LogoutFilter : This filter gets called when the user logs out of the application. The default registered instances of LogoutHandler are called that are responsible for invalidating the session and clearing the SecurityContext. Next, the default implementation of LogoutSuccessHandler redirects the user to a new page (/login?logout). org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter : Validates the username and password for the URL (/login) with the default credentials provided at startup. org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter : Generates the default login page html at /login org.springframework.security.web.authentication.ui.DefaultLogoutPageGeneratingFilter : Generates the default logout page html at /login?logout org.springframework.security.web.authentication.www.BasicAuthenticationFilter : This filter is responsible for processing any request that has an HTTP request header of Authorization, Basic Authentication scheme, Base64 encoded username-password. On successful authentication, the Authentication object will be placed in the SecurityContextHolder. org.springframework.security.web.authentication.AnonymousAuthenticationFilter : If no Authentication object is found in the SecurityContext, it creates one with the principal anonymousUser and role ROLE_ANONYMOUS. org.springframework.security.web.access.ExceptionTranslationFilter : Handles AccessDeniedException and AuthenticationException thrown within the filter chain. For AuthenticationException instances of AuthenticationEntryPoint are required to handle responses. For AccessDeniedException, this filter will delegate to AccessDeniedHandler whose default implementation is AccessDeniedHandlerImpl. 
org.springframework.security.web.access.intercept.FilterSecurityInterceptor : This filter is responsible for authorising every request that passes through the filter chain before the request hits the controller.  Authentication Authentication is the process of verifying a user\u0026rsquo;s credentials and ensuring their validity. Let\u0026rsquo;s understand how the spring framework validates the default credentials created:\nStep.1: UsernamePasswordAuthenticationFilter gets called as a part of the security filter chain when FormLogin is enabled i.e when the request is made to the URL /login. This class is a specific implementation of the base AbstractAuthenticationProcessingFilter. When an authentication attempt is made, the filter forwards the request to an AuthenticationManager.\nStep.2: UsernamePasswordAuthenticationToken is an implementation of Authentication interface. This class specifies that the authentication mechanism must be via username-password.\nStep.3: With the authentication details obtained, an AuthenticationManager tries to authenticate the request with the help of an appropriate implementation of AuthenticationProvider and a fully authenticated Authentication object is returned. The default implementation is the DaoAuthenticationProvider which retrieves user details from UserDetailsService. If authentication fails, AuthenticationException is thrown.\nStep.4: The loadUserByUsername(username) method of the UserDetailsService returns UserDetails object that contains user data. If no user is found with the given username, UsernameNotFoundException is thrown.\nStep.5: On successful authentication, SecurityContext is updated with the currently authenticated user.\nTo understand the outlined steps above, let\u0026rsquo;s take a look at the authentication architecture as defined in the Spring Security documentation.\nThe ProviderManager is the most common implementation of AuthenticationManager. 
As seen in the diagram, the ProviderManager delegates the request to a list of configured AuthenticationProviders, each of which is queried to see if it can perform the authentication. If the authentication fails with ProviderNotFoundException, which is a special type of AuthenticationException, it indicates that the ProviderManager does not support the type of Authentication passed. This architecture allows us to configure multiple authentication types within the same application.\nThe AuthenticationEntryPoint is an interface that acts as a point of entry for authentication that determines if the client has included valid credentials when requesting a resource. If not, an appropriate implementation of the interface is used to request credentials from the client.\nNow, let\u0026rsquo;s understand how the Authentication object ties up the entire authentication process. The Authentication interface serves the following purposes:\n Provides user credentials to the AuthenticationManager. Represents the current authenticated user in SecurityContext. Every instance of Authentication must contain   principal - an instance of UserDetails that identifies a user. credentials - the proof of identity, typically a password. authorities - instances of GrantedAuthority. GrantedAuthority objects play an important role in the authorization process.  Additional Notes on Spring Authentication  There could be scenarios where we need Spring Security for authorization alone, since the user has already been reliably authenticated by an external system before our application was accessed. Refer to the pre-authentication documentation to understand how to configure and handle such scenarios. Spring allows various means to customize the authentication mechanism. We will take a look at a couple of them in the later sections.   
Authorization Authorization is the process of ensuring that the user or system accessing a resource has valid permissions.\nIn the Spring Security filter chain, the FilterSecurityInterceptor triggers the authorization check. As seen from the order of filter execution, authentication runs before authorization. This filter checks for valid permissions after the user has been successfully authenticated. In case authorization fails, an AccessDeniedException is thrown.\nGranted Authority As seen in the previous section, every user instance holds a list of GrantedAuthority objects. GrantedAuthority is an interface that has a single method:\npublic interface GrantedAuthority extends Serializable { String getAuthority(); } By default, Spring Security uses the concrete GrantedAuthority implementation SimpleGrantedAuthority. The SimpleGrantedAuthority allows us to specify roles as Strings, automatically mapping them into GrantedAuthority instances. The AuthenticationManager is responsible for inserting the GrantedAuthority object list into the Authentication object. The AccessDecisionManager then uses getAuthority() to decide whether authorization is successful.\nGranted Authorities vs Roles Spring Security provides authorization support via both granted authorities and roles, using the hasAuthority() and hasRole() methods respectively. These methods are used for expression-based security and are a part of the interface SecurityExpressionOperations. In most cases, the two methods can be used interchangeably, the most notable difference being that hasRole() need not specify the ROLE_ prefix while hasAuthority() needs the complete string to be explicitly specified. For instance, hasAuthority(\u0026quot;ROLE_ADMIN\u0026quot;) and hasRole(\u0026quot;ADMIN\u0026quot;) perform the same task.\nAdditional Notes on Spring Authorization  Spring allows us to configure method-level security using the @PreAuthorize and @PostAuthorize annotations. 
As the names suggest, they allow us to authorize the user before and after the method execution. Conditions for authorization checks can be specified in Spring Expression Language (SpEL). We will look at a few examples in the further sections. We can configure the authorization rules to use a different prefix (other than ROLE_) by exposing a GrantedAuthorityDefaults bean.   Common Exploit Protection The default Spring Security configuration comes with protection against a variety of attacks enabled out of the box. We will not cover the details of those in this article. You can refer to the Spring documentation for a detailed guide. However, for an in-depth look at the Spring Security configuration for CORS and CSRF, refer to these articles:\n CORS in Spring Security CSRF in Spring Security  Implementing the Security Configuration Now that we are familiar with the details of how Spring Security works, let\u0026rsquo;s understand the configuration setup in our application to handle the various scenarios we briefly touched upon in the previous sections.\nDefault configuration The SpringBootWebSecurityConfiguration class from the org.springframework.boot.autoconfigure.security.servlet package provides a default set of Spring Security configurations for Spring Boot applications. The decompiled version of this class looks like this:\nclass SpringBootWebSecurityConfiguration { @ConditionalOnDefaultWebSecurity static class SecurityFilterChainConfiguration { SecurityFilterChainConfiguration() { } @Bean @Order(2147483642) SecurityFilterChain defaultSecurityFilterChain(HttpSecurity http) throws Exception { ((AuthorizedUrl) http.authorizeRequests().anyRequest()).authenticated(); http.formLogin(); http.httpBasic(); return (SecurityFilterChain) http.build(); } } } Spring uses the above configurations to create the default SecurityFilterChain bean:\n authorizeRequests() restricts access based on RequestMatcher implementations. Here, authorizeRequests().anyRequest() matches all incoming requests. 
To have more control over restricting access, we can specify URL patterns via antMatchers(). authenticated() requires that every matched request be authenticated before proceeding in the filter chain. formLogin() calls the default FormLoginConfigurer class that loads the login page to authenticate via username-password and accordingly redirects to the corresponding failure or success handlers. For a diagrammatic representation of how form login works, refer to the detailed notes in the Spring documentation. httpBasic() calls the HttpBasicConfigurer that sets up defaults to help with basic authentication. To understand in detail, refer to the Spring documentation.  Spring Security with SecurityFilterChain  From Spring Security 5.7.0-M2, the WebSecurityConfigurerAdapter has been deprecated and replaced with SecurityFilterChain, thus moving to component-based security configuration. To understand the differences, refer to this Spring blog post. All examples in this article will make use of the newer configuration that uses SecurityFilterChain.   Common Use cases Now that we understand how the Spring Security defaults work, let\u0026rsquo;s look at a few scenarios and customize the configurations accordingly.\n1. 
Customize default configuration @Configuration @EnableWebSecurity public class SecurityConfiguration { public static final String[] ENDPOINTS_WHITELIST = { \u0026#34;/css/**\u0026#34;, \u0026#34;/\u0026#34;, \u0026#34;/login\u0026#34;, \u0026#34;/home\u0026#34; }; public static final String LOGIN_URL = \u0026#34;/login\u0026#34;; public static final String LOGOUT_URL = \u0026#34;/logout\u0026#34;; public static final String LOGIN_FAIL_URL = LOGIN_URL + \u0026#34;?error\u0026#34;; public static final String DEFAULT_SUCCESS_URL = \u0026#34;/home\u0026#34;; public static final String USERNAME = \u0026#34;username\u0026#34;; public static final String PASSWORD = \u0026#34;password\u0026#34;; @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http.authorizeRequests(request -\u0026gt; request.antMatchers(ENDPOINTS_WHITELIST).permitAll() .anyRequest().authenticated()) .csrf().disable() .formLogin(form -\u0026gt; form .loginPage(LOGIN_URL) .loginProcessingUrl(LOGIN_URL) .failureUrl(LOGIN_FAIL_URL) .usernameParameter(USERNAME) .passwordParameter(PASSWORD) .defaultSuccessUrl(DEFAULT_SUCCESS_URL)); return http.build(); } } Instead of using the spring security login defaults, we can customize every aspect of login:\n loginPage - Customize the default login Page. Here, we have created a custom login.html and its corresponding LoginController class. loginProcessingUrl - The URL that validates username and password. failureUrl - The URL to direct to in case the login fails. defaultSuccessUrl - The URL to direct to on successful login. Here, we have created a custom homePage.html and its corresponding HomeController class. antMatchers() - to filter out the URLs that will be a part of the login process.  
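The failureUrl and defaultSuccessUrl wiring above boils down to a simple redirect decision made by the form-login machinery after the loginProcessingUrl validates the credentials. A plain-Java sketch of that decision, using the same URL constants as the configuration (illustrative only, not Spring code):

```java
// Illustrative model of the redirect decision behind failureUrl() and
// defaultSuccessUrl(). The real work is done by Spring Security's
// success/failure handlers; this just shows the resulting contract.
class LoginRedirectSketch {
    static final String LOGIN_FAIL_URL = "/login?error";
    static final String DEFAULT_SUCCESS_URL = "/home";

    // After the login-processing URL has validated username and password,
    // the browser is redirected to one of the two configured URLs.
    static String redirectAfterLogin(boolean authenticated) {
        return authenticated ? DEFAULT_SUCCESS_URL : LOGIN_FAIL_URL;
    }
}
```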
Similarly, we can customize the logout process too.\n@Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http.authorizeRequests(request -\u0026gt; request.antMatchers(ENDPOINTS_WHITELIST).permitAll() .anyRequest().authenticated()) .csrf().disable() .formLogin(form -\u0026gt; form .loginPage(LOGIN_URL) .loginProcessingUrl(LOGIN_URL) .failureUrl(LOGIN_FAIL_URL) .usernameParameter(USERNAME) .passwordParameter(PASSWORD) .defaultSuccessUrl(DEFAULT_SUCCESS_URL)) .logout(logout -\u0026gt; logout .logoutUrl(\u0026#34;/logout\u0026#34;) .invalidateHttpSession(true) .deleteCookies(\u0026#34;JSESSIONID\u0026#34;) .logoutSuccessUrl(LOGIN_URL + \u0026#34;?logout\u0026#34;)); return http.build(); } Here, when the user logs out, the http session gets invalidated, however the session cookie does not get cleared. Using deleteCookies(\u0026quot;JSESSIONID\u0026quot;) helps avoid session based conflicts.\nFurther, we can manage and configure sessions via Spring Security.\n@Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http.authorizeRequests(request -\u0026gt; request.antMatchers(ENDPOINTS_WHITELIST).permitAll() .anyRequest().authenticated()) .csrf().disable() .formLogin(form -\u0026gt; form .loginPage(LOGIN_URL) .loginProcessingUrl(LOGIN_URL) .failureUrl(LOGIN_FAIL_URL) .usernameParameter(USERNAME) .passwordParameter(PASSWORD) .defaultSuccessUrl(DEFAULT_SUCCESS_URL)) .logout(logout -\u0026gt; logout .logoutUrl(\u0026#34;/logout\u0026#34;) .invalidateHttpSession(true) .deleteCookies(\u0026#34;JSESSIONID\u0026#34;) .logoutSuccessUrl(LOGIN_URL + \u0026#34;?logout\u0026#34;)) .sessionManagement(session -\u0026gt; session .sessionCreationPolicy(SessionCreationPolicy.ALWAYS) .invalidSessionUrl(\u0026#34;/invalidSession.htm\u0026#34;) .maximumSessions(1) .maxSessionsPreventsLogin(true)); return http.build(); } It provides us with the following values for session attribute sessionCreationPolicy:\n 
SessionCreationPolicy.STATELESS - No session will be created or used. SessionCreationPolicy.ALWAYS - A session will always be created if it does not already exist. SessionCreationPolicy.NEVER - A session will never be created. But if a session exists, it will be used. SessionCreationPolicy.IF_REQUIRED - A session will be created if required. (Default Configuration)  Other options include:\n invalidSessionUrl - The URL to redirect to when an invalid session is detected. maximumSessions - Limits the number of active sessions that a single user can have concurrently. maxSessionsPreventsLogin - The default value is false, which indicates that the newly authenticated user is allowed access while the existing user\u0026rsquo;s session expires. true indicates that the user will not be authenticated when SessionManagementConfigurer.maximumSessions(int) is reached. In this case, it will redirect to /invalidSession when multiple logins are detected.  2. Configure Multiple Filter Chains Spring Security allows us to have more than one co-existing security configuration, giving us more control over the application. To demonstrate this, let\u0026rsquo;s create REST endpoints for a Library application that uses an H2 database to store books based on genre. 
Our BookController class will have an endpoint defined as below:\n@GetMapping(\u0026#34;/library/books\u0026#34;) public ResponseEntity\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt; getBooks(@RequestParam String genre) { return ResponseEntity.ok().body(bookService.getBook(genre)); } In order to secure this endpoint, let\u0026rsquo;s use basic auth and configure details in our SecurityConfiguration class:\n@Configuration @EnableWebSecurity @EnableConfigurationProperties(BasicAuthProperties.class) public class SecurityConfiguration { private final BasicAuthProperties props; public SecurityConfiguration(BasicAuthProperties props) { this.props = props; } @Bean @Order(1) public SecurityFilterChain bookFilterChain(HttpSecurity http) throws Exception { http .csrf().disable() .sessionManagement(session -\u0026gt; session .sessionCreationPolicy(SessionCreationPolicy.STATELESS)) .antMatcher(\u0026#34;/library/**\u0026#34;) .authorizeRequests() .antMatchers(HttpMethod.GET, \u0026#34;/library/**\u0026#34;).hasRole(\u0026#34;USER\u0026#34;) .anyRequest().authenticated() .and() .httpBasic() .and() .exceptionHandling(exception -\u0026gt; exception .authenticationEntryPoint(userAuthenticationErrorHandler()) .accessDeniedHandler(new UserForbiddenErrorHandler())); return http.build(); } @Bean public UserDetailsService userDetailsService() { return new InMemoryUserDetailsManager(props.getUserDetails()); } @Bean public AuthenticationEntryPoint userAuthenticationErrorHandler() { UserAuthenticationErrorHandler userAuthenticationErrorHandler = new UserAuthenticationErrorHandler(); userAuthenticationErrorHandler.setRealmName(\u0026#34;Basic Authentication\u0026#34;); return userAuthenticationErrorHandler; } public static final String[] ENDPOINTS_WHITELIST = { \u0026#34;/css/**\u0026#34;, \u0026#34;/login\u0026#34;, \u0026#34;/home\u0026#34; }; public static final String LOGIN_URL = \u0026#34;/login\u0026#34;; public static final String LOGIN_FAIL_URL = LOGIN_URL + \u0026#34;?error\u0026#34;; public 
static final String DEFAULT_SUCCESS_URL = \u0026#34;/home\u0026#34;; public static final String USERNAME = \u0026#34;username\u0026#34;; public static final String PASSWORD = \u0026#34;password\u0026#34;; @Bean @Order(2) public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http.authorizeRequests(request -\u0026gt; request.antMatchers(ENDPOINTS_WHITELIST).permitAll() .anyRequest().authenticated()) .csrf().disable() .antMatcher(\u0026#34;/login\u0026#34;) .formLogin(form -\u0026gt; form .loginPage(LOGIN_URL) .loginProcessingUrl(LOGIN_URL) .failureUrl(LOGIN_FAIL_URL) .usernameParameter(USERNAME) .passwordParameter(PASSWORD) .defaultSuccessUrl(DEFAULT_SUCCESS_URL)) .logout(logout -\u0026gt; logout .logoutUrl(\u0026#34;/logout\u0026#34;) .invalidateHttpSession(true) .deleteCookies(\u0026#34;JSESSIONID\u0026#34;) .logoutSuccessUrl(LOGIN_URL + \u0026#34;?logout\u0026#34;)) .sessionManagement(session -\u0026gt; session .sessionCreationPolicy(SessionCreationPolicy.ALWAYS) .invalidSessionUrl(\u0026#34;/invalidSession\u0026#34;) .maximumSessions(1) .maxSessionsPreventsLogin(true)); return http.build(); } } Let\u0026rsquo;s take a closer look at the code:\n We have two SecurityFilterChain methods bookFilterChain() and filterChain() methods with @Order(1) and @Order(2). Both of them will execute in the mentioned order. Since both filter chains cater to separate endpoints, different credentials exist in application.yml  auth: users: loginadmin: role: admin password: loginpass bookadmin: role: user password: bookpass For Spring Security to utilize these credentials, we will customize UserDetailsService as :\n@Bean public UserDetailsService userDetailsService() { return new InMemoryUserDetailsManager(props.getUserDetails()); } To cater to AuthenticationException and AccessDeniedException, we have customized exceptionHandling() and configured custom classes UserAuthenticationErrorHandler and UserForbiddenErrorHandler.  
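The BasicAuthProperties class itself is not shown in the article; it binds the auth.users entries from application.yml via @ConfigurationProperties. A plain-Java sketch of the mapping it performs, from YAML entries to (username, password, role) triples, might look like this (illustrative; the class name and shape are assumptions based on the configuration shown):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative mapping from the auth.users YAML entries to user records.
// The article's real BasicAuthProperties presumably does this binding via
// @ConfigurationProperties; here we model only the transformation.
class UserMappingSketch {
    record UserEntry(String username, String password, String role) {}

    static List<UserEntry> fromYaml(Map<String, Map<String, String>> users) {
        List<UserEntry> result = new ArrayList<>();
        users.forEach((name, attrs) ->
            // Roles are upper-cased because hasRole("USER") compares against
            // the authority string ROLE_USER.
            result.add(new UserEntry(name, attrs.get("password"),
                                     attrs.get("role").toUpperCase())));
        return result;
    }
}
```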
With this configuration, the Postman response for the REST endpoint looks like this:\nSuccess Response:\nUnauthorized Response:\nForbidden Response:\n3. Additional endpoints secured by default Once Spring Security is configured for a request matcher, any additional endpoints we add are secured by default. For instance, let\u0026rsquo;s add an endpoint to the BookController class:\n@GetMapping(\u0026#34;/library/books/all\u0026#34;) public ResponseEntity\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt; getAllBooks() { return ResponseEntity.ok().body(bookService.getAllBooks()); } For this endpoint to be called successfully, we need to provide basic auth credentials.\nError response when no credentials are passed:\nSuccess response:\n4. Unsecure Specific Endpoints We can specify a list of endpoints that need to be excluded from the security configuration. To achieve this, let\u0026rsquo;s first add another endpoint to our BookController class and add the below configuration:\n@GetMapping(\u0026#34;/library/info\u0026#34;) public ResponseEntity\u0026lt;LibraryInfo\u0026gt; getInfo() { return ResponseEntity.ok().body(bookService.getLibraryInfo()); } @Bean public WebSecurityCustomizer webSecurityCustomizer() { return (web) -\u0026gt; web.ignoring().antMatchers(\u0026#34;/library/info\u0026#34;); } Now, we should be able to hit the endpoint from Postman without passing credentials:\n5. Add Custom Filters Spring provides security by executing a sequence of filters in a chain. In cases where we need to add additional checks to the request before it reaches the controller, Spring Security provides us with the below methods that help us add a custom filter at the desired position in the chain.\n addFilterBefore(Filter filter, Class\u0026lt;? extends Filter\u0026gt; beforeFilter): This method lets us add the custom filter before the specified filter in the chain. addFilterAfter(Filter filter, Class\u0026lt;? 
extends Filter\u0026gt; afterFilter): This method lets us add the custom filter after the specified filter in the chain. addFilterAt(Filter filter, Class\u0026lt;? extends Filter\u0026gt; atFilter): This method lets us add the custom filter at the specified filter in the chain with the same priority. Once the custom filter gets added, both the filters will get called in the filter chain (in no specific order).  Let\u0026rsquo;s take a look at a sample configuration:\n@Configuration @EnableWebSecurity @EnableConfigurationProperties(BasicAuthProperties.class) public class SecurityConfiguration { private final BasicAuthProperties props; public SecurityConfiguration(BasicAuthProperties props) { this.props = props; } @Bean @Order(1) public SecurityFilterChain bookFilterChain(HttpSecurity http) throws Exception { http .csrf().disable() .sessionManagement(session -\u0026gt; session .sessionCreationPolicy(SessionCreationPolicy.STATELESS)) .antMatcher(\u0026#34;/library/**\u0026#34;) .authorizeRequests() .antMatchers(HttpMethod.GET, \u0026#34;/library/**\u0026#34;).hasRole(\u0026#34;USER\u0026#34;) .anyRequest().authenticated() .and() .httpBasic() .and() .exceptionHandling(exception -\u0026gt; exception .authenticationEntryPoint(userAuthenticationErrorHandler()) .accessDeniedHandler(new UserForbiddenErrorHandler())); http.addFilterBefore(customHeaderValidatorFilter(), BasicAuthenticationFilter.class); return http.build(); } @Bean public CustomHeaderValidatorFilter customHeaderValidatorFilter() { return new CustomHeaderValidatorFilter(); } } In order to write a custom filter, we create a class CustomHeaderValidatorFilter that extends a special filter OncePerRequestFilter created for this purpose. 
This makes sure that our filter gets invoked only once for every request.\npublic class CustomHeaderValidatorFilter extends OncePerRequestFilter { private static final Logger log = LoggerFactory.getLogger (CustomHeaderValidatorFilter.class); @Override protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException { log.info(\u0026#34;Custom filter called...\u0026#34;); if (StringUtils.isEmpty(request.getHeader(\u0026#34;X-Application-Name\u0026#34;))) { response.setStatus(HttpServletResponse.SC_FORBIDDEN); response.setContentType(\u0026#34;application/json\u0026#34;); response.getOutputStream().println(new ObjectMapper(). writeValueAsString(CommonException.headerError())); } else { filterChain.doFilter(request, response); } } } Here, we have overridden the doFilterInternal() and added our logic. In this case, the request will proceed in the filter chain only if the required header X-Application-Name is passed in the request. Also, we can verify that this filter gets wired to our SecurityConfiguration class from the logs.\nWill secure Ant [pattern=\u0026#39;/library/**\u0026#39;] with [org.springframework.security.web.session.DisableEncodeUrlFilter@669469c9, org.springframework.security.web.context.request.async. WebAsyncManagerIntegrationFilter@7f39ad3f, org.springframework.security.web.context.SecurityContextPersistenceFilter@1b901f7b, org.springframework.security.web.header.HeaderWriterFilter@64f49b3, org.springframework.security.web.authentication.logout.LogoutFilter@628aea61, com.reflectoring.security.CustomHeaderValidatorFilter@3d40a3b4, org.springframework.security.web.authentication.www. BasicAuthenticationFilter@8d23cd8, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1a1e38ab, org.springframework.security.web.servletapi. SecurityContextHolderAwareRequestFilter@5bfdabf3, org.springframework.security.web.authentication. 
AnonymousAuthenticationFilter@7524125c, org.springframework.security.web.session.SessionManagementFilter@3dc14f80, org.springframework.security.web.access.ExceptionTranslationFilter@58c16efd, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@5ab06829] Here, the filter gets called for all endpoints /library/**. To further restrict it to cater to specific endpoints, we can modify the Filter class as follows:\n@Override protected boolean shouldNotFilter(HttpServletRequest request) throws ServletException { String path = request.getRequestURI(); return path.startsWith(\u0026#34;/library/books/all\u0026#34;); } With this change, for the endpoint /library/books/all the doFilterInternal() method will not be executed. The same concept applies to filters added using the addFilterAt() and addFilterAfter() methods.\n6. Role-based Authorization In the context of Spring Security, authorization occurs after the user is authenticated. In the previous sections, we have looked at an example where we handled AccessDeniedException. This exception is thrown when user authorization fails. In our example, we have defined roles for the users bookadmin and loginadmin in application.yml as follows:\nauth: users: loginadmin: role: admin password: loginpass bookadmin: role: user password: bookpass To ensure authorization, we have configured Spring Security as follows:\npublic class SecurityConfiguration { @Bean @Order(1) public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http.authorizeRequests(request -\u0026gt; request.antMatchers(ENDPOINTS_WHITELIST).hasRole(\u0026#34;ADMIN\u0026#34;) .anyRequest().authenticated()); /* Code continued.. 
*/ return http.build(); } } and\npublic class SecurityConfiguration { @Bean @Order(2) public SecurityFilterChain bookFilterChain(HttpSecurity http) throws Exception { http .csrf().disable() .sessionManagement(session -\u0026gt; session .sessionCreationPolicy(SessionCreationPolicy.STATELESS)) .antMatcher(\u0026#34;/library/**\u0026#34;) .authorizeRequests() .antMatchers(HttpMethod.GET, \u0026#34;/library/**\u0026#34;).hasRole(\u0026#34;USER\u0026#34;) .anyRequest().authenticated(); /* Code continued.. */ return http.build(); } } Let\u0026rsquo;s take a look at the methods that can be used to authorize endpoints.\n hasRole(String role) : Returns true if the current principal has the specified role. e.g. hasRole(\u0026quot;ADMIN\u0026quot;) hasAnyRole(String... roles) : Multiple roles can be specified. If any of the roles matches, returns true. e.g. hasAnyRole(\u0026quot;ADMIN\u0026quot;, \u0026quot;USER\u0026quot;) NOTE: In both the above cases, the ROLE_ prefix is added by default to the provided role string. hasAuthority(String authority) : Returns true if the current principal has the specified authority. e.g. hasAuthority(\u0026quot;ROLE_ADMIN\u0026quot;) hasAnyAuthority(String... authorities) : Multiple authorities can be specified. If any of the authorities matches, returns true. e.g. hasAnyAuthority(\u0026quot;ROLE_ADMIN\u0026quot;, \u0026quot;ROLE_USER\u0026quot;)  Additional Notes on Spring Security Access Control  All the methods discussed above use SpEL for more complex access control support. This allows us to use specific classes for web and method security to access values such as the current principal. To understand how SpEL can be leveraged, refer to this Spring documentation. Also, if we do not need to set authorization, we can use the methods permitAll() and denyAll() to allow or deny all roles and authorities respectively.   
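The ROLE_ prefix handling described above can be sketched in a few lines of plain Java. This is a simplified model of the comparison, not Spring Security's implementation:

```java
import java.util.Set;

// Simplified model of how hasRole() and hasAuthority() compare against the
// granted authority strings held by the Authentication object.
class RoleCheckSketch {
    static final String ROLE_PREFIX = "ROLE_";

    // hasAuthority("ROLE_ADMIN"): compares the complete string as given.
    static boolean hasAuthority(Set<String> granted, String authority) {
        return granted.contains(authority);
    }

    // hasRole("ADMIN"): prepends ROLE_ before comparing, which is why
    // hasRole("ADMIN") and hasAuthority("ROLE_ADMIN") perform the same task.
    static boolean hasRole(Set<String> granted, String role) {
        return hasAuthority(granted, ROLE_PREFIX + role);
    }
}
```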
Let\u0026rsquo;s take a look at an example configuration that uses different roles for different endpoints within the same method.\npublic class SecurityConfiguration { @Bean public SecurityFilterChain bookFilterChain(HttpSecurity http) throws Exception { http .authorizeRequests() .antMatchers(HttpMethod.GET, \u0026#34;/library/info\u0026#34;).permitAll() .antMatchers(HttpMethod.GET, \u0026#34;/library/books\u0026#34;).hasRole(\u0026#34;USER\u0026#34;) .antMatchers(HttpMethod.GET, \u0026#34;/library/books/all\u0026#34;).hasRole(\u0026#34;ADMIN\u0026#34;); return http.build(); } } 7. @PreAuthorize and @PostAuthorize Spring Security allows us to extend the security mechanism to methods via the @PreAuthorize and @PostAuthorize annotations. These annotations use SpEL to evaluate and authorize based on the arguments passed.\n @PreAuthorize: Evaluates the condition before executing the method. @PostAuthorize: Evaluates the condition after the method is executed. In order to get these annotations to work, we need to add @EnableGlobalMethodSecurity(prePostEnabled = true) to our configuration class as below:  @Configuration @EnableWebSecurity @EnableGlobalMethodSecurity(prePostEnabled = true) @EnableConfigurationProperties(BasicAuthProperties.class) public class SecurityConfiguration { /* ... */ } Next, let\u0026rsquo;s look at how to use these annotations. 
Here we have used @PreAuthorize in our Controller class.\n@Controller public class BookController { private static final Logger log = LoggerFactory.getLogger(BookController.class); private final BookService bookService; public BookController(BookService bookService) { this.bookService = bookService; } @GetMapping(\u0026#34;/library/books\u0026#34;) @PreAuthorize(\u0026#34;#user == authentication.principal.username\u0026#34;) public ResponseEntity\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt; getBooks(@RequestParam String genre, @RequestParam String user) { return ResponseEntity.ok().body(bookService.getBook(genre)); } @GetMapping(\u0026#34;/library/books/all\u0026#34;) @PreAuthorize(\u0026#34;hasRole(\u0026#39;ROLE_USER\u0026#39;)\u0026#34;) public ResponseEntity\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt; getAllBooks() { return ResponseEntity.ok().body(bookService.getAllBooks()); } } Here, we have demonstrated two ways in which @PreAuthorize annotations can be used.\n @PreAuthorize(\u0026quot;#user == authentication.principal.username\u0026quot;) : The logged-in username is passed as a request param and verified with the current principal. For a successful match, postman returns a valid response.  In case of an error, we get:\n@PreAuthorize(\u0026quot;hasRole('ROLE_USER')\u0026quot;) : We get a success response only if the current principal has a USER role.  Next, let\u0026rsquo;s use @PostAuthorize in our Repository class.\n@Repository public interface BookRepository extends JpaRepository\u0026lt;Book, Long\u0026gt; { List\u0026lt;Book\u0026gt; findByGenre(String genre); @PostAuthorize(\u0026#34;returnObject.size() \u0026gt; 0\u0026#34;) List\u0026lt;Book\u0026gt; findAll(); } Here, the returnObject denotes List\u0026lt;Book\u0026gt;. Therefore, when size() returns 0, we will get an error response.\nCustomize Authorization  To customize the way expressions are handled, we need to expose MethodSecurityExpressionHandler as a bean. 
Spring method security is built using Spring AOP. For more examples, refer to the Method Security documentation.   8. DB-based Authentication and Authorization In all of our previous examples, we have configured users, passwords, and roles using the InMemoryUserDetailsManager. Spring Security allows us to customize the authentication and authorization process. We can also configure these details in a database and get Spring Security to access them accordingly.\nFor a working example, refer to this article. It also explains the different ways in which passwords should be handled for better security.\nLet\u0026rsquo;s outline the steps required to get this configuration working.\nStep.1 : Customize UserDetailsService by overriding loadUserByUsername() to load user credentials from the database.\nStep.2 : Create a PasswordEncoder bean depending on the encoding mechanism used.\nStep.3 : Since the AuthenticationProvider is responsible for validating credentials, customize and override authenticate() to validate against the DB credentials.\nAdditional information on Password Encoder  Prior to Spring Security 5.0, the default PasswordEncoder was NoOpPasswordEncoder, which required plain-text passwords. From Spring Security 5.0, we use the DelegatingPasswordEncoder, which ensures that passwords are encoded using the current password storage recommendations. For more info on DelegatingPasswordEncoder, refer to this documentation   Testing with Spring Security Now that we have learnt about the workings of the various security configurations, let\u0026rsquo;s look at unit testing them. Spring Security provides us with the below dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.security\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-security-test\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; In addition, we have also added the Hamcrest dependency. 
Hamcrest is a framework that allows us to use Matcher objects in our assertions for more expressive response matching. Refer to the Hamcrest documentation for an in-depth look at its features.\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.hamcrest\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;hamcrest-library\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.2\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; First, let\u0026rsquo;s set up our ApplicationContext for testing our BookController class. Here, we have defined sample test data using @Sql:\n@SpringBootTest @AutoConfigureMockMvc @SqlGroup({ @Sql(value = \u0026#34;classpath:init/first.sql\u0026#34;, executionPhase = BEFORE_TEST_METHOD), @Sql(value = \u0026#34;classpath:init/second.sql\u0026#34;, executionPhase = BEFORE_TEST_METHOD) }) public class BookControllerTest { } Now, let\u0026rsquo;s look at the various options available to test endpoints secured with basic authentication.\n@WithMockUser As the name suggests, we use this annotation with the default username user, password password, and role ROLE_USER. Since we are mocking the user, the user need not actually exist. 
As long as our endpoint is secured, the @WithMockUser will be successful.\npublic class BookControllerTest { @Autowired private MockMvc mockMvc; @Test @DisplayName(\u0026#34;TestCase1 Check if spring security applies to the endpoint\u0026#34;) @WithMockUser(username = \u0026#34;bookadmin\u0026#34;, roles = {\u0026#34;USER\u0026#34;}) void successIfSecurityApplies() throws Exception { mockMvc.perform(get(\u0026#34;/library/books\u0026#34;) .param(\u0026#34;genre\u0026#34;, \u0026#34;Fiction\u0026#34;) .param(\u0026#34;user\u0026#34;, \u0026#34;bookadmin\u0026#34;) .header(\u0026#34;X-Application-Name\u0026#34;, \u0026#34;Library\u0026#34;)) .andDo(print()) .andExpect(status().isOk()) .andExpect(authenticated().withUsername(\u0026#34;bookadmin\u0026#34;)) .andExpect(authenticated().withRoles(\u0026#34;USER\u0026#34;)) .andExpect(jsonPath(\u0026#34;$\u0026#34;, hasSize(3))) ; } @Test @DisplayName(\u0026#34;TestCase2 Fails when wrong roles are provided\u0026#34;) @WithMockUser(username = \u0026#34;bookadmin\u0026#34;, roles = {\u0026#34;ADMIN\u0026#34;}) void failsForWrongAuthorization() throws Exception { mockMvc.perform(get(\u0026#34;/library/books\u0026#34;) .param(\u0026#34;genre\u0026#34;, \u0026#34;Fiction\u0026#34;) .param(\u0026#34;user\u0026#34;, \u0026#34;bookadmin\u0026#34;) .header(\u0026#34;X-Application-Name\u0026#34;, \u0026#34;Library\u0026#34;)) .andDo(print()) .andExpect(status().isForbidden()) ; } @Test @DisplayName(\u0026#34;TestCase3 Fails when we run the test with no security\u0026#34;) void failsIfSecurityApplies() throws Exception { mockMvc.perform(get(\u0026#34;/library/books\u0026#34;) .param(\u0026#34;genre\u0026#34;, \u0026#34;Fiction\u0026#34;) .param(\u0026#34;user\u0026#34;, \u0026#34;bookadmin\u0026#34;) .header(\u0026#34;X-Application-Name\u0026#34;, \u0026#34;Library\u0026#34;)) .andDo(print()) .andExpect(status().isUnauthorized()) ; } }  @WithMockUser(username = \u0026quot;bookadmin\u0026quot;, roles = {\u0026quot;USER\u0026quot;}) : 
Here, we are running the test with the username bookadmin and role USER. This test is used only to verify that the endpoint is secured. Further, we have also used the authenticated() method to verify the authentication details and the Hamcrest matcher hasSize() to verify the response object. @WithMockUser(username = \u0026quot;bookadmin\u0026quot;, roles = {\u0026quot;ADMIN\u0026quot;}) : Here, we get a Forbidden response since the roles do not match. Although the user is mocked, roles are required to match for a success response. When no user details are specified, the request is not authenticated and therefore we get an Unauthorized response.  @WithUserDetails Instead of mocking the user, we could also use the UserDetailsService bean created in the SecurityConfiguration class.\npublic class BookControllerTest { @Autowired private MockMvc mockMvc; @Test @DisplayName(\u0026#34;TestCase4 Run the test with configured UserDetailsService\u0026#34;) @WithUserDetails(value = \u0026#34;bookadmin\u0026#34;, userDetailsServiceBeanName = \u0026#34;userDetailsService\u0026#34;) void testBookWithConfiguredUserDetails() throws Exception { mockMvc.perform(get(\u0026#34;/library/books\u0026#34;) .param(\u0026#34;genre\u0026#34;, \u0026#34;Fantasy\u0026#34;) .param(\u0026#34;user\u0026#34;, \u0026#34;bookadmin\u0026#34;) .header(\u0026#34;X-Application-Name\u0026#34;, \u0026#34;Library\u0026#34;)) .andDo(print()) .andExpect(status().isOk()) .andExpect(jsonPath(\u0026#34;$\u0026#34;, hasSize(1))) ; } @Test @DisplayName(\u0026#34;TestCase5 Fails when execution of CustomHeaderValidatorFilter \u0026#34; + \u0026#34;does not meet the criteria\u0026#34;) @WithUserDetails(value = \u0026#34;bookadmin\u0026#34;, userDetailsServiceBeanName = \u0026#34;userDetailsService\u0026#34;) void failsIfMandatoryHeaderIsMissing() throws Exception { mockMvc.perform(get(\u0026#34;/library/books\u0026#34;) .param(\u0026#34;genre\u0026#34;, \u0026#34;Fantasy\u0026#34;) .param(\u0026#34;user\u0026#34;, 
\u0026#34;bookadmin\u0026#34;)) .andDo(print()) .andExpect(status().isForbidden()) ; } @Test @DisplayName(\u0026#34;TestCase6 Fails when preauthorization \u0026#34; + \u0026#34;of current principal fails\u0026#34;) @WithUserDetails(value = \u0026#34;bookadmin\u0026#34;, userDetailsServiceBeanName = \u0026#34;userDetailsService\u0026#34;) void failsIfPreAuthorizeConditionFails() throws Exception { mockMvc.perform(get(\u0026#34;/library/books\u0026#34;) .param(\u0026#34;genre\u0026#34;, \u0026#34;Fantasy\u0026#34;) .param(\u0026#34;user\u0026#34;, \u0026#34;bookuser\u0026#34;) .header(\u0026#34;X-Application-Name\u0026#34;, \u0026#34;Library\u0026#34;)) .andDo(print()) .andExpect(status().isForbidden()) ; } @Test @DisplayName(\u0026#34;TestCase7 Fails when wrong basic auth credentials are applied\u0026#34;) void testBookWithWrongCredentialsUserDetails() throws Exception { mockMvc.perform(get(\u0026#34;/library/books\u0026#34;) .param(\u0026#34;genre\u0026#34;, \u0026#34;Fantasy\u0026#34;) .param(\u0026#34;user\u0026#34;, \u0026#34;bookadmin\u0026#34;) .header(\u0026#34;X-Application-Name\u0026#34;, \u0026#34;Library\u0026#34;) .with(httpBasic(\u0026#34;bookadmin\u0026#34;, \u0026#34;password\u0026#34;))) .andDo(print()) .andExpect(status().isUnauthorized()); } } With this configuration, the endpoints will be authenticated with the userDetailsService bean. We can use httpBasic() to ensure wrong credentials are rejected. Also, the tests above validate pre-authorization and custom filter checks.\nConclusion In this article, we looked at the basic concepts that apply in Spring Security. Further, we explained the default configuration that Spring provides and how to override it. Also, we looked at a few commonly encountered use cases and verified them with unit tests. As we have seen, Spring provides a lot of flexibility and allows us to customize security for complex applications.
We can extend the sample configuration applied in our application on GitHub to suit our needs.\n","date":"February 28, 2023","image":"https://reflectoring.io/images/stock/0101-keylock-1200x628-branded_hu54aa4efa315910c5671932665107f87d_212538_650x0_resize_q90_box.jpg","permalink":"/spring-security/","title":"Getting started with Spring Security and Spring Boot"},{"categories":["Spring"],"contents":"One of the most convincing justifications for using the Spring Framework is its extensive transaction support. For transaction management, the Spring Framework offers a stable abstraction. But before we deep-dive into the concepts of transaction management, let’s quickly understand the basic concept of a transaction.\nIn terms of Database Management Systems (DBMS), a transaction is a logical processing unit that reads and updates database content. This transaction might consist of a single command, a group of commands, or any other database actions. Any DBMS supporting transactions must guarantee ACID qualities to retain the integrity of the data. ACID stands for Atomicity, Consistency, Isolation, and Durability.\n Atomicity - Since the transaction is handled as a single unit of activity, it should either be completed in its whole or not at all. No partial execution is allowed. This is referred to as an \u0026ldquo;all or nothing\u0026rdquo; feature. Consistency - A database needs to maintain consistency once the transaction is finished. This depicts the accuracy of the database. Isolation - Transactions execute in isolation from other transactions. Other concurrent transactions won\u0026rsquo;t be able to see incomplete transactions. Durability - Even if the system crashes or restarts, a successful transaction should be permanently recorded in the database.  Before we understand what Spring offers out-of-the-box to manage transactions, we must understand how a plain JDBC transaction works. 
Plain standard JDBC transaction management code looks something like the below:\nConnection connection = dataSource.getConnection(); try { connection.setAutoCommit(false); // execute some SQL queries...  connection.commit(); } catch (SQLException e) { connection.rollback(); } finally { connection.close(); } Let’s understand what this does! First, the getConnection() method obtains a database connection to work with. Ideally, in an enterprise application, there will be a data source already configured and we can re-use an existing connection.\nCalling setAutoCommit(false) starts a transaction; with plain JDBC, this is how a database transaction begins. setAutoCommit(true) makes sure that every single SQL statement automatically gets wrapped in its own transaction, and setAutoCommit(false) is the exact opposite. One thing to note is that the autoCommit flag is valid for the whole time the connection is open. Thus, we just need to call this method once and not repeatedly.\nFinally, the commit() method will commit the transaction. In case of any SQL exception, rollback() will roll back any changes that were executed. That’s all that a Spring transaction does under the hood, too!\nDifferent Types of Transaction Management Spring supports two types of transaction management:\n Programmatic Transaction Management - This implies that you must use programming to manage the transaction as we did in the example above. Although it provides great flexibility, it is challenging to maintain. Declarative Transaction Management - This implies that we keep business code and transaction management separate. To manage the transactions, only XML-based settings or annotations are used.  Let’s take a look into each of these transaction management types in Spring.\nProgrammatic Transaction Management Firstly, we will try to understand programmatic transaction management. The Spring Framework provides two means of programmatic transaction management:\n Using TransactionTemplate.
Implementing TransactionManager directly.  The TransactionTemplate and other Spring templates, such as the JdbcTemplate, follow a similar methodology. It makes use of a callback approach and produces code that is intention driven, meaning that it focuses only on what you want to do.\n@Service public class EntityService { @Autowired private TransactionTemplate template; public Long registerEntity(Entity entity) { return template.execute(status -\u0026gt; { // execute some SQL statements like  // inserting an entity into the db  // and return the autogenerated id  return id; }); } } If we compare this with the simple JDBC transaction that we discussed earlier, we don’t have to deal with opening and closing database connections ourselves. Spring would also convert the SQL exceptions into runtime exceptions. As far as the integration with Spring goes, TransactionTemplate will use a TransactionManager internally which will again use a data source. Since all of these are beans in our Spring context configuration, we don’t have to worry about it.\nIf we use TransactionManager, Spring provides PlatformTransactionManager for imperative and ReactiveTransactionManager for reactive transactions. We can simply initiate, commit, or roll back transactions using these transaction managers.\nDeclarative Transaction Management Contrary to the programmatic approach, Spring’s declarative transaction management enables configuration-based transaction management. Declarative transactions allow transactions and business code to be separated. Therefore, we can use XML settings or an annotation-based approach to manage transactions.\nTransactions could be configured directly via XML when XML configuration for Spring applications was the standard.
The @Transactional annotation, which is considerably easier, has mostly replaced this method today, except for a few older business applications.\nAlthough we won\u0026rsquo;t go into great detail about XML setup in this article, we may use this example as a jumping-off point to learn more about it. We will take the AOP approach here:\n\u0026lt;tx:advice id=\u0026#34;txAdvice\u0026#34; transaction-manager=\u0026#34;txManager\u0026#34;\u0026gt; \u0026lt;!-- the transactional semantics... --\u0026gt; \u0026lt;tx:attributes\u0026gt; \u0026lt;!-- all methods starting with \u0026#39;get\u0026#39; are read-only --\u0026gt; \u0026lt;tx:method name=\u0026#34;get*\u0026#34; read-only=\u0026#34;true\u0026#34;/\u0026gt; \u0026lt;!-- other methods use the default transaction settings --\u0026gt; \u0026lt;tx:method name=\u0026#34;*\u0026#34;/\u0026gt; \u0026lt;/tx:attributes\u0026gt; \u0026lt;/tx:advice\u0026gt; First, we make use of the \u0026lt;tx:advice /\u0026gt; tag for creating a transaction-handling advice. Next, we need to define a pointcut that matches all methods we wish to wrap into a transaction and pass it to the bean:\n\u0026lt;aop:config\u0026gt; \u0026lt;aop:pointcut id=\u0026#34;entityServiceOperation\u0026#34; expression=\u0026#34;execution(* x.y.service.EntityService.*(..))\u0026#34;/\u0026gt; \u0026lt;aop:advisor advice-ref=\u0026#34;txAdvice\u0026#34; pointcut-ref=\u0026#34;entityServiceOperation\u0026#34;/\u0026gt; \u0026lt;/aop:config\u0026gt; \u0026lt;bean id=\u0026#34;entityService\u0026#34; class=\u0026#34;x.y.service.EntityService\u0026#34;/\u0026gt; Finally, we can define a method in the service layer to add our business logic.\npublic class EntityService { public Long registerEntity(Entity entity) { // execute some SQL statements like  // inserting an entity into the db  // and return the autogenerated id  return id; } } This looks like configuring a lot of complicated, verbose XML, with the pointcut and advisor configurations.
Since the annotation-based configuration is the core discussion of this article, let’s look into it in more detail.\nSpring’s @Transactional Annotation Now let’s have a look at what modern Spring transaction management usually looks like. Spring at its core is an IoC container, which gives it an advantage: it instantiates an EntityService for us and makes sure to auto-wire it into any other bean that needs it.\nNow whenever we use the @Transactional annotation on a bean, Spring uses a tiny trick. It doesn’t just instantiate the EntityService but it also creates a transactional proxy of the same bean:\nAs we can see from the above diagram, the proxy has two jobs:\n Opening and closing database connections/transactions. And then delegating to the original EntityService.  Other beans, like our EntityController in the diagram above, will never know that they are talking to a proxy, and not the real bean.\nIf we look at this in more detail, we find that our EntityService gets proxied on the fly, but it is not the proxy that handles the transactional states (open, commit, close, rollback). Instead, the proxy delegates the job to a transaction manager.\nSpring offers a PlatformTransactionManager/TransactionManager interface, which, by default, comes with a couple of handy implementations. One of them is the data source transaction manager. All transaction managers have methods like doBegin() or doCommit() that take care of the connectivity and final execution.\nTo put all of the above discussion in a gist:\n If Spring detects the @Transactional annotation on a bean, it creates a dynamic proxy of the bean. The proxy will then have access to a transaction manager which will open and close transactions/connections. Finally, the transaction manager will simply do what we did as part of our plain old JDBC connection implementation.  
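To make the gist above concrete, here is a minimal, self-contained sketch of the proxy mechanism using plain java.lang.reflect.Proxy. This is not Spring's actual implementation: the EntityService interface, the FakeTransactionManager, and the handler below are illustrative stand-ins for the dynamic proxy and for the transaction manager's doBegin()/doCommit() work.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Toy sketch of what a transactional proxy does conceptually:
// intercept the call, begin a "transaction", delegate to the real bean,
// then commit (or roll back on failure). All names are illustrative.
public class TransactionalProxyDemo {

    interface EntityService {
        Long registerEntity(String entity);
    }

    static class EntityServiceImpl implements EntityService {
        @Override
        public Long registerEntity(String entity) {
            System.out.println("executing business logic for " + entity);
            return 42L; // pretend this is the autogenerated id
        }
    }

    // Stand-in for a transaction manager's doBegin()/doCommit()/rollback
    static class FakeTransactionManager {
        void begin()    { System.out.println("BEGIN transaction"); }
        void commit()   { System.out.println("COMMIT transaction"); }
        void rollback() { System.out.println("ROLLBACK transaction"); }
    }

    static EntityService transactionalProxy(EntityService target,
                                            FakeTransactionManager txManager) {
        InvocationHandler handler = (proxy, method, args) -> {
            txManager.begin();
            try {
                // delegate to the original bean
                Object result = method.invoke(target, args);
                txManager.commit();
                return result;
            } catch (Exception e) {
                txManager.rollback();
                throw e;
            }
        };
        return (EntityService) Proxy.newProxyInstance(
                EntityService.class.getClassLoader(),
                new Class<?>[] { EntityService.class },
                handler);
    }

    public static void main(String[] args) {
        EntityService service =
                transactionalProxy(new EntityServiceImpl(), new FakeTransactionManager());
        Long id = service.registerEntity("my-entity");
        System.out.println("registered with id " + id);
    }
}
```

The caller only sees the EntityService interface, which is exactly why, in the diagram above, the EntityController never knows it is talking to a proxy.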
Configuring a TransactionManager Spring recommends defining the @EnableTransactionManagement annotation in a @Configuration class to enable transactional support.\n@Configuration @EnableTransactionManagement public class JPAConfig{ @Bean public LocalContainerEntityManagerFactoryBean entityManagerFactory() { //...  } @Bean public PlatformTransactionManager transactionManager() { JpaTransactionManager transactionManager = new JpaTransactionManager(); transactionManager.setEntityManagerFactory(entityManagerFactory().getObject()); return transactionManager; } } However, if we use a Spring Boot project and have defined “spring-data-*” or “spring-tx” dependencies on the classpath, then transaction management is enabled by default.\nUsage of @Transactional Annotation We can use the annotation on definitions of interfaces, classes, or directly on methods. They take precedence over one another according to the following priority order, from lowest to highest: interface, superclass, class, interface method, superclass method, and finally class method.\nOne thing to note is that, if we apply this annotation to a class, it will apply to all public methods in it that have not themselves been annotated with @Transactional.\nHowever, if we put the annotation on a private or protected method, Spring will ignore it without an error.\nLet’s consider an interface defined with the annotation on it:\n@Transactional public interface PaymentService { void pay(String source, String destination, double val); } Next, we can put the same annotation on a class to override the transaction setting of the interface:\n@Service @Transactional public class PaymentServiceImpl implements PaymentService { @Override public void pay(String source, String destination, double val) { // ...  } } Finally, we can override all of this by setting the annotation directly on the method:\n@Transactional public void pay(String source, String destination, double val) { // ...
} Propagation Levels in Spring Transactions As the name suggests, propagation in a Spring transaction indicates if any service would like to participate or not in the transaction. It would also decide the behavior of a component or service depending on whether or not a transaction has already been generated by the calling component or service.\nFirst, we will take two scenarios. In the first scenario, we will have the registerEntity() defined that we saw above annotated with transaction propagation:\n@Service public class EntityService { @Transactional(propagation = Propagation.REQUIRED) public Long registerEntity(Entity entity) { // execute some SQL statements like  // inserting an entity into the db  // and return the autogenerated id  return id; } } In the other scenario, consider that this registerEntity() method is being called by another service OrganizationService, then that class will be annotated as follows:\n@Service @Transactional(propagation=Propagation.REQUIRED) public class OrganizationService { @Autowired EntityService entityService; public void organize() { // ...  entityService.registerEntity(entity); // ...  } } Let’s understand each of these propagation strategies using the above scenarios:\n REQUIRED - This is the default propagation. If the registerEntity() method is called directly, it creates a new transaction. Whereas if this method is called from OrganizationService, since that service is annotated with @Transactional then the transaction would make use of the existing transaction called at the service layer rather than the one defined on registerEntity(). If the calling service didn’t have the transaction defined, it will create a new transaction. SUPPORTS - In this case, if the registerEntity() method is called directly, it doesn’t create a new transaction. If the method is called from OrganizationService, then it will make use of the existing transaction defined as part of that class, else, it won’t create a new transaction. 
NOT_SUPPORTED - In this case, if the registerEntity() method is called directly, it doesn’t create a new transaction. If the method is called from OrganizationService, it neither makes use of the existing transaction nor creates its own. It runs without a transaction. REQUIRES_NEW - If the registerEntity() method is called directly, it creates a new transaction. Whereas if this method is called from OrganizationService, it would not make use of the existing transaction at the service layer; instead, it would create its own new transaction. If the calling service didn’t have the transaction defined, it will still create a new transaction. NEVER - If the registerEntity() method is called directly, it doesn’t create a new transaction. Whereas if this method is called from OrganizationService, then the method would throw an exception. If the calling service didn’t have the transaction defined, it will not create a new transaction and will run without one. MANDATORY - If the registerEntity() method is called directly, it will throw an exception. If the method is called from OrganizationService, it makes use of the existing transaction. Otherwise, it will throw an exception. NESTED - If a transaction is present, Spring verifies it and marks a save point. This indicates that the transaction rolls back to this save point if our business logic execution encounters an issue. It operates similarly to REQUIRED if there are no ongoing transactions. In the case of NESTED, only JDBC connections are supported in JpaTransactionManager. However, if our JDBC driver supports save points, setting the nestedTransactionAllowed value to true also makes the JDBC access code in the JPA transactions function.  Isolation Levels in Spring Transactions Transaction isolation defines the database’s behavior when two transactions act concurrently on the same database entity. It involves the locking of database records.
In other words, it specifies how the database would behave or what happens when one transaction is being processed on a database entity and another concurrent transaction would like to access or update the same database entity at the same time.\nOne of the ACID (Atomicity, Consistency, Isolation, Durability) characteristics is isolation. Therefore, the transaction isolation level is not a feature exclusive to the Spring Framework. We can adjust the isolation level with Spring to match our business logic. We can set the isolation level of a transaction with the annotation:\n@Transactional(isolation = Isolation.READ_UNCOMMITTED) It has these five enumerations in Spring:\n  DEFAULT - The default isolation level in Spring is DEFAULT, which means when Spring creates a new transaction, the isolation level will be the default isolation of our RDBMS. Therefore, we should be careful when we change the database.\n  READ_UNCOMMITTED - If two transactions are running simultaneously, the second transaction can update both new and existing records before the first transaction is committed. The newly added and altered records are reflected in the first transaction, which is still in progress even though the second transaction is not yet committed.\nNote: PostgreSQL does not support READ_UNCOMMITTED isolation and falls back to READ_COMMITTED instead. Also, Oracle does not support or allow READ_UNCOMMITTED.\n  READ_COMMITTED - If two transactions are running simultaneously, the second transaction can update both new and existing records before the first transaction is committed.
The newly added and altered records are reflected in the first transaction, which is not yet committed after the second transaction is committed.\nNote: READ_COMMITTED is the default level with Postgres, SQL Server, and Oracle.\n  REPEATABLE_READ - If two transactions are running simultaneously, the second transaction cannot update any existing records until the first transaction has been committed, but it can add new records. The newly added records are reflected in the first transaction, which is not yet committed, once the second transaction is committed.\nNote: REPEATABLE_READ is the default level in MySQL. Oracle does not support REPEATABLE_READ.\n  SERIALIZABLE - When two transactions are running simultaneously, it appears as though they are running sequentially, with the first transaction being committed before the second is carried out. This is the highest level of isolation and is considered total isolation. An ongoing transaction is thus invulnerable to the effects of other transactions. But because of the poor performance and potential for deadlock, this could be problematic.\n  Error handling with @Transactional The @Transactional annotation makes use of the attributes rollbackFor or rollbackForClassName to rollback the transactions, and the attributes noRollbackFor or noRollbackForClassName to avoid rollback on listed exceptions.\nAccording to the Spring documentation:\n In its default configuration, the Spring Framework’s transaction infrastructure code marks a transaction for rollback only in the case of runtime, unchecked exceptions. That is, when the thrown exception is an instance or subclass of RuntimeException. ( Error instances also, by default, result in a rollback). Checked exceptions that are thrown from a transactional method do not result in rollback in the default configuration.\n Thus, the default rollback behavior in the declarative approach will rollback on runtime exceptions. 
So when a checked exception is thrown from our code and we don’t explicitly tell Spring that it should roll back the transaction, then it gets committed.\nRollback on Runtime Exception Let’s look at the case where the code is expected to roll back on a runtime exception:\n@Transactional public void rollbackOnRuntimeException() { jdbcTemplate.execute(\u0026#34;insert into sample_table values(\u0026#39;abc\u0026#39;)\u0026#34;); throw new RuntimeException(\u0026#34;Rollback as we have a Runtime Exception!\u0026#34;); } Spring will roll back when it comes across this exception.\nNo Rollback for Checked Exception If we declare a normal Exception and we don’t declare a rollback strategy, then the data will be inserted and committed.\n@Transactional public void noRollbackOnCheckedException() throws Exception { jdbcTemplate.execute(\u0026#34;insert into sample_table values(\u0026#39;abc\u0026#39;)\u0026#34;); throw new Exception(\u0026#34;Generic exception occurred\u0026#34;); } Rollback on Checked Exception If we set the rollbackFor attribute to a custom checked exception, then the transaction will roll back when that exception is thrown:\n@Transactional(rollbackFor = CustomCheckedException.class) public void rollbackOnDeclaredException() throws CustomCheckedException { jdbcTemplate.execute(\u0026#34;insert into sample_table values(\u0026#39;abc\u0026#39;)\u0026#34;); throw new CustomCheckedException(\u0026#34;rollback on checked exception\u0026#34;); } It will also roll back if any runtime exception is thrown as part of the above code.\nNo Rollback on RuntimeException If we set noRollbackFor for runtime exceptions, then the code will commit the transaction even if there is a runtime exception in the code:\n@Transactional(noRollbackFor = RuntimeException.class) public void noRollbackOnRuntimeException() { jdbcTemplate.execute(\u0026#34;insert into sample_table values(\u0026#39;abc\u0026#39;)\u0026#34;); throw new
IllegalStateException(\u0026#34;Exception\u0026#34;); } Conclusion In this article, we looked at the basic configuration and usage of transactions in the Spring ecosystem. We also explored the propagation and isolation properties of @Transactional in detail, and learned about various side effects and pitfalls of using the @Transactional annotation.\n","date":"January 31, 2023","image":"https://reflectoring.io/images/stock/0029-contract-1200x628-branded_hu7a19ccad5c11568ad8f2270ae968f76d_151831_650x0_resize_q90_box.jpg","permalink":"/spring-transactions-and-exceptions/","title":"Demystifying Transactions and Exceptions with Spring"},{"categories":["Java"],"contents":"If you’re reading this article, it means you’re already well-versed with JUnit.\nLet me give you a summary of JUnit - In software development, we developers write code which does something as simple as designing a person’s profile or as complex as making a payment (in a banking system). When we develop these features, we tend to write unit tests. As the name suggests, the main purpose of unit tests is to ensure that small, individual parts of code are functioning as expected. If the execution of the unit test fails for any reason, it means the functionality is not working as intended. One such tool available for writing unit tests is JUnit. These unit tests are tiny programs, yet so powerful and execute in a (Thanos) snap. If you\u0026rsquo;d like to learn more about JUnit 5 (also known as JUnit Jupiter), please check out the JUnit 5 article here\nNow that we know about JUnit, let\u0026rsquo;s focus on the topic of parameterized tests in JUnit 5. Parameterized tests solve some of the most common problems we face while writing tests for old or new functionality.\n Writing a test case for every possible input becomes easy. A single test case can accept multiple inputs to test the source code, helping to reduce code duplication.
By running a single test case with multiple inputs, we can be confident that all possible scenarios have been covered and maintain better code coverage.  Development teams aim to create source code that is both reusable and loosely coupled by utilizing methods and classes. The way the code functions is affected by the parameters passed to it. For example, the sum method in a Calculator class is able to process both integer and float values. JUnit 5 has introduced the ability to perform parameterized tests, which enables testing the source code using a single test case that can accept different inputs. This allows for more efficient testing; in older versions of JUnit, separate test cases had to be created for each input, leading to a lot of code repetition.\n Example Code This article is accompanied by a working code example on GitHub. Setup Just like the mad titan Thanos, who is fond of amassing power, you can access the power of parameterized tests in JUnit 5 using the Maven dependency below:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.junit.jupiter\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;junit-jupiter-params\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.9.2\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; Let’s do some coding, shall we?\nOur First Parameterized Test Now, I would like to introduce you to a new annotation: @ParameterizedTest.
As the name suggests, it tells the JUnit engine to run this test with different input values.\nimport static org.junit.jupiter.api.Assertions.assertEquals; import org.junit.jupiter.params.ParameterizedTest; import org.junit.jupiter.params.provider.ValueSource; public class ValueSourceTest { @ParameterizedTest @ValueSource(ints = { 2, 4 }) void checkEvenNumber(int number) { assertEquals(0, number % 2, \u0026#34;Supplied number is not an even number\u0026#34;); } } In the above example, the annotation @ValueSource provides multiple inputs to the checkEvenNumber() method. If we were writing the same test using JUnit 4, we would have to write two test cases to cover the inputs 2 and 4, even though the assertion is exactly the same.\nWhen we execute the ValueSourceTest, this is what we see:\nValueSourceTest |_ checkEvenNumber |_ [1] 2 |_ [2] 4 It means that the checkEvenNumber() method is executed with 2 input values.\nIn the next section, let’s learn about the various argument sources provided by the JUnit 5 framework.\nSources Of Arguments JUnit 5 offers a number of source annotations. The following sections provide a brief overview and an example for some of these annotations.\n@ValueSource It is one of the simplest sources. It accepts a single array of literal values. The literal values supported by @ValueSource are: short, byte, int, long, float, double, char, boolean, String and Class.\n@ParameterizedTest @ValueSource(strings = { \u0026#34;a1\u0026#34;, \u0026#34;b2\u0026#34; }) void checkAlphanumeric(String word) { assertTrue(StringUtils.isAlphanumeric(word), \u0026#34;Supplied word is not alpha-numeric\u0026#34;); } @NullSource \u0026amp; @EmptySource Consider verifying that the user has supplied all the required fields (say, username and password in a login function).
We check that the provided fields are not null, not empty and not blank. The annotations\n @NullSource \u0026amp; @EmptySource help us supply the source code with null, empty and blank values and verify its behaviour.  @ParameterizedTest @NullSource void checkNull(String value) { assertEquals(null, value); } @ParameterizedTest @EmptySource void checkEmpty(String value) { assertEquals(\u0026#34;\u0026#34;, value); }  We can also combine the passing of null and empty inputs using @NullAndEmptySource.  @ParameterizedTest @NullAndEmptySource void checkNullAndEmpty(String value) { assertTrue(value == null || value.isEmpty()); }  Another trick to pass null, empty and blank input values is to combine the @NullAndEmptySource and @ValueSource(strings = { \u0026quot; \u0026quot;, \u0026quot; \u0026quot; }) to cover all possible negative scenarios.  @ParameterizedTest @NullAndEmptySource @ValueSource(strings = { \u0026#34; \u0026#34;, \u0026#34; \u0026#34; }) void checkNullEmptyAndBlank(String value) { assertTrue(value == null || value.isBlank()); } @MethodSource This annotation allows us to load the inputs from one or more factory methods of the test class or external classes. Each factory method must generate a stream of arguments.\n Explicit method source - The test will try to load the supplied method.  // Note: The test will try to load the supplied method @ParameterizedTest @MethodSource(\u0026#34;checkExplicitMethodSourceArgs\u0026#34;) void checkExplicitMethodSource(String word) { assertTrue(StringUtils.isAlphanumeric(word), \u0026#34;Supplied word is not alpha-numeric\u0026#34;); } static Stream\u0026lt;String\u0026gt; checkExplicitMethodSourceArgs() { return Stream.of(\u0026#34;a1\u0026#34;, \u0026#34;b2\u0026#34;); }  Implicit method source - The test will search for the source method that matches the test-case method name.
// Note: The test will search for the source method // that matches the test-case method name @ParameterizedTest @MethodSource void checkImplicitMethodSource(String word) { assertTrue(StringUtils.isAlphanumeric(word), \u0026#34;Supplied word is not alpha-numeric\u0026#34;); } static Stream\u0026lt;String\u0026gt; checkImplicitMethodSource() { return Stream.of(\u0026#34;a1\u0026#34;, \u0026#34;b2\u0026#34;); }  Multi-argument method source - We must pass the inputs as a Stream of Arguments. The test will automatically map arguments based on the index.  // Note: The test will automatically map arguments based on the index @ParameterizedTest @MethodSource void checkMultiArgumentsMethodSource(int number, String expected) { assertEquals(StringUtils.equals(expected, \u0026#34;even\u0026#34;) ? 0 : 1, number % 2); } static Stream\u0026lt;Arguments\u0026gt; checkMultiArgumentsMethodSource() { return Stream.of(Arguments.of(2, \u0026#34;even\u0026#34;), Arguments.of(3, \u0026#34;odd\u0026#34;)); }  External method source - The test will try to load the external method.  // Note: The test will try to load the external method @ParameterizedTest @MethodSource( \u0026#34;source.method.ExternalMethodSource#checkExternalMethodSourceArgs\u0026#34;) void checkExternalMethodSource(String word) { assertTrue(StringUtils.isAlphanumeric(word), \u0026#34;Supplied word is not alpha-numeric\u0026#34;); } package source.method; import java.util.stream.Stream; public class ExternalMethodSource { static Stream\u0026lt;String\u0026gt; checkExternalMethodSourceArgs() { return Stream.of(\u0026#34;a1\u0026#34;, \u0026#34;b2\u0026#34;); } } @CsvSource This annotation will allow us to pass argument lists as comma-separated values (i.e. CSV String literals). Each CSV record results in one execution of the parameterized test. 
There is also a possibility of skipping the CSV header using the attribute useHeadersInDisplayName.\n@ParameterizedTest @CsvSource({ \u0026#34;2, even\u0026#34;, \u0026#34;3, odd\u0026#34;}) void checkCsvSource(int number, String expected) { assertEquals(StringUtils.equals(expected, \u0026#34;even\u0026#34;) ? 0 : 1, number % 2); } @CsvFileSource This annotation lets us use comma-separated value (CSV) files from the classpath or the local file system. Similar to @CsvSource, here also, each CSV record results in one execution of the parameterized test. It also supports various other attributes - numLinesToSkip, useHeadersInDisplayName, lineSeparator, delimiterString etc.\nExample 1: Basic implementation @ParameterizedTest @CsvFileSource( files = \u0026#34;src/test/resources/csv-file-source.csv\u0026#34;, numLinesToSkip = 1) void checkCsvFileSource(int number, String expected) { assertEquals(StringUtils.equals(expected, \u0026#34;even\u0026#34;) ? 0 : 1, number % 2); } src/test/resources/csv-file-source.csv\nNUMBER, ODD_EVEN 2, even 3, odd Example 2: Using attributes @ParameterizedTest @CsvFileSource( files = \u0026#34;src/test/resources/csv-file-source_attributes.csv\u0026#34;, delimiterString = \u0026#34;|\u0026#34;, lineSeparator = \u0026#34;||\u0026#34;, numLinesToSkip = 1) void checkCsvFileSourceAttributes(int number, String expected) { assertEquals(StringUtils.equals(expected, \u0026#34;even\u0026#34;) ? 0 : 1, number % 2); } src/test/resources/csv-file-source_attributes.csv\n|| NUMBER | ODD_EVEN || || 2 | even || || 3 | odd\t|| @EnumSource This annotation provides a convenient way to use Enum constants as test-case arguments. 
Attributes supported -\n value - The enum class type, example - ChronoUnit.class  package java.time.temporal; public enum ChronoUnit implements TemporalUnit { SECONDS(\u0026#34;Seconds\u0026#34;, Duration.ofSeconds(1)), MINUTES(\u0026#34;Minutes\u0026#34;, Duration.ofSeconds(60)), HOURS(\u0026#34;Hours\u0026#34;, Duration.ofSeconds(3600)), DAYS(\u0026#34;Days\u0026#34;, Duration.ofSeconds(86400)), //12 other units } ChronoUnit is an enum type that contains standard date period units.\n@ParameterizedTest @EnumSource(ChronoUnit.class) void checkEnumSourceValue(ChronoUnit unit) { assertNotNull(unit); } In this example, @EnumSource will pass each of the 16 ChronoUnit constants as an argument.\n names - The names of the enum constants to provide, or a regular expression to select the names, example - DAYS or ^.*DAYS$  @ParameterizedTest @EnumSource(names = { \u0026#34;DAYS\u0026#34;, \u0026#34;HOURS\u0026#34; }) void checkEnumSourceNames(ChronoUnit unit) { assertNotNull(unit); } @ArgumentsSource This annotation provides a custom, reusable ArgumentsProvider. The implementation of ArgumentsProvider must be either an external class or a static nested class.\n External arguments provider  public class ArgumentsSourceTest { @ParameterizedTest @ArgumentsSource(ExternalArgumentsProvider.class) void checkExternalArgumentsSource(int number, String expected) { assertEquals(StringUtils.equals(expected, \u0026#34;even\u0026#34;) ? 0 : 1, number % 2, \u0026#34;Supplied number \u0026#34; + number + \u0026#34; is not an \u0026#34; + expected + \u0026#34; number\u0026#34;); } } public class ExternalArgumentsProvider implements ArgumentsProvider { @Override public Stream\u0026lt;? 
extends Arguments\u0026gt; provideArguments( ExtensionContext context) throws Exception { return Stream.of(Arguments.of(2, \u0026#34;even\u0026#34;), Arguments.of(3, \u0026#34;odd\u0026#34;)); } }  Static nested arguments provider  public class ArgumentsSourceTest { @ParameterizedTest @ArgumentsSource(NestedArgumentsProvider.class) void checkNestedArgumentsSource(int number, String expected) { assertEquals(StringUtils.equals(expected, \u0026#34;even\u0026#34;) ? 0 : 1, number % 2, \u0026#34;Supplied number \u0026#34; + number + \u0026#34; is not an \u0026#34; + expected + \u0026#34; number\u0026#34;); } static class NestedArgumentsProvider implements ArgumentsProvider { @Override public Stream\u0026lt;? extends Arguments\u0026gt; provideArguments( ExtensionContext context) throws Exception { return Stream.of(Arguments.of(2, \u0026#34;even\u0026#34;), Arguments.of(3, \u0026#34;odd\u0026#34;)); } } } Argument Conversion First, imagine that without argument conversion we would have to handle the argument data types ourselves.\nSource method: Calculator class\npublic int sum(int a, int b) { return a + b; } Testcase:\n@ParameterizedTest @CsvSource({ \u0026#34;10, 5, 15\u0026#34; }) void calculateSum(String num1, String num2, String expected) { int actual = calculator.sum(Integer.parseInt(num1), Integer.parseInt(num2)); assertEquals(Integer.parseInt(expected), actual); } If we have String arguments and the source method we are testing accepts int values, it becomes our responsibility to make this conversion before calling the source method.\nJUnit 5 makes the following kinds of argument conversion available:\n Widening Primitive Conversion  @ParameterizedTest @ValueSource(ints = { 2, 4 }) void checkWideningArgumentConversion(long number) { assertEquals(0, number % 2); } The parameterized test annotated with @ValueSource(ints = { 2, 4 }) can be declared to accept an argument of type int, long, float, or double.\n Implicit Conversion  @ParameterizedTest 
@ValueSource(strings = \u0026#34;DAYS\u0026#34;) void checkImplicitArgumentConversion(ChronoUnit argument) { assertNotNull(argument.name()); } JUnit 5 provides several built-in implicit type converters. The conversion depends on the declared method argument type. Example - The String value supplied by @ValueSource(strings = \u0026quot;DAYS\u0026quot;) is implicitly converted to the argument type ChronoUnit.\n Fallback String-to-Object Conversion  @ParameterizedTest @ValueSource(strings = { \u0026#34;Name1\u0026#34;, \u0026#34;Name2\u0026#34; }) void checkImplicitFallbackArgumentConversion(Person person) { assertNotNull(person.getName()); } public class Person { private String name; public Person(String name) { this.name = name; } //Getters \u0026amp; Setters } JUnit 5 provides a fallback mechanism for automatic conversion from a String to a given target type if the target type declares exactly one suitable factory method or a factory constructor. Example - The parameterized test annotated with @ValueSource(strings = { \u0026quot;Name1\u0026quot;, \u0026quot;Name2\u0026quot; }) can be declared to accept an argument of type Person that contains a single field name of type String.\n Explicit Conversion  @ParameterizedTest @ValueSource(ints = { 100 }) void checkExplicitArgumentConversion( @ConvertWith(StringSimpleArgumentConverter.class) String argument) { assertEquals(\u0026#34;100\u0026#34;, argument); } public class StringSimpleArgumentConverter extends SimpleArgumentConverter { @Override protected Object convert(Object source, Class\u0026lt;?\u0026gt; targetType) throws ArgumentConversionException { return String.valueOf(source); } } If, for some reason, you don\u0026rsquo;t want to rely on implicit argument conversion, you can use the @ConvertWith annotation to register your own argument converter. 
Example - The parameterized test annotated with @ValueSource(ints = { 100 }) can be declared to accept an argument of type String using StringSimpleArgumentConverter.class, which converts the integer to its String representation.\nArgument Aggregation @ArgumentsAccessor By default, each argument provided to a @ParameterizedTest method corresponds to a single method parameter. Because of this, argument sources that supply a large number of arguments can lead to unwieldy method signatures. To solve this problem, we can use an ArgumentsAccessor instead of declaring multiple parameters. Type conversion is supported as discussed under Implicit Conversion above.\n@ParameterizedTest @CsvSource({ \u0026#34;John, 20\u0026#34;, \u0026#34;Harry, 30\u0026#34; }) void checkArgumentsAccessor(ArgumentsAccessor arguments) { Person person = new Person(arguments.getString(0), arguments.getInteger(1)); assertTrue(person.getAge() \u0026gt; 19, person.getName() + \u0026#34; is a teenager\u0026#34;); } Custom Aggregators We saw that an ArgumentsAccessor gives us direct access to the @ParameterizedTest method’s arguments. What if we want to reuse the same aggregation logic in multiple tests? JUnit 5 solves this by providing custom, reusable aggregators.\n @AggregateWith  @ParameterizedTest @CsvSource({ \u0026#34;John, 20\u0026#34;, \u0026#34;Harry, 30\u0026#34; }) void checkArgumentsAggregator( @AggregateWith(PersonArgumentsAggregator.class) Person person) { assertTrue(person.getAge() \u0026gt; 19, person.getName() + \u0026#34; is a teenager\u0026#34;); } public class PersonArgumentsAggregator implements ArgumentsAggregator { @Override public Object aggregateArguments(ArgumentsAccessor arguments, ParameterContext context) throws ArgumentsAggregationException { return new Person(arguments.getString(0), arguments.getInteger(1)); } } Implement the ArgumentsAggregator interface and register it via the @AggregateWith annotation in the @ParameterizedTest method. 
When we execute the test, it provides the aggregation result as an argument for the corresponding test. The implementation of ArgumentsAggregator can be an external class or a static nested class.\nBonus Since you have read the article to the end, here is a bonus: if you\u0026rsquo;re using an assertion framework such as AssertJ (fluent assertions for Java), you can pass a java.util.function.Consumer as an argument that holds the assertion itself.\n@ParameterizedTest @MethodSource(\u0026#34;checkNumberArgs\u0026#34;) void checkNumber(int number, Consumer\u0026lt;Integer\u0026gt; consumer) { consumer.accept(number); } static Stream\u0026lt;Arguments\u0026gt; checkNumberArgs() { Consumer\u0026lt;Integer\u0026gt; evenConsumer = i -\u0026gt; Assertions.assertThat(i % 2).isZero(); Consumer\u0026lt;Integer\u0026gt; oddConsumer = i -\u0026gt; Assertions.assertThat(i % 2).isEqualTo(1); return Stream.of(Arguments.of(2, evenConsumer), Arguments.of(3, oddConsumer)); } Summary JUnit 5\u0026rsquo;s parameterized-test feature enables efficient testing by eliminating duplicate test cases: the same test runs multiple times with varying inputs. This saves time and effort for the development team and increases test coverage, since the code is exercised with a wider range of inputs, improving the chances of catching potential bugs. 
Overall, JUnit5\u0026rsquo;s parameterized tests are a valuable tool for improving the quality and reliability of the code.\n","date":"January 29, 2023","image":"https://reflectoring.io/images/stock/0010-gray-lego-1200x628-branded_hu463ec94a0ba62d37586d8dede4e932b0_190778_650x0_resize_q90_box.jpg","permalink":"/tutorial-JUnit5-parameterized-tests/","title":"JUnit 5 Parameterized Tests"},{"categories":["Kotlin"],"contents":"Introduction Sorting is a fundamental operation that plays a crucial role in various applications. Among the many sorting algorithms, merge sort stands out for its efficiency and simplicity. In this blog post, we will delve into the details of merge sort and implement it in Kotlin.\nKotlin Implementation Now, let\u0026rsquo;s dive into the implementation of Merge Sort in Kotlin. We\u0026rsquo;ll start by defining a function for the merging process:\nfun merge(left: IntArray, right: IntArray): IntArray { var i = 0 var j = 0 val merged = IntArray(left.size + right.size) for (k in 0 until merged.size) { when { i \u0026gt;= left.size -\u0026gt; merged[k] = right[j++] j \u0026gt;= right.size -\u0026gt; merged[k] = left[i++] left[i] \u0026lt;= right[j] -\u0026gt; merged[k] = left[i++] else -\u0026gt; merged[k] = right[j++] } } return merged } In this function, we compare elements from the left and right subarrays, merging them into a single sorted array.\nNow, let\u0026rsquo;s implement the recursive Merge Sort function:\nfun mergeSort(arr: IntArray): IntArray { if (arr.size \u0026lt;= 1) return arr val mid = arr.size / 2 val left = arr.copyOfRange(0, mid) val right = arr.copyOfRange(mid, arr.size) return merge(mergeSort(left), mergeSort(right)) } In this code, the mergeSort function recursively divides the array into halves and calls itself until the base case is reached when the array size is 1 or empty. 
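The cost of this divide-and-merge recursion can be made precise with the standard recurrence, a sketch of the analysis where $T(n)$ denotes the running time on $n$ elements:

```
% One Theta(n) merge per level, two half-sized recursive calls:
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + \Theta(n), \qquad T(1) = \Theta(1)
% Unrolling the recursion for k levels:
T(n) = 2^k\,T\!\left(\tfrac{n}{2^k}\right) + k\cdot\Theta(n)
% The base case is reached at k = \log_2 n, giving:
T(n) = n\cdot T(1) + \Theta(n \log n) = \Theta(n \log n)
```

This is the derivation behind the $O(n \log n)$ bound discussed in the analysis: logarithmic depth, linear work per level.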
Then, it merges the sorted subarrays using the previously defined merge function.\nTesting the Merge Sort Implementation Let\u0026rsquo;s test our merge sort implementation with a sample array:\nfun main() { val unsortedArray = intArrayOf(64, 34, 25, 12, 22, 11, 90) val sortedArray = mergeSort(unsortedArray) println(\u0026#34;Original Array: ${unsortedArray.joinToString()}\u0026#34;) println(\u0026#34;Sorted Array: ${sortedArray.joinToString()}\u0026#34;) } This program initializes an array, performs the merge sort and prints both the original and sorted arrays.\nAnalysis of Merge Sort Algorithm Merge Sort is a sorting algorithm that follows the divide-and-conquer paradigm. Let\u0026rsquo;s analyze its key aspects:\nTime Complexity Merge Sort guarantees a consistent time complexity of O(n log n) in the worst, average and best cases. This efficiency is achieved by dividing the array into halves and recursively sorting them before merging, resulting in logarithmic depth and linear work at each level.\nDivide Phase Repeatedly halving the array produces a recursion tree of depth O(log n), because the array is continually divided until each subarray contains only one element. Merge Phase Merging two sorted arrays of size n/2 takes O(n) time. Since there are log n levels in the recursion tree, the total merging time is O(n log n).\nThe overall time complexity is dominated by the merging phase, making merge sort particularly efficient for large datasets. It outperforms algorithms with higher time complexities, such as Bubble Sort or Insertion Sort.\nSpace Complexity Merge Sort has a space complexity of O(n) due to the need for additional space to store the temporary merged arrays during the merging phase. Each recursive call creates new subarrays, and the merging process involves creating a new array that stores the sorted elements.\nTemporary Arrays\nDuring the merging phase, temporary arrays are created to store the sorted subarrays. 
The size of these arrays is proportional to the size of the input. Recursive Stack\nThe recursive calls contribute to the space complexity. In the worst case, the maximum depth of the recursion tree is log n, which determines the space required for the function call stack. Despite the additional space requirements, merge sort\u0026rsquo;s stability, predictable performance and ease of parallelization make it a viable choice in scenarios where memory usage is not a critical concern.\nStability and Parallelization Merge sort is a stable sorting algorithm, meaning that equal elements maintain their relative order in the sorted output. This stability is essential in applications where the original order of equal elements should be preserved.\nAdditionally, merge sort is inherently parallelizable. The divide-and-conquer nature of the algorithm allows for straightforward parallel implementations. Each subarray can be sorted independently and the merging process can be parallelized leading to potential performance gains on multi-core architectures.\nConclusion Merge Sort is a highly efficient and predictable sorting algorithm with a consistent time complexity of O(n log n). Its stability and parallelizability make it a popular choice in various applications, especially when dealing with large datasets. While it incurs a space overhead due to the need for temporary arrays, the trade-off in terms of time complexity and reliability often justifies its use in practical scenarios.\n","date":"January 18, 2023","image":"https://reflectoring.io/images/stock/0096-tools-1200x628-branded_hue8579b2f8c415ef5a524c005489e833a_326215_650x0_resize_q90_box.jpg","permalink":"/introduction-to-Mockk/","title":"Understanding Merge Sort in Kotlin"},{"categories":["Node"],"contents":"Consider the case where someone provides us with a CSV file containing employee details that have been exported from an employee management application. 
It may also have data to map the employee/manager relationship within the organization to form a tree chart. Our task is to load that data into another application. But that application doesn’t have a CSV import feature, so we’re going to build it. We’re going to build a simple UI and a backend that will import CSV files and store the data in a database:\nWhile building a basic importer is straightforward, many advanced features should be considered when designing and implementing a CSV importer that’s meant to be used in production settings.\nBudgeting maintenance time is also necessary, as teams often spend an additional $75,000 annually on:\n Adaptive Maintenance: The largest maintenance cost tends to be changes to the database schema. Each new field or validation rule requires updating the CSV importer. Performance: Naive approaches, such as loading, validating, and visualizing all of the spreadsheet data at once in memory, scale poorly as spreadsheets approach thousands (or millions) of rows. At that size, validations need to be done in parallel batches, especially if results will be displayed in a responsive UI. Bug fixes / QA: Once implemented, CSV import tends to become a permanent area that teams must QA. Testing a large number of encodings, formats, and file sizes can cost a substantial amount of resources. When data is uploaded in the wrong format, undoing / bulk correcting files requires time as well. 
Read the section on “Creating a production-ready CSV importer” to learn more about the differences involved with building a basic importer versus one able to handle more complex workflows, what features can help make the import process seamless for customers, and embeddable CSV importer options such as OneSchema.\nCompanies often scope one engineering month to build an importer, but end up spending 3-6 months with a team of two engineers to build all the supporting features needed to make the importer usable for their customers. This results in an estimated launch cost of $100,000.\nIn this article, we\u0026rsquo;re going to look into what it means to build a CSV importer from scratch. We will look at some general use cases that the CSV format helps us with and then use the tools the Node.js tech stack offers us to build a CSV importer with a basic UI.\n Example Code This article is accompanied by a working code example on GitHub. CSV Use Cases Before we start building a CSV importer, we need to understand why we use CSV files. Here are some of the most important benefits:\n Easy to read: CSV files contain data in plain text, which makes them human-readable, unlike some alternative data storage formats. Lightweight: These files take up little space. The header row and the commas between each data field are the only extra space they require aside from the actual data. Portable and flexible: The CSV format is a widely used standard, which means that it\u0026rsquo;s easy to import and export CSV files into / out of many different software applications.  The ease of use and popularity of the CSV format make it suitable for many different use cases. For a more detailed list, refer to the list of CSV use cases published by the W3C.\n Relational Data and Row-Formats - When data is retrieved from a table, it can be complete or only half-filled, which means a few of the columns may contain null or empty values. 
The CSV format makes such empty or missing values easy to spot, because each record lists its fields in the same comma-separated order. Publication of Statistics - Often the data extracted for statistics needs to be re-used for multiple purposes. The common support of CSV files in different tools increases the reusability of the data. Time-series data - Time-related data like weather data is very well suited for a column-based file format. Again, in CSV format this data is easily consumable with a commonly available toolset. Importing and exporting data - During mergers or acquisitions, companies often need to export and import data across systems. Given the ubiquity of CSV, it\u0026rsquo;s a common choice to represent this data.  In this article, we are going to explain the use case of exporting and importing hierarchical data between different applications. Hierarchical data is data that contains a hierarchy, like an employee/manager relationship.\nSetting Up the Node.js project Let’s start with our implementation. As shown in the above diagram, we need to create two components: one for the Express backend and the other for the React UI client.\nLet’s create a folder and start with the initialization of a Node.js project:\nnpm init Next, we need to install a few libraries as dependencies:\nnpm install express cors multer pg sequelize fast-csv json2csv Let’s understand how we\u0026rsquo;re using each of the installed dependencies:\n Express - We are using Express to provide a REST API for our application. Cors - We will use this library for CORS (Cross-Origin Resource Sharing) configuration between the backend and the frontend server. Multer - It is a Node.js middleware used for handling multipart/form-data, which is primarily used for uploading files. Pg - It is a non-blocking PostgreSQL client for Node.js. 
Sequelize - This is a modern TypeScript and Node.js ORM for various databases like Oracle, Postgres, MySQL, MariaDB, SQLite, and SQL Server. Fast-csv - We will use this library for parsing and formatting CSVs or any other delimited value file in Node.js. Json2csv - We will use this library to convert JSON into CSV with column titles and proper line endings.  Now, we will create a server folder and add all our code within that directory.\nNext, we need to define the frontend React client. So we will create another directory to host our frontend:\nnpx create-react-app client This will bootstrap the React code under the client folder. We will first implement the backend part and then we will come back to the frontend side.\nConfigure a PostgreSQL Database We have the base setup for our implementation ready. So, let’s host an instance of PostgreSQL and configure our backend server to connect with that DB. We can quickly spin up a PostgreSQL instance by creating a docker-compose.yml file:\nversion: \u0026#39;3.1\u0026#39; services: db: image: postgres restart: always environment: POSTGRES_USER: postgres POSTGRES_PASSWORD: Welcome123 POSTGRES_DB: csvdb We can run this by executing the following command (assuming you have docker-compose installed):\ndocker-compose up This will host a Postgres instance locally. Now we can switch to our code and create the file config/db.config.js within our server directory with these connection details:\nconst HOST = \u0026#34;localhost\u0026#34;; const USER = \u0026#34;postgres\u0026#34;; const PASSWORD = \u0026#34;Welcome123\u0026#34;; const DB = \u0026#34;csvdb\u0026#34;; const dialect = \u0026#34;postgres\u0026#34;; const pool = { max: 5, min: 0, acquire: 30000, idle: 10000 }; export default { HOST, USER, PASSWORD, DB, dialect, pool }; The first five details are specific to the PostgreSQL driver. 
We have also defined an optional parameter to configure the connection pool for Sequelize.\nDefining the Data Model Next, we can initialize a data model for Sequelize. Sequelize is an object-relational mapper (ORM) that maps between a data model in the code and the database tables. In this section, we\u0026rsquo;re going to define the data model in the code. Sequelize will take care of creating the database tables out of that model.\nFirst, we define the Employee data that we want to store in our database. We can create a models folder and in it, the model file employee.model.js:\nimport Sequelize from \u0026#39;sequelize\u0026#39;; import { sequelize } from \u0026#39;../database/index.js\u0026#39;; const Employee = sequelize.define(\u0026#34;employee\u0026#34;, { id: { type: Sequelize.STRING, primaryKey: true }, name: { type: Sequelize.STRING }, email: { type: Sequelize.STRING }, username: { type: Sequelize.STRING }, dob: { type: Sequelize.STRING }, company: { type: Sequelize.STRING }, address: { type: Sequelize.STRING }, location: { type: Sequelize.STRING }, salary: { type: Sequelize.STRING }, about: { type: Sequelize.STRING }, role: { type: Sequelize.STRING }, managedBy: { type: Sequelize.STRING, references: { model: \u0026#39;employees\u0026#39;, key: \u0026#39;id\u0026#39; } }, createdAt: { type: Sequelize.STRING }, updatedAt: { type: Sequelize.STRING }, avatar: { type: Sequelize.STRING } }); export default Employee; As we can see, the id attribute contains the primary key for each employee. The managedBy attribute denotes the id of the manager this employee reports to, so we need to mark it as a foreign key. In Sequelize, we do this by defining references that point to the referenced model and key.\nNext, we need to define the ORM mapping for the parent-child relationship. We will have a one-to-many relationship, which means one manager can have multiple employees reporting to them. 
Sequelize provides 4 types of associations that should be combined to create ORM mappings for One-To-One, One-To-Many, and Many-To-Many:\n hasOne() belongsTo() hasMany() belongsToMany()  In our data model, we use the combination of hasMany() and belongsTo() to model the hierarchical relationship between manager and employee:\nEmployee.hasMany(Employee, { as: \u0026#39;children\u0026#39;, foreignKey: \u0026#39;managedBy\u0026#39;, sourceKey: \u0026#39;id\u0026#39;, useJunctionTable: false }); Employee.belongsTo(Employee, { foreignKey: \u0026#34;managedBy\u0026#34;, targetKey: \u0026#34;id\u0026#34;, }); How do we make sure that our data model is in sync with the database schema? Luckily, Sequelize does that for us. For this, we create another file database/index.js and call sequelize.sync() to tell Sequelize to create or update the database table so that it matches our data model:\nimport Sequelize from \u0026#39;sequelize\u0026#39;; import dbConfig from \u0026#39;../config/db.config.js\u0026#39;; export const sequelize = new Sequelize(dbConfig.DB, dbConfig.USER, dbConfig.PASSWORD, { host: dbConfig.HOST, dialect: dbConfig.dialect, pool: dbConfig.pool, logging: console.log } ); sequelize.authenticate() .then(() =\u0026gt; { console.log(\u0026#39;Connection has been established successfully.\u0026#39;); console.log(\u0026#39;Creating tables ===================\u0026#39;); sequelize.sync().then(() =\u0026gt; { console.log(\u0026#39;=============== Tables created per model\u0026#39;); }) .catch(err =\u0026gt; { console.error(\u0026#39;Unable to create tables:\u0026#39;, err); }) }) .catch(err =\u0026gt; { console.error(\u0026#39;Unable to connect to the database:\u0026#39;, err); }); This will connect to PostgreSQL and update all the tables as per the models defined.\nCaching the Uploaded File As mentioned earlier, we are using multer as a body parsing middleware that handles content type multipart/form-data which is primarily used for uploading files. 
That means it parses the raw HTTP request data and makes it more accessible by storing it somewhere for further processing. Without multer, we would have to parse the raw data ourselves to access the file.\nSo let’s define the middleware by creating a middleware folder and adding our logic in upload.js:\nimport fs from \u0026#39;fs\u0026#39;; import multer from \u0026#39;multer\u0026#39;; const storage = multer.diskStorage({ destination: (_req, file, cb) =\u0026gt; { console.log(file.originalname); const dir = \u0026#39;./resources/static/assets/uploads\u0026#39;; if (!fs.existsSync(dir)) { fs.mkdirSync(dir, { recursive: true }); } cb(null, dir); }, filename: (_req, file, cb) =\u0026gt; { console.log(file.originalname); cb(null, `${Date.now()}-${file.originalname}`); }, }); const csvFilter = (_req, file, cb) =\u0026gt; { console.log(\u0026#39;Reading file in middleware\u0026#39;, file.originalname); if (file == undefined) { cb(\u0026#39;Please upload a file to proceed.\u0026#39;, false); } else if (file.mimetype.includes(\u0026#39;csv\u0026#39;)) { cb(null, true); } else { cb(\u0026#39;Please upload only csv file as only CSV is supported for now.\u0026#39;, false); } }; export default multer({ storage: storage, fileFilter: csvFilter }); The code above only accepts files whose MIME type contains csv and stores them on disk for later use. We will later include it in the route of our Express server that handles the file upload.\nDefining the REST APIs Now, once we have our data model and the required middleware defined, we can move on to writing the core implementation of the REST APIs. As part of this article, we will need an API to upload a CSV file and store the content in the PostgreSQL database. We would also need an API to fetch all the employees and their direct children to denote the employees managed by each one of them. 
Additionally, we will also define an API to download a CSV file to export the data.\nThese are the API endpoints we want to define:\n /api/csv/upload: will accept a multipart/form-data content as POST call to import the CSV file. /api/csv/download: will be a simple GET call to return raw CSV data as a response. /api/employees: will be a GET call to return all the employees and their associations in JSON format.  Import CSV File So let’s start with the APIs related to CSV import/export as part of our controller directory. In the file csv.controller.js, we will pull the file from the disk where it was stored by our middleware, then parse the data and store it in the database:\nimport Employee from \u0026#39;../models/employee.model.js\u0026#39;; import { createReadStream } from \u0026#39;fs\u0026#39;; import { parse } from \u0026#39;fast-csv\u0026#39;; const upload = async (req, res) =\u0026gt; { try { if (req.file == undefined) { return res.status(400).send(\u0026#34;Please upload a CSV file!\u0026#34;); } let employees = []; let path = \u0026#34;./resources/static/assets/uploads/\u0026#34; + req.file.filename; createReadStream(path) .pipe(parse({ headers: true })) .on(\u0026#34;error\u0026#34;, (error) =\u0026gt; { throw error.message; }) .on(\u0026#34;data\u0026#34;, (row) =\u0026gt; { employees.push(row); }) .on(\u0026#34;end\u0026#34;, () =\u0026gt; { Employee.bulkCreate(employees) .then(() =\u0026gt; { res.status(200).send({ message: \u0026#34;The file: \u0026#34; + req.file.originalname + \u0026#34; got uploaded successfully!!\u0026#34;, }); }) .catch((error) =\u0026gt; { res.status(500).send({ message: \u0026#34;Couldn\u0026#39;t import data into database!\u0026#34;, error: error.message, }); }); }); } catch (error) { console.log(error); res.status(500).send({ message: \u0026#34;Failed to upload the file: \u0026#34; + req.file.originalname, }); } }; Export CSV File Next, we will define a method to download the data stored in the database as a CSV file in the 
same csv.controller.js file:\nimport Employee from \u0026#39;../models/employee.model.js\u0026#39;; import { Parser as CsvParser } from \u0026#39;json2csv\u0026#39;; const download = (_req, res) =\u0026gt; { Employee.findAll().then((objs) =\u0026gt; { let employees = []; objs.forEach((obj) =\u0026gt; { const { id, name, email, username, dob, company, address, location, salary, about, role } = obj; employees.push({ id, name, email, username, dob, company, address, location, salary, about, role }); }); const csvFields = [\u0026#39;id\u0026#39;, \u0026#39;name\u0026#39;, \u0026#39;email\u0026#39;, \u0026#39;username\u0026#39;, \u0026#39;dob\u0026#39;, \u0026#39;company\u0026#39;, \u0026#39;address\u0026#39;, \u0026#39;location\u0026#39;, \u0026#39;salary\u0026#39;, \u0026#39;about\u0026#39;, \u0026#39;role\u0026#39;]; const csvParser = new CsvParser({ csvFields }); const csvData = csvParser.parse(employees); res.setHeader(\u0026#39;Content-Type\u0026#39;, \u0026#39;text/csv\u0026#39;); res.setHeader(\u0026#39;Content-Disposition\u0026#39;, \u0026#39;attachment; filename=employees.csv\u0026#39;); res.status(200).end(csvData); }); }; export default { upload, download, }; Get Employee Data To test if our upload API works as expected, we\u0026rsquo;ll introduce another REST API that retrieves the employee data and returns it in plain JSON format.\nFor this, we will define another controller employee.controller.js to fetch the employees with their child elements:\nimport Employee from \u0026#39;../models/employee.model.js\u0026#39;; const getEmployees = (_req, res) =\u0026gt; { Employee.findAll({ include: [{ model: Employee, as: \u0026#39;children\u0026#39;, attributes: [\u0026#39;id\u0026#39;, \u0026#39;name\u0026#39;, \u0026#39;email\u0026#39;, \u0026#39;username\u0026#39;, \u0026#39;avatar\u0026#39;], required: true }], attributes: { exclude: [\u0026#39;managedBy\u0026#39;] } }) .then((data) =\u0026gt; { res.send(data); }) .catch((err) =\u0026gt; { 
res.status(500).send({ message: err.message || \u0026#34;Error while retrieving employees from the database.\u0026#34;, }); }); }; export default getEmployees; While defining the data model, we added a field named managedBy that relates an Employee to their manager in the same table. When loading the data from the table, we have to choose between eager loading and lazy loading.\nLazy loading refers to the technique of fetching the related data only when it is actually needed. Eager loading, on the other hand, refers to the approach of requesting everything at once, up front, with a bigger query. It is a process of simultaneously requesting data from one primary model and one or more associated models, via a query involving one or more joins at the SQL level. For our use case, we opt for eager loading so that the self-referencing association is resolved and the child values are fetched in a single query.\nIn Sequelize, eager loading is mainly done by using the include option on a model finder query (such as findOne(), findAll(), etc.). Thus, we have defined the following option for our include:\ninclude: [{ model: Employee, as: \u0026#39;children\u0026#39;, attributes: [\u0026#39;id\u0026#39;, \u0026#39;name\u0026#39;, \u0026#39;email\u0026#39;, \u0026#39;username\u0026#39;, \u0026#39;avatar\u0026#39;], required: true }]  model defines the data model that we want to retrieve. as defines the association alias (in our case, the children association between manager and employee). attributes defines the fields to be retrieved for the associated model. required controls the join type: it creates an OUTER JOIN if false and an INNER JOIN if true.  
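To make the eager-loading idea concrete outside of Sequelize, here is a dependency-free sketch that assembles the same nested children shape from flat rows in one pass, the way the result of an eager-loading JOIN is assembled; the row data below is made up for illustration:

```javascript
// Flat rows as a self-referencing table would store them:
// each employee optionally points at a manager via managedBy.
const rows = [
  { id: 'e1', name: 'Alice', managedBy: null },
  { id: 'e2', name: 'Bob', managedBy: 'e1' },
  { id: 'e3', name: 'Carol', managedBy: 'e1' },
];

// Eager style: build the whole nested structure in one pass, instead of
// issuing one extra lookup per employee (the lazy style).
function nestByManager(rows) {
  const byId = new Map(rows.map((r) => [r.id, { ...r, children: [] }]));
  for (const node of byId.values()) {
    if (node.managedBy && byId.has(node.managedBy)) {
      byId.get(node.managedBy).children.push(node);
    }
  }
  // Roots are employees without a manager.
  return [...byId.values()].filter((n) => n.managedBy === null);
}

const tree = nestByManager(rows);
console.log(tree[0].name, tree[0].children.length); // prints: Alice 2
```

Sequelize performs the equivalent work for us when we pass the include option shown above.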
Finally, we have also defined exclude to exclude a given attribute from the final result as we don’t want to retrieve managedBy attribute since we have defined a children attribute now in the same employee.controller.js.\nattributes: { exclude: [\u0026#39;managedBy\u0026#39;] } Hooking in the Routes Next, we need to hook in the logic from above to their respective routes. We do this in the file routes/index.js:\nimport { Router } from \u0026#39;express\u0026#39;; import csvController from \u0026#39;../controllers/csv.controller.js\u0026#39;; import getEmployees from \u0026#39;../controllers/employee.controller.js\u0026#39;; import uploadFile from \u0026#39;../middleware/upload.js\u0026#39;; const router = Router(); let routes = (app) =\u0026gt; { // CSV  router.post(\u0026#39;/csv/upload\u0026#39;, uploadFile.single(\u0026#39;file\u0026#39;), csvController.upload); router.get(\u0026#39;/csv/download\u0026#39;, csvController.download); // Employees  router.get(\u0026#39;/employees\u0026#39;, getEmployees); app.use(\u0026#34;/api\u0026#34;, router); }; export default routes; Setting Up the Express Server Since we have now defined all the building blocks for the APIs, next we need to set up the Express server and host the APIs. 
In index.js, we also need to configure CORS so that the frontend (served on port 3000) can call the backend APIs:\nimport express from \u0026#39;express\u0026#39;; import path from \u0026#39;path\u0026#39;; import cors from \u0026#39;cors\u0026#39;; import initRoutes from \u0026#39;./routes/index.js\u0026#39;; global.__basedir = path.resolve() + \u0026#34;/..\u0026#34;; const app = express(); var corsOptions = { origin: \u0026#34;http://localhost:3000\u0026#34; }; app.use(cors(corsOptions)); // parse requests of content-type - application/json app.use(express.json()); // parse requests of content-type - application/x-www-form-urlencoded app.use(express.urlencoded({ extended: true })); initRoutes(app); // set port, listen for requests const PORT = process.env.PORT || 8080; app.listen(PORT, () =\u0026gt; { console.log(`Server is running on port ${PORT}.`); }); Finally, we can add the following script to our package.json file:\n\u0026#34;scripts\u0026#34;: { \u0026#34;start\u0026#34;: \u0026#34;node server/index.js\u0026#34;, } and then run the Express server with this command:\nnpm run start Once the server starts, it will first create the tables for the defined model and then map the primary key and foreign key for the association:\nyarn start yarn run v1.22.17 $ node server/index.js Server is running on port 8080. Executing (default): SELECT 1+1 AS result Connection has been established successfully. 
Creating tables =================== Executing (default): SELECT table_name FROM information_schema.tables WHERE table_schema = \u0026#39;public\u0026#39; AND table_name = \u0026#39;employees\u0026#39; Executing (default): CREATE TABLE IF NOT EXISTS \u0026#34;employees\u0026#34; (\u0026#34;id\u0026#34; VARCHAR(255) , \u0026#34;name\u0026#34; VARCHAR(255), \u0026#34;email\u0026#34; VARCHAR(255), \u0026#34;username\u0026#34; VARCHAR(255), \u0026#34;dob\u0026#34; VARCHAR(255), \u0026#34;company\u0026#34; VARCHAR(255), \u0026#34;address\u0026#34; VARCHAR(255), \u0026#34;location\u0026#34; VARCHAR(255), \u0026#34;salary\u0026#34; VARCHAR(255), \u0026#34;about\u0026#34; VARCHAR(255), \u0026#34;role\u0026#34; VARCHAR(255), \u0026#34;managedBy\u0026#34; VARCHAR(255) REFERENCES \u0026#34;employees\u0026#34; (\u0026#34;id\u0026#34;) ON DELETE CASCADE ON UPDATE CASCADE, \u0026#34;createdAt\u0026#34; VARCHAR(255), \u0026#34;updatedAt\u0026#34; VARCHAR(255), \u0026#34;avatar\u0026#34; VARCHAR(255), PRIMARY KEY (\u0026#34;id\u0026#34;)); Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = \u0026#39;r\u0026#39; and t.relname = \u0026#39;employees\u0026#39; GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname; =============== Tables created per model Now, we can send a cURL to upload a CSV file:\ncurl -i -X POST \\  -H \u0026#34;Content-Type:multipart/form-data\u0026#34; \\  -F \u0026#34;file=@\\\u0026#34;./employee_details.csv\\\u0026#34;;type=text/csv;filename=\\\u0026#34;employee_details.csv\\\u0026#34;\u0026#34; \\  \u0026#39;http://localhost:8080/api/csv/upload\u0026#39; Then we can execute another 
cURL command to fetch the employees data that we have imported in the previous one:\ncurl -i -X GET \\  \u0026#39;http://localhost:8080/api/employees\u0026#39; With this, we have completed the backend implementation. Now we will build the React UI to upload the CSV file and display the data as tabular content.\nBuilding a CSV Importer UI Let’s move on to the frontend part. As initially discussed, we will try to build a simplistic UI that can upload a CSV and store its data in our PostgreSQL database. Then we will retrieve that data from the DB using our /employees endpoint and display it in tabular format. Each row will have some basic information about an employee and the avatar of each employee he/she is managing. The final UI would look something like below:\nInitially, while setting up the Node project, we initiated a client folder and created a React app using create-react-app script. We would additionally add axios to call REST APIs and react-table to build the table to display the imported data in tabular format:\nnpm install axios react-table Now, we will edit the App.js to add the component to upload the CSV file:\nimport React, { useMemo, useState, useEffect } from \u0026#34;react\u0026#34;; import axios from \u0026#34;axios\u0026#34;; import \u0026#39;./App.css\u0026#39;; const uploadToServer = (file, onUploadProgress) =\u0026gt; { let formData = new FormData(); formData.append(\u0026#34;file\u0026#34;, file); return axios.post(\u0026#39;http://localhost:8080/api/csv/upload\u0026#39;, formData, { headers: { \u0026#34;Content-Type\u0026#34;: \u0026#34;multipart/form-data\u0026#34;, }, onUploadProgress, }); }; function App() { const [data, setData] = useState([]); const [selectedFiles, setSelectedFiles] = useState(undefined); const [currentFile, setCurrentFile] = useState(undefined); const [progress, setProgress] = useState(0); const [message, setMessage] = useState(\u0026#34;\u0026#34;); useEffect(() =\u0026gt; { (async () =\u0026gt; { const result = 
await axios(\u0026#34;http://localhost:8080/api/employees\u0026#34;); setData(result.data); })(); }, []); const selectFile = (event) =\u0026gt; { setSelectedFiles(event.target.files); }; const upload = () =\u0026gt; { let currentFile = selectedFiles[0]; setProgress(0); setCurrentFile(currentFile); uploadToServer(currentFile, (event) =\u0026gt; { setProgress(Math.round((100 * event.loaded) / event.total)); }) .then(async (response) =\u0026gt; { setMessage(response.data.message); const result = await axios(\u0026#34;http://localhost:8080/api/employees\u0026#34;); setData(result.data); }) .catch(() =\u0026gt; { setProgress(0); setMessage(\u0026#34;Could not upload the file!\u0026#34;); setCurrentFile(undefined); }); setSelectedFiles(undefined); }; return ( \u0026lt;div className=\u0026#34;App\u0026#34;\u0026gt; \u0026lt;div\u0026gt; {currentFile \u0026amp;\u0026amp; ( \u0026lt;div className=\u0026#34;progress\u0026#34;\u0026gt; \u0026lt;div className=\u0026#34;progress-bar progress-bar-info progress-bar-striped\u0026#34; role=\u0026#34;progressbar\u0026#34; aria-valuenow={progress} aria-valuemin=\u0026#34;0\u0026#34; aria-valuemax=\u0026#34;100\u0026#34; style={{ width: progress + \u0026#34;%\u0026#34; }} \u0026gt; {progress}% \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; )} \u0026lt;label className=\u0026#34;btn btn-default\u0026#34;\u0026gt; \u0026lt;input type=\u0026#34;file\u0026#34; onChange={selectFile} /\u0026gt; \u0026lt;/label\u0026gt; \u0026lt;button className=\u0026#34;btn btn-success\u0026#34; disabled={!selectedFiles} onClick={upload} \u0026gt; Upload \u0026lt;/button\u0026gt; \u0026lt;div className=\u0026#34;alert alert-light\u0026#34; role=\u0026#34;alert\u0026#34;\u0026gt; {message} \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; ); } export default App; We have defined the uploadToServer() method to upload the CSV file. Then we defined React hooks to set the values for various actions. 
Finally, we have defined the UI component to display upload, submit button, and progress bar to display the actions related to the upload feature. In the end, once the file is uploaded successfully, it will display a success or error message.\nNext, we need to define a component to display the retrieved employee data as a table. So, first, we will define a table component:\nimport React, { useState } from \u0026#34;react\u0026#34;; import { useTable, useFilters, useSortBy } from \u0026#34;react-table\u0026#34;; export default function Table({ columns, data }) { const [filterInput, setFilterInput] = useState(\u0026#34;\u0026#34;); // Use the state and functions returned from useTable to build your UI  const { getTableProps, getTableBodyProps, headerGroups, rows, prepareRow, setFilter } = useTable( { columns, data }, useFilters, useSortBy ); const handleFilterChange = e =\u0026gt; { const value = e.target.value || undefined; setFilter(\u0026#34;name\u0026#34;, value); setFilterInput(value); }; // Render the UI for your table  return ( \u0026lt;\u0026gt; \u0026lt;input value={filterInput} onChange={handleFilterChange} placeholder={\u0026#34;Search name\u0026#34;} /\u0026gt; \u0026lt;table {...getTableProps()}\u0026gt; \u0026lt;thead\u0026gt; {headerGroups.map(headerGroup =\u0026gt; ( \u0026lt;tr {...headerGroup.getHeaderGroupProps()}\u0026gt; {headerGroup.headers.map(column =\u0026gt; ( \u0026lt;th {...column.getHeaderProps(column.getSortByToggleProps())} className={ column.isSorted ? column.isSortedDesc ? 
\u0026#34;sort-desc\u0026#34; : \u0026#34;sort-asc\u0026#34; : \u0026#34;\u0026#34; } \u0026gt; {column.render(\u0026#34;Header\u0026#34;)} \u0026lt;/th\u0026gt; ))} \u0026lt;/tr\u0026gt; ))} \u0026lt;/thead\u0026gt; \u0026lt;tbody {...getTableBodyProps()}\u0026gt; {rows.map((row, i) =\u0026gt; { prepareRow(row); return ( \u0026lt;tr {...row.getRowProps()}\u0026gt; {row.cells.map(cell =\u0026gt; { return ( \u0026lt;td {...cell.getCellProps()}\u0026gt;{cell.render(\u0026#34;Cell\u0026#34;)}\u0026lt;/td\u0026gt; ); })} \u0026lt;/tr\u0026gt; ); })} \u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; \u0026lt;/\u0026gt; ); } This has the base logic to render the cells, rows, and columns from the API values. Next, we need to import this in App.js and pass the data to this Table component:\nimport React, { useMemo, useState, useEffect } from \u0026#34;react\u0026#34;; import axios from \u0026#34;axios\u0026#34;; import Table from \u0026#34;./Table\u0026#34;; import \u0026#39;./App.css\u0026#39;; const Children = ({ values }) =\u0026gt; { return ( \u0026lt;\u0026gt; {values.map((child, idx) =\u0026gt; { return ( \u0026lt;div className=\u0026#34;image\u0026#34;\u0026gt; \u0026lt;img src={child.avatar} alt=\u0026#34;Profile\u0026#34; /\u0026gt; \u0026lt;/div\u0026gt; ); })} \u0026lt;/\u0026gt; ); }; const Avatar = ({ value }) =\u0026gt; { return ( \u0026lt;div className=\u0026#34;image\u0026#34;\u0026gt; \u0026lt;img src={value} alt=\u0026#34;Profile\u0026#34; /\u0026gt; \u0026lt;/div\u0026gt; ); }; const uploadToServer = (file, onUploadProgress) =\u0026gt; { let formData = new FormData(); formData.append(\u0026#34;file\u0026#34;, file); return axios.post(\u0026#39;http://localhost:8080/api/csv/upload\u0026#39;, formData, { headers: { \u0026#34;Content-Type\u0026#34;: \u0026#34;multipart/form-data\u0026#34;, }, onUploadProgress, }); }; function App() { const [data, setData] = useState([]); const [selectedFiles, setSelectedFiles] = useState(undefined); const [currentFile, 
setCurrentFile] = useState(undefined); const [progress, setProgress] = useState(0); const [message, setMessage] = useState(\u0026#34;\u0026#34;); const columns = useMemo( () =\u0026gt; [ { Header: \u0026#34;Employee Details\u0026#34;, columns: [ { Header: \u0026#34;Avatar\u0026#34;, accessor: \u0026#34;avatar\u0026#34;, Cell: ({ cell: { value } }) =\u0026gt; \u0026lt;Avatar value={value} /\u0026gt; }, { Header: \u0026#34;Name\u0026#34;, accessor: \u0026#34;name\u0026#34; }, { Header: \u0026#34;Email\u0026#34;, accessor: \u0026#34;email\u0026#34; }, { Header: \u0026#34;Username\u0026#34;, accessor: \u0026#34;username\u0026#34; }, { Header: \u0026#34;DOB\u0026#34;, accessor: \u0026#34;dob\u0026#34; }, { Header: \u0026#34;Company\u0026#34;, accessor: \u0026#34;company\u0026#34; }, { Header: \u0026#34;Address\u0026#34;, accessor: \u0026#34;address\u0026#34; }, { Header: \u0026#34;Location\u0026#34;, accessor: \u0026#34;location\u0026#34; }, { Header: \u0026#34;Salary\u0026#34;, accessor: \u0026#34;salary\u0026#34; }, { Header: \u0026#34;Role\u0026#34;, accessor: \u0026#34;role\u0026#34; }, { Header: \u0026#34;Direct Reportee\u0026#34;, accessor: \u0026#34;children\u0026#34;, Cell: ({ cell: { value } }) =\u0026gt; \u0026lt;Children values={value} /\u0026gt; } ] } ], [] ); useEffect(() =\u0026gt; { (async () =\u0026gt; { const result = await axios(\u0026#34;http://localhost:8080/api/employees\u0026#34;); setData(result.data); })(); }, []); const selectFile = (event) =\u0026gt; { setSelectedFiles(event.target.files); }; const upload = () =\u0026gt; { let currentFile = selectedFiles[0]; setProgress(0); setCurrentFile(currentFile); uploadToServer(currentFile, (event) =\u0026gt; { setProgress(Math.round((100 * event.loaded) / event.total)); }) .then(async (response) =\u0026gt; { setMessage(response.data.message); const result = await axios(\u0026#34;http://localhost:8080/api/employees\u0026#34;); setData(result.data); }) .catch(() =\u0026gt; { setProgress(0); 
setMessage(\u0026#34;Could not upload the file!\u0026#34;); setCurrentFile(undefined); }); setSelectedFiles(undefined); }; return ( \u0026lt;div className=\u0026#34;App\u0026#34;\u0026gt; \u0026lt;div\u0026gt; {currentFile \u0026amp;\u0026amp; ( \u0026lt;div className=\u0026#34;progress\u0026#34;\u0026gt; \u0026lt;div className=\u0026#34;progress-bar progress-bar-info progress-bar-striped\u0026#34; role=\u0026#34;progressbar\u0026#34; aria-valuenow={progress} aria-valuemin=\u0026#34;0\u0026#34; aria-valuemax=\u0026#34;100\u0026#34; style={{ width: progress + \u0026#34;%\u0026#34; }} \u0026gt; {progress}% \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; )} \u0026lt;label className=\u0026#34;btn btn-default\u0026#34;\u0026gt; \u0026lt;input type=\u0026#34;file\u0026#34; onChange={selectFile} /\u0026gt; \u0026lt;/label\u0026gt; \u0026lt;button className=\u0026#34;btn btn-success\u0026#34; disabled={!selectedFiles} onClick={upload} \u0026gt; Upload \u0026lt;/button\u0026gt; \u0026lt;div className=\u0026#34;alert alert-light\u0026#34; role=\u0026#34;alert\u0026#34;\u0026gt; {message} \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;Table columns={columns} data={data} /\u0026gt; \u0026lt;/div\u0026gt; ); } export default App; We have defined useMemo hook to map the attributes from the incoming API data to columns in react-table component. We have also defined the Children and Avatar component to render the images of employees and their children.\nFinally, we can run our app by executing the following command:\ncd client/ \u0026amp;\u0026amp; npm run start This will load the UI and we can upload the CSV file and display our employees in the table:\nYou can find the complete code on GitHub.\nCreating a Production-Ready CSV Importer For most enterprise use cases, a simple CSV importer, while easy to build, will result in issues down the road. 
Missing key features, such as clear import error messages and a UI for resolving them, will create challenges for users during the import process. This can result in a major time investment from support (and engineering) teams to help customers manually debug their file imports.\nBelow are a few examples of advanced features that can be critical for ensuring a seamless import experience for customers (read a full list of features here):\n Data Validation \u0026amp; Autofixes In-line error resolution Intelligent Mapping Exportable Excel Error Summaries Custom Columns  Performance on large files should also be considered depending on the data being uploaded, as product speed has a large, measurable impact on import success rates.\n “The first self-serve CSV importer built at Affinity led to more support tickets than any other part of our product. And because it was so challenging to display all of the specific errors that could break the import flow, customers would get esoteric error messages like ‘something is amiss’ whenever there was a missing comma, encoding issue, or a myriad of business-specific data formatting problems that led to downstream processing issues. Because of the critical onboarding flow that data importer powered, before long v1.5, v2, and v3 were prioritized, leading to multiple eng-years of work in iterating toward a robust importer experience.\u0026quot;\n (Rohan Sahai, Director of Engineering at Affinity)\nFor companies with many priorities, a never-ending CSV import project takes away valuable engineering time that could be spent focusing on the core product. If you need production-ready imports in your product, OneSchema is an embeddable CSV importer that takes less than 30 minutes to get running in your app. 
They’ve built in features that improve import completion rates which automatically correct customer data, handle edge cases, and enable bulk data editing (demo video here).\nConclusion For hobbyist or non-customer-facing use cases, investing in a quick, feature-light importer following the steps we’ve outlined here can be a great option. The cost of missing features, failed imports, and bugs is low. If the CSV importer is part of critical workflows like customer onboarding or recurring data syncs, the cost of integrating a product like OneSchema can be much lower than the cost to build the solution entirely in-house.\n","date":"January 6, 2023","image":"https://reflectoring.io/images/stock/0128-data-1200x628-branded_hu2d381f94dcea97cc13e9ac389a5ec159_170689_650x0_resize_q90_box.jpg","permalink":"/node-csv-importer/","title":"Building a CSV Importer with Node.js"},{"categories":["Node"],"contents":"Whenever we deploy a new version of an application, we may break things or the users may not like the changes. To reduce the risk, we can hide our changes behind feature flags to activate them for more and more users over time. If things don\u0026rsquo;t work out, we can just deactivate the feature flag again without redeploying.\nBut we can also use feature flags for certain administrative use cases as we\u0026rsquo;ll see in this article.\n Example Code This article is accompanied by a working code example on GitHub. Use-cases of Feature Flags Feature flags are commonly used for these use cases:\n Increase deployment success rate by reducing the number of rollbacks caused by errors during deployment. Progressively roll out new features to more and more users. Minimize the risk of a release by first releasing something like a beta version to a limited group of users. Test various kinds of user acceptance.  A less common use case for feature flags is to perform administrative tasks that change the application behavior at runtime. 
These admin tasks can be one of the following, for example:\n Changing the log level: We can set the application log level as a feature flag, then load it during server bootstrap or listen to its changing events and update it dynamically in the backend. Manage batch job size: Usually batch processing applications are configured with a default batch size which needs to be tuned depending on usage. If we set the batch size as a feature flag, we can dynamically load it and change the batch size on-demand. Manage rate limits: If an application provides an API, we often want to rate limit the customers' access to that API. Some customers may need a higher rate limit than others. If we set the rate limit as a feature flag, we can dynamically change the rate limit for each customer. Maintain a list of IPs: Applications or websites may want to restrict access to certain IPs or geolocations only. Setting those IPs as a feature flag, we can change it on-demand while the application is running. Update cron job schedules: Usually, scheduled jobs are configured with a hard-coded cron expression. We can make that cron expression dynamic by setting it as a feature flag. Gathering of metrics: We can define rules to gather system metrics. These rules can be modified using feature flags dynamically whenever we need to perform any kind of maintenance. Show or hide Personally Identifiable Information (PII) in logs: Sometimes, we need certain data in logs to help investigate a support case. But we don\u0026rsquo;t want to log this data all the time. With a feature flag, we can enable certain log data on-demand.  Of course, we can build all these things into our application. But to quickly modify those admin settings on-demand, a feature management platform like LaunchDarkly can do the work for us.\nIn this article, we\u0026rsquo;ll take a look at how to implement some of the above use cases with LaunchDarkly and Node.js. 
The concepts also apply to any other feature flagging solution and programming language, however.\nIntroducing LaunchDarkly LaunchDarkly is a feature management service that takes care of all the feature flagging concepts. The name is derived from the concept of a “dark launch”, which deploys a feature in a deactivated state and activates it when the time is right.\nLaunchDarkly is a cloud-based service and provides a UI to manage everything about our feature flags. For each flag, we need to define one or more variations. The variation can be a boolean, an arbitrary number, a string value, or a JSON snippet.\nWe can define targeting rules to define which variation a feature flag will show to its user. By default, a targeting rule for a feature flag is deactivated. The simplest targeting rule is “show variation X for all users”. A more complex targeting rule is “show variation A for all users with attribute X, variation B for all users with attribute Y, and variation C for all other users”.\nWe can use the LaunchDarkly SDK in our code to access the feature flag variations. It provides a persistent connection to LaunchDarkly\u0026rsquo;s streaming infrastructure to receive server-sent-events (SSE) whenever there is a change in a feature flag. If the connection fails for some reason, it falls back to default values.\nInitial Setup in Node.js LaunchDarkly supports lots of clients in different programming languages. For Node.js, they have created the launchdarkly-node-server-sdk library. We will use this library as a dependency in our code.\nTo create our backend service, we need to first initiate the repo by executing:\nnpm init Then we can try installing all the packages at once by executing the following command:\nnpm install launchdarkly-node-server-sdk express We are going to use launchdarkly-node-server-sdk to connect to the LaunchDarkly server to fetch the feature flag variations. 
We\u0026rsquo;re also going to create an express server that will listen to a particular port and host our application\u0026rsquo;s API.\nNext, we need to create an account with LaunchDarkly. You can sign up for a free trial here. After signing up, you are assigned an SDK Key under the default project and default environment:\nWe will use this SDK key in our code to authenticate with the LaunchDarkly server.\nServer-side Bootstrapping with LaunchDarkly First, we\u0026rsquo;ll try a very simple use case where we can fetch a feature flag from LaunchDarkly and use it as part of our server-side bootstrap code and subscribe to it before the server starts serving requests. Let’s first add some libraries like date-fns and lodash to design a custom logger:\nnpm install date-fns lodash Then we will create a Logger class which will have a constructor and some static and normal methods defined for each log level:\nimport { format } from \u0026#39;date-fns\u0026#39;; import padEnd from \u0026#39;lodash/padEnd.js\u0026#39;; import capitalize from \u0026#39;lodash/capitalize.js\u0026#39;; const LEVELS = { debug: 10, log: 20, warn: 30, error: 40 }; let currentLogLevel = LEVELS[\u0026#39;debug\u0026#39;]; class Logger { constructor(module) { this.module = module ? 
module : \u0026#39;\u0026#39;; this.debug = this.debug.bind(this); this.log = this.log.bind(this); this.warn = this.warn.bind(this); this.error = this.error.bind(this); this.writeToConsole = this.writeToConsole.bind(this); } static setLogLevel(level) { currentLogLevel = LEVELS[level]; } static get(module) { return new Logger(module); } writeToConsole(level, message, context = \u0026#39;\u0026#39;) { if (LEVELS[level] \u0026gt;= currentLogLevel) { const dateTime = format(new Date(), \u0026#39;MM-dd-yyyy HH:mm:ss:SSS\u0026#39;); const formattedLevel = padEnd(capitalize(level), 5); const formattedMessage = `${dateTime}${formattedLevel}[${ this.module }] ${message}`; console[level](formattedMessage, context); } } debug(message, context) { this.writeToConsole(\u0026#39;debug\u0026#39;, message, context); } log(message, context) { this.writeToConsole(\u0026#39;log\u0026#39;, message, context); } warn(message, context) { this.writeToConsole(\u0026#39;warn\u0026#39;, message, context); } error(message, context) { this.writeToConsole(\u0026#39;error\u0026#39;, message, context); } } export default Logger; Then we will define a flag in LaunchDarkly with the name backend-log-level where we can add a default variation as debug. 
The goal is that we can change the log level in LaunchDarkly any time we need to:\nNext, we will create a file named bootstrap.js that subscribes to the log level flag before we initiate the express app to serve our APIs:\nimport util from \u0026#39;util\u0026#39;; import express from \u0026#39;express\u0026#39;; import LaunchDarkly from \u0026#39;launchdarkly-node-server-sdk\u0026#39;; import Logger from \u0026#39;./logger.js\u0026#39;; const PORT = 5000; const app = express(); const simpleLogger = new Logger(\u0026#39;SimpleLogging\u0026#39;); const LD_SDK_KEY = \u0026#39;sdk-********-****-****-****-************\u0026#39;; const LOG_LEVEL_FLAG_KEY = \u0026#39;backend-log-level\u0026#39;; const client = LaunchDarkly.init(LD_SDK_KEY); const asyncGetFlag = util.promisify(client.variation); client.once(\u0026#39;ready\u0026#39;, async () =\u0026gt; { const user = { anonymous: true }; const initialLogLevel = await asyncGetFlag(LOG_LEVEL_FLAG_KEY, user, \u0026#39;debug\u0026#39;); Logger.setLogLevel(initialLogLevel); app.get(\u0026#39;/\u0026#39;, (req, res) =\u0026gt; { simpleLogger.debug(\u0026#39;detailed debug message\u0026#39;); simpleLogger.log(\u0026#39;simple log message\u0026#39;); simpleLogger.warn(\u0026#39;Warning warning do something\u0026#39;); simpleLogger.error(\u0026#39;ERROR! ERROR!\u0026#39;); res.sendStatus(200); }); app.listen(PORT, () =\u0026gt; { simpleLogger.log(`Server listening on port ${PORT}`); }); }); Note that in a real application the SDK key should be provided via an environment variable and shouldn\u0026rsquo;t be hardcoded.\nTo execute code only when the LaunchDarkly client is ready, we have two mechanisms: an event or a promise.\nWith client.once('ready', ...), we subscribe to the ready event which will fire once the LaunchDarkly client has received the state of all feature flags from the server.\nFor the promise mechanism, the SDK supports two methods: waitUntilReady() and waitForInitialization(). 
The behavior of waitUntilReady() is equivalent to the ready event. The promise resolves when the client receives its initial flag data. As with all promises, you can either use .then() to provide a callback, or use await if you are writing asynchronous code. The other method that returns a promise, waitForInitialization(), is similar to waitUntilReady() except that it also tells you if initialization fails by rejecting the promise.\nNext, we can define the bootstrap script as part of package.json:\n{ \u0026#34;scripts\u0026#34;: { \u0026#34;bootstrap\u0026#34;: \u0026#34;node bootstrap.js\u0026#34; } } Then we can execute the following command to run our app:\nnpm run bootstrap Finally, when we hit the API at the endpoint http://localhost:5000 we see the following log messages printed (assuming the feature flag is set to debug):\ninfo: [LaunchDarkly] Initializing stream processor to receive feature flag updates info: [LaunchDarkly] Opened LaunchDarkly stream connection 07-26-2022 11:54:58:193 Debug [SimpleLogging] detailed debug message 07-26-2022 11:54:58:195 Log [SimpleLogging] simple log message 07-26-2022 11:54:58:196 Warn [SimpleLogging] Warning warning do something 07-26-2022 11:54:58:197 Error [SimpleLogging] ERROR! ERROR! Performing Admin Operations with Feature Flags Apart from simple boolean, string, or JSON values, LaunchDarkly also supports “multivariate” flags. A multivariate feature flag could be a list of different strings, numbers or booleans. We have already seen a multivariate flag that controls the log level. Let\u0026rsquo;s look at a few different use cases in which long-lived multivariate flags can control our application dynamically.\nChanging the Log Level Without Restarting the Server We start with the same log-level concept that we saw earlier. In the above section, we just retrieved the log level flag and started our server. 
But if we need to change the log level, we would need to restart our server for the changes to take effect. Let\u0026rsquo;s try to make the log level change dynamically without a server restart.\nFirst of all, we will define a multivariate flag with the following string variations:\n debug error info warn  In the LaunchDarkly UI, it looks like this:\nNext, we can define targeting values that would deliver one of the multivariate strings defined above:\nNote that we\u0026rsquo;re not using targeting rules that target individual users because our log level is a global feature flag that is independent of specific users.\nNow that we have our multivariate feature flag defined, we update our existing logger class from above with a few new methods that read the log level from the feature flag variation at runtime and apply it to the console output:\nimport { format } from \u0026#39;date-fns\u0026#39;; import padEnd from \u0026#39;lodash/padEnd.js\u0026#39;; import capitalize from \u0026#39;lodash/capitalize.js\u0026#39;; class DynamicLogger { constructor( module, ldClient, flagKey, user ) { this.module = module ? 
module : \u0026#39;\u0026#39;; this.ldClient = ldClient; this.flagKey = flagKey; this.user = user; this.previousLevel = null; } writeToConsole(level, message) { const dateTime = format(new Date(), \u0026#39;MM-dd-yyyy HH:mm:ss:SSS\u0026#39;); const formattedLevel = padEnd(capitalize(level), 5); const formattedMessage = `${dateTime}${formattedLevel}[${ this.module }] ${message}`; console[level](formattedMessage, \u0026#39;\u0026#39;); } async debug( message ) { if ( await this._presentLog( \u0026#39;debug\u0026#39; ) ) { this.writeToConsole(\u0026#39;debug\u0026#39;, message); } } async error( message ) { if ( await this._presentLog( \u0026#39;error\u0026#39; ) ) { this.writeToConsole(\u0026#39;error\u0026#39;, message); } } async info( message ) { if ( await this._presentLog( \u0026#39;info\u0026#39; ) ) { this.writeToConsole(\u0026#39;info\u0026#39;, message); } } async warn( message ) { if ( await this._presentLog( \u0026#39;warn\u0026#39; ) ) { this.writeToConsole(\u0026#39;warn\u0026#39;, message); } } async _presentLog( level ) { const minLogLevel = await this.ldClient.variation( this.flagKey, { key: this.user }, \u0026#39;debug\u0026#39; // Default/fall-back value if LaunchDarkly unavailable.  ); if ( minLogLevel !== this.previousLevel ) { console.log( `Present log-level: ${ minLogLevel }` ); } switch ( this.previousLevel = minLogLevel ) { case \u0026#39;error\u0026#39;: return level === \u0026#39;error\u0026#39;; case \u0026#39;warn\u0026#39;: return level === \u0026#39;error\u0026#39; ||\tlevel === \u0026#39;warn\u0026#39;; case \u0026#39;info\u0026#39;: return level === \u0026#39;error\u0026#39; || level === \u0026#39;warn\u0026#39; || level === \u0026#39;info\u0026#39;; default: return true; } } } export default DynamicLogger; Next we will define the logic to subscribe to this log-level and execute some operations in a loop. 
For this testing, we can simply define a method that will print various log messages and run them in a loop at an interval of 1 second:\nimport chalk from \u0026#39;chalk\u0026#39;; import LaunchDarkly from \u0026#39;launchdarkly-node-server-sdk\u0026#39;; import DynamicLogger from \u0026#39;./dynamic_logger.js\u0026#39;; const LD_SDK_KEY = \u0026#39;sdk-********-****-****-****-************\u0026#39;; const flagKey = \u0026#39;backend-log-level\u0026#39;; const userName = \u0026#39;admin\u0026#39;; const launchDarklyClient = LaunchDarkly.init( LD_SDK_KEY ); let logger; let loop = 0; launchDarklyClient.once(\u0026#39;ready\u0026#39;, async () =\u0026gt; { setTimeout( executeLoop, 1000 ); } ); async function executeLoop () { logger = new DynamicLogger( \u0026#39;DynamicLogging\u0026#39;, launchDarklyClient, flagKey, userName ); console.log( chalk.dim.italic( `Loop ${ ++loop }` ) ); logger.debug( \u0026#39;Executing loop.\u0026#39; ); logger.debug(\u0026#39;This is a debug log.\u0026#39;); logger.info(\u0026#39;This is an info log.\u0026#39;); logger.warn(\u0026#39;This is a warn log.\u0026#39;); logger.error(\u0026#39;This is an error log.\u0026#39;); setTimeout( executeLoop, 1000 ); } Note that we\u0026rsquo;re passing the static user admin to LaunchDarkly so that LaunchDarkly evaluates the feature flag for this user. This is not a real user. 
The feature flag is meant as a global feature flag, so targeting different users with different values doesn\u0026rsquo;t make sense.\nNext, we can define the script as part of package.json:\n{ \u0026#34;scripts\u0026#34;: { \u0026#34;dynamic\u0026#34;: \u0026#34;node dynamic_logging.js\u0026#34; } } Then we can execute the following command to run our app:\nnpm run dynamic Finally, when we run the above command, it will print something like below:\ninfo: [LaunchDarkly] Initializing stream processor to receive feature flag updates info: [LaunchDarkly] Opened LaunchDarkly stream connection Loop 1 Present log-level: debug 08-20-2022 21:11:40:251 Debug [DynamicLogging] Executing loop. 08-20-2022 21:11:40:264 Debug [DynamicLogging] This is a debug log. 08-20-2022 21:11:40:264 Info [DynamicLogging] This is an info log. 08-20-2022 21:11:40:265 Warn [DynamicLogging] This is a warn log. 08-20-2022 21:11:40:267 Error [DynamicLogging] This is an error log. Loop 2 08-20-2022 21:11:40:268 Debug [DynamicLogging] Executing loop. 08-20-2022 21:11:40:269 Debug [DynamicLogging] This is a debug log. 08-20-2022 21:11:40:270 Info [DynamicLogging] This is an info log. 08-20-2022 21:11:40:271 Warn [DynamicLogging] This is a warn log. 08-20-2022 21:11:40:272 Error [DynamicLogging] This is an error log. Loop 3 Present log-level: info 08-20-2022 21:11:40:274 Info [DynamicLogging] This is an info log. Modifying Rate Limits Dynamically Rate Limiting is a technique used for regulating the volume of incoming or outgoing traffic within a network. In this context, network refers to the line of communication between a client (e.g., a web browser) and our server (e.g., an API).\nFor instance, we might wish to set a daily cap of 100 queries for a public API from an unsubscribed user. 
If the user goes over that threshold, we can ignore the request and throw an error to let people know they\u0026rsquo;ve gone over their limit.\nWe don\u0026rsquo;t want to implement the rate limiter ourselves, so we will use the express-rate-limit library (available on npm):\nnpm install express-rate-limit First, we will define a simple express app to host a server:\nimport bodyParser from \u0026#39;body-parser\u0026#39;; import express from \u0026#39;express\u0026#39;; import cors from \u0026#39;cors\u0026#39;; import rateLimit from \u0026#39;express-rate-limit\u0026#39;; import LaunchDarkly from \u0026#39;launchdarkly-node-server-sdk\u0026#39;; import LdLogger from \u0026#39;./ld_logger.js\u0026#39;; // Initiating LaunchDarkly Client const LD_SDK_KEY = \u0026#39;sdk-********-****-****-****-************\u0026#39;; const userName = \u0026#39;admin\u0026#39;; const launchDarklyClient = LaunchDarkly.init( LD_SDK_KEY ); // Initiating the Logger const flagKey = \u0026#39;backend-log-level\u0026#39;; let logger; launchDarklyClient.once(\u0026#39;ready\u0026#39;, async () =\u0026gt; { logger = new LdLogger( launchDarklyClient, flagKey, userName ); serverInit(); } ); const serverInit = async () =\u0026gt; { // Essential globals  const app = express(); // Initialize API  app.get(\u0026#39;/hello\u0026#39;, function (req, res) { return res.send(\u0026#39;Hello World\u0026#39;) }); // Initialize server  app.listen(5000, () =\u0026gt; { logger.info(\u0026#39;Starting server on port 5000\u0026#39;); }); }; Now we will look at JSON-type flags, as these feature flag values are pretty open-ended. LaunchDarkly supports a JSON type right out of the box. This allows us to pass Object and Array data structures to our application, which can then be used to implement lightweight administrative and operational functionality in our web application. 
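Because JSON flag values are open-ended, nothing stops a typo made in the LaunchDarkly UI from reaching our middleware. One defensive option is to sanity-check the payload before trusting it and fall back to a known-good default otherwise. A sketch — the field names match a typical rate limit config, and sanitizeRateLimitConfig is a hypothetical helper, not part of any SDK:

```javascript
// Sketch: fall back to a known-good default when the flag's JSON payload
// is missing or malformed. All names here are illustrative (assumption).
const DEFAULT_RATE_LIMIT_CONFIG = {
  windowMs: 24 * 60 * 60 * 1000,
  max: 100,
  standardHeaders: true,
  legacyHeaders: false,
};

function sanitizeRateLimitConfig(config) {
  if (
    !config ||
    typeof config.windowMs !== 'number' ||
    typeof config.max !== 'number'
  ) {
    // Invalid payload: warn and use the safe default instead of crashing.
    console.warn('Invalid rate limit config from flag, using default');
    return DEFAULT_RATE_LIMIT_CONFIG;
  }
  return config;
}
```

Wrapping the flag value in a check like this keeps a malformed variation from silently breaking the middleware.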
In this case, we will define a feature flag that will take the rate limit config and pass it to our rate limiter.\nThe feature flag looks like this in LaunchDarkly:\nAnd the variations look like this:\nNext, we will define an express middleware and pass it to the express app before starting the server:\n// Initialize Rate Limit Middleware const rateLimiterConfig = await launchDarklyClient.variation( \u0026#39;rate-limiter-config\u0026#39;, { // The static \u0026#34;user\u0026#34; for this task.  key: userName }, { // default rate limit config for fallback  windowMs: 24 * 60 * 60 * 1000, max: 100, message: \u0026#39;You have exceeded 100 requests in 24 hrs limit!\u0026#39;, standardHeaders: true, legacyHeaders: false, } ); app.use(rateLimit(rateLimiterConfig)); As the default configuration, we create a rate limit of 100 requests per 24 hours. We can now override this config by changing the JSON in the LaunchDarkly feature flag.\nNext, we can define the script as part of package.json:\n{ \u0026#34;scripts\u0026#34;: { \u0026#34;rateLimiter\u0026#34;: \u0026#34;node rate_limiter.js\u0026#34; } } Then we can execute the following command to run our app:\nnpm run rateLimiter Note that using a feature flag for complex JSON configuration can be risky, because you don\u0026rsquo;t get immediate feedback on whether the JSON you provided is valid (unless you look into the logs). So if you do this, be very careful not to provide invalid JSON in the feature flag variation.\nSchedule Cron Jobs Dynamically Sometimes system admins need to schedule cron jobs to perform various tasks like gathering system metrics, generating reports, clearing or archiving logs, taking backups, etc. Usually, these cron jobs are scheduled using cronTime expressions which are understood by cron executors. 
If there is a sudden need to change the cronTime expression of a particular job, then we can define it as a feature flag (whose value we can change on demand) and use it whenever the cron runs.\nFor this, first we will run:\nnpm install cron Then, we will define a variation in LaunchDarkly with string parameters which will take different cronTime expressions as values:\nNext, we will define our cron job code which will retrieve the config from LaunchDarkly and schedule the cron:\nimport cron from \u0026#39;cron\u0026#39;; import LaunchDarkly from \u0026#39;launchdarkly-node-server-sdk\u0026#39;; const CronJob = cron.CronJob; const CronTime = cron.CronTime; // Initiating LaunchDarkly Client const LD_SDK_KEY = \u0026#39;sdk-********-****-****-****-************\u0026#39;; const userName = \u0026#39;admin\u0026#39;; const launchDarklyClient = LaunchDarkly.init( LD_SDK_KEY ); launchDarklyClient.once(\u0026#39;ready\u0026#39;, async () =\u0026gt; { const cronConfig = await launchDarklyClient.variation( \u0026#39;cron-config\u0026#39;, { key: userName }, \u0026#39;*/4 * * * *\u0026#39; // Default fall-back variation value.  
); const job = new CronJob(cronConfig, function() { run(); }, null, false) let run = () =\u0026gt; { console.log(\u0026#39;scheduled task called\u0026#39;); } let scheduler = () =\u0026gt; { console.log(\u0026#39;CRON JOB STARTED WILL RUN AS PER LAUNCHDARKLY CONFIG\u0026#39;); job.start(); } let schedulerStop = () =\u0026gt; { job.stop(); console.log(\u0026#39;scheduler stopped\u0026#39;); } let schedulerStatus = () =\u0026gt; { console.log(\u0026#39;cron status ----\u0026gt;\u0026gt;\u0026gt;\u0026#39;, job.running); } let changeTime = (input) =\u0026gt; { job.setTime(new CronTime(input)); console.log(\u0026#39;changed to every 1 second\u0026#39;); } scheduler(); setTimeout(() =\u0026gt; {schedulerStatus()}, 1000); setTimeout(() =\u0026gt; {schedulerStop()}, 9000); setTimeout(() =\u0026gt; {schedulerStatus()}, 10000); setTimeout(() =\u0026gt; {changeTime(\u0026#39;* * * * * *\u0026#39;)}, 11000); setTimeout(() =\u0026gt; {scheduler()}, 12000); setTimeout(() =\u0026gt; {schedulerStop()}, 16000); } ); First, we initiate the cron job using scheduler() and then check the status after a second by calling schedulerStatus(). Next, we stop the scheduler using schedulerStop() after 9 seconds and again check the status at the 10th second. Then, we change the cron time dynamically by calling the changeTime('* * * * * *') method to run this cron every second. This value can also be set by defining another flag in LaunchDarkly and passing it on to this function. After that we schedule the cron job again by calling scheduler() and then stop it after a few seconds. 
So in this way, we can schedule and dynamically re-schedule the cron at our convenience.\nNext, we can define the script as part of package.json:\n{ \u0026#34;scripts\u0026#34;: { \u0026#34;cron\u0026#34;: \u0026#34;node cron_job.js\u0026#34; } } Then, we can execute the following command to run our app:\nnpm run cron Retrieving All Feature Flags Lastly, for debugging purposes, we might want to see the values of all our admin feature flags in the application. For this, we can retrieve all the flags from the LaunchDarkly server in an index.js file:\nimport LaunchDarkly from \u0026#39;launchdarkly-node-server-sdk\u0026#39;; import express from \u0026#39;express\u0026#39;; const app = express(); app.get(\u0026#34;/\u0026#34;, async (req, res) =\u0026gt; { const flags = await init(); res.send(flags); }); app.listen(8080); const LD_SDK_KEY = \u0026#39;sdk-********-****-****-****-************\u0026#39;; const userName = \u0026#39;admin\u0026#39;; let client; async function init() { if (!client) { client = LaunchDarkly.init(LD_SDK_KEY); await client.waitForInitialization(); } const user = { key: userName }; const allFlagsState = await client.allFlagsState(user); const flags = allFlagsState.allValues(); return flags; } We can simply initiate a client using LaunchDarkly.init(sdkKey) and wait until it\u0026rsquo;s ready with client.waitForInitialization(). After that, we can call the allFlagsState() function that captures the state of all feature flag variations for a specific user. 
This includes their values as well as other metadata.\nFinally, we can bind all of this to an API using the app.get() method so that it gets printed as a response whenever we hit the endpoint http://localhost:8080.\nNext, we can define the script as part of package.json:\n{ \u0026#34;scripts\u0026#34;: { \u0026#34;start\u0026#34;: \u0026#34;node index.js\u0026#34; } } Then we can execute the following command to run our app:\nnpm run start When we hit the endpoint, we can see the following output:\n{ \u0026#34;backend-log-level\u0026#34;: \u0026#34;debug\u0026#34;, \u0026#34;cron-config\u0026#34;: \u0026#34;* * * * * *\u0026#34;, \u0026#34;rate-limiter-config\u0026#34;: { \u0026#34;message\u0026#34;: \u0026#34;You have exceeded 200 requests in 24 hrs limit!\u0026#34;, \u0026#34;standardHeaders\u0026#34;: true, \u0026#34;windowMs\u0026#34;: 86400000, \u0026#34;legacyHeaders\u0026#34;: false, \u0026#34;max\u0026#34;: 200 } } In the example above we\u0026rsquo;re always printing out the feature flags for the static admin user. We could also think about adding a parameter username to our endpoint and then print the feature flag state for any other user! This can be very handy for investigating customer support requests!\nConclusion A feature flag platform allows us to dynamically change the runtime behavior of our application. 
We can roll out or roll back new features at our convenience.\nWe can also use a feature flag platform as a store for configuration data, so we can rapidly iterate on our application without having to build a custom configuration management solution.\nYou can refer to all the source code used in the article on GitHub.\n","date":"January 3, 2023","image":"https://reflectoring.io/images/stock/0104-on-off-1200x628-branded_hue5392027620fc7728badf521ca949f28_116615_650x0_resize_q90_box.jpg","permalink":"/nodejs-admin-feature-flag-launchdarkly/","title":"Admin Operations with Node.js and Feature Flags"},{"categories":["Software Craft"],"contents":"Inversion of control (IoC) is simply providing a callback (reaction) to an event that might happen in a system. In other words, instead of executing some logic directly, we invert the control to that callback whenever a specific event occurs. This pattern allows us to separate what we want to do from when we want to do it, with each part knowing as little as possible about the other, thus simplifying our design.\n Example Code This article is accompanied by a working code example on GitHub. Use Cases for Inversion of Control IoC offers us the ability to separate the concern of writing the code to take action from the concern of declaring when to take that action. This comes in handy when we are developing a complex system and we want to keep it clean and maintainable. Let\u0026rsquo;s take a look at some concrete usages.\nFramework A framework is the best example of IoC because we invert so much control into it. Let\u0026rsquo;s take the Spring framework for example. 
Instead of going through the trouble of writing code to configure and start a web server, we just use the Spring @SpringBootApplication annotation that tells Spring to take control and start a web server.\n@SpringBootApplication public class MyApplication { public static void main(String[] args) { SpringApplication.run(MyApplication.class, args); } } Spring also uses IoC to facilitate all related back-end development tasks, such as creating HTTP request handlers.\n@GetMapping(\u0026#34;/hello\u0026#34;) public void hello() { // handle request } The @GetMapping annotation is Spring\u0026rsquo;s IoC pattern to tell us not to worry about how to intercept the GET request to the endpoint /hello but only about what to do with it.\nMessage Handling Messaging systems are another good example of inversion of control, where we subscribe to a certain message queue (topic) and then we simply write the code that handles what to do with that message. In other words, we invert the control of fetching the messages to the messaging system and ask it to handle the message.\nLet\u0026rsquo;s look at an example using Kafka:\n@KafkaListener(topics = \u0026#34;myTopic\u0026#34;, groupId = \u0026#34;myGroup\u0026#34;) public void consumeMessage(String message) { System.out.println(\u0026#34;Received Message in myGroup : \u0026#34; + message); } Dependency Injection Simply put, dependency injection (DI) is having a framework that provides a component with its dependencies, so you don\u0026rsquo;t have to construct the objects with all their dependencies yourself.\nIn this sense, dependency injection is a subtype of inversion of control because we invert the control of constructing objects with their dependencies to a framework.\nReasons to Use Dependency Injection Using dependency injection has major benefits that make it a widely-used pattern. 
Let\u0026rsquo;s discuss two of them.\nSimplifies Code Design Using dependency injection allows a component not to worry about how to instantiate its dependencies, which might be quite complicated and might require method calls to other helper utilities. This way the component only asks for the dependency rather than creating it, which makes the component itself smaller and simpler.\nLet\u0026rsquo;s look at an example where we have a ShippingService that only sends a shipment after making some checks using REST calls and database operations.\nFirst, let\u0026rsquo;s do it without dependency injection, where we construct the RestTemplate and DataSource objects inside the ShippingService.\npublic class ShippingService { private RestTemplate restTemplate; private DataSource dataSource; public ShippingService() { RestTemplate restTemplate = new RestTemplateBuilder() .setConnectTimeout(Duration.ofMillis(1000)) .setReadTimeout(Duration.ofMillis(2000)) .build(); restTemplate.setUriTemplateHandler(new DefaultUriBuilderFactory(\u0026#34;http://payment-service-uri:8080\u0026#34;)); this.restTemplate = restTemplate; DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create(); dataSourceBuilder.driverClassName(\u0026#34;org.h2.Driver\u0026#34;); dataSourceBuilder.url(\u0026#34;jdbc:h2:file:C:/temp/test\u0026#34;); dataSourceBuilder.username(\u0026#34;shipping-user\u0026#34;); dataSourceBuilder.password(\u0026#34;superSecretPassword\u0026#34;); DataSource dataSource = dataSourceBuilder.build(); this.dataSource = dataSource; } private boolean packageIsShippable(String id) { // business logic that makes REST and database calls  return true; } public void ship(String shipmentId) { if (packageIsShippable(shipmentId)) { // ship the thing  } } } One thing that immediately catches our attention is the sheer amount of code we had to write before even getting to the ShippingService core business logic.\nNow let\u0026rsquo;s use dependency injection to create a simpler 
design.\npublic class ShippingService { private RestTemplate restTemplate; private DataSource dataSource; public ShippingService(RestTemplate restTemplate, DataSource dataSource) { this.restTemplate = restTemplate; this.dataSource = dataSource; } private boolean packageIsShippable(String id) { // business logic that makes REST and database calls  return true; } public void ship(String shipmentId) { if (packageIsShippable(shipmentId)) { // ship the thing  } } } Note that now the ShippingService doesn\u0026rsquo;t concern itself with how to construct the RestTemplate and DataSource dependencies, rather it just asks for them and expects them to be fully configured.\nSo, who will create the dependencies and pass them to the ShippingService? In the dependency injection world, this is known as the dependency injection Container.\npublic class DIContainer { private RestTemplate getRestTemplate() { RestTemplate restTemplate = new RestTemplateBuilder() .setConnectTimeout(Duration.ofMillis(1000)) .setReadTimeout(Duration.ofMillis(2000)) .build(); restTemplate.setUriTemplateHandler(new DefaultUriBuilderFactory(\u0026#34;http://payment-service-uri:8080\u0026#34;)); return restTemplate; } private DataSource getDataSource() { DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create(); dataSourceBuilder.driverClassName(\u0026#34;org.h2.Driver\u0026#34;); dataSourceBuilder.url(\u0026#34;jdbc:h2:file:C:/temp/test\u0026#34;); dataSourceBuilder.username(\u0026#34;shipping-user\u0026#34;); dataSourceBuilder.password(\u0026#34;superSecretPassword\u0026#34;); return dataSourceBuilder.build(); } public ShippingService getShipmentService() { return new ShippingService(getRestTemplate(), getDataSource()); } } And now, whoever wants to use the ShippingService can simply ask the DIContainer for it and start using it out of the box.\nSimplifies Testing Testing often includes testing a component that has dependencies that we don\u0026rsquo;t necessarily want to test as well. 
That\u0026rsquo;s where the concept of mocking comes in to help us mock the behavior of those dependencies.\nDependency injection allows dependencies to be passed into the component under test. Those dependencies could be the actual implementation or mocks that we create to simulate them during the test.\nLet\u0026rsquo;s look at an example using Mockito as our mocking library.\npublic class ShippingServiceTest { @Test void testShipping() { RestTemplate restTemplateMock = Mockito.mock(RestTemplate.class); DataSource dataSourceMock = Mockito.mock(DataSource.class); when(restTemplateMock.getForEntity(\u0026#34;url\u0026#34;, String.class)) .thenReturn(ResponseEntity.ok(\u0026#34;What Ever\u0026#34;)); ShippingService shippingService = new ShippingService(restTemplateMock, dataSourceMock); shippingService.ship(\u0026#34;some Id\u0026#34;); // assert stuff  } } While we\u0026rsquo;re not using a dependency injection framework here, we are injecting the (mocked) dependencies into the constructor of ShippingService.\nDependency Injection Frameworks In the Java world, we have three main frameworks that handle DI.\nSpring It\u0026rsquo;s an open-source framework developed and maintained by Pivotal. It\u0026rsquo;s a widely used framework with lots of integrations, which makes it quite heavyweight.\nGuice It\u0026rsquo;s an open-source framework that is developed and maintained by Google. It\u0026rsquo;s lightweight in comparison with Spring; however, it has fewer integrations.\nDagger Just like Guice, it\u0026rsquo;s also an open-source framework maintained by Google. 
However, it\u0026rsquo;s more lightweight with very few integrations.\nDependency Injection in Spring Spring makes it pretty straightforward to declare components and their dependencies and it handles the injection process itself, leaving us only the task of declaring which dependencies to inject into which components.\nSpring Bean Spring offers us the concept of Beans, which are just Java objects that get registered in the Spring Bean Registry.\nSpring Beans are objects that we define in a configuration class.\n@Configuration public class ShipmentConfiguration { @Bean public RestTemplate restTemplate() { RestTemplate restTemplate = new RestTemplateBuilder() .setConnectTimeout(Duration.ofMillis(1000)) .setReadTimeout(Duration.ofMillis(2000)) .build(); restTemplate.setUriTemplateHandler(new DefaultUriBuilderFactory(\u0026#34;http://payment-service-uri:8080\u0026#34;)); return restTemplate; } } By doing this, we are telling Spring to register a Bean of type RestTemplate with a name of restTemplate. Spring then allows us to inject this Bean in any other registered Spring Bean or in a Spring Component.\nTo have Spring inject the RestTemplate object into our ShippingService, all we have to do is accept it as a constructor argument:\n@Component public class ShippingService { private final RestTemplate restTemplate; public ShippingService(RestTemplate restTemplate){ this.restTemplate = restTemplate; } public void ship(String shipmentId) { // do stuff } } Note that we annotated the ShippingService class with the @Component annotation which tells Spring to make a bean of this class and to inject whatever dependencies it has.\nDependency Injection Types in Spring Spring offers us different ways to inject dependencies into our components. 
Let\u0026rsquo;s get to know them.\nField Injection We declare the dependency as a field in the component and simply annotate it with @Autowired:\n@Component public class ShippingService { @Autowired RestTemplate restTemplate; public void ship(String shipmentId) { // do stuff  } } Setter Injection We can annotate a setter method with @Autowired which tells Spring to inject the Bean of the type declared in the parameter.\n@Component public class ShippingService { RestTemplate restTemplate; @Autowired public void setRestTemplate(RestTemplate restTemplate) { this.restTemplate = restTemplate; } public void ship(String shipmentId) { // do stuff  } } Constructor Injection Spring also allows us to inject the dependencies through the constructor of the component class, which we have seen in the first example:\n@Component public class ShippingService { private final RestTemplate restTemplate; public ShippingService(RestTemplate restTemplate){ this.restTemplate = restTemplate; } public void ship(String shipmentId) { // do stuff  } } Constructor injection is the preferred way of injecting dependencies, because it makes the code less dependent on the framework. We can just as well use the constructor without Spring to create an object with mocked dependencies for a unit test, for example.\nConclusion Inversion of control (IoC) is a design pattern in which we declare an action to be taken when a certain event happens in our system. 
It is heavily used in software because it allows us to write clean and maintainable code.\nDependency injection (DI) is one form of IoC where we delegate the responsibility of creating and injecting components' dependencies to some other party outside of the components themselves.\n","date":"December 8, 2022","image":"https://reflectoring.io/images/stock/0128-threads-1200-628_hud862ab68256e860108af4bcf2a85d52a_56614_650x0_resize_q90_box.jpg","permalink":"/dependency-injection-and-inversion-of-control/","title":"Dependency Injection and Inversion of Control"},{"categories":["Software Craft"],"contents":"Teams looking to control and reduce their cloud costs can choose from multiple cloud cost management approaches. All of them require at least a basic understanding of what\u0026rsquo;s going on in your cloud infrastructure - this part relies on monitoring and reporting.\nOnce you gain visibility of the cost, you\u0026rsquo;re ready to optimize. Traditionally, many such approaches relied on reserving cloud capacity via Reserved Instances or Savings Plans. But we\u0026rsquo;re not going to cover this point since there are so many ways to get a better deal instead of paying upfront, as long as you do it right.\nHere are six battle-tested practices to help you manage and optimize Kubernetes costs.\n1. Track the right cost metrics in the right place Understanding the cloud bill is hard, so it pays to invest in a cost monitoring tool. Ideally, it should show you cost metrics in real time since containerized applications scale dynamically - and so do their resource demands.\nBut having the best cost monitoring tool isn\u0026rsquo;t going to work if you don\u0026rsquo;t know which metrics to keep your eye on. Here are 3 metrics that will help you understand your Kubernetes costs better:\nDaily spend Get a daily cloud spending report to compare actual costs with the budget you set for the month. Suppose your monthly cloud budget is $1000. 
If your average daily spend is closer to $50 than $33 (30 days x $33 = $990), you\u0026rsquo;re likely to end up with a higher cloud bill than your budget allows.\nAnother perk of the daily report? Take a look, and you\u0026rsquo;ll immediately identify outliers or anomalies that might cause your bill to skyrocket.\nCost per provisioned CPU vs. requested CPU Another good practice is tracking the cost per provisioned CPU and requested CPU. If you\u0026rsquo;re running a Kubernetes cluster that hasn\u0026rsquo;t been optimized, you\u0026rsquo;ll see a difference between how much you\u0026rsquo;re provisioning and how much you\u0026rsquo;re actually requesting. You spend money on provisioned CPUs but only end up actually using (requesting) a small amount of them - so the price of individual requested CPUs grows.\nIf you compare the number of requested versus provisioned CPUs, you can find a gap. This gap is your cloud waste. Calculate how much you\u0026rsquo;re spending per requested CPU to make cost reporting more accurate. CAST AI, for example, makes this gap visible to you as in the image below.\nHistorical cost allocation When finance approaches you to explain why your cloud bill is so high again, you probably want to know what ended up costing you more than expected. This is where historical cost allocation helps. A historical cost allocation report shows cost data for the past months split into daily costs, helping teams instantly spot the outliers that are driving cloud waste.\nCloud cost reporting is a challenge since major cloud providers don\u0026rsquo;t provide access to data in real time. Third-party solutions that increase cost visibility can fill this gap and allow engineering teams to instantly identify cost spikes and keep their cloud expenses in check. They also include automatic alerting features that help to take action immediately. 
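The provisioned-vs-requested CPU gap described above boils down to simple arithmetic. A sketch (the function names are illustrative, not from any cost tool):

```javascript
// Sketch: quantify cloud waste as the gap between provisioned and
// requested CPUs, and derive the effective cost per requested CPU.
function cpuWaste(provisionedCpus, requestedCpus) {
  return provisionedCpus - requestedCpus;
}

function costPerRequestedCpu(totalCost, requestedCpus) {
  // The fewer CPUs you actually request, the more each one effectively costs.
  return totalCost / requestedCpus;
}
```

For example, paying $1000 for 100 provisioned CPUs while only requesting 40 of them means 60 wasted CPUs and an effective $25 per requested CPU, rather than the $10 per CPU you thought you were paying.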
This works really well if you serve cost data in a tool engineers use anyway - for example, the industry-standard observability tool Grafana.\n2. Ask the right questions to accelerate cloud cost anomaly detection So, you\u0026rsquo;ve got your cost monitoring tool in place, and it\u0026rsquo;s generating heaps of data. And then you experience a cost spike, so it\u0026rsquo;s time to investigate the cause.\nThis may take a while if you don\u0026rsquo;t have a clue where to look. Investigating cloud cost issues can take a team from a few hours to days. Some teams report dedicating entire sprints to this!\nThe first step is taking a look at the historical cost allocation report that we discussed in the previous section. To grasp your cost situation quickly, here are a few questions you should ask based on that report:\n What was your projected monthly spend compared to last month\u0026rsquo;s spend? What is the difference between this and the previous month? Are there any idle workloads that aren\u0026rsquo;t doing anything apart from burning your money? What was the distribution between namespaces in terms of dollar spend? Namespaces provide a way for isolating groups of resources within a single cluster.  Answering these questions with the support of a historical cost report will speed up the investigation process and prevent such cost anomalies from happening in the future.\n3. Choose the right type and size of your virtual machines Define your requirements Data from CAST AI shows that by picking the right instance types and sizes, companies reduce their monthly cloud spend by 43% on average.\nThe idea here is to provision only as much capacity as your workload really needs. You need to take into account the following compute dimensions:\n CPU count and architecture, memory, storage, network.  See a cheap instance? 
You might be tempted to get it, but consider this: you start running a memory-intensive application, and all you get for that price is performance issues that impact your brand and customers. Picking the cheapest option will surely slash your costs - but your reputation will go along with it.\nPick the right instance type Cloud providers offer many different instance types matching a wide range of use cases with different combinations of CPU, memory, storage, and networking capacity. Each virtual machine type comes in one or more sizes to help you scale easily.\nBefore you settle on a machine type, consider that cloud providers roll out different computers, and the chips in those computers come with various performance characteristics. So you might end up with a machine with stronger performance than you need. And you won\u0026rsquo;t even know it.\nThe best way to verify the capabilities of an instance is benchmarking - dropping the same workload on every machine type and checking its performance.\nCheck storage transfer limitations Data storage is another key cost optimization area. Each application has unique storage needs, so make sure that the machine you choose has the storage throughput your workloads require.\nSteer clear of expensive drive options like premium SSD unless you\u0026rsquo;re going to maximize your use and take full advantage of them.\n4. Optimize Kubernetes autoscaling Ensure that your autoscaling policies don\u0026rsquo;t clash Kubernetes comes with several autoscaling mechanisms: Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler (VPA), and Cluster Autoscaler.\nVPA automatically adjusts the requests and limits configuration to help you reduce overhead and cut costs. HPA, on the other hand, scales out - and more likely out than in.\nThat\u0026rsquo;s why you should make sure that your VPA and HPA policies aren\u0026rsquo;t interfering with each other. 
When designing clusters for business- or purpose-class tier of service, it\u0026rsquo;s a good idea to review your binning and packing density settings as well.\nConsider mixing instances A mixed-instance strategy provides you with high availability and performance at a cost that is reasonable (hopefully, this is how your finance department sees that too).\nThe idea here is to choose different instance types that are cheaper and just good enough for some of your workloads but not for those that are high-throughput, low-latency ones. Depending on the workload, it\u0026rsquo;s often ok to pick the cheapest machines. Alternatively, you can get away with a smaller number of machines with higher specs. This is a good method for reducing your Kubernetes bill because each node requires Kubernetes to be installed on it, which adds a little overhead.\nBut prepare for scaling challenges if you use mixed instances. In this scenario, each instance uses a different type of resource. You can scale up the instances in your autoscaling groups using metrics like CPU or network utilization - but then expect to get inconsistent metrics.\nThis is where Cluster Autoscaler helps. It lets you mix instance types in a node group - as long as your machines have the same capacity in terms of CPU and memory.\nUse multiple availability zones Virtual machines that span across several availability zones (AZs) increase your availability. AWS recommends its users configure multiple node groups, scope each to a single availability zone, and finally enable the --balance-similar-node-groups feature.\nIf you create a single node group, you can scope that node group to span across multiple Availability Zones.\nApply instance weighted scores Suppose you have a workload that often ends up consuming more capacity than provisioned. Were these resources really needed?
Or did the workload consume them because they were available but not critically required?\nYou can eliminate this by using instance weighted scores when choosing machine sizes and types that are a good match for autoscaling. Instance weights define the capacity units that each of the instance types contributes to your application\u0026rsquo;s performance. Instance weighting comes in handy, especially if you adopt a diversified allocation strategy and use spot instances.\n5. Use spot instances Spot instances are a great way to cut your Kubernetes bill as they offer discounts reaching even 90% off the on-demand pricing. A spot instance uses spare EC2 capacity available for less than the On-Demand price. However, this also means that the provider may reclaim the capacity any time, with a notice period lasting from 30 seconds (Google Cloud) to 2 minutes (AWS and Azure).\nData from CAST AI shows that by using spot instances, companies cut their cloud spend by 65% on average. Clusters using only spot instances achieve the greatest savings - 74.2% on average.\nBut before jumping on this opportunity, take a look at your workload to see if it\u0026rsquo;s a good fit for spot instances.\nCheck if your workload is spot-ready Ask these questions when examining your workload: How much time does it need to finish the job? Is this workload mission- and time-critical? How well can it handle interruptions? Is it tightly coupled between instance nodes? How are you going to deal with interruptions when the cloud provider pulls the plug on your machine?\nIf your workload is mission-critical and can’t handle interruptions well, it’s probably not a good candidate for a spot instance. But if it’s not so critical, interruption-tolerant, and falls under a clear strategy for dealing with interruptions, running it on a spot instance is a good idea.\nChoose your spot instances; here\u0026rsquo;s how When picking spot instances, consider going for the slightly less popular ones. 
It\u0026rsquo;s simple - if they\u0026rsquo;re less in demand, they\u0026rsquo;re also less likely to get interrupted.\nBefore settling on an instance, take a look at its frequency of interruption - this is the rate at which the provider reclaimed that instance type\u0026rsquo;s capacity during the trailing month.\nFor example, the AWS Spot Instance Advisor displays the frequency of interruption in ranges of \u0026lt;5%, 5-10%, 10-15%, 15-20%, and \u0026gt;20%:\nBid your price Found the right spot instance? Now it\u0026rsquo;s time to set the maximum price you\u0026rsquo;re ready to pay for it. Note that the machine will only run when the marketplace price is below or equal to your bid.\nThe rule of thumb here is to set the maximum price to the level of on-demand pricing. If you pick a lower value, you risk more frequent interruptions once the instance price exceeds the one you set for it.\nTo increase your chances of snatching spot instances, set up spot instance groups (this is called Spot Fleets in AWS). This will let you request multiple machine types at the same time. Expect to pay the maximum price per hour for the entire fleet instead of a specific spot pool (which is a set of instances of the same type with the same OS, availability zone, and network platform).\nYou can probably tell that making it work means a massive amount of configuration, setup, and maintenance work.\n6. Use an automation tool that does cloud optimization for you AWS alone has some 400+ virtual machine types on offer. What if your teams use different cloud providers?
The manual effort of configuring resources, picking virtual machines, and setting autoscaling policies is going to cost you more than its optimization impact.\nThe market is full of cloud cost optimization and management solutions that take some or all of the above tasks off engineers' shoulders, reclaiming time for teams to do more strategic work.\nWhen picking such solutions, you\u0026rsquo;re facing the following choice:\n  Cost management tools from cloud providers (like AWS Cost Explorer) - these tools are the entry point into the world of cloud costs for most teams. But once your cloud footprint grows beyond a single cloud provider and service, they fail to provide accurate data. Also, cloud providers don\u0026rsquo;t offer access to real-time cost data, and we all know that a cloud bill can grow from $0 to $72k in just a few hours.\n  Legacy cost management tools - legacy cloud monitoring tools like Cloudability that don’t consider the business context are great if all you need is increased visibility into how much you spend, where that money is going, and who exactly is spending it. But they don\u0026rsquo;t offer any automation capabilities to seriously reduce your cloud bill - it\u0026rsquo;s all down to manual configuration. If you run on Kubernetes, there are more powerful tools that do it all for you.\n  Cloud-native optimization and monitoring tools - you can choose from a range of modern solutions like CAST AI that handle cloud-native cost dynamics, bringing teams all the cost monitoring and optimization features that act on cloud resources in real time.\n  Start optimizing your Kubernetes cloud bill We didn\u0026rsquo;t mention reserved capacity because long-term commitments aren\u0026rsquo;t a good fit for many modern companies, and - when using Kubernetes - you can get a better cost outcome with automation.
After all, engineers have more important things to do than babysitting their cloud infrastructure.\nThere\u0026rsquo;s no reason why Kubernetes costs should remain a black box. You can find out how much you\u0026rsquo;re spending and where you could save up now - connect your cluster to CAST AI and get access to a free Kubernetes cost monitoring solution that shows your expenses in real time and gives you recommendations - for example, more cost-efficient virtual machines that do the job for your workloads.\nWhenever you’re ready, you can turn on CAST AI’s fully automated cloud cost management and - as icing on the cake - check your clusters against security vulnerabilities and configuration best practices, which is free of charge as well.\n","date":"November 24, 2022","image":"https://reflectoring.io/images/stock/0128-coins-1200x628-branded_hue1b232befe98603f391ace785a445daa_193157_650x0_resize_q90_box.jpg","permalink":"/blog/2022/2022-11-24-6-cloud-cost-management-practices/","title":"6 Proven Cloud Cost Management Practices for Kubernetes"},{"categories":["Node"],"contents":"How to design a URL shortening service like tinyurl.com is a frequently asked question in system design interviews. URL shortener services convert long URLs into significantly shorter URL links.\nIn this article, we will walk through the architecture of designing a URL shortening service, looking at both basic and advanced requirements, then we will explore how to create a Basic URL shortener using Node.js, React.js and MongoDB.\nOn our Node.js server, we will create REST API endpoints for the URL shortener and integrate them into React.js frontend applications, while storing all our URL data in a MongoDB database.\n Example Code This article is accompanied by a working code example on GitHub. How Do Url Shorteners Work? A URL shortening service selects a short domain name as a placeholder. Examples are tinyurl.com or bit.ly. 
When a client submits a long URL to be shortened, the service generates and returns a short URL by using some function (a cryptographic hash function, an incrementing ID, a random ID, or some combination) to generate a token like XQ6953. The URL returned to the client consists of the selected domain name plus the generated ID token appended to the end, for example https://bit.ly/XQ6953.\nThe URL shortening service stores both the short and long URLs in the database mapped to each other. When a call is made to the short URL, the service looks up the associated long URL in the database and redirects the web request to the long URL\u0026rsquo;s web page. This is how a basic URL shortening service works.\nFor scalability and durability, a URL shortener service can employ the following features.\nAdvanced Architecture High Availability The system should be highly available. This is necessary because if our service goes down, all URL redirections would fail. URL redirection and response time should happen in real time with minimal latency.\nSQL or NoSQL Database? What kind of database is to be used? A NoSQL database like DynamoDB, MongoDB or Cassandra is a better option since we expect to store billions of rows and don\u0026rsquo;t need to employ associations between items. A NoSQL option can horizontally scale up performance over numerous servers.\nThey are inherently designed for large data (and for scale). Data in a NoSQL database can be distributed across multiple machines or workstations. NoSQL documents can be located on various servers without worrying about joining rows, which is a concern in relational databases.\nCaching for Improved Latency We can improve this architecture by adding a caching layer to our service. Every time a user clicks on a short URL, the server accesses the database in order to retrieve the long URL mapped to it in the database.\nDatabase calls can be time-consuming and costly.
We can improve the response time of our server by caching frequently accessed short URLs or the top 10% of daily lookups. So, when we receive a request for a short URL, our servers first check to see if the data is available in the cache; if it is, it is retrieved from the cache; otherwise, it is retrieved from the database.\nValidation What characters are allowed in the shortened URL? This encoding could be base36 ([a-z, 0-9]) or base62 ([A-Z, a-z, 0-9]) and if we add ‘+’ and ‘/’ we can use Base64 encoding.\nHow long should the randomly generated ID be? The random string should not be so long that it defeats the purpose of having a shortened URL, nor so short that collisions become likely - the longer the generated ID, the more unique our IDs will be. The shortened links must be unique and random (not predictable).\nLoad balancing A load balancer, as the name suggests, balances the load by distributing requests across our servers. When we run multiple servers, we cannot expose each of them as a separate endpoint to users.\nA load balancer determines which server is available to handle which request. There are various types of load balancers, and each type has its own method of distributing load.\nThe load balancer also serves as a single point of contact for all of our users, removing the need for them to know the specific server IP addresses of our server instances. All the user requests land on the load balancer and the load balancer is responsible for re-routing these requests to a specific server instance.\nExample Use Case  Shortened URL links are entered by the user. The URL is validated. Check to see if the user provided the right URL address. The load balancer receives the URL and sends the request to the web servers. If the shortened URL is already in the cache, it returns the long URL right away. If the shortened URL is not in the cache, the service will have to search the database for it. The long URL will be returned to the user.
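The cache-first lookup in the steps above can be sketched in a few lines of Node.js. Note that the in-memory Map and the db.findLongUrl() helper are illustrative stand-ins for a real cache (such as Redis) and a real database query - they are not part of the application built below:

```javascript
// Illustrative cache-aside lookup: check the cache first, fall back to the database.
const cache = new Map();

async function resolveUrl(shortId, db) {
  if (cache.has(shortId)) {
    return cache.get(shortId); // cache hit: no database round trip
  }
  const longUrl = await db.findLongUrl(shortId); // cache miss: query the database
  if (longUrl) {
    cache.set(shortId, longUrl); // remember it for subsequent lookups
  }
  return longUrl;
}
```

With this pattern, only the first request for a popular short URL pays the database cost; every later request is served from memory.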
In the next section, we will build a basic URL shortening application that accepts URLs, then we\u0026rsquo;ll validate the URL string using a helper function to guarantee that users do not make mistakes while entering the URLs. After receiving the long URL, our URL service will generate a short random ID using the shortId dependency, which is then concatenated with the domain name of our application.\nBoth URL (short and long) links are saved in a MongoDB database. Finally, all URL endpoints from the server are integrated into our React.js application.\nSetting up the Node.Js Application To begin, we navigate to a new root directory where we want our application to live.\nHere, we\u0026rsquo;ll create a new folder urlbackend and navigate into it by entering the following command in the terminal:\nmkdir urlbackend \u0026amp;\u0026amp; cd urlbackend Then, again in the terminal, we run the following command to initialize our Node.js application.\nnpm init -y Open the Node.js application in your preferred IDE.\nThen, run the following command to install the required dependencies for our application.\nnpm install cors dotenv express mongoose shortid Here, we\u0026rsquo;re installing the dependencies we need for our application\u0026rsquo;s server, which include:\n cors: Cross-origin resource sharing (CORS) allows AJAX requests to skip the Same-origin policy and access resources from remote hosts. Comes in handy while connecting the Node.js server to the Client (frontend) side. dotenv: This loads environment variables from a .env file into process.env. express: A Node.js framework that provides broad features for building web and mobile applications. mongoose: An object modeling tool that aids in connecting and querying the MongoDB database.
shortid: Generates non-sequential short unique ids  Next, create an index.js file to start our Node.js server and a .env file to store all of our application\u0026rsquo;s confidential information as environment variables.\nOur application should be structured like this now:\nTo create a simple Node.Js server, paste the code below in the index.js file:\nconst express = require(\u0026#39;express\u0026#39;); const app = express(); // Server Setup const PORT = process.env.PORT || 3333; app.listen(PORT, () =\u0026gt; { console.log(`Server is running at PORT: ${PORT}`); }); In the code above, we created a server by importing and instantiating the express package, making it listen on our custom port 3333.\nTo start the application server, run node index.js in the terminal and we’ll get the following output:\nServer is running at PORT: 3333 Our URLs will be stored in a MongoDB database. Next, we\u0026rsquo;ll go through how to use and configure the MongoDB database in our application.\nWorking with the MongoDB Database MongoDB is a schema-less NoSQL database, which means it stores data objects in collections and documents rather than the tables and rows used in typical relational databases. Collections are sets of documents, which are equivalent to tables in a relational database. Documents consist of key-value pairs, which are the basic unit of data in MongoDB.\nWe can choose to install a local version of MongoDB for our application and browse it with MongoDB Compass. But we\u0026rsquo;ll have to switch this during production to connect to a live MongoDB server.\nHowever, we have another option of connecting to a live MongoDB database, where we won\u0026rsquo;t have to configure the database connection again during deployment. We can achieve this using a MongoDB Atlas Cluster.\nMongoDB Atlas cluster is a simple and quick solution for integrating MongoDB with our application.
The MongoDB Atlas cluster is a fully-managed cloud database that handles all of the complexities of deploying, administering and repairing our installations on the cloud service provider of our choosing (AWS, Azure and GCP). The best approach to deploy, run and scale MongoDB in the cloud is with MongoDB Atlas. We can build faster and spend less time managing our database by leveraging MongoDB\u0026rsquo;s rich ecosystem of drivers, integrations and tools.\nTo get started with MongoDB Atlas in our project, we\u0026rsquo;ll need to Create an Atlas Account and deploy a Free Tier Cluster. To create and deploy a MongoDB Atlas cluster, follow these steps:\n Go here to sign up for a new MongoDB Atlas account. Fill in the registration form with your information and click Sign up. Click on Deploy a shared cloud database for Free Click on Create a Shared Cluster Click on Database access on the sidebar and Add New Database User Select Password, then enter a username and password for your user. For Built-in Role, select Atlas Admin Click the Add User button to create your new user Click on Network Access on the sidebar. To allow access from all IP addresses, click on the Add IP Address button Select ALLOW ACCESS FROM ANYWHERE (convenient for a demo, but restrict this in production) Click the Confirm button. Click on Database on the sidebar Click the Connect button for your cluster In the popup modal, click on Connect your application. Copy the URI to the clipboard Lastly, all you need to do is replace the \u0026lt;password\u0026gt; field with the password you created previously.  Our MongoDB Atlas cluster is all set and ready for use.
If you need more detail, click here for a more in-depth guide on how to set up a MongoDB cluster.\nTo secure and keep our MongoDB Atlas URI confidential, we will store the copied URI link in the .env file we created above.\nCopy and paste the following code into our .env file:\nMONGO_URI=mongodb+srv://\u0026lt;username\u0026gt;:\u0026lt;password\u0026gt;@cluster0.oq1hdin.mongodb.net/?retryWrites=true\u0026amp;w=majority DOMAIN_URL=http://localhost:3333 Here, we are storing MONGO_URI and DOMAIN_URL as environment variables in the .env file. Replace the MONGO_URI link with the one you generated in MongoDB Atlas and remember to input your username and password. DOMAIN_URL is our server\u0026rsquo;s localhost address, which can readily be changed during production.\nMongoDB is schema-less, which means that it pushes database architecture and schema creation to the application level, where they can be handled more flexibly. For schema creation, querying, and connecting to the MongoDB database, we will use the Mongoose dependency.\nMongoose manages relationships between data. It is used to create schema and define how data is stored and structured in MongoDB. It remains one of the most popular ODM tools for MongoDB. If you are coming from a SQL background then using Mongoose will make the transition into a NoSQL environment much easier.\nCreating Mongoose Schema In this section, we will use Mongoose to create a URL schema. This will define how URL data will be structured and stored in our database.
Each schema maps to a MongoDB collection.\nTo create our URL schema, create a Url.js file in the urlbackend folder.\nPaste the following code in the Url.js file:\nconst mongoose = require(\u0026#34;mongoose\u0026#34;); const UrlSchema = new mongoose.Schema({ urlId: { type: String, required: true, }, origUrl: { type: String, required: true, }, shortUrl: { type: String, required: true, }, clicks: { type: Number, required: true, default: 0, }, date: { type: Date, default: Date.now, }, }); module.exports = mongoose.model(\u0026#34;Url\u0026#34;, UrlSchema); In the above code, we use mongoose to create a schema; this structures how URLs are saved in our MongoDB database. To use the schema definition, we converted our UrlSchema into a model by passing it into mongoose.model(modelName, schema). A mongoose model provides an interface to the database for creating, querying, updating, deleting records, etc.\nCreate a Helper Function To Validate Url Links We now have a schema in place that allows us to receive and store URLs in our database. However, URLs entered into the application must be validated. To do this, we will write a helper function to assist us in validating any URL submitted by users.\nOur helper function will be created in a new folder.
Create a Util folder in the application\u0026rsquo;s root directory; within that folder, we will create a util.js file.\nAdd the following code to the Util/util.js file.\nfunction validateUrl(value) { var urlPattern = new RegExp(\u0026#39;^(https?:\\\\/\\\\/)?\u0026#39;+ // validate protocol \t\u0026#39;((([a-z\\\\d]([a-z\\\\d-]*[a-z\\\\d])*)\\\\.)+[a-z]{2,}|\u0026#39;+ // validate domain name \t\u0026#39;((\\\\d{1,3}\\\\.){3}\\\\d{1,3}))\u0026#39;+ // validate OR ip (v4) address \t\u0026#39;(\\\\:\\\\d+)?(\\\\/[-a-z\\\\d%_.~+]*)*\u0026#39;+ // validate port and path \t\u0026#39;(\\\\?[;\u0026amp;a-z\\\\d%_.~+=-]*)?\u0026#39;+ // validate query string \t\u0026#39;(\\\\#[-a-z\\\\d_]*)?$\u0026#39;,\u0026#39;i\u0026#39;); return !!urlPattern.test(value); } module.exports = { validateUrl }; The code above uses RegExp to examine and validate any URL passed into our application. It checks whether the entered URL follows the HTTP(S) protocol, whether the syntax of the URL\u0026rsquo;s domain name or IP address is valid, etc.\nUsing the mongoose schema and our helper function, we can now validate all URLs entered into our application as well as the way they are structured in our database.\nConnecting to Database and Creating Endpoints In this section, using mongoose, we will connect the Node.js application to our MongoDB cluster database.\nWe will be using the mongoose.connect() method to create a connection with MongoDB. To avoid the mongoose DeprecationWarning, we pass the necessary parameters to mongoose.connect() such as useNewUrlParser: true etc.\nNext, we will create the following endpoints for our application:\n GET All URLs: This endpoint will be used to retrieve all stored URLs in JSON format from our database. POST Shorten URLs: All URLs entered into the application will be sent to this endpoint as payload, where they will be validated using the util.js helper function we previously created. Then a random id is generated using the shortId library.
To create a new URL, we will concatenate the newly generated random id with our application\u0026rsquo;s domain name. Finally, our database stores both the entered URL and the newly created URL. GET Redirect: With the help of this endpoint, we can switch from the short URL stored in our database to the long or original URL, while also monitoring the number of clicks on the short URL.  In the index.js file, paste the following code:\nconst dotenv = require(\u0026#34;dotenv\u0026#34;); const express = require(\u0026#34;express\u0026#34;); const cors = require(\u0026#34;cors\u0026#34;); const mongoose = require(\u0026#34;mongoose\u0026#34;); const shortid = require(\u0026#34;shortid\u0026#34;); const Url = require(\u0026#34;./Url\u0026#34;); const utils = require(\u0026#34;./Util/util\u0026#34;); // configure dotenv dotenv.config(); const app = express(); // cors for cross-origin requests to the frontend application app.use(cors()); // parse requests of content-type - application/json app.use(express.json()); // Database connection mongoose .connect(process.env.MONGO_URI, { useNewUrlParser: true, useUnifiedTopology: true, }) .then(() =\u0026gt; { console.log(`Db Connected`); }) .catch((err) =\u0026gt; { console.log(err.message); }); // get all saved URLs app.get(\u0026#34;/all\u0026#34;, async (req, res, next) =\u0026gt; { Url.find((error, data) =\u0026gt; { if (error) { return next(error); } else { res.json(data); } }); }) // URL shortener endpoint app.post(\u0026#34;/short\u0026#34;, async (req, res) =\u0026gt; { const { origUrl } = req.body; const base = process.env.DOMAIN_URL || `http://localhost:3333`; const urlId = shortid.generate(); if (utils.validateUrl(origUrl)) { try { let url = await Url.findOne({ origUrl }); if (url) { res.json(url); } else { const shortUrl = `${base}/${urlId}`; url = new Url({ origUrl, shortUrl, urlId, date: new Date(), }); await url.save(); res.json(url); } } catch (err) { console.log(err);
res.status(500).json(\u0026#39;Server Error\u0026#39;); } } else { res.status(400).json(\u0026#39;Invalid Original Url\u0026#39;); } }); // redirect endpoint app.get(\u0026#34;/:urlId\u0026#34;, async (req, res) =\u0026gt; { try { const url = await Url.findOne({ urlId: req.params.urlId }); if (url) { url.clicks++; await url.save(); return res.redirect(url.origUrl); } else res.status(404).json(\u0026#34;Not found\u0026#34;); } catch (err) { console.log(err); res.status(500).json(\u0026#34;Server Error\u0026#34;); } }); // Listen on port 3333 const PORT = process.env.PORT || 3333; app.listen(PORT, () =\u0026gt; { console.log(`Server is running at PORT ${PORT}`); }); In the above code, we created our database connection using mongoose, as well as all of the endpoints required for our URL shortening service application.\nTo start our application server, run node index.js in the terminal and we’ll get the following output:\nServer is running at PORT 3333 Db Connected Our endpoints and database are now operational. Next, we will configure our React.js application and test our endpoints:\nSetting Up a React.js Application We are using the React framework for our URL shortener frontend. React is a free and open-source front-end JavaScript library for building user interfaces based on UI components. It lets us design simple views for each state in our application, and React efficiently updates and renders just the right components when our data changes. To get started using React, see the React documentation.\nLet\u0026rsquo;s begin building our react application. Change the directory to the project\u0026rsquo;s root folder by entering the following command into the terminal:\ncd .. We\u0026rsquo;ll take full advantage of the rich React ecosystem by using create-react-app and npx to swiftly set up our React.js application.
npx is an npm package runner that can execute any package we want from the npm registry without even installing it, whereas create-react-app sets up our React.js development environment so we can get right into building our application.\nRun the following command in the terminal to create a React application named urlfrontend:\nnpx create-react-app urlfrontend After executing the above code, a React.js application named urlfrontend will be generated. To change the directory into it, run:\ncd urlfrontend To install the required dependencies for our React.js application, run:\nnpm install axios bootstrap In the above command, we installed:\n axios: a promise-based HTTP client for JavaScript. It can make HTTP requests from the browser and handle the transformation of request and response data. bootstrap: a powerful, feature-packed frontend toolkit for styling our application that helps create an elegant, responsive layout.  Open the React.js application in your preferred IDE.\nTo effectively use the React framework, we first have to create components for our application. Next, we will be looking at what components are and how to create them in our application.\nCreating React.js Components React components render our application view. They are independent, reusable bits of code that let us split our application\u0026rsquo;s UI into separate pieces.
They serve the same purpose as JavaScript functions but return HTML.\nTo begin creating components for our application, create a new folder in the src folder of the application named components.\nIn the new components folder, add two new files: AddUrlComponent.js and ViewUrlComponent.js.\nThis is the current structure of our project:\nIn the AddUrlComponent.js component, we will create a simple form that accepts input URLs and sends them as a \u0026lsquo;POST\u0026rsquo; request to our urlbackend server endpoint using the axios dependency.\nAlso, we\u0026rsquo;ll be utilizing React\u0026rsquo;s useState hook to store state changes in this component, and we are using bootstrap classes for styling the component.\nCopy and paste the \u0026lsquo;AddUrlComponent\u0026rsquo; code:\nimport React, { useState } from \u0026#39;react\u0026#39; import axios from \u0026#34;axios\u0026#34;; const AddUrlComponent = () =\u0026gt; { const [url, setUrl] = useState(\u0026#34;\u0026#34;); const onSubmit = (e)=\u0026gt; { e.preventDefault(); if (!url) { alert(\u0026#34;please enter something\u0026#34;); return; } axios .post(\u0026#34;http://localhost:3333/short\u0026#34;, {origUrl: url}) .then(res =\u0026gt; { console.log(res.data); }) .catch(err =\u0026gt; { console.log(err.message); }); setUrl(\u0026#34;\u0026#34;) } return ( \u0026lt;div\u0026gt; \u0026lt;main\u0026gt; \u0026lt;section className=\u0026#34;w-100 d-flex flex-column justify-content-center align-items-center\u0026#34;\u0026gt; \u0026lt;h1 className=\u0026#34;mb-2 fs-1\u0026#34;\u0026gt;URL Shortener\u0026lt;/h1\u0026gt; \u0026lt;form className=\u0026#34;w-50\u0026#34; onSubmit={onSubmit}\u0026gt; \u0026lt;input className=\u0026#34;w-100 border border-primary p-2 mb-2 fs-3 h-25\u0026#34; type=\u0026#34;text\u0026#34; placeholder=\u0026#34;http://samplesite.com\u0026#34; value={url} onChange={e =\u0026gt; setUrl(e.target.value)} /\u0026gt; \u0026lt;div className=\u0026#34;d-grid gap-2 col-6
mx-auto\u0026#34;\u0026gt; \u0026lt;button type=\u0026#34;submit\u0026#34; className=\u0026#34;btn btn-danger m-5\u0026#34;\u0026gt; Shorten! \u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/form\u0026gt; \u0026lt;/section\u0026gt; \u0026lt;/main\u0026gt; \u0026lt;/div\u0026gt; ); } export default AddUrlComponent; In our ViewUrlComponent component, the axios dependency is used with a useEffect hook to make a GET request to the /all endpoint of our urlbackend server. This fetches all URLs saved in our database.\nThe React useEffect hook runs the fetch once when the component mounts. Note the empty dependency array: passing [urls] here would trigger an endless fetch loop, because setUrls stores a new array on every response, which would re-run the effect.\nAll states and fetched URL data in the ViewUrlComponent component are managed and stored using React\u0026rsquo;s useState hook.\nPaste the following in the ViewUrlComponent.js file:\nimport React, { useEffect, useState } from \u0026#39;react\u0026#39; import axios from \u0026#34;axios\u0026#34; const ViewUrlComponent = () =\u0026gt; { const [urls, setUrls] = useState([]); useEffect(() =\u0026gt; { const fetchUrlAndSetUrl = async () =\u0026gt; { const result = await axios.get(\u0026#34;http://localhost:3333/all\u0026#34;); setUrls(result.data); }; fetchUrlAndSetUrl(); }, []); return ( \u0026lt;div\u0026gt; \u0026lt;table className=\u0026#34;table\u0026#34;\u0026gt; \u0026lt;thead className=\u0026#34;table-dark\u0026#34;\u0026gt; \u0026lt;tr\u0026gt; \u0026lt;th\u0026gt;Original Url\u0026lt;/th\u0026gt; \u0026lt;th\u0026gt;Short Url\u0026lt;/th\u0026gt; \u0026lt;th\u0026gt;Click Count\u0026lt;/th\u0026gt; \u0026lt;/tr\u0026gt; \u0026lt;/thead\u0026gt; \u0026lt;tbody\u0026gt; {urls.map((url, idx) =\u0026gt; ( \u0026lt;tr key={idx}\u0026gt; \u0026lt;td\u0026gt;{url.origUrl}\u0026lt;/td\u0026gt; \u0026lt;td\u0026gt; \u0026lt;a href={`${url.shortUrl}`}\u0026gt;{url.shortUrl}\u0026lt;/a\u0026gt; \u0026lt;/td\u0026gt;
\u0026lt;td\u0026gt;{url.clicks}\u0026lt;/td\u0026gt; \u0026lt;/tr\u0026gt; ))} \u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; \u0026lt;/div\u0026gt; ); } export default ViewUrlComponent; To use the above components in our application, use syntax similar to normal HTML: \u0026lt;AddUrlComponent /\u0026gt;, \u0026lt;ViewUrlComponent /\u0026gt;. We will be rendering all our URL components in the src/App.js file. We will import all our application\u0026rsquo;s components and also import the Bootstrap CSS dependency for our application styling.\nIn the src/App.js file add the following code snippet:\nimport \u0026#34;bootstrap/dist/css/bootstrap.min.css\u0026#34;; import AddUrlComponent from \u0026#34;./components/AddUrlComponent\u0026#34;; import ViewUrlComponent from \u0026#34;./components/ViewUrlComponent\u0026#34;; function App() { return ( \u0026lt;div className=\u0026#34;App container mt-5\u0026#34;\u0026gt; \u0026lt;AddUrlComponent /\u0026gt; \u0026lt;ViewUrlComponent /\u0026gt; \u0026lt;/div\u0026gt; ); } export default App; In the code above, we simply included our Bootstrap styling and the AddUrlComponent and ViewUrlComponent components in the App component.\n ## Start React Application This is the final but most important step in successfully launching our Node.js and React.js applications. First, ensure that the Node server is up and **listening on PORT: 3333**. Finally, run the following command to start the React app. ```bash npm start ``` Our urlfrontend application should be up and running on port 3000:\nConclusion In this article, we looked at the basic architecture and advanced requirements of a URL shortener, then we created a URL shortening service API from scratch using React.js, Node.js and MongoDB. I hope you enjoyed reading this article and learned something new.
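As a parting sketch of the core idea: a shortener ultimately maps each long URL to a compact code. One common scheme — illustrative only, not necessarily the approach used in this project's backend — is base62-encoding a numeric database id:

```javascript
// Illustrative sketch (not the exact code used in this project): derive a
// compact base62 short code from a numeric id, e.g. a database sequence.
const ALPHABET =
  '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';

function toBase62(id) {
  if (id === 0) return ALPHABET[0];
  let code = '';
  while (id > 0) {
    code = ALPHABET[id % 62] + code;       // take the least-significant digit
    id = Math.floor(id / 62);              // shift the id right by one digit
  }
  return code;
}

console.log(toBase62(125)); // a two-character code
console.log(toBase62(0));   // the zero id maps to '0'
```

Because base62 codes grow logarithmically with the id, even billions of URLs stay within six characters.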
The complete source code can be found here.\n","date":"November 4, 2022","image":"https://reflectoring.io/images/stock/0127-link-1200x628-branded_hu8ecef51834a4c6caf5a9f9eb22ef5238_122131_650x0_resize_q90_box.jpg","permalink":"/node-url-shortener/","title":"Building a Url Shortener With Node.Js"},{"categories":["Node"],"contents":"Development teams, nowadays, can deliver quick value to consumers with far less risk with the help of feature flag-driven development.\nFeature flags, however, are one more thing to think about when testing our code. So in this article, we\u0026rsquo;ll talk about some of the difficulties that testing presents in the age of feature flags and offer some suggestions on how to overcome them.\nTo help structure the discussion, we will outline five different sorts of tests that could be included in our testing plan:\n Unit Tests: Testing separate functions with unit tests. Integration Tests: Verifying how different modules work together. End-to-End Tests: End-to-end tests, also known as functional tests, examine how a real user might navigate our website. Quality Assurance (QA) Testing: A testing procedure that ensures functionality satisfies the requirement is termed quality assurance (QA) testing. User Acceptance Testing (UAT): Testing procedure to get stakeholders' approval that the functionality satisfies specifications.  The first three test types defined above are often executed automatically when using the Continuous Integration (CI) technique. QA testing, which can involve both manual and automated tests, may occasionally be performed by a specialized QA team. While the first four test types help determine whether anything was built correctly, UAT helps to determine if the product is acceptable and fit for the purpose.\nIn this article, we will try to perform a UAT directly in a production environment using some automation. One type of User Acceptance Testing is Beta Testing. 
Beta tests are performed either in a beta version of a product or as a test user in the same product running in a production environment alongside any other users. This helps in minimizing the risks of product failures and enables customer validation.\n Example Code This article is accompanied by a working code example on GitHub. Why should we Perform Beta Tests in Production? We discuss testing in production a lot. Testing in production does not imply releasing code without tests and crossing one\u0026rsquo;s fingers. Instead, it refers to the capacity to test actual features with real data in a real environment using real people.\nFeature flags give developers, QA teams and UAT teams the freedom to test features in a genuine production environment before making them available to the rest of their user base. There is no impact on other users and no need to perform a complete rollback when a QA or UAT tester finds a bug.\nNow, since the tester is going to use the same environment along with other users, they must find a way to test the newly added features before enabling them for the rest of the users. They would also need to create a separate profile and enable those features when they start the manual or automation tests.\nThat’s where the real strength of feature flags lies. Continuously delivering features to production without releasing them to the public gives a confidence boost to the whole development team because features can be tested in production.\nSome of the important advantages of performing beta tests are:\n Even before the product is released, it provides quick feedback on the product, which helps to raise its quality and increase consumer satisfaction. The application can be tested for dependability, usability, and robustness, and testers can provide feedback and suggestions to developers to help them make improvements that will better fulfill consumer needs.
Based on recommendations made by the testers, who are the actual users, it assists various organizational teams in making well-informed judgments about a product. Since the product is tested by actual users in a production environment, it provides an accurate insight into what customers like and dislike about it. It helps to address software bugs that might have been missed during earlier testing cycles. It reduces the probability of a product failing because it has previously been tested before going into production.  Feature Flags in Automated User Acceptance Tests However, using feature flags while performing traditional automated integration testing may be difficult. We need to know the state of any feature flags and may even need to enable or disable a feature flag for a given test.\nConsider that a new build has been released and deployed to the production environment. Now a QA tester has to test the existing functionalities and verify that the new functionalities added on top of them are properly load-tested. In a conventional release process, the feature can be released to production and then load-tested right after release. But what if the feature doesn\u0026rsquo;t work? We have to roll back quickly before the users have been impacted too much.\nHere, feature flags play a big role. Instead of deploying the builds with all the new features activated, we can deploy those features under a (disabled) feature flag even before they are completely tested. Now we might need to write automation tests that would first test the old functionality and then enable the flags to bring in the new functionalities on top of it. All of this has to be dynamic and it should be executed on the same page with some waiting period in between to observe any kind of glitch. We should also be able to take snapshots at each stage for reporting.\nThis is where Cypress can be quite useful.
Cypress automation testing lets us change the code and execute the same on the fly. This would simulate the exact scenario of how a user would see the changes in the application. Cypress also has a built-in wait for requests so that we don\u0026rsquo;t need to configure wait times manually. This auto-wait feature also helps Cypress tests to be less flaky.\nNow if there are any issues observed due to those new functionalities, we can easily roll back to the old version by simply disabling the feature flag. This helps us in quick turn-around. With a feature management platform like LaunchDarkly, we can also just enable the features for a test user that we use only for the automated tests so that the real users will not be impacted at all by a potentially broken new feature.\nBrief Introduction to LaunchDarkly and its Features LaunchDarkly is a feature management service that takes care of all the feature flagging concepts. The name is derived from the concept of a “dark launch”, which deploys a feature in a deactivated state and activates it when the time is right.\nLaunchDarkly is a cloud-based service and provides a UI to manage everything about our feature flags. For each flag, we need to define one or more variations. The variation can be a boolean, an arbitrary number, a string value, or a JSON snippet.\nWe can define targeting rules to define which variation a feature flag will show to its user. By default, a targeting rule for a feature flag is deactivated. The simplest targeting rule is “show variation X for all users”. A more complex targeting rule is “show variation A for all users with attribute X, variation B for all users with attribute Y, and variation C for all other users”.\nWe can use the LaunchDarkly SDK in our code to access the feature flag variations. It provides a persistent connection to LaunchDarkly\u0026rsquo;s streaming infrastructure to receive server-sent-events (SSE) whenever there is a change in a feature flag. 
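To build intuition for how a flag's variations and targeting rules interact, here is a deliberately simplified evaluator — a mental model only, not LaunchDarkly's actual algorithm (whose flag data model is richer): an individual user target wins, otherwise the fallthrough variation is served.

```javascript
// Simplified mental model of flag evaluation (illustration only -- not
// LaunchDarkly's real algorithm): an individual user target wins,
// otherwise the fallthrough variation is served.
function evaluateFlag(flag, userKey) {
  for (const target of flag.targets) {
    if (target.values.includes(userKey)) {
      return flag.variations[target.variation];
    }
  }
  return flag.variations[flag.fallthroughVariation];
}

// A flag shaped like the greeting flag used later in this article:
const greetingFlag = {
  variations: ['Hello', 'Good Morning', 'Hurrayyyyy'],
  targets: [{ variation: 1, values: ['CYPRESS_TEST_1234'] }],
  fallthroughVariation: 0,
};

console.log(evaluateFlag(greetingFlag, 'CYPRESS_TEST_1234')); // targeted user
console.log(evaluateFlag(greetingFlag, 'anonymous-visitor')); // everyone else
```

The variation values mirror the greeting strings used in the tests later in this article.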
If the connection fails for some reason, it falls back to default values.\nCreate a Simple React Application In this article, we will focus on covering UAT test cases for a React UI. For this, we will define a pretty simple React application and focus primarily on writing different test cases with feature flags. To demonstrate such power to control the feature flags from Cypress tests, we will just grab an existing copy of LaunchDarkly’s example React application.\nWe can clone and create our copy using the command:\nnpx degit launchdarkly/react-client-sdk/examples/hoc react-cypress-launchdarkly-feature-flag-test We are using the degit command to copy the repo to our local directory.\nWe will first create a new LaunchDarkly project named “Reflectoring.io” and define two environments. We will now use a “Production” environment.\nThen we will define a new String feature flag test-greeting-from-cypress with three variations.\nNow, since we want to test different flags for different users, we will also switch on the “Targeting” option.\nNow we will update our code to define the Client SDK ID and show the current greeting using the feature flag value. 
This can be changed in app.js:\nimport React from \u0026#39;react\u0026#39;; import { Switch, Route, Redirect } from \u0026#39;react-router-dom\u0026#39;; import { withLDProvider } from \u0026#39;launchdarkly-react-client-sdk\u0026#39;; import SiteNav from \u0026#39;./siteNav\u0026#39;; import Home from \u0026#39;./home\u0026#39;; import HooksDemo from \u0026#39;./hooksDemo\u0026#39;; const App = () =\u0026gt; ( \u0026lt;div\u0026gt; \u0026lt;SiteNav /\u0026gt; \u0026lt;main\u0026gt; \u0026lt;Switch\u0026gt; \u0026lt;Route exact path=\u0026#34;/\u0026#34; component={Home} /\u0026gt; \u0026lt;Route path=\u0026#34;/home\u0026#34;\u0026gt; \u0026lt;Redirect to=\u0026#34;/\u0026#34; /\u0026gt; \u0026lt;/Route\u0026gt; \u0026lt;Route path=\u0026#34;/hooks\u0026#34; component={HooksDemo} /\u0026gt; \u0026lt;/Switch\u0026gt; \u0026lt;/main\u0026gt; \u0026lt;/div\u0026gt; ); // Set clientSideID to your own Client-side ID. You can find this in // your LaunchDarkly portal under Account settings / Projects // https://docs.launchdarkly.com/sdk/client-side/javascript#initializing-the-client const user = { key: \u0026#39;CYPRESS_TEST_1234\u0026#39; }; export default withLDProvider({ clientSideID: \u0026#39;63**********************\u0026#39;, user })(App); Then the Home page would simply use the value of the flag to show the greeting:\nimport React from \u0026#39;react\u0026#39;; import PropTypes from \u0026#39;prop-types\u0026#39;; import styled from \u0026#39;styled-components\u0026#39;; import { withLDConsumer } from \u0026#39;launchdarkly-react-client-sdk\u0026#39;; const Root = styled.div` color: #001b44; `; const Heading = styled.h1` color: #00449e; `; const Home = ({ flags }) =\u0026gt; ( \u0026lt;Root\u0026gt; \u0026lt;Heading\u0026gt;{flags.testGreetingFromCypress}, World !!\u0026lt;/Heading\u0026gt; \u0026lt;div\u0026gt; This is a LaunchDarkly React example project. The message above changes the greeting, based on the current feature flag variation. 
\u0026lt;/div\u0026gt; \u0026lt;/Root\u0026gt; ); Home.propTypes = { flags: PropTypes.object.isRequired, }; export default withLDConsumer()(Home); Now when we start our application using the following command we see the following UI:\nnpm start Setting up Cypress Tests A breakthrough front-end testing framework called Cypress makes it simple to create effective and adaptable tests for your online apps. With features like simple test configuration, practical reporting, an appealing dashboard interface, and a lot more, it makes it possible to perform advanced testing for both unit tests and integration tests.\nThe main benefit of Cypress is that it is created in JavaScript, the most-used language for front-end web development. Since it was first made available to the public, it has gained a sizable following among developers and QA engineers (about 32K GitHub stars).\nCypress is an open-source testing framework based on JavaScript that supports web application testing. Contrary to Selenium, Cypress does not require driver binaries to function fully on a real browser. The shared platform between the automated code and the application code provides total control over the application being tested.\nTo execute the application and test code in the same event loop, Cypress operates on a NodeJS server that connects with the test runner (Browser). This in turn allows the Cypress code to mock and even change the JavaScript object on the fly. This is one of the primary reasons why Cypress tests are expected to execute faster than corresponding Selenium tests.\nTo start writing our tests, let’s start by installing Cypress test runner:\nnpm install --save-dev cypress Setting up the LaunchDarkly Plugin Now we would be mostly testing user-targeted features that would be behind feature flags. 
We would hold the user\u0026rsquo;s identity in the client session and send the user identity to the LaunchDarkly server to query for the state of a feature flag.\nTo get the state of a feature flag, we need to make HTTP calls. Although making HTTP requests from Node and Cypress is simple, LaunchDarkly uses a higher-level logic that makes it a bit more complicated than just using a simple HTTP client.\nTo reduce the complexity, we can use the abstraction provided by a plugin called cypress-ld-control that Cypress tests can utilize. Let\u0026rsquo;s put this plugin in place and use it:\nnpm install --save-dev cypress-ld-control To use this plugin, we need to understand some of the functions defined by its API and how we can add them as part of the Cypress tasks:\n  getFeatureFlag:\nReturns a particular value for a defined feature flag:\ncy.task(\u0026#39;cypress-ld-control:getFeatureFlag\u0026#39;, \u0026#39;my-flag-key\u0026#39;).then(flag =\u0026gt; {...})   setFeatureFlagForUser:\nThis uses the user-level targeting feature to set a flag for a given user:\ncy.task(\u0026#39;cypress-ld-control:setFeatureFlagForUser\u0026#39;, { featureFlagKey: \u0026#39;my-flag-key\u0026#39;, userId: \u0026#39;string user id\u0026#39;, variationIndex: 1 // must be index to one of the variations })   removeUserTarget:\nThis removes the user target that we have set in the above function:\ncy.task(\u0026#39;cypress-ld-control:removeUserTarget\u0026#39;, { featureFlagKey, userId })   As we can see, every task is prefixed with the cypress-ld-control: string, and every command takes either zero arguments or a single options object.
Finally, every command returns either an object or a null, but never undefined.\nDefine Cypress Tasks To change the values of the feature flags and individual user targets, we need to first generate an access token in LaunchDarkly UI.\nThen we can note the Project key from the Projects page under Account Settings.\nNext, we can load the plugin with environment variables:\nconst { initLaunchDarklyApiTasks } = require(\u0026#39;cypress-ld-control\u0026#39;); require(\u0026#39;dotenv\u0026#39;).config(); module.exports = (on, config) =\u0026gt; { const tasks = { // add your other Cypress tasks if any  } if ( process.env.LAUNCH_DARKLY_PROJECT_KEY \u0026amp;\u0026amp; process.env.LAUNCH_DARKLY_AUTH_TOKEN ) { const ldApiTasks = initLaunchDarklyApiTasks({ projectKey: process.env.LAUNCH_DARKLY_PROJECT_KEY, authToken: process.env.LAUNCH_DARKLY_AUTH_TOKEN, environment: \u0026#39;production\u0026#39;, // the name of your environment to use  }) // copy all LaunchDarkly methods as individual tasks  Object.assign(tasks, ldApiTasks) // set an environment variable for specs to use  // to check if the LaunchDarkly can be controlled  config.env.launchDarklyApiAvailable = true } else { console.log(\u0026#39;Skipping cypress-ld-control plugin\u0026#39;) } // register all tasks with Cypress  on(\u0026#39;task\u0026#39;, tasks) // IMPORTANT: return the updated config object  return config } Test Greetings Next, we can start writing our Cypress tasks using cy.task() function. 
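Since `setFeatureFlagForUser` expects `variationIndex` to reference one of the flag's variations, a small guard — hypothetical, not part of cypress-ld-control — can catch out-of-range indices before they ever reach LaunchDarkly:

```javascript
// Hypothetical guard (not part of cypress-ld-control): the plugin's
// setFeatureFlagForUser task expects variationIndex to be the index of one
// of the flag's variations, so validate it against the flag object that
// getFeatureFlag returns before calling the task.
function assertValidVariationIndex(flag, variationIndex) {
  const count = flag.variations.length;
  if (!Number.isInteger(variationIndex) || variationIndex < 0 || variationIndex >= count) {
    throw new Error(
      `variationIndex ${variationIndex} is out of range (flag has ${count} variations)`
    );
  }
  return variationIndex;
}

// Example flag shape with two variations:
const flag = { variations: [{ value: 'Hello' }, { value: 'Good Morning' }] };
console.log(assertValidVariationIndex(flag, 1)); // a valid index passes through
```

Failing fast like this gives a clearer test error than a rejected LaunchDarkly API call.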
So consider if the test is to see a casual greeting header, we can simply write:\nbefore(() =\u0026gt; { expect(Cypress.env(\u0026#39;launchDarklyApiAvailable\u0026#39;), \u0026#39;LaunchDarkly\u0026#39;).to.be.true }) const featureFlagKey = \u0026#39;testing-launch-darkly-control-from-cypress\u0026#39; const userId = \u0026#39;USER_1234\u0026#39; it(\u0026#39;shows a casual greeting\u0026#39;, () =\u0026gt; { // target the given user to receive the first variation of the feature flag  cy.task(\u0026#39;cypress-ld-control:setFeatureFlagForUser\u0026#39;, { featureFlagKey, userId, variationIndex: 0, }) cy.visit(\u0026#39;/\u0026#39;) cy.contains(\u0026#39;h1\u0026#39;, \u0026#39;Hello, World !!\u0026#39;).should(\u0026#39;be.visible\u0026#39;) }); Then we can run our tests by defining a script in package.json as follows:\n\u0026#34;scripts\u0026#34;: { \u0026#34;start\u0026#34;: \u0026#34;node src/server/index.js\u0026#34;, \u0026#34;test\u0026#34;: \u0026#34;start-test 3000 \u0026#39;cypress open\u0026#39;\u0026#34; } Then we can simply execute:\nnpm run test Next, we can define a few more variations and cover some more test cases as follows:\n/// \u0026lt;reference types=\u0026#34;cypress\u0026#34; /\u0026gt;  before(() =\u0026gt; { expect(Cypress.env(\u0026#39;launchDarklyApiAvailable\u0026#39;), \u0026#39;LaunchDarkly\u0026#39;).to.be.true }); const featureFlagKey = \u0026#39;test-greeting-from-cypress\u0026#39;; const userId = \u0026#39;CYPRESS_TEST_1234\u0026#39;; it(\u0026#39;shows a casual greeting\u0026#39;, () =\u0026gt; { // target the given user to receive the first variation of the feature flag  cy.task(\u0026#39;cypress-ld-control:setFeatureFlagForUser\u0026#39;, { featureFlagKey, userId, variationIndex: 0, }) cy.visit(\u0026#39;/\u0026#39;) cy.contains(\u0026#39;h1\u0026#39;, \u0026#39;Hello, World !!\u0026#39;).should(\u0026#39;be.visible\u0026#39;) }); it(\u0026#39;shows a formal greeting\u0026#39;, () =\u0026gt; { 
cy.task(\u0026#39;cypress-ld-control:setFeatureFlagForUser\u0026#39;, { featureFlagKey, userId, variationIndex: 1, }) cy.visit(\u0026#39;/\u0026#39;) cy.contains(\u0026#39;h1\u0026#39;, \u0026#39;Good Morning, World !!\u0026#39;).should(\u0026#39;be.visible\u0026#39;) }); it(\u0026#39;shows a vacation greeting\u0026#39;, () =\u0026gt; { cy.task(\u0026#39;cypress-ld-control:setFeatureFlagForUser\u0026#39;, { featureFlagKey, userId, variationIndex: 2, }) cy.visit(\u0026#39;/\u0026#39;) cy.contains(\u0026#39;h1\u0026#39;, \u0026#39;Hurrayyyyy, World\u0026#39;).should(\u0026#39;be.visible\u0026#39;) // print the current state of the feature flag and its variations  cy.task(\u0026#39;cypress-ld-control:getFeatureFlag\u0026#39;, featureFlagKey) .then(console.log) // let\u0026#39;s print the variations to the Command Log side panel  .its(\u0026#39;variations\u0026#39;) .then((variations) =\u0026gt; { variations.forEach((v, k) =\u0026gt; { cy.log(`${k}: ${v.name}is ${v.value}`) }) }) }); it(\u0026#39;shows all greetings\u0026#39;, () =\u0026gt; { cy.visit(\u0026#39;/\u0026#39;) cy.task(\u0026#39;cypress-ld-control:setFeatureFlagForUser\u0026#39;, { featureFlagKey, userId, variationIndex: 0, }) cy.contains(\u0026#39;h1\u0026#39;, \u0026#39;Hello, World !!\u0026#39;) .should(\u0026#39;be.visible\u0026#39;) .wait(1000) cy.task(\u0026#39;cypress-ld-control:setFeatureFlagForUser\u0026#39;, { featureFlagKey, userId, variationIndex: 1, }) cy.contains(\u0026#39;h1\u0026#39;, \u0026#39;Good Morning, World !!\u0026#39;).should(\u0026#39;be.visible\u0026#39;).wait(1000) cy.task(\u0026#39;cypress-ld-control:setFeatureFlagForUser\u0026#39;, { featureFlagKey, userId, variationIndex: 2, }) cy.contains(\u0026#39;h1\u0026#39;, \u0026#39;Hurrayyyyy, World !!\u0026#39;).should(\u0026#39;be.visible\u0026#39;) }); after(() =\u0026gt; { cy.task(\u0026#39;cypress-ld-control:removeUserTarget\u0026#39;, { featureFlagKey, userId }) }); We are also defining a task at the end to remove any user 
targets being created as part of this task. Finally, we can see all the test output being populated in the Cypress dashboard UI. We can launch the Cypress UI and click on the “Run” option, where we can see all the task execution with variations being printed.\nNotice that, as discussed above, we are testing the feature behind a feature flag with different variations. We are updating the flag value dynamically and then executing our tests on the fly. Cypress also runs these tests with a default built-in wait period. However, if we would like to add validations, we can add a dynamic wait period to observe the changes in the UI.\nTesting a User-targeted Feature In our previous article, we presented a button in the UI which would be displayed based on the logged-in user. We can add the same button here and add test cases using Cypress to cover the functionality of clicking the button and validating the popup alert.\nFor this, we will update our home page logic:\nconst theme = { blue: { default: \u0026#34;#3f51b5\u0026#34;, hover: \u0026#34;#283593\u0026#34; } }; const Button = styled.button` background-color: ${(props) =\u0026gt; theme[props.theme].default}; color: white; padding: 5px 15px; border-radius: 5px; outline: 0; text-transform: uppercase; margin: 10px 0px; cursor: pointer; box-shadow: 0px 2px 2px lightgray; transition: ease background-color 250ms; \u0026amp;:hover { background-color: ${(props) =\u0026gt; theme[props.theme].hover}; } \u0026amp;:disabled { cursor: default; opacity: 0.7; } `; const clickMe = () =\u0026gt; { alert(\u0026#34;A new shiny feature pops up!\u0026#34;); }; const Home = ({ flags }) =\u0026gt; ( \u0026lt;Root\u0026gt; \u0026lt;Heading\u0026gt;{flags.testGreetingFromCypress}, World !!\u0026lt;/Heading\u0026gt; \u0026lt;div\u0026gt; This is a LaunchDarkly React example project. The message above changes the greeting, based on the current feature flag variation. \u0026lt;/div\u0026gt; \u0026lt;div\u0026gt; {flags.showShinyNewFeature ?
\u0026lt;Button id=\u0026#39;shiny-button\u0026#39; theme=\u0026#39;blue\u0026#39; onClick={clickMe}\u0026gt;Shiny New Feature\u0026lt;/Button\u0026gt;: \u0026#39;\u0026#39;} \u0026lt;/div\u0026gt; \u0026lt;div\u0026gt; {flags.showShinyNewFeature ? \u0026#39;This button will show new shiny feature in UI on clicking it.\u0026#39;: \u0026#39;\u0026#39;} \u0026lt;/div\u0026gt; \u0026lt;/Root\u0026gt; ); Now the user attribute in app.js needs to be updated to “John Doe”. Thus, when John logs in, he will see the shiny new button, whereas others won’t.\nconst user = { key: \u0026#39;john_doe\u0026#39; }; Similarly, we will add a task in the existing cypress test spec to validate the click event of a button and its outcome alert of the popup:\nit(\u0026#39;click a button\u0026#39;, () =\u0026gt; { cy.task(\u0026#39;cypress-ld-control:setFeatureFlagForUser\u0026#39;, { featureFlagKey: \u0026#39;show-shiny-new-feature\u0026#39;, userId: \u0026#39;john_doe\u0026#39;, variationIndex: 0, }) cy.visit(\u0026#39;/\u0026#39;); var alerted = false; cy.on(\u0026#39;window:alert\u0026#39;, msg =\u0026gt; alerted = msg); cy.get(\u0026#39;#shiny-button\u0026#39;).should(\u0026#39;be.visible\u0026#39;).click().then( () =\u0026gt; expect(alerted).to.match(/A new shiny feature pops up!/)); }); As discussed above, this section helps in updating the flag value and executing our tests on the fly. Finally, we can see all the test output being populated in Cypress UI dashboard. We can launch the Cypress UI and click on “Run” option, where we can see all the task execution with variations being printed.\nDeploy Tests in CI Next, we can use GitHub Actions to run the same tests in CI. The workflows provided by CI using GitHub Actions allow us to create the code in our repository and run our tests. Workflows can run on virtual machines hosted by GitHub or on our servers. 
Using the repository dispatch webhook, we may set up our CI workflow to launch whenever a GitHub event takes place (for instance, if new code is pushed to your repository), on a predetermined timetable, or in response to an outside event.\nFor us to determine whether the change in our branch produces an error, GitHub executes our CI tests and includes the results of each test in the pull request. The changes we pushed are prepared to be evaluated by a team member or merged once all CI tests in a workflow pass. If a test fails, then we can easily get to know that one of our changes may have caused the failure.\nWe will use cypress-io/GitHub-action to install the dependencies, cache Cypress, start the application, and run the tests. We can define the environment variables in the repo and then use them.\nWe can then define a yaml configuration to run our CI tests:\nname: ci on: push jobs: test: runs-on: ubuntu-20.04 steps: - name: Checkout 🛎 uses: actions/checkout@v2 - name: Run tests 🧪 # https://github.com/cypress-io/github-action uses: cypress-io/github-action@v3 with: start: \u0026#39;yarn start\u0026#39; env: LAUNCH_DARKLY_PROJECT_KEY: ${{ secrets.LAUNCH_DARKLY_PROJECT_KEY }} LAUNCH_DARKLY_AUTH_TOKEN: ${{ secrets.LAUNCH_DARKLY_AUTH_TOKEN }} Conclusion As part of this article, we discussed how we can define conditional Cypress tests based on feature flags. We also made use of cypress-ld-control to set and remove flags for certain users. We have also used the LaunchDarkly client instance in Cypress tests to read the flag value for specific users. We also saw how these features support the two primary test techniques of conditional execution and controlled flag. In this blog post, we mainly saw how we can target features using individual user IDs.\nFeature flags are frequently seen as either a tool for product managers or engineers. In actuality, it\u0026rsquo;s both. 
Flags can help product managers better manage releases by synchronizing launch timings and enhancing the effectiveness of the feedback loop. DevOps and software development teams can benefit from their ability to cut costs and increase productivity.\nYou can refer to all the source code used in the article on Github.\n","date":"November 3, 2022","image":"https://reflectoring.io/images/stock/0104-on-off-1200x628-branded_hue5392027620fc7728badf521ca949f28_116615_650x0_resize_q90_box.jpg","permalink":"/nodejs-feature-flag-launchdarkly-react-cypress/","title":"Automated Tests with Feature Flags and Cypress"},{"categories":["Spring"],"contents":"Cross-site Request Forgery (CSRF, sometimes also called XSRF) is an attack that can trick an end-user using a web application to unknowingly execute actions that can compromise security. To understand what constitutes a CSRF attack, refer to this introductory article. In this article, we will take a look at how to leverage Spring\u0026rsquo;s built-in CSRF support when creating a web application.\nTo understand the detailed guidelines for preventing CSRF vulnerabilities, refer to the OWASP Guide.\n Example Code This article is accompanied by a working code example on GitHub. CSRF Protection in Spring The standard recommendation is to have CSRF protection enabled when we create a service that could be processed by browsers. If the created service is exclusively for non-browser clients we could disable CSRF protection. Spring provides two mechanisms to protect against CSRF attacks.\n Synchronizer Token Pattern Specifying the SameSite attribute on your session cookie  Sample Application to Simulate CSRF First, we will create a sample Spring Boot application that uses Spring Security and Thymeleaf. 
We will also add the thymeleaf extras module to help us integrate both individual modules.\nMaven dependencies:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-security\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-thymeleaf\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.thymeleaf.extras\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;thymeleaf-extras-springsecurity5\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Gradle dependencies:\ndependencies { compile \u0026#34;org.springframework.boot:spring-boot-starter-security\u0026#34; compile \u0026#34;org.springframework.boot:spring-boot-starter-thymeleaf\u0026#34; compile \u0026#34;org.thymeleaf.extras:thymeleaf-extras-springsecurity5\u0026#34; } Starter dependency versions Here, we have used Spring Boot version 2.6.3. Based on this version, Spring Boot internally resolves Spring Security version as 5.6.1 and Thymeleaf version as 3.0.14.RELEASE. However, we can override these versions if required in our pom.xml as below:\n\u0026lt;properties\u0026gt; \u0026lt;spring-security.version\u0026gt;5.2.5.RELEASE\u0026lt;/spring-security.version\u0026gt; \u0026lt;thymeleaf.version\u0026gt;3.0.1.RELEASE\u0026lt;/thymeleaf.version\u0026gt; \u0026lt;/properties\u0026gt;  This application uses the Spring Security default login page to sign in. Once logged in, we will create a simple email registration template. We will customize our login credentials in our application.yaml as:\nspring: security: user: name: admin password: passw@rd We have configured our application to run on port 8090. 
Now, let us start up our application:\nmvnw clean verify spring-boot:run (for Windows) ./mvnw clean verify spring-boot:run (for Linux) CSRF in Spring Spring Security provides CSRF protection by default. Therefore, to demonstrate a CSRF attack, we need to explicitly disable CSRF protection.\npublic class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) { http .authorizeRequests() .antMatchers(\u0026#34;/**\u0026#34;).permitAll() .and() .httpBasic() .and() .formLogin().permitAll() .and().csrf().disable(); } }  Next, let\u0026rsquo;s create a sample attacker application. This is another Spring Boot application that uses Thymeleaf to create a template that the attacker will use to register a fake email id. This application is configured to run on port 8091.\nmvnw clean verify spring-boot:run (for Windows) ./mvnw clean verify spring-boot:run (for Linux) Now, before we try to simulate this attack, let\u0026rsquo;s understand the parameters the attacker needs to know to carry out a successful CSRF attack:\n The user has an active session and the attack is triggered from within the session. The attacker knows the valid URL that will change the state and result in a security breach. The attacker is aware of all the valid parameters required to be sent to ensure the request goes through.  Now, let\u0026rsquo;s log in to the application and go to the email registration page.\nBefore we enter the email to register, let\u0026rsquo;s open a second tab and load the attacker\u0026rsquo;s application. This action is similar to an attacker tricking the user into clicking a button/link to make use of the same session and trigger the request on behalf of the user.\nWhen the user clicks on the Register button, the attacker triggers a request to the endpoint http://localhost:8090/registerEmail, registering his email id for all further communication.
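Conceptually, the attacker's template boils down to an auto-submitting cross-site form. The sketch below builds such a page as a string — illustration only; the endpoint comes from this example, while the `email` field name is an assumption:

```javascript
// Sketch of what the attacker page effectively does (illustration only):
// render a hidden form targeting the victim application and submit it
// automatically. The field name "email" is an assumption for this example.
function buildCsrfAttackPage(action, fields) {
  const inputs = Object.entries(fields)
    .map(([name, value]) => `<input type="hidden" name="${name}" value="${value}">`)
    .join('\n  ');
  return `<form id="csrf" method="POST" action="${action}">
  ${inputs}
</form>
<script>document.getElementById('csrf').submit();</script>`;
}

const page = buildCsrfAttackPage(
  'http://localhost:8090/registerEmail',  // endpoint from this example
  { email: 'attacker@evil.example' }      // assumed request parameter
);
console.log(page);
```

Because the browser attaches the victim's session cookie to the cross-site POST automatically, the request looks legitimate to the server unless a CSRF token is required.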
Here, since CSRF was disabled and the attacker knew all the required valid parameters, the request would go through successfully, and we would see this page.\nDefault CSRF protection in Spring In the previous section, we were able to simulate a CSRF attack by explicitly disabling CSRF protection. Let\u0026rsquo;s take a look at what happens if we remove the CSRF configuration in Spring Security. Let\u0026rsquo;s set the security configuration to:\n@Configuration @EnableWebSecurity public class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) { http .authorizeRequests() .antMatchers(\u0026#34;/**\u0026#34;).permitAll() .and() .httpBasic() .and() .formLogin().permitAll(); } } As we can see here, we haven\u0026rsquo;t explicitly enabled, disabled, or configured any CSRF properties. Now, when we open the attacker application and click on the Register button, we see:\nWhitelabel Error Page This application has no explicit mapping for /error, so you are seeing this as a fallback. Thu Sep 29 04:50:02 AEST 2022 There was an unexpected error (type=Forbidden, status=403). This is because, as of Spring Security 4.0, CSRF protection is enabled by default.\nHow does the default Spring CSRF protection work? Spring Security uses the Synchronizer Token pattern to generate a CSRF token that protects against CSRF attacks.\nFeatures of the CSRF token are:\n The default CSRF token is generated at the server end by the Spring framework. This CSRF token (resolved automatically in Thymeleaf due to the addition of the thymeleaf-extras-springsecurity5 module) should be a part of every HTTP request. This is not a part of the cookie since the browser automatically includes cookies with every HTTP request. When an HTTP request is submitted, Spring Security will compare the expected CSRF token with the one sent in the HTTP request. 
The request will be processed only if the token values match else the request will be treated as a forged request and be rejected with status 403 (Forbidden). The CSRF token is generally included with requests that change state i.e. POST, PUT, DELETE, PATCH. Idempotent methods such as GET are not vulnerable to CSRF attacks since they do not change the server-side state and are protected by same origin policy.  Understanding key classes that enable CSRF protection CsrfFilter When CSRF is enabled, this filter is automatically called as part of the filter chain. To know the list of filters that apply, let\u0026rsquo;s enable debug logs in our application.yaml as :\nlogging: level: org.springframework.security.web: DEBUG On application startup, we should see the CsrfFilter in the console log along with others:\no.s.s.web.DefaultSecurityFilterChain : Will secure any request with [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@773c7147, org.springframework.security.web.context.SecurityContextPersistenceFilter@7e20f4e3, org.springframework.security.web.header.HeaderWriterFilter@79144d0e, org.springframework.security.web.csrf.CsrfFilter@34070bd2, org.springframework.security.web.authentication.logout.LogoutFilter@105c6c9e, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter@3c6fb501, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter@7a34c1f6, org.springframework.security.web.authentication.ui.DefaultLogoutPageGeneratingFilter@5abc5854, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1d0dad12, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@4f6ff62, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@7af9595d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5c3007d, org.springframework.security.web.session.SessionManagementFilter@2579d8a, 
org.springframework.security.web.access.ExceptionTranslationFilter@46b21632, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@3ba5c4dd] The CsrfFilter extends the OncePerRequestFilter thus guaranteeing that the Filter would be called exactly once for a request. Its doFilterInternal() is responsible for generating and validating the token. It skips the Csrf validation and processing for GET, HEAD, TRACE and OPTIONS requests.\nHttpSessionCsrfTokenRepository This is the default implementation of the CsrfTokenRepository interface in Spring Security. The CsrfToken object is stored and validated in the HttpSession object. The token created is set to a pre-defined parameter name _csrf and header X-CSRF-TOKEN that can be accessed by valid client applications. The default implementation of token creation in the class is:\npublic final class HttpSessionCsrfTokenRepository implements CsrfTokenRepository { private static final String DEFAULT_CSRF_PARAMETER_NAME = \u0026#34;_csrf\u0026#34;; private static final String DEFAULT_CSRF_HEADER_NAME = \u0026#34;X-CSRF-TOKEN\u0026#34;; private static final String DEFAULT_CSRF_TOKEN_ATTR_NAME = HttpSessionCsrfTokenRepository.class.getName().concat(\u0026#34;.CSRF_TOKEN\u0026#34;); private String parameterName = \u0026#34;_csrf\u0026#34;; private String headerName = \u0026#34;X-CSRF-TOKEN\u0026#34;; private String createNewToken() { return UUID.randomUUID().toString(); } // Other methods here.... } UUID is a class that represents an immutable universally unique identifier.\nCsrfTokenRepository This interface helps customize the CSRF implementation. 
It contains the below methods:\npublic interface CsrfTokenRepository { CsrfToken generateToken(HttpServletRequest request); void saveToken(CsrfToken token, HttpServletRequest request, HttpServletResponse response); CsrfToken loadToken(HttpServletRequest request); } We need to implement these methods if we want to provide a custom implementation of CSRF token generation and its validation. Next, we need to plug this class into our security configuration as below:\n@Configuration @EnableWebSecurity public class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http.csrf().csrfTokenRepository(csrfTokenRepository()); } private CsrfTokenRepository csrfTokenRepository() { return new CustomCsrfTokenRepository(); } } This configuration will ensure our CustomCsrfTokenRepository class is called instead of the default HttpSessionCsrfTokenRepository.\nCookieCsrfTokenRepository This implementation of CsrfTokenRepository is most commonly used when working with Angular or similar front-end frameworks that use session cookie authentication. 
It follows AngularJS conventions and stores the CsrfToken object in a cookie named XSRF-TOKEN and in the header X-XSRF-TOKEN.\npublic final class CookieCsrfTokenRepository implements CsrfTokenRepository { static final String DEFAULT_CSRF_COOKIE_NAME = \u0026#34;XSRF-TOKEN\u0026#34;; static final String DEFAULT_CSRF_PARAMETER_NAME = \u0026#34;_csrf\u0026#34;; static final String DEFAULT_CSRF_HEADER_NAME = \u0026#34;X-XSRF-TOKEN\u0026#34;; private String parameterName = \u0026#34;_csrf\u0026#34;; private String headerName = \u0026#34;X-XSRF-TOKEN\u0026#34;; private String cookieName = \u0026#34;XSRF-TOKEN\u0026#34;; } We can use the below security configuration to plug in this repository:\n@Configuration public class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Override public void configure(HttpSecurity http) throws Exception { http .csrf() .csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse()); } } With this configuration, the token value is set in the XSRF-TOKEN cookie. The withHttpOnlyFalse() method ensures that the Angular client will be able to retrieve the cookie for all further requests. Once retrieved, the client copies the token value to the X-XSRF-TOKEN header for every state-modifying XHR request. Spring will then compare the header and the cookie values and accept the request only if they are the same.\nCSRF protection in Angular CookieCsrfTokenRepository is intended to be used only when the client application is developed in a framework such as Angular. AngularJS comes with built-in protection for CSRF. For a detailed understanding, refer to its documentation.\n Customizing CsrfTokenRepository In most cases, we\u0026rsquo;ll be happy with the default implementation of HttpSessionCsrfTokenRepository. However, if we intend to create custom tokens or save the tokens to a database, we might need some customization.\nLet\u0026rsquo;s take a closer look at how we can customize CsrfTokenRepository. 
In our demo application, consider we need to customize token generation and add/update tokens based on the logged-in user. As we have seen in the previous section, we would need to implement three methods:\n generateToken(HttpServletRequest request)  public class CustomCsrfTokenRepository implements CsrfTokenRepository { public CsrfToken generateToken(HttpServletRequest request) { return new DefaultCsrfToken(headerName, \u0026#34;_csrf\u0026#34;, generateRandomToken()); } private String generateRandomToken() { int random = ThreadLocalRandom.current().nextInt(); return random + System.currentTimeMillis() + \u0026#34;\u0026#34;; } } As shown above, we have customised the token creation instead of using the default UUID random token.\n saveToken(CsrfToken token, HttpServletRequest request, HttpServletResponse response)  public class CustomCsrfTokenRepository implements CsrfTokenRepository { @Autowired public TokenRepository tokenRepository; private String headerName = \u0026#34;X-CSRF-TOKEN\u0026#34;; public void saveToken(CsrfToken token, HttpServletRequest request, HttpServletResponse response) { String username = request.getParameter(\u0026#34;username\u0026#34;); Optional\u0026lt;Token\u0026gt; tokenValueOpt = tokenRepository.findByUser(username); if (!tokenValueOpt.isPresent()) { Token tokenObj = new Token(); tokenObj.setUser(username); tokenObj.setToken(token.getToken()); tokenRepository.save(tokenObj); } } } Here, the saveToken() uses the generated random token to either save/retrieve from TokenRepository which maps to a H2 Database table Token that is responsible for storing user tokens.\n loadToken(HttpServletRequest request)  public class CustomCsrfTokenRepository implements CsrfTokenRepository { public CsrfToken loadToken(HttpServletRequest request) { Optional\u0026lt;Token\u0026gt; tokenOpt = Optional.empty(); String user = request.getParameter(\u0026#34;username\u0026#34;); if (Objects.nonNull(user)) { tokenOpt = tokenRepository.findByUser(user); } else 
if (Objects.nonNull( SecurityContextHolder.getContext().getAuthentication())) { Object principal = SecurityContextHolder.getContext().getAuthentication().getPrincipal(); String username = \u0026#34;\u0026#34;; if (principal instanceof UserDetails) { username = ((UserDetails) principal).getUsername(); } else { username = principal.toString(); } tokenOpt = tokenRepository.findByUser(username); } if (tokenOpt.isPresent()) { Token tokenValue = tokenOpt.get(); return new DefaultCsrfToken( \u0026#34;X-CSRF-TOKEN\u0026#34;, \u0026#34;_csrf\u0026#34;, tokenValue.getToken()); } return null; } } Here, we get the logged-in user and fetch its token from the underlying database.\nExposing the token to HTTP requests In our example, we used the Spring thymeleaf template to make calls to the registerEmail endpoint. Let\u0026rsquo;s look at a valid email registration process:\nHere, we see the payload having _csrf parameter that the Spring application could validate and therefore the HTTP request was processed successfully. For this parameter to be passed in the HTTP request, we need to add the below code to the thymeleaf template:\n\u0026lt;input type=\u0026#34;hidden\u0026#34; th:name=\u0026#34;${_csrf.parameterName}\u0026#34; th:value=\u0026#34;${_csrf.token}\u0026#34; /\u0026gt; Spring dynamically resolves the _csrf.parameterName to _csrf and _csrf.token to a random UUID string.\nThis is detailed in the Spring documentation that states:\n Spring Security’s CSRF support provides integration with Spring’s RequestDataValueProcessor via its CsrfRequestDataValueProcessor. This means that if you leverage Spring’s form tag library, Thymeleaf, or any other view technology that integrates with RequestDataValueProcessor, then forms that have an unsafe HTTP method (i.e. post) will automatically include the actual CSRF token.\n Every HTTP request in the session will have the same CSRF token. 
Since this value is random and is not automatically included in the browser, the attacker application wouldn\u0026rsquo;t be able to deduce its value and their request would be rejected.\nSelective URL protection Spring Security provides a requireCsrfProtectionMatcher() method to enable CSRF protection selectively, i.e. we could enable CSRF for only a limited set of URLs as desired. The other endpoints will be excluded from CSRF protection.\n@Configuration @EnableWebSecurity public class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) { http .authorizeRequests() .antMatchers(\u0026#34;/**\u0026#34;).permitAll() .and() .httpBasic() .and() .formLogin().permitAll() .and() .csrf() .requireCsrfProtectionMatcher( new AntPathRequestMatcher(\u0026#34;**/login\u0026#34;)); } } If we have multiple URLs that need to have CSRF protection, it can be achieved in the following ways:\n@Configuration @EnableWebSecurity public class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) { http .authorizeRequests() .antMatchers(\u0026#34;/**\u0026#34;).permitAll() .and() .httpBasic() .and() .formLogin().permitAll() .and() .csrf().requireCsrfProtectionMatcher( new AntPathRequestMatcher(\u0026#34;**/login\u0026#34;)) .and() .csrf().requireCsrfProtectionMatcher( new AntPathRequestMatcher(\u0026#34;**/registerEmail\u0026#34;)); } } OR\nWe can define a custom class called CustomAntPathRequestMatcher that implements RequestMatcher and handles URL pattern matching in that class.\npublic class CustomAntPathRequestMatcher implements RequestMatcher { private final AndRequestMatcher andRequestMatcher; public CustomAntPathRequestMatcher(String[] patterns) { List\u0026lt;RequestMatcher\u0026gt; requestMatchers = Arrays.asList(patterns) .stream() .map(p -\u0026gt; new AntPathRequestMatcher(p)) .collect(Collectors.toList()); andRequestMatcher = new 
AndRequestMatcher(requestMatchers); } @Override public boolean matches(HttpServletRequest request) { return andRequestMatcher.matches(request); } } Then we can use this class in our security configuration.\n@Configuration @EnableWebSecurity public class SecurityConfiguration extends WebSecurityConfigurerAdapter { String[] patterns = new String[]{ \u0026#34;/favicon.ico\u0026#34;, \u0026#34;/login\u0026#34;, \u0026#34;/registerEmail\u0026#34; }; @Override protected void configure(HttpSecurity http) throws Exception { http .authorizeRequests() .antMatchers(\u0026#34;/**\u0026#34;).permitAll() .and() .httpBasic() .and() .formLogin().permitAll() .and() .csrf().requireCsrfProtectionMatcher( new CustomAntPathRequestMatcher(patterns)); } } On the other hand, we could have situations where we need to enable CSRF by default, but we need only a handful of URLs for which CSRF protection needs to be turned OFF. In such cases, we can use the below configuration:\n@Configuration @EnableWebSecurity public class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { String[] patterns = new String[] { \u0026#34;**/disabledEndpoint\u0026#34;, \u0026#34;**/simpleCall\u0026#34; }; http .authorizeRequests().antMatchers(\u0026#34;/**\u0026#34;) .permitAll().and().httpBasic().and().formLogin().permitAll() .and() .csrf().ignoringAntMatchers(patterns); } } SameSite Cookie Attribute Spring Security provides us with another approach that could mitigate CSRF attacks. According to OWASP,\n “SameSite prevents the browser from sending the cookie along with cross-site requests. The main goal is mitigating the risk of cross-origin information leakage. It also provides some protection against cross-site request forgery attacks.”\n This attribute can be set to three values:\n Strict - This will prevent the browser from sending the cookie to the target site in all cross-site browsing contexts. 
This is the most restrictive, forbidding third-party cookies from being sent in cross-site scenarios. Lax - This rule is slightly relaxed: with this value, the server maintains the user’s logged-in session after the user arrives from an external link. None - This value is used to turn off the SameSite property. However, this is possible only if the Secure property is also set, i.e. the application needs to be HTTPS enabled.  Browser compatibility for SameSite attribute All recent versions of known browsers support the SameSite attribute. Its default value, in case the attribute isn\u0026rsquo;t specified, is set to Lax to enable defence against CSRF attacks.\n To configure the SameSite attribute in a Spring Boot application, we need to add the below configuration in application.yml:\nserver: servlet: session: cookie: same-site: Lax This configuration is supported only in Spring Boot versions 2.6.0 and above.\nAnother way to set this attribute in Set-Cookie is via org.springframework.http.ResponseCookie:\n@Controller public class HomeController { @GetMapping public String homePage(HttpServletResponse response) { ResponseCookie responseCookie = ResponseCookie.from(\u0026#34;testCookie\u0026#34;, \u0026#34;cookieVal\u0026#34;) .sameSite(\u0026#34;Lax\u0026#34;) .build(); response.setHeader( HttpHeaders.SET_COOKIE, responseCookie.toString()); return \u0026#34;homePage\u0026#34;; } } With this cookie set, we should see:\nTesting CSRF in Spring Now that we have looked at how CSRF is configured and applied, let\u0026rsquo;s take a look at how to test them. 
First, we need to add the below testing dependencies:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-test\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.security\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-security-test\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; The spring-boot-starter-test includes basic testing tools like Junit, Mockito which will be used to test the application. The spring-security-test will integrate MockMvc with Spring Security allowing us to test security features incorporated in the application.\n@SpringBootTest(classes = {EmailController.class, HomeController.class, SecurityConfiguration.class}) @ExtendWith(SpringExtension.class) @ActiveProfiles(\u0026#34;test\u0026#34;) public class ControllerTest { @MockBean public CustomerEmailService customerEmailService; @Autowired private WebApplicationContext context; private MockMvc mockMvc; @BeforeEach public void setup() { this.mockMvc = MockMvcBuilders .webAppContextSetup(this.context) .apply(springSecurity()) .build(); } } Here, we have set up the MockMvc object using SecurityMockMvcConfigurers.springSecurity(). This will perform the initial setup we need to integrate Spring Security with Spring MVC Test. 
The Spring Security testing framework provides static imports to help with the testing of various security scenarios:\nimport static org.springframework.security.test.web.servlet.request.SecurityMockMvcRequestPostProcessors.*; To test with CSRF, let\u0026rsquo;s implement a test SecurityConfiguration:\n@TestConfiguration @EnableWebSecurity public class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http .authorizeRequests() .antMatchers(\u0026#34;/**\u0026#34;).permitAll() .and() .httpBasic() .and() .formLogin(); } } Testing successful login @Test void shouldLoginSuccessfully() throws Exception { mockMvc.perform(formLogin().user(\u0026#34;admin\u0026#34;).password(\u0026#34;password\u0026#34;)) .andExpect(status().is3xxRedirection()); } Here, we have configured a sample user and password in our application-test.yaml.\nThe SecurityMockMvcRequestBuilders.FormLoginRequestBuilder.formLogin() method internally sets up SecurityMockMvcRequestPostProcessors.csrf(), which handles CSRF tokens so that the login is validated successfully.\nTesting login with invalid CSRF @Test void shouldLoginErrorWithInvalidCsrf() throws Exception { mockMvc.perform(post(\u0026#34;/login\u0026#34;) .with(csrf().useInvalidToken()) .param(\u0026#34;username\u0026#34;, \u0026#34;admin\u0026#34;) .param(\u0026#34;password\u0026#34;, \u0026#34;password\u0026#34;)) .andExpect(status().isForbidden()); } To test if the login works with an invalid CSRF token, the testing framework provides methods to forcibly add one. With this applied, the test now returns 403.\nTesting login with invalid CSRF when we ignore /login For the same test as above, let\u0026rsquo;s tweak our SecurityConfiguration to ignore login. 
For testing, we can change our SecurityConfiguration to:\n@TestConfiguration @EnableWebSecurity public class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) { http .authorizeRequests() .antMatchers(\u0026#34;/**\u0026#34;).permitAll() .and() .httpBasic() .and().formLogin() .and() .csrf().ignoringAntMatchers(\u0026#34;/login\u0026#34;); } } We notice that CSRF check is ignored for the endpoint, and despite setting an invalid CSRF, the login action was successful. For other state-changing endpoints, we can create similar scenarios and test for CSRF applicability.\nConclusion In this article, we have looked at how we can leverage in-built Spring CSRF features to protect our endpoints from CSRF attacks. We took a look at how to configure and implement them with examples. We also briefly touched upon the spring security testing framework and its CSRF capabilities.\nYou can play around with the example code on GitHub.\n","date":"October 21, 2022","image":"https://reflectoring.io/images/stock/0081-safe-1200x628-branded_hu3cea99ddea81138af0ed883346ac5ed4_108622_650x0_resize_q90_box.jpg","permalink":"/spring-csrf/","title":"Configuring CSRF/XSRF with Spring Security"},{"categories":["Software Craft"],"contents":"Feature flags, in their simplest form, are just if conditions in your code that check if a certain feature is enabled or not. This allows us to deploy features even when they are not ready, meaning that our codebase is always deployable. This, in turn, enables continuous deployment even while the team is continuously pushing small commits to the main branch.\nMore advanced feature flags allow us to target specific users. Instead of enabling a feature for everyone, we only enable it for a cohort of our users. This allows us to release a feature progressively to more and more users. 
If something goes wrong, it only goes wrong for a handful of users.\nIn this article, I want to go through some general best practices when using feature flags.\nDeploy Continuously Feature flags are one of the main enablers of continuous deployment. Using feature flags consistently, you can literally deploy any time, because all unfinished changes are hidden behind (disabled) feature flags and don\u0026rsquo;t pose a risk to your users.\nMake use of the fact that you can deploy any time and implement a continuous deployment pipeline that deploys your code to production every time you push to the mainline!\nContinuous deployment doesn\u0026rsquo;t mean that the code has to go out directly to production without tests. The pipeline should definitely include automated tests and it can also include a deployment to a staging environment for a smoke test.\nFeature flags bring advantages even without continuous deployment, but continuous deployment is a big one! High-performance teams use continuous deployment!\nUse Abstractions Often, a simple if/else in your code is good enough to implement a feature flag. The if condition checks whether the feature is enabled and, depending on the result, you go through the new code (i.e. the new feature) or the old code.\nWhen a feature gets bigger, however, or you do a refactoring that spans many places in the code, a single if/else doesn\u0026rsquo;t cut it anymore. If you want to put these changes behind a feature flag, you would have to sprinkle if conditions all over the codebase! How ugly! And not very maintainable!\nTo avoid that, think about introducing an abstraction for the changes you want to make. You could use the strategy pattern, for example, with one strategy implementation for the \u0026ldquo;disabled\u0026rdquo; state of the feature flag and another implementation for the \u0026ldquo;enabled\u0026rdquo; state. 
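The strategy approach described above can be sketched in plain Java. All names here (DiscountStrategy, OldPricing, NewPricing, Checkout) are hypothetical illustrations, not code from any specific project:

```java
// Hypothetical sketch: a feature flag hidden behind the strategy pattern.
interface DiscountStrategy {
    double apply(double price);
}

class OldPricing implements DiscountStrategy {
    // behavior with the feature disabled: no discount
    public double apply(double price) {
        return price;
    }
}

class NewPricing implements DiscountStrategy {
    // behavior with the feature enabled: 10% off
    public double apply(double price) {
        return price * 0.9;
    }
}

public class Checkout {

    // The single place that consults the feature flag and picks a strategy,
    // instead of if/else conditions sprinkled across the codebase.
    static DiscountStrategy strategyFor(boolean newPricingEnabled) {
        return newPricingEnabled ? new NewPricing() : new OldPricing();
    }

    public static void main(String[] args) {
        System.out.println(strategyFor(false).apply(100.0)); // 100.0
        System.out.println(strategyFor(true).apply(100.0));  // 90.0
    }
}
```

Call sites only ever talk to a DiscountStrategy, so toggling the flag changes behavior in exactly one place.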
You can even implement a delegate strategy that knows the other strategies and decides when to use which strategy based on the state of the feature flag:\nThis way, instead of sprinkling if conditions all over your code, you have the feature encapsulated cleanly within an object and can call this object\u0026rsquo;s methods instead of polluting your codebase with lots of if conditions.\nThis also makes the feature more evident in the code. If the feature needs updating, it\u0026rsquo;s easier to find because it\u0026rsquo;s all in one place and not distributed across the codebase.\nTest in Production Testing in production sounds scary and is often considered a no go. Without feature flags, if things were hard to test locally, for example, because of dependencies on other systems or a certain state of data in the production environment, developers would have to release a feature to production blindly and then test it. If the test failed, they would have to revert the change quickly and redeploy because it\u0026rsquo;s now failing for every user!\nWith feature flags \u0026ldquo;testing in production\u0026rdquo; is no longer a taboo! Instead of releasing the change to all users, we can release the change just for ourselves! Using a feature management platform like LaunchDarkly, we can enable a feature flag for a test user in the production environment and then log in as that user and test the change in the production environment. All other users are not affected by the change in any way.\nIf the test fails, we don\u0026rsquo;t have to revert and redeploy. Instead, we can just disable the feature again. Or we can leave it enabled because we have only enabled it for our test user anyway, so no other users have been affected at any time!\nHaving the opportunity to test in production doesn\u0026rsquo;t mean that this should be the standard way of testing, though. 
There need to be automated tests in place that run before each deployment to make sure that we haven\u0026rsquo;t introduced regressions. Testing in production is an option we have in our toolbox, however, if we use feature flags.\nRollout Progressively Using feature flags, we not only have the opportunity to enable a feature just for us to test in production, but we can also roll the feature out to more and more users over time.\nInstead of enabling a feature for everyone after we have successfully tested it, we can enable it for a percentage of all users, for example. On day one, we may only enable the feature for 5% of users. If any of those users report a problem, we can disable the feature again and investigate. If all is good, we may enable the feature for 25% of the users the next day, and 100% the day after.\nAnother way of rolling out progressively is to define user cohorts. Some users are very interested in new features, even if they might be a bit buggy, yet. These users we can group into an \u0026ldquo;early adopter\u0026rdquo; cohort. Then, we can release all new features to this cohort first, asking for feedback, before we roll it out to everyone else.\nRolling out features to a percentage of users or a user cohort requires a feature management platform like LaunchDarkly that supports percentage rollout and user cohorts.\nMonitor the Rollout A progressive rollout only makes sense if we check how the rollout is going. Is the feature working as expected for the first cohort of users? Do they report any issues? Can we see any errors popping up in our logs or metrics?\nWhen adding a feature flag to the code, we should think about how we can monitor the health of the feature once we enable the feature flag. Can we add some logging that tells us the feature is working as expected? Can we emit some monitoring metrics that will appear on our dashboards that will tell us if something goes wrong?\nThen, once we\u0026rsquo;re rolling out the feature (i.e. 
enabling the feature flag for a cohort of users), we can monitor these logs and metrics to decide whether the feature is working as expected. This allows us to make educated decisions about whether to continue rolling out to the next cohort or disabling the feature again to fix things.\nTest your Feature Flags Adding a feature flag to a codebase is like adding any other code: things can go wrong. A common mistake when adding a feature flag is to accidentally invert the if condition, i.e. execute some code when the feature flag is disabled when you actually wanted to execute the code when the feature flag is enabled.\nSince we rely on feature flags to roll out even unfinished features, we must get the feature flags right. That means - same as for other code - feature flagged code should be covered by automated tests.\nYour tests should cover all values a feature flag can have. Most commonly, a feature flag only has the values true and false (i.e. enabled and disabled), so there should be two tests. But a feature flag also may have a string value, for example. In this case, make sure that you have tests for valid strings as well as invalid strings. What is the code doing if the feature flag has an invalid value? What is the code doing if the feature flag has no value at all (for example when your feature management platform has an outage)? Those are all scenarios that should be covered by tests.\nCache Feature Flag State in Loops When using a feature management service as the source of truth for the state of your feature flags, your code has to somehow get the state of a feature flag from that service. That means the code might have to make an expensive remote call to get the feature flag state.\nImagine now that you are doing some batch processing in a loop and for each iteration, you evaluate a feature flag to do a certain processing step or not. That means one potential remote call to the feature management service per iteration of that loop! 
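The remote-call-per-iteration problem can be avoided by evaluating the flag once up front. A minimal sketch, assuming a hypothetical FeatureFlagClient interface as a stand-in for whatever client your feature management service provides:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchJob {

    // Hypothetical client interface; a real implementation might make a remote call.
    interface FeatureFlagClient {
        boolean isEnabled(String flagName);
    }

    static List<String> process(List<String> records, FeatureFlagClient flags) {
        // Evaluate the flag ONCE before the loop: one (potentially remote)
        // call instead of one call per record.
        boolean upperCaseEnabled = flags.isEnabled("upperCaseEnabled");
        List<String> result = new ArrayList<>();
        for (String record : records) {
            // use the cached value inside the loop
            result.add(upperCaseEnabled ? record.toUpperCase() : record.toLowerCase());
        }
        return result;
    }

    public static void main(String[] args) {
        FeatureFlagClient stub = flagName -> true; // stub client: flag enabled
        System.out.println(process(List.of("a", "b", "c"), stub)); // [A, B, C]
    }
}
```

The trade-off: a cached value cannot react to the flag being toggled while the loop is running, so this only fits features that do not need real-time control.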
Even the most performance-optimized code will slow down to a crawl!\nWhen you need a feature flag in a loop, consider storing the value of the feature flag in a variable before entering the loop. Then you can use this variable in the loop to avoid a remote call per iteration. Or use some other mechanism to cache the value of the feature flag so you don\u0026rsquo;t have to do a remote call every time.\nDepending on the feature you\u0026rsquo;re implementing, this may not be acceptable, though. Sometimes you want to be able to control the value of the feature flag in real time. Imagine you realize something is going wrong after the loop has started and you want to disable the feature flag for the rest of iterations. If you have cached the feature flag value, you can\u0026rsquo;t disable it on the fly and the rest of iterations will run with the old feature flag value.\nModern feature management services like LaunchDarkly provide clients that are smart enough not to make a remote call for every feature flag evaluation. Instead, the server pushes the feature flag values to the client every time they change. Anyway, it pays out to understand the capabilities of the feature flag client before using it in a loop.\nName Feature Flags Consistently Naming is hard. That\u0026rsquo;s true for programming in general and feature flags in particular.\nSame as for the rest of our code, feature flag code should be easily understandable. If a feature flag is misinterpreted it might mean that a feature goes out to users that shouldn\u0026rsquo;t see the feature, yet. That means that feature flags should be named in a way that tells us very clearly what the feature flag is doing.\nTry to find a naming pattern for your feature flags that makes it easy to recognize their meaning.\nA simple naming pattern is \u0026ldquo;XYZEnabled\u0026rdquo;. 
It\u0026rsquo;s clear that when this feature flag\u0026rsquo;s value is true, the feature is enabled and otherwise, it\u0026rsquo;s disabled.\nTry to avoid negated feature flag names like \u0026ldquo;XYZDisabled\u0026rdquo;, because that makes for awkward double-negated if conditions in your code like if(!XYZDisabled) {...}.\nDon\u0026rsquo;t Nest Feature Flags You probably have seen code before that is deeply nested like this:\nif(FooEnabled) { if(BarEnabled) { if(BazEnabled) { ... } else { ... } } else { ... } } else { ... } This code has a high cyclomatic complexity, meaning there are a lot of different branches the code can go through. This makes the code hard to understand and reason about.\nThe same is true for feature flags. Every evaluation of a feature flag in your code opens up another branch that may or may not be executed depending on the value of the feature flag.\nIt\u0026rsquo;s bad enough that feature flags increase the cyclomatic complexity of our code, so we shouldn\u0026rsquo;t make it worse by unnecessarily nesting feature flags.\nIn the above code, the feature Baz only has an effect if the features Foo and Bar are also enabled. There may be valid reasons for this, but this is very hard to understand. Every time you want to enable or disable the Baz feature for a cohort of users, you have to make sure that the other two features are also enabled or disabled for the same cohort.\nAt some point, you will make a mistake and not get the results you expect!\nClean Up Your Feature Flags As we can see in the code above, feature flags add code to your codebase that is not really nice to read (even if you don\u0026rsquo;t nest feature flags). 
Once a feature has been rolled out to all users, you should remove the code from the codebase, because you no longer need to check whether the feature is enabled or not - it should be enabled for everyone and that means you don\u0026rsquo;t need an if condition anymore.\nAlso, bad things can happen if you keep the feature flag code in your codebase. Someone might stumble over the feature flag and accidentally disable it for a cohort of users or even all of them.\nSometimes, however, you might want to keep a feature flag around to act as a kill switch to quickly disable a feature should it cause problems.\nWeigh the value of a kill switch against the toil of keeping the code around when you decide whether to keep a feature flag in the code or not.\nUse a Feature Management Platform While feature flags can be implemented with a simple if/else branch for simple use cases, they are only really powerful if you are using a feature management platform like LaunchDarkly.\nThese platforms let you define user cohorts and roll out features to one cohort after another with the flick of a switch in a browser-based UI.\nThey also allow you to monitor when feature flags have been evaluated to give you insights about the usage of your features, among a lot of other things.\nIf you\u0026rsquo;re starting with feature flags today, start with a feature management platform.\n","date":"October 21, 2022","image":"https://reflectoring.io/images/stock/0122-flags-1200x628-branded_hu527fb7afa4c66bb9fbc35962086f5821_153264_650x0_resize_q90_box.jpg","permalink":"/blog/2022/2022-10-21-feature-flags-best-practices/","title":"Feature Flags Best Practices"},{"categories":["Spring"],"contents":"What is AOP? 
Aspect Oriented Programming (AOP) is a programming paradigm aiming to extract cross-cutting functionalities, such as logging, into what\u0026rsquo;s known as \u0026ldquo;Aspects\u0026rdquo;.\nThis is achieved by adding behavior (\u0026ldquo;Advice\u0026rdquo;) to existing code without changing the code itself. We specify which code we want to add the behavior to using special expressions (\u0026ldquo;Pointcuts\u0026rdquo;).\nFor example, we can tell the AOP framework to log all method calls happening in the system without us having to add the log statement in every method call manually.\nSpring AOP AOP is one of the main components of the Spring framework. It provides declarative services for us, such as declarative transaction management (the famous @Transactional annotation). Moreover, it offers us the ability to implement custom Aspects and utilize the power of AOP in our applications.\nSpring AOP uses either JDK dynamic proxies or CGLIB to create the proxy for a given target object. JDK dynamic proxies are built into the JDK, whereas CGLIB is a common open-source class definition library (repackaged into spring-core).\nIf the target object to be proxied implements at least one interface, a JDK dynamic proxy is used. All of the interfaces implemented by the target type are proxied. If the target object does not implement any interfaces, a CGLIB proxy is created.\nAOP Basic Terminologies The terminologies we will discuss are not Spring-specific; they are general AOP concepts that Spring implements.\nLet\u0026rsquo;s start by introducing the four main building blocks of any AOP example in Spring.\nJoinPoint Simply put, a JoinPoint is a point in the execution flow of a method where an Aspect (new behavior) can be plugged in.\nAdvice It\u0026rsquo;s the behavior that addresses system-wide concerns (logging, security checks, etc\u0026hellip;). This behavior is represented by a method to be executed at a JoinPoint. 
This behavior can be executed Before, After, or Around the JoinPoint according to the Advice type as we will see later.\nPointcut A Pointcut is an expression that defines at what JoinPoints a given Advice should be applied.\nAspect Aspect is a class in which we define Pointcuts and Advices.\nSpring AOP Example And now let\u0026rsquo;s put those definitions into a coding example where we create a Log annotation that logs out a message to the console before the execution of the method starts.\nFirst, let\u0026rsquo;s include Spring\u0026rsquo;s AOP and test starters dependencies.\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-test\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-aop\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.7.4\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; Now, let\u0026rsquo;s create the Log annotation we want to use:\nimport java.lang.annotation.ElementType; import java.lang.annotation.Retention; import java.lang.annotation.RetentionPolicy; import java.lang.annotation.Target; @Target(ElementType.METHOD) @Retention(RetentionPolicy.RUNTIME) public @interface Log { } What this does is create an annotation that is only applicable to methods and gets processed at runtime.\nThe next step is creating the Aspect class with a Pointcut and Advice:\nimport org.aspectj.lang.annotation.Aspect; import org.aspectj.lang.annotation.Before; import org.aspectj.lang.annotation.Pointcut; import org.springframework.stereotype.Component; @Component @Aspect public class LoggingAspect { @Pointcut(\u0026#34;@annotation(Log)\u0026#34;) public void logPointcut(){ } 
@Before(\u0026#34;logPointcut()\u0026#34;) public void logAllMethodCallsAdvice(){ System.out.println(\u0026#34;In Aspect\u0026#34;); } } Linking this to the definitions we introduced up top we notice the @Aspect annotation which marks the LoggingAspect class as a source for @Pointcut and Advice (@Before). Note as well that we annotated the class as a @Component to allow Spring to manage this class as a Bean.\nMoreover, we used the expression @Pointcut(\u0026quot;@annotation(Log)\u0026quot;) to describe which potential methods (JoinPoints) are affected by the corresponding Advice method. In this case, we want to add the advice to all methods that are annotated with our @Log annotation.\nThis brings us to @Before(\u0026quot;logPointcut()\u0026quot;) that executes the annotated method logAllMethodCallsAdvice before the execution of any method annotated with @Log.\nNow, let\u0026rsquo;s create a Spring Service that will use the aspect we defined:\nimport org.springframework.stereotype.Service; @Service public class ShipmentService { @Log // this here is what\u0026#39;s called a join point  public void shipStuff(){ System.out.println(\u0026#34;In Service\u0026#34;); } } And let\u0026rsquo;s test it out in a @SpringBootTest\nimport org.junit.jupiter.api.Test; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.context.SpringBootTest; @SpringBootTest class AopApplicationTests { @Autowired ShipmentService shipmentService; @Test void testBeforeLog() { shipmentService.shipStuff(); } } This will spin up a Spring context and load the LoggingAspect and the ShipmentService. 
Next, in the test method, we call the shipStuff() method which was annotated by @Log.\nIf we check the console we should see\nIn Aspect In Service This means that the logAllMethodCallsAdvice method was indeed executed before the shipStuff() method.\nDeeper Look Into Spring AOP\u0026rsquo;s Annotations Let\u0026rsquo;s explore the full range of capabilities offered by Spring\u0026rsquo;s AOP annotations.\nPointcut Pointcut expressions start with a Pointcut Designator (PCD), which specifies which methods are targeted by our Advice.\nexecution This is used to match a JoinPoint method\u0026rsquo;s signature.\n@Component @Aspect public class LoggingAspect { ... @Pointcut(\u0026#34;execution(public void io.reflectoring.springboot.aop.ShipmentService.shipStuffWithBill())\u0026#34;) public void logPointcutWithExecution(){} } The above Pointcut will match the method named shipStuffWithBill with the signature public void that lives in the class io.reflectoring.springboot.aop.ShipmentService.\nNow, let\u0026rsquo;s add an Advice matching the above Pointcut\n@Component @Aspect public class LoggingAspect { ... @Pointcut(\u0026#34;execution(public void io.reflectoring.springboot.aop.ShipmentService.shipStuffWithBill())\u0026#34;) public void logPointcutWithExecution(){} @Before(\u0026#34;logPointcutWithExecution()\u0026#34;) public void logMethodCallsWithExecutionAdvice() { System.out.println(\u0026#34;In Aspect from execution\u0026#34;); } } And let\u0026rsquo;s put it to the test.\n@SpringBootTest class AopApplicationTests { @Autowired ShipmentService shipmentService; ... @Test void testBeforeLogWithBill() { shipmentService.shipStuffWithBill(); } } This should print out\nIn Aspect from execution In Service with Bill Note that we can also use wildcards to write a more flexible expression. 
For example, the expression\nexecution(public void io.reflectoring.springboot.aop.ShipmentService.*()) will match any public void method that doesn\u0026rsquo;t take parameters in ShipmentService.\nMoreover, the expression\nexecution(public void io.reflectoring.springboot.aop.ShipmentService.*(..)) will match any public void method that takes zero or more parameters in ShipmentService.\nwithin This is used to match all the JoinPoint methods in a given class, package, or sub-package.\n@Component @Aspect public class LoggingAspect { ... @Pointcut(\u0026#34;within(io.reflectoring.springboot.aop.BillingService)\u0026#34;) public void logPointcutWithin() {} @Before(\u0026#34;logPointcutWithin()\u0026#34;) public void logMethodCallsWithinAdvice() { System.out.println(\u0026#34;In Aspect from within\u0026#34;); } } Let\u0026rsquo;s introduce a new Service, called the BillingService.\n@Service public class BillingService { public void createBill() { System.out.println(\u0026#34;Bill created\u0026#34;); } } And putting it to the test\n@SpringBootTest class AopApplicationTests { ... @Autowired BillingService billingService; @Test void testWithin() { billingService.createBill(); } } This will give us\nIn Aspect from within Bill created Note that we can also use wildcards to be more flexible. For example, let\u0026rsquo;s write an expression to match all methods in the package io.reflectoring.springboot.aop\nwithin(io.reflectoring.springboot.aop.*) args This is used to match the arguments of JoinPoint methods. Note that the example below matches the Long parameter through the method\u0026rsquo;s signature; the dedicated args(Long) designator could also be combined with within or execution to match the runtime argument type.\n@Component @Aspect public class LoggingAspect { ... @Pointcut(\u0026#34;execution(public void io.reflectoring.springboot.aop.BillingService.createBill(Long))\u0026#34;) public void logPointcutWithArgs() {} @Before(\u0026#34;logPointcutWithArgs()\u0026#34;) public void logMethodCallsWithArgsAdvice() { System.out.println(\u0026#34;In Aspect from Args\u0026#34;); } } Now, let\u0026rsquo;s add a method that takes a Long argument.\n@Service public class BillingService { ... 
public void createBill(Long price) { System.out.println(\u0026#34;Bill Created: \u0026#34; + price); } } And the test\n@SpringBootTest class AopApplicationTests { @Test void testWithArgs() { billingService.createBill(10L); } } This should output\nIn Aspect from Args Bill Created: 10 @annotation This is used to match a JoinPoint method annotated with a given annotation. We used it in our first example of AOP.\n@Component @Aspect public class LoggingAspect { ... @Pointcut(\u0026#34;@annotation(Log)\u0026#34;) public void logPointcut(){ } @Before(\u0026#34;logPointcut()\u0026#34;) public void logAllMethodCallsAdvice(){ System.out.println(\u0026#34;In Aspect\u0026#34;); } ... } Then, we annotated a method with it\n@Service public class ShipmentService { @Log // this here is what\u0026#39;s called a join point  public void shipStuff(){ System.out.println(\u0026#34;In Service\u0026#34;); } } With the test\n@SpringBootTest class AopApplicationTests { @Autowired ShipmentService shipmentService; ... @Test void testBeforeLog() { shipmentService.shipStuff(); } ... } Which should output\nIn Aspect In Service Combining Pointcut Expressions We can combine more than a single Pointcut expression using the logical operators \u0026amp;\u0026amp; (and), || (or), and ! (not).\nSay we have an OrderService.\n@Service public class OrderService { public String orderStuff() { System.out.println(\u0026#34;Ordering stuff\u0026#34;); return \u0026#34;Order\u0026#34;; } public void cancelStuff() { System.out.println(\u0026#34;Canceling stuff\u0026#34;); } } Now, let\u0026rsquo;s write a Pointcut that matches all the methods in OrderService that have a return type of String.\n@Component @Aspect public class LoggingAspect { ... 
@Pointcut(\u0026#34;within(io.reflectoring.springboot.aop.OrderService) \u0026amp;\u0026amp; execution(public String io.reflectoring.springboot.aop.OrderService.*(..))\u0026#34;) public void logPointcutWithLogicalOperator(){} @Before(\u0026#34;logPointcutWithLogicalOperator()\u0026#34;) public void logPointcutWithLogicalOperatorAdvice(){ System.out.println(\u0026#34;In Aspect from logical operator\u0026#34;); } } And as a test\n@SpringBootTest class AopApplicationTests { ... @Autowired OrderService orderService; ... @Test void testOrderWithLogicalOperator() { orderService.orderStuff(); } @Test void testCancelWithLogicalOperator() { orderService.cancelStuff(); } } The testOrderWithLogicalOperator method should print out\nIn Aspect from logical operator Ordering stuff While the method testCancelWithLogicalOperator should print out\nCanceling stuff Advice Annotations So far we have only used the @Before advice annotation in its simplest form. Spring AOP, however, provides more interesting functionality.\n@Before We can capture the JoinPoint in the @Before-annotated method, which gives us useful information such as the method name and the method arguments. 
For example, let\u0026rsquo;s log the name of the method.\n@Component @Aspect public class LoggingAspect { @Pointcut(\u0026#34;@annotation(Log)\u0026#34;) public void logPointcut(){} @Before(\u0026#34;logPointcut()\u0026#34;) public void logAllMethodCallsAdvice(JoinPoint joinPoint){ System.out.println(\u0026#34;In Aspect at \u0026#34; + joinPoint.getSignature().getName()); } } And testing it\n@SpringBootTest class AopApplicationTests { @Autowired ShipmentService shipmentService; @Test void testBeforeLog() { shipmentService.shipStuff(); } } Will print out\nIn Aspect at shipStuff In Service @After This advice runs after the method finishes, whether by returning normally or by throwing an exception.\nLet\u0026rsquo;s introduce a new annotation\n@Target(ElementType.METHOD) @Retention(RetentionPolicy.RUNTIME) public @interface AfterLog {} @Component @Aspect public class LoggingAspect { ... @Pointcut(\u0026#34;@annotation(AfterLog)\u0026#34;) public void logAfterPointcut(){} @After(\u0026#34;logAfterPointcut()\u0026#34;) public void logMethodCallsAfterAdvice(JoinPoint joinPoint) { System.out.println(\u0026#34;In After Aspect at \u0026#34; + joinPoint.getSignature().getName()); } } And let\u0026rsquo;s modify our service to use the new annotation\n@Service public class OrderService { ... @AfterLog public void checkStuff() { System.out.println(\u0026#34;Checking stuff\u0026#34;); } } And as for the test\n@SpringBootTest class AopApplicationTests { ... @Test void testCheckingStuffWithAfter() { orderService.checkStuff(); } } This should output\nChecking stuff In After Aspect at checkStuff @AfterReturning This is similar to @After but it\u0026rsquo;s run only after a normal execution of the method.\n@AfterThrowing This is similar to @After but it\u0026rsquo;s run only after an exception is thrown while executing the method.\n@Around This annotation allows us to take actions either before or after a JoinPoint method is run. 
We can use it to return a custom value, throw an exception, or simply let the method run and return normally.\nLet\u0026rsquo;s start by defining a new ValidationService\n@Service public class ValidationService { public void validateNumber(int argument) { System.out.println(argument + \u0026#34; is valid\u0026#34;); } } And a new Aspect class\n@Component @Aspect public class ValidationAspect { @Pointcut(\u0026#34;within(io.reflectoring.springboot.aop.ValidationService)\u0026#34;) public void validationPointcut(){} @Around(\u0026#34;validationPointcut()\u0026#34;) public void aroundAdvice(ProceedingJoinPoint joinPoint) throws Throwable { System.out.println(\u0026#34;In Around Aspect\u0026#34;); int arg = (int) joinPoint.getArgs()[0]; if (arg \u0026lt; 0) throw new RuntimeException(\u0026#34;Argument should not be negative\u0026#34;); else joinPoint.proceed(); } } The above Pointcut expression will capture all methods that are in the class ValidationService. Then, the aroundAdvice() advice checks the first argument of the method: if it is negative, it throws an exception; otherwise, it lets the method execute and return normally.\n@SpringBootTest class AopApplicationTests { ... @Autowired ValidationService validationService; @Test void testValidAroundAspect() { validationService.validateNumber(10); } } This will print out\nIn Around Aspect 10 is valid And now let\u0026rsquo;s try a case where we will get an exception.\n@SpringBootTest class AopApplicationTests { ... @Autowired ValidationService validationService; ... @Test void testInvalidAroundAspect() { validationService.validateNumber(-4); } } This should output\nIn Around Aspect java.lang.RuntimeException: Argument should not be negative ... 
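Under the hood, all of the advices above work because Spring wraps the target bean in a proxy, as mentioned at the start of the article. Here is a minimal, Spring-free sketch of the JDK dynamic proxy mechanism that Spring AOP uses for interface-based targets. All names are hypothetical illustrations, not part of the article's example project:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {

    // Hypothetical interface standing in for a Spring bean's interface.
    interface ShipmentService {
        void shipStuff();
    }

    public static void main(String[] args) {
        // The "target object" whose method calls we want to intercept.
        ShipmentService target = () -> System.out.println("In Service");

        // The InvocationHandler plays the role of a "before" advice:
        // it runs first, then delegates to the real method.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("In Aspect at " + method.getName());
            return method.invoke(target, methodArgs);
        };

        // Create a proxy implementing ShipmentService that routes
        // every call through the handler.
        ShipmentService proxied = (ShipmentService) Proxy.newProxyInstance(
                ShipmentService.class.getClassLoader(),
                new Class<?>[]{ShipmentService.class},
                handler);

        proxied.shipStuff();
    }
}
```

Running this prints the "advice" line before the target method's output, mirroring what the @Before examples showed with Spring in the picture.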
Conclusion Aspect Oriented Programming (AOP) allows us to address cross-cutting concerns by coding our solutions into Aspects that are invoked by the Spring AOP framework.\nIt forms one of the main building blocks of the Spring framework, allowing it to hide complexity behind Aspects.\nThe framework offers us a powerful collection of annotations, which we covered with examples testing each one.\n","date":"October 17, 2022","image":"https://reflectoring.io/images/stock/0126-books-1200x628-branded_hu0b99844ea1b513b030a138fea8f9b8b1_333243_650x0_resize_q90_box.jpg","permalink":"/aop-spring/","title":"AOP with Spring (Boot)"},{"categories":["Software Craft"],"contents":"Continuous Deployment is the state-of-the-art way to ship software these days. Often, however, it\u0026rsquo;s not possible to practice continuous deployment because the context doesn\u0026rsquo;t allow it (yet).\nIn this article, we\u0026rsquo;re going to take a look at what continuous deployment means, the benefits it brings, and what tools and practices help us to build a successful continuous delivery pipeline.\nWhat\u0026rsquo;s Continuous Deployment? Let\u0026rsquo;s start by discussing what \u0026ldquo;continuous deployment\u0026rdquo; means.\nIt\u0026rsquo;s often described as a pipeline that transports software changes from the developers to the production environment:\nEach developer contributes changes to the pipeline, which go through a series of steps until they are automatically deployed to production.\nInstead of a pipeline, we can also use the metaphor of a conveyor belt that takes the developers' changes and transports them into the production environment. Just as the conveyor belt triggered industrialization in the 20th century, we might even think of continuous deployment triggering the industrialization of software development. 
I don\u0026rsquo;t think we\u0026rsquo;re quite there, yet, however, because setting up a proper continuous deployment pipeline is still harder (and more expensive) than it should be, discouraging some developers and managers from implementing the practice.\nIn any case, the main point of continuous deployment is that we have an automated way of getting changes into production quickly and with little risk to break things in production.\nIt\u0026rsquo;s not enough to set up a deployment pipeline and have developers drop their changes into it. The changes could block the pipeline (a failing build, for example), or they could introduce bugs that break things in production.\nContinuous deployment is all about confidence. We pass the responsibility of deploying our application to an automated system. Giving up control means that we need to be confident that this system is working. We also need to be confident that the system will notify us if something is wrong.\nIf we don\u0026rsquo;t have confidence in our continuous deployment systems and processes, chances are that we will want to revert to manual deployments because they give us more control.\nManual deployments, however, are proven to be slower and riskier, so we\u0026rsquo;ll want to build the most confidence-inspiring automated continuous deployment pipeline we can. And that means not only building confidence in the technical aspects of the pipeline, but also the methods and processes around it.\nContinuous Integration vs. Continuous Delivery vs. Continuous Deployment Continuous integration (CI) means that an automated system integrates the changes all developers made into the main branch regularly.\nThe changes in the main branch will trigger a build that compiles and packages the code and runs automated tests to check that the changes have not introduced regressions.\nContinuous delivery means that the automated system also creates a deployable artifact with each run. 
These artifacts might be stored in a package registry as an NPM package or a Docker image, for example. We can then decide to deploy the package into production or not.\nWhile continuous delivery ensures that we have a deployable artifact containing the latest changes from all developers at all times, continuous deployment goes a step further and ensures that the automated system deploys this artifact into production as soon as it has been created.\nWithout continuous integration, there is no continuous delivery and without continuous delivery, there is no continuous deployment.\nTrunk-based Development As we pointed out, continuous deployment is all about improving development velocity while keeping the risk of deploying changes to production low.\nTo keep the risk of changes low, the changes have to be as small as possible. A practice that directly supports small changes is trunk-based development:\nTrunk-based development means that developers each contribute their changes to the main branch - the trunk, main branch, or mainline - as often as possible, making each change small enough that it can be understood and reviewed quickly. A rule of thumb is to say that each developer\u0026rsquo;s changes are merged into the main branch at least once a day.\nSince each change is so small, we trust that it won\u0026rsquo;t introduce any issues so it can be automatically deployed to production by our continuous deployment pipeline.\nThe goal of trunk-based development is to avoid long-lived, large, and risky feature branches that require comprehensive peer reviews in favor of small iterations on the code that can be directly committed into the trunk. Practices that directly support trunk-based development are pair programming or mob programming because they have built-in peer review and knowledge sharing. 
That means a change can be merged into the trunk without the ritual of a separate code review.\nThe bigger the changes we introduce to the trunk the bigger the risk of things breaking in production, so trunk-based development forces you to do small changes and to have good review or pairing practices in place.\nHow does trunk-based development support continuous deployment? Trunk-based development is all about pushing small changes to the mainline continuously. Each change is so small and risk-free that we trust our automated pipeline to deploy it into production right away.\nFeature Flags Feature flags go hand in hand with trunk-based development, but they bring value even when we\u0026rsquo;re using feature branches and pull requests to merge changes into the trunk.\nSince each change that we merge to the trunk is supposed to be small, we will have to commit unfinished changes. But we don\u0026rsquo;t want these unfinished changes to be visible to the users of our application, yet.\nTo hide certain features from users, we can introduce a feature flag. A feature flag is an if/else branch in our code that enables a certain code path only if a feature is enabled.\nIf we put our changes behind a disabled feature flag, we can iterate on the feature commit by commit and deploy each commit to production until the feature is complete. Then, we can enable the completed feature for the users to enjoy.\nWe can even decide to release the feature to certain groups of early adopters before releasing it to all users. 
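The if/else branch and the early-adopter rollout described above can be sketched in plain Java. The flag name, user ids, and the isEnabled lookup are hypothetical; in practice a feature management platform would evaluate the flag per user:

```java
import java.util.Set;

public class CheckoutService {

    // Hypothetical early-adopter cohort; a feature management platform
    // would manage this per flag, without redeploying the application.
    static final Set<String> EARLY_ADOPTERS = Set.of("alice");

    // Stand-in for a feature flag evaluation per user.
    static boolean isEnabled(String flag, String userId) {
        return EARLY_ADOPTERS.contains(userId);
    }

    static String checkout(String userId) {
        if (isEnabled("newCheckoutEnabled", userId)) {
            // Code path of the (possibly unfinished) new feature,
            // hidden from everyone outside the cohort.
            return "new checkout";
        }
        return "old checkout";
    }

    public static void main(String[] args) {
        System.out.println(checkout("alice")); // early adopter: new checkout
        System.out.println(checkout("bob"));   // everyone else: old checkout
    }
}
```

Swapping the hard-coded cohort for a call to a feature management client is what lets us flip the flag at runtime without a new deployment.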
Feature flags allow a range of different rollout strategies like a rollout to only a percentage of users or a certain cohort of users.\nFor example, we can only enable the feature for ourselves, so we can test the new feature in production before rolling it out to the rest of the users.\nTo control the state of the feature flags (enabled, disabled), we can use a feature management platform like LaunchDarkly that allows us to enable and disable a feature flag at any time, without redeploying or restarting our application.\nHow do feature flags support continuous deployment? Feature flags allow us to deploy unfinished changes into production. We can push small changes to production continuously and enable the feature flag once the feature is ready. We don\u0026rsquo;t need to merge long-lived feature branches that are potentially risky and might break a deployment.\nQuick Code Review To get changes into the trunk quickly and safely, it\u0026rsquo;s best to have another pair of eyes on the changes before they get merged into the trunk. There are two common approaches to getting code reviewed properly: pull requests and pair (or mob) programming.\nWhen using pull requests, a developer has to raise a pull request with their changes. The term comes from open source development where a contributor requests the maintainers of a project to \u0026ldquo;pull\u0026rdquo; their changes into the main branch of the project.\nAnother developer then reviews the changes in the pull request and approves it or requires changes to the code. Finally, the pull request is merged into the main branch.\nWhile pull requests are a great tool for distributed open-source development where strangers can contribute code, they sometimes feel like overhead in a corporate setting, where people communicate synchronously via video chat (or even in the office in real life!). In these cases, pair programming or mob programming can be an alternative.\nIn pair programming, we work on a change together. 
Since we\u0026rsquo;ve had 4 (or more) eyes on the problem the whole time, we don\u0026rsquo;t need to create a pull request that has to be reviewed but can instead merge our changes directly into the main branch.\nIn any case - whether we\u0026rsquo;re using pull requests or pair programming - to support continuous deployment we should make sure that we merge our changes into the main branch as quickly as possible. That means that pull requests shouldn\u0026rsquo;t wait days to be reviewed, but should be reviewed within a day at the latest.\nHow does quick code review support continuous deployment? The longer it takes to merge a change into the main branch, the more other changes will have accumulated in the meantime. Every accumulated change to the main branch might be incompatible with the change we want to merge, leading to merge conflicts, a broken deployment pipeline, or even a bad deployment that breaks a certain feature in production - all things that we want to avoid with continuous deployment.\nAutomated Tests I probably don\u0026rsquo;t need to convince you to write automated tests that run with every build. It has been common practice in the industry for quite some time.\nEvery change we make should trigger an automated suite of tests that checks if we have introduced any regressions into the codebase. Continuous deployment can\u0026rsquo;t work without automated tests, because these tests are what give us the confidence to let a machine decide when to deploy our application.\nThat decision is pretty simple: if the tests were successful, deploy. If there was at least one failing test, don\u0026rsquo;t deploy. Only if the test suite is of high quality will we be confident with deploying often.\nWhen a test is failing, the deployment pipeline is blocked. No change is going to be deployed until the test has been fixed. 
If it takes too long to fix the test, many other changes might have accumulated in the pipeline and one of them might have caused another test to fail, which was hidden by the first failing test.\nSo, if the pipeline is blocked, unblocking it is priority number one!\nThe majority of the automated tests will usually be unit tests that each cover an isolated part of the codebase (i.e. a small group of classes, a single class, or even a method). These are quick to write and relatively easy to maintain.\nHowever, unit tests don\u0026rsquo;t prove that the \u0026ldquo;units\u0026rdquo; work well with each other, so you should at least think about adding some integration tests to your test suite. The definition of \u0026ldquo;integration test\u0026rdquo; isn\u0026rsquo;t the same in every context. They might start your application locally, send some requests against it, and then verify if the responses are as expected, for example.\nIt\u0026rsquo;s usually a good idea to have many cheap, quick, and stable tests (unit tests) and fewer complex, maintenance-heavy tests (manual tests) as outlined in the test pyramid:\nThat said, there may be arguments for an application to have no unit tests at all and instead only integration or end-to-end tests, so form your own opinion about which tests make the most sense in your context. It\u0026rsquo;s a good idea to have a testing strategy!\nHow do automated tests support continuous deployment? Without automated tests, we can\u0026rsquo;t have any confidence that the changes we push to production won\u0026rsquo;t break anything. 
A test suite with high coverage gives us the confidence to deploy any change directly to production.\nPost-Deployment Verification We would have even more confidence in our automated deployment if our changes were automatically deployed to a staging environment and tested there before being deployed to production.\nThis is where post-deployment verification (PDV) tests come into play.\nAs the name suggests, a PDV automatically checks if everything is alright after having deployed the application. The difference from the automated tests discussed in the previous section is that post-deployment verifications run against the real application in a real environment, whereas automated tests usually run in a local environment where all external dependencies are mocked away.\nThat means that PDV checks can also verify if external dependencies like a database or a 3rd party service are working as expected.\nAs an example, a PDV could log into the application deployed on staging as a real user, trigger a few of the main use cases, and verify if the results are as expected.\nWith a PDV, our continuous deployment pipeline might be configured as follows:\n Run the automated tests. Deploy the application to the staging environment. Run a post-deployment verification test against the staging environment. If the verification was successful, deploy the application to the production environment. Optional: run a post-deployment verification test against the production environment. Optional: if the verification failed, roll the production environment back to the previous (hopefully working) version.  This way, we have a safety net built into our deployment pipeline: it will only deploy to production if the deployment to staging has proven to be successful. 
This alone gives us a lot of trust in the pipeline and the confidence we need to let a machine decide to deploy for us.\nWe can additionally add a PDV against production after a production deployment, to check that the deployment was successful, but this task can also be done by synthetic monitoring.\nHow does post-deployment verification support continuous deployment? Post-deployment verification adds another safety net to our deployment pipeline that helps to identify bad changes before they go into production. This gives us more confidence in our automated deployment pipeline.\nSynthetic Monitoring Once our application is deployed to a staging or production environment, how do we know that it\u0026rsquo;s still working as expected an hour after the latest change has been deployed? We don\u0026rsquo;t want to check it manually every couple of minutes, so we can set up a synthetic monitoring job that checks that for us.\nSynthetic monitoring means that we are generating artificial (synthetic) traffic on our application to monitor if it\u0026rsquo;s working as expected.\nA synthetic monitoring check should log into the application as a dedicated test user, run some of the main use cases, and verify that the results are as expected. If it fails for some reason, or doesn\u0026rsquo;t produce the expected results, it alerts us that the application isn\u0026rsquo;t working as expected. A human can then investigate what\u0026rsquo;s wrong.\nIf we configure a synthetic monitoring check to run every couple of minutes, it gives us a lot of confidence because we know the application is working while we sleep.\nAs a bonus, we can re-use our post-deployment verification checks as synthetic monitoring checks!\nHow does synthetic monitoring support continuous deployment? With synthetic monitoring we have an additional safety net that can catch errors in the production environment before they do too much damage. 
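As an illustration, the pass/fail logic of such a check might be sketched as follows. This is a minimal sketch, not a real monitoring tool: the response shape and the `orderStatus` field are hypothetical, and a real check would obtain this data by logging into the deployed application as the test user over HTTP.

```java
import java.util.Map;

// Minimal sketch of the pass/fail logic of a synthetic monitoring check.
// The response fields below are hypothetical; a real check would fetch them
// from the deployed application after logging in as a dedicated test user.
public class SyntheticCheck {

    // The check passes only if the application responded successfully AND
    // the main use case produced the business result the test user expects.
    static boolean passes(int httpStatus, Map<String, String> response) {
        return httpStatus == 200
                && "COMPLETED".equals(response.get("orderStatus"));
    }

    public static void main(String[] args) {
        System.out.println(passes(200, Map.of("orderStatus", "COMPLETED"))); // true
        System.out.println(passes(500, Map.of("orderStatus", "FAILED")));    // false
    }
}
```

A scheduler (cron, or the monitoring product of your choice) would run this logic every couple of minutes and raise an alert whenever it returns false.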
Knowing that we have that safety net gives us the confidence to deploy small changes continuously.\nMetrics Say a synthetic monitoring check alerts us that something is wrong. Maybe it couldn\u0026rsquo;t finish the main use case for some reason. Is it because the cloud service we\u0026rsquo;re using failed? Is it because the queue we\u0026rsquo;re using is full? Is it because the servers ran out of memory or CPU?\nWe can only investigate these things if we have dashboards with some charts that show metrics like:\n successful and failing requests to the cloud service over time, depth of the queue over time, and memory and CPU consumption over time.  Having these metrics, when we get alerted, we can take a glance at the dashboards and check if there are any suspicious spikes at the time of the alert. If there are, they might lead us to the root cause of our problem.\nGetting metrics like this means that our application needs to emit events that count the number of requests, for example. And our infrastructure (queues, servers) needs to emit metrics, too. These metrics should be collected in a central hub where we can view them conveniently.\nHow do metrics support continuous deployment? Metrics give us confidence that we can figure out the root cause when something goes wrong. The more confidence we have, the more likely we are to push small changes to production continuously.\nAlerting and On-call If something in our production environment goes wrong, we want to know about it before the users can even start to complain about it. That means we have to configure the system to alert us under certain conditions.\nAlerting should be configured for synthetic monitoring checks and certain metrics. If a metric like CPU or memory consumption goes above or below a certain threshold for too long, it should send an alert and wake up a human to investigate and fix the problem.\nWho will be notified of the alert, though? 
If it\u0026rsquo;s in the middle of the night, we don\u0026rsquo;t want to wake up the whole team. And anyway, if the whole team is paged, chances are that no one responds because everyone thinks someone else will do it. Alerts can have different priorities, and only high-priority alerts wake people up in the middle of the night.\nThis is where an on-call rotation comes into play. Every week (or 2 weeks, or month), a different team member is on-call, meaning that the alerting is routed to them. They get the alert in the middle of the night, investigate the root cause, and fix it if possible. If they can\u0026rsquo;t fix the issue themselves, they alert other people who can help them. Or, they decide that the issue isn\u0026rsquo;t important enough and the fixing can wait until the next morning (in which case the alerting might need to be adjusted to not wake anyone up when this error happens the next time).\nWhile alerting and an on-call rotation are not necessary for implementing continuous deployment, they strongly support it. If errors in production go unnoticed because there was no alert, chances are that you lose faith in your automated deployments and revert to manual deployments.\nHow does alerting support continuous deployment? Knowing that we will be alerted if something goes wrong gives us yet another confidence boost that makes it easier for us to push small changes to production continuously.\nStructured Logging I probably don\u0026rsquo;t need to stress this, but proper logging makes all the difference in investigating an issue. It gives us confidence that we can figure out what\u0026rsquo;s wrong after the fact. 
Together with metrics dashboards, logs are a powerful observability tool.\nLogs should be structured (so they\u0026rsquo;re searchable) and collected in a central log server so everyone who needs access can access them via a web interface.\nWhile not strictly necessary for continuous deployment, knowing that we have a proper logging setup boosts our confidence so that we\u0026rsquo;re more likely to trust the continuous deployment process.\nHow does structured logging support continuous deployment? Similar to metrics, proper logging boosts our confidence that we can figure out the root cause when something went wrong. This gives us peace of mind to push small changes to production continuously.\nConclusion Continuous deployment is just as much about building confidence as it is about the tooling that supports the continuous deployment pipeline.\nIf we don\u0026rsquo;t have confidence in our automated deployment processes, we will want to go back to manual deployments because they give us the feeling of control. And we can only have confidence if the automation works most of the time and if it alerts us when something goes wrong so we can act.\nBuilding a continuous deployment pipeline can be a big cost, but once set up, it will save a lot of time and effort and open the way towards a DevOps culture that trusts in pushing changes to production continuously.\n","date":"October 10, 2022","image":"https://reflectoring.io/images/stock/0121-pipes-1200x628_hu91bd8723cad7b8d82c83b03f91500170_199413_650x0_resize_q90_box.jpg","permalink":"/blog/2022/2022-10-10-continuous-deployment-practices/","title":"9 Practices to Support Continuous Deployment"},{"categories":["Spring"],"contents":"It is common to encounter applications that run in different time zones. 
Handling date and time operations consistently across multiple layers of an application can be tricky.\nIn this article, we will try to understand the options available in Java and apply them in the context of a Spring application to effectively handle time zones.\n Example Code This article is accompanied by a working code example on GitHub. Understanding GMT, UTC, and DST  Greenwich Mean Time (GMT) is a time zone used in some parts of the world, mainly Europe and Africa. The Greenwich meridian was recommended as the prime meridian of the world in 1884, and GMT eventually became the basis for a global system of time zones. However, the United Nations officially adopted UTC as a standard in 1972 since it was more accurate than GMT for setting clocks. Coordinated Universal Time (UTC) is not a time zone. It is a universally preferred standard against which time zones are expressed as offsets. Daylight Saving Time (DST) is the practice of setting clocks forward by one hour in the summer months and back again in the fall, to make better use of natural daylight. Neither GMT nor UTC is affected by DST. To account for DST changes, countries or states usually switch to another time zone. For instance, in the Australian summer, the states that observe DST will move from Australian Eastern Standard Time (AEST) to Australian Eastern Daylight Time (AEDT).  Operations around dates, times, and time zones can be confusing and prone to errors. To understand some problems around dates, refer to this article. We will deep-dive into various aspects of handling time zones in the following sections.\nDrawbacks of Legacy java.util Date Classes Let\u0026rsquo;s look at a few reasons why you should avoid the date and time classes in the java.util package when developing applications.\nMissing Timezone Information public class DeprecatedExamples { @Test public void testCurrentDate() { Date now = new Date(); Date before = new Date(1661832030000L); assertThat(now).isAfter(before); } }  java.util.Date represents an instant in time. 
Also, it has no time zone information. So, it considers any given date to be in the default system time zone which could differ depending on which computer the code runs on. For instance, if a person runs this test from their computer in another country, they might see a different date and time derived from the given milliseconds.  Creating Date Objects public class DeprecatedExamples { @Test public void testCustomDate() { System.out.println(\u0026#34;Create date for 17 August 2022 23:30\u0026#34;); int year = 2022 - 1900; int month = 8 - 1; Date customDate = new Date(year, month, 17, 23, 30); assertThat(customDate.getYear()).isEqualTo(year); assertThat(customDate.getMonth()).isEqualTo(month); assertThat(customDate.getDate()).isEqualTo(17); } }  Creating a custom date with this API is very inconvenient. Firstly, the year starts with 1900, so we must subtract 1900 so that the right year is considered. Also, to derive the months, we need to use indexes 0-11. In this example, to create a date in August we would use 7 and not 8.  
Mutable Classes public class DeprecatedExamples { @Test public void testMutableClasses() { System.out.println(\u0026#34;Create date for 17 August 2022 23:30\u0026#34;); int year = 2022 - 1900; int month = 8 - 1; Date customDate = new Date(year, month, 17, 23, 30); assertThat(customDate.getHours()).isEqualTo(23); assertThat(customDate.getMinutes()).isEqualTo(30); customDate.setHours(20); customDate.setMinutes(50); assertThat(customDate.getHours()).isEqualTo(20); assertThat(customDate.getMinutes()).isEqualTo(50); Calendar calendar = Calendar.getInstance( TimeZone.getTimeZone(\u0026#34;Australia/Sydney\u0026#34;)); assertThat(calendar.getTimeZone()) .isEqualTo(TimeZone.getTimeZone(\u0026#34;Australia/Sydney\u0026#34;)); calendar.setTimeZone(TimeZone.getTimeZone(\u0026#34;Europe/London\u0026#34;)); assertThat(calendar.getTimeZone()) .isEqualTo(TimeZone.getTimeZone(\u0026#34;Europe/London\u0026#34;)); } }  Immutability is a key concept that ensures that java objects are thread-safe and concurrent access does not lead to an inconsistent state. The Date API provides mutators such as setHours(), setMinutes(), setDate(), making Date objects mutable. Similarly, the Calendar class also has setter methods setTimeZone(), add() which allows an object to be modified. Since the date objects are mutable, it becomes the responsibility of the developer to clone the object before use to be thread-safe.  Formatting Dates public class DeprecatedExamples { @Test public void testDateFormatter() { TimeZone zone = TimeZone.getTimeZone(\u0026#34;Europe/London\u0026#34;); DateFormat dtFormat = new SimpleDateFormat(\u0026#34;dd/MM/yyyy HH:mm\u0026#34;); Calendar cal = Calendar.getInstance(zone); Date date = cal.getTime(); String strFormat = dtFormat.format(date); assertThat(strFormat).isNotNull(); } } With the Date API, formatting can be quite tedious and the process involves numerous steps. 
As seen in the example above, there are various flaws in this process:\n The Date API itself does not store any formatting information. Therefore, we need to use it in combination with the SimpleDateFormat. The SimpleDateFormat class is not thread-safe, so it cannot be used in multithreaded applications without proper synchronization. As the Date API does not have time zone information, we have to use the Calendar class. However, it cannot be formatted, so we extract the date from Calendar for formatting.  SQL Dates java.sql.Date and java.sql.Timestamp are wrapper classes around java.util.Date that handle SQL-specific requirements. They represent the SQL DATE and TIMESTAMP types, respectively, and should be used only when working with databases, e.g. to get or set a date or a timestamp on a java.sql.PreparedStatement, java.sql.ResultSet, java.sql.SQLData, and other similar datatypes.\n In retrospect, this API hasn\u0026rsquo;t been designed very thoughtfully since java.sql.Date, java.sql.Time and java.sql.Timestamp all extend the java.util.Date class. Due to differences between the subclasses and java.util.Date, the documentation itself suggests not to use the Date class generically, thus violating the Liskov Substitution Principle.  Deprecated methods  Most of the methods in the java.util.Date class are deprecated. However, they are not officially removed from the JDK library, to support legacy codebases. To overcome the shortcomings of the java.util classes, Java 8 introduced the new DateTime API in the java.time package.   Java 8 java.time API The newer DateTime API is heavily influenced by the Joda-Time library, which was the de facto standard before Java 8. In this section, we will look at some commonly used date-time classes and their corresponding operations.\nLocalDate java.time.LocalDate is an immutable date object that does not store time or time zone information. 
However, we can pass the java.time.ZoneId object to get the local date in a particular time zone.\nSome LocalDate code examples:\npublic class DateTimeExamples { private Clock clock; @BeforeEach public void setClock() { clock = Clock.system(ZoneId.of(\u0026#34;Australia/Sydney\u0026#34;)); } @Test public void testLocalDate() { // we can create a new LocalDate at the time of any given Clock  LocalDate today = LocalDate.now(clock); assertThat(today.get(ChronoField.MONTH_OF_YEAR)).isPositive(); assertThat(today.get(ChronoField.YEAR)).isPositive(); assertThat(today.get(ChronoField.DAY_OF_MONTH)).isPositive(); Assertions.assertThrows(UnsupportedTemporalTypeException.class, () -\u0026gt; { today.get(ChronoField.HOUR_OF_DAY); }); // LocalDate only has the year, month, and day fields, no hours  LocalDate customDate = LocalDate.of(2022, Month.SEPTEMBER, 2); assertThat(customDate.getYear()).isEqualTo(2022); assertThat(customDate.getMonth()).isEqualTo(Month.SEPTEMBER); assertThat(customDate.getDayOfMonth()).isEqualTo(2); Assertions.assertThrows(UnsupportedTemporalTypeException.class, () -\u0026gt; { customDate.get(ChronoField.HOUR_OF_DAY); }); // creating a LocalDate in another time zone  assertThat(clock.getZone()) .isEqualTo(ZoneId.of(\u0026#34;Australia/Sydney\u0026#34;)); LocalDate zoneDate = LocalDate .now(ZoneId.of(\u0026#34;America/Anchorage\u0026#34;)); assertThat(today) .isCloseTo(zoneDate, within(1, ChronoUnit.DAYS)); // formatting a DateTime  DateTimeFormatter formatter = DateTimeFormatter .ofPattern(\u0026#34;dd-MM-yyyy\u0026#34;); assertThat(zoneDate).isEqualTo( LocalDate.parse(zoneDate.format(formatter), formatter)); // exception when trying to create an invalid date  Assertions.assertThrows(DateTimeException.class, () -\u0026gt; { LocalDate.of(2022, Month.SEPTEMBER, 31); }); } } LocalTime java.time.LocalTime is an immutable object that stores time up to nanosecond precision. It does not store date or time zone information. 
However, java.time.ZoneId can be used to get the time at a specific time zone.\nSome LocalTime code examples:\npublic class DateTimeExamples { private Clock clock; @BeforeEach public void setClock() { clock = Clock.system(ZoneId.of(\u0026#34;Australia/Sydney\u0026#34;)); } @Test public void testLocalTime() { //Time based on the time zone set in `Clock`  LocalTime now = LocalTime.now(clock); assertThat(now.get(ChronoField.HOUR_OF_DAY)).isPositive(); assertThat(now.get(ChronoField.MINUTE_OF_DAY)).isPositive(); assertThat(now.get(ChronoField.SECOND_OF_DAY)).isPositive(); //java.time.temporal.UnsupportedTemporalTypeException:  // Unsupported field: MonthOfYear  Assertions.assertThrows(UnsupportedTemporalTypeException.class, () -\u0026gt; { now.get(ChronoField.MONTH_OF_YEAR); }); LocalTime customTime = LocalTime.of(21, 40, 50); assertThat(customTime.get(ChronoField.HOUR_OF_DAY)).isEqualTo(21); assertThat(customTime.get(ChronoField.MINUTE_OF_HOUR)).isEqualTo(40); assertThat(customTime.get(ChronoField.SECOND_OF_MINUTE)).isEqualTo(50); // Has offset of UTC-8 or UTC-9  LocalTime zoneTime = LocalTime.now(ZoneId.of(\u0026#34;America/Anchorage\u0026#34;)); assertThat(now) .isCloseTo(zoneTime, within(19, ChronoUnit.HOURS)); DateTimeFormatter formatter = DateTimeFormatter.ofPattern(\u0026#34;HH:mm:ss\u0026#34;); //Should be almost same if not exact  assertThat(LocalTime.parse(zoneTime.format(formatter))) .isCloseTo(zoneTime, within(1, ChronoUnit.SECONDS)); // java.time.DateTimeException:  // Invalid value for HourOfDay (valid values 0 - 23): 25  Assertions.assertThrows(DateTimeException.class, () -\u0026gt; { LocalTime.of(25, 40, 50); }); } } LocalDateTime java.time.LocalDateTime is an immutable object that is a combination of both java.time.LocalDate and java.time.LocalTime.\nSome LocalDateTime code examples:\npublic class DateTimeExamples { private Clock clock; @BeforeEach public void setClock() { clock = Clock.system(ZoneId.of(\u0026#34;Australia/Sydney\u0026#34;)); } @Test 
public void testLocalDateTime() { //Time based on the time zone set in `Clock`  LocalDateTime currentDateTime = LocalDateTime.now(clock); assertThat(currentDateTime.get(ChronoField.DAY_OF_MONTH)).isPositive(); assertThat(currentDateTime.get(ChronoField.MONTH_OF_YEAR)).isPositive(); assertThat(currentDateTime.get(ChronoField.YEAR)).isPositive(); assertThat(currentDateTime.get(ChronoField.HOUR_OF_DAY)).isPositive(); assertThat(currentDateTime.get(ChronoField.MINUTE_OF_DAY)).isPositive(); assertThat(currentDateTime.get(ChronoField.SECOND_OF_DAY)).isPositive(); // Using Clock Timezone + Local Date + LocalTime  LocalDateTime currentUsingLocals = LocalDateTime.of(LocalDate.now(clock), LocalTime.now(clock)); // Should be almost same if not exact  assertThat(currentDateTime) .isCloseTo(currentUsingLocals, within(5, ChronoUnit.SECONDS)); LocalDateTime customDateTime = LocalDateTime.of(2022, Month.SEPTEMBER, 1, 10, 30, 59); assertThat(customDateTime.get(ChronoField.DAY_OF_MONTH)).isEqualTo(1); assertThat(customDateTime.get(ChronoField.MONTH_OF_YEAR)) .isEqualTo(Month.SEPTEMBER.getValue()); assertThat(customDateTime.get(ChronoField.YEAR)).isEqualTo(2022); assertThat(customDateTime.get(ChronoField.HOUR_OF_DAY)).isEqualTo(10); assertThat(customDateTime.get(ChronoField.MINUTE_OF_HOUR)).isEqualTo(30); assertThat(customDateTime.get(ChronoField.SECOND_OF_MINUTE)).isEqualTo(59); // Comparing zone offset of UTC+2 with Australia/Sydney (UTC+10 OR UTC+11)  LocalDateTime zoneDateTime = LocalDateTime.now(ZoneId.of(\u0026#34;+02:00\u0026#34;)); assertThat(currentUsingLocals) .isCloseTo(zoneDateTime, within(9, ChronoUnit.HOURS)); String currentDateTimeStr = \u0026#34;20-02-2022 10:30:45\u0026#34;; DateTimeFormatter format = DateTimeFormatter.ofPattern(\u0026#34;dd-MM-yyyy HH:mm:ss\u0026#34;); LocalDateTime parsedTime = LocalDateTime.parse(currentDateTimeStr, format); assertThat(parsedTime.get(ChronoField.DAY_OF_MONTH)).isEqualTo(20); assertThat(parsedTime.get(ChronoField.MONTH_OF_YEAR)) 
.isEqualTo(Month.FEBRUARY.getValue()); assertThat(parsedTime.get(ChronoField.YEAR)).isEqualTo(2022); assertThat(parsedTime.get(ChronoField.HOUR_OF_DAY)).isEqualTo(10); assertThat(parsedTime.get(ChronoField.MINUTE_OF_HOUR)).isEqualTo(30); assertThat(parsedTime.get(ChronoField.SECOND_OF_MINUTE)).isEqualTo(45); //java.time.zone.ZoneRulesException: Unknown time-zone ID: Europ/London  Assertions.assertThrows(ZoneRulesException.class, () -\u0026gt; { LocalDateTime.now(ZoneId.of(\u0026#34;Europ/London\u0026#34;)); }); } } ZonedDateTime java.time.ZonedDateTime is an immutable representation of date, time and time zone. It automatically handles Daylight Saving Time (DST) clock changes via the java.time.ZoneId which internally resolves the zone offset.\nSome ZonedDateTime code examples:\npublic class DateTimeExamples { private Clock clock; @BeforeEach public void setClock() { clock = Clock.system(ZoneId.of(\u0026#34;Australia/Sydney\u0026#34;)); } @Test public void testZonedDateTime() { //Time based on the time zone set in `Clock`  ZonedDateTime currentZoneDateTime = ZonedDateTime.now(clock); assertThat(currentZoneDateTime.getZone()) .isEqualTo(ZoneId.of(\u0026#34;Australia/Sydney\u0026#34;)); assertThat(currentZoneDateTime.get(ChronoField.DAY_OF_MONTH)).isPositive(); assertThat(currentZoneDateTime.get(ChronoField.MONTH_OF_YEAR)).isPositive(); assertThat(currentZoneDateTime.get(ChronoField.YEAR)).isPositive(); assertThat(currentZoneDateTime.get(ChronoField.HOUR_OF_DAY)).isPositive(); assertThat(currentZoneDateTime.get(ChronoField.MINUTE_OF_HOUR)).isPositive(); assertThat(currentZoneDateTime.get(ChronoField.SECOND_OF_MINUTE)).isPositive(); // Clock TZ + LocalDateTime + Specified ZoneId  ZonedDateTime withLocalDateTime = ZonedDateTime.of(LocalDateTime.now(clock), ZoneId.of(\u0026#34;Australia/Sydney\u0026#34;)); // Should be almost same if not exact  assertThat(currentZoneDateTime) .isCloseTo(withLocalDateTime, within(5, ChronoUnit.SECONDS)); // Clock TZ + LocalDate + 
LocalTime + Specified zone  ZonedDateTime withLocals = ZonedDateTime.of(LocalDate.now(clock), LocalTime.now(clock), clock.getZone()); // Should be almost same if not exact  assertThat(withLocalDateTime) .isCloseTo(withLocals, within(5, ChronoUnit.SECONDS)); ZonedDateTime customZoneDateTime = ZonedDateTime.of(2022, Month.FEBRUARY.getValue(), MonthDay.now(clock).getDayOfMonth(), 20, 45, 50, 55, ZoneId.of(\u0026#34;Europe/London\u0026#34;)); assertThat(customZoneDateTime.getZone()) .isEqualTo(ZoneId.of(\u0026#34;Europe/London\u0026#34;)); assertThat(customZoneDateTime.get(ChronoField.DAY_OF_MONTH)) .isEqualTo(MonthDay.now(clock).getDayOfMonth()); assertThat(customZoneDateTime.get(ChronoField.MONTH_OF_YEAR)) .isEqualTo(Month.FEBRUARY.getValue()); assertThat(customZoneDateTime.get(ChronoField.YEAR)).isEqualTo(2022); assertThat(customZoneDateTime.get(ChronoField.HOUR_OF_DAY)).isEqualTo(20); assertThat(customZoneDateTime.get(ChronoField.MINUTE_OF_HOUR)).isEqualTo(45); assertThat(customZoneDateTime.get(ChronoField.SECOND_OF_MINUTE)).isEqualTo(50); DateTimeFormatter formatter = DateTimeFormatter.ofPattern(\u0026#34;yyyy-MM-dd HH:mm:ss a\u0026#34;); // This String has no time zone information. Provide one for successful parsing.  String timeStamp1 = \u0026#34;2022-03-27 10:15:30 AM\u0026#34;; // Has offset UTC+0 or UTC+1  ZonedDateTime parsedZonedTime1 = ZonedDateTime.parse(timeStamp1, formatter.withZone(ZoneId.of(\u0026#34;Europe/London\u0026#34;))); // Has offset UTC+10 or UTC+11  ZonedDateTime parsedZonedTime2 = parsedZonedTime1.withZoneSameInstant(ZoneId.of(\u0026#34;Australia/Sydney\u0026#34;)); assertThat(parsedZonedTime1) .isCloseTo(parsedZonedTime2, within(10, ChronoUnit.HOURS)); } } OffsetDateTime java.time.OffsetDateTime is an immutable representation of java.time.Instant that represents an instant in time along with an offset from UTC/GMT. 
When zone information needs to be saved in the database this format is preferred as it would always represent the same instant on the timeline (especially when the server and database represent different time zones, a conversion that represents time at the same instant would be required).\nSome OffsetDateTime code examples:\npublic class DateTimeExamples { private Clock clock; @BeforeEach public void setClock() { clock = Clock.system(ZoneId.of(\u0026#34;Australia/Sydney\u0026#34;)); } @Test public void testOffsetDateTime() { OffsetDateTime currentDateTime = OffsetDateTime.now(clock); // Offset can be either UTC+10 or UTC+11 depending on DST  Assertions.assertTrue(Stream.of(ZoneOffset.of(\u0026#34;+10:00\u0026#34;), ZoneOffset.of(\u0026#34;+11:00\u0026#34;)).anyMatch(zo -\u0026gt; zo.equals(currentDateTime.getOffset()))); assertThat(currentDateTime.get(ChronoField.DAY_OF_MONTH)).isPositive(); assertThat(currentDateTime.get(ChronoField.MONTH_OF_YEAR)).isPositive(); assertThat(currentDateTime.get(ChronoField.YEAR)).isPositive(); assertThat(currentDateTime.get(ChronoField.HOUR_OF_DAY)).isPositive(); assertThat(currentDateTime.get(ChronoField.MINUTE_OF_HOUR)).isPositive(); assertThat(currentDateTime.get(ChronoField.SECOND_OF_MINUTE)).isPositive(); // For the specified offset, check the difference with the current  // Since Offset is hardcoded, Zone rules will not apply here for +01:00 (No DST)  ZoneOffset zoneOffSet = ZoneOffset.of(\u0026#34;+01:00\u0026#34;); OffsetDateTime offsetDateTime = OffsetDateTime.now(zoneOffSet); assertThat(currentDateTime) .isCloseTo(offsetDateTime, within(10, ChronoUnit.HOURS)); // Offset + LocalDate + LocalTime  // Since Offset here is derived from Zone Id, Zone rules will apply  // and DST changes will be considered  OffsetDateTime fromLocals = OffsetDateTime.of(LocalDate.now(clock), LocalTime.now(clock), currentDateTime.getOffset()); Assertions.assertTrue(Stream.of(ZoneOffset.of(\u0026#34;+10:00\u0026#34;), 
ZoneOffset.of(\u0026#34;+11:00\u0026#34;)).anyMatch(zo -\u0026gt; zo.equals(fromLocals.getOffset()))); assertThat(currentDateTime) .isCloseTo(fromLocals, within(5, ChronoUnit.SECONDS)); OffsetDateTime fromLocalDateTime = OffsetDateTime.of( LocalDateTime.of(2022, Month.NOVEMBER, 1, 10, 10, 10), currentDateTime.getOffset()); Assertions.assertTrue(Stream.of(ZoneOffset.of(\u0026#34;+10:00\u0026#34;), ZoneOffset.of(\u0026#34;+11:00\u0026#34;)).anyMatch(zo -\u0026gt; zo.equals(fromLocalDateTime.getOffset()))); assertThat(fromLocalDateTime.get(ChronoField.DAY_OF_MONTH)).isEqualTo(1); assertThat(fromLocalDateTime.get(ChronoField.MONTH_OF_YEAR)) .isEqualTo(Month.NOVEMBER.getValue()); assertThat(fromLocalDateTime.get(ChronoField.YEAR)).isEqualTo(2022); assertThat(fromLocalDateTime.get(ChronoField.HOUR_OF_DAY)).isEqualTo(10); assertThat(fromLocalDateTime.get(ChronoField.MINUTE_OF_HOUR)).isEqualTo(10); assertThat(fromLocalDateTime.get(ChronoField.SECOND_OF_MINUTE)).isEqualTo(10); // Defined offset based on zone rules will consider DST.  
// Date: 1st Nov 2022 10:10:10  OffsetDateTime fromLocalsWithDefinedOffset = OffsetDateTime.of(LocalDate.now(clock), LocalTime.now(clock), ZoneId.of(\u0026#34;Australia/Sydney\u0026#34;).getRules().getOffset( LocalDateTime.of(2022, Month.NOVEMBER, 1, 10, 10, 10))); assertThat(fromLocalsWithDefinedOffset.getOffset()) .isEqualTo(ZoneOffset.of(\u0026#34;+11:00\u0026#34;)); OffsetDateTime sameInstantDiffOffset = currentDateTime.withOffsetSameInstant(ZoneOffset.of(\u0026#34;+01:00\u0026#34;)); assertThat(currentDateTime) .isCloseTo(sameInstantDiffOffset, within(10, ChronoUnit.HOURS)); OffsetDateTime dt = OffsetDateTime.parse(\u0026#34;2011-12-03T10:15:30+01:00\u0026#34;, DateTimeFormatter.ISO_OFFSET_DATE_TIME); DateTimeFormatter fmt = DateTimeFormatter.ofPattern(\u0026#34;yyyy-MM-dd\u0026#39;T\u0026#39;HH:mm:ss\u0026#39;Z\u0026#39;\u0026#34;); assertThat(fmt.format(dt)).contains(\u0026#34;Z\u0026#34;); ZonedDateTime currentZoneDateTime = ZonedDateTime.now(clock); OffsetDateTime convertFromZoneToOffset = currentZoneDateTime.toOffsetDateTime(); assertThat(currentDateTime) .isCloseTo(convertFromZoneToOffset, within(5, ChronoUnit.SECONDS)); } } Compatibility with the Legacy API As a part of the Date/Time API, methods have been introduced to convert from objects of the old date API to the newer API objects .\nCode examples to convert between old and new date formats:\npublic class DateUtilToTimeExamples { private Clock clock; @BeforeEach public void setClock() { clock = Clock.system(ZoneId.of(\u0026#34;Australia/Sydney\u0026#34;)); } @Test public void testWorkingWithLegacyDateInJava8() { Date date = new Date(); Instant instant = date.toInstant(); assertThat(instant).isNotEqualTo(clock.instant()); ZonedDateTime zdt = instant.atZone(clock.getZone()); assertThat(zdt.getZone()) .isEqualTo(ZoneId.of(\u0026#34;Australia/Sydney\u0026#34;)); LocalDate ld = zdt.toLocalDate(); assertThat(ld).isEqualTo(LocalDate.now(clock)); ZonedDateTime zdtDiffZone = 
zdt.withZoneSameInstant(ZoneId.of(\u0026#34;Europe/London\u0026#34;)); assertThat(zdtDiffZone.getZone()) .isEqualTo(ZoneId.of(\u0026#34;Europe/London\u0026#34;)); } @Test public void testWorkingWithLegacyCalendarInJava8() { Calendar calendar = Calendar.getInstance(TimeZone.getTimeZone(clock.getZone())); assertThat(calendar.getTimeZone()) .isEqualTo(TimeZone.getTimeZone(\u0026#34;Australia/Sydney\u0026#34;)); Date calendarDate = calendar.getTime(); Instant instant = calendar.toInstant(); assertThat(calendarDate.toInstant()).isEqualTo(calendar.toInstant()); ZonedDateTime instantAtDiffZone = instant.atZone(ZoneId.of(\u0026#34;Europe/London\u0026#34;)); assertThat(instantAtDiffZone.getZone()) .isEqualTo(ZoneId.of(\u0026#34;Europe/London\u0026#34;)); LocalDateTime localDateTime = instantAtDiffZone.toLocalDateTime(); LocalDateTime localDateTimeWithZone = LocalDateTime.now(ZoneId.of(\u0026#34;Europe/London\u0026#34;)); assertThat(localDateTime) .isCloseTo(localDateTimeWithZone, within(5, ChronoUnit.SECONDS)); } } As we can see in the examples, methods are provided to convert to java.time.Instant, which represents a timestamp at a particular instant.\nAdvantages of the new DateTime API  Operations such as formatting, parsing, and time zone conversions can be easily performed. Exception handling with the classes java.time.DateTimeException and java.time.zone.ZoneRulesException is well-detailed and easy to comprehend. All classes are immutable, making them thread-safe. Each of the classes provides a variety of utility methods that help compute, extract, and modify date-time information, thus catering to most common use cases. Additional complex date computations are available in conjunction with the java.time.temporal package and the java.time.Period and java.time.Duration classes. Methods have been added to the legacy APIs to convert objects to java.time.Instant and let legacy code use the newer APIs.  
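The article shows no example of the Period and Duration classes mentioned in the list above, so here is a small, self-contained sketch (the dates are arbitrary examples chosen for illustration):

```java
import java.time.Duration;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.Month;
import java.time.Period;

public class PeriodDurationExamples {
    public static void main(String[] args) {
        // Period models a date-based amount (years, months, days)
        LocalDate from = LocalDate.of(2014, Month.MARCH, 18);
        LocalDate to = LocalDate.of(2022, Month.SEPTEMBER, 18);
        Period period = Period.between(from, to);
        System.out.println(period.getYears());  // 8
        System.out.println(period.getMonths()); // 6

        // Duration models an exact time-based amount (seconds, nanoseconds)
        LocalDateTime start = LocalDateTime.of(2022, Month.SEPTEMBER, 1, 10, 0);
        LocalDateTime end = LocalDateTime.of(2022, Month.SEPTEMBER, 1, 12, 30);
        Duration duration = Duration.between(start, end);
        System.out.println(duration.toMinutes()); // 150
    }
}
```

As a rule of thumb, use Period with the date classes (LocalDate) and Duration with the time-carrying classes (LocalTime, LocalDateTime, Instant).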
Dealing with Timezones in a Spring Boot Application In this section, we will take a look at how to handle time zones when working with Spring Boot and JPA.\nIntroduction to a Sample Spring Boot Application For demonstration purposes, we will use this application to look at how time zone conversions apply. This application is a Spring Boot application with MySQL as the underlying database. First, let\u0026rsquo;s look at the database.\nAccording to the official Oracle documentation:\n You can change the database time zone manually but Oracle recommends that you keep it as UTC (the default) to avoid data conversion and improve performance when data is transferred among databases. This configuration is especially important for distributed databases, replication, and export and import operations.\n This applies to all databases, so conforming with the preferred practice, we will configure MySQL to use UTC as default when working with JPA. This removes the complication of converting between time zones. Now we just need to handle time zones at the server.\nFor this to apply, we will configure the below properties in application.yml:\nspring: jpa: database-platform: org.hibernate.dialect.MySQL8Dialect properties: hibernate: jdbc: time_zone: UTC Mapping MySQL Date Types to Java Let\u0026rsquo;s take a quick look at some MySQL date types:\n DATE: The DATE type is used for values with a date part but no time part. MySQL retrieves and displays DATE values in YYYY-MM-DD format. DATETIME: The DATETIME type is used for values that contain both date and time parts. MySQL retrieves and displays DATETIME values in YYYY-MM-DD hh:mm:ss format. TIMESTAMP: The format for TIMESTAMP is similar to DATETIME. The only difference is that TIMESTAMP stores values in UTC by default. TIME: The TIME type stores the time in hh:mm:ss format. It can also store time up to microseconds (6-digit precision).  
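As a quick illustration of the DATETIME display format mentioned above, the following sketch (our own example, not part of the sample application) formats a LocalDateTime with the YYYY-MM-DD hh:mm:ss pattern that MySQL uses:

```java
import java.time.LocalDateTime;
import java.time.Month;
import java.time.format.DateTimeFormatter;

public class MySqlDateTimeFormatExample {

    // Formats a LocalDateTime the way MySQL displays DATETIME values
    static String toMySqlDateTime(LocalDateTime localDateTime) {
        return localDateTime.format(
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
    }

    public static void main(String[] args) {
        System.out.println(toMySqlDateTime(
            LocalDateTime.of(2022, Month.SEPTEMBER, 8, 21, 21, 17)));
        // prints: 2022-09-08 21:21:17
    }
}
```

Note that the formatted string carries no zone or offset information, which is exactly why DATETIME values alone are ambiguous across time zones.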
Now that we understand the supported data types, let\u0026rsquo;s look at how to map them with the Java Date/Time API.\nThe MySQL table is defined as follows:\nCREATE TABLE IF NOT EXISTS `timezonedb`.`date_time_tbl` ( `id` INT NOT NULL AUTO_INCREMENT, `date_str` VARCHAR ( 500 ) NULL, `date_time` DATETIME NULL, `local_time` TIME NULL, `local_date` DATE NULL, `local_datetime_dt` DATETIME NULL, `offset_datetime` TIMESTAMP NULL, `zoned_datetime` TIMESTAMP NULL, `created_at` TIMESTAMP NOT NULL, PRIMARY KEY ( `id` )); The corresponding JPA entity is as below:\n@Entity @Table(name = \u0026#34;date_time_tbl\u0026#34;) public class DateTimeEntity implements Serializable { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Integer id; @Column(name = \u0026#34;date_str\u0026#34;) private String dateStr; @Column(name = \u0026#34;date_time\u0026#34;) private Date date; @Column(name = \u0026#34;local_date\u0026#34;) private LocalDate localDate; @Column(name = \u0026#34;local_time\u0026#34;) private LocalTime localTime; @Column(name = \u0026#34;local_datetime_dt\u0026#34;) private LocalDateTime localDateTime; @Column(name = \u0026#34;local_datetime_ts\u0026#34;) private LocalDateTime localDateTimeTs; @Column(name = \u0026#34;offset_datetime\u0026#34;) private OffsetDateTime offsetDateTime; @Column(name = \u0026#34;zoned_datetime\u0026#34;) private ZonedDateTime zonedDateTime; @Column(name = \u0026#34;created_at\u0026#34;, nullable = false, updatable = false) @CreationTimestamp private OffsetDateTime createdAt; } Understanding the Server Timezone Setup Consider the same instant across the three zones we care about: GMT (UTC +00:00) shows 2022-09-08T06:38:03, Europe/London (UTC +01:00) shows 2022-09-08T07:38:03, and Europe/Berlin (UTC +02:00) shows 2022-09-08T08:38:03.\nAs seen, the database should store the timestamp in UTC (+00:00). We will run our Spring Boot application in the custom time zones Europe/London and Europe/Berlin. 
Since these time zones have offset +01:00 and +02:00 respectively, we can easily compare the timestamps stored and retrieved.\nTo start the Spring application in the Europe/London time zone we specify the time zone in the arguments as:\nmvnw clean verify spring-boot:run \\ -Dspring-boot.run.jvmArguments=\u0026#34;-Duser.timezone=Europe/London\u0026#34; Similarly, to start the application in the Europe/Berlin time zone we would specify the arguments as:\nmvnw clean verify spring-boot:run \\ -Dspring-boot.run.jvmArguments=\u0026#34;-Duser.timezone=Europe/Berlin\u0026#34; We have configured two endpoints in the controller class:\n http://localhost:8083/app/v1/timezones/default stores a specific date/time in the time zone specified in the JVM arguments. http://localhost:8083/app/v1/timezones/dst This endpoint indicates the end of DST in the specific time zone for the defined date. This will help us understand how the application handles DST changes.  Daylight Savings Time As of 8th September 2022, both the time zones Europe/London and Europe/Berlin are on DST. Their corresponding time zones are British Summer Time (BST / UTC+1) and Central European Summer Time (CEST / UTC+2).\nAfter 30th October 2022, the DST will end and they will be back to Greenwich Mean Time (GMT / UTC) and Central European Time (CET / UTC+1) respectively.\nIn our sample application, we will consider the below date and time for understanding and executing our test cases.\nDST Date/Time: 8th September 2022 21:21:17\nNon-DST Date/Time: 8th November 2022 09:10:20\n Comparing Timezone Results Let\u0026rsquo;s look at the output of the REST endpoint /timezones/default and compare the dates with the dates we have stored in the database. The application has been started with the time zone Europe/London:\nIn comparison, we can make note of the following points:\n VARCHAR representation of date (column date_str) in the database is not recommended, since it stores the date in whatever format and zone it was sent in. 
This could result in inconsistent date formats and the final date stored will not be in UTC, making it difficult to convert back into the application. java.util.Date stored in the DB (column date_time) has no zone information, making it difficult to represent the right date-time format in the application. Similarly, the DATE and TIME columns (columns local_date and local_time) need additional information, especially when working with time zones. LocalDateTime (column local_datetime_dt), although it represents the correct date-time, still needs additional information when working with time zones. As we can see, OffsetDateTime and ZonedDateTime (columns offset_datetime and zoned_datetime) give all the required information for the dates to be stored in UTC and retrieved in the right format. Therefore, we can conclude that DATETIME and TIMESTAMP should be the preferred choice when storing date-time in MySQL databases.  Now, let\u0026rsquo;s start the application in the time zone Europe/Berlin and compare the output of the REST endpoint with the database again:\nThe results in this time zone are consistent with the points noted above.\nNext, let\u0026rsquo;s see what happens when the time zone is set to Europe/Berlin and when the DST ends on 30th October 2022. The custom date considered here is 8th November 2022:\nWhen DST ends, Europe/Berlin will shift to the UTC+1 time zone, and this is consistent with the results as seen in the output above. In all the above cases, OffsetDateTime and ZonedDateTime show the same results. This is because the offset in OffsetDateTime is derived from the ZoneId\u0026rsquo;s rules. All DST rules apply to the ZoneId, and hence OffsetDateTime gives the correct representation, including DST changes. 
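The DST behavior described above can be checked directly against the zone rules. The following standalone sketch (our own, reusing the article's sample dates) shows Europe/Berlin switching from +02:00 to +01:00 once DST ends:

```java
import java.time.LocalDateTime;
import java.time.Month;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class BerlinDstExample {

    // Offset that Europe/Berlin applies at a given local date-time
    static ZoneOffset berlinOffsetAt(LocalDateTime localDateTime) {
        return ZoneId.of("Europe/Berlin").getRules().getOffset(localDateTime);
    }

    public static void main(String[] args) {
        // 8th September 2022 21:21:17 -> still on DST, so CEST (UTC+2)
        System.out.println(berlinOffsetAt(
            LocalDateTime.of(2022, Month.SEPTEMBER, 8, 21, 21, 17))); // +02:00
        // 8th November 2022 09:10:20 -> DST ended on 30th October, so CET (UTC+1)
        System.out.println(berlinOffsetAt(
            LocalDateTime.of(2022, Month.NOVEMBER, 8, 9, 10, 20)));  // +01:00
    }
}
```

This is the same mechanism the application relies on: the offset is not stored anywhere, it is computed from the ZoneId's rules for the given instant.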
As discussed in the API section that details differences between OffsetDateTime and ZonedDateTime, we could use the one that best suits our use case.\nTesting Timezones in a Spring Boot Application When working with time zones and unit testing applications, we might want to control the dates and make them agnostic of the system time zone. The Date/Time API provides java.time.Clock that can be used for achieving this. According to the official documentation:\n Use of a Clock is optional. All key date-time classes also have a now() factory method that uses the system clock in the default time zone. The primary purpose of this abstraction is to allow alternate clocks to be plugged in as and when required. Applications use an object to obtain the current time rather than a static method. This can simplify testing.\n With this approach, we could define a Clock object in the desired time zone and pass it to any of the Date/Time API classes to get the corresponding date-time:\n@TestConfiguration public class ServiceConfiguration { @Bean public Clock clock() { return Clock.system(ZoneId.of(\u0026#34;Europe/London\u0026#34;)); } } Further, we also need to enable the bean overriding feature in our application.yml file as below:\nspring: main: allow-bean-definition-overriding: true This will help us override the beans for our test configuration. 
To understand how this works, refer to this article.\nNow, to get time zone-specific information, we can use:\nOffsetDateTime current = OffsetDateTime.now(clock); Further, we could also fix the clock to set it to a particular instant in a time zone:\nClock.fixed( Instant.parse(\u0026#34;2022-11-08T09:10:20.00Z\u0026#34;), ZoneId.of(\u0026#34;Europe/Berlin\u0026#34;) ); With this set, OffsetDateTime.now(clock) will always return the same time.\nTo always apply the default system time zone, we could use:\nClock.systemDefaultZone(); By setting the clock parameter, testing the same application in different time zones, with or without DST becomes much easier. We can then use some assertions in our tests to make sure the conversions are according to the configured time zones.\n@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT, classes = ServiceConfiguration.class) @ActiveProfiles(\u0026#34;test\u0026#34;) @AutoConfigureMockMvc public class DateTimeControllerTest { @Autowired private Clock clock; @Autowired private DateTimeService service; @Autowired private MockMvc mockMvc; @Autowired private DateTimeRepository repository; @BeforeEach void setup() { repository.deleteAll(); } @Test public void saveDateTimeObject() throws Exception { ResultActions response = mockMvc.perform(post(\u0026#34;/app/v1/timezones/default\u0026#34;)); List\u0026lt;DateTimeEntity\u0026gt; list = repository.findAll(); assertTrue(!list.isEmpty()); assertTrue(list.size() == 1); response.andDo(print()) .andExpect(status().isOk()) .andExpect(jsonPath(\u0026#34;$.applicationTimezone\u0026#34;).value(\u0026#34;Europe/London\u0026#34;)) .andExpect(jsonPath(\u0026#34;$.[\u0026#39;zonedDateTime (column zoned_datetime)\u0026#39;]\u0026#34;, Matchers.containsString(\u0026#34;+01:00\u0026#34;))) .andExpect(jsonPath(\u0026#34;$.[\u0026#39;offsetDateTime (column offset_datetime)\u0026#39;]\u0026#34;, Matchers.containsString(\u0026#34;+01:00\u0026#34;))) 
.andExpect(jsonPath(\u0026#34;$.[\u0026#39;localDateTime (column local_datetime_dt)\u0026#39;]\u0026#34;, Matchers.not(Matchers.containsString(\u0026#34;+01:00\u0026#34;)))); } } Best Practices for Storing Timezones in the Database Based on all the information we have gathered from executing our sample application, we can derive a common set of best practices that will apply to any database and will help us work with time zones correctly:\n Most databases support date and timestamp fields. Always store dates in the corresponding column types and never use VARCHAR. The recommended practice is to store timestamps in UTC to help handle zone conversions better. Column types like DATE and TIME should not be preferred, since they carry no zone information. In most cases you will want to store data with time zone information so that the application can cater to multiple time zones and is less prone to time conversion errors.  Conclusion We have seen the numerous advantages of the DateTime API and how it efficiently lets us save and retrieve timestamp information when working with databases. We have also seen a few examples of testing the created endpoints across time zones by manipulating the Clock in our unit tests.\n","date":"September 26, 2022","image":"https://reflectoring.io/images/stock/0111-clock-1200x628-branded_hu11424c7716805d3162fd43f6bfa1fe41_91574_650x0_resize_q90_box.jpg","permalink":"/spring-timezones/","title":"Handling Timezones in a Spring Boot Application"},{"categories":["Java"],"contents":"In software development, unit testing means writing tests that verify our code works as logical units.\nDifferent people tend to think of a \u0026ldquo;logical unit\u0026rdquo; in different ways. 
We will define it as a unit of behavior: a test verifies not a single method or class in isolation but the collective behavior of many methods doing useful business work.\nUnit testing allows us to catch bugs early in the development phase; it also allows us to refactor and change our code, knowing that if we mess something up the tests will light up red and notify us.\n Example Code This article is accompanied by a working code example on GitHub. JUnit and JUnit 5 JUnit is an Open Source testing framework for Java that relies heavily on annotations to run and manage our tests. It allows us to write our tests in logically separated suites, which can be executed in fast, parallel runs.\nJUnit 5 JUnit 5 was developed to leverage the new and powerful advances in the Java language introduced in Java 8 and beyond. It embraces the functional and declarative style of programming, which is more readable and easier to write. JUnit 5 can use more than one extension at a time, which was not possible in earlier versions where only one runner could be used at a time. This allows us to combine Spring extensions with other extensions including custom ones that we write.\nBasic components of the JUnit 5 project The components of JUnit are:\nJUnit Platform\n Serves as a foundation for launching testing frameworks on the JVM. Provides the Test Engine API for developing a testing framework that runs on the platform.  JUnit Jupiter\n Offers us the annotations and the extensions to write the tests.  JUnit Vintage\n Offers us the ability to run JUnit 4 and JUnit 3 tests on the platform.  Annotations JUnit annotations are offered to us by the Jupiter component. They make our tests much more readable and easy to write. 
Let\u0026rsquo;s take a look at the most important ones.\nBefore we start with the code, we need to add the junit-jupiter-api dependency.\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.junit.jupiter\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;junit-jupiter-api\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.9.0\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; @Test Adding this annotation to a public void method allows us to simply run that method as a test.\nimport org.junit.jupiter.api.Test; import static org.junit.jupiter.api.Assertions.assertEquals; public class DogTest { @Test public void testBark() { String expectedString = \u0026#34;woof\u0026#34;; assertEquals(expectedString, \u0026#34;woof\u0026#34;); } } Here we use the assertEquals() method to assert that the two strings are equal; we will cover it in more detail later. Now, to run the tests, all we need to do is run the Maven command mvn test.\nWe should see the test run and pass:\n[INFO] Running DogTest [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.005 s - in DogTest [INFO] [INFO] Results: [INFO] [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0 Let\u0026rsquo;s check what happens when the test fails.\npublic class DogTest { @Test public void barkFailure() { String expectedString = \u0026#34;Meow\u0026#34;; assertEquals(expectedString, \u0026#34;Woof\u0026#34;); } } Then we will see this in the log output:\n[INFO] Running DogTest [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.066 s \u0026lt;\u0026lt;\u0026lt; FAILURE! - in DogTest [ERROR] DogTest.barkFailure Time elapsed: 0.036 s \u0026lt;\u0026lt;\u0026lt; FAILURE! 
org.opentest4j.AssertionFailedError: expected: \u0026lt;Meow\u0026gt; but was: \u0026lt;Woof\u0026gt; @BeforeAll A method annotated with @BeforeAll has to be a static method and it will run once before any test is run.\npublic class DogTest { @BeforeAll public static void init() { System.out.println(\u0026#34;Doing stuff\u0026#34;); } @Test public void testBark() { String expectedString = \u0026#34;woof\u0026#34;; assertEquals(expectedString, \u0026#34;woof\u0026#34;); System.out.println(\u0026#34;WOOF!\u0026#34;); } } This will output:\nDoing stuff WOOF! @BeforeEach A method annotated with @BeforeEach will run once before every test method annotated with @Test.\npublic class DogTest { @BeforeEach public void doEach() { System.out.println(\u0026#34;Hey Doggo\u0026#34;); } @Test public void testBark1() { String expectedString = \u0026#34;woof1\u0026#34;; assertEquals(expectedString, \u0026#34;woof1\u0026#34;); System.out.println(\u0026#34;WOOF =\u0026gt; 1\u0026#34;); } @Test public void testBark2() { String expectedString = \u0026#34;woof2\u0026#34;; assertEquals(expectedString, \u0026#34;woof2\u0026#34;); System.out.println(\u0026#34;WOOF =\u0026gt; 2\u0026#34;); } } This will output:\nHey Doggo WOOF =\u0026gt; 1 Hey Doggo WOOF =\u0026gt; 2 Order not guaranteed JUnit doesn\u0026rsquo;t guarantee that tests will run in the order they appear in the file, so we might see WOOF =\u0026gt; 2 before WOOF =\u0026gt; 1.\n @AfterAll Similar to @BeforeAll, but it runs after all the tests are run:\npublic class DogTest { @AfterAll public static void finish() { System.out.println(\u0026#34;Finishing stuff\u0026#34;); } @Test public void testBark1() { String expectedString = \u0026#34;woof1\u0026#34;; assertEquals(expectedString, \u0026#34;woof1\u0026#34;); System.out.println(\u0026#34;WOOF =\u0026gt; 1\u0026#34;); } } This should output:\nWOOF =\u0026gt; 1 Finishing stuff @AfterEach This will run once after each test is run:\npublic class DogTest { @AfterEach public void doAfterEach() { 
System.out.println(\u0026#34;Bye Doggo\u0026#34;); } @Test public void testBark1() { String expectedString = \u0026#34;woof1\u0026#34;; assertEquals(expectedString, \u0026#34;woof1\u0026#34;); System.out.println(\u0026#34;WOOF =\u0026gt; 1\u0026#34;); } @Test public void testBark2() { String expectedString = \u0026#34;woof2\u0026#34;; assertEquals(expectedString, \u0026#34;woof2\u0026#34;); System.out.println(\u0026#34;WOOF =\u0026gt; 2\u0026#34;); } } This should output:\nWOOF =\u0026gt; 1 Bye Doggo WOOF =\u0026gt; 2 Bye Doggo @Disabled This annotation can be applied to @Test methods to prevent them from running, or it can even be applied to a class to prevent all the @Test methods inside it from running.\npublic class DogTest { @Disabled(\u0026#34;Dog 1 please don\u0026#39;t woof\u0026#34;) @Test public void testBark1() { String expectedString = \u0026#34;woof1\u0026#34;; assertEquals(expectedString, \u0026#34;woof1\u0026#34;); System.out.println(\u0026#34;WOOF =\u0026gt; 1\u0026#34;); } @Test public void testBark2() { String expectedString = \u0026#34;woof2\u0026#34;; assertEquals(expectedString, \u0026#34;woof2\u0026#34;); System.out.println(\u0026#34;WOOF =\u0026gt; 2\u0026#34;); } } We should see the test not running and the disabled message being displayed:\nDog 1 please don't woof WOOF =\u0026gt; 2 Assertions JUnit offers basic assertions through the Jupiter module. Let\u0026rsquo;s explore the main ones. For the full list, please check JUnit\u0026rsquo;s documentation.\nYou don\u0026rsquo;t have to use JUnit\u0026rsquo;s assertions, but can instead use a library like assertJ, if you want. Read more about assertJ in our dedicated article.\nYou can also use Hamcrest assertions as explained later in this article.\nassertEquals() This method compares two given parameters for equality. It can take any of the primitive types (int, float, etc\u0026hellip;) as well as Objects, for which it calls the equals() method to check equality. 
We have seen many examples of it in our previous code snippets.\nassertNotEquals() It checks for non-equality of two given parameters (primitives or objects).\npublic class DogTest { @Test public void testNotBark() { String unexpectedString = \u0026#34;\u0026#34;; assertNotEquals(unexpectedString, \u0026#34;woof\u0026#34;); System.out.println(\u0026#34;Didn\u0026#39;t woof!!\u0026#34;); } } This should pass and print:\nDidn't woof!! assertNull() It simply does a null check on the given parameter:\npublic class DogTest { @Test public void nullCheck() { Object dog = null; assertNull(dog); System.out.println(\u0026#34;Null dog :(\u0026#34;); } } This will print:\nNull dog :( assertNotNull() Unlike assertNull(), this fails if the given object is null:\npublic class DogTest { @Test public void nonNullCheck() { String dog = \u0026#34;Max\u0026#34;; assertNotNull(dog); System.out.println(\u0026#34;Hey I am \u0026#34; + dog); } } This will print:\nHey I am Max assertTrue() It will fail if the given boolean parameter is false:\npublic class DogTest { @Test public void trueCheck() { int dogAge = 2; assertTrue(dogAge \u0026lt; 5); System.out.println(\u0026#34;I am young :)\u0026#34;); } } This will print:\nI am young :) assertFalse() It will fail if the given boolean parameter is true:\npublic class DogTest { @Test public void falseCheck() { int dogAge = 7; assertFalse(dogAge \u0026lt; 5); System.out.println(\u0026#34;I am old :(\u0026#34;); } } This should show us:\nI am old :( Assertions with Hamcrest The first question that comes to mind is, why do we need an extra dependency to do the assertions when JUnit offers us a basic set of assertions?\nHamcrest is one of the most popular libraries for writing assertions; it offers a fluent API that enables us to easily write and read object matchers.\nAs we will see in the code examples below, Hamcrest assertions make the tests much more readable because they read almost like natural sentences. 
Let\u0026rsquo;s get to know some of the most important methods of Hamcrest.\nFirst, let\u0026rsquo;s introduce the Maven dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.hamcrest\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;hamcrest-all\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.3\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; assertThat() This is the starting point for most of our assertion statements; as the first parameter, it takes the object (or primitive) under test, and as the second parameter, it takes a Matcher:\nimport org.junit.jupiter.api.Test; import static org.hamcrest.MatcherAssert.assertThat; import static org.hamcrest.Matchers.*; public class CatTest { @Test public void testMeow() { String catName = \u0026#34;Stilla\u0026#34;; int catAge = 3; boolean isNice = false; assertThat(catName, equalTo(\u0026#34;Stilla\u0026#34;)); assertThat(catAge, lessThan(5)); assertThat(isNice, is(false)); } } Notice the flexibility we have on the Matchers we pass; there are many more Matchers we can use on many more data types.\nLet\u0026rsquo;s explore those Matchers a bit more.\ninstanceOf() It tests whether an object is an instance of a certain class. Say we have a Cat class:\npublic class Cat { } public class CatTest { @Test public void testCatInstance() { Cat cat = new Cat(); assertThat(cat, instanceOf(Cat.class)); } } sameInstance() It verifies if two object references are the same instance.\npublic class CatTest { @Test public void testSameCatInstance() { Cat cat = new Cat(); assertThat(cat, sameInstance(cat)); } } hasItems() It verifies that the given items exist in the collection under test. 
The existence is checked through the objects\u0026rsquo; equals() method.\npublic class CatTest { @Test public void testCollectionContaining() { List\u0026lt;String\u0026gt; catNames = asList(\u0026#34;Phibi\u0026#34;, \u0026#34;Monica\u0026#34;, \u0026#34;Stilla\u0026#34;); assertThat(catNames, hasItems(\u0026#34;Monica\u0026#34;, \u0026#34;Phibi\u0026#34;)); assertThat(catNames, not(hasItems(\u0026#34;Melih\u0026#34;))); } } Note the chaining of calls performed with not(hasItems(\u0026quot;Melih\u0026quot;)); this allows us to mix and match these operators to build complex and readable assertions.\nhasSize() It checks the size of a given collection against an expected size.\npublic class CatTest { @Test public void testCollectionSize() { List\u0026lt;String\u0026gt; catNames = asList(\u0026#34;Phibi\u0026#34;, \u0026#34;Monica\u0026#34;); assertThat(catNames, hasSize(2)); } } hasProperty() This is used to check if an object\u0026rsquo;s property (field) matches a certain condition.\nFirstly, let\u0026rsquo;s modify our Cat class to have a name property with a getter and a constructor.\npublic class Cat { private String name; public Cat(String name) { this.name = name; } public String getName() { return name; } } public class CatTest { @Test public void testBean() { Cat cat = new Cat(\u0026#34;Mimi\u0026#34;); assertThat(cat, hasProperty(\u0026#34;name\u0026#34;, equalTo(\u0026#34;Mimi\u0026#34;))); } } equalToIgnoringCase() It checks if two strings match regardless of upper or lower case letters.\npublic class CatTest { @Test public void testStringEquality() { String catNameInCaps = \u0026#34;RACHEL\u0026#34;; assertThat(catNameInCaps, equalToIgnoringCase(\u0026#34;rachel\u0026#34;)); } } containsString() It checks if a string is contained in another string.\npublic class CatTest { @Test public void testStringContains() { String catName = \u0026#34;Joey The Cute\u0026#34;; assertThat(catName, containsString(\u0026#34;Cute\u0026#34;)); } } Assumptions We can use Assumptions when 
we want a test to run only under certain conditions (i.e. under certain assumptions). If the condition isn\u0026rsquo;t met, JUnit simply ignores the test. This is different from using an assert(), for example, which will cause the test to fail if the condition isn\u0026rsquo;t met.\nSay we have a test that we only want to run on Windows machines:\nLet\u0026rsquo;s start by introducing a GoldFish class:\npublic class GoldFish { private String name; private int age; // constructor and getters } @Test public void testBooleanAssumption() { GoldFish goldFish = new GoldFish(\u0026#34;Windows Jelly\u0026#34;, 1); assumeTrue(System.getProperty(\u0026#34;os.name\u0026#34;).contains(\u0026#34;Windows\u0026#34;)); assertThat(goldFish.getName(), equalToIgnoringCase(\u0026#34;Windows Jelly\u0026#34;)); } If we are running on a Linux machine, we will see that the test is ignored with the following message:\norg.opentest4j.TestAbortedException: Assumption failed: assumption is not true Had we used the assert() like this:\n@Test public void testBooleanAssert() { GoldFish goldFish = new GoldFish(\u0026#34;Windows Jelly\u0026#34;, 1); assert(System.getProperty(\u0026#34;os.name\u0026#34;).contains(\u0026#34;Windows\u0026#34;)); assertThat(goldFish.getName(), equalToIgnoringCase(\u0026#34;Windows Jelly\u0026#34;)); } our test would instead fail with an assertion error:\njava.lang.AssertionError Exception Testing As part of our test scenarios, we might want to test that in certain conditions a particular exception is thrown. Let\u0026rsquo;s check out how JUnit 5 helps us do this.\nJUnit 5 offers us the assertThrows() method, which takes the class of the exception we are expecting (or one of its superclasses) as a first parameter. 
As for the second parameter, it expects an Executable that contains the code under test.\nLet\u0026rsquo;s add a method to calculate the speed of GoldFish:\npublic class GoldFish { private String name; private int age; public int calculateSpeed() { if (age == 0){ throw new RuntimeException(\u0026#34;This will fail :((\u0026#34;); } return 10 / age; } // constructor and getters } Our test would look like this:\npublic class GoldFishTest { @Test public void testException() { GoldFish goldFish = new GoldFish(\u0026#34;Goldy\u0026#34;, 0); RuntimeException exception = assertThrows(RuntimeException.class, goldFish::calculateSpeed); assertThat(exception.getMessage(), equalToIgnoringCase(\u0026#34;This will fail :((\u0026#34;)); } } Parameterized Testing The idea behind parameterized testing is to execute the same test but with different parameters. JUnit 5 offers a rich API to handle this; however, we are going to briefly explore the most common way to do it.\nWe first need to add this dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.junit.jupiter\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;junit-jupiter-params\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.9.0\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; public class GoldFishTest { @ParameterizedTest @MethodSource(\u0026#34;provideFishes\u0026#34;) public void parameterizedTest(GoldFish goldFish) { assertTrue(goldFish.getAge() \u0026gt;= 1); } private static Stream\u0026lt;Arguments\u0026gt; provideFishes() { return Stream.of( Arguments.of(new GoldFish(\u0026#34;Browny\u0026#34;, 1)), Arguments.of(new GoldFish(\u0026#34;Greeny\u0026#34;, 2)) ); } } Notice that the name provided to the @MethodSource annotation should match the name of the provider method, provideFishes() in this case. 
This will run the same test twice but with two different parameter values.\nConclusion We have explored JUnit 5 and its components, and we have also covered the most important test annotations. We learned how to use the Assertions API and assert thrown exceptions.\n","date":"September 11, 2022","image":"https://reflectoring.io/images/stock/0125-tools-1200x628-branded_hu82ff8da5122675223ceb88a08f293300_139357_650x0_resize_q90_box.jpg","permalink":"/junit5/","title":"JUnit 5 by Examples"},{"categories":["Node"],"contents":"Have you ever wanted to perform a specific task on your application server at specific times without physically running it yourself? Or perhaps we\u0026rsquo;d rather spend our time on more important tasks than remembering to periodically clear or move data from one part of the server to another.\nWe can use cron job schedulers to automate such tasks.\nCron job scheduling is a common practice in modern applications. The original cron is a daemon, which means that it only needs to be started once, and will lay dormant until it is required. Another example of a daemon is the web server. The web server remains idle until a request for a web page is made.\n Example Code This article is accompanied by a working code example on GitHub. Using node-cron In this article, we\u0026rsquo;ll use the node-cron library to schedule cron jobs in several Node.js demo applications. node-cron is a Node module that is made available by npm.\nTo schedule jobs using node-cron, we need to invoke the cron.schedule() method.\ncron.schedule() Method Syntax The signature of the cron.schedule() method looks like this:\ncron.schedule(expression, function, options); Let\u0026rsquo;s explore each of the arguments.\nCron Expression This is the first argument of the cron.schedule() method. This expression is used to specify the schedule on which a cron job is to be executed. 
The expression is sometimes also called a \u0026ldquo;crontab\u0026rdquo; expression, from the command-line tool crontab, which can schedule multiple jobs in one \u0026ldquo;crontab\u0026rdquo; file.\nThe cron expression is made up of 6 elements, separated by a space.\nHere\u0026rsquo;s a quick reference to the cron expression, indicating what each element represents:\n ┌────────────── second (0 - 59) (optional) │ ┌──────────── minute (0 - 59) │ │ ┌────────── hour (0 - 23) │ │ │ ┌──────── day of the month (1 - 31) │ │ │ │ ┌────── month (1 - 12) │ │ │ │ │ ┌──── day of the week (0 - 6) (0 and 7 both represent Sunday) │ │ │ │ │ │ │ │ │ │ │ │ * * * * * * We can replace each asterisk with one of the following characters so that the expression describes the time we want the job to be executed:\n *: An asterisk means \u0026ldquo;every interval\u0026rdquo;. For example, if the asterisk symbol is in the \u0026ldquo;month\u0026rdquo; field, it means the task is run every month. ,: The comma allows us to specify a list of values for repetition. For example, if we have 1, 3, 5 in the \u0026ldquo;month\u0026rdquo; field, the task will run in months 1, 3, and 5 (January, March, and May). -: The hyphen allows us to specify a range of values. If we have 1-5 in the \u0026ldquo;day of the week\u0026rdquo; field, the task will run every weekday (from Monday to Friday). /: The slash allows us to specify expressions like \u0026ldquo;every xth interval\u0026rdquo;. If we have */4 in the \u0026ldquo;hour\u0026rdquo; field, it means the action will be performed every 4 hours.  The \u0026ldquo;seconds\u0026rdquo; element can be left out. 
In this case, the cron expression will only consist of 5 elements and the first element describes the minutes and not the seconds.\nIf we\u0026rsquo;re unsure about manually writing cron expressions, we can use free tools like Crontab Generator or Crontab.guru to generate a cron expression for us.\nCron Function This is the second argument of the cron.schedule() method. This argument is the function that will be executed every time the cron expression triggers.\nWe can do whatever we want in this function. We can send an email, make a database backup, or download data.\nCron Options In the third argument of the cron.schedule() method we can provide some options. This argument is optional.\nHere is an example of what the cron options object looks like:\n{ scheduled: false, timezone: \u0026#34;America/Sao_Paulo\u0026#34; } The scheduled option here is a boolean to indicate whether the job is enabled or not (default is true).\nWith the timezone option we can define the timezone in which the cron expression should be evaluated.\nSetting Up a Node.js Application Let\u0026rsquo;s set up a Node.js application to play around with node-cron.\nTo begin, we create a new folder:\nmkdir node-cron-demo Next, we change into the new project\u0026rsquo;s directory:\ncd node-cron-demo We will need to create a file index.js here. This is where we\u0026rsquo;ll be writing all our code:\ntouch index.js Run the command below to initialize the project. This will generate a package.json file which can be used to keep track of all dependencies installed in our project.\nnpm init -y Next, we will install node-cron, nodemailer, and express, which we will use later in this article.\nnpm install node-cron nodemailer express Implementing Cron Jobs with node-cron To demonstrate the functionality of the node-cron library, we will build 4 sample applications using Node.js.\n1. Scheduling a Simple Task with node-cron Any task of our choosing can be automated and run at a specific time using cron job schedulers.
In this section, we\u0026rsquo;ll write a simple function that logs to the terminal at our specified time.\nInput the following code into the index.js file to create our simple task scheduler:\nconst cron = require(\u0026#34;node-cron\u0026#34;); const express = require(\u0026#34;express\u0026#34;); const app = express(); cron.schedule(\u0026#34;*/15 * * * * *\u0026#34;, function () { console.log(\u0026#34;---------------------\u0026#34;); console.log(\u0026#34;running a task every 15 seconds\u0026#34;); }); app.listen(3000, () =\u0026gt; { console.log(\u0026#34;application listening.....\u0026#34;); }); In the code block above we are making a simple log to the application\u0026rsquo;s terminal.\nRun node index.js in the terminal and you\u0026rsquo;ll get the following output:\napplication listening..... --------------------- running a task every 15 seconds 2. Scheduling Email Using node-cron Emailing is a common feature of modern applications. Cron jobs can be used to accomplish this. For instance, a job schedule can be set to automatically send users an email each month with the most recent information from a blog or a product.\nIn this example, we will be using Google\u0026rsquo;s email service provider Gmail. If you have a Gmail account, insert it in the code below to test out our newly created email scheduler.\nTo use nodemailer with Gmail, you must first create an app password for Gmail to allow third-party access.\nSet up your Gmail app password following these steps:\n First head to your Gmail Account. Click on the profile image to the right. Click on Manage your Google Account, then click Security. In the Signing in to Google section select the App password option. If the App password option is unavailable, set up 2-Step Verification for your account. Select the app (mail) and the current device we want to generate the app password for. Click Generate. Copy the generated 16-character code from the yellow bar on your device.  
To create our email scheduler application insert the following code into the index.js file:\nconst express = require(\u0026#34;express\u0026#34;); const cron = require(\u0026#34;node-cron\u0026#34;); const nodemailer = require(\u0026#34;nodemailer\u0026#34;); const app = express(); // send email at minute 1 of every hour cron.schedule(\u0026#34;1 * * * *\u0026#34;, function () { mailService(); }); function mailService() { let mailTransporter = nodemailer.createTransport({ service: \u0026#34;gmail\u0026#34;, auth: { user: \u0026#34;\u0026lt;your-email\u0026gt;@gmail.com\u0026#34;, // use generated app password for gmail  pass: \u0026#34;***********\u0026#34;, }, }); // setting credentials  let mailDetails = { from: \u0026#34;\u0026lt;your-email\u0026gt;@gmail.com\u0026#34;, to: \u0026#34;\u0026lt;user-email\u0026gt;@gmail.com\u0026#34;, subject: \u0026#34;Test Mail using Cron Job\u0026#34;, text: \u0026#34;Node.js Cron Job Email Demo Test from Reflectoring Blog\u0026#34;, }; // sending email  mailTransporter.sendMail(mailDetails, function (err, data) { if (err) { console.log(\u0026#34;error occurred\u0026#34;, err.message); } else { console.log(\u0026#34;---------------------\u0026#34;); console.log(\u0026#34;email sent successfully\u0026#34;); } }); } app.listen(3000, () =\u0026gt; { console.log(\u0026#34;application listening.....\u0026#34;); }); In the above code we are using the nodemailer and node-cron modules we installed earlier. The nodemailer dependency allows us to send e-mails from our Node.js application using any email service of our choice.\nWith the expression 1 * * * *, we scheduled our mail to be sent at the first minute of every hour using Gmail.\nRun the script node index.js and you\u0026rsquo;ll get the following output:\napplication listening..... --------------------- email sent successfully Check your inbox to confirm the email is sent.\n3. 
Monitoring Server Resources Over Time Cron jobs can be used to schedule logging tasks and monitor server resources in our Node.js applications. Let\u0026rsquo;s say something happened, like a network delay or a warning message. We can schedule a cron job to log at a specific time or interval to track our server status. This can act as automatic monitoring over time.\nIn this section, we will log the application\u0026rsquo;s server resources in CSV format. This makes our log data more machine-readable. The generated .csv file can be imported into a spreadsheet to create graphs for more advanced use cases.\nInsert the following code into the index.js file to generate the .csv file at the scheduled time:\nconst process = require(\u0026#34;process\u0026#34;); const fs = require(\u0026#34;fs\u0026#34;); const os = require(\u0026#34;os\u0026#34;); const cron = require(\u0026#34;node-cron\u0026#34;); const express = require(\u0026#34;express\u0026#34;); const app = express(); // setting a cron job for every 15 seconds cron.schedule(\u0026#34;*/15 * * * * *\u0026#34;, function () { let heap = process.memoryUsage().heapUsed / 1024 / 1024; let date = new Date().toISOString(); const freeMemory = Math.round((os.freemem() * 100) / os.totalmem()) + \u0026#34;%\u0026#34;; // date | heap used | free memory  let csv = `${date}, ${heap}, ${freeMemory}\\n`; // storing log in .csv file  fs.appendFile(\u0026#34;demo.csv\u0026#34;, csv, function (err) { if (err) throw err; console.log(\u0026#34;server details logged!\u0026#34;); }); }); app.listen(3000, () =\u0026gt; { console.log(\u0026#34;application listening.....\u0026#34;); }); In the code block above, we are using the Node.js fs module. fs enables interaction with the file system, allowing us to create a log file.\nThe os module gives access to information about the operating system, and the process module provides details about the current Node.js process.\nWe are using process.memoryUsage().heapUsed. 
The heapUsed value refers to the memory used by V8 in our application, os.freemem() shows the available RAM, and os.totalmem() shows the total memory capacity.\nThe log is saved in .csv format, with the date/time in the first column, memory usage in the second, and the memory available in the third. These data are recorded and saved in a generated demo.csv file at 15-second intervals.\nRun the script: node index.js\nLet the application run for a while, and we will notice a file named demo.csv is generated with content similar to the following:\n2022-08-31T00:19:45.912Z, 8.495856, 10% 2022-08-31T00:20:00.027Z, 7.083216, 10% 2022-08-31T00:20:15.133Z, 7.139864, 9% 2022-08-31T00:20:30.219Z, 7.188568, 12% 2022-08-31T00:20:45.414Z, 7.23724, 11% 4. Deleting / Refreshing a Log File Consider a scenario where we are working with a large application that records the status of all activity in the log file. The log status file would eventually grow large and out of date. We can routinely delete the log file from the server. For instance, we could routinely delete the log status file using a job scheduler on the 25th of every month.\nIn this example, we will be deleting the log status file that was previously created:\nconst express = require(\u0026#34;express\u0026#34;); const cron = require(\u0026#34;node-cron\u0026#34;); const fs = require(\u0026#34;fs\u0026#34;); const app = express(); // remove the demo.csv file on the 25th day of every month cron.schedule(\u0026#34;0 0 25 * *\u0026#34;, function () { console.log(\u0026#34;---------------------\u0026#34;); console.log(\u0026#34;deleting logged status\u0026#34;); fs.unlink(\u0026#34;./demo.csv\u0026#34;, err =\u0026gt; { if (err) throw err; console.log(\u0026#34;deleted successfully\u0026#34;); }); }); app.listen(3000, () =\u0026gt; { console.log(\u0026#34;application listening.....\u0026#34;); }); Notice the pattern used: 0 0 25 * *.\n minutes and hours as 0 and 0 (“00:00” - the start of the day). specific day of the month as 25. 
month or day of the week isn\u0026rsquo;t defined.  Now, run the script: node index.js\nOn the 25th of every month, your log status will be deleted with the following output:\napplication listening..... --------------------- deleting logged status deleted successfully To verify that the task is being executed, switch the cron expression to a shorter time interval, like every minute.\nChecking the application directory, we will notice the demo.csv file has been deleted.\nConclusion This article uses various examples to demonstrate how to schedule tasks on the Node.js server and the concept of using the node-cron schedule method to automate and schedule repetitive or future tasks. We can use this idea in both current and upcoming projects. The source code for each example can be found here.\nThere are other job scheduler tools accessible to Node.js applications such as node-schedule, Agenda, Bree, Cron, and Bull. Be sure to assess each one to find the best fit for your specific project.\n","date":"September 5, 2022","image":"https://reflectoring.io/images/stock/0043-calendar-1200x628-branded_hu4de637414a60e632f344e01d7e13a994_98685_650x0_resize_q90_box.jpg","permalink":"/schedule-cron-job-in-node/","title":"Scheduling Jobs with Node.js"},{"categories":["Software Craft"],"contents":"Rolling out new features is one of the most satisfying parts of our job. Finally, users will see the new feature we\u0026rsquo;ve worked so hard on!\nFeature flags (or feature toggles) are basically if/else blocks in our application code that control whether or not a feature is available for a specific user. With feature flags, we can decide who gets to see which feature when.\nThis gives us fine-grained control over the rollout of new features. 
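The if/else idea can be sketched in a few lines. This is only an illustration: the in-memory Map and the flag and function names are made up, standing in for a real database or feature management service.

```javascript
// A feature flag is just a stored boolean that gates an if/else block.
// The Map stands in for a database or feature management service.
const flags = new Map([["new-checkout", false]]);

function isEnabled(flagName) {
  return flags.get(flagName) === true;
}

function checkout() {
  if (isEnabled("new-checkout")) {
    return "new checkout flow"; // the feature behind the flag
  }
  return "old checkout flow"; // the existing behavior
}

// Flipping the stored value rolls the feature out (or kills it)
// without a new deployment:
flags.set("new-checkout", true);
console.log(checkout()); // "new checkout flow"
```

Because the check reads the flag state on every call, flipping the stored value takes effect immediately, with no redeployment.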
We can decide to activate it only for ourselves to test it or roll it out to all users at once.\nThis article discusses different rollout strategies that are possible when using feature flags and when they make sense.\nDark Launch The first rollout strategy is the \u0026ldquo;dark launch\u0026rdquo;:\nWe deploy a new version of our app that contains the new feature, but the feature flag for this feature is disabled for now. We can activate the feature for a subset of our users (or for all users) later.\nWhy should we deploy a deactivated feature? Why not just deploy the new feature without a feature flag? It would automatically be available for every user after a successful deployment, after all.\nThe reason that we want to use a feature flag for this is that we want to decouple the feature rollout from the deployment of the application.\nThere are a million things that can go wrong with a deployment. Someone might have introduced a bug that prevents the application from starting up. Or the Kubernetes cluster has a hiccup (probably due to a misplaced whitespace character in some YAML file).\nSimilarly, there are a million things that can go wrong with releasing a new feature. We might have forgotten to cover an edge case, overlooked a security hole, or users are using the feature in ways that we don\u0026rsquo;t want them to.\nWhen a deployment fails, we want to know quickly what has caused the deployment to fail and fix it. If it\u0026rsquo;s not fixed quickly, the deployment pipeline is blocked and we can\u0026rsquo;t release new features at all!\nYou can imagine that the number of potential root causes for a deployment failure is much lower if we deploy new features behind a (deactivated) feature flag. 
The deactivated code can\u0026rsquo;t be the reason for the deployment failure and we can ignore the new feature in the investigation for the root cause.\nSo, we\u0026rsquo;ll want to release every new feature in a \u0026ldquo;dark launch\u0026rdquo; before we enable the feature for any users.\nGlobal Rollout Once a feature is deployed (but inactive), the simplest rollout strategy is to roll out a feature to all users at the same time:\nOnce the deployment of the new feature has been successful, we can enable the feature flag and all users can enjoy the new feature.\nWe can then monitor the usage of the new feature with our logs or analytics dashboards to see if users are adopting the feature and how they\u0026rsquo;re using it. The release of the feature has been completely independent of the app\u0026rsquo;s deployment!\nIssues that might have come up during the deployment will have been sorted out already when we\u0026rsquo;re rolling out the feature. If any issues come up after rolling out the feature, we know that the new feature is a likely candidate for causing them and we can investigate in this direction.\nIf issues with a new feature are severe enough, we might decide to deactivate the feature again, using the feature flag as a \u0026ldquo;kill switch\u0026rdquo;.\nKill Switch Although technically the opposite of a rollout strategy, a kill switch is a valuable tool to have in case of emergency:\nSay we have successfully rolled out a feature via a global rollout. All users are enjoying the new feature. Then, we realize that the new feature has introduced a new security vulnerability that allows hackers to siphon away our users\u0026rsquo; data. A nightmare (not only) for the developer who built that feature!\nIf the feature is behind a feature flag, we can just toggle the flag to deactivate the feature and our users' data is safe again. 
Users might complain that they can\u0026rsquo;t use the new feature anymore, but that\u0026rsquo;s better than users complaining about being blackmailed by a hacker with access to their data.\nThere are lots of other potential reasons for killing a feature:\n the feature doesn\u0026rsquo;t have the expected effect, the feature is not working properly, users are unhappy with the feature, \u0026hellip;  In any case, we can use a feature flag as a kill switch to deactivate the feature temporarily or permanently.\nThe \u0026ldquo;kill switch\u0026rdquo; option illustrates another advantage of decoupling deployments from feature rollout: if the security vulnerability from above had been deployed without a feature flag, we would have had to roll back the application to before the vulnerability was introduced and redeploy it.\nIf a deployment takes 30 minutes, the security vulnerability would still be there for 30 minutes until the deployment was successful. As if that isn\u0026rsquo;t bad enough, all changes to the codebase that have been made after that vulnerable feature was added would not be available in the new deployment, because we have rolled back the code! Quite a blast radius!\nPercentage Rollout Even though activating or deactivating a feature for all users at the same time is quite powerful already, feature flags give us enough control for more sophisticated rollout strategies.\nA common rollout strategy is a percentage rollout:\nInstead of activating a feature for all users at once, we activate it in increments for a growing percentage of the users. We might decide to go in 25% increments as in the image above or we might be a bit unsure about the feature and activate it for only 5% of the users for starters to see if something goes wrong. 
It\u0026rsquo;s better if something goes wrong for only 5% of the users than for all users, after all.\nAfter the feature has been active for a couple of days or so without issues, we feel confident enough to enable it for a bigger percentage of the users, until, ultimately, we enable it for all users.\nHere\u0026rsquo;s another advantage of feature flags: feature flags give us confidence in rolling out features.\nEven a rollout to only 5% of users can be a successful rollout! Knowing that only 5% of the users are affected if something goes wrong - and that we can turn it off at any time with the flick of a switch - gives us enormous peace of mind. If everything goes as planned, we feel confident in rolling out the feature flag to a bigger audience.\nImplementing a dark launch, global rollout, and kill switch is rather easy. We just need an if/else block in our code that checks a boolean value in a database that we can switch to true or false at any time.\nWith a percentage rollout, however, we enter a territory where it\u0026rsquo;s not as easy to implement anymore. We can\u0026rsquo;t just evaluate the feature flag to true in 25% of the cases, because that would mean that the same user would see the feature 25% of the times they use the app and not see the feature the other 75%!\nWe need to split the user base into two fixed cohorts: one with 25% of the users and the other with the rest of the users. Then, we always evaluate the feature flag to true when the user is part of the first cohort and always to false if the user is part of the second cohort.\nThis requires quite a sophisticated management of feature flag state per user and is not so easily implemented. This is why we should take advantage of a feature management platform like LaunchDarkly to implement feature flags.\nCanary Launch Similar to a percentage rollout is a canary launch:\nInstead of rolling out to all users at once, we only roll out to a small subset of users. 
This is very similar to a percentage rollout - the line between a canary launch and a percentage rollout is blurred.\nThe term “canary launch” comes from the rather morbid practice of taking canary birds down into mines to act as an early warning of poisonous and odorless gases. Due to their faster oxygen consumption, the birds would drop dead before any human would notice a change. When a bird died in a mine, the miners would quickly evacuate.\nWhen we roll out a feature to a small subset of users, these users act as our \u0026ldquo;canaries\u0026rdquo; and will tell us if something is wrong (and hopefully not drop dead!). When nobody complains after a time, and the logs and metrics look good, we can enable the feature for the rest of the users.\nA percentage rollout with a small starting percentage of, say, 5% may act as a canary launch. Every new feature we deploy would potentially target a different 5% of the user base, however, and we wouldn\u0026rsquo;t know how these users would react to any problems that we might introduce with new features.\nIt would be nice if we could roll out every new feature to the same group of \u0026ldquo;early adopter\u0026rdquo; users. This group of users is hand-picked for their early adopter mindset. We might know these users from interviews or support cases and we expect them to be understanding of any issues in an early version of a new feature.\nSo, instead of a percentage rollout, we might define a fixed cohort of early adopter users and use them in a canary launch before rolling out to the rest of the users.\nRing Deployment Taking the idea of a canary launch to the next level is a strategy called \u0026ldquo;ring deployment\u0026rdquo;:\nInstead of only defining one cohort for early adopter users that are comfortable with acting as \u0026ldquo;canaries\u0026rdquo;, we define multiple cohorts with ever-increasing impact. 
If we visualize these cohorts as rings, we see why it\u0026rsquo;s called \u0026ldquo;ring deployment\u0026rdquo;.\nThen, we release a feature to one \u0026ldquo;ring\u0026rdquo; of users after another, starting with the innermost ring. We control the release with a feature flag, which we enable for one cohort after another.\nWe might decide that the innermost cohort consists only of friendly users within our own organization, for example. The next cohort might consist of external early adopters - the users we have already talked about in the \u0026ldquo;canary launch\u0026rdquo; section. The third cohort might be the rest of all users. We can have as many rings as make sense in our specific case.\nYou might wonder why it\u0026rsquo;s called ring deployment when feature flags should actually be about decoupling deployment from rolling out a feature. Indeed, a better name in this context would be \u0026ldquo;ring rollout\u0026rdquo;.\nThe term \u0026ldquo;ring deployment\u0026rdquo; is widely used, however, even when it\u0026rsquo;s not about deployment, but rather about feature rollout (which shouldn\u0026rsquo;t be coupled with a deployment). It stems from a time when feature flags weren\u0026rsquo;t widely adopted, and a new version of an application was actually deployed in multiple different versions to achieve the same effect. The network infrastructure would then route requests from users in one \u0026ldquo;ring\u0026rdquo; to one version, and requests from users in another \u0026ldquo;ring\u0026rdquo; to another version of the app. The term \u0026ldquo;ring deployment\u0026rdquo; stuck, and so I\u0026rsquo;m using it here for better recognition.\nA/B Test Sometimes, we\u0026rsquo;re not certain about how a certain feature would perform, so we would like to experiment with different options. 
In such a case, we can perform an A/B test with the help of a feature flag:\nIn an A/B test, we have two or more different versions of a feature and we want to compare their performance. \u0026ldquo;Performance\u0026rdquo; might mean technical performance (for example how fast the system responds) or business performance (for example how a feature impacts conversion rate). In any case, it\u0026rsquo;s measured by a metric we define.\nTo compare the performance, we want to show one version of the feature to one group of customers and another version of the feature to another group of customers.\nWith feature flags, we can achieve just that. We can either use one feature flag per \u0026ldquo;version\u0026rdquo; of the feature that we want to compare and then enable those features for a different group of users each.\nOr, with a feature management platform like LaunchDarkly, we can create a single feature flag with multiple \u0026ldquo;variations\u0026rdquo; and define a target cohort for each of the variations. 
LaunchDarkly also has an \u0026ldquo;experimentation\u0026rdquo; feature that will show you the performance metrics you chose for each of the feature\u0026rsquo;s variations.\nNot all features lend themselves to experiments, but if you have a case where you can compare a metric between different feature versions, an A/B test with feature flags is a very powerful tool for data-driven decisions.\nManaging the Feature State As mentioned in the article above, if you want to use rollout strategies that are more advanced than a global rollout, you probably don\u0026rsquo;t want to implement a solution for managing the state per user and feature flag yourself, but instead rely on a feature management platform like LaunchDarkly which has solutions for all the rollout strategies mentioned in this article and more.\nIf you\u0026rsquo;re interested in starting with a simple solution, however, you might enjoy the comparison between Togglz and LaunchDarkly. If you\u0026rsquo;re interested in tips and tricks around feature flags with Spring Boot, you might enjoy the article about feature flags with Spring Boot.\n","date":"August 29, 2022","image":"https://reflectoring.io/images/stock/0038-package-1200x628-branded_hu7e104c3cc9032be3d32f9334823f6efc_80797_650x0_resize_q90_box.jpg","permalink":"/rollout-strategies-with-feature-flags/","title":"Rollout Strategies with Feature Flags"},{"categories":["Spring"],"contents":"Cross-Origin Resource Sharing (CORS) is an HTTP-header-based mechanism that allows servers to explicitly allowlist certain origins and thereby relax the same-origin policy for them.\nThis is required since browsers by default apply the same-origin policy for security. By implementing CORS in a web application, a webpage can request additional resources from other domains and load them into the browser.\nThis article will focus on the various ways in which CORS can be implemented in a Spring-based application. 
To understand how CORS works in detail, please refer to this excellent introductory article.\n Example Code This article is accompanied by a working code example on GitHub. Overview of CORS-Specific HTTP Response Headers The CORS specification defines a set of response headers returned by the server that will be the focus of the subsequent sections.\n   Response Headers Description     Access-Control-Allow-Origin Comma-separated list of whitelisted origins or \u0026ldquo;*\u0026rdquo;.    Access-Control-Allow-Methods Comma-separated list of HTTP methods the web server allows for cross-origin requests.   Access-Control-Allow-Headers Comma-separated list of HTTP headers the web server allows for cross-origin requests.   Access-Control-Expose-Headers Comma-separated list of HTTP headers that the client script can consider safe to display.   Access-Control-Allow-Credentials If the browser makes a request to the server by passing credentials (in the form of cookies or authorization headers), its value is set to true.   Access-Control-Max-Age Indicates how long the results of a preflight request can be cached.    Setting up a Sample Client Application We will use a simple Angular application that will call the REST endpoints that we can inspect using browser developer tools. You can check out the source code on GitHub.\nng serve --open We should be able to start the client application successfully.\nSetting up a Sample Server Application We will use a sample Spring-based application with GET and POST requests that the client application can call. Note that you will find two separate applications: one that uses Spring MVC (REST) and the other that uses the Spring Reactive stack.\nFor simplicity, the CORS configuration across both applications is the same and the same endpoints have been defined. The two servers start on different ports, 8091 and 8092.\nThe Maven Wrapper bundled with the application will be used to start the service. 
You can check out the Spring REST source code and the Spring Reactive source code.\nmvnw clean verify spring-boot:run (for Windows) ./mvnw clean verify spring-boot:run (for Linux) Once the Spring application successfully starts, the client application should be able to successfully load data from the server.\nCall to the Spring REST server:\nCall to the Spring Reactive server:\nUnderstanding @CrossOrigin Attributes In the Spring Boot app, we\u0026rsquo;re using the @CrossOrigin annotation to enable cross-origin calls. Let\u0026rsquo;s first understand the attributes that @CrossOrigin supports.\n   Attributes Description     origins Allows you to specify a list of allowed origins. By default, it allows all origins. The attribute value will be set in the Access-Control-Allow-Origin header of both the preflight response and the actual response.   Example Usage:  @CrossOrigin(origins = \u0026quot;http://localhost:8080\u0026quot;)  @CrossOrigin(origins = {\u0026quot;http://localhost:8080\u0026quot;, \u0026quot;http://testserver:8087\u0026quot;})   allowedHeaders Allows you to specify a list of headers that will be accepted when the browser makes the request. By default, any headers will be allowed. The value specified in this attribute is used in Access-Control-Allow-Headers in the preflight response.   Example Usage:  @CrossOrigin(allowedHeaders = {\u0026quot;Authorization\u0026quot;, \u0026quot;Origin\u0026quot;})   exposedHeaders List of headers that are set in the actual response header. If not specified, only the safelisted headers will be considered safe to be exposed by the client script.   Example Usage:  @CrossOrigin(exposedHeaders = {\u0026quot;Access-Control-Allow-Origin\u0026quot;,\u0026quot;Access-Control-Allow-Credentials\u0026quot;})   allowCredentials When credentials are required to invoke the API, set Access-Control-Allow-Credentials header value to true. In case no credentials are required, omit the header.   
Example Usage:  @CrossOrigin(allowCredentials = true)   maxAge Default maxAge is set to 1800 seconds (30 minutes). Indicates how long the preflight responses can be cached.   Example Usage:  @CrossOrigin(maxAge = 300)    What If We Do Not Configure CORS? Consider that our Spring Boot application has not been configured for CORS support. If we try to call it from our Angular application running on port 4200, we see this error on the developer console:\nAccess to XMLHttpRequest at http://localhost:8091 from origin http://localhost:4200 has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource This is because even though both applications are served from localhost, they are not considered the same origin because the port is different.\nConfiguring CORS in a Spring Web MVC Application The initial setup created with Spring Initializr holds all the required CORS dependencies. No external dependencies need to be added. Refer to this sample Spring Web Application project.\nDefining @CrossOrigin at the Class Level @CrossOrigin(maxAge = 3600) @RestController @RequestMapping(\u0026#34;cors-library/managed/books\u0026#34;) public class LibraryController {} Here since we have defined @CrossOrigin:\n All @RequestMapping methods (and methods using the shorthand annotations @GetMapping, @PostMapping, etc.) in the controller will accept cross-origin requests. Since maxAge = 3600, all pre-flight responses will be cached for 60 mins.  
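As noted above, two URLs share an origin only when scheme, host, and port all match. That check can be sketched in a few lines (an illustration of the browser's rule, not Spring's implementation):

```javascript
// Illustrative sketch of the browser's same-origin check:
// two URLs share an origin only if scheme, host, and port all match.
function sameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  return (
    ua.protocol === ub.protocol &&
    ua.hostname === ub.hostname &&
    ua.port === ub.port
  );
}

console.log(sameOrigin("http://localhost:4200", "http://localhost:8091")); // false: ports differ
console.log(sameOrigin("http://localhost:4200/a", "http://localhost:4200/b")); // true: the path does not matter
```

This is exactly why the Angular client on port 4200 and the Spring server on port 8091 count as different origins, even though both run on localhost.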
Defining @CrossOrigin at the Method Level @CrossOrigin(origins = \u0026#34;http://localhost:4200\u0026#34;, allowedHeaders = \u0026#34;Requestor-Type\u0026#34;, exposedHeaders = \u0026#34;X-Get-Header\u0026#34;) @GetMapping public ResponseEntity\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt; getBooks(@RequestParam String type) { HttpHeaders headers = new HttpHeaders(); headers.set(\u0026#34;X-Get-Header\u0026#34;, \u0026#34;ExampleHeader\u0026#34;); return ResponseEntity.ok().headers(headers).body(libraryService.getAllBooks(type)); } This will have the following effects:\n Only requests coming from origin http://localhost:4200 will be accepted. If we expect only certain headers to be accepted, we can specify those headers in the allowedHeaders attribute. If the Requestor-Type header is not sent by the browser, the request will not be processed. If we set certain response headers, for the client application to be able to use them, we need to explicitly set the list of response headers to be exposed using the exposedHeaders attribute.  
Combination of @CrossOrigin at Class and Method Levels @CrossOrigin(maxAge = 3600) @RestController @RequestMapping(\u0026#34;cors-library/managed/books\u0026#34;) public class LibraryController { private static final Logger log = LoggerFactory.getLogger(LibraryController.class); private final LibraryService libraryService; public LibraryController(LibraryService libraryService) { this.libraryService = libraryService; } @CrossOrigin(origins = \u0026#34;http://localhost:4200\u0026#34;, allowedHeaders = \u0026#34;Requestor-Type\u0026#34;) @GetMapping public ResponseEntity\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt; getBooks(@RequestParam String type) { HttpHeaders headers = new HttpHeaders(); headers.set(\u0026#34;X-Get-Header\u0026#34;, \u0026#34;ExampleHeader\u0026#34;); return ResponseEntity.ok().headers(headers).body(libraryService.getAllBooks(type)); } }  By defining the annotation at both class and method levels, its combined attributes (origins, allowedHeaders, maxAge) will be applied to the methods. In all the above cases we can define both global CORS configuration and local configuration using @CrossOrigin. For attributes that accept multiple values, a combination of global and local values will apply (i.e. they are merged). For attributes that accept only a single value, the local value will take precedence over the global one.  
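The merge behaviour can be pictured as follows (a conceptual sketch of the rule, not Spring's implementation; the class and method names are ours):

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of how class-level and method-level @CrossOrigin
// attributes combine: multi-valued attributes are merged, while a
// single-valued attribute defined at method level overrides the class level.
class CrossOriginMerge {
    static List<String> mergeMultiValued(List<String> classLevel, List<String> methodLevel) {
        List<String> merged = new ArrayList<>(classLevel);
        for (String value : methodLevel) {
            if (!merged.contains(value)) {
                merged.add(value);
            }
        }
        return merged;
    }

    static long resolveSingleValued(long classLevel, Long methodLevel) {
        // method-level value wins if present, otherwise the class-level value applies
        return methodLevel != null ? methodLevel : classLevel;
    }
}
```

So a class-level maxAge of 3600 combined with a method-level maxAge of 300 yields 300, while allowedHeaders lists from both levels are merged.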
Enabling CORS Globally Instead of adding CORS to each of the resources separately, we could define a common CORS configuration that would apply to all resources defined.\nHere, we will use a WebMvcConfigurer, which is a part of the Spring Web MVC library.\nBy overriding the addCorsMappings() method we will configure CORS for all URLs that are handled by Spring Web MVC.\nTo define the same configuration (as explained in the previous sections) globally, we will use the configuration parameters defined in application.yml to create a bean as defined below.\nThe properties defined in application.yml (allowed-origins, allowed-methods, max-age, allowed-headers, exposed-headers) are custom properties that map to the self-defined class Cors via @ConfigurationProperties(prefix = \u0026quot;web\u0026quot;).\nweb: cors: allowed-origins: \u0026#34;http://localhost:4200\u0026#34; allowed-methods: GET, POST, PATCH, PUT, DELETE, OPTIONS, HEAD max-age: 3600 allowed-headers: \u0026#34;Requestor-Type\u0026#34; exposed-headers: \u0026#34;X-Get-Header\u0026#34; @Bean public WebMvcConfigurer corsMappingConfigurer() { return new WebMvcConfigurer() { @Override public void addCorsMappings(CorsRegistry registry) { WebConfigProperties.Cors cors = webConfigProperties.getCors(); registry.addMapping(\u0026#34;/**\u0026#34;) .allowedOrigins(cors.getAllowedOrigins()) .allowedMethods(cors.getAllowedMethods()) .maxAge(cors.getMaxAge()) .allowedHeaders(cors.getAllowedHeaders()) .exposedHeaders(cors.getExposedHeaders()); } }; } CorsConfiguration defaults addMapping() returns a CorsRegistration object which applies a default CorsConfiguration if one or more methods (allowedOrigins, allowedMethods, maxAge, allowedHeaders, exposedHeaders) are not explicitly defined. 
Refer to the Spring library method CorsConfiguration.applyPermitDefaultValues() to understand the defaults applied.\n Configuring CORS in a Spring Webflux Application The initial setup is created with Spring Initializr and uses Spring Webflux, Spring Data R2DBC, and H2 Database. No external dependencies need to be added. Refer to this sample Spring Webflux project.\nCORS Configuration for Spring Webflux using @CrossOrigin Similar to Spring MVC, in Spring Webflux we can define @CrossOrigin at the class level or the method level. The same @CrossOrigin attributes described in the previous sections will apply. Also, when the annotation is defined at both class and method levels, its combined attributes will apply to the methods.\n@CrossOrigin(origins = \u0026#34;http://localhost:4200\u0026#34;, allowedHeaders = \u0026#34;Requestor-Type\u0026#34;, exposedHeaders = \u0026#34;X-Get-Header\u0026#34;) @GetMapping public ResponseEntity\u0026lt;Mono\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt;\u0026gt; getBooks(@RequestParam String type) { HttpHeaders headers = new HttpHeaders(); headers.set(\u0026#34;X-Get-Header\u0026#34;, \u0026#34;ExampleHeader\u0026#34;); return ResponseEntity.ok().headers(headers).body(libraryService.getAllBooks(type)); } Enabling CORS Configuration Globally in Spring Webflux To define CORS globally in a Spring Webflux application, we use the WebFluxConfigurer and override the addCorsMappings() method. 
Similar to Spring MVC, it uses a CorsConfiguration with defaults that can be overridden as required.\n@Bean public WebFluxConfigurer corsMappingConfigurer() { return new WebFluxConfigurer() { @Override public void addCorsMappings(CorsRegistry registry) { WebConfigProperties.Cors cors = webConfigProperties.getCors(); registry.addMapping(\u0026#34;/**\u0026#34;) .allowedOrigins(cors.getAllowedOrigins()) .allowedMethods(cors.getAllowedMethods()) .maxAge(cors.getMaxAge()) .allowedHeaders(cors.getAllowedHeaders()) .exposedHeaders(cors.getExposedHeaders()); } }; } Enabling CORS Using WebFilter The Webflux framework allows CORS configuration to be set globally via CorsWebFilter. We can use the CorsConfiguration object to set the required configuration and register CorsConfigurationSource to be used with the filter.\nHowever, by default, the CorsConfiguration in case of filters does not assign default configuration to the endpoints! Only the specified configuration can be applied.\nAnother option is to call CorsConfiguration.applyPermitDefaultValues() explicitly.\n@Bean public CorsWebFilter corsWebFilter() { CorsConfiguration corsConfig = new CorsConfiguration(); corsConfig.setAllowedOrigins(Arrays.asList(\u0026#34;http://localhost:4200\u0026#34;)); corsConfig.setMaxAge(3600L); corsConfig.addAllowedMethod(\u0026#34;*\u0026#34;); corsConfig.addAllowedHeader(\u0026#34;Requestor-Type\u0026#34;); corsConfig.addExposedHeader(\u0026#34;X-Get-Header\u0026#34;); UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); source.registerCorsConfiguration(\u0026#34;/**\u0026#34;, corsConfig); return new CorsWebFilter(source); } Enabling CORS with Spring Security If Spring Security is applied to a Spring application, CORS must be processed before Spring Security comes into action since preflight requests will not contain cookies and Spring Security will reject the request as it will determine that the user is not authenticated. 
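The reason becomes clearer with a sketch of what a preflight request looks like to a security filter (a conceptual illustration only; Spring Security's real filter chain is more involved, and the class name is ours):

```java
// Conceptual sketch: a CORS preflight is an OPTIONS request that carries
// an Origin and an Access-Control-Request-Method header, but no credentials.
// That is why CORS must be handled before authentication is enforced:
// otherwise the unauthenticated preflight would simply be rejected.
class PreflightCheck {
    static boolean isPreflight(String method, String originHeader, String requestMethodHeader) {
        return "OPTIONS".equals(method)
                && originHeader != null
                && requestMethodHeader != null;
    }
}
```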
Here the examples shown will demonstrate basic authentication.\nTo apply Spring Security, we will add the below dependency.\nMaven:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-security\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Gradle:\nimplementation \u0026#39;org.springframework.boot:spring-boot-starter-security\u0026#39; Spring Security Applied to Spring Web MVC Spring Security by default protects every endpoint. However, this would cause CORS errors since a browser\u0026rsquo;s OPTIONS preflight requests would be blocked. To make Spring Security bypass preflight requests, we need to add http.cors() to the HttpSecurity object as shown:\n@Configuration @EnableConfigurationProperties(BasicAuthConfigProperties.class) @EnableWebSecurity public class SecurityConfiguration extends WebSecurityConfigurerAdapter { private final BasicAuthConfigProperties basicAuth; public SecurityConfiguration(BasicAuthConfigProperties basicAuth) { this.basicAuth = basicAuth; } protected void configure(HttpSecurity http) throws Exception { http.cors(); } } To set up additional CORS configuration with Spring Security after bypassing preflight requests, we can configure CORS using the @CrossOrigin annotation:\n@CrossOrigin(maxAge = 3600, allowCredentials = \u0026#34;true\u0026#34;) @RestController @RequestMapping(\u0026#34;cors-library/managed/books\u0026#34;) public class LibraryController { private static final Logger log = LoggerFactory.getLogger(LibraryController.class); private final LibraryService libraryService; public LibraryController(LibraryService libraryService) { this.libraryService = libraryService; } @CrossOrigin(origins = \u0026#34;http://localhost:4200\u0026#34;, allowedHeaders = {\u0026#34;Requestor-Type\u0026#34;, \u0026#34;Authorization\u0026#34;}, exposedHeaders = \u0026#34;X-Get-Header\u0026#34;) @GetMapping public 
ResponseEntity\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt; getBooks(@RequestParam String type) { HttpHeaders headers = new HttpHeaders(); headers.set(\u0026#34;X-Get-Header\u0026#34;, \u0026#34;ExampleHeader\u0026#34;); return ResponseEntity.ok().headers(headers).body(libraryService.getAllBooks(type)); } } Or, we can create a CorsConfigurationSource bean:\n@Bean CorsConfigurationSource corsConfigurationSource() { CorsConfiguration configuration = new CorsConfiguration(); configuration.setAllowedOrigins(Arrays.asList(\u0026#34;http://localhost:4200\u0026#34;)); configuration.setAllowedMethods(Arrays.asList(\u0026#34;GET\u0026#34;,\u0026#34;POST\u0026#34;,\u0026#34;PATCH\u0026#34;, \u0026#34;PUT\u0026#34;, \u0026#34;DELETE\u0026#34;, \u0026#34;OPTIONS\u0026#34;, \u0026#34;HEAD\u0026#34;)); configuration.setAllowCredentials(true); configuration.setAllowedHeaders(Arrays.asList(\u0026#34;Authorization\u0026#34;, \u0026#34;Requestor-Type\u0026#34;)); configuration.setExposedHeaders(Arrays.asList(\u0026#34;X-Get-Header\u0026#34;)); configuration.setMaxAge(3600L); UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); source.registerCorsConfiguration(\u0026#34;/**\u0026#34;, configuration); return source; } Spring Security Applied to Spring Webflux In the case of Webflux, even with Spring Security in place, the preferred way of applying CORS configuration to incoming requests is to use the CorsWebFilter. 
We can disable the CORS integration with Spring security and instead integrate with CorsWebFilter by providing a CorsConfigurationSource:\n@Configuration @EnableWebFluxSecurity @EnableConfigurationProperties(BasicAuthConfigProperties.class) public class SecurityConfiguration { private final BasicAuthConfigProperties basicAuth; public SecurityConfiguration(BasicAuthConfigProperties basicAuth) { this.basicAuth = basicAuth; } @Bean public SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) { http.cors(cors -\u0026gt; cors.disable()) .securityMatcher(new PathPatternParserServerWebExchangeMatcher(\u0026#34;/**\u0026#34;)) .authorizeExchange() .anyExchange().authenticated().and() .httpBasic(); return http.build(); } @Bean public MapReactiveUserDetailsService userDetailsService() { UserDetails user = User.withDefaultPasswordEncoder() .username(basicAuth.getUsername()) .password(basicAuth.getPassword()) .roles(\u0026#34;USER\u0026#34;) .build(); return new MapReactiveUserDetailsService(user); } @Bean public CorsConfigurationSource corsConfiguration() { CorsConfiguration corsConfig = new CorsConfiguration(); corsConfig.applyPermitDefaultValues(); corsConfig.setAllowCredentials(true); corsConfig.addAllowedMethod(\u0026#34;GET\u0026#34;); corsConfig.addAllowedMethod(\u0026#34;PATCH\u0026#34;); corsConfig.addAllowedMethod(\u0026#34;POST\u0026#34;); corsConfig.addAllowedMethod(\u0026#34;OPTIONS\u0026#34;); corsConfig.setAllowedOrigins(Arrays.asList(\u0026#34;http://localhost:4200\u0026#34;)); corsConfig.setAllowedHeaders(Arrays.asList(\u0026#34;Authorization\u0026#34;, \u0026#34;Requestor-Type\u0026#34;)); corsConfig.setExposedHeaders(Arrays.asList(\u0026#34;X-Get-Header\u0026#34;)); UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); source.registerCorsConfiguration(\u0026#34;/**\u0026#34;, corsConfig); return source; } @Bean public CorsWebFilter corsWebFilter() { return new CorsWebFilter(corsConfiguration()); } } Conclusion In 
short, the CORS configuration depends on multiple factors:\n Spring Web / Spring Webflux Local / Global CORS config Spring Security or not  Depending on the framework we can decide which method works best and is the easiest to implement so that we can avoid CORS errors. You can play around with the sample application on GitHub.\n","date":"August 26, 2022","image":"https://reflectoring.io/images/stock/0014-handcuffs-1200x628-branded_huae3cc3247040192bd8d36200fb5209d6_187949_650x0_resize_q90_box.jpg","permalink":"/spring-cors/","title":"Configuring CORS with Spring Boot and Spring Security"},{"categories":["Software Craft","Java"],"contents":"Putting your code behind a feature flag means that you can deploy unfinished changes. As long as the feature flag is disabled, the changes have no effect.\nAmong other benefits, this enables you to continuously merge tiny changes into production and avoids the need for long-lived feature branches and big pull requests.\nWhen we deploy unfinished code to production, however, we want to be extra certain that this code is not being executed! This means we should write tests for our feature flags.\nWhy You Should Test Your Feature Flags A big reason why we should test feature flags is the one I mentioned already: with a feature flag, we\u0026rsquo;re potentially deploying unfinished code to production. We want to make sure that this code is not accidentally being executed until we want it to be executed.\nYou might say that feature flag code is so trivial that we don\u0026rsquo;t need to test it. After all, it can be as simple as an if/else branch like this:\nclass SystemUnderTest { public String doSomething() { if (featureOneEnabled) { // new code  return \u0026#34;new\u0026#34;; } else { // old code  return \u0026#34;old\u0026#34;; } } } This code checks if a feature is enabled and then either returns the string \u0026ldquo;new\u0026rdquo; or the string \u0026ldquo;old\u0026rdquo;. 
What is there to test?\nEven in this simple scenario, let’s consider what happens if we accidentally invert the feature flag value in the if condition:\nclass SystemUnderTest { public String doSomething() { if (featureOneEnabled) { // old code  return \u0026#34;old\u0026#34;; } else { // new code  return \u0026#34;new\u0026#34;; } } } It’s important to note that real code frequently doesn’t include a comment saying // old code and // new code, so you might not be able to easily distinguish between the old and new code at a glance.\nThis is an extremely simple example. Imagine if your use of flags were more complex, for instance using dependent flags or multivariate flags (flags with many possible values) that pass configuration values. It’s easy to see how mistakes can happen!\nIf the above code is deployed to production, the feature flag will most likely default to the value false and execute the new code instead of the old code! The deployment will potentially break things for our users while we expect that the change is hidden behind the feature flag.\nHow do you avoid this? For the above example, we’d do this by writing a test that checks the following:\n Is the old code executed when the feature flag is disabled? Is the new code executed when the feature flag is enabled?  Ideally, the first question is answered by our existing test suite. Given that the old code has been covered by a unit test, this test should fail if we have accidentally inverted the feature flag because during the test the new code would now be executed instead of the old code.\nLet\u0026rsquo;s look at what feature flag tests might look like!\nCreating a Feature Flag Service To make feature flags easily testable, it\u0026rsquo;s a good idea to put them into an interface like this FeatureFlagService:\npublic interface FeatureFlagService { boolean featureOneEnabled(); // ... other feature flags ... 
} For each feature flag, we add a new method to this interface.\nThe implementations of these methods retrieve the value of the feature flag from our feature management platform (like LaunchDarkly). The feature management platform manages the state of our feature flags for us and allows us to turn them on or off for all or a specific cohort of users. If you want to read more on different feature flagging options, read my article about making or buying a feature flag tool.\nWith this FeatureFlagService we now have a single place for all our feature flags. We can inject the implementation of this service into any code that needs to evaluate a feature flag and then use it like this:\nclass SystemUnderTest { private final FeatureFlagService featureFlagService; SystemUnderTest(FeatureFlagService featureFlagService){ this.featureFlagService = featureFlagService; } public String doSomething() { if (featureFlagService.featureOneEnabled()) { // new code  return \u0026#34;new\u0026#34;; } else { // old code  return \u0026#34;old\u0026#34;; } } } Another advantage of centralizing our feature flags like this is that we create a layer of abstraction over our feature flagging tool. If we decide to change from one tool to another, the interface stays the same and we only need to change the implementation of the FeatureFlagService. The rest of the code stays untouched.\nMocking the Feature Flag Service In our tests, we\u0026rsquo;ll want to mock the values the FeatureFlagService returns so that we can test the code paths with the feature flag enabled or disabled. 
Having all feature flag evaluations behind an interface makes it easy to mock our feature flags.\nCreating a MockFeatureFlagService A simple way of mocking is to create a custom implementation of the FeatureFlagService interface that allows us to change the feature flag state on-demand:\nclass MockFeatureFlagService implements FeatureFlagService { private boolean isFeatureOneEnabled = false; public boolean featureOneEnabled(){ return isFeatureOneEnabled; } void setFeatureOneEnabled(boolean flag){ this.isFeatureOneEnabled = flag; } } In a unit test, we can then inject the MockFeatureFlagService into the system under test and change the state as required in the tests:\nclass MyTest { private final MockFeatureFlagService featureFlagService = new MockFeatureFlagService(); private final SystemUnderTest sut = new SystemUnderTest(featureFlagService); @Test void testOldState(){ featureFlagService.setFeatureOneEnabled(false); assertThat(sut.doSomething()).isEqualTo(\u0026#34;old\u0026#34;); } @Test void testNewState(){ featureFlagService.setFeatureOneEnabled(true); assertThat(sut.doSomething()).isEqualTo(\u0026#34;new\u0026#34;); } } Using Mockito Instead of writing our own MockFeatureFlagService, we can also use a mocking library like Mockito:\nclass MyTest { private final FeatureFlagService featureFlagService = Mockito.mock(FeatureFlagService.class); private final SystemUnderTest sut = new SystemUnderTest(featureFlagService); @Test void testOldState(){ given(featureFlagService.featureOneEnabled()).willReturn(false); assertThat(sut.doSomething()).isEqualTo(\u0026#34;old\u0026#34;); } @Test void testNewState(){ given(featureFlagService.featureOneEnabled()).willReturn(true); assertThat(sut.doSomething()).isEqualTo(\u0026#34;new\u0026#34;); } } This has the same effect, but saves us from writing a whole MockFeatureFlagService class, because we can use Mockito\u0026rsquo;s given() or when() methods to define the return value of the FeatureFlagService on demand.\nThere is a 
cost, however: we no longer have a single place where we control all the values of the mocked feature flags as we do when we have a MockFeatureFlagService because we\u0026rsquo;re now defining each feature flag value on demand right where we need it. That means we cannot define the default values for the feature flags used in our tests in a central place!\nChoose Default Values Carefully! No matter which way of mocking feature flags you use, choose the default values of those feature flags carefully!\nThe default value of a feature flag in tests should be the same value as the feature flag has (or will soon have) in production!\nImagine what can happen if the default value of a feature flag in our test is false, while the feature flag is true in production. We add some code to our application and the tests are all still passing so we assume everything is alright. However, we overlooked that the code we added only runs if the feature flag is false, while the feature flag in production is set to true! The tests didn\u0026rsquo;t save us because they had a different default value for the feature flag than the production environment!\nThis is where a central MockFeatureFlagService comes in handy. We can define all the default values there and even change them over time when we change a feature flag value in production. The tests will always use the same default values for feature flag states as in production, avoiding an issue like the one outlined above.\nThis is useful even if you’re using a feature management platform. For instance, LaunchDarkly enables you to define a default value in case of any failure in retrieving the value from the LaunchDarkly service. Having these values centralized can help eliminate any mistake.\nTesting the Feature Flag Lifecycle Most feature flags go through a common lifecycle. 
We create them, we activate them, and then we remove them again, although this lifecycle can differ for different types of flags (permanent flags that manage configuration changes are not removed, for example). Let’s take a look at what the tests should look like at each stage of the typical feature flag lifecycle.\nBefore the Feature Flag Let\u0026rsquo;s say that our test code looks like this before we have introduced a feature flag:\nclass MyTest { private final SystemUnderTest sut = new SystemUnderTest(); @Test void existingTest(){ assertThat(sut.doSomething()).isEqualTo(\u0026#34;old\u0026#34;); } } The method doSomething() returns the String \u0026ldquo;old\u0026rdquo;.\nAdding a Test Case for the Feature Flag Now, we have decided to change the logic of the doSomething() method, but we don\u0026rsquo;t want to deploy this change to all users at the same time, because we want to get some feedback from early adopters first. The doSomething() method should return the String \u0026ldquo;new\u0026rdquo; for some users, and \u0026ldquo;old\u0026rdquo; for the rest of the users.\nThe test from above will not compile anymore, because the constructor of SystemUnderTest will now require a FeatureFlagService as a parameter, since it needs to know the current value of the feature flag.\nSo, we pass in a mocked FeatureFlagService to fix the test:\nclass MyTest { private final FeatureFlagService featureFlagService = Mockito.mock(FeatureFlagService.class); private final SystemUnderTest sut = new SystemUnderTest(featureFlagService); @Test void existingTest(){ assertThat(sut.doSomething()).isEqualTo(\u0026#34;old\u0026#34;); } } Will the test existingTest() succeed or fail now? That depends on the default value that the FeatureFlagService.featureOneEnabled() method returns. In the code above, Mockito will return false, because that is the default for a boolean value. 
That means the test should still pass.\nHowever, we might want to make it explicit that we expect the feature flag to be false. Also, we\u0026rsquo;ll want to add a test for the case when the feature flag is true:\nclass MyTest { private final FeatureFlagService featureFlagService = Mockito.mock(FeatureFlagService.class); private final SystemUnderTest sut = new SystemUnderTest(featureFlagService); @Test void existingTest(){ given(featureFlagService.featureOneEnabled()).willReturn(false); assertThat(sut.doSomething()).isEqualTo(\u0026#34;old\u0026#34;); } @Test void newTest(){ given(featureFlagService.featureOneEnabled()).willReturn(true); assertThat(sut.doSomething()).isEqualTo(\u0026#34;new\u0026#34;); } } This test now covers all states of the feature flag. If the feature flag were not a boolean but a string or a number, we might want to add some more tests that cover edge cases.\nRemoving the Feature Flag The code has been deployed to production and we have enabled it for the early adopters. They were happy, so we decided to enable it for all users. After a week, we heard no complaints and our monitoring doesn\u0026rsquo;t show any issues with the new code, so we decide to remove the old code and instead make the new code the default.\nThe method SystemUnderTest.doSomething() shall now return \u0026ldquo;new\u0026rdquo; for all users, all the time. We remove the if/else block from the doSomething() method. Since SystemUnderTest no longer requires a feature flag, we remove the FeatureFlagService from its constructor, which causes the above test case to show a compile error.\nSo, we fix our test again:\nclass MyTest { private final SystemUnderTest sut = new SystemUnderTest(); @Test void newTest(){ assertThat(sut.doSomething()).isEqualTo(\u0026#34;new\u0026#34;); } } We have removed the existingTest() method because that tested the no longer relevant case when the feature flag returned the value false. 
We keep the newTest() method but remove the code that mocks the feature flag value because the feature flag doesn\u0026rsquo;t exist anymore (and implicitly has the value true).\nAll tests should be green!\nConclusion Feature flag evaluations in our code should be tested just like any other code. If we don\u0026rsquo;t write tests for the different values a feature flag can have, we risk deploying code that we think is disabled by a feature flag when it\u0026rsquo;s actually enabled by default - completely undermining the value of feature flags!\n","date":"August 20, 2022","image":"https://reflectoring.io/images/stock/0019-magnifying-glass-1200x628_hue704be8477799b4a8b3cb13c97488f24_104431_650x0_resize_q90_box.jpg","permalink":"/testing-feature-flags/","title":"Testing Feature Flags"},{"categories":["Spring Boot","Software Craft"],"contents":"Time-based features in a software application are a pain to test. To test such a feature, you can (and should) write unit tests, of course. But like most other features, you probably want to test them by running the application and see if everything is working as expected.\nTo test a time-based feature, you usually want to travel into the future to check if the expected thing happens at the expected time.\nThe easiest (but most time-consuming) way to travel into the future is to wait, of course. But having to wait is boring and quite literally a waste of time. Sometimes, you would have to wait for days, because a certain batch job only runs once a week, for example. That\u0026rsquo;s not an option.\nAnother option is to change the system date of the application server to a date in the future. However, changing the system date may have unexpected results. It affects the whole server, after all. Every single feature of the application (and any supporting processes) will work with the new date. That\u0026rsquo;s quite a big blast radius.\nInstead, in this article, we will look at using a feature flag to control a date. 
Instead of having to wait, we can just set the value of the feature flag to the date to which we want to travel. And instead of affecting the whole application server, we can target a feature flag at a specific feature that we want to test. An additional benefit is that we can test the feature in production without affecting any other users by activating the feature flag just for us. We can control the time for each user separately!\nIn this article, we\u0026rsquo;re going to use LaunchDarkly as a feature flagging platform to implement time-based feature flags.\n Example Code This article is accompanied by a working code example on GitHub. Use Cases Before we go into the details of time travel with feature flags, let\u0026rsquo;s look at some example use cases to make it easier to talk about the topic.\nShowing a Welcome Message Depending on the Time of Day The first category of time-based features is an action that is triggered by a user.\nFor example, let\u0026rsquo;s say that the application has a web interface and we want to show a time-based welcome message to the user each time they open the web interface in their browser.\nIn the morning, we want to show the message \u0026ldquo;Good morning\u0026rdquo;, during the day we want to show \u0026ldquo;Good day\u0026rdquo;, and in the evening we want to show \u0026ldquo;Good evening\u0026rdquo;.\nThe user is triggering this feature by loading the web page from their browser.\nWhen the feature is triggered, it checks the current time and based on that decides which message to show to the user.\nOther features triggered by a user action might be triggered by a click on a button in the UI, or by visiting a web page that hasn\u0026rsquo;t been visited before, or by entering a certain text into a form.\nThe common thing for all these features is that they happen in the context of a specific user and if we want to make them time-based, we can just check the current time and decide what to do.\nSending Emails Depending on 
the Registration Date Another common category of time-based features is scheduled actions. These actions are not triggered by a user but by the system at regular intervals.\nLet\u0026rsquo;s say we want to send a welcome email sequence to each user that registers with the application. We want to send an email 1 day after registration, 7 days after registration, and 14 days after registration.\nWe have a regular job that collects all the customers that need to get an email and then sends those emails.\nThe difference from the user-triggered features above is that in a scheduled job, we don\u0026rsquo;t have a user context. To get the user context, we have to load the users from the database. And ideally, we would only want to load those users from the database that should receive an email.\nIf we use SQL, our database query would look something like this:\nselect * from user where ( hasReceivedDay1Email = false and registrationDate \u0026lt;= now() - interval \u0026#39;1 days\u0026#39; ) or ( hasReceivedDay7Email = false and registrationDate \u0026lt;= now() - interval \u0026#39;7 days\u0026#39; ) or ( hasReceivedDay14Email = false and registrationDate \u0026lt;= now() - interval \u0026#39;14 days\u0026#39; ) This only loads the users from the database that we know should receive an email. The problem with this is that the database now controls the time. If we wanted to travel in time, we would have to change the time of the database, which might have a lot of side effects.\nThis is easily remedied by passing the current time into the query as a parameter like this:\nselect * from user where ( hasReceivedDay1Email = false and registrationDate \u0026lt;= :now - interval \u0026#39;1 days\u0026#39; ) ... However, this still means that the database makes the decision to include a user in the result or not. The parameter :now that we pass into the query is used for all users.\nWe would like to control time for each user separately, though. 
Only then can we test time-based features in production using a feature flag without affecting other users.\nSo, we remove the time constraint from the database query so that we can make the time-based decision in our application code:\nselect * from user where hasReceivedDay1Email = false or hasReceivedDay7Email = false or hasReceivedDay14Email = false This will return all users who haven\u0026rsquo;t received an email yet. In the application code, we go through the list of users and can now compare each user against a time. And if we use a feature flag to control time, we can control time for each user separately.\nThis workaround is not applicable in every circumstance, however. Sometimes, we can\u0026rsquo;t just load all the data from the database and then make decisions in our code because there is too much data to go through. In those cases, we have to test the old-fashioned way by waiting until the time comes. For the remainder of this article, we assume that for our use case, it\u0026rsquo;s acceptable to load more data than we need and make the time-based decision in the application code instead of in the database.\nImplementing a Time-Based Feature Flag To implement the time-based feature flag, we\u0026rsquo;re going to build a FeatureFlagService based on LaunchDarkly, a managed feature flag platform (you can get a more detailed introduction to LaunchDarkly in my article about LaunchDarkly and Togglz).\nFirst, we create an interface that returns the values for the two feature flags we need:\npublic interface FeatureFlagService { /** * Returns the current time to be used by the welcome message feature. */ Optional\u0026lt;LocalDateTime\u0026gt; currentDateForWelcomeMessage(); /** * Returns the current time to be used by the welcome email feature. 
*/ Optional\u0026lt;LocalDateTime\u0026gt; currentDateForWelcomeEmails(); } The method currentDateForWelcomeMessage() shall return the current date that we want to use for our \u0026ldquo;welcome message\u0026rdquo; feature and the method currentDateForWelcomeEmails() shall return the current date that we want to use for our \u0026ldquo;sending emails\u0026rdquo; feature.\nThis interface already hints at the power of this solution: each feature can have its own time!\nBoth methods return an Optional\u0026lt;LocalDateTime\u0026gt; which can have these values:\n An empty Optional means that we haven\u0026rsquo;t set a date for this feature flag. We can use this state to mark the feature as \u0026ldquo;toggled off\u0026rdquo;. If there is no date, we\u0026rsquo;re not going to show the welcome message and not going to send an email at all. We can use this state to \u0026ldquo;dark launch\u0026rdquo; new features in a disabled state, and then enable them for progressively bigger user segments over time. An Optional containing a LocalDateTime means that we have set a date for this feature flag, and we can use it to determine the time of day for our welcome message or the number of days since registration for our email feature.  
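To make the two states of such a flag concrete, here is a minimal sketch of how calling code can branch on the flag value; the class and method names are ours for illustration and are not part of the article's codebase:

```java
import java.time.LocalDateTime;
import java.util.Optional;

public class FlagStates {

  // Interprets the states of a time-based feature flag:
  // empty Optional = feature toggled off,
  // present Optional = feature on, using the given date as "now".
  static String describe(Optional<LocalDateTime> flag) {
    if (flag.isEmpty()) {
      return "feature off";
    }
    return "feature on, time-traveled to " + flag.get();
  }

  public static void main(String[] args) {
    System.out.println(describe(Optional.empty()));
    System.out.println(describe(Optional.of(LocalDateTime.of(2022, 8, 3, 12, 0))));
  }
}
```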
Let\u0026rsquo;s look at an implementation of the FeatureFlagService using LaunchDarkly:\n@Component public class LaunchDarklyFeatureFlagService implements FeatureFlagService { private final Logger logger = LoggerFactory.getLogger(LaunchDarklyFeatureFlagService.class); private final LDClient launchdarklyClient; private final UserSession userSession; private final DateTimeFormatter dateFormatter = DateTimeFormatter.ISO_OFFSET_DATE_TIME; public LaunchDarklyFeatureFlagService( LDClient launchdarklyClient, UserSession userSession) { this.launchdarklyClient = launchdarklyClient; this.userSession = userSession; } @Override public Optional\u0026lt;LocalDateTime\u0026gt; currentDateForWelcomeMessage() { String stringValue = launchdarklyClient.stringVariation( \u0026#34;now-for-welcome-message\u0026#34;, getLaunchdarklyUserFromSession(), \u0026#34;false\u0026#34;); if (\u0026#34;false\u0026#34;.equals(stringValue)) { return Optional.empty(); } if (\u0026#34;now\u0026#34;.equals(stringValue)) { return Optional.of(LocalDateTime.now()); } try { return Optional.of(LocalDateTime.parse(stringValue, dateFormatter)); } catch (DateTimeParseException e) { logger.warn(\u0026#34;could not parse date ... falling back to current date\u0026#34;, e); return Optional.of(LocalDateTime.now()); } } @Override public Optional\u0026lt;LocalDateTime\u0026gt; currentDateForWelcomeEmails() { // ... similar implementation  } private LDUser getLaunchdarklyUserFromSession() { return new LDUser.Builder(userSession.getUsername()) .build(); } } We\u0026rsquo;re using LaunchDarkly\u0026rsquo;s Java SDK, more specifically the classes LDClient and LDUser, to interact with the LaunchDarkly server.\nTo get the value of a feature flag, we call the stringVariation() method of the LaunchDarkly client and then transform that into a date. 
LaunchDarkly doesn\u0026rsquo;t support date types out of the box, so we use a string value instead.\nIf the string value is false, we interpret the feature as \u0026ldquo;toggled off\u0026rdquo; and return an empty Optional.\nIf the string value is now, it means that we haven\u0026rsquo;t set a specific date for a given user and that user just gets the current date and time - the \u0026ldquo;normal\u0026rdquo; behavior.\nIf the string value is a valid ISO date, we parse it to a date and time and return that.\nAnother aspect of the power of this solution becomes visible with the code above: the feature flags can have different values for different users!\nIn the code, we\u0026rsquo;re getting the name of the current user from a UserSession object, putting that into an LDUser object, and then passing it into the LDClient when the feature flag is evaluated. In the LaunchDarkly UI, we can then select different feature flag values for different users:\nHere we have activated the feature flag for the users ben, hugo, and tom. hugo and ben will get the real date and time when the feature flag is evaluated, and only tom will get a specified time in the future (at the time of writing). 
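The three-way interpretation of the string value (false, now, or an ISO date) is easy to isolate into a small, testable helper. This is a sketch of the same parsing logic as in the service above; the class name is ours:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.util.Optional;

public class FlagDateParser {

  private static final DateTimeFormatter FORMAT = DateTimeFormatter.ISO_OFFSET_DATE_TIME;

  // "false" -> feature off, "now" -> the real current time,
  // ISO date -> that specific time; unparseable values fall back to now.
  static Optional<LocalDateTime> parse(String flagValue) {
    if ("false".equals(flagValue)) {
      return Optional.empty();
    }
    if ("now".equals(flagValue)) {
      return Optional.of(LocalDateTime.now());
    }
    try {
      return Optional.of(LocalDateTime.parse(flagValue, FORMAT));
    } catch (DateTimeParseException e) {
      return Optional.of(LocalDateTime.now());
    }
  }

  public static void main(String[] args) {
    System.out.println(parse("false"));                     // Optional.empty
    System.out.println(parse("2022-08-03T09:30:00+02:00")); // Optional[2022-08-03T09:30]
  }
}
```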
All other users will get false as a value, meaning that they shouldn\u0026rsquo;t see the feature at all.\nUsing the Time-Based Feature Flags Now that we have built a FeatureFlagService that returns time-based feature flags for us, let\u0026rsquo;s see how we can use them in action.\nShowing a Welcome Message We could implement the time-based welcome message like this:\n@Controller public class DateFeatureFlagController { private final UserSession userSession; private final FeatureFlagService featureFlagService; DateFeatureFlagController( UserSession userSession, FeatureFlagService featureFlagService) { this.userSession = userSession; this.featureFlagService = featureFlagService; } @GetMapping(path = {\u0026#34;/welcome\u0026#34;}) ModelAndView welcome() { Optional\u0026lt;LocalDateTime\u0026gt; date = featureFlagService.currentDateForWelcomeMessage(); if (date.isEmpty()) { return new ModelAndView(\u0026#34;/welcome-page-without-message.html\u0026#34;); } LocalTime time = date.get().toLocalTime(); String welcomeMessage = \u0026#34;\u0026#34;; if (time.isBefore(LocalTime.NOON)) { welcomeMessage = \u0026#34;Good Morning!\u0026#34;; } else if (time.isBefore(LocalTime.of(17, 0))) { welcomeMessage = \u0026#34;Good Day!\u0026#34;; } else { welcomeMessage = \u0026#34;Good Evening!\u0026#34;; } return new ModelAndView( \u0026#34;/welcome-page.html\u0026#34;, Map.of(\u0026#34;welcomeMessage\u0026#34;, welcomeMessage)); } } The controller serves a welcome page under the path /welcome. 
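The time-of-day branching inside welcome() can be extracted into a pure helper, which makes it unit-testable without a running controller. A sketch with the same thresholds (the helper name is ours):

```java
import java.time.LocalTime;

public class WelcomeMessages {

  // Same thresholds as the controller:
  // before noon -> morning, before 17:00 -> day, otherwise evening.
  static String welcomeMessageFor(LocalTime time) {
    if (time.isBefore(LocalTime.NOON)) {
      return "Good Morning!";
    } else if (time.isBefore(LocalTime.of(17, 0))) {
      return "Good Day!";
    }
    return "Good Evening!";
  }

  public static void main(String[] args) {
    System.out.println(welcomeMessageFor(LocalTime.of(9, 0)));  // Good Morning!
    System.out.println(welcomeMessageFor(LocalTime.of(14, 0))); // Good Day!
    System.out.println(welcomeMessageFor(LocalTime.of(20, 0))); // Good Evening!
  }
}
```

With a time-traveled flag date, the same helper lets us verify each greeting in a plain unit test.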
From FeatureFlagService.currentDateForWelcomeMessage(), we get the date that we have set for the current user in the LaunchDarkly UI.\nIf the date is empty, we show the page welcome-page-without-message.html, which doesn\u0026rsquo;t contain the welcome message feature at all.\nIf the date is not empty, we set the welcomeMessage property to a value depending on the time of day, and then pass it into the welcome-page.html template, which displays the welcome message to the user.\nSending a Scheduled Email Sending a welcome email is triggered by a scheduled task and not by a user action, so we approach the problem a little differently:\n@Component public class EmailSender { private final Logger logger = LoggerFactory.getLogger(EmailSender.class); private final FeatureFlagService featureFlagService; public EmailSender( FeatureFlagService featureFlagService) { this.featureFlagService = featureFlagService; } @Scheduled(fixedDelay = 10000) public void sendWelcomeEmails() { for (User user : getUsers()) { Optional\u0026lt;LocalDateTime\u0026gt; now = featureFlagService.currentDateForWelcomeEmails(user.name); if (now.isEmpty()) { logger.info(\u0026#34;not sending email to user {}\u0026#34;, user.name); continue; } if (user.registrationDate.isBefore( now.get().minusDays(14L).toLocalDate())) { sendEmail(user, \u0026#34;Welcome email after 14 days\u0026#34;); } else if (user.registrationDate.isBefore( now.get().minusDays(7L).toLocalDate())) { sendEmail(user, \u0026#34;Welcome email after 7 days\u0026#34;); } else if (user.registrationDate.isBefore( now.get().minusDays(1L).toLocalDate())) { sendEmail(user, \u0026#34;Welcome email after 1 day\u0026#34;); } } } } We have a scheduled method sendWelcomeEmails() that runs every 10 seconds in our example code. 
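The registration-date comparison in the scheduled method boils down to picking the oldest matching threshold for a given "now". A standalone sketch of that decision (the helper name is ours; the real code would also track which emails were already sent):

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.util.Optional;

public class WelcomeEmailPicker {

  // Returns the email due for a user registered on registrationDate,
  // given the (possibly time-traveled) "now" from the feature flag.
  static Optional<String> emailFor(LocalDate registrationDate, LocalDateTime now) {
    if (registrationDate.isBefore(now.minusDays(14).toLocalDate())) {
      return Optional.of("Welcome email after 14 days");
    } else if (registrationDate.isBefore(now.minusDays(7).toLocalDate())) {
      return Optional.of("Welcome email after 7 days");
    } else if (registrationDate.isBefore(now.minusDays(1).toLocalDate())) {
      return Optional.of("Welcome email after 1 day");
    }
    return Optional.empty();
  }

  public static void main(String[] args) {
    LocalDateTime now = LocalDateTime.of(2022, 8, 3, 12, 0);
    System.out.println(emailFor(LocalDate.of(2022, 8, 1), now)); // the 1-day email is due
    System.out.println(emailFor(LocalDate.of(2022, 7, 1), now)); // the 14-day email is due
  }
}
```

Returning to the scheduled method sendWelcomeEmails() itself: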
In it, we iterate through all users in the database so that we can check the value of the feature flag for each user.\nWith currentDateForWelcomeEmails() we get the value of the feature flag for the user. Note that we overloaded the method here so that we can pass the user name into it because we don\u0026rsquo;t have a UserSession to get the name from like in the welcome message use case above. That means that the feature flag service can\u0026rsquo;t get the user name from the session and we have to pass it in specifically. If we don\u0026rsquo;t pass in the name, LaunchDarkly won\u0026rsquo;t know which user to evaluate the feature flag for.\nIf the feature flag is empty, we don\u0026rsquo;t send an email at all - the feature is disabled.\nIf the feature flag has a value, we compare it with the user\u0026rsquo;s registration date to send the appropriate welcome email. Note that there should be some logic to avoid sending duplicate emails, but I skipped it for the sake of simplicity.\nThe drawback for feature flag evaluations from a scheduled task is that we have to iterate through all users to evaluate the feature flag for each of them, as discussed above.\nConclusion Without a way to \u0026ldquo;travel through time\u0026rdquo;, testing time-based features is a pain. Feature flags provide such a way to travel through time. 
Even better, feature flags provide a way for each user to travel to a different point in time.\nIf we use a feature flag with three possible values (off, now, specific date), we can use the same feature flag for toggling the whole feature on or off and controlling the date for each user separately.\nThis allows us to test time-based features even in production.\n","date":"August 3, 2022","image":"https://reflectoring.io/images/stock/0043-calendar-1200x628-branded_hu4de637414a60e632f344e01d7e13a994_98685_650x0_resize_q90_box.jpg","permalink":"/date-time-feature-flags/","title":"Testing Time-Based Features with Feature Flags"},{"categories":["AWS"],"contents":"AWS Step Functions is a serverless orchestration service by which we can combine AWS Lambda functions and other AWS services to build complex business applications.\nWe can author the orchestration logic in a declarative style using a JSON-based format called the Amazon States Language (ASL). AWS Step Functions also provides a Workflow Studio where we can define and run our workflows.\nIn this article, we will introduce the concepts of AWS Step Functions and understand its working with the help of an example.\nStep Functions: Basic Concepts Let us first understand some basic concepts of Step Functions.\nState Machine, State, and Transitions A state machine is a mathematical model of computation consisting of different states connected with transitions. AWS Step Functions also implements a state machine to represent the orchestration logic. Each step of the orchestration is represented by a state in the state machine and connected to one or more states through transitions.\nThey are represented by a diagram to visualize the current state of a system as shown here:\nState machines contain at least one state. Transitions represent different events that allow the system to transition from one state to another state. 
They also have a start position from where the execution can start and one or more end positions where the execution can end.\nAmazon State Language (ASL) We define a state machine in JSON format in a structure known as the Amazon States Language (ASL). The state is the fundamental element in ASL. The fields of a state object vary depending on the type of the state but the fields: Type, Next, InputPath, and OutputPath are common in states of any type. A state object in ASL looks like this:\n{ \u0026#34;Type\u0026#34;: \u0026#34;Task\u0026#34;, \u0026#34;Next\u0026#34;: \u0026#34;My next state\u0026#34;, \u0026#34;InputPath\u0026#34;: \u0026#34;$\u0026#34;, \u0026#34;OutputPath\u0026#34;: \u0026#34;$\u0026#34;, \u0026#34;Comment\u0026#34;: \u0026#34;My State\u0026#34; } In this state object, we have specified the type of state as Task and provided the name of the next state to execute as My next state. The fields: InputPath and OutputPath are filters for input and output data of the state which we will understand in a separate section.\nThe ASL contains a collection of state objects. It has the following mandatory fields:\n States: This field contains a set of state objects. Each element of the set is a key-value pair with the name of the state as key and an associated state object as the value. StartAt: This field contains the name of one of the state objects in the States collection from where the state machine will start execution.  Amazon States Language (ASL) also has optional fields:\n Comment: Description of state machine TimeoutSeconds: The maximum number of seconds an execution of the state machine can run beyond which the execution fails with an error. Version: Version of the Amazon States Language used to define the state machine which is 1.0 by default.  
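As a sanity check of the mandatory fields, here is a hypothetical Java sketch (no AWS SDK involved; class and method names are ours) that assembles a minimal ASL definition as plain maps, with only the required StartAt and States keys plus an optional Comment:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MinimalAsl {

  // Builds the smallest valid ASL document: a single Pass state.
  static Map<String, Object> minimalStateMachine() {
    Map<String, Object> passState = new LinkedHashMap<>();
    passState.put("Type", "Pass");
    passState.put("End", true);

    Map<String, Object> definition = new LinkedHashMap<>();
    definition.put("StartAt", "state1");                   // mandatory
    definition.put("States", Map.of("state1", passState)); // mandatory
    definition.put("Comment", "Example State Machine");    // optional
    return definition;
  }

  public static void main(String[] args) {
    System.out.println(minimalStateMachine());
  }
}
```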
An example of a state machine defined in ASL is shown below:\n{ \u0026#34;Comment\u0026#34;: \u0026#34;Example State Machine\u0026#34;, \u0026#34;StartAt\u0026#34;: \u0026#34;state1\u0026#34;, \u0026#34;States\u0026#34;: { \u0026#34;state1\u0026#34;: {...}, \u0026#34;state2\u0026#34;: {...}, \u0026#34;state3\u0026#34;: {...} } } This structure has the States field containing a collection of 3 state objects of names: state1, state2, and state3. The value of the field: StartAt is state1 which means that the state machine starts execution from the state named state1.\nTypes of State States receive input, perform actions to produce some output, and pass the output to other states. States are of different types which determine the nature of the functions a state can perform. Some of the commonly used types are:\n Task: A state of type task represents a single unit of work performed by a state machine. All the work in a state machine is performed by tasks. The work is performed by using an activity or an AWS Lambda function, or by passing parameters to the API actions of other services. Parallel: State of type parallel is used to trigger multiple branches of execution. Map: We can dynamically iterate steps with a state of type map. Choice: We use this type of state as decision points within a state machine to choose among multiple branches of execution. Fail or Succeed: We can stop the execution with a failure or success.  We also have mechanisms for transforming the inputs and the outputs with JSONpath expressions. The state machine executes one state after another till it has no more states to execute. We will understand these concepts further by implementing a sample checkout process of an e-commerce application with a state machine.\nTypes of State Machine: Standard vs Express We can create two types of state machine. State machine executions differ based on the type. 
The type of state machine cannot be changed after the state machine is created.\n Standard: State machine of type: Standard should be used for long-running, durable, and auditable processes. Express: State machine of type: Express is used for high-volume, event-processing workloads such as IoT data ingestion, streaming data processing and transformation, and mobile application backends. They can run for up to five minutes.  Introducing The Example: Checkout Process Let us take an example of a checkout process in an application. This checkout process will typically consist of the following steps:\n   Function Input Output Description     fetch customer customer ID Customer Data: email, mobile Fetching Customer information   fetch price cart items Price of each item Fetching Price of each item in the cart   process payment payment type, cart items with price success/failure Processing the payment for the cart items   create order customer ID, cart items with price success/failure Create order if payment is successful    When we design the order in which to execute these steps, we need to consider which steps can execute in parallel and which ones are in sequence.\nWe can execute the steps for fetch customer and fetch price in parallel. Once these two steps are complete, we will execute the step for process payment. If the payment fails, we will end the checkout process with an error. If the payment succeeds, we will create an order for the customer. The step for process payment can be retried a specific number of times on failures.\nWe will use AWS Step Functions to represent this orchestration in the next sections.\nCreating the Lambda Functions for Invoking from the State Machine Let us define skeletal Lambda functions for each of the steps of the checkout process. 
Our Lambda function to fetch the customer data looks like the following:\nexports.handler = async (event, context, callback) =\u0026gt; { console.log(`input: ${event.customer_id}`) // TODO fetch from database  callback(null, { customer_id: event.customer_id, customer_name: \u0026#34;John Doe\u0026#34;, payment_pref: \u0026#34;Credit Card\u0026#34;, email: \u0026#34;john.doe@yahoo.com\u0026#34;, mobile: \u0026#34;677896678\u0026#34; } ) }; This Lambda takes customer_id as input and returns a customer record corresponding to the customer_id. Since the Lambda function is not the focus of this post, we are returning a hardcoded value of customer data instead of fetching it from the database.\nSimilarly, our Lambda function for fetching price looks like this:\nexports.handler = async (event, context, callback) =\u0026gt; { const item_no = event.item_no console.log(`item::: ${item_no}`) // TODO fetch price from database  const price = {item_no: item_no, price: \u0026#34;123.45\u0026#34;, lastUpdated: \u0026#34;2022-06-12\u0026#34;} callback(null, price) } This Lambda takes item_no as input and returns a pricing record corresponding to the item_no.\nWe will use similar Lambda functions for the other steps of the checkout process. All of them will have skeletal code similar to the fetch customer and fetch price.\nDefining the Checkout Process with a State Machine After defining the Lambda functions and getting an understanding of the basic concepts of the Step Functions service, let us now define our checkout process.\nLet us create the state machine from the AWS management console. We can either choose to use the Workflow Studio which provides a visual workflow editor or the Amazon States Language (ASL) for defining our state machine.\nHere we have selected Workflow Studio to author our state machine. 
We have also selected the type of state machine as standard in the first step.\nLet us also give a name: checkout to our state machine and assign an IAM role that defines which resources our state machine has permission to access during execution. Our IAM role is associated with the following policy:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;lambda:InvokeFunction\u0026#34; ], \u0026#34;Resource\u0026#34;: [ \u0026#34;*\u0026#34; ] } ] } This policy will allow the state machine to invoke any Lambda function.\nWe will next add the steps of the checkout process in the state machine.\nAdding the States in the State Machine Each step of the checkout process will be a state in the state machine.\nLet us add the steps of the checkout process as different states in the state machine. We will define these states as type task and will call the API: Lambda:Invoke.\nThe configuration of the state for the fetch customer step looks like this in the visual editor:\nAs we can see, we have specified the name of the state as fetch customer, defined the API as Lambda:Invoke, and selected the integration type as Optimized. We have provided the ARN of the Lambda function as the API parameter. 
The corresponding definition of the state in Amazon States Language (ASL) looks like this:\n{ \u0026#34;Comment\u0026#34;: \u0026#34;state machine for checkout process\u0026#34;, \u0026#34;StartAt\u0026#34;: \u0026#34;fetch customer\u0026#34;, \u0026#34;States\u0026#34;: { \u0026#34;fetch customer\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;Task\u0026#34;, \u0026#34;Resource\u0026#34;: \u0026#34;arn:aws:states:::lambda:invoke\u0026#34;, \u0026#34;OutputPath\u0026#34;: \u0026#34;$.Payload\u0026#34;, \u0026#34;Parameters\u0026#34;: { \u0026#34;Payload.$\u0026#34;: \u0026#34;$\u0026#34;, \u0026#34;FunctionName\u0026#34;: \u0026#34;arn:aws:lambda:us-east-1:926501103602:function:fetchCustomer:$LATEST\u0026#34; }, \u0026#34;End\u0026#34;: true } } } We will add the other steps with similar configurations for invoking the corresponding Lambda functions.\nThe first two steps: fetch customer and fetch price are not dependent on each other. So we can call them in parallel.\nOur state machine after adding these two steps looks like this in the visual editor:\nAs we can see in this visual, our state machine consists of 2 states: fetch customer and fetch price. 
We have defined these steps as 2 branches of a state of type parallel which allows the state machine to execute them in parallel.\nThe corresponding definition of the state machine built so far in Amazon States Language (ASL) looks like this:\n{ \u0026#34;Comment\u0026#34;: \u0026#34;state machine for checkout process\u0026#34;, \u0026#34;StartAt\u0026#34;: \u0026#34;Parallel\u0026#34;, \u0026#34;States\u0026#34;: { \u0026#34;Parallel\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;Parallel\u0026#34;, \u0026#34;Branches\u0026#34;: [ { \u0026#34;StartAt\u0026#34;: \u0026#34;fetch customer\u0026#34;, \u0026#34;States\u0026#34;: { \u0026#34;fetch customer\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;Task\u0026#34;, \u0026#34;Resource\u0026#34;: \u0026#34;arn:aws:states:::lambda:invoke\u0026#34;, \u0026#34;OutputPath\u0026#34;: \u0026#34;$.Payload\u0026#34;, \u0026#34;Parameters\u0026#34;: { \u0026#34;Payload.$\u0026#34;: \u0026#34;$\u0026#34;, \u0026#34;FunctionName\u0026#34;: \u0026#34;arn:aws:lambda:us-east-1:926501103602:function:fetchCustomer:$LATEST\u0026#34; }, \u0026#34;End\u0026#34;: true } } }, { \u0026#34;StartAt\u0026#34;: \u0026#34;Map\u0026#34;, \u0026#34;States\u0026#34;: { \u0026#34;Map\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;Map\u0026#34;, \u0026#34;Iterator\u0026#34;: { \u0026#34;StartAt\u0026#34;: \u0026#34;fetch price\u0026#34;, \u0026#34;States\u0026#34;: { \u0026#34;fetch price\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;Task\u0026#34;, \u0026#34;Resource\u0026#34;: \u0026#34;arn:aws:states:::lambda:invoke\u0026#34;, \u0026#34;OutputPath\u0026#34;: \u0026#34;$.Payload\u0026#34;, \u0026#34;Parameters\u0026#34;: { \u0026#34;Payload.$\u0026#34;: \u0026#34;$\u0026#34; }, \u0026#34;End\u0026#34;: true } } }, \u0026#34;End\u0026#34;: true } } } ], \u0026#34;Next\u0026#34;: \u0026#34;Pass\u0026#34; }, \u0026#34;Pass\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;Pass\u0026#34;, \u0026#34;End\u0026#34;: true } } } We have put the 
fetch price state as a child state of a map state. The map state allows the state machine to iterate over each item in the cart to fetch their price by executing the task state: fetch price.\nWe have next added a state of type: pass after the parallel step. The pass type state acts as a placeholder where we will manipulate the output from the parallel state.\nLet us add two more steps for processing payment and placing an order if the payment succeeds. Here is our state machine in the visual editor with the two additional steps: process payment, and create order:\nWe have also added a state of type choice with 2 branches. Each branch has a rule. The branch will execute only if the result of the rule evaluation is true.\nProcessing Inputs and Outputs in a State Machine The input to a Step Functions state machine is sent in JSON format which is then passed to the first state in the state machine. Each state in the state machine receives JSON data as input and usually generates JSON as output to be passed to the next state. We can associate different kinds of filters to manipulate data in each state both before and after the task processing.\nInput Filters: InputPath and Parameters We use the InputPath and Parameters fields to manipulate the data before task processing:\n InputPath: The InputPath field takes a JSONPath attribute to extract only the parts of the input which are required by the state. Parameters: The Parameters field enables us to pass a collection of key-value pairs, where the values are either static values that we define in our state machine definition, or that are selected from the input using a path.  Output Filters: OutputPath, ResultSelector, and ResultPath We can further manipulate the results of the state execution using the fields: ResultSelector, ResultPath, and OutputPath:\n ResultSelector: This field filters the task result to construct a new JSON object using selected elements of the task result. 
ResultPath: In most cases, we would like to retain the input data for processing by subsequent states of the state machine. For this, we use the ResultPath filter to add the task result to the original state input. OutputPath: The OutputPath filter is used to select a portion of the effective state output to pass to the next state. It is often used with Task states to filter the result of an API response.  We will next add these filters to manipulate the input data to our state machine for the checkout process at different stages. We will mainly manipulate the data to prepare the requests for the different Lambda functions.\nData Transformations through the Checkout Process State Machine Our state machine for the checkout process with the input and output filters is shown below:\nWe need to execute our state machine to run the task type states configured in the earlier sections. We can initiate an execution from the Step Functions console, or the AWS Command Line Interface (CLI), or by calling the Step Functions API with the AWS SDKs. Step Functions records full execution history for 90 days after the execution completes.\nWe need to provide input to the state machine in JSON format during execution and receive a JSON output after execution.\nWe can see the list of state machine executions with information such as execution id, status, and start date in the Step Functions console.\nOn selecting an execution, we can see a graph inspector which shows states and transitions marked with colors to indicate successful tasks, failures, and tasks that are still in progress. 
The graph inspector of our checkout process is shown below:\nLet us look at how our input data changes during the execution of the state machine by applying input and output filters as we transition through some of the states:\nInput to State Machine:\n{ \u0026#34;checkout_request\u0026#34;: { \u0026#34;customer_id\u0026#34; : \u0026#34;C6238485\u0026#34;, \u0026#34;cart_items\u0026#34;: [ { \u0026#34;item_no\u0026#34;: \u0026#34;I1234\u0026#34;, \u0026#34;shipping_date\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;shipping_address\u0026#34;: \u0026#34;address_1\u0026#34; }, { \u0026#34;item_no\u0026#34;: \u0026#34;I1235\u0026#34;, \u0026#34;shipping_date\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;shipping_address\u0026#34;: \u0026#34;address_2\u0026#34; }, { \u0026#34;item_no\u0026#34;: \u0026#34;I1236\u0026#34;, \u0026#34;shipping_date\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;shipping_address\u0026#34;: \u0026#34;address_3\u0026#34; } ] } } This is the input to our checkout process which is composed of a customer_id and an array of items in the shopping cart: cart_items.\n State: Parallel  Input: Same input as state machine\nState: fetch customer  Input: Same input as state machine\nData after filtering with InputPath: $.checkout_request.customer_id\nC6238485 Data after applying parameter filter: {\u0026quot;customer_id.$\u0026quot;: \u0026quot;$\u0026quot;}\n{\u0026#34;customer_id\u0026#34; : \u0026#34;C6238485\u0026#34;} Output of Lambda function execution:\n{ \u0026#34;customer_id\u0026#34;: \u0026#34;C6238485\u0026#34;, \u0026#34;customer_name\u0026#34;: \u0026#34;John Doe\u0026#34;, \u0026#34;payment_pref\u0026#34;: \u0026#34;Credit Card\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;john.doe@yahoo.com\u0026#34;, \u0026#34;mobile\u0026#34;: \u0026#34;677896678\u0026#34; } State: iterate  Input: Same input as state machine\nData after filtering with InputPath: $.checkout_request.cart_items\n[ { \u0026#34;item_no\u0026#34;: \u0026#34;I1234\u0026#34;, ... 
}, { \u0026#34;item_no\u0026#34;: \u0026#34;I1235\u0026#34;, .. }, { \u0026#34;item_no\u0026#34;: \u0026#34;I1236\u0026#34;, .. } ] State: fetch price (for each element of the array: cart_items)  Input: Each element of the array: cart_items\nData after applying parameters filter: {\u0026quot;item_no.$\u0026quot;: \u0026quot;$.item_no\u0026quot;}\n{\u0026#34;item_no\u0026#34; : \u0026#34;I1234\u0026#34;} Output of Lambda function execution (for each element of the array: cart_items):\n{ \u0026#34;item_no\u0026#34;: \u0026#34;I1234\u0026#34;, \u0026#34;price\u0026#34;: 480.7, \u0026#34;lastUpdated\u0026#34;: \u0026#34;2022-06-12\u0026#34; } Output of State: Parallel  [ { \u0026#34;customer_id\u0026#34;: \u0026#34;C6238485\u0026#34;, \u0026#34;customer_name\u0026#34;: \u0026#34;John Doe\u0026#34;, \u0026#34;payment_pref\u0026#34;: \u0026#34;Credit Card\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;john.doe@yahoo.com\u0026#34;, \u0026#34;mobile\u0026#34;: \u0026#34;677896678\u0026#34; }, [ { \u0026#34;item_no\u0026#34;: \u0026#34;I1234\u0026#34;, ... }, { \u0026#34;item_no\u0026#34;: \u0026#34;I1235\u0026#34;, ... }, { \u0026#34;item_no\u0026#34;: \u0026#34;I1236\u0026#34;, ... } ] ] State: Pass  Data after applying parameters filter:{\u0026quot;customer.$\u0026quot;: \u0026quot;$[0]\u0026quot;,\u0026quot;items.$\u0026quot;: \u0026quot;$[1]\u0026quot;}\n{ \u0026#34;customer\u0026#34;: { \u0026#34;customer_id\u0026#34;: \u0026#34;C6238485\u0026#34;, \u0026#34;customer_name\u0026#34;: \u0026#34;John Doe\u0026#34;, \u0026#34;payment_pref\u0026#34;: \u0026#34;Credit Card\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;john.doe@yahoo.com\u0026#34;, \u0026#34;mobile\u0026#34;: \u0026#34;677896678\u0026#34;}, \u0026#34;items\u0026#34;: [ { \u0026#34;item_no\u0026#34;: \u0026#34;I1234\u0026#34;, ... }, { \u0026#34;item_no\u0026#34;: \u0026#34;I1235\u0026#34;, ... }, { \u0026#34;item_no\u0026#34;: \u0026#34;I1236\u0026#34;, ... 
} ] } State: process payment  Data after applying parameters filter:{\u0026quot;payment_type.$\u0026quot;: \u0026quot;$.customer.payment_pref\u0026quot;, \u0026quot;items.$\u0026quot;: \u0026quot;$.items\u0026quot;}\n\u0026#34;payment_type\u0026#34;: \u0026#34;Credit Card\u0026#34;, \u0026#34;items\u0026#34;: [{\u0026#34;item_no\u0026#34;: \u0026#34;I1234\u0026#34;,...}, ...] Output of Lambda function execution:\n{ \u0026#34;status\u0026#34;: \u0026#34;OK\u0026#34;, \u0026#34;total_price\u0026#34;: 11274.47 } ResultSelector:\n{ \u0026#34;payment_result.$\u0026#34;: \u0026#34;$.Payload\u0026#34; } Data after ResultSelector:\n{ \u0026#34;items\u0026#34;: [..], \u0026#34;customer\u0026#34;: {\u0026#34;payment_type\u0026#34;: \u0026#34;Credit Card\u0026#34;,\u0026#34;customer_id\u0026#34;: \u0026#34;C6238485\u0026#34;, ...}, \u0026#34;payment\u0026#34;: { \u0026#34;payment_result\u0026#34;: { \u0026#34;status\u0026#34;: \u0026#34;OK\u0026#34;, \u0026#34;total_price\u0026#34;: 11274.47 } } } ResultPath: $.payment\nOutputPath: $\nOutput of State:\n{ \u0026#34;items\u0026#34;: [..], \u0026#34;customer\u0026#34;: {\u0026#34;payment_type\u0026#34;: \u0026#34;Credit Card\u0026#34;,\u0026#34;customer_id\u0026#34;: \u0026#34;C6238485\u0026#34;, ...}, \u0026#34;payment\u0026#34;: { \u0026#34;payment_result\u0026#34;: { \u0026#34;status\u0026#34;: \u0026#34;OK\u0026#34;, \u0026#34;total_price\u0026#34;: 11274.470000000001 } } } State: payment success?  
Choice rule 1: $.payment.payment_result.status == \u0026quot;OK\u0026quot;\nif the rule result is true: Next state is create order\nif the rule result is false: Next state is prepare error\nState: create order  Parameters:\n{ \u0026#34;payment_type.$\u0026#34;: \u0026#34;$.customer.payment_pref\u0026#34;, \u0026#34;order_price.$\u0026#34;: \u0026#34;$.payment.payment_result.total_price\u0026#34;, \u0026#34;customer_id.$\u0026#34;: \u0026#34;$.customer.customer_id\u0026#34; } Data after applying the above parameters filter:\n\u0026#34;payment_type\u0026#34;: \u0026#34;Credit Card\u0026#34;, \u0026#34;order_price\u0026#34;: 11274.47, \u0026#34;customer_id\u0026#34;: \u0026#34;C6238485\u0026#34; \u0026hellip; \u0026hellip;\nOutput of state machine  { \u0026#34;customer_id\u0026#34;: \u0026#34;C6238485\u0026#34;, \u0026#34;order_id\u0026#34;: \u0026#34;oapjjg32g8e\u0026#34;, \u0026#34;order_price\u0026#34;: 11274.47, \u0026#34;status\u0026#34;: \u0026#34;OK\u0026#34; } Handling Errors in Step Functions Workflows Let us understand some of the ways we handle errors in Step Functions:\nWhen a state reports an error, the state machine execution fails by default.\nStates of type task, parallel, and map provide fields for configuring error handling:\nWe can handle errors by retrying the operation or by a fallback to another state.\nRetrying on Error We can retry a task when errors occur by specifying one or more retry rules, called \u0026ldquo;retriers\u0026rdquo;.\nHere we have defined a retrier for 3 types of errors: Lambda.ServiceException, Lambda.AWSLambdaException, and Lambda.SdkClientException with the following retry settings:\n Interval: The number of seconds before the first retry attempt. It can take values from 1 (the default) to 99999999. Max Attempts: The maximum number of retry attempts. The task will not be retried after the number of retries exceeds this value. MaxAttempts has a default value of 3 and a maximum value of 99999999. 
Backoff Rate: It is the multiplier by which the retry interval increases with each attempt.  Falling Back to a Different State on Error We can catch errors and divert to a fallback state when they occur by specifying one or more catch rules, called \u0026ldquo;catchers\u0026rdquo;.\nIn this example, we are defining a prepare error state to which the process payment state can fall back if it encounters an error of type: States.TaskFailed.\nThe state machine with a catcher defined for the process payment step looks like this in the visual editor.\nHandling Lambda Service Exceptions As a best practice, we should proactively handle transient service errors in AWS Lambda functions that result in a 500 error, such as ServiceException, AWSLambdaException, or SdkClientException. We can handle these exceptions by retrying the Lambda function invocation, or by catching the error.\nIntegration with AWS Services When we configure a state of type task we need to specify an integration type:\nWe can integrate Step Functions with AWS services through two types of service integrations:\nOptimized Integrations When we are calling an AWS service with optimized integration, Step Functions provides some additional functionality when the service API is called. For example, the invocation of the Lambda service converts its output from an escaped JSON string to a JSON object similar to the format:\n{ \u0026#34;ExecutedVersion\u0026#34;: \u0026#34;$LATEST\u0026#34;, \u0026#34;Payload\u0026#34;: { ... }, \u0026#34;SdkHttpMetadata\u0026#34;: { ... }, \u0026#34;SdkResponseMetadata\u0026#34;: { \u0026#34;RequestId\u0026#34;: \u0026#34;ac79dacd-7c6f-41c7-bfcf-eea70b43e141\u0026#34; }, \u0026#34;StatusCode\u0026#34;: 200 } We used optimized integration for invoking our Lambda functions in the state machine of the checkout process.\nAWS SDK Integrations AWS SDK integrations allow us to make a standard API call on an AWS service from the state machine. 
When we use AWS SDK integrations, we specify the service name and API call, and optionally a service integration pattern (explained in the next section). The syntax for specifying the AWS service looks like this: arn:aws:states:::aws-sdk:serviceName:apiAction.[serviceIntegrationPattern]\nIntegration Patterns Step Functions integrates with AWS services using three types of service integration patterns. The service integration pattern is specified by appending a suffix to the resource URI in the task configuration.\n  Request Response: Step Functions waits for an HTTP response and then immediately progresses to the next state. We do not append any suffix after the resource URI for this integration pattern. We have used this integration pattern for invoking all the Lambda functions from our state machine for the checkout process.\n  Running a Job: Step Functions waits for the job to complete before progressing to the next state. To specify this integration pattern, we specify the Resource field in our task state definition with the .sync suffix appended after the resource URI.\n  Waiting for a Callback with a Task Token: A task might need to wait for various reasons like seeking human approval, integrating with a third-party workflow, or calling legacy systems. In these situations, we can pause Step Functions indefinitely, and wait for an external process or workflow to complete. For this integration pattern, we specify the Resource field in our task state definition with the .waitForTaskToken suffix appended after the resource URI.\n  Conclusion Here is a list of the major points for a quick reference:\n AWS Step Functions is a serverless orchestration service by which we can combine AWS Lambda functions and other AWS services to build complex business applications. We can author the orchestration logic in a declarative style using a JSON-based format called the Amazon States Language (ASL). 
AWS Step Functions also provides a powerful graphical console where we can visualize our application’s workflow as a series of steps. State Machines are of two types: Standard and Express. We use Standard for long-running, durable processes and Express for high-volume, event-processing workloads. States of a state machine are of various types like task, choice, map, and pass, depending on the nature of the functions they perform. We can associate different kinds of filters to manipulate data in each state both before and after the task processing. InputPath and parameters filters are used to filter input data before task execution. OutputPath, ResultSelector, and ResultPath filters are used to filter the task result before preparing the output of a state. We can also configure retry and fallback error handlers for handling error conditions in states.  ","date":"July 26, 2022","image":"https://reflectoring.io/images/stock/0117-queue-1200x628-branded_hu88ffcb943027ab1241b6b9f65033c311_123865_650x0_resize_q90_box.jpg","permalink":"/getting-started-with-aws-step-functions-tutorial/","title":"Getting Started with AWS Step Functions"},{"categories":["Node"],"contents":"Logging provides accurate context about what occurs in our application; it is the documentation of all events that happen within an application. Logging is a great way to retrace all the steps taken prior to an error/event in an application to understand it better.\nLarge-scale applications should have error/event logs, especially for significant and high-volume activity.\n Example Code This article is accompanied by a working code example on GitHub. What Should I Log In My Application? Most applications use logs for the purpose of debugging and troubleshooting. 
However, logs can be used for a variety of things including studying application systems, improving business logic and decisions, customer behavioural study, data mining, etc.\nHere is a list of possible events that our application should log.\n  Requests: This records the execution of services in our application. Services like authentication, authorizations, system access, data access, and application access.\n  Resources: Exhausted resources, exceeded capacities, and connectivity issues are all resource-related issues to log.\n  Availability: It is recommended to include a log statement to check the application runtime when the application session starts/stops. Availability logs contain faults and exceptions like the system\u0026rsquo;s availability and stability.\n  Threats: Invalid inputs and security issues are common threats to log, such as invalid API keys, failed security verification, failed authentication, and other warnings triggered by the application’s security features.\n  Events/Changes: Button clicks, context changes, system or application changes, and data changes (creation and deletion). These are all important messages to log in our applications.\n  Logging Options in Node.js The default logging tool in Node.js is the console. Using the console module we can log messages to both stdout and stderr.\nconsole.log('some msg') will print msg to the standard output (stdout).\nconsole.error('some error') will print error to the standard error (stderr).\nThis method has some limitations, such as the inability to structure log messages or add log levels. The console module cannot perform many custom configurations.\nThe Node ecosystem provides us with several other logging options which are more structured and easy to configure and customize. Here are some popularly used libraries:\n Winston Morgan Pino etc.  
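To make the console module's limitations concrete, here is a small sketch of what adding log levels on top of console looks like by hand. The helper below is hypothetical (not part of any library); it shows exactly the plumbing that libraries like Winston provide out of the box.

```javascript
// Hypothetical hand-rolled "leveled" logger built on the console module.
// Levels, formatting, and destinations all have to be wired up manually.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

function makeLogger(threshold) {
  return (level, message) => {
    // Suppress messages less important than the configured threshold.
    if (LEVELS[level] > LEVELS[threshold]) return;
    const line = `${new Date().toISOString()} [${level}] ${message}`;
    // Errors go to stderr, everything else to stdout.
    (level === "error" ? console.error : console.log)(line);
    return line;
  };
}

const log = makeLogger("info");
log("info", "Server Listening On Port 3000"); // printed
log("debug", "verbose details"); // suppressed
```

A dedicated library replaces all of this with a single logger setup call, plus features (transports, formats, rotation) we would otherwise have to build ourselves.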
This post will focus on how to set up and use the Winston dependency to generate logging messages.\nWinston Logger Winston is one of the best and most widely used Node.js logging options. We are using it because it is very flexible, open-source, and has a great and supportive community, with approximately 4,000,000 weekly downloads.\nSetting Up Winston Logging In this section, we\u0026rsquo;ll go over how to install Winston and configure it with an Express server.\nSet Up Express.js Server We will begin by creating a simple Express application.\nCreate a folder called logging-file. Open a terminal in that directory and enter the following commands:\nnpm init -y npm i express winston Next, open the project in your preferred code editor.\nCreate a new file app.js and enter the following code to create a simple server on port 3000:\nconst express = require(\u0026#34;express\u0026#34;); const app = express(); app.get(\u0026#34;/\u0026#34;, (req, res, next) =\u0026gt; { console.log(\u0026#34;debug\u0026#34;, \u0026#34;Hello, Winston!\u0026#34;); console.log(\u0026#34;This is the home \u0026#39;/\u0026#39; route.\u0026#34;); res.status(200).send(\u0026#34;Logging Hello World..\u0026#34;); }); app.get(\u0026#34;/event\u0026#34;, (req, res, next) =\u0026gt; { try { throw new Error(\u0026#34;Not User!\u0026#34;); } catch (error) { console.error(\u0026#34;Events Error: Unauthenticated user\u0026#34;); res.status(500).send(\u0026#34;Error!\u0026#34;); } }); app.listen(3000, () =\u0026gt; { console.log(\u0026#34;Server Listening On Port 3000\u0026#34;); }); Run node app.js to start the server.\nIn the above example:\n The server starts and runs on http://localhost:3000. It responds to the / and /event routes. It will print the above log messages on our stdout and stderr when we visit each route in the browser.  Using Winston for logging Winston was installed above. 
Now let\u0026rsquo;s include it in our project.\nCreate a new file logger.js and insert the following code into the file:\nconst winston = require(\u0026#34;winston\u0026#34;); const logger = winston.createLogger({ level: \u0026#34;debug\u0026#34;, format: winston.format.json(), transports: [new winston.transports.Console()], }); module.exports = logger; In the code above, what we are doing is:\n Importing the Winston module into our project Creating a logger using the winston.createLogger() method.  Winston loggers can be generated using the default logger winston(), but the simplest method with more options is to create your own logger using the winston.createLogger() method.\nIn subsequent sections, we\u0026rsquo;ll examine all the options provided to us by createLogger() to customize our loggers.\nBut first, let\u0026rsquo;s see the Winston library in action, returning to our file app.js. Here we can replace all console statements with our newly created logger:\nconst express = require(\u0026#34;express\u0026#34;); const logger = require(\u0026#34;./logger\u0026#34;); const app = express(); app.get(\u0026#34;/\u0026#34;, (req, res, next) =\u0026gt; { logger.log(\u0026#34;debug\u0026#34;, \u0026#34;Hello, Winston!\u0026#34;); logger.debug(\u0026#34;This is the home \u0026#39;/\u0026#39; route.\u0026#34;); res.status(200).send(\u0026#34;Logging Hello World..\u0026#34;); }); app.get(\u0026#34;/event\u0026#34;, (req, res, next) =\u0026gt; { try { throw new Error(\u0026#34;Not User!\u0026#34;); } catch (error) { logger.error(\u0026#34;Events Error: Unauthenticated user\u0026#34;); res.status(500).send(\u0026#34;Error!\u0026#34;); } }); app.listen(3000, () =\u0026gt; { logger.info(\u0026#34;Server Listening On Port 3000\u0026#34;); }); Run node app.js to start the server.\nWhen we access the above routes via the paths / and /event, we get logs in JSON format.\n{\u0026#34;level\u0026#34;:\u0026#34;info\u0026#34;,\u0026#34;message\u0026#34;:\u0026#34;Server Listening On Port 
3000\u0026#34;} {\u0026#34;level\u0026#34;:\u0026#34;debug\u0026#34;,\u0026#34;message\u0026#34;:\u0026#34;Hello, Winston!\u0026#34;} {\u0026#34;level\u0026#34;:\u0026#34;debug\u0026#34;,\u0026#34;message\u0026#34;:\u0026#34;This is the home \u0026#39;/\u0026#39; route.\u0026#34;} {\u0026#34;level\u0026#34;:\u0026#34;error\u0026#34;,\u0026#34;message\u0026#34;:\u0026#34;Events Error: Unauthenticated user\u0026#34;} Winston Method Options As seen above, the winston.createLogger() method gives us a number of options to help format and transport our logs.\nLet us visit each option and examine the properties and features they offer.\nWinston Level Log level is the piece of information in our code that indicates the importance of a specific log message. Using appropriate log levels is one of the best practices for application logging.\n\u0026lsquo;winston\u0026rsquo; by default uses npm logging levels, where the severity of all levels is prioritized from the most important (0) to the least important (6):\n 0 - error: A serious problem or failure that halts the current activity but leaves the application in a recoverable state with no effect on other operations. The application can continue working. 1 - warn: A non-blocking warning about an unusual system exception. These logs provide context for a possible error. It logs warning signs that should be investigated. 2 - info: This denotes major events and informative messages about the application\u0026rsquo;s current state. Useful for tracking the flow of the application. 3 - http: This logs out HTTP request-related messages. HTTP transactions ranging from the host, path, response, requests, etc. 4 - verbose: Records detailed messages that may contain sensitive information. 5 - debug: Developers and internal teams should be the only ones to see these log messages. 
They should be disabled in production environments. These logs will help us debug our code. 6 - silly: The current stack trace of the calling function should be printed out when silly messages are called. This information can be used to help developers and internal teams debug problems.  Another option is to explicitly configure winston to use the level severities specified by the Syslog Protocol.\n 0 - Emergency: system is unusable 1 - Alert: action must be taken immediately 2 - Critical: critical conditions 3 - Error: error conditions 4 - Warning: warning conditions 5 - Notice: normal but significant condition 6 - Informational: informational messages 7 - Debug: debug-level messages  If we do not explicitly state our winston logging level, npm levels will be used.\nWhen we specify a logging level for our Winston logger, it will only log anything at that level or higher.\nFor example, looking at our logger.js file, the level there is set to debug. Hence the logger will only output debug and higher levels (info, warn and error).\nAny level lower than debug would not be displayed/output when we call our logger method in app.js.\nThere are two ways to assign levels to log messages.\n  Provide the logger method with the name of the logging level as a string. logger.log(\u0026quot;debug\u0026quot;, \u0026quot;Hello, Winston!\u0026quot;);\n  Call the level method directly. logger.debug(\u0026quot;The '/' route.\u0026quot;)\n  When we look at our previous output, we can see that the debug level was logged twice, using these different ways.\nWinston Format Winston output is in JSON format by default, with predefined fields level and message. Its formatting feature allows us to customize logged messages, which is useful if we are keen on the aesthetics and layout of our logs.\nWinston comes with a number of built-in formats. 
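Conceptually, a winston format is a transform applied to an info object ({ level, message, ... }), and combine() chains several such transforms together. The plain-JavaScript sketch below is illustrative only — it mimics the idea, not winston's actual internals, and all names in it are local stand-ins:

```javascript
// Illustrative sketch: a "format" as a function over an info object,
// and combine() as left-to-right function composition.
const combine = (...formats) => (info) =>
  formats.reduce((acc, format) => format(acc), info);

const label = (text) => (info) => ({ ...info, label: text });
const timestamp = () => (info) => ({ ...info, timestamp: new Date().toISOString() });
// printf-style final step: render the accumulated info object to a string.
const printf = (template) => (info) => template(info);

const myFormat = combine(
  label("demo"),
  timestamp(),
  printf(({ timestamp, label, level, message }) =>
    `${timestamp} [${label}] ${level}: ${message}`)
);

console.log(myFormat({ level: "info", message: "Server Listening On Port 3000" }));
```

Thinking of formats as composable transforms makes it clear why any number of them can be chained: each one only adds to or renders the info object it receives.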
Next, we\u0026rsquo;ll look at the format styles using printf() and prettyPrint().\nFormatting with printf() We can change the format of the log messages by creating our own formatting function:\nconst { format, createLogger, transports } = require(\u0026#34;winston\u0026#34;); const { combine, timestamp, label, printf } = format; const CATEGORY = \u0026#34;winston custom format\u0026#34;; //Using the printf format. const customFormat = printf(({ level, message, label, timestamp }) =\u0026gt; { return `${timestamp} [${label}] ${level}: ${message}`; }); const logger = createLogger({ level: \u0026#34;debug\u0026#34;, format: combine(label({ label: CATEGORY }), timestamp(), customFormat), transports: [new transports.Console()], }); module.exports = logger; In the code snippet above:\n We imported some extra format helpers from winston.format: label, combine, timestamp, and printf. We defined a function customFormat and passed it into the combine method. Any number of formats can be merged into a single format using the format.combine method. It is used to combine multiple formats.  Run node app.js to display logs:\n2022-07-10T00:30:49.559Z [winston custom format] info: Server Listening On Port 3000 2022-07-10T00:30:57.484Z [winston custom format] debug: Hello, Winston! 2022-07-10T00:30:57.485Z [winston custom format] debug: This is the home \u0026#39;/\u0026#39; route. 
2022-07-10T00:31:03.311Z [winston custom format] error: Events Error: Unauthenticated user When we access the above routes via the paths / and /event, we get our logs written in printf() format.\nFormatting with prettyPrint() Similarly, using format.combine(), we can display messages in prettyPrint() format:\nconst { format, createLogger, transports } = require(\u0026#34;winston\u0026#34;); const { combine, timestamp, label, printf, prettyPrint } = format; const CATEGORY = \u0026#34;winston custom format\u0026#34;; const logger = createLogger({ level: \u0026#34;debug\u0026#34;, format: combine( label({ label: CATEGORY }), timestamp({ format: \u0026#34;MMM-DD-YYYY HH:mm:ss\u0026#34;, }), prettyPrint() ), transports: [new transports.Console()], }); module.exports = logger; In the above code:\n we set the timestamp to a datetime format of our choice, and the message format to prettyPrint().  The pretty-printed log output of the command node app.js will now look something like this:\n{ message: \u0026#39;Server Listening On Port 3000\u0026#39;, level: \u0026#39;info\u0026#39;, label: \u0026#39;winston custom format\u0026#39;, timestamp: \u0026#39;Jul-10-2022 02:02:03\u0026#39; } { level: \u0026#39;debug\u0026#39;, message: \u0026#39;Hello, Winston!\u0026#39;, label: \u0026#39;winston custom format\u0026#39;, timestamp: \u0026#39;Jul-10-2022 02:02:08\u0026#39; } { message: \u0026#34;This is the home \u0026#39;/\u0026#39; route.\u0026#34;, level: \u0026#39;debug\u0026#39;, label: \u0026#39;winston custom format\u0026#39;, timestamp: \u0026#39;Jul-10-2022 02:02:08\u0026#39; } { message: \u0026#39;Events Error: Unauthenticated user\u0026#39;, level: \u0026#39;error\u0026#39;, label: \u0026#39;winston custom format\u0026#39;, timestamp: \u0026#39;Jul-10-2022 02:02:14\u0026#39; } Winston Transports Transports is a Winston feature that makes use of the Node.js networking, stream, and non-blocking I/O properties.\nTransport in Winston refers to the location where our log entries 
are sent to. Winston gives us a number of options for where we want our log messages to be sent.\nHere are the built-in transport options in Winston:\n Console File Http Stream  Visit this page to learn more about Winston transport options.\nWe\u0026rsquo;ve been using the Console transport by default to display log messages. Let\u0026rsquo;s look at how to use the File option.\nStoring Winston Logs to File Using the File transport option, we can save generated log messages to any file we want.\nTo accomplish this, the transport field in our code must either point to or generate a file.\nIn the transports section, let\u0026rsquo;s replace new transports.Console() in our logger.js with new transports.File():\nconst { createLogger, transports, format } = require(\u0026#34;winston\u0026#34;); const logger = createLogger({ level: \u0026#34;debug\u0026#34;, format: format.json(), //logger method...  transports: [ //new transports:  new transports.File({ filename: \u0026#34;logs/example.log\u0026#34;, }), ], //... }); module.exports = logger; In the above code, we are explicitly specifying that all logs generated should be saved in logs/example.log.\nAfter replacing the transports section with the code above and running node app.js, you will see that a new file example.log has been generated in a logs folder.\nIn large applications, recording every log message into a single file is not a good idea. This makes tracking specific issues difficult. Using multiple transports is one possible solution.\nWinston allows us to use multiple transports. 
It is common for applications to send the same log output to multiple locations.\nTo use multiple transports, we can simply add multiple transport implementations to our logging configuration:\nconst { format, createLogger, transports } = require(\u0026#34;winston\u0026#34;); const { combine, timestamp, label, printf, prettyPrint } = format; const CATEGORY = \u0026#34;winston custom format\u0026#34;; const logger = createLogger({ level: \u0026#34;debug\u0026#34;, format: combine( label({ label: CATEGORY }), timestamp({ format: \u0026#34;MMM-DD-YYYY HH:mm:ss\u0026#34;, }), prettyPrint() ), transports: [ new transports.File({ filename: \u0026#34;logs/example.log\u0026#34;, }), new transports.File({ level: \u0026#34;error\u0026#34;, filename: \u0026#34;logs/error.log\u0026#34;, }), new transports.Console(), ], }); module.exports = logger; With these changes in place, all messages will be saved in the example.log file, only the error messages will be saved in the error.log file, and the console transport will log messages to the console.\nEach transport definition can contain configuration settings such as level, filename, maxFiles, maxsize, handleExceptions, and much more.\nLog Rotation with Winston In the production environment, a lot of activity occurs, and storing log messages in files can get out of hand very quickly, even when using multiple transports. Over time, log messages become large and bulky to manage.\nTo solve these issues, logs can be rotated based on size, limit, and date. Log rotation removes old logs based on count, relevance, or elapsed days.\nWinston provides the winston-daily-rotate-file module. 
It is an external transport used for file rotation that keeps our logs manageable and up to date.\nFor example, we can choose to auto-delete old log files at regular intervals, such as every 30 days.\nwinston-daily-rotate-file is a transport maintained by winston contributors.\nLet\u0026rsquo;s go ahead and install it:\nnpm install winston-daily-rotate-file Open your logger.js file and replace its content with the following code:\nconst { format, createLogger, transports } = require(\u0026#34;winston\u0026#34;); const { combine, label, json } = format; require(\u0026#34;winston-daily-rotate-file\u0026#34;); //Label const CATEGORY = \u0026#34;Log Rotation\u0026#34;; //DailyRotateFile func() const fileRotateTransport = new transports.DailyRotateFile({ filename: \u0026#34;logs/rotate-%DATE%.log\u0026#34;, datePattern: \u0026#34;YYYY-MM-DD\u0026#34;, maxFiles: \u0026#34;14d\u0026#34;, }); const logger = createLogger({ level: \u0026#34;debug\u0026#34;, format: combine(label({ label: CATEGORY }), json()), transports: [fileRotateTransport, new transports.Console()], }); module.exports = logger; In the above code, we created a fileRotateTransport object of type DailyRotateFile with these properties:\n filename: the file name to be used for storing logs. The name can include the %DATE% placeholder, which is filled with the file\u0026rsquo;s creation date in the datePattern format. datePattern: represents the date format to be used for rotating. maxFiles: the maximum number (or age) of logs to keep. If it is not set, no logs will be removed. The above transport is set to delete logs after 14 days. Finally, we passed fileRotateTransport into the logger\u0026rsquo;s transports option.  
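To make the maxFiles: "14d" retention rule concrete, here is a small illustrative sketch. The function name and logic below are hypothetical (the actual pruning is handled by winston-daily-rotate-file itself); it only shows how files with an embedded %DATE% older than 14 days become deletion candidates:

```javascript
// Illustrative only: which rotated log files are older than maxFiles ("14d")?
function expiredLogs(filenames, now = new Date(), maxDays = 14) {
  const cutoff = now.getTime() - maxDays * 24 * 60 * 60 * 1000;
  return filenames.filter((name) => {
    // Extract the %DATE% part written with datePattern "YYYY-MM-DD".
    const match = name.match(/rotate-(\d{4}-\d{2}-\d{2})\.log$/);
    return match && new Date(match[1]).getTime() < cutoff;
  });
}

// The 20-day-old file is a deletion candidate; the 1-day-old file is kept.
console.log(expiredLogs(
  ["rotate-2022-07-01.log", "rotate-2022-07-20.log"],
  new Date("2022-07-21")
));
```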
When we run node app.js again, this generates a new rotate-%DATE%.log file in our logs folder and a JSON audit file containing our rotation settings.\nThere are more option settings to use in the winston-daily-rotate-file transport.\nMore On Winston Transports Winston provides more helpful modules: it supports the ability to create custom transports or leverage transports actively maintained by winston contributors and members of the community. Here are some popularly used custom transports to check out:\n winston-daily-rotate-file winston-syslog winston-cloudwatch winston-mongodb winston-elasticsearch  Node.js Logging Best Practices To derive great value from logging messages in applications, we should adhere to some widely accepted logging practices. This makes our logs easier to understand and ensures that we are logging relevant and useful information.\nIn light of this, let\u0026rsquo;s look at a list of some best practices for Node.js application logging.\nChoose a Standard Logging Option There are many third-party logging frameworks available to choose from. It is important to ensure that our chosen logging options are simple to use, configurable, and extensible enough to meet the needs of our application.\nWinston, Morgan, Pino, and Bunyan are some of the most popular ones.\nDon\u0026rsquo;t build your own!\nLog Using a Structured Format Logs are one of the most valuable tools for application developers when it comes to bug fixing and monitoring applications in the production environment.\nLog entries should be simple to read and include important details like event description, date and time of the event, application resources, severity level, and so on. Sometimes we want to use an algorithm to index, search, and categorize our log file based on certain parameters (date, user) or automate the log reviewing process. 
Our logs must be structured to support these capabilities easily.\nStructured logging is the process of using a predetermined message format for application logs, which allows logs to be treated as data sets rather than text. In structured logging we output log entries as simple relational data sets, making them easy to search and analyze.\nWe introduce structured logging to help clarify the meaning of log messages, making them readable for machines. Structured logs contain the same information as unstructured logs, but in a more structured format, most commonly JSON:\n{ \u0026#34;level\u0026#34;: \u0026#34;debug\u0026#34;, \u0026#34;label\u0026#34;: \u0026#34;winston custom format\u0026#34;, \u0026#34;timestamp\u0026#34;: \u0026#34;Jul-10-2022 02:02:08\u0026#34;, \u0026#34;host\u0026#34;: \u0026#34;192.168.0.1\u0026#34;, \u0026#34;pid\u0026#34;: \u0026#34;11111\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;This is the home \u0026#39;/\u0026#39; route.\u0026#34; } Most developers now use structured logging to allow application users to interact with log files in an automated manner.\nHave a look at our article dedicated to structured logging if you want to dive deeper.\nUse an Appropriate Log Level The appropriate severity of each event that takes place in our application should be indicated with the correct log level, to deliver the best degree of information for every possible circumstance.\nHaving the right log level makes it easy to set up an automated alerting system that notifies us when the application produces a log entry that demands immediate attention. This makes it easier to read logs and trace faults in our code.\nInclude a Timestamp It is very important to include timestamps in log entries. 
This helps distinguish logs that were recorded a few minutes ago from ones that were recorded weeks ago.\nTimestamps in logs make it easier to debug issues and help us determine how recent an issue is.\nProvide Context When composing a log message, make sure to use clear and concise wording, describing the event that occurred in as much detail as required, and always use a widely recognised character set.\nWe may not be able to gather enough information to establish the context of each logged event if our log message is not very detailed.\nEach log message should be useful and relevant to the event; always keep it concise and straight to the point.\nDon\u0026rsquo;t Log Sensitive Information Sensitive and confidential user information should never make it into your log entries, especially in production, so that it is not at risk of being used maliciously.\nIf an attacker can retrieve confidential information from our logs, then apart from putting users at risk of being attacked, we may also face fines under legal data compliance laws that can be enforced against such applications.\nSensitive information is everything from personally identifiable information (PII), health data, financial data, and passwords, to IP addresses and similar information.\nConclusion In this article, we covered a number of techniques that make it easier to create logs for our Node.js applications, exploring various logging concepts and how to create an efficient logging strategy for our application, while covering several logging best practices.\nAs a result, our applications will be more reliable and usable.\nStart Logging Today!!!\n","date":"July 25, 2022","image":"https://reflectoring.io/images/stock/0031-matrix-1200x628-branded_hufb3c207f9151b804bbf7fe86cefe5814_184798_650x0_resize_q90_box.jpg","permalink":"/node-logging-winston/","title":"Node.js Logging with Winston"},{"categories":["Java"],"contents":"Most of the web today exchanges data in JSON format. 
Web servers, web and mobile applications, even IoT devices all talk with each other using JSON. Therefore, an easy and flexible way of handling JSON is essential for any software to survive in today\u0026rsquo;s world.\n Example Code This article is accompanied by a working code example on GitHub. What is JSON? JSON stands for \u0026ldquo;JavaScript Object Notation\u0026rdquo;. It\u0026rsquo;s a text-based format for representing structured data based on JavaScript object syntax. Its dynamic and simple format made it extremely popular. In its essence, it follows a key-value map model allowing nested objects and arrays:\n{ \u0026#34;array\u0026#34;: [ 1, 2, 3 ], \u0026#34;boolean\u0026#34;: true, \u0026#34;color\u0026#34;: \u0026#34;gold\u0026#34;, \u0026#34;null\u0026#34;: null, \u0026#34;number\u0026#34;: 123, \u0026#34;object\u0026#34;: { \u0026#34;a\u0026#34;: \u0026#34;b\u0026#34;, \u0026#34;c\u0026#34;: \u0026#34;d\u0026#34; }, \u0026#34;string\u0026#34;: \u0026#34;Hello World\u0026#34; } What is Jackson? Jackson is mainly known as a library that converts between JSON strings and Plain Old Java Objects (POJOs). It also supports many other data formats such as CSV, YAML, and XML.\nJackson is preferred by many people because of its maturity (13 years old) and its excellent integration with popular frameworks, such as Spring. Moreover, it\u0026rsquo;s an open-source project that is actively developed and maintained by a wide community.\nUnder the hood, Jackson has three core packages: Streaming, Databind, and Annotations. With those, Jackson offers us three ways to handle JSON-POJO conversion:\nStreaming API It\u0026rsquo;s the fastest approach of the three and the one with the least overhead. It reads and writes JSON content as discrete events. The API provides a JsonParser that reads JSON content and a JsonGenerator that writes JSON content.\nTree Model The Tree Model creates an in-memory tree representation of the JSON document. 
An ObjectMapper is responsible for building a tree of JsonNode nodes. It is the most flexible approach as it allows us to traverse the node tree when the JSON document doesn\u0026rsquo;t map well to a POJO.\nData Binding It allows us to do conversion between POJOs and JSON documents using property accessors or using annotations. It offers two types of binding:\n  Simple Data Binding which converts JSON to and from Java Maps, Lists, Strings, Numbers, Booleans, and null objects.\n  Full Data Binding which converts JSON to and from any Java class.\n  ObjectMapper ObjectMapper is the most commonly used part of the Jackson library as it\u0026rsquo;s the easiest way to convert between POJOs and JSON. It lives in com.fasterxml.jackson.databind.\nThe readValue() method is used to parse (deserialize) JSON from a String, Stream, or File into POJOs.\nOn the other hand, the writeValue() method is used to turn POJOs into JSON (serialize).\nObjectMapper figures out which JSON field maps to which POJO field by matching the names of the JSON fields to the names of the getter and setter methods in the POJO.\nThat is done by removing the \u0026ldquo;get\u0026rdquo; and \u0026ldquo;set\u0026rdquo; parts of the names of the getter and setter methods and converting the first character of the remaining method name to lowercase.\nFor example, say we have a JSON field called name; ObjectMapper will match it with the getter getName() and the setter setName() in the POJO.\nObjectMapper is configurable and we can customize it to our needs either directly from the ObjectMapper instance or by using Jackson annotations, as we will see later.\nMaven Dependencies Before we start looking at code, we need to add the Jackson Maven dependency jackson-databind, which in turn transitively adds jackson-annotations and jackson-core\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.fasterxml.jackson.core\u0026lt;/groupId\u0026gt; 
\u0026lt;artifactId\u0026gt;jackson-databind\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.13.3\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; We are also using Lombok to handle the boilerplate code for getters, setters, and constructors.\nBasic JSON Serialization and Deserialization with Jackson Let\u0026rsquo;s go through Jackson\u0026rsquo;s most important use cases with code examples.\nBasic POJO / JSON Conversion Using ObjectMapper Let\u0026rsquo;s start by introducing a simple POJO called Employee:\n@Getter @AllArgsConstructor @NoArgsConstructor public class Employee { private String firstName; private String lastName; private int age; } First, let\u0026rsquo;s turn a POJO into a JSON string:\npublic class JacksonTest { ObjectMapper objectMapper = new ObjectMapper(); @Test void pojoToJsonString() throws JsonProcessingException { Employee employee = new Employee(\u0026#34;Mark\u0026#34;, \u0026#34;James\u0026#34;, 20); String json = objectMapper.writeValueAsString(employee); System.out.println(json); } } We should see this as output:\n{\u0026#34;firstName\u0026#34;:\u0026#34;Mark\u0026#34;,\u0026#34;lastName\u0026#34;:\u0026#34;James\u0026#34;,\u0026#34;age\u0026#34;:20} Now, let\u0026rsquo;s convert a JSON string into an Employee object using the ObjectMapper:\npublic class JacksonTest { ...
@Test void jsonStringToPojo() throws JsonProcessingException { String employeeJson = \u0026#34;{\\n\u0026#34; + \u0026#34; \\\u0026#34;firstName\\\u0026#34; : \\\u0026#34;Jalil\\\u0026#34;,\\n\u0026#34; + \u0026#34; \\\u0026#34;lastName\\\u0026#34; : \\\u0026#34;Jarjanazy\\\u0026#34;,\\n\u0026#34; + \u0026#34; \\\u0026#34;age\\\u0026#34; : 30\\n\u0026#34; + \u0026#34;}\u0026#34;; Employee employee = objectMapper.readValue(employeeJson, Employee.class); assertThat(employee.getFirstName()).isEqualTo(\u0026#34;Jalil\u0026#34;); } } The ObjectMapper also offers a rich API to read JSON from different sources into different formats, let\u0026rsquo;s check the most important ones.\nCreating a POJO from a JSON file This is done using the readValue() method.\nJSON file under test resources employee.json:\n{ \u0026#34;firstName\u0026#34;:\u0026#34;Homer\u0026#34;, \u0026#34;lastName\u0026#34;:\u0026#34;Simpson\u0026#34;, \u0026#34;age\u0026#34;:44 } public class JacksonTest { ... @Test void jsonFileToPojo() throws IOException { File file = new File(\u0026#34;src/test/resources/employee.json\u0026#34;); Employee employee = objectMapper.readValue(file, Employee.class); assertThat(employee.getAge()).isEqualTo(44); assertThat(employee.getLastName()).isEqualTo(\u0026#34;Simpson\u0026#34;); assertThat(employee.getFirstName()).isEqualTo(\u0026#34;Homer\u0026#34;); } } Creating a POJO from a Byte Array of JSON public class JacksonTest { ... 
@Test void byteArrayToPojo() throws IOException { String employeeJson = \u0026#34;{\\n\u0026#34; + \u0026#34; \\\u0026#34;firstName\\\u0026#34; : \\\u0026#34;Jalil\\\u0026#34;,\\n\u0026#34; + \u0026#34; \\\u0026#34;lastName\\\u0026#34; : \\\u0026#34;Jarjanazy\\\u0026#34;,\\n\u0026#34; + \u0026#34; \\\u0026#34;age\\\u0026#34; : 30\\n\u0026#34; + \u0026#34;}\u0026#34;; Employee employee = objectMapper.readValue(employeeJson.getBytes(), Employee.class); assertThat(employee.getFirstName()).isEqualTo(\u0026#34;Jalil\u0026#34;); } } Creating a List of POJOs from JSON Sometimes the JSON document isn\u0026rsquo;t an object, but a list of objects. Let\u0026rsquo;s see how we can read that.\nemployeeList.json:\n[ { \u0026#34;firstName\u0026#34;:\u0026#34;Marge\u0026#34;, \u0026#34;lastName\u0026#34;:\u0026#34;Simpson\u0026#34;, \u0026#34;age\u0026#34;:33 }, { \u0026#34;firstName\u0026#34;:\u0026#34;Homer\u0026#34;, \u0026#34;lastName\u0026#34;:\u0026#34;Simpson\u0026#34;, \u0026#34;age\u0026#34;:44 } ] public class JacksonTest { ... @Test void fileToListOfPojos() throws IOException { File file = new File(\u0026#34;src/test/resources/employeeList.json\u0026#34;); List\u0026lt;Employee\u0026gt; employeeList = objectMapper.readValue(file, new TypeReference\u0026lt;\u0026gt;(){}); assertThat(employeeList).hasSize(2); assertThat(employeeList.get(0).getAge()).isEqualTo(33); assertThat(employeeList.get(0).getLastName()).isEqualTo(\u0026#34;Simpson\u0026#34;); assertThat(employeeList.get(0).getFirstName()).isEqualTo(\u0026#34;Marge\u0026#34;); } } Creating a Map from JSON We can choose to parse the JSON to a Java Map, which is very convenient if we don\u0026rsquo;t know what to expect from the JSON file we are trying to parse. ObjectMapper will turn the name of each variable in the JSON to a Map key and the value of that variable to the value of that key.\npublic class JacksonTest { ... 
@Test void fileToMap() throws IOException { File file = new File(\u0026#34;src/test/resources/employee.json\u0026#34;); Map\u0026lt;String, Object\u0026gt; employee = objectMapper.readValue(file, new TypeReference\u0026lt;\u0026gt;(){}); assertThat(employee.keySet()).containsExactly(\u0026#34;firstName\u0026#34;, \u0026#34;lastName\u0026#34;, \u0026#34;age\u0026#34;); assertThat(employee.get(\u0026#34;firstName\u0026#34;)).isEqualTo(\u0026#34;Homer\u0026#34;); assertThat(employee.get(\u0026#34;lastName\u0026#34;)).isEqualTo(\u0026#34;Simpson\u0026#34;); assertThat(employee.get(\u0026#34;age\u0026#34;)).isEqualTo(44); } } Ignoring Unknown JSON Fields Sometimes the JSON we expect might have some extra fields that are not defined in our POJO. The default behavior for Jackson in such cases is to throw an UnrecognizedPropertyException. We can, however, tell Jackson not to stress out about unknown fields and simply ignore them. This is done by setting ObjectMapper\u0026rsquo;s FAIL_ON_UNKNOWN_PROPERTIES deserialization feature to false.\nemployeeWithUnknownProperties.json:\n{ \u0026#34;firstName\u0026#34;:\u0026#34;Homer\u0026#34;, \u0026#34;lastName\u0026#34;:\u0026#34;Simpson\u0026#34;, \u0026#34;age\u0026#34;:44, \u0026#34;department\u0026#34;: \u0026#34;IT\u0026#34; } public class JacksonTest { ...
@Test void fileToPojoWithUnknownProperties() throws IOException { File file = new File(\u0026#34;src/test/resources/employeeWithUnknownProperties.json\u0026#34;); objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false); Employee employee = objectMapper.readValue(file, Employee.class); assertThat(employee.getFirstName()).isEqualTo(\u0026#34;Homer\u0026#34;); assertThat(employee.getLastName()).isEqualTo(\u0026#34;Simpson\u0026#34;); assertThat(employee.getAge()).isEqualTo(44); } } Working with Dates in Jackson Date conversions can be tricky as dates can be represented in many formats and levels of precision (seconds, milliseconds, etc.).\nDate to JSON Before talking about Jackson and date conversion, we need to talk about the new date/time API introduced in Java 8. It was introduced to address the shortcomings of the older java.util.Date and java.util.Calendar. We are mainly interested in using the LocalDate class, which represents a plain date without time-of-day information.\nTo support it, we need to add an extra module to Jackson so that it can handle LocalDate:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.fasterxml.jackson.datatype\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;jackson-datatype-jsr310\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.13.3\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Then we need to tell the ObjectMapper to look for and register the new module we\u0026rsquo;ve just added:\npublic class JacksonTest { ObjectMapper objectMapper = new ObjectMapper().findAndRegisterModules(); ...
@Test void orderToJson() throws JsonProcessingException { Order order = new Order(1, LocalDate.of(1900,2,1)); String json = objectMapper.writeValueAsString(order); System.out.println(json); } } The default behavior for Jackson is then to serialize the date as an array of [year, month, day]. So, the output would be {\u0026quot;id\u0026quot;:1,\u0026quot;date\u0026quot;:[1900,2,1]} \nWe can, however, tell Jackson what format we want the date to be in. This can be done using the @JsonFormat annotation:\npublic class Order { private int id; @JsonFormat(pattern = \u0026#34;dd/MM/yyyy\u0026#34;) private LocalDate date; } @Test void orderToJsonWithDate() throws JsonProcessingException { Order order = new Order(1, LocalDate.of(2023, 1, 1)); String json = objectMapper.writeValueAsString(order); System.out.println(json); } This should output {\u0026quot;id\u0026quot;:1,\u0026quot;date\u0026quot;:\u0026quot;01/01/2023\u0026quot;}.\nJSON to Date We can use the same configuration as above to read a JSON field into a date.\norder.json:\n{ \u0026#34;id\u0026#34; : 1, \u0026#34;date\u0026#34; : \u0026#34;30/04/2000\u0026#34; } public class JacksonTest { ... @Test void fileToOrder() throws IOException { File file = new File(\u0026#34;src/test/resources/order.json\u0026#34;); Order order = objectMapper.readValue(file, Order.class); assertThat(order.getDate().getYear()).isEqualTo(2000); assertThat(order.getDate().getMonthValue()).isEqualTo(4); assertThat(order.getDate().getDayOfMonth()).isEqualTo(30); } } Jackson Annotations Annotations in Jackson play a major role in customizing how the JSON/POJO conversion process takes place. We have already seen an example of this with the date conversion, where we used the @JsonFormat annotation. Annotations mainly affect how the data is read, written, or both.
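As a side note, for java.time types the pattern string passed to @JsonFormat follows the JDK's DateTimeFormatter syntax, so we can preview what a pattern like dd/MM/yyyy formats and parses to with the standard library alone, without Jackson. The sketch below is hypothetical (the class name DatePatternCheck is ours, not from the article):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Hypothetical standalone sketch: previewing the "dd/MM/yyyy" pattern
// with the JDK only, no Jackson on the classpath.
public class DatePatternCheck {
    public static void main(String[] args) {
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("dd/MM/yyyy");

        // Formatting a LocalDate, as happens during serialization
        System.out.println(LocalDate.of(2023, 1, 1).format(formatter));

        // Parsing a string back, as happens during deserialization
        LocalDate parsed = LocalDate.parse("30/04/2000", formatter);
        System.out.println(parsed.getYear() + " " + parsed.getMonthValue()
                + " " + parsed.getDayOfMonth());
    }
}
```

If Jackson ever produces a different string than this formatter for the same pattern, the issue lies in the mapper configuration rather than in the pattern itself.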
Let\u0026rsquo;s explore some of those annotations based on their categories.\nRead Annotations They affect how Jackson converts JSON into POJOs.\n@JsonSetter This is useful when we want to match a field in the JSON string to a field in the POJO where their names don\u0026rsquo;t match.\n@NoArgsConstructor @AllArgsConstructor @Getter public class Car { @JsonSetter(\u0026#34;carBrand\u0026#34;) private String brand; } { \u0026#34;carBrand\u0026#34; : \u0026#34;BMW\u0026#34; } public class JacksonTest { ... @Test void fileToCar() throws IOException { File file = new File(\u0026#34;src/test/resources/car.json\u0026#34;); Car car = objectMapper.readValue(file, Car.class); assertThat(car.getBrand()).isEqualTo(\u0026#34;BMW\u0026#34;); } } @JsonAnySetter This annotation is useful for cases where the JSON contains some fields that are not declared in the POJO. It is used with a setter method that is called for every unrecognized field.\npublic class Car { @JsonSetter(\u0026#34;carBrand\u0026#34;) private String brand; private Map\u0026lt;String, String\u0026gt; unrecognizedFields = new HashMap\u0026lt;\u0026gt;(); @JsonAnySetter public void allSetter(String fieldName, String fieldValue) { unrecognizedFields.put(fieldName, fieldValue); } } carUnrecognized.json file:\n{ \u0026#34;carBrand\u0026#34; : \u0026#34;BMW\u0026#34;, \u0026#34;productionYear\u0026#34;: 1996 } public class JacksonTest { ... @Test void fileToUnrecognizedCar() throws IOException { File file = new File(\u0026#34;src/test/resources/carUnrecognized.json\u0026#34;); Car car = objectMapper.readValue(file, Car.class); assertThat(car.getUnrecognizedFields()).containsKey(\u0026#34;productionYear\u0026#34;); } } Write Annotations They affect how Jackson converts POJOs into JSON.\n@JsonGetter This is useful when we want to map a POJOs field to a JSON field using a different name. 
For example, say we have this Cat class with the field name, but we want its JSON name to be catName.\n@NoArgsConstructor @AllArgsConstructor public class Cat { private String name; @JsonGetter(\u0026#34;catName\u0026#34;) public String getName() { return name; } } public class JacksonTest { ... @Test void catToJson() throws JsonProcessingException { Cat cat = new Cat(\u0026#34;Monica\u0026#34;); String json = objectMapper.writeValueAsString(cat); System.out.println(json); } } This will output:\n{ \u0026#34;catName\u0026#34;:\u0026#34;Monica\u0026#34; } @JsonAnyGetter This annotation allows us to treat a Map object as a source of JSON properties. Say we have this map as a field in the Cat class:\n@NoArgsConstructor @AllArgsConstructor public class Cat { private String name; @JsonAnyGetter Map\u0026lt;String, String\u0026gt; map = Map.of( \u0026#34;name\u0026#34;, \u0026#34;Jack\u0026#34;, \u0026#34;surname\u0026#34;, \u0026#34;wolfskin\u0026#34; ); ... } @Test void catToJsonWithMap() throws JsonProcessingException { Cat cat = new Cat(\u0026#34;Monica\u0026#34;); String json = objectMapper.writeValueAsString(cat); System.out.println(json); } Then this will output:\n{ \u0026#34;catName\u0026#34;:\u0026#34;Monica\u0026#34;, \u0026#34;name\u0026#34;:\u0026#34;Jack\u0026#34;, \u0026#34;surname\u0026#34;:\u0026#34;wolfskin\u0026#34; } Read/Write Annotations These annotations affect both reading and writing JSON.\n@JsonIgnore The annotated field is ignored while both writing and reading JSON.\n@AllArgsConstructor @NoArgsConstructor @Getter public class Dog { private String name; @JsonIgnore private int age; } public class JacksonTest { ...
@Test void dogToJson() throws JsonProcessingException { Dog dog = new Dog(\u0026#34;Max\u0026#34;, 3); String json = objectMapper.writeValueAsString(dog); System.out.println(json); } } This will print out {\u0026quot;name\u0026quot;:\u0026quot;Max\u0026quot;}.\nThe same applies to reading into a POJO as well.\nSay we have this dog.json file:\n{ \u0026#34;name\u0026#34; : \u0026#34;bobby\u0026#34;, \u0026#34;age\u0026#34; : 5 } public class JacksonTest { ... @Test void fileToDog() throws IOException { File file = new File(\u0026#34;src/test/resources/dog.json\u0026#34;); Dog dog = objectMapper.readValue(file, Dog.class); assertThat(dog.getName()).isEqualTo(\u0026#34;bobby\u0026#34;); assertThat(dog.getAge()).isZero(); } } Note that since age is a primitive int, the ignored field ends up with its default value 0 rather than null. Jackson has many more useful annotations that give us more control over the serialization/deserialization process. The full list of them can be found on Jackson\u0026rsquo;s GitHub repository.\nSummary   Jackson is one of the most powerful and popular libraries for JSON processing in Java.\n  Jackson consists of three main modules: Streaming API, Tree Model, and Data Binding.\n  Jackson provides an ObjectMapper that is highly configurable to suit our needs by setting its properties and also by using annotations.\n  You can find all the example code in the GitHub repo.\n","date":"July 15, 2022","image":"https://reflectoring.io/images/stock/0124-jackson-1200x628_hu1a98359d3203167d34c5e568c91c7f26_222783_650x0_resize_q90_box.jpg","permalink":"/jackson/","title":"All You Need To Know About JSON Parsing With Jackson"},{"categories":["kotlin"],"contents":"Coroutines are a design pattern for writing asynchronous programs that run multiple tasks concurrently.\nIn asynchronous programs, multiple tasks execute in parallel on separate threads without waiting for the other tasks to complete.
Threads are an expensive resource and too many threads lead to a performance overhead due to high memory consumption and CPU usage.\nCoroutines are an alternate way of writing asynchronous programs but are much more lightweight compared to threads. They are computations that run on top of threads.\nWe can suspend a coroutine to allow other coroutines to run on the same thread. We can further resume the coroutine to run on the same or a different thread.\nWhen a coroutine is suspended, the corresponding computation is paused, removed from the thread, and stored in memory leaving the thread free to execute other activities. This way we can run many coroutines concurrently using only a small pool of threads thereby using very limited system resources.\nIn this post, we will understand how to use coroutines in Kotlin.\n Example Code This article is accompanied by a working code example on GitHub. Running a Concurrent Program with Thread Let us start by running a program that will execute some statements and also call a long-running function:\n//Todo: statement1 //Todo: call longRunningFunction //Todo: statement2 ... ... 
If we execute all the statements in sequence on a single thread, the longRunningFunction will block the thread from executing the remaining statements and the program as a whole will take a long time to complete.\nTo make it more efficient, we will execute the longRunningFunction in a separate thread and let the program continue executing on the main thread:\nimport kotlin.concurrent.thread fun main() { println(\u0026#34;My program runs...: ${Thread.currentThread().name}\u0026#34;) thread { longRunningTask() } println(\u0026#34;My program run ends...: ${Thread.currentThread().name}\u0026#34;) } fun longRunningTask(){ println(\u0026#34;executing longRunningTask on...: ${Thread.currentThread().name}\u0026#34;) Thread.sleep(1000) println(\u0026#34;longRunningTask ends on thread ...: ${Thread.currentThread().name}\u0026#34;) } Here we are simulating the long-running behavior by calling Thread.sleep() inside the function: longRunningTask(). We are calling this function inside the thread function. This will allow the main thread to continue executing without waiting for the longRunningTask() function to complete.\nThe longRunningTask() function will execute in a different thread as we can observe from the output of the println statements by running this program :\nMy program runs...: main My program run ends...: main executing longRunningTask on...: Thread-0 longRunningTask ends on thread ...: Thread-0 Process finished with exit code 0 As we can see in the output, the program starts running on the thread: main. It executes the longRunningTask() on thread Thread-0 but does not wait for it to complete and proceeds to execute the next println() statement again on the thread: main. 
However, the program ends with exit code 0 only after the longRunningTask finishes executing on Thread-0.\nWe will change this program to run using coroutines in the next sections.\nAdding the Dependencies for Coroutines The Kotlin language gives us basic constructs for writing coroutines but more useful constructs built on top of the basic coroutines are available in the kotlinx-coroutines-core library. So we need to add the dependency to the kotlinx-coroutines-core library before starting to write coroutines:\nOur build tool of choice is Gradle, so the dependency on the kotlinx-coroutines-core library will look like this:\ndependencies { implementation \u0026#39;org.jetbrains.kotlin:kotlin-stdlib\u0026#39; implementation \u0026#39;org.jetbrains.kotlinx:kotlinx-coroutines-core:1.6.2\u0026#39; } Here we have added the dependency on the Kotlin standard library and the kotlinx-coroutines-core library.\nA Simple Coroutine in Kotlin Coroutines are known as lightweight threads which means we can run code on coroutines similar to how we run code on threads. 
Let us change the earlier program to run the long-running function in a coroutine instead of a separate thread as shown below:\nfun main() = runBlocking{ println(\u0026#34;My program runs...: ${Thread.currentThread().name}\u0026#34;) launch { // starting a coroutine  longRunningTask() // calling the long running function  } println(\u0026#34;My program run ends...: ${Thread.currentThread().name}\u0026#34;) } suspend fun longRunningTask(){ println(\u0026#34;executing longRunningTask on...: ${Thread.currentThread().name}\u0026#34;) delay(1000) // simulating the slow behavior by adding a delay  println( \u0026#34;longRunningTask ends on thread ...: ${Thread.currentThread().name}\u0026#34;) } Let us understand what this code does: The launch{} function starts a new coroutine that runs concurrently with the rest of the code.\nrunBlocking{} also starts a new coroutine but blocks the current thread (main) for the duration of the call, until all the code inside the runBlocking{} function body completes its execution.\nThe longRunningTask function is a suspending function. It suspends the coroutine without blocking the underlying thread, allowing other coroutines to run and use the underlying thread for their code.\nWe will understand more about starting new coroutines using functions like launch{} and runBlocking{} in a subsequent section on coroutine builders and scopes.\nWhen we run this program, we will get the following output:\nMy program runs...: main My program run ends...: main executing longRunningTask on...: main longRunningTask ends on thread ...: main Process finished with exit code 0 We can see from this output that the program runs on the thread named main. It does not wait for the longRunningTask to finish and proceeds to execute the next statement and prints My program run ends...: main.
The coroutine executes concurrently on the same thread, as we can see from the output of the two print statements in the longRunningTask function.\nWe will next understand the different components of a coroutine in the following sections.\nIntroducing Suspending Functions A suspending function is the main building block of a coroutine. It is just like any other regular function, which can optionally take one or more inputs and return an output. The thread running a regular function blocks other functions from running until the execution is complete. This will cause a negative performance impact if the function is long-running, for example one pulling data from an external API over a network.\nTo mitigate this, we need to change the regular function into a suspending function and call it from a coroutine scope. Calling the suspending function will pause/suspend the function and allow the thread to perform other activities. The paused/suspended function can resume after some time to run on the same or a different thread.\nThe syntax of a suspending function is also similar to a regular function, with the addition of the suspend keyword as shown below:\nsuspend fun longRunningTask(){ ... ... } Functions marked with the suspend keyword are transformed at compile time and made asynchronous.
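To make this compile-time transformation less abstract, here is a minimal hypothetical sketch using only the Kotlin standard library (no kotlinx-coroutines); the function greet() and its contents are our own illustrative names, not from the article. The compiler rewrites every suspend function to accept a hidden Continuation parameter, and the stdlib startCoroutine() extension lets us supply that parameter ourselves:

```kotlin
import kotlin.coroutines.Continuation
import kotlin.coroutines.CoroutineContext
import kotlin.coroutines.EmptyCoroutineContext
import kotlin.coroutines.startCoroutine

// A suspend function; the compiler rewrites it to take a hidden
// Continuation<String> parameter in addition to what we declare.
suspend fun greet(): String = "hello from a suspend function"

fun main() {
    // startCoroutine() passes this Continuation as that hidden parameter.
    ::greet.startCoroutine(object : Continuation<String> {
        override val context: CoroutineContext = EmptyCoroutineContext

        // Called when the suspend function completes (or fails).
        override fun resumeWith(result: Result<String>) {
            println(result.getOrThrow())
        }
    })
    // greet() never actually suspends, so it completed synchronously above.
}
```

Libraries like kotlinx-coroutines-core build their launch{} and async{} machinery on top of exactly this continuation mechanism.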
Let us look at an example of calling a suspending function along with some regular functions:\nfun main() = runBlocking{ println(\u0026#34;${Instant.now()}: My program runs...: ${Thread.currentThread().name}\u0026#34;) val productId = findProduct() launch (Dispatchers.Unconfined) { // start a coroutine  val price = fetchPrice(productId) // call the suspending function  } updateProduct() println(\u0026#34;${Instant.now()}: My program run ends...: \u0026#34; + \u0026#34;${Thread.currentThread().name}\u0026#34;) } suspend fun fetchPrice(productId: String) : Double{ println(\u0026#34;${Instant.now()}: fetchPrice starts on...: ${Thread.currentThread().name} \u0026#34;) delay(2000) // simulate the slow function by adding a delay  println(\u0026#34;${Instant.now()}: fetchPrice ends on...: ${Thread.currentThread().name} \u0026#34;) return 234.5 } fun findProduct() : String{ println(\u0026#34;${Instant.now()}: findProduct on...: ${Thread.currentThread().name}\u0026#34;) return \u0026#34;P12333\u0026#34; } fun updateProduct() : String{ println(\u0026#34;${Instant.now()}: updateProduct on...: ${Thread.currentThread().name}\u0026#34;) return \u0026#34;Product updated\u0026#34; } As we can see in this example, the findProduct() and updateProduct() functions are regular functions. The fetchPrice() function is a slow function, which we have simulated by adding a call to the delay() function.\nIn the main() function, we first call the findProduct() function and then call the fetchPrice() suspending function from a coroutine started with the launch{} function. When fetchPrice() suspends, the underlying thread is freed to run other code, and the coroutine resumes later. After that, we call the updateProduct() function.\nThe launch{} function starts a coroutine as explained earlier. We are passing a coroutine dispatcher: Dispatchers.Unconfined to the launch function, which controls the threads on which the coroutine will start and resume.
We will understand more about coroutine dispatchers in the subsequent sections.\nLet us run this program to observe how the coroutine suspends and allows the thread to run the other regular functions:\n2022-06-24T04:09:40..: My program runs...: main 2022-06-24T04:09:40..: findProduct on...: main 2022-06-24T04:09:40..: fetchPrice starts on...: main 2022-06-24T04:09:40..: updateProduct on...: main 2022-06-24T04:09:40..: My program run ends...: main 2022-06-24T04:09:42..: fetchPrice ends on.: kotlinx.coroutines.DefaultExecutor Process finished with exit code 0 As we can see from the output, the findProduct() and updateProduct() functions are called on the main thread. The fetchPrice() function starts on the main thread and is suspended to allow execution of the updateProduct() function and the remaining statements on the main thread. The fetchPrice() function resumes on a different thread to execute its final println() statement.\nIt is also important to understand that suspending functions can only be invoked from another suspending function or from a coroutine. The delay() function called inside the fetchPrice() function is also a suspending function, provided by the kotlinx-coroutines-core library.\nCoroutine Scopes and Builders As explained in the previous sections, we can run suspending functions only in coroutine scopes started by coroutine builders like launch{}.\nWe use a coroutine builder to start a new coroutine and establish the corresponding scope to delimit the lifetime of the coroutine. The coroutine scope provides lifecycle methods for coroutines that allow us to start and stop them.\nLet us understand three coroutine builders in Kotlin: runBlocking{}, launch{}, and async{}:\nStarting Coroutines by Blocking the Running Thread with runBlocking Coroutines are more efficient than threads because they are suspended and resumed instead of blocking execution. However, we need to block threads in some specific use cases.
For example, in the main() function, we need to block the thread; otherwise, our program will end without waiting for the coroutines to complete.\nThe runBlocking coroutine builder starts a coroutine by blocking the currently executing thread until all the code in the coroutine has completed.\nThe signature of the runBlocking function looks like this:\nexpect fun \u0026lt;T\u0026gt; runBlocking(context: CoroutineContext = EmptyCoroutineContext, block: suspend CoroutineScope.() -\u0026gt; T): T The function takes two parameters:\n context: Provides the context of the coroutine represented by the CoroutineContext interface which is an indexed set of Element instances. block: The coroutine code which is invoked. It takes a function type: suspend CoroutineScope.() -\u0026gt; T  The runBlocking{} coroutine builder is designed to bridge regular blocking code to libraries that are written in suspending style. So the most appropriate places to use runBlocking{} are main() functions and JUnit tests.\nA runBlocking{} function called from a main() function looks like this:\nfun main() = runBlocking{ ... ... } We have used runBlocking{} to block execution in all the main() functions in our earlier examples.\nSince runBlocking{} blocks the executing thread, it is rarely used inside function bodies, because threads are expensive resources and blocking them is inefficient and undesirable.
The signature of the launch{} function is:\nfun CoroutineScope.launch( context: CoroutineContext = EmptyCoroutineContext, start: CoroutineStart = CoroutineStart.DEFAULT, block: suspend CoroutineScope.() -\u0026gt; Unit ): Job The function takes three parameters and returns a Job object:\n context: Provides the context of the coroutine represented by the CoroutineContext interface which is an indexed set of Element instances. start: Start option for the coroutine. The default value is CoroutineStart.DEFAULT which immediately schedules the coroutine for execution. We can set the start option to CoroutineStart.LAZY to start the coroutine lazily. block: The coroutine code which is invoked. It takes a function type: suspend CoroutineScope.() -\u0026gt; Unit  A new coroutine started using the launch{} function looks like this:\nfun main() = runBlocking{ println(\u0026#34;My program runs...: ${Thread.currentThread().name}\u0026#34;) // calling launch passing all 3 parameters  val job:Job = launch (EmptyCoroutineContext, CoroutineStart.DEFAULT){ longRunningTask() } // Another way of calling launch passing only the block parameter  // context and start parameters are set to their default values  val job1:Job = launch{longRunningTask()} job.join() println(\u0026#34;My program run ends...: ${Thread.currentThread().name}\u0026#34;) } suspend fun longRunningTask(){ println(\u0026#34;executing longRunningTask on...: ${Thread.currentThread().name}\u0026#34;) delay(1000) println(\u0026#34;longRunningTask ends on thread ...: ${Thread.currentThread().name}\u0026#34;) } Here launch{} function is called inside the runBlocking{} function. 
The launch{} function starts the coroutine which will execute the longRunningTask function and returns a Job object immediately as a reference.\nWe are calling the join() method on this Job object, which suspends the calling coroutine until the launched coroutine completes, leaving the current thread free to do other work (like executing another coroutine) in the meantime.\nWe can also use the Job object to cancel the running coroutine by calling its cancel() method.\nReturn Result of Suspending Function to the Launching Thread with async async is another way to start a coroutine. Sometimes when we start a coroutine, we might need a value to be returned from that coroutine back to the thread that launched it.\nasync starts a coroutine in parallel similar to launch, but it additionally lets the caller retrieve a result from the coroutine once it completes. The signature of async is shown below:\nfun \u0026lt;T\u0026gt; CoroutineScope.async( context: CoroutineContext = EmptyCoroutineContext, start: CoroutineStart = CoroutineStart.DEFAULT, block: suspend CoroutineScope.() -\u0026gt; T ): Deferred\u0026lt;T\u0026gt; The async{} function takes the same three parameters as a launch{} function but returns a Deferred\u0026lt;T\u0026gt; instance instead of a Job.
We can fetch the result of the computation performed in the coroutine from the Deferred\u0026lt;T\u0026gt; instance by calling the await() method.\nWe can use async as shown in this example:\nfun main() = runBlocking{ println(\u0026#34;program runs...: ${Thread.currentThread().name}\u0026#34;) val taskDeferred = async { generateUniqueID() } val taskResult = taskDeferred.await() println(\u0026#34;program run ends...: ${taskResult} ${Thread.currentThread().name}\u0026#34;) } suspend fun generateUniqueID(): String{ println(\u0026#34;executing generateUniqueID on...: ${Thread.currentThread().name}\u0026#34;) delay(1000) println(\u0026#34;generateUniqueID ends on thread ...: ${Thread.currentThread().name}\u0026#34;) return UUID.randomUUID().toString() } In this example, we are generating a unique identifier in a suspending function: generateUniqueID, which is called from a coroutine started with async. The async function returns an instance of Deferred\u0026lt;T\u0026gt;. The type of T is inferred from the value returned by the coroutine.\nHere, the type of T is String since the suspending function generateUniqueID returns a value of type String.\nNext, we are calling the await() method on the deferred instance, taskDeferred, to extract the result.\nWe get the following output by running the program:\nprogram runs...: main executing generateUniqueID on...: main generateUniqueID ends on thread ...: main program run ends...: f18ac8c7-25ef-4755-8ab8-73c8219aadd3 main Process finished with exit code 0 Here we can see the result of the suspending function printed in the output.\nCoroutine Dispatchers: Determine the Thread for the Coroutine to Run A coroutine dispatcher determines the thread or thread pool the corresponding coroutine uses for its execution. All coroutines execute in a context represented by the CoroutineContext interface. The CoroutineContext is an indexed set of elements and is accessible inside the coroutine through the property: coroutineContext.
The coroutine dispatcher is an important element of this indexed set.\nThe coroutine dispatcher can confine the execution of a coroutine to a specific thread, dispatch it to a thread pool, or allow it to run unconfined.\nAs we have seen in the previous section, all coroutine builders like launch{} and async{} accept an optional CoroutineContext as a parameter in their signature:\nfun \u0026lt;T\u0026gt; CoroutineScope.async( context: CoroutineContext = EmptyCoroutineContext, start: CoroutineStart = CoroutineStart.DEFAULT, block: suspend CoroutineScope.() -\u0026gt; T ): Deferred\u0026lt;T\u0026gt; The CoroutineContext is used to explicitly specify the dispatcher for the new coroutine. Kotlin has multiple implementations of CoroutineDispatchers which we can specify when creating coroutines with coroutine builders like launch and async. Let us look at some of the commonly used dispatchers:\nInheriting the Dispatcher from the Parent Coroutine When the launch{} function is used without parameters, it inherits the CoroutineContext (and thus the dispatcher) from the CoroutineScope it is being launched from. Let us observe this behavior with the help of the example below:\nfun main() = runBlocking { launch { println( \u0026#34;launch default: running in thread ${Thread.currentThread().name}\u0026#34;) longTask() } } suspend fun longTask(){ println(\u0026#34;executing longTask on...: ${Thread.currentThread().name}\u0026#34;) delay(1000) println(\u0026#34;longTask ends on thread ...: ${Thread.currentThread().name}\u0026#34;) } Here the launch{} coroutine builder inherits the context and hence the dispatcher of the runBlocking coroutine scope which runs in the main thread. 
Hence the coroutine started by the launch{} coroutine builder also uses the same dispatcher which makes the coroutine run in the main thread.\nWhen we run this program, we can observe this behavior in the below output:\ncompleted tasks launch default: running in thread main executing longTask on...: main longTask ends on thread ...: main Process finished with exit code 0 As we can see in the output, the coroutine started by the launch{} coroutine builder also runs in the main thread.\nDefault Dispatcher for Running CPU-Intensive Operations The default dispatcher is used when no other dispatcher is explicitly specified in the scope. It is represented by Dispatchers.Default and uses a shared background pool of threads. The pool of threads has a size equal to the number of cores on the machine where our code is running with a minimum of 2 threads.\nLet us run the following code to check this behavior:\nfun main() = runBlocking { repeat(1000) { launch(Dispatchers.Default) { // will get dispatched to DefaultDispatcher  println(\u0026#34;Default : running in thread ${Thread.currentThread().name}\u0026#34;) longTask() } } } Here is a snippet of the output showing the threads used by the coroutine:\nDefault : running in thread DefaultDispatcher-worker-1 Default : running in thread DefaultDispatcher-worker-2 Default : running in thread DefaultDispatcher-worker-4 Default : running in thread DefaultDispatcher-worker-3 Default : running in thread DefaultDispatcher-worker-5 Default : running in thread DefaultDispatcher-worker-6 Default : running in thread DefaultDispatcher-worker-7 Default : running in thread DefaultDispatcher-worker-8 Default : running in thread DefaultDispatcher-worker-9 Default : running in thread DefaultDispatcher-worker-10 Default : running in thread DefaultDispatcher-worker-3 Default : running in thread DefaultDispatcher-worker-2 Default : running in thread DefaultDispatcher-worker-2 Default : running in thread DefaultDispatcher-worker-6 Default : running 
in thread DefaultDispatcher-worker-4 We can see 10 threads from the thread pool used for running the coroutines.\nWe can also use limitedParallelism to restrict the number of coroutines being actively executed in parallel as shown in this example:\nfun main() = runBlocking { // will get dispatched to DefaultDispatcher with  // a limit of 3 coroutines running in parallel  val dispatcher = Dispatchers.Default.limitedParallelism(3) repeat(1000) { launch(dispatcher) { println(\u0026#34;Default : running in thread ${Thread.currentThread().name}\u0026#34;) longTask() } } } Here we have created the limited dispatcher once, outside the repeat loop, so that all 1000 coroutines share the same view and at most 3 of them run in parallel (each call to limitedParallelism creates an independent view with its own limit).\nCreating a New Thread with newSingleThreadContext newSingleThreadContext creates a new thread solely dedicated to running the coroutine. This dispatcher guarantees that the coroutine is executed in a specific thread at all times:\nfun main() = runBlocking { launch(newSingleThreadContext(\u0026#34;MyThread\u0026#34;)) { // will get its own new thread MyThread  println(\u0026#34;newSingleThreadContext: running in thread ${Thread.currentThread().name}\u0026#34;) longTask() } println(\u0026#34;completed tasks\u0026#34;) } In this example, we are executing our coroutine in a dedicated thread named MyThread as can be seen in the output obtained by running the program:\nnewSingleThreadContext: running in thread MyThread Process finished with exit code 0 However, a dedicated thread is an expensive resource. In a real application, the thread must either be released using the close function when it is no longer needed, or reused throughout the application by storing its reference in a top-level variable.\nRun Unconfined with Dispatchers.Unconfined The Dispatchers.Unconfined coroutine dispatcher starts a coroutine in the caller thread, but only until the first suspension point.
After suspension, it resumes the coroutine in the thread that is fully determined by the suspending function that was invoked.\nLet us modify our previous example to pass Dispatchers.Unconfined as a parameter to the launch{} function:\nfun main() = runBlocking { launch(Dispatchers.Unconfined) { // not confined -- will work with main thread  println( \u0026#34;Unconfined : running in thread ${Thread.currentThread().name}\u0026#34;) longTask() } println(\u0026#34;completed tasks\u0026#34;) } When we run this program, we get the following output:\nUnconfined : running in thread main executing longTask on...: main // coroutine starts completed tasks // printed by main thread with the coroutine suspended longTask ends on thread ...: kotlinx.coroutines.DefaultExecutor // coroutine resumes Process finished with exit code 0 As we can see from the output, the coroutine starts running in the main thread as soon as it is called. It then suspends at the delay call in longTask, allowing the main thread to continue. The coroutine resumes on a different thread: kotlinx.coroutines.DefaultExecutor to execute the println statement in the longTask function.\nThe unconfined dispatcher is appropriate for coroutines that neither consume CPU time nor update any shared data (like UI) confined to a specific thread. The unconfined dispatcher should not be used in general code. It is helpful in situations where some operation in a coroutine must be performed immediately.\nCancelling Coroutine Execution We might like to cancel long-running jobs before they finish.
An example of a situation when we would want to cancel a job would be when we have navigated to a different screen in a UI-based application (like Android) and are no longer interested in the result of the long-running function.\nAnother example would be when we want to exit a process due to some exception and want to perform a clean-up by canceling all the long-running jobs which are still running.\nIn an earlier example, we have already seen the launch{} function returning a Job. The Job object provides a cancel() method to cancel a running coroutine which we can use as shown in this example:\nfun main() = runBlocking{ println(\u0026#34;My program runs...: ${Thread.currentThread().name}\u0026#34;) val job:Job = launch { longRunningFunction() } delay(1500) // delay ending the program  job.cancel() // cancel the job  job.join() // wait for the job to be cancelled  // job.cancelAndJoin() // we can also call this in a single step  println( \u0026#34;My program run ends...: ${Thread.currentThread().name}\u0026#34;) } suspend fun longRunningFunction(){ repeat(1000){ i -\u0026gt; println(\u0026#34;executing step $i on thread: ${Thread.currentThread().name}\u0026#34;) delay(600) } } In this example, we are executing a print statement from the longRunningFunction after every 600 milliseconds. This simulates a long-running function with 1000 steps and executes the print statement at the start of every step. We get the following output when we run this program:\nMy program runs...: main executing step 0 on thread: main executing step 1 on thread: main executing step 2 on thread: main My program run ends...: main Process finished with exit code 0 We can see the longRunningFunction executing till step 2 and then stopping after we call cancel on the job object.
Instead of two statements for cancel and join, we can also use a Job extension function: cancelAndJoin that combines the cancel and join invocations.\nCanceling Coroutines As explained in the previous section, we need to cancel coroutines to avoid doing more work than needed and to save memory and processing resources. We need to ensure that we control the life of the coroutine and cancel it when it is no longer needed.\nCoroutine code has to cooperate to be cancellable. We need to ensure that all the code in a coroutine is cooperative with cancellation, by checking for cancellation periodically or before beginning any long-running task.\nThere are two approaches to making coroutine code cancellable:\nPeriodically Invoke a Suspending Function yield We can periodically invoke a suspending function like yield to check for the cancellation status of a coroutine and yield the thread (or thread pool) of the current coroutine to allow other coroutines to run on the same thread (or thread pool):\nfun main() = runBlocking{ try { val job1 = launch { repeat(20){ println( \u0026#34;processing job 1: ${Thread.currentThread().name}\u0026#34;) yield() } } val job2 = launch { repeat(20){ println( \u0026#34;processing job 2: ${Thread.currentThread().name}\u0026#34;) yield() } } job1.join() job2.join() } catch (e: CancellationException) { // clean up code  } } Here we are running two coroutines with each of them calling the yield function to allow the other coroutine to run on the main thread. The output snippet of running this program is shown below:\nprocessing job 1: main processing job 2: main processing job 1: main processing job 2: main processing job 1: main We can see the output from the first coroutine after which it calls yield. This suspends the first coroutine and allows the second coroutine to run.
Similarly, the second coroutine is also calling the yield function and allowing the first coroutine to resume execution.\nWhen the cancellation of a coroutine is accepted, a CancellationException is thrown. We can catch this exception and run our clean-up code there.\nExplicitly Check the Cancellation Status with isActive We can also explicitly check for the cancellation status of a running coroutine with isActive which is an extension property available inside the coroutine via the CoroutineScope object:\nfun main() = runBlocking{ println(\u0026#34;program runs...: ${Thread.currentThread().name}\u0026#34;) val job:Job = launch { val files = File(\u0026#34;\u0026lt;File Path\u0026gt;\u0026#34;).listFiles() var loop = 0 while (isActive \u0026amp;\u0026amp; loop \u0026lt; files.size) { // check the cancellation status  readFile(files.get(loop++)) } } delay(1500) job.cancelAndJoin() println(\u0026#34;program run ends...: ${Thread.currentThread().name}\u0026#34;) } suspend fun readFile(file: File) { println(\u0026#34;reading file ${file.name}\u0026#34;) if (file.isFile) { // process file  } delay(100) } Here we are processing a set of files from a directory. We are checking for the cancellation status with isActive before processing each file, so that the loop exits as soon as the coroutine is canceled. The isActive property returns true when the current job is still active (not completed and not canceled yet).\nConclusion In this article, we understood the different ways of using Coroutines in Kotlin. Here are some important points to remember:\n A coroutine is a concurrency design pattern used to write asynchronous programs. Coroutines are computations that run on top of threads that can be suspended and resumed. When a coroutine is \u0026ldquo;suspended\u0026rdquo;, the corresponding computation is paused, removed from the thread, and stored in memory, leaving the thread free to execute other activities. Coroutines are started by coroutine builders which also establish a scope.
launch{}, async{}, and runBlocking{} are different types of coroutine builders. The launch function returns a Job which we can use to cancel the coroutine. The async function returns a Deferred\u0026lt;T\u0026gt; instance. We can fetch the result of the computation performed in the coroutine from the Deferred\u0026lt;T\u0026gt; instance by calling the await() method. Coroutine cancellation is cooperative. Coroutine code has to cooperate to be cancellable. Otherwise, we cannot cancel it midway during its execution even after calling Job.cancel(). The async function starts a coroutine concurrently, similar to the launch{} function, but additionally makes the result of the coroutine available through the returned Deferred\u0026lt;T\u0026gt; instance. A coroutine dispatcher determines the thread or threads the corresponding coroutine uses for its execution. The coroutine dispatcher can confine coroutine execution to a specific thread, dispatch it to a thread pool, or let it run unconfined. Coroutines are lightweight compared to threads. A blocked thread cannot do any other work, while a suspended coroutine frees its thread to continue execution, thus allowing the same thread to be used for running multiple coroutines.  You can refer to all the source code used in the article on GitHub.\n","date":"July 14, 2022","image":"https://reflectoring.io/images/stock/0054-bee-1200x628-branded_hu178224517b326c40da4b12810c856ac9_134300_650x0_resize_q90_box.jpg","permalink":"/understanding-kotlin-coroutines-tutorial/","title":"Understanding Kotlin Coroutines"},{"categories":["Java"],"contents":"Gradle is a build automation tool that supports multi-language development. It helps to build, test, publish, and deploy software on any platform. In this article, we will learn about the Gradle Wrapper - what it is, when to use it, how to use it, etc.\nWhat Is the Gradle Wrapper? The Gradle Wrapper is basically a script. It ensures that the required version of Gradle is downloaded and used for building the project.
This is the recommended approach to executing Gradle builds.\nWhen To Use the Gradle Wrapper? The Wrapper is an effective way to make the build independent of the environment. No matter where the end-user is building the project, it will always download the appropriate version of Gradle and use it accordingly.\nAs a result, developers can get up and running with a Gradle project quickly and reliably without following manual installation processes. The standardized build process makes it easy to provision a new Gradle version to different execution environments.\nHow the Gradle Wrapper Works When a user builds the project using the Gradle Wrapper, the following steps happen:\n The Wrapper script downloads the required Gradle distribution from the server if necessary. Then, it stores and unpacks the distribution under the Gradle user home location (the default location is .gradle/wrapper/dists under the user home). We are all set to start building the project using the Wrapper script.  Please Note The Wrapper will not download the Gradle distribution if it is already cached in the system.\n How To Use the Gradle Wrapper There are mainly three scenarios for Gradle Wrapper usage. Let\u0026rsquo;s learn more about these.\nSetting Up the Gradle Wrapper for a New Project First, we need to install Gradle to invoke the Wrapper task. You can refer to the official installation guide. Once the installation is complete, we are good to go for the next step.\nIn this tutorial, we will use Gradle version 7.4.2.\nNow, let\u0026rsquo;s open the terminal, navigate to the required folder/directory and run the command gradle init.\nAfter starting the init command, we choose the project type, build script DSL, and project name.
Let\u0026rsquo;s go ahead with the default options that will look something like this:\n$ gradle init Select type of project to generate: 1: basic 2: application 3: library 4: Gradle plugin Enter selection (default: basic) [1..4] Select build script DSL: 1: Groovy 2: Kotlin Enter selection (default: Groovy) [1..2] Generate build using new APIs and behavior (some features may change in the next minor release)? (default: no) [yes, no] Project name (default: gradle-wrapper-demo): \u0026gt; Task :init Get more help with your project: Learn more about Gradle by exploring our samples at https://docs.gradle.org/7.4.2/samples BUILD SUCCESSFUL in 3m 25s 2 actionable tasks: 2 executed If we now check the file structure in this directory, we will see:\n. ├── build.gradle ├── gradle │ └── wrapper │ ├── gradle-wrapper.jar │ └── gradle-wrapper.properties ├── gradlew ├── gradlew.bat └── settings.gradle Please Note We need to commit these files into version control so that the Wrapper script becomes accessible to other developers in the team.\n We will explore the file contents in the next section.\nWe just tried the first way to create the Wrapper. Let\u0026rsquo;s move on to the next.\nSetting Up the Gradle Wrapper for an Existing Project You may also want to create the Wrapper for your existing Gradle projects. There is a wrapper task available for this use case. The only pre-requisite is that you already have a settings.gradle file in your project directory.\nNow, when we run the command gradle wrapper from that directory, it will create the Wrapper specific files:\n$ gradle wrapper BUILD SUCCESSFUL in 697ms 1 actionable task: 1 executed If you need help on the Wrapper task, then the gradle help --task wrapper command is all you need.\nExecuting a Gradle Build Using the Wrapper Once we have a project bootstrapped with the Wrapper files, running the Gradle build is straightforward.\n For Linux/macOS users, the gradlew script can be run from the terminal. 
For Windows users, the gradlew.bat script can be run from the terminal/command prompt.  Here is a sample output of the script when run from Linux/macOS:\n$ ./gradlew \u0026gt; Task :help Welcome to Gradle 7.4.2. To run a build, run gradlew \u0026lt;task\u0026gt; ... To see a list of available tasks, run gradlew tasks To see more detail about a task, run gradlew help --task \u0026lt;task\u0026gt; To see a list of command-line options, run gradlew --help For more detail on using Gradle, see https://docs.gradle.org/7.4.2/userguide/command_line_interface.html For troubleshooting, visit https://help.gradle.org BUILD SUCCESSFUL in 980ms 1 actionable task: 1 executed As you can see, by default, when we don\u0026rsquo;t pass the task name in the command, the default help task is run.\nTo build the project, we can use the build task, i.e., ./gradlew build or gradlew.bat build. Using the Wrapper script, you can now execute any Gradle command without having to install Gradle separately.\nPlease Note We will use ./gradlew in the following examples. Please use gradlew.bat instead of ./gradlew if you are on a Windows system.\n What Does the Gradle Wrapper Contain? In a typical Wrapper setup, you will encounter the following files:\n   File Name Usage     gradle-wrapper.jar The Wrapper JAR file containing code to download the Gradle distribution.   gradle-wrapper.properties The properties file configuring the Wrapper runtime behavior. Most importantly, this is where you can control the version of Gradle that is used for builds.   gradlew A shell script for executing the build.   gradlew.bat A Windows batch script for running the build.    Normally the gradle-wrapper.properties contains the following data:\ndistributionBase=GRADLE_USER_HOME distributionPath=wrapper/dists distributionUrl=https\\://services.gradle.org/distributions/gradle-7.4.2-bin.zip zipStoreBase=GRADLE_USER_HOME zipStorePath=wrapper/dists How to Update the Gradle Version? 
You might have to update the Gradle version in the future. We can achieve this by running the command ./gradlew wrapper --gradle-version \u0026lt;required_gradle_version\u0026gt; from a project containing Wrapper scripts.\nThen, we can check if the version is duly updated by running the ./gradlew --version command.\nYou can also change the version number in the distributionUrl property in the gradle-wrapper.properties file. The next time ./gradlew is called, it will download the new version of Gradle.\nHow to Use a Different Gradle URL? Sometimes we may have to download the Gradle distribution from a different source than the one mentioned in the default configuration. In such cases, we can use the --gradle-distribution-url flag while generating the Wrapper, e.g., ./gradlew wrapper --gradle-distribution-url \u0026lt;custom_gradle_download_url\u0026gt;.\nConclusion In this article, we learned what problem the Gradle Wrapper solves, how to use it, and how it works. You can read a similar article on the Maven Wrapper on this blog.\n","date":"July 4, 2022","image":"https://reflectoring.io/images/stock/0076-airmail-1200x628-branded_hu11b26946a4345a7ce4c5465e5e627838_150840_650x0_resize_q90_box.jpg","permalink":"/gradle-wrapper/","title":"Run Your Gradle Build Anywhere with the Gradle Wrapper"},{"categories":["aws"],"contents":"Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency. A content delivery network consists of a globally-distributed network of servers that can cache static content, like images, media, stylesheets, JavaScript files, etc., or other bulky media, in locations close to consumers.
This helps in improving the download speed of this static content.\nIn this tutorial, we will store the contents of a Single Page Application (SPA) in an S3 bucket and configure CloudFront to deliver this application globally.\nContent Distribution through CloudFront CloudFront delivers all content through a network of data centers called edge locations. Edge locations are also known as Points of Presence (POP) which are part of AWS\u0026rsquo;s global network infrastructure and are usually deployed in major cities and highly populated areas across the globe.\nWhenever a viewer requests content that we are serving with CloudFront, the request is routed to the edge location closest to the viewer, which provides the lowest latency. This results in content being delivered to the viewer with the best possible performance.\nIf the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. If the content is not in that edge location, CloudFront retrieves it from an origin configured by us like an S3 bucket, or an HTTP server.\nWe create a CloudFront distribution to tell CloudFront where we want the content to be delivered from. We define origin servers, like an Amazon S3 bucket where we upload our files like HTML pages, images, media files, etc.\nWhen the distribution is deployed, CloudFront assigns a domain name to the distribution and sends our distribution\u0026rsquo;s configuration to all the edge locations or points of presence (POPs).\nCreating a Single Page Application as Static Content For our example, we will create some static content by packaging a barebones Single Page Application (SPA) which will contain JavaScript, HTML, images, and stylesheets. We will then serve this application from CloudFront.\nWe can create a Single Page Application with one of the many frameworks available like Angular, React, Vue, etc.
Let us create a SPA with the React framework by running the following npm command:\nnpx create-react-app mystore Running this command bootstraps a React project under a folder mystore, with the following files in its src folder:\nsrc ├── App.css ├── App.js ├── App.test.js ├── index.css ├── index.js ├── logo.svg ├── reportWebVitals.js └── setupTests.js Let us run this application with the below commands:\ncd mystore npm start This will launch the default React app in a browser.\nWe can evolve this application further to build useful features but for this tutorial, we will deploy this React app using CloudFront.\nFor deployment, we will first build the project by running:\nnpm run build This will package the application in a build directory with the below contents:\nbuild ├── asset-manifest.json ├── favicon.ico ├── index.html ├── logo192.png ├── logo512.png ├── manifest.json ├── robots.txt └── static ├── css │ ├── main.073c9b0a.css │ └── main.073c9b0a.css.map ├── js │ ├── 787.dd20aa60.chunk.js │ ├── 787.dd20aa60.chunk.js.map │ ├── main.fa9c6efd.js │ ├── main.fa9c6efd.js.LICENSE.txt │ └── main.fa9c6efd.js.map └── media └── logo.6ce24c58023cc2f8fd88fe9d219db6c6.svg These are a set of static files which we can host on any HTTP server for serving our web content. For our example, we will copy these static contents to an S3 bucket as explained in the next section and then render them through CloudFront in a subsequent section.\nHosting the Static Content in an S3 Bucket Amazon Simple Storage Service (S3) is a service for storing and retrieving any kind of file, called an object in S3 parlance. Buckets are containers for storing objects in S3. We upload files to an S3 bucket, where each file is stored as an S3 object.
We will upload all the files under the build folder, which was created when we built the React project in the previous section, to an S3 bucket.\nCreating the S3 Bucket Let us create the S3 bucket from the AWS administration console.\nFor creating the S3 bucket we are providing a name and selecting the region as us-east-1 where the bucket will be created.\nWe will allow public access to the bucket by unchecking the checkbox for Block all public access as shown below:\nThis will allow public access to the S3 bucket which will make the files in the bucket accessible with a public URL over the internet. This is, however, not a secure practice, which we will address in a later section.\nEnabling the Static Web Hosting Property on the S3 Bucket After creating the bucket, we will configure the bucket for hosting web assets by modifying the bucket property for static web hosting:\nWe will enable the property: static web hosting of the bucket as shown below:\nWe have also set the Index document and Error document to index.html.\nAfter we enable the static web hosting, the section under our static web hosting property of our S3 bucket will look like this:\nWe can see a property Bucket website endpoint which contains the URL to be used for navigating to our website after copying the static files to the S3 bucket.\nTypes of S3 Bucket Endpoints Since we will be configuring the S3 bucket URL as the origin when we create a CloudFront distribution in subsequent sections, it will be useful to understand the two types of endpoints associated with S3 buckets:\nREST API Endpoint This endpoint is in the format: {bucket-name}.s3-{region}.amazonaws.com. In our example, the REST API endpoint is http://io.myapp.s3-us-east-1.amazonaws.com.\nThe characteristics of REST API endpoints are:\n They support SSL connections Connections to the REST API endpoint provide end-to-end encryption They can use Origin Access Identity (OAI) to restrict access to the contents of the S3 bucket.
Origin Access Identity (OAI) is a special CloudFront user that is associated with CloudFront distributions. This is further explained in a subsequent section titled \u0026ldquo;Securing Access to Content\u0026rdquo;. They support both private and public access to the S3 buckets.  Bucket Website Endpoint This endpoint is generated when we enable static website hosting on the bucket and is in the format: {bucket-name}.s3-website-{region}.amazonaws.com. In our example, the Bucket Website Endpoint is http://io.myapp.s3-website-us-east-1.amazonaws.com.\nThe characteristics of Bucket Website Endpoints are:\n They do not support SSL connections They support redirect requests They cannot use Origin Access Identity (OAI) to restrict access to the contents of the S3 bucket. They serve the default index document (Default page) They support only publicly readable content  We will use the Bucket Website Endpoint in our example when we set up a CloudFront distribution to serve content from a public S3 bucket.\nAttaching an S3 Bucket Policy We also need to attach a bucket policy to our S3 bucket. The bucket policy, written in JSON, provides access to the objects stored in the bucket:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Sid\u0026#34;: \u0026#34;Statement1\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Principal\u0026#34;: \u0026#34;*\u0026#34;, \u0026#34;Action\u0026#34;: \u0026#34;s3:GetObject\u0026#34;, \u0026#34;Resource\u0026#34;: \u0026#34;arn:aws:s3:::io.myapp/*\u0026#34; } ] } This bucket policy provides read-only access to all the objects stored in our bucket as represented by the resource ARN: arn:aws:s3:::io.myapp/*.\nUploading Static Content to our S3 Bucket After finishing all the configurations of our bucket, we will upload all our static content under the build folder of our project in our local machine to our S3 bucket.
We can upload files from the AWS admin console by drag \u0026amp; drop or by using the Add files or Add folder to upload files and folders from our local machine as shown below:\nWe can see the upload status of all the files after the upload is completed as shown below:\nServing the Web Site from the S3 Bucket With all the files uploaded we will be able to see our application by navigating to the bucket website endpoint: http://io.myapp.s3-website-us-east-1.amazonaws.com.\nAssuming we have customers accessing this website from all parts of the globe, they will all be downloading the static contents from the same S3 bucket in the us-east region in our example. This will result in giving different user experiences to customers depending on their location.\nCustomers closer to the us-east region will experience a lower latency compared to the customers who are accessing this website from other continents. We will improve this behavior in the next section with the help of Amazon\u0026rsquo;s CloudFront service.\nCreating the CloudFront Distribution When we want to use CloudFront to distribute our content, we need to create a distribution. We use a CloudFront distribution to specify the location of the content that we want to deliver from CloudFront along with the configuration to track and manage its delivery.\nLet us create a CloudFront Distribution from the AWS Management Console:\nWe have set the origin domain to the bucket website endpoint of our S3 bucket created in the previous section and left all other configurations as default. 
The distribution takes a few minutes to change to enabled status.\nAfter it is active, we can see the CloudFront distribution domain name in the CloudFront console:\nWe can now navigate to our website using this CloudFront distribution domain name: https://d1yda4k0ocquhm.cloudfront.net.\nSecuring Access to Content In the earlier sections, we used the static assets residing in a public S3 bucket which makes it insecure by making all the content accessible to users if the S3 bucket URL is known to them. CloudFront provides many configurations to secure access to content. Let us look at a few of those configurations:\nSecuring Content using Origin Access Identity (OAI) Origin Access Identity (OAI) is a special CloudFront user that is associated with our distributions. We can restrict access to the S3 bucket by updating the bucket policy to provide read permission to the OAI defined as the Principal in the policy definition as shown below:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Sid\u0026#34;: \u0026#34;2\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Principal\u0026#34;: { \u0026#34;AWS\u0026#34;: \u0026#34;arn:aws:iam::cloudfront:user/ CloudFront Origin Access Identity \u0026lt;OAI\u0026gt;\u0026#34; }, \u0026#34;Action\u0026#34;: \u0026#34;s3:GetObject\u0026#34;, \u0026#34;Resource\u0026#34;: \u0026#34;arn:aws:s3:::\u0026lt;S3 bucket name\u0026gt;/*\u0026#34; } ] } Let us create another CloudFront Distribution but this time configured to use an OAI to access the contents in the S3 bucket:\nThis time we have chosen the S3 REST API endpoint from the selection box as the origin domain instead of the bucket website endpoint. 
In the section for S3 bucket access, we have selected the option: Yes use OAI and created an OAI: my-oai to associate with this distribution.\nWe have also chosen the option of updating the bucket policy manually after creating the distribution.\nWe can also reuse an OAI if we have one, instead of creating a new OAI. An AWS account can have up to 100 CloudFront origin access identities (OAIs). Since we can add an OAI to multiple CloudFront distributions, one OAI per AWS account is sufficient in most cases.\nIf we had not created an OAI and added it to our CloudFront distribution while creating the distribution, we can create it later and add it to the distribution by using either the CloudFront console or the CloudFront API.\nAfter creating the distribution, let us update the bucket policy of our S3 bucket to look like this:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Sid\u0026#34;: \u0026#34;1\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Principal\u0026#34;: { \u0026#34;AWS\u0026#34;: \u0026#34;arn:aws:iam::cloudfront:user/ CloudFront Origin Access Identity E32V87I09SD18I\u0026#34; }, \u0026#34;Action\u0026#34;: \u0026#34;s3:GetObject\u0026#34;, \u0026#34;Resource\u0026#34;: \u0026#34;arn:aws:s3:::io.myapp/*\u0026#34; } ] } This bucket policy grants the CloudFront origin access identity (OAI) with id: E32V87I09SD18I permission to get (read) all objects in our Amazon S3 bucket.
We have set the Principal to the OAI id, which can be found in the AWS Management Console.\nWe have also disabled public access to the S3 bucket and turned off static website hosting.\nAfter the CloudFront distribution is deployed and active, we can navigate to our website using the CloudFront distribution domain name: https://d4l1ajcygy8jp.cloudfront.net/index.html:\nSome of the other CloudFront configurations for securing content are:\nSecuring using HTTPS In our previous distribution setting, we used the domain name that CloudFront assigned to our distribution, such as dxxxxxxabcdef8.cloudfront.net, and could navigate to our website using the HTTPS protocol. In this configuration, CloudFront provides the SSL/TLS certificate.\nWe can also use our own domain name, such as mydomain.com, with an SSL/TLS certificate provided by AWS Certificate Manager (ACM), or import a certificate from a third-party certificate authority into ACM or the IAM certificate store. Please refer to the official documentation for the configuration steps.\nWhen we access content from CloudFront, the request passes through two legs:\n Viewer to CloudFront CloudFront to the Origin server  We can choose to secure either one or both legs by encrypting the communication with the HTTPS protocol.\nWe can configure CloudFront to require HTTPS between viewers and CloudFront by changing the Viewer Protocol Policy to either Redirect HTTP to HTTPS or HTTPS Only.\nWhen our origin is an S3 bucket, our options for using HTTPS for communications with CloudFront depend on the bucket configuration. 
If our S3 bucket is configured as a website endpoint, we cannot configure CloudFront to use HTTPS to communicate with our origin because S3 does not support HTTPS connections in that configuration.\nWhen our origin is an S3 bucket that supports HTTPS communication, CloudFront always forwards requests to S3 using the protocol that the viewers used to send their requests.\nRestricting Content based on Geography We can use geographic restrictions to prevent users in specific geographic locations from accessing content distributed through a CloudFront distribution. We can either use the CloudFront geographic restrictions feature or use a third-party geolocation service.\nHere we are configuring the Allow list option to allow viewers to access our content only if they are in one of the approved countries on the allow list. Alternatively, we can use the Block list option to prevent viewers from accessing our content if they are in one of the banned countries on our block list.\nMonitor Requests Using AWS WAF (Web Application Firewall) AWS WAF is a web application firewall that monitors the HTTP and HTTPS requests that are forwarded to CloudFront. We can specify different conditions, such as the values of query strings or the IP addresses that requests originate from, based on which CloudFront responds to requests either with the requested content or with an HTTP status code 403 (Forbidden).\nWe can create an AWS WAF web access control list (web ACL) and associate the CloudFront distribution with the web ACL when creating or updating the distribution.\nConclusion In this article, we configured Amazon CloudFront to distribute static content stored in an S3 bucket. Here is a summary of the steps for our quick reference:\n We created an S3 bucket with public access. We enabled static website hosting on the S3 bucket and got a bucket website endpoint. We added an S3 bucket policy to allow access to the contents of the S3 bucket for all users (*). 
We uploaded some static contents in the form of JavaScript, images, HTML, and CSS files of a Single Page Application (SPA) built using the React library. With this setup, we could view the website in our browser using the S3 bucket website endpoint. We finally created a CloudFront distribution and configured the S3 bucket website endpoint as the origin. After the CloudFront distribution was deployed, we could view the website in a browser using the CloudFront URL. Next, we secured the S3 bucket by removing public access. We disabled static website hosting on the S3 bucket. We created another CloudFront distribution with the S3 REST API endpoint. We created an Origin Access Identity (OAI) and associated it with the distribution. We updated the S3 bucket policy to allow access only to the OAI coming from CloudFront. After this CloudFront distribution was deployed, we could view the website in a browser using the CloudFront URL.  ","date":"June 8, 2022","image":"https://reflectoring.io/images/stock/0041-adapter-1200x628-branded_hudbdb52a7685a8d0e28c5b58dcc10fabe_81226_650x0_resize_q90_box.jpg","permalink":"/distribute-static-content-with-cloudfront-tutorial/","title":"Distribute Static Content with Amazon CloudFront"},{"categories":["Java"],"contents":"When we define multi-layered architectures, we often tend to represent data differently at each layer. This makes the interactions between the layers tedious and cumbersome.\nConsider a client-server application that requires us to pass different objects at different layers: it takes a lot of boilerplate code to handle the interactions, data-type conversions, and so on.\nIf we have an object or payload with only a few fields, this boilerplate code is fine to implement once. 
But if we have an object with more than 20-30 fields, and many nested objects each with a good number of fields of their own, this code becomes quite tedious.\n Example Code This article is accompanied by a working code example on GitHub. Why should we use a Mapper? The problem discussed above can be reduced by introducing the DTO (Data Transfer Object) pattern, which requires defining simple classes to transfer data between layers.\nA server can define a DTO for the API response payload that is different from the persisted entity objects, so that it doesn’t end up exposing the schema of the data access layer. Client applications can then accept the data in a custom-defined DTO containing only the required fields.\nStill, the DTO pattern depends heavily on mappers, the logic that converts incoming data into DTOs and vice versa. This involves boilerplate code and introduces overhead that can’t be overlooked, especially when dealing with large data shapes.\nThis is where we look for automation that can convert Java beans for us.\nIn this article, we will take a look at MapStruct, an annotation processor plugged into the Java compiler that can automatically generate mappers at build-time. In comparison to other mapping frameworks, MapStruct generates bean mappings at compile-time, which ensures high performance and enables fast developer feedback and thorough error checking.\nMapStruct Dependency Setup MapStruct is a Java-based annotation processor which can be configured using Maven, Gradle, or Ant. It consists of the following libraries:\n org.mapstruct:mapstruct: This contains the core annotations, such as @Mapping. org.mapstruct:mapstruct-processor: This is the annotation processor which generates mapper implementations for the above mapping annotations.  
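To make the boilerplate concrete before we set up MapStruct, the following is the kind of hand-written mapping code that MapStruct generates for us at build-time (UserEntity, UserDto, and ManualUserMapper are illustrative names for this sketch, not classes from the example project):

```java
// Hand-written mapping boilerplate of the kind MapStruct generates automatically.
class ManualUserMapper {

    // Null-safe, field-by-field copy -- tedious to write and keep in sync
    // once the objects grow beyond a handful of fields.
    static UserDto toDto(UserEntity entity) {
        if (entity == null) {
            return null;
        }
        UserDto dto = new UserDto();
        dto.id = entity.id;
        dto.name = entity.name;
        return dto;
    }

    public static void main(String[] args) {
        UserDto dto = toDto(new UserEntity(1, "John Doe"));
        System.out.println(dto.id + " " + dto.name); // 1 John Doe
    }
}

// Illustrative source and target types.
class UserEntity {
    int id;
    String name;

    UserEntity(int id, String name) {
        this.id = id;
        this.name = name;
    }
}

class UserDto {
    int id;
    String name;
}
```

Two fields are manageable; repeating this for dozens of fields and nested objects is exactly the work we want to delegate to an annotation processor.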
Maven To configure MapStruct for a Maven based project, we need to add following into the pom.xml:\n\u0026lt;properties\u0026gt; \u0026lt;org.mapstruct.version\u0026gt;1.4.2.Final\u0026lt;/org.mapstruct.version\u0026gt; \u0026lt;maven.compiler.source\u0026gt;8\u0026lt;/maven.compiler.source\u0026gt; \u0026lt;maven.compiler.target\u0026gt;8\u0026lt;/maven.compiler.target\u0026gt; \u0026lt;/properties\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.mapstruct\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;mapstruct\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${org.mapstruct.version}\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.apache.maven.plugins\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-compiler-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;3.8.1\u0026lt;/version\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;source\u0026gt;1.8\u0026lt;/source\u0026gt; \u0026lt;target\u0026gt;1.8\u0026lt;/target\u0026gt; \u0026lt;annotationProcessorPaths\u0026gt; \u0026lt;path\u0026gt; \u0026lt;groupId\u0026gt;org.mapstruct\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;mapstruct-processor\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${org.mapstruct.version}\u0026lt;/version\u0026gt; \u0026lt;/path\u0026gt; \u0026lt;/annotationProcessorPaths\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; Gradle In order to configure MapStruct in a Gradle project, we need to add following to the build.gradle file:\nplugins { id \u0026#39;net.ltgt.apt\u0026#39; version \u0026#39;0.20\u0026#39; } apply plugin: \u0026#39;net.ltgt.apt-idea\u0026#39; apply plugin: \u0026#39;net.ltgt.apt-eclipse\u0026#39; ext { mapstructVersion = \u0026#34;1.4.2.Final\u0026#34; } 
dependencies { ... implementation \u0026#34;org.mapstruct:mapstruct:${mapstructVersion}\u0026#34; annotationProcessor \u0026#34;org.mapstruct:mapstruct-processor:${mapstructVersion}\u0026#34; // If we are using mapstruct in test code  testAnnotationProcessor \u0026#34;org.mapstruct:mapstruct-processor:${mapstructVersion}\u0026#34; } The net.ltgt.apt plugin is responsible for the annotation processing. We can apply the apt-idea and apt-eclipse plugins depending on the IDE that we are using.\nThird-Party API Integration with Lombok Many of us would like to use MapStruct alongside Project Lombok to take advantage of automatically generated getters, setters. The mapper code generated by MapStruct will use these Lombok-generated getters, setters, and builders if we include lombok-mapstruct-binding as annotation processor in our build:\n\u0026lt;properties\u0026gt; \u0026lt;org.mapstruct.version\u0026gt;1.4.2.Final\u0026lt;/org.mapstruct.version\u0026gt; \u0026lt;org.projectlombok.version\u0026gt;1.18.24\u0026lt;/org.projectlombok.version\u0026gt; \u0026lt;maven.compiler.source\u0026gt;8\u0026lt;/maven.compiler.source\u0026gt; \u0026lt;maven.compiler.target\u0026gt;8\u0026lt;/maven.compiler.target\u0026gt; \u0026lt;/properties\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.mapstruct\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;mapstruct\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${org.mapstruct.version}\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.projectlombok\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;lombok\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${org.projectlombok.version}\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;provided\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; 
\u0026lt;groupId\u0026gt;org.apache.maven.plugins\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-compiler-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;3.8.1\u0026lt;/version\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;source\u0026gt;1.8\u0026lt;/source\u0026gt; \u0026lt;target\u0026gt;1.8\u0026lt;/target\u0026gt; \u0026lt;annotationProcessorPaths\u0026gt; \u0026lt;path\u0026gt; \u0026lt;groupId\u0026gt;org.mapstruct\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;mapstruct-processor\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${org.mapstruct.version}\u0026lt;/version\u0026gt; \u0026lt;/path\u0026gt; \u0026lt;path\u0026gt; \u0026lt;groupId\u0026gt;org.projectlombok\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;lombok\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${org.projectlombok.version}\u0026lt;/version\u0026gt; \u0026lt;/path\u0026gt; \u0026lt;!-- additional annotation processor required as of Lombok 1.18.16 --\u0026gt; \u0026lt;path\u0026gt; \u0026lt;groupId\u0026gt;org.projectlombok\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;lombok-mapstruct-binding\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.2.0\u0026lt;/version\u0026gt; \u0026lt;/path\u0026gt; \u0026lt;/annotationProcessorPaths\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; Similarly, a final build.gradle would look something like below:\nplugins { id \u0026#39;net.ltgt.apt\u0026#39; version \u0026#39;0.20\u0026#39; } apply plugin: \u0026#39;net.ltgt.apt-idea\u0026#39; apply plugin: \u0026#39;net.ltgt.apt-eclipse\u0026#39; ext { mapstructVersion = \u0026#34;1.4.2.Final\u0026#34; projectLombokVersion = \u0026#34;1.18.24\u0026#34; } dependencies { implementation \u0026#34;org.mapstruct:mapstruct:${mapstructVersion}\u0026#34; implementation \u0026#34;org.projectlombok:lombok:${projectLombokVersion}\u0026#34; annotationProcessor 
\u0026#34;org.projectlombok:lombok-mapstruct-binding:0.2.0\u0026#34; annotationProcessor \u0026#34;org.mapstruct:mapstruct-processor:${mapstructVersion}\u0026#34; annotationProcessor \u0026#34;org.projectlombok:lombok:${projectLombokVersion}\u0026#34; } Mapper Definition We will now take a look at various types of bean mappings using MapStruct and try out the available options. Whenever we annotate a mapper interface with the @Mapper annotation, MapStruct creates an implementation class for it with all the mapper methods and the required getter and setter calls auto-generated. Let’s start with a basic mapping example to see how it works.\nBasic Mapping Example We will define two classes, one with the name BasicUser and another with the name BasicUserDTO:\n@Data @Builder @ToString public class BasicUser { private int id; private String name; } @Data @Builder @ToString public class BasicUserDTO { private int id; private String name; } Now, to create a mapper between the two, we simply define an interface named BasicMapper and annotate it with the @Mapper annotation so that MapStruct knows it needs to create a mapper implementation between the two objects:\n@Mapper public interface BasicMapper { BasicMapper INSTANCE = Mappers.getMapper(BasicMapper.class); BasicUserDTO convert(BasicUser user); } The INSTANCE is the entry point to our mapper once the implementation is auto-generated. We have simply defined a convert method in the interface which accepts a BasicUser object and returns a BasicUserDTO object after conversion.\nAs we can see, both objects have the same property names and data types, which is enough for MapStruct to map between them. If a property has a different name in the target entity, its name can be specified via the @Mapping annotation. 
We will look at this in our upcoming examples.\nWhen we compile/build the application, the MapStruct annotation processor plugin will pick up the BasicMapper interface and create an implementation for it that looks something like this:\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class BasicMapperImpl implements BasicMapper { @Override public BasicUserDTO convert(BasicUser user) { if ( user == null ) { return null; } BasicUserDTOBuilder basicUserDTO = BasicUserDTO.builder(); basicUserDTO.id( user.getId() ); basicUserDTO.name( user.getName() ); return basicUserDTO.build(); } } You might have noticed that the BasicMapperImpl has picked up the builder method since the BasicUserDTO class is annotated with Lombok\u0026rsquo;s @Builder annotation. If this annotation is not present, it will instead instantiate an object with the new keyword and a constructor.\nNow we just need to invoke the conversion like this:\nBasicUser user = BasicUser .builder() .id(1) .name(\u0026#34;John Doe\u0026#34;) .build(); BasicUserDTO dto = BasicMapper.INSTANCE.convert(user); Custom Mapping Methods Sometimes we would like to implement a specific mapping manually, defining our own logic for transforming one object into another. For that, we can implement those custom methods directly in our mapper interface by defining a default method.\nLet’s define a DTO object which is different from a User object. We will name it PersonDTO:\n@Data @Builder @ToString public class PersonDTO { private String id; private String firstName; private String lastName; } As we can see, the data type of the id field is different from that in the User object, and the name field needs to be broken into firstName and lastName. 
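The splitting logic we are about to wire into a custom mapper method can be sketched in plain Java first. Note that a plain indexOf/substring split assumes the name contains a space: for a one-word name, indexOf returns -1 and substring(0, -1) throws a StringIndexOutOfBoundsException. A defensive version (a sketch for illustration, not part of the article's mapper) could look like this:

```java
// Sketch: split a full name into first and last name, guarding against
// names without a space, which would break a naive
// name.substring(0, name.indexOf(" ")) call when indexOf(" ") == -1.
class NameSplitter {

    static String[] split(String fullName) {
        int idx = fullName.indexOf(' ');
        if (idx < 0) {
            // No space: treat the whole string as the first name.
            return new String[] { fullName, "" };
        }
        return new String[] { fullName.substring(0, idx), fullName.substring(idx + 1) };
    }

    public static void main(String[] args) {
        String[] parts = split("John Doe");
        System.out.println(parts[0] + " / " + parts[1]); // John / Doe
    }
}
```

In production code we would decide explicitly how to handle single-word or empty names before putting such logic into a mapper.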
Hence, we will define our custom default method with our logic directly in the previous mapper interface:\n@Mapper public interface BasicMapper { BasicMapper INSTANCE = Mappers.getMapper(BasicMapper.class); BasicUserDTO convert(BasicUser user); default PersonDTO convertCustom(BasicUser user) { return PersonDTO .builder() .id(String.valueOf(user.getId())) .firstName(user.getName().substring(0, user.getName().indexOf(\u0026#34; \u0026#34;))) .lastName(user.getName().substring(user.getName().indexOf(\u0026#34; \u0026#34;) + 1)) .build(); } } Now when we invoke the mapper, the user gets converted to a PersonDTO object:\nPersonDTO personDto = BasicMapper.INSTANCE.convertCustom(user); As an alternative, a mapper can also be defined as an abstract class that implements the custom method directly. MapStruct will still generate implementations for all the abstract methods:\n@Mapper public abstract class BasicMapper { public abstract BasicUserDTO convert(BasicUser user); public PersonDTO convertCustom(BasicUser user) { return PersonDTO .builder() .id(String.valueOf(user.getId())) .firstName(user.getName().substring(0, user.getName().indexOf(\u0026#34; \u0026#34;))) .lastName(user.getName().substring(user.getName().indexOf(\u0026#34; \u0026#34;) + 1)) .build(); } } An added advantage of this strategy over declaring default methods is that additional fields can be declared directly in the mapper class.\nMapping from Several Source Objects Suppose we want to combine several entities into a single data transfer object; MapStruct supports mapping methods with several source parameters. 
For example, we will create two objects additionally like Education and Address:\n@Data @Builder @ToString public class Education { private String degreeName; private String institute; private Integer yearOfPassing; } @Data @Builder @ToString public class Address { private String houseNo; private String landmark; private String city; private String state; private String country; private String zipcode; } Now we will map these two objects along with User object to PersonDTO entity:\n@Mapping(source = \u0026#34;user.id\u0026#34;, target = \u0026#34;id\u0026#34;) @Mapping(source = \u0026#34;user.name\u0026#34;, target = \u0026#34;firstName\u0026#34;) @Mapping(source = \u0026#34;education.degreeName\u0026#34;, target = \u0026#34;educationalQualification\u0026#34;) @Mapping(source = \u0026#34;address.city\u0026#34;, target = \u0026#34;residentialCity\u0026#34;) @Mapping(source = \u0026#34;address.country\u0026#34;, target = \u0026#34;residentialCountry\u0026#34;) PersonDTO convert(BasicUser user, Education education, Address address); When we build the code now, the mapstruct annotation processor will generate the following method:\n@Override public PersonDTO convert(BasicUser user, Education education, Address address) { if ( user == null \u0026amp;\u0026amp; education == null \u0026amp;\u0026amp; address == null ) { return null; } PersonDTOBuilder personDTO = PersonDTO.builder(); if ( user != null ) { personDTO.id(String.valueOf(user.getId())); personDTO.firstName(user.getName()); } if ( education != null ) { personDTO.educationalQualification(education.getDegreeName()); } if ( address != null ) { personDTO.residentialCity(address.getCity()); personDTO.residentialCountry(address.getCountry()); } return personDTO.build(); } Mapping Nested Objects We would often see that larger POJOs not only have primitive data types but other classes, lists, or sets as well. 
Thus we need to map those nested beans into the final DTO.\nLet’s define a few more DTOs and add all of this to PersonDTO:\n@Data @Builder @ToString public class ManagerDTO { private int id; private String name; } @Data @Builder @ToString public class PersonDTO { private String id; private String firstName; private String lastName; private String educationalQualification; private String residentialCity; private String residentialCountry; private String designation; private long salary; private EducationDTO education; private List\u0026lt;ManagerDTO\u0026gt; managerList; } Now we will define an entity named Manager and add it to the BasicUser entity:\n@Data @Builder @ToString public class Manager { private int id; private String name; } @Data @Builder @ToString public class BasicUser { private int id; private String name; private List\u0026lt;Manager\u0026gt; managerList; } Before we update our UserMapper interface, let’s define the ManagerMapper interface to map the Manager entity to ManagerDTO class:\n@Mapper public interface ManagerMapper { ManagerMapper INSTANCE = Mappers.getMapper(ManagerMapper.class); ManagerDTO convert(Manager manager); } Now we can update our UserMapper interface to include list of managers for a given user.\n@Mapper(uses = {ManagerMapper.class}) public interface UserMapper { UserMapper INSTANCE = Mappers.getMapper(UserMapper.class); ... 
@Mapping(source = \u0026#34;user.id\u0026#34;, target = \u0026#34;id\u0026#34;) @Mapping(source = \u0026#34;user.name\u0026#34;, target = \u0026#34;firstName\u0026#34;) @Mapping(source = \u0026#34;education.degreeName\u0026#34;, target = \u0026#34;educationalQualification\u0026#34;) @Mapping(source = \u0026#34;address.city\u0026#34;, target = \u0026#34;residentialCity\u0026#34;) @Mapping(source = \u0026#34;address.country\u0026#34;, target = \u0026#34;residentialCountry\u0026#34;) PersonDTO convert(BasicUser user, Education education, Address address); } As we can see, we have not added any @Mapping annotation to map the managers. Instead, we have set the uses flag of the @Mapper annotation so that, while generating the mapper implementation for the UserMapper interface, MapStruct will also convert the Manager entity to ManagerDTO. In the auto-generated implementation, we can see that a new mapping method, managerListToManagerDTOList(), has been generated alongside the convert() mapper. It has been added because we registered ManagerMapper with the UserMapper interface.\nLet’s suppose we have to map an object to a nested object of the final payload; then we can define @Mapping with a direct reference to source and target. For example, we will create EducationDTO, which looks something like this:\n@Data @Builder @ToString public class EducationDTO { private String degree; private String college; private Integer passingYear; } Now we need to map this to the education field in PersonDTO. 
For that we will update our mapper in the following way:\n@Mapping(source = \u0026#34;user.id\u0026#34;, target = \u0026#34;id\u0026#34;) @Mapping(source = \u0026#34;user.name\u0026#34;, target = \u0026#34;firstName\u0026#34;) @Mapping(source = \u0026#34;education.degreeName\u0026#34;, target = \u0026#34;educationalQualification\u0026#34;) @Mapping(source = \u0026#34;address.city\u0026#34;, target = \u0026#34;residentialCity\u0026#34;) @Mapping(source = \u0026#34;address.country\u0026#34;, target = \u0026#34;residentialCountry\u0026#34;) @Mapping(source = \u0026#34;education.degreeName\u0026#34;, target = \u0026#34;education.degree\u0026#34;) @Mapping(source = \u0026#34;education.institute\u0026#34;, target = \u0026#34;education.college\u0026#34;) @Mapping(source = \u0026#34;education.yearOfPassing\u0026#34;, target = \u0026#34;education.passingYear\u0026#34;) PersonDTO convert(BasicUser user, Education education, Address address, Employment employment); If we look at the implementation class after compiling/building the application, we will see that a new mapper, educationToEducationDTO(), has been added alongside the other mappers.\nSometimes we don’t want to explicitly name every property of a nested source bean. In that case, MapStruct allows us to use \u0026quot;.\u0026quot; as the target. This tells the mapper to map every property of the source bean to the target object. This would look something like below:\n@Mapping(source = \u0026#34;employment\u0026#34;, target = \u0026#34;.\u0026#34;) PersonDTO convert(BasicUser user, Education education, Address address, Employment employment); This kind of notation can be very useful when mapping hierarchical objects to flat objects and vice versa.\nUpdating Existing Instances Sometimes we would like to update an existing DTO with a mapping at a later point in time. In those cases, we need mappings which do not create a new instance of the target type but instead update an existing instance of the same type. 
This sort of mapping can be achieved by adding a parameter for the target object and marking this parameter with @MappingTarget something like below:\n@Mapping(source = \u0026#34;user.id\u0026#34;, target = \u0026#34;id\u0026#34;) @Mapping(source = \u0026#34;user.name\u0026#34;, target = \u0026#34;firstName\u0026#34;) @Mapping(source = \u0026#34;education.degreeName\u0026#34;, target = \u0026#34;education.degree\u0026#34;) @Mapping(source = \u0026#34;education.institute\u0026#34;, target = \u0026#34;education.college\u0026#34;) @Mapping(source = \u0026#34;education.yearOfPassing\u0026#34;, target = \u0026#34;education.passingYear\u0026#34;) @Mapping(source = \u0026#34;employment\u0026#34;, target = \u0026#34;.\u0026#34;) PersonDTO convert(BasicUser user, Education education, Address address, Employment employment); @Mapping(source = \u0026#34;education.degreeName\u0026#34;, target = \u0026#34;educationalQualification\u0026#34;) @Mapping(source = \u0026#34;address.city\u0026#34;, target = \u0026#34;residentialCity\u0026#34;) @Mapping(source = \u0026#34;address.country\u0026#34;, target = \u0026#34;residentialCountry\u0026#34;) void updateExisting(BasicUser user, Education education, Address address, Employment employment, @MappingTarget PersonDTO personDTO); Now this will create the following implementation with the updateExisting() interface:\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class UserMapperImpl implements UserMapper { private final ManagerMapper managerMapper = Mappers.getMapper( ManagerMapper.class ); ... 
@Override public PersonDTO convert(BasicUser user, Education education, Address address, Employment employment) { if ( user == null \u0026amp;\u0026amp; education == null \u0026amp;\u0026amp; address == null \u0026amp;\u0026amp; employment == null ) { return null; } PersonDTOBuilder personDTO = PersonDTO.builder(); if ( user != null ) { personDTO.id( String.valueOf( user.getId() ) ); personDTO.firstName( user.getName() ); personDTO.managerList( managerListToManagerDTOList( user.getManagerList() ) ); } if ( education != null ) { personDTO.education( educationToEducationDTO( education ) ); } if ( employment != null ) { personDTO.designation( employment.getDesignation() ); personDTO.salary( employment.getSalary() ); } return personDTO.build(); } @Override public void updateExisting(BasicUser user, Education education, Address address, Employment employment, PersonDTO personDTO) { if ( user == null \u0026amp;\u0026amp; education == null \u0026amp;\u0026amp; address == null \u0026amp;\u0026amp; employment == null ) { return; } if ( user != null ) { personDTO.setId( String.valueOf( user.getId() ) ); if ( personDTO.getManagerList() != null ) { List\u0026lt;ManagerDTO\u0026gt; list = managerListToManagerDTOList( user.getManagerList() ); if ( list != null ) { personDTO.getManagerList().clear(); personDTO.getManagerList().addAll( list ); } else { personDTO.setManagerList( null ); } } else { List\u0026lt;ManagerDTO\u0026gt; list = managerListToManagerDTOList( user.getManagerList() ); if ( list != null ) { personDTO.setManagerList( list ); } } } if ( education != null ) { personDTO.setEducationalQualification( education.getDegreeName() ); } if ( address != null ) { personDTO.setResidentialCity( address.getCity() ); personDTO.setResidentialCountry( address.getCountry() ); } if ( employment != null ) { personDTO.setDesignation( employment.getDesignation() ); personDTO.setSalary( employment.getSalary() ); } } ... 
} If someone wants to call this method then this can be defined in the following way:\nPersonDTO personDTO = UserMapper.INSTANCE.convert(user, education, address, employment); UserMapper.INSTANCE.updateExisting(user, education, address, employment, personDTO); Inherit Configuration In continuation with the above example, instead of repeating the configurations for both the mappers, we can use the @InheritConfiguration annotation. By annotating a method with the @InheritConfiguration annotation, MapStruct will look for an already configured method whose configuration can be applied to this one as well. Typically, this annotation is used to update methods after a mapping method is defined:\n@Mapper public interface ManagerMapper { ManagerMapper INSTANCE = Mappers.getMapper(ManagerMapper.class); ManagerDTO convert(Manager manager); @InheritConfiguration void updateExisting(Manager manager, @MappingTarget ManagerDTO managerDTO); } This will generate an implementation something like below:\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class ManagerMapperImpl implements ManagerMapper { @Override public ManagerDTO convert(Manager manager) { if ( manager == null ) { return null; } ManagerDTOBuilder managerDTO = ManagerDTO.builder(); managerDTO.id( manager.getId() ); managerDTO.name( manager.getName() ); return managerDTO.build(); } @Override public void updateExisting(Manager manager, ManagerDTO managerDTO) { if ( manager == null ) { return; } managerDTO.setId( manager.getId() ); managerDTO.setName( manager.getName() ); } } Inverse Mappings If we want to define a bi-directional mapping like Entity to DTO and DTO to Entity and if the mapping definition for the forward method and the reverse method is the same, then we can simply inverse the configuration by defining @InheritInverseConfiguration annotation in the following pattern:\n@Mapper public interface UserMapper { UserMapper INSTANCE = Mappers.getMapper(UserMapper.class); 
BasicUserDTO convert(BasicUser user); @InheritInverseConfiguration BasicUser convert(BasicUserDTO userDTO); } This can be used for straightforward mappings between entity and DTO.\nException Handling during Mapping Exceptions are unavoidable, hence, MapStruct provides support to handle exceptions by making the life of developers quite easy. First, we will define an exception class, ValidationException which we will use in our mapper:\npublic class ValidationException extends RuntimeException { public ValidationException(String message, Throwable cause) { super(message, cause); } public ValidationException(String message) { super(message); } } Now, let’s say if we want to validate the id field for any invalid values, then we can define a utility class named as Validator :\npublic class Validator { public int validateId(int id) throws ValidationException { if(id \u0026lt; 0){ throw new ValidationException(\u0026#34;Invalid ID value\u0026#34;); } return id; } } Finally, we will update our UserMapper by including the Validator class and throw ValidationException wherever we are mapping the id fields:\n@Mapper(uses = {ManagerMapper.class, Validator.class}) public interface UserMapper { UserMapper INSTANCE = Mappers.getMapper(UserMapper.class); BasicUserDTO convert(BasicUser user) throws ValidationException; @InheritInverseConfiguration BasicUser convert(BasicUserDTO userDTO) throws ValidationException; ... } The implementation class after generation would look something like below:\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class UserMapperImpl implements UserMapper { private final Validator validator = new Validator(); @Override public BasicUserDTO convert(BasicUser user) throws ValidationException { // ...  BasicUserDTOBuilder basicUserDTO = BasicUserDTO.builder(); basicUserDTO.id( validator.validateId( user.getId() ) ); //...  
return basicUserDTO.build(); } @Override public BasicUser convert(BasicUserDTO userDTO) throws ValidationException { // ...  BasicUserBuilder basicUser = BasicUser.builder(); basicUser.id( validator.validateId( userDTO.getId() ) ); //...  return basicUser.build(); } ... } MapStruct has automatically detected and set the id field of the mapper objects with the result of the Validator instance. It has added a throws clause for the method as well.\nData Type Conversion We won’t always find a mapping attribute in a payload having the same data type for the source and target fields. For example, we might have an instance where we would need to map an attribute of type int to String or long. We will take a quick look at how we can deal with such types of data conversions.\nImplicit Type Conversion The simplest way to get a mapper instance is using the Mappers class. We need to invoke the getMappers() method from the factory passing the interface type of the mapper:\n@Mapping(source = \u0026#34;employment.salary\u0026#34;, target = \u0026#34;salary\u0026#34;, numberFormat = \u0026#34;$#.00\u0026#34;) PersonDTO convert(BasicUser user, Education education, Address address, Employment employment); Then the generated mapper implementation class would be something like below:\npersonDTO.setSalary( new DecimalFormat( \u0026#34;$#.00\u0026#34; ).format( employment.getSalary() ) ); Similarly, let’s say if we want to convert a date type in String format to LocalDate format, then we can define a mapper in the following format:\n@Mapping(source = \u0026#34;dateOfBirth\u0026#34;, target = \u0026#34;dateOfBirth\u0026#34;, dateFormat = \u0026#34;dd/MMM/yyyy\u0026#34;) ManagerDTO convert(Manager manager); Then the generated mapper implementation would be something like below:\nmanagerDTO.setDateOfBirth( new SimpleDateFormat( \u0026#34;dd/MMM/yyyy\u0026#34; ) .parse( manager.getDateOfBirth() ) ); If we don’t mention the dateFormat property in above mapper then this would generate an 
implementation method something like below:\nmanagerDTO.setDateOfBirth( new SimpleDateFormat().parse( manager.getDateOfBirth() ) ); Mapping Collections Mapping Collections in MapStruct works in the same way as mapping any other bean types. But it provides various options and customizations which can be used based on our needs.\nThe generated implementation mapper code will contain a loop that would iterate over the source collection, convert each element, and put it into the target collection. If a mapping method for the collection element types is found in the given mapper or the mapper it uses, this method is automatically invoked to perform the element conversion.\nSet Let’s say if we want to convert a set of Long values to String, then we can simply define a mapper as below:\n@Mapper public interface CollectionMapper { CollectionMapper INSTANCE = Mappers.getMapper(CollectionMapper.class); Set\u0026lt;String\u0026gt; convert(Set\u0026lt;Long\u0026gt; ids); } The generated implementation method would first initiate an instance of HashSet and then iterate through the loop to map and convert the values:\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class CollectionMapperImpl implements CollectionMapper { @Override public Set\u0026lt;String\u0026gt; convert(Set\u0026lt;Long\u0026gt; ids) { //...  Set\u0026lt;String\u0026gt; set = new HashSet\u0026lt;String\u0026gt;( Math.max( (int) ( ids.size() / .75f ) + 1, 16 ) ); for ( Long long1 : ids ) { set.add( String.valueOf( long1 ) ); } return set; } ... 
} Now if we try to convert a set of one entity type to another then we can simply define a mapper as below:\n@Mapper public interface CollectionMapper { CollectionMapper INSTANCE = Mappers.getMapper(CollectionMapper.class); Set\u0026lt;EmploymentDTO\u0026gt; convertEmployment(Set\u0026lt;Employment\u0026gt; employmentSet); } We will notice in the generated implementation that MapStruct has automatically created an extra mapping method to convert between the entities as their fields are identical to each other:\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class CollectionMapperImpl implements CollectionMapper { ... @Override public Set\u0026lt;EmploymentDTO\u0026gt; convertEmployment(Set\u0026lt;Employment\u0026gt; employmentSet) { //...  Set\u0026lt;EmploymentDTO\u0026gt; set = new HashSet\u0026lt;EmploymentDTO\u0026gt;( Math.max( (int) ( employmentSet.size() / .75f ) + 1, 16 ) ); for ( Employment employment : employmentSet ) { set.add( employmentToEmploymentDTO( employment ) ); } return set; } protected EmploymentDTO employmentToEmploymentDTO(Employment employment) { //...  EmploymentDTOBuilder employmentDTO = EmploymentDTO.builder(); employmentDTO.designation( employment.getDesignation() ); employmentDTO.salary( employment.getSalary() ); return employmentDTO.build(); } ... } List List are mapped in the same way as Set in MapStruct. 
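Stripped of the generated boilerplate, the collection mapping above is just a pre-sized target collection plus a per-element conversion loop. A minimal hand-written sketch of the same idea (the class name is ours, not MapStruct output):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class CollectionMappingSketch {

    // Hand-written equivalent of the loop MapStruct generates for
    // Set<Long> -> Set<String>: size the HashSet for the default load
    // factor (0.75) so no rehashing happens while copying, then convert
    // each element.
    static Set<String> convert(Set<Long> ids) {
        if (ids == null) {
            return null;
        }
        Set<String> set =
            new HashSet<>(Math.max((int) (ids.size() / .75f) + 1, 16));
        for (Long id : ids) {
            set.add(String.valueOf(id)); // per-element conversion
        }
        return set;
    }

    public static void main(String[] args) {
        Set<String> out = convert(new HashSet<>(Arrays.asList(1L, 2L, 3L)));
        System.out.println(out.size()); // prints 3
    }
}
```

For entity elements, the body of the loop would call the element mapping method of the mapper instead of String.valueOf().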
But if we want to convert between entities that require custom mapping, then we must define a conversion method between the entities first and then define the mapper between List or Set:\n@Mapper public interface CollectionMapper { CollectionMapper INSTANCE = Mappers.getMapper(CollectionMapper.class); @Mapping(source = \u0026#34;degreeName\u0026#34;, target = \u0026#34;degree\u0026#34;) @Mapping(source = \u0026#34;institute\u0026#34;, target = \u0026#34;college\u0026#34;) @Mapping(source = \u0026#34;yearOfPassing\u0026#34;, target = \u0026#34;passingYear\u0026#34;) EducationDTO convert(Education education); List\u0026lt;EducationDTO\u0026gt; convert(List\u0026lt;Education\u0026gt; educationList); } Now the generated implementation method would look something like below:\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class CollectionMapperImpl implements CollectionMapper { ... @Override public EducationDTO convert(Education education) { //...  EducationDTOBuilder educationDTO = EducationDTO.builder(); educationDTO.degree( education.getDegreeName() ); educationDTO.college( education.getInstitute() ); educationDTO.passingYear( education.getYearOfPassing() ); return educationDTO.build(); } @Override public List\u0026lt;EducationDTO\u0026gt; convert(List\u0026lt;Education\u0026gt; educationList) { //...  List\u0026lt;EducationDTO\u0026gt; list = new ArrayList\u0026lt;EducationDTO\u0026gt;( educationList.size() ); for ( Education education : educationList ) { list.add( convert( education ) ); } return list; } ... } Map MapStruct provides an additional annotation for mapping Maps. 
The @MapMapping annotation accepts custom definitions of various formats for the key-value pairs:\n@Mapper public interface CollectionMapper { CollectionMapper INSTANCE = Mappers.getMapper(CollectionMapper.class); @MapMapping(keyNumberFormat = \u0026#34;#L\u0026#34;, valueDateFormat = \u0026#34;dd.MM.yyyy\u0026#34;) Map\u0026lt;String, String\u0026gt; map(Map\u0026lt;Long, Date\u0026gt; dateMap); } This would generate an implementation method something like below:\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class CollectionMapperImpl implements CollectionMapper { ... @Override public Map\u0026lt;String, String\u0026gt; map(Map\u0026lt;Long, Date\u0026gt; dateMap) { //...  Map\u0026lt;String, String\u0026gt; map = new HashMap\u0026lt;String, String\u0026gt;( Math.max( (int) ( dateMap.size() / .75f ) + 1, 16 ) ); for ( java.util.Map.Entry\u0026lt;Long, Date\u0026gt; entry : dateMap.entrySet() ) { String key = new DecimalFormat( \u0026#34;#L\u0026#34; ).format( entry.getKey() ); String value = new SimpleDateFormat( \u0026#34;dd.MM.yyyy\u0026#34; ) .format( entry.getValue() ); map.put( key, value ); } return map; } ... } Mapping Strategies If we need to map data types with a parent-child relationship, MapStruct offers a way to define a strategy to set or add the children to the parent type. The @Mapper annotation supports a collectionMappingStrategy attribute which takes the following enums:\n ACCESSOR_ONLY SETTER_PREFERRED ADDER_PREFERRED TARGET_IMMUTABLE  The default value is ACCESSOR_ONLY, which means that only accessors are used to set the Collection of children. The ADDER_PREFERRED option helps us when adders are defined for a Collection-typed field instead of setters. For example, let’s revisit the Manager to ManagerDTO entity conversion in PersonDTO. The PersonDTO entity has a child field of type List:\npublic class PersonDTO { ... 
private List\u0026lt;ManagerDTO\u0026gt; managers; public List\u0026lt;ManagerDTO\u0026gt; getManagerList() { return managers; } public void setManagerList(List\u0026lt;ManagerDTO\u0026gt; managers) { this.managers = managers; } public void addManagerList(ManagerDTO managerDTO) { if (managers == null) { managers = new ArrayList\u0026lt;\u0026gt;(); } managers.add(managerDTO); } // other getters and setters } Note that we have both the setter method, setManagerList(), and the adder method, addManagerList(), and we are responsible for initializing the collection for the adder. With the default mapper configuration, the generated implementation looks something like below:\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class UserMapperImpl implements UserMapper { @Override public PersonDTO map(Person person) { //...  PersonDTO personDTO = new PersonDTO(); personDTO.setManagerList(personMapper.map(person.getManagerList())); return personDTO; } } As we can see, MapStruct uses the setter method to populate the PersonDTO instance, since it defaults to the ACCESSOR_ONLY collection mapping strategy. But if we pass an attribute in @Mapper to use the ADDER_PREFERRED collection mapping strategy, then it would look something like below:\n@Mapper(collectionMappingStrategy = CollectionMappingStrategy.ADDER_PREFERRED, uses = ManagerMapper.class) public interface PersonMapperAdderPreferred { PersonDTO map(Person person); } The generated implementation method would look something like below:\npublic class PersonMapperAdderPreferredImpl implements PersonMapperAdderPreferred { private final ManagerMapper managerMapper = Mappers.getMapper( ManagerMapper.class ); @Override public PersonDTO map(Person person) { //...  
PersonDTO personDTO = new PersonDTO(); if ( person.getManagerList() != null ) { for ( Manager manager : person.getManagerList() ) { personDTO.addManagerList( managerMapper.convert( manager ) ); } } return personDTO; } } In case the adder was not available, the setter would have been used.\nMapping Streams Mapping streams is similar to mapping collections. The only difference is that the generated implementation collects the elements of a provided Stream into the target collection:\n@Mapper public interface CollectionMapper { CollectionMapper INSTANCE = Mappers.getMapper(CollectionMapper.class); Set\u0026lt;String\u0026gt; convertStream(Stream\u0026lt;Long\u0026gt; ids); @Mapping(source = \u0026#34;degreeName\u0026#34;, target = \u0026#34;degree\u0026#34;) @Mapping(source = \u0026#34;institute\u0026#34;, target = \u0026#34;college\u0026#34;) @Mapping(source = \u0026#34;yearOfPassing\u0026#34;, target = \u0026#34;passingYear\u0026#34;) EducationDTO convert(Education education); List\u0026lt;EducationDTO\u0026gt; convert(Stream\u0026lt;Education\u0026gt; educationStream); } The implementation methods would look something like below:\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class CollectionMapperImpl implements CollectionMapper { ... @Override public Set\u0026lt;String\u0026gt; convertStream(Stream\u0026lt;Long\u0026gt; ids) { //...  return ids.map( long1 -\u0026gt; String.valueOf( long1 ) ) .collect( Collectors.toCollection( HashSet\u0026lt;String\u0026gt;::new ) ); } @Override public List\u0026lt;EducationDTO\u0026gt; convert(Stream\u0026lt;Education\u0026gt; educationStream) { //...  return educationStream.map( education -\u0026gt; convert( education ) ) .collect( Collectors.toCollection( ArrayList\u0026lt;EducationDTO\u0026gt;::new ) ); } protected EmploymentDTO employmentToEmploymentDTO(Employment employment) { //...  
EmploymentDTOBuilder employmentDTO = EmploymentDTO.builder(); employmentDTO.designation( employment.getDesignation() ); employmentDTO.salary( employment.getSalary() ); return employmentDTO.build(); } } Mapping Enums MapStruct allows the conversion of one Enum to another Enum or String. Each constant from the enum at the source is mapped to a constant with the same name in the target. But in the case of different names, we need to annotate @ValueMapping with source and target enums.\nFor example, we will define an enum named DesignationCode:\npublic enum DesignationCode { CEO, CTO, VP, SM, M, ARCH, SSE, SE, INT } This will be mapped to DesignationConstant enum:\npublic enum DesignationConstant { CHIEF_EXECUTIVE_OFFICER, CHIEF_TECHNICAL_OFFICER, VICE_PRESIDENT, SENIOR_MANAGER, MANAGER, ARCHITECT, SENIOR_SOFTWARE_ENGINEER, SOFTWARE_ENGINEER, INTERN, OTHERS } Now we can define an Enum mapping in the following way:\n@Mapper public interface UserMapper { UserMapper INSTANCE = Mappers.getMapper(UserMapper.class); @ValueMappings({ @ValueMapping(source = \u0026#34;CEO\u0026#34;, target = \u0026#34;CHIEF_EXECUTIVE_OFFICER\u0026#34;), @ValueMapping(source = \u0026#34;CTO\u0026#34;, target = \u0026#34;CHIEF_TECHNICAL_OFFICER\u0026#34;), @ValueMapping(source = \u0026#34;VP\u0026#34;, target = \u0026#34;VICE_PRESIDENT\u0026#34;), @ValueMapping(source = \u0026#34;SM\u0026#34;, target = \u0026#34;SENIOR_MANAGER\u0026#34;), @ValueMapping(source = \u0026#34;M\u0026#34;, target = \u0026#34;MANAGER\u0026#34;), @ValueMapping(source = \u0026#34;ARCH\u0026#34;, target = \u0026#34;ARCHITECT\u0026#34;), @ValueMapping(source = \u0026#34;SSE\u0026#34;, target = \u0026#34;SENIOR_SOFTWARE_ENGINEER\u0026#34;), @ValueMapping(source = \u0026#34;SE\u0026#34;, target = \u0026#34;SOFTWARE_ENGINEER\u0026#34;), @ValueMapping(source = \u0026#34;INT\u0026#34;, target = \u0026#34;INTERN\u0026#34;), @ValueMapping(source = MappingConstants.ANY_REMAINING, target = \u0026#34;OTHERS\u0026#34;), 
@ValueMapping(source = MappingConstants.NULL, target = \u0026#34;OTHERS\u0026#34;) }) DesignationConstant convertDesignation(DesignationCode code); } This generates an implementation with a switch-case. It throws an error in case a constant of the source enum type does not have a corresponding constant with the same name in the target type and also is not mapped to another constant via @ValueMapping. The generated mapping method will throw an IllegalStateException if for some reason an unrecognized source value occurs.\nMapStruct too has a mechanism to map any unspecified mappings to a default. This can be used only once in a set of value mappings and only applies to the source. It comes in two flavors: \u0026lt;ANY_REMAINING\u0026gt; and \u0026lt;ANY_UNMAPPED\u0026gt;. But they can’t be used at the same time.\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class UserMapperImpl implements UserMapper { private final ManagerMapper managerMapper = Mappers.getMapper( ManagerMapper.class ); @Override public DesignationConstant convertDesignation(DesignationCode code) { //...  
DesignationConstant designationConstant; switch ( code ) { case CEO: designationConstant = DesignationConstant.CHIEF_EXECUTIVE_OFFICER; break; case CTO: designationConstant = DesignationConstant.CHIEF_TECHNICAL_OFFICER; break; case VP: designationConstant = DesignationConstant.VICE_PRESIDENT; break; case SM: designationConstant = DesignationConstant.SENIOR_MANAGER; break; case M: designationConstant = DesignationConstant.MANAGER; break; case ARCH: designationConstant = DesignationConstant.ARCHITECT; break; case SSE: designationConstant = DesignationConstant.SENIOR_SOFTWARE_ENGINEER; break; case SE: designationConstant = DesignationConstant.SOFTWARE_ENGINEER; break; case INT: designationConstant = DesignationConstant.INTERN; break; default: designationConstant = DesignationConstant.OTHERS; } return designationConstant; } } Sometimes we need to deal with the enum constants with the same names followed by prefix or suffix pattern. MapStruct supports a few out-of-the-box strategies to deal with those patterns:\n suffix - Applies a suffix on the source enum stripSuffix - Strips a suffix from the source enum prefix - Applies a prefix on the source enum stripPrefix - Strips a prefix from the source enum  For example, let’s say we want to add a prefix to a stream of degree objects named as DegreeStream:\npublic enum DegreeStream { MATHS, PHYSICS, CHEMISTRY, BOTANY, ZOOLOGY, STATISTICS, EDUCATION } with DegreeStreamPrefix:\npublic enum DegreeStreamPrefix { MSC_MATHS, MSC_PHYSICS, MSC_CHEMISTRY, MSC_BOTANY, MSC_ZOOLOGY, MSC_STATISTICS, MSC_EDUCATION } Then we can define an enum mapping in the following way:\n@Mapper public interface UserMapper { UserMapper INSTANCE = Mappers.getMapper(UserMapper.class); @EnumMapping(nameTransformationStrategy = \u0026#34;prefix\u0026#34;, configuration = \u0026#34;MSC_\u0026#34;) DegreeStreamPrefix convert(DegreeStream degreeStream); @EnumMapping(nameTransformationStrategy = \u0026#34;stripPrefix\u0026#34;, configuration = 
\u0026#34;MSC_\u0026#34;) DegreeStream convert(DegreeStreamPrefix degreeStreamPrefix); } It generates an implementation same as above.\nDefining Default Values or Constants Default values can be specified in MapStruct to set a predefined value to a target property if the corresponding source property is null. Constants can be specified to set such a predefined value in any case. These default values and constants are specified as Strings. MapStruct also supports numberFormat to define a pattern for the numeric value.\n@Mapper(collectionMappingStrategy = CollectionMappingStrategy.ADDER_PREFERRED, uses = {CollectionMapper.class, ManagerMapper.class, Validator.class}, imports = UUID.class ) public interface UserMapper { UserMapper INSTANCE = Mappers.getMapper(UserMapper.class); @Mapping(source = \u0026#34;user.name\u0026#34;, target = \u0026#34;firstName\u0026#34;) @Mapping(source = \u0026#34;education.degreeName\u0026#34;, target = \u0026#34;education.degree\u0026#34;) @Mapping(source = \u0026#34;education.institute\u0026#34;, target = \u0026#34;education.college\u0026#34;) @Mapping(source = \u0026#34;education.yearOfPassing\u0026#34;, target = \u0026#34;education.passingYear\u0026#34;, defaultValue = \u0026#34;2001\u0026#34;) @Mapping(source = \u0026#34;employment\u0026#34;, target = \u0026#34;.\u0026#34;) PersonDTO convert(BasicUser user, Education education, Address address, Employment employment); @Mapping(source = \u0026#34;education.degreeName\u0026#34;, target = \u0026#34;educationalQualification\u0026#34;) @Mapping(source = \u0026#34;address.city\u0026#34;, target = \u0026#34;residentialCity\u0026#34;) @Mapping(target = \u0026#34;residentialCountry\u0026#34;, constant = \u0026#34;US\u0026#34;) @Mapping(source = \u0026#34;employment.salary\u0026#34;, target = \u0026#34;salary\u0026#34;, numberFormat = \u0026#34;$#.00\u0026#34;) void updateExisting(BasicUser user, Education education, Address address, Employment employment, @MappingTarget PersonDTO personDTO); 
} This generates an implementation which looks like below:\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class UserMapperImpl implements UserMapper { private final ManagerMapper managerMapper = Mappers.getMapper( ManagerMapper.class ); @Override public PersonDTO convert(BasicUser user, Education education, Address address, Employment employment) { if ( user == null \u0026amp;\u0026amp; education == null \u0026amp;\u0026amp; address == null \u0026amp;\u0026amp; employment == null ) { return null; } PersonDTOBuilder personDTO = PersonDTO.builder(); if ( user != null ) { personDTO.id( String.valueOf( user.getId() ) ); personDTO.firstName( user.getName() ); personDTO.managerList( managerListToManagerDTOList( user.getManagerList() ) ); } if ( education != null ) { personDTO.education( educationToEducationDTO( education ) ); } if ( employment != null ) { personDTO.designation( convertDesignation( employment.getDesignation() ) ); personDTO.salary( String.valueOf( employment.getSalary() ) ); } return personDTO.build(); } @Override public void updateExisting(BasicUser user, Education education, Address address, Employment employment, PersonDTO personDTO) { if ( user == null \u0026amp;\u0026amp; education == null \u0026amp;\u0026amp; address == null \u0026amp;\u0026amp; employment == null ) { return; } if ( user != null ) { personDTO.setId( String.valueOf( user.getId() ) ); if ( personDTO.getManagerList() != null ) { List\u0026lt;ManagerDTO\u0026gt; list = managerListToManagerDTOList( user.getManagerList() ); if ( list != null ) { personDTO.getManagerList().clear(); personDTO.getManagerList().addAll( list ); } else { personDTO.setManagerList( null ); } } else { List\u0026lt;ManagerDTO\u0026gt; list = managerListToManagerDTOList( user.getManagerList() ); if ( list != null ) { personDTO.setManagerList( list ); } } } if ( education != null ) { personDTO.setEducationalQualification( education.getDegreeName() ); } if ( address != null ) { 
personDTO.setResidentialCity( address.getCity() ); } if ( employment != null ) { personDTO.setSalary( new DecimalFormat( \u0026#34;$#.00\u0026#34; ) .format( employment.getSalary() ) ); personDTO.setDesignation( convertDesignation( employment.getDesignation() ) ); } personDTO.setResidentialCountry( \u0026#34;US\u0026#34; ); } } Defining Default Expressions MapStruct supports default expressions, which are a combination of default values and expressions. They are only applied when the source attribute is null. Whenever we reference a class in an expression, that class needs to be imported via the imports attribute of the @Mapper annotation.\n@Mapper( imports = UUID.class ) public interface UserMapper { UserMapper INSTANCE = Mappers.getMapper(UserMapper.class); @Mapping(source = \u0026#34;user.id\u0026#34;, target = \u0026#34;id\u0026#34;, defaultExpression = \u0026#34;java( UUID.randomUUID().toString() )\u0026#34;) PersonDTO convert(BasicUser user, Education education, Address address, Employment employment); } Mapper Retrieval Strategies To call the mapper methods, we first need to obtain a mapper instance. MapStruct provides various strategies to instantiate and access the generated mappers. Let’s look into each of them.\nMappers Factory If we are not using a dependency injection framework, then the mapper instances can be retrieved using the Mappers class. We need to invoke the getMapper() method of the factory, passing the interface type of the mapper:\nUserMapper INSTANCE = Mappers.getMapper(UserMapper.class); This pattern is one of the simplest ways to access the mapper methods. It can be used in the following way:\nPersonDTO personDTO = UserMapper.INSTANCE.convert(user, education, address, employment); One thing to note is that the mappers generated by MapStruct are stateless and thread-safe. 
Thus they can be safely used from several threads at the same time.\nDependency Injection If we want to use MapStruct in a dependency injection framework, then we need to access the mapper objects via dependency injection strategies and not use the Mappers class. MapStruct supports the component models for CDI (Contexts and Dependency Injection for Java EE) and the Spring framework.\nLet’s update our UserMapper class to work with Spring:\n@Mapper(componentModel = \u0026#34;spring\u0026#34;) public interface UserMapper { ... } Now the generated implementation class would have the @Component annotation automatically added:\n@Component public class UserMapperImpl implements UserMapper { ... } Now when we define our Controller or Service layer, we can use @Autowired to access the mapper methods:\n@Controller public class UserController { @Autowired private UserMapper userMapper; } Similarly, if we are not using the Spring framework, MapStruct supports CDI as well:\n@Mapper(componentModel = \u0026#34;cdi\u0026#34;) public interface UserMapper { ... } Then the generated mapper implementation will be annotated with the @ApplicationScoped annotation:\n@ApplicationScoped public class UserMapperImpl implements UserMapper { ... } Finally, we can obtain the mapper instance using the @Inject annotation:\n@Inject private UserMapper userMapper; Mapping Customization We often face situations where we need to apply custom business logic or conversion before or after mapping methods. MapStruct provides two ways of defining customization:\n Decorators - This pattern allows for type-safe customization of specific mapping methods. @BeforeMapping/@AfterMapping - This allows for generic customization of mapping methods with given source or target types.  Implementing a Decorator Sometimes we would like to customize a generated mapping implementation by adding our custom logic. MapStruct allows us to define a decorator class and annotate the mapper with the @DecoratedWith annotation. 
The decorator must be a sub-type of the decorated mapper type. We can define it as an abstract class, which allows us to implement only those methods of the mapper interface that we want to customize. For all other non-implemented methods, a simple delegation to the original mapper will be generated using the default implementation.\nFor example, let’s say we want to split the name in the BasicUser class into firstName and lastName in PersonDTO. We can do this by adding a decorator class as follows:\npublic abstract class UserMapperDecorator implements UserMapper { private final UserMapper delegate; protected UserMapperDecorator (UserMapper delegate) { this.delegate = delegate; } @Override public PersonDTO convert(BasicUser user, Education education, Address address, Employment employment) { PersonDTO dto = delegate.convert(user, education, address, employment); if (user.getName().split(\u0026#34;\\\\s+\u0026#34;).length \u0026gt; 1) { dto.setFirstName(user.getName().substring(0, user.getName().lastIndexOf(\u0026#39; \u0026#39;))); dto.setLastName(user.getName().substring(user.getName().lastIndexOf(\u0026#34; \u0026#34;) + 1)); } else { dto.setFirstName(user.getName()); } return dto; } } We can attach this decorator class to the UserMapper as follows:\n@Mapper @DecoratedWith(UserMapperDecorator.class) public interface UserMapper { UserMapper INSTANCE = Mappers.getMapper(UserMapper.class); PersonDTO convert(BasicUser user, Education education, Address address, Employment employment); } Usage of @BeforeMapping and @AfterMapping hooks If we have a use case where we would like to execute some logic before or after each mapping, MapStruct provides additional control for customization with the @BeforeMapping and @AfterMapping annotations. 
Let’s define those two methods:\n@Mapper @DecoratedWith(UserMapperDecorator.class) public interface UserMapper { UserMapper INSTANCE = Mappers.getMapper(UserMapper.class); @BeforeMapping default void validateMangers(BasicUser user, Education education, Address address, Employment employment) { if (Objects.isNull(user.getManagerList())) { user.setManagerList(new ArrayList\u0026lt;\u0026gt;()); } } @Mapping(source = \u0026#34;user.id\u0026#34;, target = \u0026#34;id\u0026#34;, defaultExpression = \u0026#34;java( UUID.randomUUID().toString() )\u0026#34;) @Mapping(source = \u0026#34;education.degreeName\u0026#34;, target = \u0026#34;education.degree\u0026#34;) @Mapping(source = \u0026#34;education.institute\u0026#34;, target = \u0026#34;education.college\u0026#34;) @Mapping(source = \u0026#34;education.yearOfPassing\u0026#34;, target = \u0026#34;education.passingYear\u0026#34;, defaultValue = \u0026#34;2001\u0026#34;) @Mapping(source = \u0026#34;employment\u0026#34;, target = \u0026#34;.\u0026#34;) PersonDTO convert(BasicUser user, Education education, Address address, Employment employment); @Mapping(source = \u0026#34;education.degreeName\u0026#34;, target = \u0026#34;educationalQualification\u0026#34;) @Mapping(source = \u0026#34;address.city\u0026#34;, target = \u0026#34;residentialCity\u0026#34;) @Mapping(target = \u0026#34;residentialCountry\u0026#34;, constant = \u0026#34;US\u0026#34;) @Mapping(source = \u0026#34;employment.salary\u0026#34;, target = \u0026#34;salary\u0026#34;, numberFormat = \u0026#34;$#.00\u0026#34;) void updateExisting(BasicUser user, Education education, Address address, Employment employment, @MappingTarget PersonDTO personDTO); @AfterMapping default void updateResult(BasicUser user, Education education, Address address, Employment employment, @MappingTarget PersonDTO personDTO) { personDTO.setFirstName(personDTO.getFirstName().toUpperCase()); personDTO.setLastName(personDTO.getLastName().toUpperCase()); } } Now when the implementation is 
generated we would be able to see that the validateManagers() is called before mapping execution and updateResult() method is called after mapping execution:\n@Generated( value = \u0026#34;org.mapstruct.ap.MappingProcessor\u0026#34; ) public class UserMapperImpl_ implements UserMapper { private final ManagerMapper managerMapper = Mappers.getMapper( ManagerMapper.class ); @Override public PersonDTO convert(BasicUser user, Education education, Address address, Employment employment) { validateMangers( user, education, address, employment ); if ( user == null \u0026amp;\u0026amp; education == null \u0026amp;\u0026amp; address == null \u0026amp;\u0026amp; employment == null ) { return null; } PersonDTOBuilder personDTO = PersonDTO.builder(); if ( user != null ) { personDTO.id( String.valueOf( user.getId() ) ); personDTO.managerList( managerListToManagerDTOList( user.getManagerList() ) ); } if ( education != null ) { personDTO.education( educationToEducationDTO( education ) ); } if ( employment != null ) { personDTO.designation( convertDesignation( employment.getDesignation() ) ); personDTO.salary( String.valueOf( employment.getSalary() ) ); } return personDTO.build(); } @Override public void updateExisting(BasicUser user, Education education, Address address, Employment employment, PersonDTO personDTO) { validateMangers( user, education, address, employment ); if ( user == null \u0026amp;\u0026amp; education == null \u0026amp;\u0026amp; address == null \u0026amp;\u0026amp; employment == null ) { return; } if ( user != null ) { personDTO.setId( String.valueOf( user.getId() ) ); if ( personDTO.getManagerList() != null ) { List\u0026lt;ManagerDTO\u0026gt; list = managerListToManagerDTOList( user.getManagerList() ); if ( list != null ) { personDTO.getManagerList().clear(); personDTO.getManagerList().addAll( list ); } else { personDTO.setManagerList( null ); } } else { List\u0026lt;ManagerDTO\u0026gt; list = managerListToManagerDTOList( user.getManagerList() ); if ( list != 
null ) { personDTO.setManagerList( list ); } } } if ( education != null ) { personDTO.setEducationalQualification( education.getDegreeName() ); } if ( address != null ) { personDTO.setResidentialCity( address.getCity() ); } if ( employment != null ) { personDTO .setSalary( new DecimalFormat( \u0026#34;$#.00\u0026#34; ) .format( employment.getSalary() ) ); personDTO .setDesignation( convertDesignation( employment.getDesignation() ) ); } personDTO.setResidentialCountry( \u0026#34;US\u0026#34; ); updateResult( user, education, address, employment, personDTO ); } } Additional Configuration Options MapStruct allows to pass various annotation processor options or arguments to javac directly in the form -Akey=value. The Maven based configuration accepts build definitions with compiler args being passed explicitly:\n\u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.apache.maven.plugins\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-compiler-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;3.8.1\u0026lt;/version\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;source\u0026gt;1.8\u0026lt;/source\u0026gt; \u0026lt;target\u0026gt;1.8\u0026lt;/target\u0026gt; \u0026lt;annotationProcessorPaths\u0026gt; \u0026lt;path\u0026gt; \u0026lt;groupId\u0026gt;org.mapstruct\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;mapstruct-processor\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${org.mapstruct.version}\u0026lt;/version\u0026gt; \u0026lt;/path\u0026gt; \u0026lt;/annotationProcessorPaths\u0026gt; \u0026lt;!-- due to problem in maven-compiler-plugin, for verbose mode add showWarnings --\u0026gt; \u0026lt;showWarnings\u0026gt;true\u0026lt;/showWarnings\u0026gt; \u0026lt;compilerArgs\u0026gt; \u0026lt;arg\u0026gt; -Amapstruct.suppressGeneratorTimestamp=true \u0026lt;/arg\u0026gt; \u0026lt;arg\u0026gt; -Amapstruct.defaultComponentModel=default \u0026lt;/arg\u0026gt; 
\u0026lt;/compilerArgs\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; Similarly, Gradle accepts compiler arguments in the following format:\ncompileJava { options.compilerArgs += [ \u0026#39;-Amapstruct.suppressGeneratorTimestamp=true\u0026#39;, \u0026#39;-Amapstruct.defaultComponentModel=default\u0026#39; ] } We just took two example configurations here, but MapStruct supports many other configuration options as well. Let’s look at these two important options:\n mapstruct.suppressGeneratorTimestamp: suppresses the creation of a timestamp in the @Generated annotation of the generated mapper classes. mapstruct.defaultComponentModel: sets the component model (default, cdi, spring, or jsr330) for which the mapper code is generated at compile time.  You can see more of these options here.\nConclusion In this article, we took a deep dive into the world of MapStruct and created a mapper class from the basics up to custom methods and wrappers. We also looked into different options provided by MapStruct, including data type mappings, enum mappings, dependency injection, and expressions.\nMapStruct provides a powerful integration plugin that reduces the amount of code a user has to write. It makes the process of creating bean mappers easy and fast.\nWe can refer to all the source code used in this article on GitHub.\n","date":"June 8, 2022","image":"https://reflectoring.io/images/stock/0123-mapping-1200x628-branded_hu7b7ce0c7416b072f7f34ebacad3fc96f_274691_650x0_resize_q90_box.jpg","permalink":"/java-mapping-with-mapstruct/","title":"One-Stop Guide to Mapping with MapStruct"},{"categories":["aws"],"contents":"Back-end server resources like databases often contain data that is critical for an application to function consistently. So these resources are protected from public access over the internet by placing them in a private subnet. 
This, however, makes the database inaccessible to the database clients and applications running on our local development workstations.\nWe usually run data manipulation queries in query editors provided by different database clients, or from our application\u0026rsquo;s unit test cases, to check out various scenarios during application development.\nIf the database is not accessible from our local workstation, we need to find alternate methods of testing, like moving the compiled application code to the cloud environment each time we want to test. This is not very convenient and reduces productivity, resulting in a poor developer experience.\nThis problem is addressed by using a server called a \u0026ldquo;jump host\u0026rdquo; that can receive requests from external sources over the internet and securely forward them, or \u0026ldquo;jump\u0026rdquo;, to the database secured in the private subnet.\nIn this tutorial, we will use a jump host for accessing an Amazon Relational Database Service (RDS) database residing in a private subnet.\nAmazon RDS supports multiple databases. We will use MySQL in our example. However, this approach will work for all other databases supported by Amazon RDS.\nCreating an RDS Database with Engine Type: MySQL Let us first create our RDS database using the AWS Management Console with MySQL as the engine type:\nFor creating the RDS database in a private subnet, we have used the following configurations:\nWe have used the default virtual private cloud (VPC) available in our AWS account and set the public access to No. 
We have also chosen the option to create a new security group where we will define the inbound rules to allow traffic from selected sources.\nWe will also select Password authentication as the Database authentication option.\nOur RDS database created in a private subnet is ready to use when the status changes to available.\nWhen the database is ready to be used, we can see the endpoint of the database along with the port, which we will use later to connect to the database.\nWith our database created, we will next set up a jump host and populate inbound rules in the security groups in the following sections.\nCreating an EC2 Instance as the Jump Host A jump host is also called a bastion host/server, whose sole purpose is to provide access to resources in a private network from an external network like the internet. A rough representation of this architecture is shown below:\nHere we are using an EC2 instance in a public subnet as our jump host for connecting to an RDS database in a private subnet.\nLet us create the EC2 instance from the AWS Management Console in a public subnet in the same VPC where we had created our RDS database in the previous section:\nWe have created our instance in the free tier with an SSH key pair to securely access the instance with SSH in the later sections. An SSH key pair is used to authenticate the identity of a user or process (local workstation) that wants to access a remote system (the EC2 instance) using the SSH protocol.\nWe can either create a new SSH key pair or choose to use an existing key pair when creating the instance. We have created a new key pair as shown below:\nThe SSH key pair consists of a public key and a private key. The public key is stored on the EC2 instance and is used to authenticate the holder of the matching private key. 
On the EC2 side, it is saved as an entry in a file: ~/.ssh/authorized_keys that contains a list of all authorized public keys.\nWe have downloaded the private key of the SSH key pair and saved it to our local workstation.\nFor creating the instance in the public subnet we have used the network settings as shown below:\nWe will use this EC2 instance as our jump host on which we will set up an SSH tunnel for connecting to the RDS database in the next section.\nAllow Traffic to the RDS Database from the Jump Host To enable connectivity to our RDS database, any security groups, network ACL, security rules, or third-party security software that exist on the RDS database must allow traffic from the EC2 instance used as the jump host.\nIn our example, the security group of our RDS database must allow access to port 3306 from the EC2 instance. To enable this access, let us add an inbound rule to the new security group associated with the RDS database to allow connections from the EC2 instance:\nHere we have specified the port range as 3306 and the source as 172.31.24.5/32 which is the private IP of the EC2 instance.\nThe EC2 instance is also secured by a security group. A security group is associated with an outbound rule, by default, that allows outbound traffic to all destinations. Accordingly, this rule will also allow the EC2 instance to make an outbound connection to the RDS database.\nConnecting to the RDS Database We use a mechanism called: SSH tunneling or port forwarding for connecting to the RDS database. SSH tunneling is a method of transporting arbitrary networking data over an encrypted SSH connection.\nThe SSH client on our local workstation listens for connections on a configured port. When it receives a connection, it tunnels the connection to the SSH server which is our EC2 instance acting as the jump host.\nThe SSH server(EC2 instance) connects to a destination port, usually on a different machine than the SSH server. 
In our example, the destination port is the port of the RDS database.\nAdditionally, since the SSH private key is securely saved on our local workstation, the communication with the RDS database is secured/encrypted over the SSH tunnel, and the owner of the SSH private key is authenticated by the jump host.\nPlease refer to the official documentation to understand more details about SSH tunneling.\nWe will use MySQL Workbench, which provides a GUI, to connect to our RDS MySQL database in two ways:\nConnect using Standard TCP/IP In this method, we create an SSH tunnel from our local machine to access the RDS MySQL database using the EC2 instance as the jump host.\nLet us start the SSH tunnel by running the following command:\nssh -i \u0026lt;SSH key of EC2 instance\u0026gt; ec2-user@\u0026lt;instance-IP of EC2\u0026gt; -L 3306:\u0026lt;RDS DB endpoint\u0026gt;:3306 When we run this command, the local port 3306 on our local machine tunnels to port 3306 on the RDS instance. We can then use MySQL Workbench to access the RDS MySQL database with the connection type Standard TCP/IP:\nWe can see the successful test connection message with 127.0.0.1 as the hostname and 3306 as the port.\nAlternatively, we can run the following command in our terminal using the MySQL Command-Line Client: mysql:\nmysql -u \u0026lt;DB User\u0026gt; -h 127.0.0.1 -P 3306 -p The -p flag prompts for the database password. Here also we are connecting to the RDS MySQL database with 127.0.0.1 as the hostname and 3306 as the port.\nConnect using Standard TCP/IP over SSH In this method, we are connecting to the RDS MySQL database using MySQL Workbench with TCP/IP over SSH as the connection type:\nWe can see the successful test connection message with the following parameters:\n SSH Hostname: DNS name or IP of the EC2 instance used as the jump host SSH Username: SSH user name (ec2-user in our example) to connect to the EC2 instance. 
SSH Key File: Path to the SSH private key file saved in our local machine when creating the EC2 instance. MySQL Hostname: Endpoint of the RDS MySQL database. MySQL Server Port: TCP/IP port of the RDS MySQL database. Username: The user name of the RDS MySQL database set up during RDS database creation. Password: Password of the RDS MySQL database set up during RDS database creation.  Conclusion In this article, we walked through the stages of creating an RDS database in a private subnet and then connecting to the database using a jump host. Here is a summary of the steps for our quick reference:\n Create an RDS database in a private subnet. Create an EC2 instance in a public subnet in the same VPC where the RDS database was created. This EC2 instance will act as the bastion or Jump host for connecting to the RDS database. A bastion or jump host is a server whose purpose is to provide access to a private network from an external network like the internet. Add an inbound rule to the security group associated with the RDS database to allow incoming traffic from the EC2 instance created in step 2. Optionally add an outbound rule to the security group associated with the EC2 instance to allow outgoing traffic to the RDS database. Use a database client and connect to the endpoint of the RDS database with the database credentials configured during creation time or later using the SSH tunneling method.  ","date":"June 1, 2022","image":"https://reflectoring.io/images/stock/0001-network-1200x628-branded_hu72d229b68bf9f2a167eb763930d4c7d5_172647_650x0_resize_q90_box.jpg","permalink":"/connect-rds-byjumphost/","title":"Using a Jump host to access an RDS database in a private subnet"},{"categories":["NodeJS"],"contents":"Making API calls is integral to most applications and while doing this we use an HTTP client usually available as an external library. 
Axios is a popular HTTP client available as a JavaScript library with more than 22 million weekly downloads as of May 2022.\nWe can make API calls with Axios from JavaScript applications irrespective of whether the JavaScript is running on the front-end, like a browser, or on the server-side.\nIn this article, we will understand Axios and use its capabilities to make different types of REST API calls from JavaScript applications.\n Example Code This article is accompanied by a working code example on GitHub. Why do we need Axios Let us first understand why we need to use a library like Axios. JavaScript already provides built-in mechanisms: XMLHttpRequest and the Fetch API for interacting with APIs.\nAxios, in contrast to these built-in options, is an open-source library that we need to include in our application for making API calls over HTTP. It is similar to the Fetch API and returns a JavaScript Promise object but also includes many powerful features.\nOne of the important capabilities of Axios is its isomorphic nature, which means it can run in the browser as well as in server-side Node.js applications with the same codebase.\nAxios is also a promise-based HTTP client that can be used in plain JavaScript as well as in advanced JavaScript frameworks like React, Vue.js, and Angular.\nIt supports all modern browsers, including support for IE 8 and higher.\nIn the following sections, we will look at examples of using these features of Axios in our applications.\nInstalling Axios and Other Prerequisites For the Examples We have created the following applications to simulate APIs on the server consumed by other applications on the server and the browser with REST APIs:\n apiserver: This is a Node.js application written using the Express Framework that will contain the REST APIs. serversideapps: This is also a Node.js application written in Express that will call the REST APIs exposed by the apiserver application using the Axios HTTP client. 
reactapp: This is a front-end application written in React which will also call the REST APIs exposed by the apiserver application.  Instead of Express, we could have used any other JavaScript framework or even raw JavaScript applications. To understand Express, please refer to our Express series of articles starting with Getting started on Express.\nWe will need to install the Axios library in two of these applications: serversideapps and reactapp which will be making API calls. Let us change to these directories one by one and install Axios using npm:\nnpm install axios The package.json in our Node.js express application after installing the axios module looks like this:\n{ \u0026#34;name\u0026#34;: \u0026#34;serversideapp\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;1.0.0\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;main\u0026#34;: \u0026#34;index.js\u0026#34;, ... ... \u0026#34;dependencies\u0026#34;: { \u0026#34;axios\u0026#34;: \u0026#34;^0.27.2\u0026#34;, \u0026#34;cors\u0026#34;: \u0026#34;^2.8.5\u0026#34;, \u0026#34;express\u0026#34;: \u0026#34;^4.18.1\u0026#34; } } We can see the axios module added as a dependency in the dependencies element.\nIf we want to call APIs with Axios from a vanilla JavaScript application, then we need to include it from a Content delivery network (CDN) as shown here:\n\u0026lt;script src=\u0026#34;https://unpkg.com/axios/dist/axios.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; After setting up our applications, let us now get down to invoking the APIs exposed by the apiserver from the serversideapp and the reactapp using the Axios HTTP client in the following sections.\nSending Requests with the Axios Instance Let us start by invoking a GET method with the Axios HTTP client from our server-side application: serversideapp.\nFor doing this, we will add an Express route handler function with a URL: /products to the application. 
In the route handler function, we will fetch the list of products by calling an API from our apiserver with the URL: http://localhost:3002/products.\nWe will use the signature: axios(config) on the default instance provided by the Axios HTTP client for doing this:\nconst express = require(\u0026#39;express\u0026#39;) // Get the default instance const axios = require(\u0026#39;axios\u0026#39;) const app = express() // Express route handler with URL: \u0026#39;/products\u0026#39; and a handler function app.get(\u0026#39;/products\u0026#39;, (request, response) =\u0026gt; { // Make the GET call by passing a config object to the instance  axios({ method: \u0026#39;get\u0026#39;, url: \u0026#39;http://localhost:3002/products\u0026#39; }).then(apiResponse =\u0026gt; { // process the response  const products = apiResponse.data response.json(products) }) }) In this example, we are first calling require('axios') for getting an instance: axios set up with a default configuration.\nThen we are passing a configuration argument to the axios instance containing the method parameter set to the HTTP method: get and the url parameter set to the URL of the REST endpoint: http://localhost:3002/products. The url parameter is mandatory while we can omit the method parameter that will then default to get.\nThis method returns a JavaScript Promise object which means the program does not wait for the method to complete before trying to execute the subsequent statement. The Promise is either fulfilled or rejected, depending on the response from the API.\nWe use the then() method as in this example for processing the result. The then() method gets executed when the Promise is fulfilled . 
In our example, in the then method, we are extracting the list of products by calling apiResponse.data.\nSimilarly, a POST request for adding a new product made with the axios default instance will look like this:\nconst express = require(\u0026#39;express\u0026#39;) // Get the default instance const axios = require(\u0026#39;axios\u0026#39;) const app = express() // Express route handler with URL: \u0026#39;/products/new\u0026#39; and a handler function app.post(\u0026#39;/products/new\u0026#39;, async (request, response) =\u0026gt; { const name = request.body.name const brand = request.body.brand const newProduct = {name: name, brand:brand} // Make the POST call by passing a config object to the instance  axios({ method: \u0026#39;post\u0026#39;, url: \u0026#39;http://localhost:3002/products\u0026#39;, data: newProduct, headers: {\u0026#39;Authorization\u0026#39;: \u0026#39;XXXXXX\u0026#39;} }).then(apiResponse=\u0026gt;{ const products = apiResponse.data response.json(products) }) }) In this example, in addition to what we did for calling the GET method, we have set the data element containing the JSON representation of the new Product along with an Authorization header. We are processing the response in the then function on the Promise response where we are extracting the API response data by calling apiResponse.data.\nFor more involved processing of the API response, it will be worthwhile to understand all the elements of the response returned by the API call made with axios :\n data: Response payload sent by the server status: HTTP status code from the server response statusText: HTTP status message from the server response headers: HTTP headers received in the API response config: config sent to the axios instance for sending the request request: Request that generated this response. It is the last ClientRequest instance in node.js (in redirects) and an XMLHttpRequest instance in the browser.  
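To make these response elements concrete, here is a small illustrative sketch in plain Node.js. No network call is made; the apiResponse object below is a hand-built stand-in whose values are made up, mirroring the fields Axios populates on a real response:

```javascript
// Illustrative stand-in for the object an Axios call like
// axios.get('http://localhost:3002/products') resolves with.
// All values here are invented for demonstration.
const apiResponse = {
  data: [{ id: 1, name: 'Laptop', brand: 'Acme' }],  // response payload
  status: 200,                                       // HTTP status code
  statusText: 'OK',                                  // HTTP status message
  headers: { 'content-type': 'application/json' },   // response headers
  config: { method: 'get', url: 'http://localhost:3002/products' },
};

// Destructure only the elements we need for processing.
function describeResponse({ status, statusText, headers, data }) {
  return `${status} ${statusText}, ${headers['content-type']}, ` +
    `${data.length} item(s)`;
}

console.log(describeResponse(apiResponse));
// e.g. "200 OK, application/json, 1 item(s)"
```

In real code the same destructuring works directly on the value the Promise resolves with, e.g. inside the then() callback.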
Sending Requests with the Convenience Instance Methods of Axios Axios also provides an alternate signature for making the API calls by providing convenience methods for all the HTTP methods like:axios.get(), axios.post(), axios.put(), axios.delete(), etc.\nWe can write the previous example for calling the GET method of the REST API using the convenience method: axios.get() as shown below:\nconst express = require(\u0026#39;express\u0026#39;) // Get the default instance const axios = require(\u0026#39;axios\u0026#39;) const app = express() // Express route handler for making a request for fetching a product app.get(\u0026#39;/products/:productName\u0026#39;, (request, response) =\u0026gt; { const productName = request.params.productName axios.get(`http://localhost:3002/products/${productName}`) .then(apiResponse =\u0026gt; { const product = apiResponse.data response.json(product) }) }) In this example, in the Express route handler function, we are calling the get() method on the default instance of axios and passing the URL of the REST API endpoint as the sole argument. This code looks much more concise than the signature: axios(config) used in the example in the previous section.\nThe signature: axios.get() is always preferable for calling the REST APIs due to its cleaner syntax. 
However, the signature: axios(config) of passing a config object containing the HTTP method, and URL parameters to the axios instance can be used in situations where we want to construct the API calls dynamically.\nThe get() method returns a JavaScript Promise object similar to our earlier examples, where we extract the list of products inside the then function.\nInstead of appending the request query parameter in the URL in the previous example, we could have passed the request parameter in a separate method argument: params as shown below:\nconst axios = require(\u0026#39;axios\u0026#39;) axios.get(`http://localhost:3002/products/`, { params: { productName: productName } }) .then(apiResponse =\u0026gt; { const product = apiResponse.data response.json(product) }) We could have also used the async/await syntax to call the get() method:\nconst express = require(\u0026#39;express\u0026#39;) const axios = require(\u0026#39;axios\u0026#39;) const app = express() app.get(\u0026#39;/products/async/:productName\u0026#39;, async (request, response) =\u0026gt; { const productName = request.params.productName const apiResponse = await axios.get(`http://localhost:3002/products/`, { params: { productName: productName } }) const product = apiResponse.data response.json(product) }) async/await is part of ECMAScript 2017 and is not supported in older browsers like IE.\nLet us next make a POST request to an API with the convenience method axios.post():\nconst express = require(\u0026#39;express\u0026#39;) const axios = require(\u0026#39;axios\u0026#39;) const app = express() app.post(\u0026#39;/products\u0026#39;, async (request, response) =\u0026gt; { const name = request.body.name const brand = request.body.brand const newProduct = {name: name, brand:brand} const apiResponse = await axios.post(`http://localhost:3002/products/`, newProduct) const product = apiResponse.data response.json({result:\u0026#34;OK\u0026#34;}) }) Here we are using the async/await syntax to make a POST 
request with the axios.post() method. We are passing the new product to be created as a JSON as the second parameter of the post() method.\nUsing Axios in Front-End Applications Let us look at an example of using Axios in a front-end application built with the React library. The below snippet is from a React component that calls the API for fetching products:\nimport React, { useState } from \u0026#39;react\u0026#39; import axios from \u0026#39;axios\u0026#39; export default function ProductList(){ const [products, setProducts] = useState([]) const fetchProducts = ()=\u0026gt;{ axios.get(`http://localhost:3001/products`) .then(response =\u0026gt; { const products = response.data setProducts(products) }) } return ( \u0026lt;\u0026gt; \u0026lt;p\u0026gt;Product List\u0026lt;/p\u0026gt; \u0026lt;p\u0026gt;\u0026lt;button onClick={fetchProducts}\u0026gt;Fetch Products\u0026lt;/button\u0026gt;\u0026lt;/p\u0026gt; \u0026lt;ul\u0026gt; { products .map(product =\u0026gt; \u0026lt;li key={product.id}\u0026gt;{product.name}\u0026amp;nbsp;{product.brand}\u0026lt;/li\u0026gt; ) } \u0026lt;/ul\u0026gt; \u0026lt;/\u0026gt; ) } As we can see, the code for making the API call with Axios is the same as what we used in the Node.js application in the earlier sections.\nSending Multiple Concurrent Requests with Axios In many situations, we need to combine the results from multiple APIs to get a consolidated result. 
With the Axios HTTP client, we can make concurrent requests to multiple APIs as shown in this example:\nconst express = require(\u0026#39;express\u0026#39;) // get the default axios instance const axios = require(\u0026#39;axios\u0026#39;) const app = express() // Route Handler app.get(\u0026#39;/products/:productName/inventory\u0026#39;, (request, response) =\u0026gt; { const productName = request.params.productName // Call the first API for product details  const productApiResponse = axios .get(`http://localhost:3002/products/${productName}`) // Call the second API for inventory details  const inventoryApiResponse = axios .get(`http://localhost:3002/products/${productName}/itemsInStock`) // Consolidate results into a single result  Promise.all([productApiResponse, inventoryApiResponse]) .then(results=\u0026gt;{ const productData = results[0].data const inventoryData = results[1].data let aggregateData = productData aggregateData.unitsInStock = inventoryData.unitsInStock response.send(aggregateData) }) }) In this example, we are making requests to two APIs using the Promise.all() method. We pass an iterable of the two Promise objects returned by the two APIs as input to the method.\nIn response, we get a single Promise object that resolves to an array of the results of the input Promise objects.\nThis Promise object returned as the response will resolve only when all of the input promises are resolved, or if the input iterable contains no promises.\nOverriding the default Instance of Axios In all the examples we have seen so far, we used the require('axios') to get an instance of axios which is configured with default parameters. 
If we want to add a custom configuration like a timeout of 2 seconds, we need to use Axios.create() where we can pass the custom configuration as an argument.\nAn Axios instance created with Axios.create() with a custom config helps us to reuse the provided configuration for all the API invocations made by that particular instance.\nHere is an example of an axios instance created with Axios.create() and used to make a GET request:\nconst express = require(\u0026#39;express\u0026#39;) const axios = require(\u0026#39;axios\u0026#39;) const app = express() // Express Route Handler app.get(\u0026#39;/products/deals\u0026#39;, (request, response) =\u0026gt; { // Create a new instance of axios  const new_instance = axios.create({ baseURL: \u0026#39;http://localhost:3002/products\u0026#39;, timeout: 1000, headers: { \u0026#39;Accept\u0026#39;: \u0026#39;application/json\u0026#39;, \u0026#39;Authorization\u0026#39;: \u0026#39;XXXXXX\u0026#39; } }) new_instance({ method: \u0026#39;get\u0026#39;, url: \u0026#39;/deals\u0026#39; }).then(apiResponse =\u0026gt; { const products = apiResponse.data response.json(products) }) }) In this example, we are using axios.create() to create a new instance of Axios with a custom configuration that has a base URL of http://localhost:3002/products and a timeout of 1000 milliseconds. The configuration also has an Accept and Authorization headers set depending on the API being invoked.\nThe timeout configuration specifies the number of milliseconds before the request times out. If the request takes longer than the timeout interval, the request will be aborted.\nIntercepting Requests and Responses We can intercept requests or responses of API calls made with Axios by setting up interceptor functions. Interceptor functions are of two types:\n Request interceptor for intercepting requests before the request is sent to the server. Response interceptor for intercepting responses received from the server.  
Here is an example of an axios instance configured with a request interceptor for capturing the start time and a response interceptor for computing the time taken to process the request:\nconst express = require(\u0026#39;express\u0026#39;) const axios = require(\u0026#39;axios\u0026#39;) const app = express() // Request interceptor for capturing start time axios.interceptors.request.use( (request) =\u0026gt; { request.time = { startTime: new Date() } return request }, (err) =\u0026gt; { return Promise.reject(err) } ) // Response interceptor for computing duration axios.interceptors.response.use( (response) =\u0026gt; { response.config.time.endTime = new Date() response.duration = response.config.time.endTime - response.config.time.startTime return response }, (err) =\u0026gt; { return Promise.reject(err); } ) // Express route handler app.get(\u0026#39;/products\u0026#39;, (request, response) =\u0026gt; { axios({ method: \u0026#39;get\u0026#39;, url: \u0026#39;http://localhost:3002/products\u0026#39; }).then(apiResponse=\u0026gt;{ const products = apiResponse.data // Print duration computed in the response interceptor  console.log(`duration ${apiResponse.duration}` ) response.json(products) }) }) In this example, we are setting the request.time to the current time in the request interceptor. In the response interceptor, we are capturing the current time in response.config.time.endTime and computing the duration by deducting from the current time, the start time captured in the request interceptor.\nInterceptors are a powerful feature that can be put to use in many use cases where we need to perform actions common to all API calls. In the absence of interceptors, we will need to repeat these actions in every API call. Some of these examples are:\n Verify whether the access token for making the API call has expired in the request interceptor. If the token has expired, fetch a new token with the refresh token. 
Attach specific headers required by the API to the request in the request interceptor. For example, add the Authorization header to every API call. Check for HTTP status, headers, and specific fields in the response to detect error conditions and trigger error handling logic.  Handling Errors in Axios The response received from Axios is a JavaScript promise which has a then() function for promise chaining, and a catch() function for handling errors. So for handling errors, we should add a catch() function at the end of one or more then() functions as shown in this example:\nconst express = require(\u0026#39;express\u0026#39;) const axios = require(\u0026#39;axios\u0026#39;) const app = express() // Express route handler app.post(\u0026#39;/products/new\u0026#39;, async (request, response) =\u0026gt; { const name = request.body.name const brand = request.body.brand const newProduct = {name: name, brand: brand} axios({ method: \u0026#39;post\u0026#39;, url: \u0026#39;http://localhost:3002/products\u0026#39;, data: newProduct, headers: {\u0026#39;Authorization\u0026#39;: \u0026#39;XXXXXX\u0026#39;} }).then(apiResponse=\u0026gt;{ const products = apiResponse.data response.json(products) }).catch(error =\u0026gt; { if (error.response) { console.log(\u0026#34;response error\u0026#34;) } else if (error.request) { console.log(\u0026#34;request error\u0026#34;) } else { console.log(\u0026#39;Error\u0026#39;, error.message); } response.send(error.toJSON()) }) }) In this example, we have put the error handling logic in the catch() function. The callback function in the catch() takes the error object as input. 
We can determine the source of the error by checking for the presence of the response property and request property in the error object with error.response and error.request.\nAn error object with a response property indicates that the server returned a 4xx/5xx error, so we can return a helpful error message in the response accordingly.\nIn contrast, an error object with a request property indicates network errors, a non-responsive backend, or errors caused by unauthorized or cross-domain requests.\nThe error object may not have either a response or request object attached to it. This indicates errors related to setting up the request, which eventually triggered the error. An example of this condition is a URL parameter getting omitted while sending the request.\nCancelling Initiated Requests We can also cancel or abort a request when we no longer require the requested data, for example, when the user navigates from the current page to another page. To cancel a request, we use the AbortController class as shown in this code snippet from our React application:\nimport React, { useState } from \u0026#39;react\u0026#39; import axios from \u0026#39;axios\u0026#39; export default function ProductList(){ const [products, setProducts] = useState([]) const controller = new AbortController() const abortSignal = controller.signal const fetchProducts = ()=\u0026gt;{ axios.get(`http://localhost:3001/products`, {signal: abortSignal}) .then(response =\u0026gt; { const products = response.data setProducts(products) }) controller.abort() } return ( \u0026lt;\u0026gt; ... ... \u0026lt;/\u0026gt; ) } As we can see in this example, we are first creating a controller object using the AbortController() constructor, then storing a reference to its associated AbortSignal object using the signal property of the AbortController.\nWhen the axios request is initiated, we pass in the AbortSignal as an option inside the request\u0026rsquo;s options object: {signal: abortSignal}. 
This associates the signal and controller with the axios request and allows us to abort the request by calling the abort() method on the controller.\nUsing Axios in TypeScript Let us now see an example of using Axios in applications authored in TypeScript.\nWe will first create a separate folder: serversideapp_ts and create a project in Node.js by changing into this directory and running the below commands:\ncd serversideapp_ts npm init -y Let us next add support for TypeScript to our Node.js project by performing the following steps:\n Installing TypeScript and ts-node with npm:  npm i -D typescript ts-node Creating a JSON file named tsconfig.json with the below contents in our project’s root folder to specify different options for compiling the TypeScript code:  { \u0026#34;compilerOptions\u0026#34;: { \u0026#34;module\u0026#34;: \u0026#34;commonjs\u0026#34;, \u0026#34;target\u0026#34;: \u0026#34;es6\u0026#34;, \u0026#34;rootDir\u0026#34;: \u0026#34;./\u0026#34;, \u0026#34;esModuleInterop\u0026#34;: true } } Installing the axios module with npm:  npm install axios Axios includes TypeScript definitions, so we do not have to install them separately.\nAfter enabling the project for TypeScript, let us add a file index.ts which will contain our code for making API calls with Axios in TypeScript.\nNext, we will make an HTTP GET request to our API for fetching products as shown in this code snippet:\nimport axios from \u0026#39;axios\u0026#39;; type Product = { id: number; email: string; first_name: string; }; type GetProductsResponse = { data: Product[]; }; async function getProducts() { try { // 👇️ fetch products from an API  const { data, status } = await axios.get\u0026lt;GetProductsResponse\u0026gt;( \u0026#39;http://localhost:3002/products\u0026#39;, { headers: { Accept: \u0026#39;application/json\u0026#39;, }, }, ); console.log(JSON.stringify(data, null, 4)); console.log(`response status is: ${status}`); return data; } catch (error) { if (axios.isAxiosError(error)) 
{ console.log(`error message: ${error.message}`); return error.message; } else { console.log(`unexpected error: ${error}`); return \u0026#39;An unexpected error occurred\u0026#39;; } } } getProducts(); In this example, we get the axios instance by importing it from the module in the first line. We have next defined two types: Product and GetProductsResponse.\nAfter that, we have defined a method getProducts() where we invoke the API with the axios.get() method using the async/await syntax. We pass the URL of the API endpoint and an Accept header to the axios.get() method.\nLet us now run this program with the below command:\nnpx ts-node index.ts We call the method getProducts() in the last line of the program, which prints the response from the API in the terminal/console.\nConclusion In this article, we looked at the different capabilities of Axios. Here is a summary of the important points from the article:\n Axios is an HTTP client for calling REST APIs from JavaScript programs running in the server as well as in web browsers. We create a default instance of axios by calling require('axios'). We can use the create() method of axios to create a new instance, where we can override default configuration properties like \u0026lsquo;timeout\u0026rsquo;. Axios allows us to attach request and response interceptors to the axios instance where we can perform actions common to multiple APIs. Error conditions are handled in the catch() function of the Promise response. We can cancel requests by calling the abort() method of the AbortController class. The Axios library includes TypeScript definitions, so we do not have to install them separately when using Axios in TypeScript applications.  
You can refer to all the source code used in the article on GitHub.\n","date":"May 20, 2022","image":"https://reflectoring.io/images/stock/0118-keyboard-1200x628-branded_huf25a9b6a90140c9cfeb91e792ab94429_105919_650x0_resize_q90_box.jpg","permalink":"/tutorial-guide-axios/","title":"Complete Guide to Axios HTTP Client"},{"categories":["Java"],"contents":"Developers use HTTP clients to communicate with other applications over the network. Over the years, multiple HTTP clients have been developed to suit various application needs.\nIn this article, we will focus on Retrofit, one of the most popular type-safe HTTP clients for Java and Android.\n Example Code This article is accompanied by a working code example on GitHub. What is OkHttp? OkHttp is an efficient HTTP client developed by Square. Some of its key advantages are:\n HTTP/2 support Connection pooling (helps reduce request latency) GZIP compression (saves bandwidth and speeds up interaction) Response caching Silent recovery from connection problems Support for synchronous and asynchronous calls  What is Retrofit? Retrofit is a high-level REST abstraction built on top of OkHttp. When used to call REST applications, it greatly simplifies API interactions by parsing requests and responses into POJOs.\nIn the following sections, we will work on creating a Retrofit client and look at how to incorporate the various features that OkHttp provides.\nSetting up a REST Server We will use a sample REST-based Library Application that can fetch, create, update and delete books and authors. You can check out the source code on GitHub and run the application yourself if you want.\nThis library application is a Spring Boot service that uses Maven for building and HSQLDB as the underlying database. 
The Maven Wrapper bundled with the application will be used to start the service:\nmvnw clean verify spring-boot:run (for Windows) ./mvnw clean verify spring-boot:run (for Linux) Now, the application should successfully start:\n[ main] com.reflectoring.library.Application : Started application in 6.94 seconds (JVM running for 7.611) Swagger is a set of tools that helps develop and describe RESTful APIs by generating user-friendly documentation of the API structure. This application uses Swagger documentation that can be viewed at http://localhost:8090/swagger-ui.html\nThe documentation should look like this:\nSwagger also allows us to make calls to the REST endpoints. Before we can do this, we need to add basic authentication credentials as configured in application.yaml:\nNow, we can hit the REST endpoints successfully. Sample JSON requests are available in the README.md file in the application codebase.\nOnce the POST request to add a book to the library is successful, we should be able to make a GET call to confirm this addition.\nNow that our REST service works as expected, we will move on to introduce another application that will act as a REST client making calls to this service. In the process, we will learn about Retrofit and its various features.\nBuilding a REST Client with Retrofit The REST client application will be a Library Audit application that exposes REST endpoints and uses Retrofit to call our previously set up Library application. 
The result is then audited in an in-memory database for tracking purposes.\nAdding Retrofit dependencies With Maven:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.squareup.retrofit2\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;retrofit\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.5.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.squareup.retrofit2\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;converter-jackson\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.5.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; With Gradle:\ndependencies { implementation \u0026#39;com.squareup.retrofit2:retrofit:2.5.0\u0026#39; implementation \u0026#39;com.squareup.retrofit2:converter-jackson:2.5.0\u0026#39; } Quick Guide to Setting up a Retrofit Client Every Retrofit client needs to follow the three steps listed below:\nCreating the Model Objects for Retrofit We will take the help of the Swagger documentation in our REST service to create model objects for our Retrofit client.\nWe will now create corresponding model objects in our client application:\n@Getter @Setter @NoArgsConstructor public class AuthorDto { @JsonProperty(\u0026#34;id\u0026#34;) private long id; @JsonProperty(\u0026#34;name\u0026#34;) private String name; @JsonProperty(\u0026#34;dob\u0026#34;) private String dob; } @Getter @Setter @NoArgsConstructor public class BookDto { @JsonProperty(\u0026#34;bookId\u0026#34;) private long id; @JsonProperty(\u0026#34;bookName\u0026#34;) private String name; @JsonProperty(\u0026#34;publisher\u0026#34;) private String publisher; @JsonProperty(\u0026#34;publicationYear\u0026#34;) private String publicationYear; @JsonProperty(\u0026#34;isCopyrighted\u0026#34;) private boolean copyrightIssued; @JsonProperty(\u0026#34;authors\u0026#34;) private Set\u0026lt;AuthorDto\u0026gt; authors; } @Getter @Setter @AllArgsConstructor @NoArgsConstructor public class 
LibResponse { private String responseCode; private String responseMsg; } We take advantage of Lombok to generate getters, setters, and constructors for us (@Getter, @Setter, @AllArgsConstructor, @NoArgsConstructor). You can read more about Lombok in our article.\nCreating the Client Interface To create the Retrofit interface, we map every service call to a corresponding interface method, as shown below.\npublic interface LibraryClient { @GET(\u0026#34;/library/managed/books\u0026#34;) Call\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt; getAllBooks(@Query(\u0026#34;type\u0026#34;) String type); @POST(\u0026#34;/library/managed/books\u0026#34;) Call\u0026lt;LibResponse\u0026gt; createNewBook(@Body BookDto book); @PUT(\u0026#34;/library/managed/books/{id}\u0026#34;) Call\u0026lt;LibResponse\u0026gt; updateBook(@Path(\u0026#34;id\u0026#34;) Long id, @Body BookDto book); @DELETE(\u0026#34;/library/managed/books/{id}\u0026#34;) Call\u0026lt;LibResponse\u0026gt; deleteBook(@Path(\u0026#34;id\u0026#34;) Long id); } Creating a Retrofit Client We will use the Retrofit Builder API to create an instance of the Retrofit client for us:\n@Configuration @EnableConfigurationProperties(ClientConfigProperties.class) public class RestClientConfiguration { @Bean public LibraryClient libraryClient(ClientConfigProperties props) { OkHttpClient.Builder httpClientBuilder = new OkHttpClient.Builder() .addInterceptor(new BasicAuthInterceptor(props.getUsername(), props.getPassword())) .connectTimeout(props.getConnectionTimeout(), TimeUnit.SECONDS) .readTimeout(props.getReadWriteTimeout(), TimeUnit.SECONDS); return new Retrofit.Builder().client(httpClientBuilder.build()) .baseUrl(props.getEndpoint()) .addConverterFactory(JacksonConverterFactory.create(new ObjectMapper())) .build().create(LibraryClient.class); } } Here, we have created a Spring Boot configuration that uses the Retrofit Builder to create a Spring bean that we can then use in other classes.\nWe will 
deep-dive into each of the three steps listed above in the next section.\nUsing Retrofit in Detail This section will focus on the annotations, Retrofit classes, and features that will help us create a flexible and easy-to-configure REST client.\nBuilding a Client Interface In this section, we will look at how to build the client interface. Retrofit supports the annotations @GET, @POST, @PUT, @DELETE, @PATCH, @OPTIONS, and @HEAD, which we use to annotate our client methods as shown in the following examples.\nPath Parameters Along with the mentioned annotations, we specify the relative path of the REST service endpoint. To make this relative URL more dynamic, we use parameter replacement blocks as shown below:\n@PUT(\u0026#34;/library/managed/books/{id}\u0026#34;) Call\u0026lt;LibResponse\u0026gt; updateBook(@Path(\u0026#34;id\u0026#34;) Long id, @Body BookDto book); To pass the actual value of id, we annotate a method parameter with the @Path annotation so that the call execution will replace {id} with its corresponding value.\nQuery Parameters We can specify the query parameters in the URL directly or add a @Query-annotated param to the method:\n@GET(\u0026#34;/library/managed/books?type=all\u0026#34;) // OR  @GET(\u0026#34;/library/managed/books\u0026#34;) Call\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt; getAllBooks(@Query(\u0026#34;type\u0026#34;) String type); Multiple Query Parameters If the request needs to have multiple query parameters, we can use @QueryMap:\n@GET(\u0026#34;/library/managed/books\u0026#34;) Call\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt; getAllBooks(@QueryMap Map\u0026lt;String, String\u0026gt; options); Request Body To specify an object as the HTTP request body, we use the @Body annotation:\n@POST(\u0026#34;/library/managed/books\u0026#34;) Call\u0026lt;LibResponse\u0026gt; createNewBook(@Body BookDto book); Headers We can pass static or dynamic header parameters to the Retrofit interface methods. 
For static headers, we can use the @Headers annotation:\n@Headers(\u0026#34;Accept: application/json\u0026#34;) @GET(\u0026#34;/library/managed/books\u0026#34;) Call\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt; getAllBooks(@Query(\u0026#34;type\u0026#34;) String type); We could also define multiple static headers inline:\n@Headers({ \u0026#34;Accept: application/json\u0026#34;, \u0026#34;Cache-Control: max-age=640000\u0026#34;}) @GET(\u0026#34;/library/managed/books\u0026#34;) Call\u0026lt;List\u0026lt;BookDto\u0026gt;\u0026gt; getAllBooks(@Query(\u0026#34;type\u0026#34;) String type); To pass dynamic headers, we specify them as method parameters annotated with the @Header annotation:\n@GET(\u0026#34;/library/managed/books\u0026#34;) Call\u0026lt;BookDto\u0026gt; getAllBooksWithHeaders(@Header(\u0026#34;requestId\u0026#34;) String requestId); For multiple dynamic headers, we use @HeaderMap.\nAll Retrofit responses are wrapped in a Call object. It supports both blocking and non-blocking requests.\nUsing the Retrofit Builder API The Builder API on Retrofit allows for customization of our HTTP client. Let\u0026rsquo;s take a closer look at some configuration options.\nConfiguring Timeout Settings We can set timeouts on the underlying HTTP client. However, setting up these values is optional. 
If we do not specify the timeouts, default settings apply.\n Connection timeout: 10 sec Read timeout: 10 sec Write timeout: 10 sec  To override these defaults, we need to set up OkHttpClient as shown below:\nOkHttpClient.Builder httpClientBuilder = new OkHttpClient.Builder() .connectTimeout(props.getConnectionTimeout(), TimeUnit.SECONDS) .readTimeout(props.getReadWriteTimeout(), TimeUnit.SECONDS); return new Retrofit.Builder().client(httpClientBuilder.build()) .baseUrl(props.getEndpoint()) .addConverterFactory(JacksonConverterFactory.create(new ObjectMapper())) .build().create(LibraryClient.class); Here, the timeout values are as specified in application.yaml.\nUsing Converters By default, Retrofit can only deserialize HTTP bodies into OkHttp\u0026rsquo;s ResponseBody type and its RequestBody type for @Body. With converters, the requests and responses can be wrapped into Java objects.\nCommonly used converters are:\n Gson: com.squareup.retrofit2:converter-gson Jackson: com.squareup.retrofit2:converter-jackson  To make use of these converters, we need to make sure their corresponding build dependencies are included. Then we can add them to the respective converter factory.\nIn the following example, we have used Jackson\u0026rsquo;s ObjectMapper() to map requests and responses to and from JSON:\nnew Retrofit.Builder().client(httpClientBuilder.build()) .baseUrl(props.getEndpoint()) .addConverterFactory(JacksonConverterFactory.create(new ObjectMapper())) .build().create(LibraryClient.class); Adding Interceptors Interceptors are a part of the OkHttp library that intercept requests and responses. They help add, remove, or modify metadata. 
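Before looking at OkHttp's interceptor types, it can help to see the underlying chain-of-responsibility idea in isolation. The following is a toy, library-free Java sketch: the Interceptor and Chain names here are simplified stand-ins for OkHttp's okhttp3.Interceptor and okhttp3.Interceptor.Chain, and the terminal step merely fakes the network call.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy sketch (not OkHttp): each interceptor sees the request headers and
// delegates to the rest of the chain, mirroring OkHttp's chain.proceed() idea.
public class InterceptorChainDemo {

    interface Interceptor {
        String intercept(Map<String, String> headers, Chain chain);
    }

    interface Chain {
        String proceed(Map<String, String> headers);
    }

    // Builds a chain where each interceptor wraps the next one;
    // the terminal chain stands in for the actual network call.
    static String run(List<Interceptor> interceptors, Map<String, String> headers) {
        Chain chain = h -> "response for headers " + h;
        for (int i = interceptors.size() - 1; i >= 0; i--) {
            Interceptor interceptor = interceptors.get(i);
            Chain next = chain;
            chain = h -> interceptor.intercept(h, next);
        }
        return chain.proceed(headers);
    }

    public static void main(String[] args) {
        Interceptor auth = (headers, chain) -> {
            // modify the request, then continue down the chain
            headers.put("Authorization", "Basic dXNlcjpwYXNzd29yZA==");
            return chain.proceed(headers);
        };
        Interceptor logging = (headers, chain) -> {
            // observe the request on its way through
            System.out.println("sending with headers: " + headers);
            return chain.proceed(headers);
        };
        System.out.println(run(List.of(auth, logging), new LinkedHashMap<>()));
    }
}
```

Each interceptor decides what to do before and after delegating to chain.proceed(); OkHttp invokes interceptors in the order they were added in essentially this fashion.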
OkHttp interceptors are of two types:\n Application Interceptors - Configured to handle application requests and responses Network Interceptors - Configured to handle network focused scenarios  Let\u0026rsquo;s take a look at some use-cases where interceptors are used:\nBasic Authentication Basic Authentication is one of the commonly used means to secure endpoints. In our example, the REST service is secured. For the Retrofit client to make authenticated REST calls, we will create an Interceptor class as shown:\npublic class BasicAuthInterceptor implements Interceptor { private final String credentials; public BasicAuthInterceptor(String user, String password) { this.credentials = Credentials.basic(user, password); } @Override public Response intercept(Chain chain) throws IOException { Request request = chain.request(); Request authenticatedRequest = request.newBuilder() .header(\u0026#34;Authorization\u0026#34;, credentials).build(); return chain.proceed(authenticatedRequest); } } Next, we will add this interceptor to the Retrofit configuration client.\nOkHttpClient.Builder httpClientBuilder = new OkHttpClient.Builder() .addInterceptor(new BasicAuthInterceptor( props.getUsername(), props.getPassword())); The username and password configured in the application.yaml will be securely passed to the REST service in the Authorization header. Adding this interceptor ensures that the Authorization header is attached to every request triggered.\nLogging Logging interceptors print requests, responses, header data and additional information. OkHttp provides a logging library that serves this purpose. To enable this, we need to add com.squareup.okhttp3:logging-interceptor as a dependency. 
Further, we need to add this interceptor to our Retrofit configuration client:\nHttpLoggingInterceptor loggingInterceptor = new HttpLoggingInterceptor(); loggingInterceptor.setLevel(HttpLoggingInterceptor.Level.BODY); OkHttpClient.Builder httpClientBuilder = new OkHttpClient.Builder() .addInterceptor(loggingInterceptor); With these additions, when we trigger requests, the logs will look like this:\nVarious levels of logging are available, such as BODY, BASIC, and HEADERS. We can customize them to the level we need.\nHeader In the previous sections, we have seen how to add headers to the client interface. Another way to add headers to requests and responses is via interceptors. We should consider adding interceptors for headers if we need the same common headers to be passed to every request or response:\nOkHttpClient.Builder httpClient = new OkHttpClient.Builder(); httpClient.addInterceptor(new Interceptor() { @Override public Response intercept(Interceptor.Chain chain) throws IOException { Request request = chain.request(); // Request customization: add request headers  Request.Builder requestBuilder = request.newBuilder() .header(\u0026#34;Cache-Control\u0026#34;, \u0026#34;no-store\u0026#34;); return chain.proceed(requestBuilder.build()); } }); Note that if the request already contains the Cache-Control header, .header() will replace the existing header. There is also a .addHeader() method available that allows us to add multiple values to the same header. 
For instance:\nOkHttpClient.Builder httpClient = new OkHttpClient.Builder(); httpClient.addInterceptor(new Interceptor() { @Override public Response intercept(Interceptor.Chain chain) throws IOException { Request request = chain.request(); // Request customization: add request headers  Request.Builder requestBuilder = request.newBuilder() .addHeader(\u0026#34;Cache-Control\u0026#34;, \u0026#34;no-store\u0026#34;) .addHeader(\u0026#34;Cache-Control\u0026#34;, \u0026#34;no-cache\u0026#34;); return chain.proceed(requestBuilder.build()); } }); With the above code, the header added will be\nCache-Control: no-store, no-cache Caching For applications, caching can help speed up response times. With the combination of caching and network interceptor configuration, we can retrieve cached responses when there is a network connectivity issue. To configure this, we first implement an Interceptor:\npublic class CacheInterceptor implements Interceptor { @Override public Response intercept(Chain chain) throws IOException { Response response = chain.proceed(chain.request()); CacheControl cacheControl = new CacheControl.Builder() .maxAge(1, TimeUnit.MINUTES) // 1 minute cache  .build(); return response.newBuilder() .removeHeader(\u0026#34;Pragma\u0026#34;) .removeHeader(\u0026#34;Cache-Control\u0026#34;) .header(\u0026#34;Cache-Control\u0026#34;, cacheControl.toString()) .build(); } } Here the Cache-Control header is telling the client to cache responses for the configured maxAge. 
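To make the cache interceptor's effect concrete, here is a plain-Java sketch of the Cache-Control value such a builder serializes to. This is not okhttp3.CacheControl itself, just an illustration of the string it would write (directive names per RFC 7234); the point to notice is that maxAge is always rendered in seconds:

```java
import java.util.StringJoiner;
import java.util.concurrent.TimeUnit;

// Sketch of how a CacheControl-style builder renders its directives into
// the comma-separated Cache-Control header value.
public class CacheControlSketch {

    static String headerValue(long maxAge, TimeUnit unit, boolean noCache) {
        StringJoiner directives = new StringJoiner(", ");
        // max-age is expressed in seconds regardless of the unit passed in
        directives.add("max-age=" + unit.toSeconds(maxAge));
        if (noCache) {
            directives.add("no-cache");
        }
        return directives.toString();
    }

    public static void main(String[] args) {
        // maxAge(1, TimeUnit.MINUTES) in the interceptor above becomes:
        System.out.println(headerValue(1, TimeUnit.MINUTES, false)); // max-age=60
    }
}
```

So the interceptor above rewrites each response to carry Cache-Control: max-age=60, which is what instructs the client-side cache to keep the response for one minute.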
Next, we add this interceptor as a network interceptor and define an OkHttp cache in the client configuration.\nCache cache = new Cache(new File(\u0026#34;cache\u0026#34;), 10 * 1024 * 1024); OkHttpClient.Builder httpClientBuilder = new OkHttpClient.Builder() .addInterceptor(new BasicAuthInterceptor(props.getUsername(), props.getPassword())) .cache(cache) .addNetworkInterceptor(new CacheInterceptor()) .addInterceptor(interceptor) .connectTimeout(props.getConnectionTimeout(), TimeUnit.SECONDS) .readTimeout(props.getReadWriteTimeout(), TimeUnit.SECONDS); Note: Caching in general applies to GET requests only. With this configuration, the GET requests will be cached for 1 minute. The cached responses will be served during the 1 minute timeframe even if the network connectivity is down.\nCustom Interceptors As explained in the previous sections, BasicAuthInterceptor and CacheInterceptor are examples of custom interceptors created to serve a specific purpose. Custom interceptors implement the OkHttp Interceptor interface and override the intercept() method.\nNext, we configure the interceptor (either as an application interceptor or a network interceptor). This will make sure the interceptors are chained and called before the end-to-end request is processed.\nNote: If multiple interceptors are defined, they are called in sequence. For instance, a logging interceptor must always be defined as the last interceptor to be called in the chain, so that we do not miss any critical logging during execution.\nUsing the REST Client to Make Synchronous or Asynchronous Calls The REST client we configured above can call the service endpoints in two ways:\nSynchronous calls To make a synchronous call, the Call interface provides the execute() method. 
Since execute() method runs on the main thread, the UI is blocked till the execution completes.\nResponse\u0026lt;BookDto\u0026gt; allBooksResponse = libraryClient.getAllBooksWithHeaders(bookRequest).execute(); if (allBooksResponse.isSuccessful()) { books = allBooksResponse.body(); log.info(\u0026#34;Get All Books : {}\u0026#34;, books); audit = auditMapper.populateAuditLogForGetBook(books); } else { log.error(\u0026#34;Error calling library client: {}\u0026#34;, allBooksResponse.errorBody()); if (Objects.nonNull(allBooksResponse.errorBody())) { audit = auditMapper.populateAuditLogForException( null, HttpMethod.GET, allBooksResponse.errorBody().string()); } } The methods that help us further process the response are:\n isSuccessful(): Helps determine if the response HTTP status code is 2xx. body(): On success, returns the response body. In the example above, the response gets mapped to a BookDto object. errorBody(): When the service returns a failure response, this method gives us the corresponding error object. To further extract the error message, we use the errorBody().string().  Asynchronous Calls To make an asynchronous call, the Call interface provides the enqueue() method. 
The request is triggered on a separate thread and does not block the main thread.\npublic void getBooksAsync(String bookRequest) { Call\u0026lt;BookDto\u0026gt; bookDtoCall = libraryClient.getAllBooksWithHeaders(bookRequest); bookDtoCall.enqueue(new Callback\u0026lt;\u0026gt;() { @Override public void onResponse(Call\u0026lt;BookDto\u0026gt; call, Response\u0026lt;BookDto\u0026gt; response) { if (response.isSuccessful()) { log.info(\u0026#34;Success response : {}\u0026#34;, response.body()); } else { log.info(\u0026#34;Error response : {}\u0026#34;, response.errorBody()); } } @Override public void onFailure(Call\u0026lt;BookDto\u0026gt; call, Throwable throwable) { log.error(\u0026#34;Network error occurred : {}\u0026#34;, throwable.getLocalizedMessage()); } }); } We provide implementations of the methods of the Callback interface. The onResponse() handles valid HTTP responses (both success and error) and onFailure() handles network connectivity issues.\nWe have now covered all the basic components that will help us create a working Retrofit client in a Spring Boot application. In the next section, we will look at mocking the endpoints defined in the Retrofit client.\nMocking an OkHttp REST Client For writing unit tests, we will use the Spring Boot Test framework in combination with Mockito and Retrofit Mock. We will include the Retrofit Mock dependency with Maven:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.squareup.retrofit2\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;retrofit-mock\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.5.0\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; Gradle:\ntestImplementation group: \u0026#39;com.squareup.retrofit2\u0026#39;, name: \u0026#39;retrofit-mock\u0026#39;, version: \u0026#39;2.5.0\u0026#39; Next, we will test the service methods. Here we will focus on mocking the Retrofit client calls. 
First we will use Mockito to mock libraryClient.\n@Mock private LibraryClient libraryClient; Now, we will mock the client methods and return a static object. Further we will use retrofit-mock to wrap the response into a Call object using Calls.response. Code snippet is as shown below:\nString booksResponse = getBooksResponse(\u0026#34;/response/getAllBooks.json\u0026#34;); List\u0026lt;BookDto\u0026gt; bookDtoList = new ObjectMapper().readValue(booksResponse, new TypeReference\u0026lt;\u0026gt;(){}); when(libraryClient.getAllBooks(\u0026#34;all\u0026#34;)) .thenReturn(Calls.response(bookDtoList)); Calls.response automatically wraps the Call response as successful. To test error scenarios, we need to explicitly define okhttp3.ResponseBody with the error code and error body:\nLibResponse response = new LibResponse(Status.ERROR.toString(), \u0026#34;Could not delete book for id : 1000\u0026#34;); ResponseBody respBody = ResponseBody.create(MediaType.parse(\u0026#34;application/json\u0026#34;), new ObjectMapper().writeValueAsString(response)); Response\u0026lt;LibResponse\u0026gt; respLib = Response.error(500, respBody); when(libraryClient.deleteBook(Long.valueOf(\u0026#34;1000\u0026#34;))) .thenReturn(Calls.response(respLib)); Conclusion In this article, we introduced a Spring Boot REST client and REST server and looked at various capabilities of the Retrofit library. We took a closer look at the various components that need to be addressed to define a Retrofit client. Finally, we learned to mock the Retrofit client for unit tests. 
In conclusion, Retrofit, along with OkHttp, is an ideal library that works well with Spring and simplifies calls to a REST server.\n","date":"May 17, 2022","image":"https://reflectoring.io/images/stock/0096-tools-1200x628-branded_hue8579b2f8c415ef5a524c005489e833a_326215_650x0_resize_q90_box.jpg","permalink":"/okhttp-retrofit/","title":"Typesafe HTTP Clients with OkHttp and Retrofit"},{"categories":["Spring"],"contents":"Microservices are meant to be adaptable, scalable, and highly performant so that they can be more competitive with the other products in the market. Network communication and data flow within the microservices play a key role in achieving this speed.\nIn this tutorial, we will take a look at microservices that communicate in a blocking fashion and turn them into reactive applications to improve the flow between them.\n Example Code This article is accompanied by a working code example on GitHub. Brief Introduction to Reactive Systems Usually, data transfer between services operates in a blocking, synchronous, FIFO (first-in-first-out) pattern. This blocking methodology of data streaming often prevents a system from processing real-time data while streaming because of limited performance and speed.\nHence, a group of prominent developers realized that they would need an approach to build a “reactive” systems architecture that would ease the processing of data while streaming, and they signed a manifesto, popularly known as the Reactive Manifesto.\nThe authors of the manifesto stated that a reactive system must be asynchronous software that deals with producers who have the single responsibility to send messages to consumers. They introduced the following features to keep in mind:\n Responsive: Reactive systems must be fast and responsive so that they can provide a consistently high quality of service. Resilient: Reactive systems should be designed to anticipate system failures. 
Thus, they should remain responsive through replication and isolation. Elastic: Reactive systems must be able to shard or replicate components based upon demand. They should use predictive scaling to anticipate sudden ups and downs in their infrastructure. Message-driven: Since all the components in a reactive system are supposed to be loosely coupled, they must communicate across their boundaries by asynchronously exchanging messages.  Hence, a programming paradigm was introduced, popularly known as the Reactive Programming Paradigm. If you want to know in-depth about the various components of this paradigm, then have a look at our WebFlux article.\nIn this chapter, we are going to build a microservice architecture based upon the following design principles:\n Do one thing, and one thing well while defining service boundaries Isolate all the services Ensure that the services act autonomously Embrace asynchronous messaging between the services Stay mobile, but addressable Design for the required level of consistency  Building a Synchronous Credit Card Transaction System For this article, we are going to build a simple microservice that receives continuous credit card transactions as a data stream and takes necessary actions based on the decision of whether a particular transaction is valid or fraudulent. This architecture wouldn’t necessarily exhibit the characteristics of a reactive system. Rather, we will progressively make changes to the design so that it finally adopts reactive characteristics.\n  We will define four services:\n Banking Service - This will receive the transaction request as an API call. Then it will orchestrate and send the transaction downstream based upon various criteria to take necessary actions. User Notification Service - This will receive fraudulent transactions and notify or alert users to make them aware of the transaction attempt. 
Reporting Service - This will receive every transaction and report it, whether valid or fraudulent. It will also report to the bank and update or take necessary actions against the card or user account. Account Management Service - This will manage the user’s account and update it in case of valid transactions.  All the above calls will be synchronous and driven by the banking service. It would wait for the downstream applications to process the calls synchronously and finally update the result.\nWe will be using MongoDB and creating two collections, Transaction and User. Each transaction would contain the following information:\n Card ID - the user\u0026rsquo;s card ID with which (allegedly) a purchase was made Amount - the amount of the purchase transaction in dollars Transaction location - the country in which that purchase has been made Transaction Date Store Information Transaction ID    Banking Service We will first define a banking microservice that would receive a transaction. Based upon the transaction status, this service will act as an orchestrator to communicate between other services and take necessary actions. 
This will be a simple synchronous call that would wait until all the other services take necessary actions and update the final status.\nLet’s define a Transaction model to receive the incoming information:\n@Data @Document @ToString @NoArgsConstructor public class Transaction { @Id @JsonProperty(\u0026#34;transaction_id\u0026#34;) private String transactionId; private String date; @JsonProperty(\u0026#34;amount_deducted\u0026#34;) private double amountDeducted; @JsonProperty(\u0026#34;store_name\u0026#34;) private String storeName; @JsonProperty(\u0026#34;store_id\u0026#34;) private String storeId; @JsonProperty(\u0026#34;card_id\u0026#34;) private String cardId; @JsonProperty(\u0026#34;transaction_location\u0026#34;) private String transactionLocation; private TransactionStatus status; } Next, we will also create a User model which will have the User details and the card or Account info:\n@Data @Document @ToString @NoArgsConstructor public class User { @Id private String id; @JsonProperty(\u0026#34;first_name\u0026#34;) private String firstName; @JsonProperty(\u0026#34;last_name\u0026#34;) private String lastName; private String email; private String address; @JsonProperty(\u0026#34;home_country\u0026#34;) private String homeCountry; private String gender; private String mobile; @JsonProperty(\u0026#34;card_id\u0026#34;) private String cardId; @JsonProperty(\u0026#34;account_number\u0026#34;) private String accountNumber; @JsonProperty(\u0026#34;account_type\u0026#34;) private String accountType; @JsonProperty(\u0026#34;account_locked\u0026#34;) private boolean accountLocked; @JsonProperty(\u0026#34;fraudulent_activity_attempt_count\u0026#34;) private Long fraudulentActivityAttemptCount; @JsonProperty(\u0026#34;valid_transactions\u0026#34;) private List\u0026lt;Transaction\u0026gt; validTransactions; @JsonProperty(\u0026#34;fraudulent_transactions\u0026#34;) private List\u0026lt;Transaction\u0026gt; fraudulentTransactions; } Now let’s define a controller with a 
single endpoint:\n@Slf4j @RestController @RequestMapping(\u0026#34;/banking\u0026#34;) public class TransactionController { @Autowired private TransactionService transactionService; @PostMapping(\u0026#34;/process\u0026#34;) public ResponseEntity\u0026lt;Transaction\u0026gt; process(@RequestBody Transaction transaction) { log.info(\u0026#34;Process transaction with details: {}\u0026#34;, transaction); Transaction processed = transactionService.process(transaction); if (processed.getStatus().equals(TransactionStatus.SUCCESS)) { return ResponseEntity.ok(processed); } else { return ResponseEntity.internalServerError().body(processed); } } } And finally, a service to encapsulate the business logic and orchestrate the information to other services.\n@Slf4j @Service public class TransactionService { private static final String USER_NOTIFICATION_SERVICE_URL = \u0026#34;http://localhost:8081/notify/fraudulent-transaction\u0026#34;; private static final String REPORTING_SERVICE_URL = \u0026#34;http://localhost:8082/report/\u0026#34;; private static final String ACCOUNT_MANAGER_SERVICE_URL = \u0026#34;http://localhost:8083/banking/process\u0026#34;; @Autowired private TransactionRepository transactionRepo; @Autowired private UserRepository userRepo; @Autowired private RestTemplate restTemplate; public Transaction process(Transaction transaction) { Transaction firstProcessed; Transaction secondProcessed = null; transactionRepo.save(transaction); if (transaction.getStatus().equals(TransactionStatus.INITIATED)) { User user = userRepo.findByCardId(transaction.getCardId()); // Check whether the card details are valid or not  if (Objects.isNull(user)) { transaction.setStatus(TransactionStatus.CARD_INVALID); } // Check whether the account is blocked or not  else if (user.isAccountLocked()) { transaction.setStatus(TransactionStatus.ACCOUNT_BLOCKED); } else { // Check if it\u0026#39;s a valid transaction or not. 
The Transaction  // would be considered valid if it has been requested from  // the same home country of the user, else will be considered  // as fraudulent  if (user.getHomeCountry().equalsIgnoreCase(transaction .getTransactionLocation())) { transaction.setStatus(TransactionStatus.VALID); // Call Reporting Service to report valid transaction to bank  // and deduct amount if funds available  firstProcessed = restTemplate.postForObject(REPORTING_SERVICE_URL, transaction, Transaction.class); // Call Account Manager service to process the transaction  // and send the money  if (Objects.nonNull(firstProcessed)) { secondProcessed = restTemplate.postForObject(ACCOUNT_MANAGER_SERVICE_URL, firstProcessed, Transaction.class); } if (Objects.nonNull(secondProcessed)) { transaction = secondProcessed; } } else { transaction.setStatus(TransactionStatus.FRAUDULENT); // Call User Notification service to notify for a  // fraudulent transaction attempt from the User\u0026#39;s card  firstProcessed = restTemplate.postForObject(USER_NOTIFICATION_SERVICE_URL, transaction, Transaction.class); // Call Reporting Service to notify bank that  // there has been an attempt for fraudulent transaction  // and if this attempt exceeds 3 times then auto-block  // the card and account  if (Objects.nonNull(firstProcessed)) { secondProcessed = restTemplate.postForObject(REPORTING_SERVICE_URL, firstProcessed, Transaction.class); } if (Objects.nonNull(secondProcessed)) { transaction = secondProcessed; } } } } else { // For any other case, the transaction will be considered failure  transaction.setStatus(TransactionStatus.FAILURE); } return transactionRepo.save(transaction); } } User Notification Service User Notification Service would be responsible to notify users if there is any suspicious or fraudulent transaction attempt in the system. 
We will send a mail to the User and alert them about the fraudulent transaction.\nLet’s begin by defining a simple controller to expose an endpoint:\n@Slf4j @RestController @RequestMapping(\u0026#34;/notify\u0026#34;) public class UserNotificationController { @Autowired private UserNotificationService userNotificationService; @PostMapping(\u0026#34;/fraudulent-transaction\u0026#34;) public ResponseEntity\u0026lt;Transaction\u0026gt; notify(@RequestBody Transaction transaction) { log.info(\u0026#34;Process transaction with details and notify user: {}\u0026#34;, transaction); Transaction processed = userNotificationService.notify(transaction); if (processed.getStatus().equals(TransactionStatus.SUCCESS)) { return ResponseEntity.ok(processed); } else { return ResponseEntity.internalServerError().body(processed); } } } Next, we will define the service layer to encapsulate our logic:\n@Slf4j @Service public class UserNotificationService { @Autowired private TransactionRepository transactionRepo; @Autowired private UserRepository userRepo; @Autowired private JavaMailSender emailSender; public Transaction notify(Transaction transaction) { if (transaction.getStatus().equals(TransactionStatus.FRAUDULENT)) { User user = userRepo.findByCardId(transaction.getCardId()); // Notify user by sending email  SimpleMailMessage message = new SimpleMailMessage(); message.setFrom(\u0026#34;noreply@baeldung.com\u0026#34;); message.setTo(user.getEmail()); message.setSubject(\u0026#34;Fraudulent transaction attempt from your card\u0026#34;); message.setText(\u0026#34;An attempt has been made to pay \u0026#34; + transaction.getStoreName() + \u0026#34; from card \u0026#34; + transaction.getCardId() + \u0026#34; in the country \u0026#34; + transaction.getTransactionLocation() + \u0026#34;.\u0026#34; + \u0026#34; Please report to your bank or block your card.\u0026#34;); emailSender.send(message); transaction.setStatus(TransactionStatus.FRAUDULENT_NOTIFY_SUCCESS); } else { 
transaction.setStatus(TransactionStatus.FRAUDULENT_NOTIFY_FAILURE); } return transactionRepo.save(transaction); } } Reporting Service The Reporting Service checks whether a transaction is fraudulent and, if so, records the fraudulent attempt against the user’s account. For the safety and security of the user’s account, it may automatically lock the account after multiple attempts. If the transaction is valid, it stores the transaction information and updates the user’s account.\nLet’s define a controller to report a transaction:\n@Slf4j @RestController @RequestMapping(\u0026#34;/report\u0026#34;) public class ReportingController { @Autowired private ReportingService reportingService; @PostMapping(\u0026#34;/\u0026#34;) public ResponseEntity\u0026lt;Transaction\u0026gt; report(@RequestBody Transaction transaction) { log.info(\u0026#34;Process transaction with details: {}\u0026#34;, transaction); Transaction processed = reportingService.report(transaction); if (processed.getStatus().equals(TransactionStatus.SUCCESS)) { return ResponseEntity.ok(processed); } else { return ResponseEntity.internalServerError().body(processed); } } } Then we will define a service layer to hold our business logic:\n@Slf4j @Service public class ReportingService { @Autowired private TransactionRepository transactionRepo; @Autowired private UserRepository userRepo; public Transaction report(Transaction transaction) { if (transaction.getStatus().equals(TransactionStatus.FRAUDULENT_NOTIFY_SUCCESS) || transaction.getStatus().equals( TransactionStatus.FRAUDULENT_NOTIFY_FAILURE)) { // Report the User\u0026#39;s account and take automatic action against  // User\u0026#39;s account or card  User user = userRepo.findByCardId(transaction.getCardId()); user.setFraudulentActivityAttemptCount( user.getFraudulentActivityAttemptCount() + 1); user.setAccountLocked(user.getFraudulentActivityAttemptCount() \u0026gt; 3); user.getFraudulentTransactions().add(transaction); 
userRepo.save(user); transaction.setStatus(user.isAccountLocked() ? TransactionStatus.ACCOUNT_BLOCKED : TransactionStatus.FAILURE); } return transactionRepo.save(transaction); } } Account Management Service Finally, the Account Management Service will manage the user account and add the incoming transaction to the user’s account for further processing. It will return a message to the banking service that the transaction has been marked valid and successful.\nLet’s define a Controller first:\n@Slf4j @RestController @RequestMapping(\u0026#34;/banking\u0026#34;) public class AccountManagementController { @Autowired private AccountManagementService accountManagementService; @PostMapping(\u0026#34;/process\u0026#34;) public ResponseEntity\u0026lt;Transaction\u0026gt; manage(@RequestBody Transaction transaction) { log.info(\u0026#34;Process transaction with details: {}\u0026#34;, transaction); Transaction processed = accountManagementService.manage(transaction); if (processed.getStatus().equals(TransactionStatus.SUCCESS)) { return ResponseEntity.ok(processed); } else { return ResponseEntity.internalServerError().body(processed); } } } Finally, we will define a service layer to cover the business logic:\n@Slf4j @Service public class AccountManagementService { @Autowired private TransactionRepository transactionRepo; @Autowired private UserRepository userRepo; public Transaction manage(Transaction transaction) { if (transaction.getStatus().equals(TransactionStatus.VALID)) { transaction.setStatus(TransactionStatus.SUCCESS); transactionRepo.save(transaction); User user = userRepo.findByCardId(transaction.getCardId()); user.getValidTransactions().add(transaction); userRepo.save(user); } return transaction; } } Deploying the Application Once we have created all the individual microservices, we need to deploy and orchestrate them so that they can communicate with each other seamlessly. 
For the sake of simplicity, we have defined a Dockerfile to build each of the microservices and will use Docker Compose to build and deploy the services. Our docker-compose.yml looks like this:\nversion: \u0026#39;3\u0026#39; services: mongodb: image: mongo:5.0 ports: - 27017:27017 volumes: - ~/apps/mongo:/data/db banking-service: build: ./banking-service ports: - \u0026#34;8080:8080\u0026#34; depends_on: - mongodb - user-notification-service - reporting-service - account-management-service user-notification-service: build: ./user-notification-service ports: - \u0026#34;8081:8081\u0026#34; depends_on: - mongodb reporting-service: build: ./reporting-service ports: - \u0026#34;8082:8082\u0026#34; depends_on: - mongodb account-management-service: build: ./account-management-service ports: - \u0026#34;8083:8083\u0026#34; depends_on: - mongodb Problems With a Synchronous Architecture This is just a set of simple microservices interacting with each other, each with a distinct responsibility and role to play. Still, this is far from production-grade enterprise software. So let’s look at the problems in this architecture and discuss how we can transform it into a full-fledged reactive system.\n All the calls to the external systems and the internal embedded database are blocking in nature. When we need to handle a large stream of incoming data, most of the worker threads in each service are busy completing their tasks, while the servlet threads sit in a waiting state, so calls remain blocked until the previous ones are resolved. This slows down the overall system. A failure in any of these services could cascade and bring the entire system to a halt, which defeats the purpose of a microservice design. The present deployment is neither fault-tolerant nor able to absorb load fluctuations automatically.  
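The blocking problem described above can be sketched with plain Java. The example below is illustrative and not part of the article's codebase: the service names mirror our architecture, but the downstream calls are simulated with sleeps. A blocking orchestration holds the calling thread for the sum of both downstream latencies, while an asynchronous composition with CompletableFuture (a JDK stand-in for the reactive pipeline we build next) chains the calls without parking the caller:

```java
import java.util.concurrent.CompletableFuture;

public class BlockingVsAsync {

    // Simulated downstream call (e.g. the Reporting Service), ~100 ms latency
    static String callService(String name, String payload) {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return payload + "->" + name;
    }

    // Synchronous orchestration: the caller thread is blocked for both calls
    static String processBlocking(String txn) {
        String reported = callService("reporting", txn);
        return callService("account-manager", reported);
    }

    // Asynchronous orchestration: calls are chained, no thread waits in between
    static CompletableFuture<String> processAsync(String txn) {
        return CompletableFuture
            .supplyAsync(() -> callService("reporting", txn))
            .thenCompose(r -> CompletableFuture.supplyAsync(() -> callService("account-manager", r)));
    }

    public static void main(String[] args) {
        System.out.println(processBlocking("txn-1")); // txn-1->reporting->account-manager
        // The caller thread is free here; join() is used only to print the result
        System.out.println(processAsync("txn-2").join()); // txn-2->reporting->account-manager
    }
}
```

Under a flood of requests, the first form exhausts the thread pool; the second keeps threads available between downstream hops, which is exactly the property the reactive rewrite below aims for.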
Blocking calls in any large-scale system often become a bottleneck. This can occur with any API calls, database calls, or network calls. We must ensure that threads do not end up in a waiting state, and instead rely on an event loop to circle back once responses are received from the underlying system. So let’s try to convert this architecture to a reactive paradigm and yield better resource utilization.\nConverting to a Reactive Architecture Compared to a monolith, the overall objective of a microservice architecture is to create more isolation between the services. Isolation reduces the coupling between the services, increases stability, and helps the system become fault-tolerant on its own. Thus, reactive microservices are isolated along the following dimensions:\n State - The state of such microservices must only be accessible through their APIs; there must be no backdoor access through the database. This in turn allows the microservices to evolve internally without affecting the layers exposed outside. Space - Each microservice must be deployed independently, without caring much about the location or the deployment of the other microservices. This in turn allows each service to be scaled up or down to meet demand. Time - Reactive microservices must be strictly non-blocking and asynchronous throughout, settling for eventual consistency. Failures - A failure in one microservice must not impact the others or bring the system down; failures in remote operations must be contained.  Keeping this in mind, let’s try to convert our existing microservices to adopt reactive frameworks. We will primarily use Reactive Spring Data Mongo which provides out-of-the-box support for reactive access through MongoDB Reactive Streams. 
It provides ReactiveMongoTemplate and ReactiveMongoRepository interface for mapping functionality.\nWe will also use Spring WebFlux which provides the reactive stack web framework for Spring Boot. It brings in Reactor as its core reactive library that enables us to write non-blocking code and Reactive Streams backpressure. It also embeds WebClient which can be used in place of RestTemplate for performing non-blocking nested HTTP requests.\nThese are the dependencies we add to our pom.xml:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-webflux\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-data-mongodb-reactive\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; In comparison to the above architecture diagram, the below diagram replaces the general Spring Boot with Reactive Spring Boot and the API communication framework from RestTemplate to WebClient and Spring WebFlux. 
Even the DAO layer moves from Spring Data MongoDB to Reactive Spring Data MongoDB.\nBanking Service We will consider the same service that we defined earlier and convert the Controller implementation to emit reactive publishers:\n@Slf4j @RestController @RequestMapping(\u0026#34;/banking\u0026#34;) public class TransactionController { @Autowired private TransactionService transactionService; @PostMapping(value = \u0026#34;/process\u0026#34;, consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE) public Mono\u0026lt;Transaction\u0026gt; process(@RequestBody Transaction transaction) { log.info(\u0026#34;Process transaction with details: {}\u0026#34;, transaction); return transactionService.process(transaction); } } Next, we will update the service layer implementation to make it reactive and use WebClient to invoke other API calls:\n@Slf4j @Service public class TransactionService { private static final String USER_NOTIFICATION_SERVICE_URL = \u0026#34;http://localhost:8081/notify/fraudulent-transaction\u0026#34;; private static final String REPORTING_SERVICE_URL = \u0026#34;http://localhost:8082/report/\u0026#34;; private static final String ACCOUNT_MANAGER_SERVICE_URL = \u0026#34;http://localhost:8083/banking/process\u0026#34;; @Autowired private TransactionRepository transactionRepo; @Autowired private UserRepository userRepo; @Autowired private WebClient webClient; @Transactional public Mono\u0026lt;Transaction\u0026gt; process(Transaction transaction) { return Mono.just(transaction) .flatMap(transactionRepo::save) .flatMap(t -\u0026gt; userRepo.findByCardId(t.getCardId()) .map(u -\u0026gt; { log.info(\u0026#34;User details: {}\u0026#34;, u); if (t.getStatus().equals(TransactionStatus.INITIATED)) { // Check whether the card details are valid or not  if (Objects.isNull(u)) { t.setStatus(TransactionStatus.CARD_INVALID); } // Check whether the account is blocked or not  else if (u.isAccountLocked()) { 
t.setStatus(TransactionStatus.ACCOUNT_BLOCKED); } else { // Check if it\u0026#39;s a valid transaction or not.  // The Transaction would be considered valid  // if it has been requested from the same home  // country of the user, else will be considered  // as fraudulent  if (u.getHomeCountry() .equalsIgnoreCase(t.getTransactionLocation())) { t.setStatus(TransactionStatus.VALID); // Call Reporting Service to report valid transaction  // to bank and deduct amount if funds available  return webClient.post() .uri(REPORTING_SERVICE_URL) .contentType(MediaType.APPLICATION_JSON) .body(BodyInserters.fromValue(t)) .retrieve() .bodyToMono(Transaction.class) .zipWhen(t1 -\u0026gt; // Call Account Manager service to process  // the transaction and send the money  webClient.post() .uri(ACCOUNT_MANAGER_SERVICE_URL) .contentType(MediaType.APPLICATION_JSON) .body(BodyInserters.fromValue(t)) .retrieve() .bodyToMono(Transaction.class) .log(), (t1, t2) -\u0026gt; t2 ) .log() .share() .block(); } else { t.setStatus(TransactionStatus.FRAUDULENT); // Call User Notification service to notify  // for a fraudulent transaction  // attempt from the User\u0026#39;s card  return webClient.post() .uri(USER_NOTIFICATION_SERVICE_URL) .contentType(MediaType.APPLICATION_JSON) .body(BodyInserters.fromValue(t)) .retrieve() .bodyToMono(Transaction.class) .zipWhen(t1 -\u0026gt; // Call Reporting Service to notify bank  // that there has been an attempt for fraudulent transaction  // and if this attempt exceeds 3 times then auto-block  // the card and account  webClient.post() .uri(REPORTING_SERVICE_URL) .contentType(MediaType.APPLICATION_JSON) .body(BodyInserters.fromValue(t)) .retrieve() .bodyToMono(Transaction.class) .log(), (t1, t2) -\u0026gt; t2 ) .log() .share() .block(); } } } else { // For any other case, the transaction will be considered failure  t.setStatus(TransactionStatus.FAILURE); } return t; })); } } We are using the zipWhen() method in WebClient to make sure that once we receive a 
response from the first API call, we pick the payload and pass it to the second API. Finally, we will consider the response of the second API as the resulting response to be returned as a response for the initial API call.\nIf you want to learn more about WebClient, have a look at our article about sending requests with WebClient.\nUser Notification Service Similarly, we will make changes in the endpoint of our User Notification service:\n@Slf4j @RestController @RequestMapping(\u0026#34;/notify\u0026#34;) public class UserNotificationController { @Autowired private UserNotificationService userNotificationService; @PostMapping(\u0026#34;/fraudulent-transaction\u0026#34;) public Mono\u0026lt;Transaction\u0026gt; notify(@RequestBody Transaction transaction) { log.info(\u0026#34;Process transaction with details and notify user: {}\u0026#34;, transaction); return userNotificationService.notify(transaction); } } We will also make corresponding changes in the service layer to leverage the reactive streams implementation:\n@Slf4j @Service public class UserNotificationService { @Autowired private TransactionRepository transactionRepo; @Autowired private UserRepository userRepo; @Autowired private JavaMailSender emailSender; public Mono\u0026lt;Transaction\u0026gt; notify(Transaction transaction) { return userRepo.findByCardId(transaction.getCardId()) .map(u -\u0026gt; { if (transaction.getStatus().equals(TransactionStatus.FRAUDULENT)) { // Notify user by sending email  SimpleMailMessage message = new SimpleMailMessage(); message.setFrom(\u0026#34;noreply@baeldung.com\u0026#34;); message.setTo(u.getEmail()); message.setSubject(\u0026#34;Fraudulent transaction attempt from your card\u0026#34;); message.setText(\u0026#34;An attempt has been made to pay \u0026#34; + transaction.getStoreName() + \u0026#34; from card \u0026#34; + transaction.getCardId() + \u0026#34; in the country \u0026#34; + transaction.getTransactionLocation() + \u0026#34;.\u0026#34; + \u0026#34; Please report 
to your bank or block your card.\u0026#34;); emailSender.send(message); transaction.setStatus(TransactionStatus.FRAUDULENT_NOTIFY_SUCCESS); } else { transaction.setStatus(TransactionStatus.FRAUDULENT_NOTIFY_FAILURE); } return transaction; }) .onErrorReturn(transaction) .flatMap(transactionRepo::save); } } Reporting Service We will make similar changes in Reporting service endpoints to emit reactive publishers:\n@Slf4j @RestController @RequestMapping(\u0026#34;/report\u0026#34;) public class ReportingController { @Autowired private ReportingService reportingService; @PostMapping(\u0026#34;/\u0026#34;) public Mono\u0026lt;Transaction\u0026gt; report(@RequestBody Transaction transaction) { log.info(\u0026#34;Process transaction with details in reporting service: {}\u0026#34;, transaction); return reportingService.report(transaction); } } Similarly, we will update the service layer implementation accordingly:\n@Slf4j @Service public class ReportingService { @Autowired private TransactionRepository transactionRepo; @Autowired private UserRepository userRepo; public Mono\u0026lt;Transaction\u0026gt; report(Transaction transaction) { return userRepo.findByCardId(transaction.getCardId()) .map(u -\u0026gt; { if (transaction.getStatus().equals(TransactionStatus.FRAUDULENT) || transaction.getStatus().equals(TransactionStatus .FRAUDULENT_NOTIFY_SUCCESS) || transaction.getStatus().equals(TransactionStatus .FRAUDULENT_NOTIFY_FAILURE)) { // Report the User\u0026#39;s account and take automatic  // action against User\u0026#39;s account or card  u.setFraudulentActivityAttemptCount( u.getFraudulentActivityAttemptCount() + 1); u.setAccountLocked(u.getFraudulentActivityAttemptCount() \u0026gt; 3); List\u0026lt;Transaction\u0026gt; newList = new ArrayList\u0026lt;\u0026gt;(); newList.add(transaction); if (Objects.isNull(u.getFraudulentTransactions()) || u.getFraudulentTransactions().isEmpty()) { u.setFraudulentTransactions(newList); } else { 
u.getFraudulentTransactions().add(transaction); } } log.info(\u0026#34;User details: {}\u0026#34;, u); return u; }) .flatMap(userRepo::save) .map(u -\u0026gt; { if (!transaction.getStatus().equals(TransactionStatus.VALID)) { transaction.setStatus(u.isAccountLocked() ? TransactionStatus.ACCOUNT_BLOCKED : TransactionStatus.FAILURE); } return transaction; }) .flatMap(transactionRepo::save); } } Account Management Service Finally, we will update the Account Management service endpoints.\n@Slf4j @RestController @RequestMapping(\u0026#34;/banking\u0026#34;) public class AccountManagementController { @Autowired private AccountManagementService accountManagementService; @PostMapping(\u0026#34;/process\u0026#34;) public Mono\u0026lt;Transaction\u0026gt; manage(@RequestBody Transaction transaction) { log.info(\u0026#34;Process transaction with details in account management service: {}\u0026#34;, transaction); return accountManagementService.manage(transaction); } } Next, we will update the service layer implementation to encapsulate the business logic as per reactive design:\n@Slf4j @Service public class AccountManagementService { @Autowired private TransactionRepository transactionRepo; @Autowired private UserRepository userRepo; public Mono\u0026lt;Transaction\u0026gt; manage(Transaction transaction) { return userRepo.findByCardId(transaction.getCardId()) .map(u -\u0026gt; { if (transaction.getStatus().equals(TransactionStatus.VALID)) { List\u0026lt;Transaction\u0026gt; newList = new ArrayList\u0026lt;\u0026gt;(); newList.add(transaction); if (Objects.isNull(u.getValidTransactions()) || u.getValidTransactions().isEmpty()) { u.setValidTransactions(newList); } else { u.getValidTransactions().add(transaction); } } log.info(\u0026#34;User details: {}\u0026#34;, u); return u; }) .flatMap(userRepo::save) .map(u -\u0026gt; { if (transaction.getStatus().equals(TransactionStatus.VALID)) { transaction.setStatus(TransactionStatus.SUCCESS); } return transaction; }) 
.flatMap(transactionRepo::save); } } Using Message-driven Communication The basic problem we had was synchronous communication between microservices, which caused delays and didn\u0026rsquo;t use processor resources to full effect. Converting the simple microservices to a reactive architecture allowed them to adopt the reactive paradigm, but the communication between the services is still synchronous, because HTTP is a synchronous protocol. This kind of orchestration between microservices with reactive APIs is never easy to maintain. It\u0026rsquo;s quite error-prone, and debugging the root cause of a failure across multiple downstream applications is hard.\nSo, the final part of this solution is to make the overall communication asynchronous, which we can achieve by adopting a message-driven architecture. We will use a message broker like Apache Kafka as middleware to facilitate service-to-service communication asynchronously, triggered automatically as soon as a transaction message is published.\nWe will use the Spring Cloud Stream Kafka library in the same reactive microservices to easily configure the publish-subscribe model with Kafka. 
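Before wiring in the real broker, the decoupling that publish-subscribe buys us can be illustrated with a toy in-memory topic, sketched here with only the JDK. This is an illustration of the idea, not how Kafka or Spring Cloud Stream is implemented: the publisher fires and forgets, and every consumer group receives its own copy of each message to process at its own pace.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Toy in-memory stand-in for a Kafka topic: every subscribed consumer group
// gets its own copy of each published message, and publishing never waits
// for any consumer -- the essence of message-driven decoupling.
public class ToyTopic {

    private final Map<String, BlockingQueue<String>> groups = new ConcurrentHashMap<>();

    // Each consumer group (e.g. "reporting", "account-management") has its own queue
    public void subscribe(String group) {
        groups.putIfAbsent(group, new LinkedBlockingQueue<>());
    }

    // Fire-and-forget publish: fan the message out to every group
    public void publish(String message) {
        groups.values().forEach(queue -> queue.add(message));
    }

    // Each group drains its queue independently of the others
    public String poll(String group) {
        return groups.get(group).poll();
    }

    public static void main(String[] args) {
        ToyTopic transactions = new ToyTopic();
        transactions.subscribe("reporting");
        transactions.subscribe("account-management");
        transactions.publish("txn-1"); // the "banking service" publishes once
        // Both groups see the same message, independently of each other
        System.out.println(transactions.poll("reporting"));          // txn-1
        System.out.println(transactions.poll("account-management")); // txn-1
    }
}
```

If one consumer group is slow or down, messages simply wait in its queue; the publisher and the other groups are unaffected, which is the failure isolation we were missing in the request-response design.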
We can modify the existing pom.xml and add the following:\n\u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-dependencies\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2020.0.3\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-starter-stream-kafka\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; Next, we need to get an instance of Apache Kafka running and create a topic to publish messages. We will create a single topic named “transactions”, which different consumer groups will produce to and consume from, with each service processing the messages.\nTo integrate with Kafka through Spring Cloud Stream, we need to define the following in each microservice. 
First, we will define the Spring Cloud Kafka configuration in application.yml as follows:\n# Configure Spring specific properties spring: # Datasource Configurations data: mongodb: authentication-database: admin uri: mongodb://localhost:27017/reactive database: reactive # Kafka Configuration cloud: function: definition: consumeTransaction stream: kafka: binder: brokers: localhost:9092 autoCreateTopics: false bindings: consumeTransaction-in-0: consumer: max-attempts: 3 back-off-initial-interval: 100 destination: transactions group: account-management concurrency: 1 transaction-out-0: destination: transactions Next, we will define a Producer implementation that will help us produce messages using StreamBridge:\n@Slf4j @Service public class TransactionProducer { @Autowired private StreamBridge streamBridge; public void sendMessage(Transaction transaction) { Message\u0026lt;Transaction\u0026gt; msg = MessageBuilder.withPayload(transaction) .setHeader(KafkaHeaders.MESSAGE_KEY, transaction.getTransactionId() .getBytes(StandardCharsets.UTF_8)) .build(); log.info(\u0026#34;Transaction processed to dispatch: {}; Message dispatch successful: {}\u0026#34;, msg, streamBridge.send(\u0026#34;transaction-out-0\u0026#34;, msg)); } } Now, we will look into each microservice and define the consumer implementation that processes transaction records asynchronously and automatically as soon as a message is published to the Kafka topic.\nBanking Service First, we will define a simple listener (consumer) to process the new messages that are being published on the topic:\n@Slf4j @Configuration public class TransactionConsumer { @Bean public Consumer\u0026lt;Transaction\u0026gt; consumeTransaction( TransactionService transactionService) { return transactionService::asyncProcess; } } Next, we will define our service layer, which will process the record, set the transaction status, and produce it back to the Kafka topic.\n@Slf4j @Service 
public class TransactionService { @Autowired private TransactionRepository transactionRepo; @Autowired private UserRepository userRepo; @Autowired TransactionProducer producer; public void asyncProcess(Transaction transaction) { userRepo.findByCardId(transaction.getCardId()) .map(u -\u0026gt; { if (transaction.getStatus().equals(TransactionStatus.INITIATED)) { log.info(\u0026#34;Consumed message for processing: {}\u0026#34;, transaction); log.info(\u0026#34;User details: {}\u0026#34;, u); // Check whether the card details are valid or not  if (Objects.isNull(u)) { transaction.setStatus(TransactionStatus.CARD_INVALID); } // Check whether the account is blocked or not  else if (u.isAccountLocked()) { transaction.setStatus(TransactionStatus.ACCOUNT_BLOCKED); } else { // Check if it\u0026#39;s a valid transaction or not. The Transaction  // would be considered valid if it has been requested from  // the same home country of the user, else will be considered  // as fraudulent  if (u.getHomeCountry().equalsIgnoreCase( transaction.getTransactionLocation())) { transaction.setStatus(TransactionStatus.VALID); } else { transaction.setStatus(TransactionStatus.FRAUDULENT); } } producer.sendMessage(transaction); } return transaction; }) .filter(t -\u0026gt; t.getStatus().equals(TransactionStatus.VALID) || t.getStatus().equals(TransactionStatus.FRAUDULENT) || t.getStatus().equals(TransactionStatus.CARD_INVALID) || t.getStatus().equals(TransactionStatus.ACCOUNT_BLOCKED) ) .flatMap(transactionRepo::save) .subscribe(); } } User Notification Service The listener or the consumer logic in the User Notification or any other service can be written similarly as above. 
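Stripped of the Spring Cloud Stream machinery, such a listener is just a java.util.function.Consumer delegating to the service's asyncProcess method, exactly as the banking service's TransactionConsumer bean does. The plain-Java sketch below shows that wiring; the Transaction stand-in and statuses are simplified from the article's model, and the class names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ConsumerWiring {

    // Minimal stand-in for the article's Transaction document
    static class Transaction {
        final String id;
        String status;
        Transaction(String id, String status) {
            this.id = id;
            this.status = status;
        }
    }

    // Stand-in for UserNotificationService.asyncProcess: flips the status
    // the way the real service marks a fraudulent transaction as notified
    static class UserNotificationService {
        final List<Transaction> handled = new ArrayList<>();
        void asyncProcess(Transaction t) {
            if ("FRAUDULENT".equals(t.status)) {
                t.status = "FRAUDULENT_NOTIFY_SUCCESS";
            }
            handled.add(t);
        }
    }

    // The equivalent of the @Bean method: expose the service method as a Consumer
    static Consumer<Transaction> consumeTransaction(UserNotificationService svc) {
        return svc::asyncProcess;
    }

    public static void main(String[] args) {
        UserNotificationService svc = new UserNotificationService();
        Consumer<Transaction> binding = consumeTransaction(svc);
        // The framework would invoke the binding for each record on the topic
        binding.accept(new Transaction("txn-1", "FRAUDULENT"));
        System.out.println(svc.handled.get(0).status); // FRAUDULENT_NOTIFY_SUCCESS
    }
}
```

Spring Cloud Stream matches the bean name `consumeTransaction` to the `consumeTransaction-in-0` binding in application.yml and calls it for every record, so each service only has to supply its own asyncProcess logic.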
We will look into the service layer implementation for this service:\n@Slf4j @Service public class UserNotificationService { @Autowired private TransactionRepository transactionRepo; @Autowired private UserRepository userRepo; @Autowired private JavaMailSender emailSender; @Autowired private TransactionProducer producer; public void asyncProcess(Transaction transaction) { userRepo.findByCardId(transaction.getCardId()) .map(u -\u0026gt; { if (transaction.getStatus().equals(TransactionStatus.FRAUDULENT)) { try { // Notify user by sending email  SimpleMailMessage message = new SimpleMailMessage(); message.setFrom(\u0026#34;noreply@baeldung.com\u0026#34;); message.setTo(u.getEmail()); message.setSubject(\u0026#34;Fraudulent transaction attempt from your card\u0026#34;); message.setText(\u0026#34;An attempt has been made to pay \u0026#34; + transaction.getStoreName() + \u0026#34; from card \u0026#34; + transaction.getCardId() + \u0026#34; in the country \u0026#34; + transaction.getTransactionLocation() + \u0026#34;.\u0026#34; + \u0026#34; Please report to your bank or block your card.\u0026#34;); emailSender.send(message); transaction.setStatus(TransactionStatus.FRAUDULENT_NOTIFY_SUCCESS); } catch (MailException e) { transaction.setStatus(TransactionStatus.FRAUDULENT_NOTIFY_FAILURE); } } return transaction; }) .onErrorReturn(transaction) .filter(t -\u0026gt; t.getStatus().equals(TransactionStatus.FRAUDULENT) || t.getStatus().equals(TransactionStatus.FRAUDULENT_NOTIFY_SUCCESS) || t.getStatus().equals(TransactionStatus.FRAUDULENT_NOTIFY_FAILURE) ) .map(t -\u0026gt; { producer.sendMessage(t); return t; }) .flatMap(transactionRepo::save) .subscribe(); } } Reporting Service Next, we will take a look into the service layer implementation for the Reporting Service:\n@Slf4j @Service public class ReportingService { @Autowired private TransactionRepository transactionRepo; @Autowired private UserRepository userRepo; @Autowired private TransactionProducer producer; public void 
asyncProcess(Transaction transaction) { userRepo.findByCardId(transaction.getCardId()) .map(u -\u0026gt; { if (transaction.getStatus().equals(TransactionStatus.FRAUDULENT) || transaction.getStatus().equals(TransactionStatus .FRAUDULENT_NOTIFY_SUCCESS) || transaction.getStatus().equals(TransactionStatus .FRAUDULENT_NOTIFY_FAILURE)) { // Report the User\u0026#39;s account and take automatic  // action against User\u0026#39;s account or card  u.setFraudulentActivityAttemptCount( u.getFraudulentActivityAttemptCount() + 1); u.setAccountLocked(u.getFraudulentActivityAttemptCount() \u0026gt; 3); List\u0026lt;Transaction\u0026gt; newList = new ArrayList\u0026lt;\u0026gt;(); newList.add(transaction); if (Objects.isNull(u.getFraudulentTransactions()) || u.getFraudulentTransactions().isEmpty()) { u.setFraudulentTransactions(newList); } else { u.getFraudulentTransactions().add(transaction); } } log.info(\u0026#34;User details: {}\u0026#34;, u); return u; }) .flatMap(userRepo::save) .map(u -\u0026gt; { if (!transaction.getStatus().equals(TransactionStatus.VALID)) { transaction.setStatus(u.isAccountLocked() ? 
TransactionStatus.ACCOUNT_BLOCKED : TransactionStatus.FAILURE); producer.sendMessage(transaction); } return transaction; }) .filter(t -\u0026gt; t.getStatus().equals(TransactionStatus.FAILURE) || t.getStatus().equals(TransactionStatus.ACCOUNT_BLOCKED) ) .flatMap(transactionRepo::save) .subscribe(); } } Account Management Service Finally, we will implement the service layer implementation for the Account Management service:\n@Slf4j @Service public class AccountManagementService { @Autowired private TransactionRepository transactionRepo; @Autowired private UserRepository userRepo; @Autowired private TransactionProducer producer; public void asyncProcess(Transaction transaction) { userRepo.findByCardId(transaction.getCardId()) .map(u -\u0026gt; { if (transaction.getStatus().equals(TransactionStatus.VALID)) { List\u0026lt;Transaction\u0026gt; newList = new ArrayList\u0026lt;\u0026gt;(); newList.add(transaction); if (Objects.isNull(u.getValidTransactions()) || u.getValidTransactions().isEmpty()) { u.setValidTransactions(newList); } else { u.getValidTransactions().add(transaction); } } log.info(\u0026#34;User details: {}\u0026#34;, u); return u; }) .flatMap(userRepo::save) .map(u -\u0026gt; { if (transaction.getStatus().equals(TransactionStatus.VALID)) { transaction.setStatus(TransactionStatus.SUCCESS); producer.sendMessage(transaction); } return transaction; }) .filter(t -\u0026gt; t.getStatus().equals(TransactionStatus.VALID) || t.getStatus().equals(TransactionStatus.SUCCESS) ) .flatMap(transactionRepo::save) .subscribe(); } } These consumer implementations are sufficient enough to achieve asynchronous communications within the applications. 
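The account-locking rule used by the Reporting Service above (lock the account once the fraudulent-attempt count exceeds three) can also be isolated as a small, testable function. A minimal sketch with illustrative names; the threshold comes from the article's `u.getFraudulentActivityAttemptCount() > 3` check:

```java
// Illustrative sketch of the Reporting Service rule: each fraudulent attempt
// increments a counter, and the account is locked once the count exceeds 3.
public class FraudPolicy {

    static final int MAX_FRAUD_ATTEMPTS = 3;

    static boolean shouldLock(int previousAttempts) {
        int newCount = previousAttempts + 1; // this fraudulent attempt
        return newCount > MAX_FRAUD_ATTEMPTS;
    }

    public static void main(String[] args) {
        System.out.println(shouldLock(2)); // false: 3rd attempt, still within the limit
        System.out.println(shouldLock(3)); // true: 4th attempt locks the account
    }
}
```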
Note that this asynchronous choreography results in much simpler code than the orchestrated implementation that we had seen above.\nDeploying the Message-driven System Now that we have implemented all the services, we will containerize them with Docker and manage the dependencies between them using Docker Compose. We can define a Dockerfile for each microservice, build its jar, and bundle it into the image. A simple Dockerfile would look something like this:\nFROM openjdk:8-jdk-alpine COPY target/banking-service-0.0.1-SNAPSHOT.jar app.jar ENTRYPOINT [\u0026#34;java\u0026#34;,\u0026#34;-jar\u0026#34;,\u0026#34;/app.jar\u0026#34;] Then we can update our previously created docker-compose.yml with all the images. That would manage the dependencies between each microservice and orchestrate the overall communication with a single command:\ndocker-compose up The final docker-compose.yml looks like this:\nversion: \u0026#39;3\u0026#39; services: zookeeper: image: wurstmeister/zookeeper ports: - \u0026#34;2181:2181\u0026#34; kafka: image: wurstmeister/kafka ports: - \u0026#34;9092:9092\u0026#34; depends_on: - zookeeper environment: KAFKA_ADVERTISED_HOST_NAME: 10.204.106.55 KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181 ALLOW_PLAINTEXT_LISTENER: \u0026#34;yes\u0026#34; KAFKA_CFG_LOG_DIRS: /tmp/kafka_mounts/logs KAFKA_CREATE_TOPICS: \u0026#34;transactions:1:2\u0026#34; volumes: - /var/run/docker.sock:/var/run/docker.sock kafka-ui: image: provectuslabs/kafka-ui container_name: kafka-ui ports: - \u0026#34;8090:8080\u0026#34; depends_on: - zookeeper - kafka restart: always environment: - KAFKA_CLUSTERS_0_NAME=local - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092 - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181 mongodb: image: mongo:latest ports: - \u0026#34;27017:27017\u0026#34; volumes: - ~/apps/mongo:/data/db banking-service: build: ./banking-service ports: - \u0026#34;8080:8080\u0026#34; depends_on: - zookeeper - 
kafka - mongodb - user-notification-service - reporting-service - account-management-service user-notification-service: build: ./user-notification-service ports: - \u0026#34;8081:8081\u0026#34; depends_on: - zookeeper - kafka - mongodb reporting-service: build: ./reporting-service ports: - \u0026#34;8082:8082\u0026#34; depends_on: - zookeeper - kafka - mongodb account-management-service: build: ./account-management-service ports: - \u0026#34;8083:8083\u0026#34; depends_on: - zookeeper - kafka - mongodb Let’s look into all the components and their orchestration steps to deploy everything in our Docker environment.\n Zookeeper: This is required to run a Kafka instance to manage the brokers as well as consumers. Kafka: This is to run the Kafka broker and is dependent on Zookeeper to get started before it starts. Kafka UI: This is an optional User Interface for Kafka to create or manage topics or brokers through UI. It is dependent on Zookeeper and Kafka. MongoDB: This is required for our microservices to store and retrieve data into the database. Banking Service: This is the first point-of-contact microservice and would be dependent on all the other microservices, Kafka and MongoDB before it can start. User Notification Service: This is dependent on Kafka and MongoDB. Reporting Service: This is also dependent on Kafka and MongoDB. Account Management Service: This as well is dependent on Kafka and MongoDB before it can start.  Evaluating the Reactive Microservice Architecture Now since we have completed the overall architecture let’s review and evaluate what we have built until now against the Reactive Manifesto and its four core features.\n Responsive - Once we had adapted the reactive programming paradigm into our microservices, it has helped us to achieve an end-to-end non-blocking system which in turn proved to be a pretty responsive application. Resilient - The isolation of microservices provides a good amount of resiliency against various failures in the system. 
More resiliency can be achieved by moving this deployment to Kubernetes and defining a ReplicaSet with the desired number of pods. Elastic - The reactive Spring Boot services are already capable of handling a good amount of load. Moving this system to Kubernetes or a cloud-managed service can easily provide elasticity against unpredictable traffic loads. Message-driven - We have added a message broker, Kafka, as a middleware system to handle asynchronous communication between the services.  This brings an end to our discussion of the need for a Reactive Architecture. While this looks quite promising, there is still scope for improvement, for example by replacing Docker Compose with a Kubernetes cluster and its resources. It may also be quite difficult to manage so many components along with their resiliency and traffic load. A managed cloud infrastructure can help to operate each of these services or components and provide the necessary guarantees.\nConclusion In this tutorial, we took a deep dive into the basics of, and the need for, a reactive system. We gradually built a microservice and adapted it to a reactive design and programming paradigm. We then converted it to an asynchronous, message-driven architecture using Kafka. 
Lastly, we evaluated the resultant architecture to see if it adheres to the standards of the Reactive Manifesto.\nThis article not only introduces us to all the tools, frameworks, or patterns which can help us to create a reactive system but also introduces us to the journey towards the Reactive world.\nYou can refer to all the source code used in the article on Github.\n","date":"May 8, 2022","image":"https://reflectoring.io/images/stock/0122-newton-1200x628_hu6dcd177a02916781f050f32188d7d34e_91088_650x0_resize_q90_box.jpg","permalink":"/reactive-architecture-with-spring-boot/","title":"Reactive Architecture with Spring Boot"},{"categories":["java"],"contents":"A stream is a sequence of elements on which we can perform different kinds of sequential and parallel operations. The Stream API was introduced in Java 8 and is used to process collections of objects. Unlike collections, a Java stream is not a data structure instead it takes input from Collections, Arrays, or I/O channels (like files).\nThe operations in a stream use internal iteration for processing the elements of a stream. This capability helps us to get rid of verbose constructs like while, for, and forEach loops.\nIn this tutorial, we will work with the different classes and interfaces of the Java Stream API and understand the usage of the various operations that we can perform on Java Streams.\n Example Code This article is accompanied by a working code example on GitHub. Creating a Stream from a Source The java.util.stream package contains the interfaces and classes to support functional-style operations on streams of elements. 
In addition to the Stream interface, which is a stream of object references, there are primitive specializations like IntStream, LongStream, and DoubleStream.\nWe can obtain streams in several ways from different types of data sources:\nObtaining Stream From an Array We can obtain a stream from an array using the stream() method of the Arrays class:\npublic class StreamingApp { public void createStreamFromArray() { double[] elements = {3.0, 4.5, 6.7, 2.3}; DoubleStream stream = Arrays.stream(elements); stream.forEach(logger::info); } } In this example, we are creating a stream of double elements from an array and printing them by calling a forEach() function on the stream.\nObtaining Stream From a Collection We can obtain a stream from a collection using the stream() and parallelStream() methods:\npublic class StreamingApp { public void createStreamFromCollection() { Double[] elements = {3.0, 4.5, 6.7, 2.3}; List\u0026lt;Double\u0026gt; elementsInCollection = Arrays.asList(elements); Stream\u0026lt;Double\u0026gt; stream = elementsInCollection.stream(); Stream\u0026lt;Double\u0026gt; parallelStream = elementsInCollection.parallelStream(); stream.forEach(logger::info); parallelStream.forEach(logger::info); } } Here we are creating two streams of double elements using the stream() and parallelStream() methods from a collection of type List and printing them by calling a forEach() function on the streams. 
The elements in the stream object are processed in serial while those in the object parallelStream will be processed in parallel.\nWe will understand parallel streams in a subsequent section.\nObtaining Stream From Static Factory Methods on the Stream Classes We can construct a stream by calling static factory methods on the stream classes as shown in this example:\npublic class StreamingApp { public void createStreams() { Stream\u0026lt;Integer\u0026gt; stream = Stream.of(3, 4, 6, 2); IntStream integerStream = IntStream.of(3, 4, 6, 2); LongStream longStream = LongStream.of(3l, 4l, 6l, 2l); DoubleStream doubleStream = DoubleStream.of(3.0, 4.5, 6.7, 2.3); } } In this example, we are creating streams of integer, long, and double elements using the static factory method of() on the Stream classes. We have also used the different types of Streams starting with the Stream abstraction followed by the primitive specializations: IntStream, LongStream, and DoubleStream.\nObtaining Stream From Files The lines of a file can be obtained from Files.lines() as shown in this example:\nimport java.util.stream.Stream; public class StreamingApp { public void readFromFile(final String filePath) { try (Stream\u0026lt;String\u0026gt; lines = Files.lines(Paths.get(filePath));){ lines.forEach(logger::info); } catch (IOException e) { logger.info(\u0026#34;i/o error \u0026#34; + e); } } } Here we are getting the lines from a file in a stream using the lines() method in the Files class. We have put this statement in a try-with-resources statement which will close the stream after use.\nStreams have a BaseStream.close() method and implement AutoCloseable. 
Only streams whose source is an IO channel (such as those returned by Files.lines(Path) as in this example) will require closing.\nMost streams are backed by collections, arrays, or generating functions and do not need to be closed after use.\nTypes of Operations on Streams The operations that we can perform on a stream are broadly categorized into two types:\n  Intermediate operations: Intermediate operations transform one stream into another stream. An example of an intermediate operation is map(), which transforms one element into another by applying a mapping function to each element.\n  Terminal operations: Terminal operations are applied on a stream to get a single result such as a primitive, an object, or a collection, or they may not return anything. An example of a terminal operation is count(), which counts the total number of elements in a stream.\n  Let us look at the different intermediate and terminal operations in the subsequent sections. We have grouped these operations into the following categories:\n Mapping Operations: These are intermediate operations and transform each element of a stream by applying a function and putting them in a new stream for further processing. Ordering Operations: These operations include methods for ordering the elements in a stream. Matching and Filtering Operations: Matching operations help to validate elements of a stream with a specified condition while filtering operations allow us to filter elements based on specific criteria. Reduction Operations: Reduction operations evaluate the elements of a stream to return a single result.  
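This distinction matters in practice because intermediate operations are lazy: nothing is evaluated until a terminal operation runs. A small self-contained sketch demonstrating this with an invocation counter (the class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

// Demonstrates that an intermediate operation (map) runs only when a
// terminal operation (forEach) is invoked on the stream.
public class LazyStreams {

    static int mappedElementCount() {
        AtomicInteger invocations = new AtomicInteger();
        Stream<String> stream = Stream.of("a", "b", "c")
                .map(s -> {
                    invocations.incrementAndGet(); // side effect to observe evaluation
                    return s.toUpperCase();
                });
        // map() has not run yet: the pipeline is only a description so far
        int before = invocations.get();   // 0
        stream.forEach(s -> { });         // terminal operation triggers evaluation
        int after = invocations.get();    // 3
        return before * 100 + after;      // encodes both observations in one value
    }

    public static void main(String[] args) {
        System.out.println(mappedElementCount()); // prints 3 (0 before, 3 after)
    }
}
```

Note that forEach() is used deliberately as the terminal operation here: on recent JDKs, count() may skip the mapping step entirely when the stream's size is already known.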
Stream Mapping Operations Mapping operations are intermediate operations and transform each element of a stream with the help of a mapping function:\nmap() Operation The map() operation takes a function as an input and returns a stream consisting of the results of applying the supplied function to each element of the stream.\nIn this example, we are applying the map() operation on a stream of category names and passing an input function that maps each category name to a numeric category code:\npublic class StreamingApp { public void mapStream() { // Stream of category names  Stream\u0026lt;String\u0026gt; productCategories = Stream.of(\u0026#34;washing machine\u0026#34;, \u0026#34;Television\u0026#34;, \u0026#34;Laptop\u0026#34;, \u0026#34;grocery\u0026#34;, \u0026#34;essentials\u0026#34;); List\u0026lt;String\u0026gt; categoryCodes = productCategories.map( // mapping function: map category name to code  element-\u0026gt;{ String code = null; switch (element) { case \u0026#34;washing machine\u0026#34; : code = \u0026#34;1\u0026#34;; break; case \u0026#34;Television\u0026#34; : code = \u0026#34;2\u0026#34;; break; case \u0026#34;Laptop\u0026#34; : code = \u0026#34;3\u0026#34;; break; case \u0026#34;grocery\u0026#34; : code = \u0026#34;4\u0026#34;; break; case \u0026#34;essentials\u0026#34; : code = \u0026#34;5\u0026#34;; break; default : code = \u0026#34;6\u0026#34;; } return code; } ).collect(Collectors.toList()); categoryCodes.forEach(logger::info); } } Here in the mapping function supplied as input, we are converting each category name to a category code which is a numeric value so that the map() operation on the stream returns a stream of category codes. 
Then we apply the collect() function to convert the stream to a collection.\nWe will understand the collect() function in a subsequent section.\nWhen we run this program, we will get a collection of category codes: 1, 2, 3, 4, and 5.\nflatMap() Operation We should use the flatMap() method if we have a stream where every element has its own sequence of elements and we want to create a single stream of these inner elements:\npublic class StreamingApp { public void flatmapStream() { List\u0026lt;List\u0026lt;String\u0026gt;\u0026gt; productByCategories = Arrays.asList( Arrays.asList(\u0026#34;washing machine\u0026#34;, \u0026#34;Television\u0026#34;), Arrays.asList(\u0026#34;Laptop\u0026#34;, \u0026#34;Camera\u0026#34;, \u0026#34;Watch\u0026#34;), Arrays.asList(\u0026#34;grocery\u0026#34;, \u0026#34;essentials\u0026#34;)); List\u0026lt;String\u0026gt; products = productByCategories .stream() .flatMap(Collection::stream) .collect(Collectors.toList()); logger.info(\u0026#34;flattened elements::\u0026#34; + products); } } In this example, each element of the stream is a list. We apply the flatMap() operation to get a list of all the inner elements as shown in this output:\nINFO: flattened elements::[washing machine, Television, Laptop, Camera, Watch, grocery, essentials] Ordering Operations Ordering operations on a stream include:\n sorted(), which sorts the stream elements according to their natural order, and an overloaded method sorted(comparator), which sorts the stream elements according to a provided Comparator instance.  
public class StreamOrderingApp { private final Logger logger = Logger.getLogger( StreamOrderingApp.class.getName()); public void sortElements() { Stream\u0026lt;Integer\u0026gt; productCategories = Stream.of(4,15,8,7,9,10); Stream\u0026lt;Integer\u0026gt; sortedStream = productCategories.sorted(); sortedStream.forEach(logger::info); } public void sortElementsWithComparator() { Stream\u0026lt;Integer\u0026gt; productCategories = Stream.of(4,15,8,7,9,10); Stream\u0026lt;Integer\u0026gt; sortedStream = productCategories .sorted((o1, o2) -\u0026gt; o2 - o1); sortedStream.forEach(logger::info); } } In the sortElements() function we are sorting the integer elements in their natural order. In the sortElementsWithComparator() function we are sorting the integer elements by using a Comparator function to sort them in descending order.\nComparator is a functional interface that is used to provide an ordering for a collection of objects. It takes two arguments for comparison and returns a negative, zero, or a positive integer. More details on the Comparator can be found in the official Java documentation.\nBoth methods are intermediate operations so we still need to call a terminal operation to trigger the sorting. In this example, we are calling the terminal operation: forEach() to trigger the sort.\nMatching and Filtering Operations The Stream interface provides methods to detect whether the elements of a stream comply with a condition (called the predicate) specified as input. 
All of these methods are terminal operations that return a boolean.\nanyMatch() Operation With the anyMatch() operation, we determine whether any of the elements comply with the condition specified as the predicate as shown in this example:\npublic class StreamMatcherApp { private final Logger logger = Logger.getLogger(StreamMatcherApp.class.getName()); public void findAnyMatch(){ Stream\u0026lt;String\u0026gt; productCategories = Stream.of( \u0026#34;washing machine\u0026#34;, \u0026#34;Television\u0026#34;, \u0026#34;Laptop\u0026#34;, \u0026#34;grocery\u0026#34;, \u0026#34;essentials\u0026#34;); boolean isPresent = productCategories .anyMatch(e-\u0026gt;e.equals(\u0026#34;Laptop\u0026#34;)); logger.info(\u0026#34;isPresent::\u0026#34;+isPresent); } } Here we are checking whether the stream contains an element with the value Laptop. Since one of the values in the stream is Laptop, we get the result of the anyMatch() operation as true.\nWe would have received a false result if we had checked for a value that is not present in the stream, for example e-\u0026gt;e.equals(\u0026quot;Shoes\u0026quot;) in our predicate function.\nallMatch() Operation With the allMatch() operation, we determine whether all of the elements comply with the condition specified as the predicate as shown in this example:\npublic class StreamMatcherApp { private final Logger logger = Logger .getLogger(StreamMatcherApp.class.getName()); public void findAllMatch(){ Stream\u0026lt;Integer\u0026gt; productCategories = Stream.of(4,5,7,9,10); boolean allElementsMatch = productCategories.allMatch(e-\u0026gt;e \u0026lt; 11); logger.info(\u0026#34;allElementsMatch::\u0026#34; + allElementsMatch); } } The result of applying the allMatch() function will be true since all the elements in the stream satisfy the condition in the predicate function: e \u0026lt; 11.\nnoneMatch() Operation With the noneMatch() operation, we determine whether none of the elements comply with the condition specified as the predicate as shown 
in this example:\npublic class StreamMatcherApp { private final Logger logger = Logger .getLogger(StreamMatcherApp.class.getName()); public void findNoneMatch(){ Stream\u0026lt;Integer\u0026gt; productCategories = Stream.of(4,5,7,9,10); boolean noElementsMatch = productCategories.noneMatch(e-\u0026gt;e \u0026lt; 4); logger.info(\u0026#34;noElementsMatch::\u0026#34;+noElementsMatch); } } The result of applying the noneMatch() function will be true since none of the elements in the stream satisfy the condition in the predicate function: e \u0026lt; 4.\nfilter() Operation filter() is an intermediate operation of the Stream interface that allows us to filter elements of a stream that match a given condition (known as predicate).\npublic class StreamingApp { public void processStream() { Double[] elements = {3.0, 4.5, 6.7, 2.3}; Stream\u0026lt;Double\u0026gt; stream = Stream.of(elements); stream .filter(e-\u0026gt;e \u0026gt; 3 ) .forEach(logger::info); } } Here we are applying the filter operation on the stream to get a stream filled with elements that are greater than 3.\nfindFirst() and findAny() Operations findFirst() returns an Optional for the first entry in the stream:\npublic class StreamingApp { public void findFromStream() { Stream\u0026lt;String\u0026gt; productCategories = Stream.of( \u0026#34;washing machine\u0026#34;, \u0026#34;Television\u0026#34;, \u0026#34;Laptop\u0026#34;, \u0026#34;grocery\u0026#34;, \u0026#34;essentials\u0026#34;); Optional\u0026lt;String\u0026gt; category = productCategories.findFirst(); if(category.isPresent()) logger.info(category.get()); } } findAny() is a similar method using which we can find any element from a Stream. We should use this method when we are looking for an element irrespective of the position of the element in the stream.\nThe behavior of the findAny() operation is explicitly nondeterministic since it is free to select any element in the stream. 
Multiple invocations on the same source may not return the same result. We should use the findFirst() method if a stable result is desired.\nReduction Operations The Stream class has many terminal operations (such as average, sum, min, max, and count) that return one value by combining the contents of a stream. These operations are called reduction operations. The Stream API also contains reduction operations that return a collection instead of a single value.\nMany reduction operations perform a specific task, such as finding the average of values or grouping elements into categories. The Stream API provides two general-purpose reduction operations: reduce() and collect() as explained below:\nreduce() Operation The reduce() method is a general-purpose reduction operation that enables us to produce a single result by repeatedly applying a function to a sequence of elements from a stream. This method has three overloaded signatures, the first of which looks like this:\nOptional\u0026lt;T\u0026gt; reduce(BinaryOperator\u0026lt;T\u0026gt; accumulator); This signature takes the accumulator function as an input and returns an Optional describing the reduced value. The accumulator function takes two parameters: a partial result of the reduction operation and the next element of the stream.\nHere is an example of a reduce() operation that concatenates the elements of a string array:\npublic class StreamingApp { public void joinString(final String separator){ String[] strings = {\u0026#34;a\u0026#34;, \u0026#34;b\u0026#34;, \u0026#34;c\u0026#34;, \u0026#34;d\u0026#34;, \u0026#34;e\u0026#34;}; String joined = Arrays.stream(strings) .reduce((a, b) -\u0026gt; a + separator + b) .orElse(\u0026#34;\u0026#34;); logger.info(joined); } } Here we are passing an accumulator function to the reduce() operation, which concatenates each partial result with the next element using the separator passed as a method parameter. Since this signature returns an Optional, we unwrap the result with orElse() to handle an empty stream. 
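The same concatenation can also be written with the identity-based reduce() overload, which returns a plain String with no Optional to unwrap. A minimal, self-contained sketch (the class and method names are illustrative):

```java
import java.util.Arrays;

// Joins an array of strings with a separator using the identity-based reduce().
// The identity "" seeds the reduction, so the result is a plain String;
// the check against the identity avoids a leading separator.
public class ReduceJoin {

    static String join(String separator, String... strings) {
        return Arrays.stream(strings)
                .reduce("", (a, b) -> "".equals(a) ? b : a + separator + b);
    }

    public static void main(String[] args) {
        System.out.println(join("-", "a", "b", "c")); // prints a-b-c
        System.out.println(join(","));                // prints the empty identity ""
    }
}
```

Note the simplification: the identity check here would misbehave if an element itself were the empty string, which is one reason the Optional-returning overload (or String.join()) is often preferable.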
Please note that the String class already provides a join() method for joining strings:\nString joined = String.join(separator, strings); There are two more overloaded methods of reduce with the below signatures:\nT reduce(T identity, BinaryOperator\u0026lt;T\u0026gt; accumulator); \u0026lt;U\u0026gt; U reduce(U identity, BiFunction\u0026lt;U,? super T,U\u0026gt; accumulator, BinaryOperator\u0026lt;U\u0026gt; combiner); The first overloaded method takes an identity value and an accumulator as input parameters. The second overloaded method signature takes the below input parameters:\n identity: the default or initial value. accumulator: a functional interface that takes two inputs: a partial result of the reduction operation and the next element of the stream. combiner: a stateless function for combining two values, which must be compatible with the accumulator function.  Here is an example of a reduce() operation that adds the elements of a stream:\npublic class StreamingApp { public void sumElements(){ int[] numbers = {5, 2, 8, 4, 55, 9}; int sum = Arrays.stream(numbers) .reduce(0, (a, b) -\u0026gt; a + b); logger.info(\u0026#34;sum: \u0026#34; + sum); } } Here we have used an initial value of 0 as the first parameter of the reduce() operation and provided an accumulator function to add the elements of the stream.\ncollect() Operation The collect() operation seen in an earlier example is another commonly used reduction operation to get the elements from a stream after completing all the processing:\npublic class StreamingApp { public void collectFromStream() { List\u0026lt;String\u0026gt; productCategories = Stream.of( \u0026#34;washing machine\u0026#34;, \u0026#34;Television\u0026#34;, \u0026#34;Laptop\u0026#34;, \u0026#34;grocery\u0026#34;, \u0026#34;essentials\u0026#34;) .collect(Collectors.toList()); productCategories.forEach(logger::info); } } In this example, we are collecting the elements of the stream into a list by using the collect() method on the stream before 
printing each element of the list.\nSpecialized Reduction Functions The Stream interface provides reduction operations that perform a specific task like finding the average, sum, minimum, and maximum of values present in a stream:\npublic class ReduceStreamingApp { public void aggregateElements(){ int[] numbers = {5, 2, 8, 4, 55, 9}; int sum = Arrays.stream(numbers).sum(); OptionalInt max = Arrays.stream(numbers).max(); OptionalInt min = Arrays.stream(numbers).min(); long count = Arrays.stream(numbers).count(); OptionalDouble average = Arrays.stream(numbers).average(); } } In this example, we have used the reduction operations: sum(), min(), max(), count(), and average() on the elements of a stream.\nChaining Stream Operations in a Pipeline Operations on streams are commonly chained together to form a pipeline to execute specific use cases as shown in this code snippet:\npublic class StreamingApp { public void processStream() { Double[] elements = {3.0, 4.5, 6.7, 2.3}; Stream\u0026lt;Double\u0026gt; stream = Stream.of(elements); // Pipeline of stream operations  long numberOfElements = stream .map(e-\u0026gt;e.intValue()) .filter(e-\u0026gt;e \u0026gt; 3) .count(); } } In this example, we are counting the number of elements that are bigger than 3. To get that count, we have created a pipeline of two intermediate operations, map() and filter(), chained together with a terminal operation, count().\nAs we can see in the example, intermediate operations are present in the middle of the pipeline while terminal operations are attached to the end of the pipeline.\nIntermediate operations are lazily evaluated and executed only when a terminal operation is called on the stream.\nHandling Nullable Streams In some earlier examples, we used the static factory method of Stream: Stream.of() to create a stream with elements. We will get a NullPointerException if the supplied value is null. 
The ofNullable method was introduced in Java 9 to mitigate this behavior.\nThe ofNullable method creates a Stream with the supplied elements and if the value is null, an empty Stream is created as shown in this example:\npublic class StreamingApp { public void createFromNullable() { Stream\u0026lt;String\u0026gt; productCategories = Stream.ofNullable(null); long count = productCategories.count(); logger.info(\u0026#34;size==\u0026#34;+count); } } The ofNullable method returns an empty stream. So we get a value of 0 for the count() operation instead of a NullPointerException.\nUnbounded/Infinite Streams The examples we used so far operated on the finite streams of elements generated from an array or collection. Infinite streams are sequential unordered streams with an unending sequence of elements.\ngenerate() Operation The generate() method returns an infinite sequential unordered stream where each element is generated by the provided Supplier. This is suitable for generating constant streams, streams of random elements, etc.\npublic class UnboundedStreamingApp { private final Logger logger = Logger.getLogger( UnboundedStreamingApp.class.getName()); public void generateStreamingData(){ Stream.generate(()-\u0026gt;UUID.randomUUID().toString()) .limit(10) .forEach(logger::info); } } Here, we pass UUID.randomUUID().toString() as a Supplier function, which returns 10 randomly generated unique identifiers.\nWith infinite streams, we need to provide a condition to eventually terminate the processing. One common way of doing this is by using the limit() operation. In the above example, we limit the stream to 10 unique identifiers and print them out as they get generated.\niterate() Operation The iterate() method is a common way of generating an infinite sequential stream. The iterate() method takes two parameters: an initial value called the seed element, and a function that generates the next element using the previous value. 
This method is stateful by design so it is not useful in parallel streams:\npublic class UnboundedStreamingApp { private final Logger logger = Logger.getLogger( UnboundedStreamingApp.class.getName()); public void iterateStreamingData(){ Stream\u0026lt;Double\u0026gt; evenNumStream = Stream.iterate( 2.0, element -\u0026gt; Math.pow(element, 2.0)); List\u0026lt;Double\u0026gt; collect = evenNumStream .limit(5) .collect(Collectors.toList()); collect.forEach(element-\u0026gt;logger.info(\u0026#34;value==\u0026#34;+element)); } } Here, we have set 2.0 as the seed value, which becomes the first element of our stream. This value is passed as input to the lambda expression element -\u0026gt; Math.pow(element, 2.0), which returns 4. This value, in turn, is passed as input in the next iteration.\nThis continues until we generate the number of elements specified by the limit() operation which acts as the terminating condition. These types of operations which terminate an infinite stream are called short-circuiting operations. We have already seen two other short-circuiting operations: findFirst() and findAny() in an earlier section.\nParallel Streams We can execute streams in serial or in parallel. When a stream executes in parallel, the stream is partitioned into multiple substreams. Aggregate operations iterate over and process these substreams in parallel and then combine the results.\nWhen we create a stream, it is a serial stream by default. 
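The default execution mode can be inspected and switched on any existing stream; here is a minimal sketch (class name ours) using the isParallel() and parallel() methods from the BaseStream interface:

```java
import java.util.List;
import java.util.stream.Stream;

public class SerialVsParallelDemo {
    public static void main(String[] args) {
        // Streams obtained from a collection are serial by default
        Stream<String> serial = List.of("a", "b", "c").stream();
        System.out.println(serial.isParallel()); // false

        // parallel() switches the stream to parallel execution;
        // sequential() would switch it back
        Stream<String> parallel = List.of("a", "b", "c").stream().parallel();
        System.out.println(parallel.isParallel()); // true
    }
}
```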
We create a parallel stream by invoking parallelStream() on a Collection, or by calling the parallel() method defined on the BaseStream interface.\nIn this example, we are printing each element of the stream using the forEach() method and the forEachOrdered() method:\npublic class ParallelStreamingApp { private final Logger logger = Logger.getLogger( ParallelStreamingApp.class.getName()); public void processParallelStream(){ List\u0026lt;String\u0026gt; list = List.of(\u0026#34;washing machine\u0026#34;, \u0026#34;Television\u0026#34;, \u0026#34;Laptop\u0026#34;, \u0026#34;grocery\u0026#34;); list.parallelStream().forEach(logger::info); list.parallelStream().forEachOrdered(logger::info); } } The forEach() method prints the elements of the list in an arbitrary order. Since stream operations use internal iteration when processing elements, when we execute a stream in parallel the Java runtime determines the order in which to process the stream\u0026rsquo;s elements to maximize the benefits of parallel computing.\nWe use the forEachOrdered() method when we want to process the elements of the stream in the order specified by its source, regardless of whether we are executing the stream in serial or parallel. But while doing this, we also lose the benefits of parallelism even if we use parallel streams.\nConclusion In this article, we looked at the different capabilities of Java Streams. Here is a summary of the important points from the article:\n A stream is a sequence of elements on which we can perform different kinds of sequential and parallel operations. The java.util.stream package contains the interfaces and classes to support functional-style operations on streams of elements. In addition to the Stream interface, which is a stream of object references, there are primitive specializations like IntStream, LongStream, and DoubleStream. We can obtain streams from arrays and collections by calling the stream() method.
We can also get a Stream by calling a static factory method on the Stream class. Most streams are backed by collections, arrays, or generating functions and do not need to be closed after use. However, streams obtained from files need to be closed after use. The operations that we can perform on a stream are broadly categorized into two types: intermediate and terminal. Intermediate operations transform one stream into another stream. Terminal operations are applied on a stream to get a single result like a primitive, object, or collection, or may not return anything. Operations on streams are commonly chained together to form a pipeline to execute specific use cases. Infinite streams are sequential unordered streams with an unending sequence of elements. They are generated using the generate() and iterate() operations. We can execute streams in serial or in parallel. When a stream executes in parallel, the stream is partitioned into multiple substreams. Aggregate operations iterate over and process these substreams in parallel and then combine the results.  You can refer to all the source code used in the article on GitHub.\n","date":"May 5, 2022","image":"https://reflectoring.io/images/stock/0122-snow-1200x628-branded_hu206f99a1a56c89a2586cec5241bde683_343930_650x0_resize_q90_box.jpg","permalink":"/comprehensive-guide-to-java-streams/","title":"Comprehensive Guide to Java Streams"},{"categories":["AWS"],"contents":"Amazon Elastic Compute Cloud (EC2) is a compute service with which we can create virtual machines in the AWS Cloud.
We can configure the computing capacity of an EC2 instance and attach different types and capacities of storage, which we can further scale up or down to handle changes in server load and consumer traffic, thereby reducing the need to forecast capacity and invest in hardware upfront.\nIn this article, we will introduce the Amazon EC2 service and understand some of its core concepts like instances, instance types, disk storage, networking, elastic capabilities, and security by creating a few instances of EC2 and applying different configurations to those instances.\nCreating an Amazon EC2 Instance Let us get an idea of the EC2 service by creating what we call an \u0026ldquo;EC2 instance\u0026rdquo;.\nAn EC2 instance is a virtual machine in the cloud. Like most AWS resources, we can create an EC2 instance from the AWS administration console, the AWS Command Line Interface (CLI), or by leveraging infrastructure-as-code services like CloudFormation and AWS CDK.\nThe minimum information required for creating an EC2 instance is the operating system (like Linux, Windows, or macOS) along with the size (CPU, memory, storage) of the virtual machine. We select the operating system when we select the AMI (Amazon Machine Image). The size of the virtual machine is specified through the \u0026ldquo;instance type\u0026rdquo; configuration.\nCreating a Linux EC2 Instance Let us create an EC2 instance that will have Linux as the operating system and a size of 1 CPU from the AWS administration console as per this screenshot:\nFor creating the instance, we have selected an AMI named Amazon Linux 2 AMI (HVM) - Kernel 5.10, SSD Volume Type and an instance type t2.micro.\nAn Amazon Machine Image (AMI) is a template that contains a base configuration for the virtual machine that we want to create. It includes the operating system, root storage volumes, and some pre-installed applications.
The Amazon Linux AMI used in this example includes packages and configurations for integration with Amazon Web Services and is pre-installed with many AWS API tools.\nInstance types comprise varying combinations of CPU, memory, storage, and networking capacity and give us the flexibility to choose the appropriate mix of resources for our applications. We have selected our instance type as t2.micro in this example, which is a low-cost, general-purpose instance type that provides a baseline level of CPU performance with the ability to burst above the baseline when needed.\nIn the last step of creating the instance, we need to choose from an existing key pair or create a new key pair which we use to connect to our instance when it is ready. A key pair is a combination of a public key that is stored by AWS and a private key that we need to store. Let\u0026rsquo;s create a new key pair, download the private key and store it in our local workstation. We will explore this further in the next section when we will connect to this EC2 instance.\nWe have accepted default properties for all other configurations like storage volume and security group.\nWhen we launch an EC2 instance, it takes a short time for the instance to be ready and it remains in its initial state of pending. The state of the EC2 instance changes to running after it starts and it receives a public DNS name. We can see the EC2 instance in the running state as shown below:\nWe can also see attributes of the running instance in the lower block. 
Some of the important attributes to note are:\n Instance ID: This is the identifier of the instance Public DNS name: Public DNS name of the instance Availability Zone: Availability Zone where the instance is created Security Group: Set of rules to allow or disallow incoming and outgoing traffic to the EC2 instance  We will use these attributes to add or modify the configurations of our EC2 instance in the subsequent sections.\nCreating a Windows EC2 Instance Let us similarly create a Windows EC2 instance by selecting an AMI for Windows Operating System as shown below:\nAs we can see from the description of the AMI, this will create an EC2 instance with the 2019 version of the Microsoft Windows operating system.\nConnecting to an Amazon EC2 Instance We can connect to the shell of our Linux instance or via remote desktop to our Windows instance. Let\u0026rsquo;s explore both.\nConnecting to the EC2 Linux Instance We connect to EC2 instances created with Linux AMIs using an SSH client.\nFor accessing the EC2 instance we had created an SSH key pair during instance creation for connecting to the instance. The SSH key pair is used to authenticate the identity of a user or process that wants to access the EC2 instance using the SSH protocol.\nA key pair as explained earlier is a combination of the public key which is stored by AWS and a private key which we need to store. We had downloaded the private key and stored it in our workstation in the path: ~/Downloads/mykeypair.pem.\nThe public key is saved in a file .ssh/authorized_keys in the EC2 instance that contains a list of all authorized public keys.\nWe use the below ssh command to connect to our instance with our private key:\nchmod 400 ~/Downloads/mykeypair.pem ssh -i ~/Downloads/mykeypair.pem ec2-user@ec2-34-235-151-78.compute-1.amazonaws.com Before running the ssh command, we change the permission of our private key file. 
We have used the public DNS name: ec2-34-235-151-78.compute-1.amazonaws.com to connect to our instance. The logged-in ssh session for our EC2 instance looks like this:\n__| __|_ ) _| ( / Amazon Linux 2 AMI ___|\\___|___| https://aws.amazon.com/amazon-linux-2/ [ec2-user@ip-172-31-31-48 ~]$ As we can see, we have logged in as ec2-user and can execute commands in the Linux shell.\nConnecting to the EC2 Windows Instance We connect to an EC2 Windows instance with a Remote Desktop client. For Windows AMIs, the private key file is required to obtain the initial administrator password to log into our instance. Let us retrieve this password using the private key file by performing the following actions in the EC2 console:\nAs we can see here, we have specified the connection method as a standalone RDP client which gives us the option to download the Remote Desktop file and get the password. We have downloaded the Remote Desktop file and saved it as ec2-54-235-20-109.compute-1.amazonaws.com.rdp.\nThe name of the administrator account depends on the language of the operating system. For example, for English, it\u0026rsquo;s Administrator. We initiate the connection with our Windows EC2 instance by running the Remote Desktop file as shown below:\nWe log in to the EC2 Windows instance by providing the password generated using the private key file earlier.\nIf our instance is joined to a domain, we can connect to our instance using the Remote Desktop client with domain credentials defined in AWS Directory Service.\nAdding Storage for an EC2 Instance The EC2 instances created earlier were configured with a Root Storage Device which contained all the information necessary to boot the instance. 
A root storage device is created for an instance when we launch an instance from an AMI.\nEC2 provides the following data storage options with each option having a unique combination of performance and durability:\n Elastic Block Store (EBS): We use EBS as a primary storage device for data that requires frequent and granular updates, for example, a write-heavy database. EBS provides durable, block-level storage volumes that we can attach to a running instance. For more details, see Amazon EBS volume types:  General Purpose SSD (gp2 and gp3), Provisioned IOPS SSD (io1 and io2), Throughput Optimized HDD (st1), Cold HDD (sc1)   EC2 instance store: We use an instance store for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. It is located on disks that are physically attached to the host computer. Elastic File System (EFS): We use an EFS file system as a common data source for workloads and applications running on multiple instances. EFS provides scalable file storage. Simple Storage Service (S3): S3 provides access to reliable and inexpensive data storage infrastructure.  We can attach an EBS volume to an EC2 instance in the same Availability Zone. After we attach a volume, it appears as a native block device similar to a hard drive or another physical device. 
At that point, the instance can interact with the volume just as it would with a local drive.\nControlling Incoming and Outgoing Connections to the EC2 Instance with Security Groups We control incoming and outgoing traffic to EC2 instances by configuring a security group.\nA security group is composed of inbound rules which control the incoming traffic to the EC2 instance, and outbound rules that control the outgoing traffic from the instance.\nWe can specify one or more security groups when we launch an instance or even when the instance is running.\nLet us configure our EC2 instance to allow only SSH and HTTP requests. For this we will add two inbound rules to the security group associated with our EC2 instance as shown below:\nHere we created two inbound rules:\n one for protocol SSH and port 22 one for protocol HTTP and port 80.  A security group is a tool for securing our EC2 instances, and we need to configure it to meet our security needs.\nConfiguring EC2 Instances The EC2 instance that we launched earlier is a bare-bones virtual machine, which is not very useful by itself. We need to further configure the instance by installing OS upgrades, security patches, and common software mandated by organization policies before it can be used for regular operations.\nSome of the configurations that we apply to an EC2 instance are:\n  Installing and updating software: Software packages for Linux are stored in software repositories. We can add a software repository and use the package management tool provided by Linux to search, install, and update software applications in the repository.\n  Adding Users: We can create user accounts for individual users who can have their own files and workspaces in the EC2 instance.\n  Setting System Time: By default, the latest versions of Amazon Linux 2 and Amazon Linux AMIs synchronize with the Amazon Time Sync Service.
For EC2 instances created with other AMIs, we can configure the Amazon Time Sync Service on the instance using the chrony client or can use external NTP and public time sources.\n  Changing Hostname: We can use a public DNS name registered for the IP address of our instance for the hostname.\n  Setting up Dynamic DNS: Dynamic DNS services provide custom DNS hostnames within their domain area that are easy to remember. We can use a dynamic DNS provider to configure the instance to update the IP address associated with a public DNS name each time the instance starts.\n  Running Commands at Launch: We can perform initial configuration tasks by running scripts during the launch of an EC2 instance. The scripts are attached using a feature called user data.\n  Let us configure our EC2 instance to install an Apache HTTP Server at launch by adding the below script to the user data configuration:\n#!/bin/bash yum update -y yum -y install httpd usermod -a -G apache ec2-user chown -R ec2-user:apache /var/www chmod 2775 /var/www echo \u0026#34;Hello from $(hostname -f)\u0026#34; \u0026gt; /var/www/html/index.html systemctl start httpd systemctl enable httpd As we can see, the script starts with #!/bin/bash and is run with root privilege so we should not add sudo to any command.\nTo run this script at instance startup, we need to add this script to the user data configuration of the EC2 instance as shown below:\nFor running instances, we need to stop the instance before adding user data.\nWe also need to add an inbound security rule to allow traffic from port 80 as shown below:\nWe should use AWS CloudFormation and AWS OpsWorks for more complex automation scenarios.\nCustom images are created from instances or snapshots, or imported from your local device. We can create a custom image from an EC2 instance that has applications deployed, and then use the custom image to create identical instances.
This eliminates the need for repeated configurations.\nRegister EC2 Instances as Targets of an Application Load Balancer In the last section, we started the Apache Httpd server in our EC2 instance. The Apache Httpd server is used to serve web pages in response to HTTP requests sent from web browsers. But a single EC2 instance running the Apache Httpd server can get overwhelmed and run out of resources when it receives a very high number of requests simultaneously.\nTo mitigate this situation, we can register multiple EC2 instances as targets of an Application Load Balancer. An Application Load Balancer distributes incoming application traffic across multiple EC2 instances placed in multiple Availability Zones, thereby increasing the availability of our application.\nHere is an example of creating an Application Load Balancer with our EC2 instances as targets:\nHere we can see two EC2 instances put in a target group which we will register as a target of an Application Load Balancer.\nThen we configure the routing property of the load balancer with this target group:\nAdditionally, we have configured an HTTP listener and a security group to control traffic for this load balancer.\nWe can set up the Application Load Balancer to route traffic based on advanced application-level information that includes the content of the request.\nAuto Scaling an EC2 Fleet An EC2 fleet contains a group of On-Demand and Spot instances.
We can automate the management of a fleet of EC2 instances with Auto Scaling to meet a pre-defined target capacity.\nEC2 Auto Scaling ensures that we always have the correct number of EC2 instances available to handle the load for our application.\nThe key components of EC2 Auto Scaling are:\n Auto Scaling Group Launch Template Scaling Policies  For configuring Auto Scaling, we need to create an Auto Scaling Group and associate it with a launch template.\nThe Auto Scaling group contains a collection of EC2 instances along with the maximum and the minimum number of EC2 instances and scaling policies, based on which EC2 instances are launched or terminated as demand on our application increases or decreases.\nA launch template contains the configuration information required to launch an instance and contains the instance-level settings such as the Amazon Machine Image (AMI), instance type, key pair, and security groups.\nA snippet of a launch template as displayed in the EC2 administration console is shown below:\nAs we can see, this launch template has the instance type property set to t2.micro.\nWhen we create the Auto Scaling Group, we specify this launch template:\nThe Auto Scaling Group will use the configurations in this launch template to launch new EC2 instances.\nWe further specify the group size and scaling policies of the Auto Scaling Group as shown below:\nHere we have set the scaling policy of the Auto Scaling Group to maintain Average CPU utilization at 50.\nMonitoring an EC2 Instance The primary metrics which we monitor in EC2 instances are:\n CPUUtilization for CPU utilization NetworkIn and NetworkOut for Network utilization DiskReadOps and DiskWriteOps for Disk performance DiskReadBytes and DiskWriteBytes for Disk Reads/Writes  By default, EC2 sends metric data to CloudWatch in 5-minute intervals. We can enable detailed monitoring on the EC2 instance to send metric data for our instance to CloudWatch in 1-minute intervals. 
The Amazon EC2 console displays a series of graphs based on the raw data from Amazon CloudWatch.\nDepending on our needs, we might prefer to get information about our instances from the Amazon CloudWatch service instead of the graphs in the EC2 console. For example, we can configure alarms to alert us for specific events in our EC2 instance. An example of creating a Cloudwatch alarm is shown below:\nHere we have set up an alarm that will publish an information message to an SNS topic when the CPU utilization in the EC2 instance is below 1%.\nOptimizing Costs with Purchasing Options We can use the following purchasing options for EC2 to reduce our cost of using EC2:\n On-Demand Instances: We pay by the second for the running instances. Savings Plans: We commit to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years. Reserved Instances: We commit to a consistent instance configuration, including instance type and region, for a term of 1 or 3 years. Spot Instances: We can request unused EC2 instances and use them as part of an autoscaling group. Capacity Reservations: We can reserve capacity for our EC2 instances in a specific Availability Zone for any duration.  Platform Services Based on EC2 EC2 is a foundation-level service in the Amazon Cloud. AWS provides multiple platform level services which take care of the provisioning of EC2 instances and allow us to focus on building applications:\n Elastic Container Service (Amazon ECS): ECS is a highly scalable and fast container management service (more in this article about CloudFormation and ECS). Fargate: With AWS Fargate, we do not need to manage servers, handle capacity planning, or isolate container workloads for security. AWS Elastic Beanstalk: makes it easy for us to create, deploy, and manage scalable, fault-tolerant applications running on the AWS Cloud.  Conclusion Here is a list of the major points for quick reference:\n EC2 is a foundation-level compute service in AWS. 
For creating an EC2 instance we provide an AMI and instance type. We can attach different types of storage to EC2: EBS, EFS, S3, and Instance Store. We control incoming and outgoing traffic to EC2 with security groups. EC2 provides various cost-optimizing options like reserved and spot instances for running our workloads. We define EC2 instances as targets of an Application Load Balancer for increasing the availability of our application. We can use the auto-scaling feature to automatically scale a fleet of EC2 instances for consistent and predictable performance in cases of fluctuating load. We use the AWS CloudWatch service to monitor the health of our EC2 instances.  ","date":"April 24, 2022","image":"https://reflectoring.io/images/stock/0046-rack-1200x628-branded_hu38983fac43ab7b5246a0712a5f744c11_252723_650x0_resize_q90_box.jpg","permalink":"/getting-started-with-amazon-ec2/","title":"Getting Started with Amazon EC2"},{"categories":["Node"],"contents":"Error handling functions in an application detect and capture multiple error conditions and take appropriate remedial actions to either recover from those errors or fail gracefully. Common examples of remedial actions are providing a helpful message as output, logging a message in an error log that can be used for diagnosis, or retrying the failed operation.\nExpress is a framework for developing web applications in Node.js. In an earlier article, we introduced the Express framework with examples of its powerful features, followed by a second article on middleware functions in Express. In both of those articles, we briefly explained error handling using middleware functions.\nThis is the third article in the Express series where we will focus on handling errors in Node.js applications written using Express and understand the below concepts:\n Handling errors with the default error handler provided by Express.
Creating custom error handlers to override the default error handling behavior. Handling errors thrown by asynchronous functions invoked in the routes defined in the Express application. Handling errors by chaining error-handling middleware functions.   Example Code This article is accompanied by a working code example on GitHub. Prerequisites A basic understanding of Node.js and components of the Express framework is advisable.\nPlease refer to our earlier article for an introduction to Express.\nBasic Setup for Running the Examples We need to first set up a Node.js project for running our examples of handling errors in Express applications. Let us create a folder and initialize a Node.js project under it by running the npm init command:\nmkdir storefront cd storefront npm init -y Running these commands will create a Node.js project containing a package.json file.\nWe will next install the Express framework using the npm install command as shown below:\nnpm install express --save When we run this command, it will install the Express framework and also add it as a dependency in our package.json file.\nWe will now create a file named index.js under a folder: js and open the project folder in our favorite code editor. We are using Visual Studio Code as our source-code editor.\nLet us now add the following lines of code to index.js for running a simple HTTP server:\nconst express = require(\u0026#39;express\u0026#39;); const app = express(); // Route for handling get request for path / app.get(\u0026#39;/\u0026#39;, (request, response) =\u0026gt; { response.send(\u0026#39;response for GET request\u0026#39;); }) // Route for handling post request for path /products app.post(\u0026#39;/products\u0026#39;, (request, response) =\u0026gt; { ... response.json(...) 
}) // start the server app.listen(3000, () =\u0026gt; console.log(\u0026#39;Server listening on port 3000.\u0026#39;)) In this code snippet, we are importing the express module and then calling the listen() function on the app handle to start our server.\nWe have also defined two routes that will accept the requests at URLs: / and /products. For an elaborate explanation of routes and handler functions, please refer to our earlier article for an introduction to Express.\nWe can run our application with the node command:\nnode js/index.js This will start a server that will listen for requests in port 3000.\nWe have also defined a server application in a file: js/server.js which we can run to simulate an external service. We can run the server application with the command:\nnode js/server.js This will start the server application on port 3001 where we can access a REST API on a URL: http://localhost:3001/products. We will call this service in some of our examples to test errors related to an external API call.\nThe application in index.js does not contain any error handling code as yet. Node.js applications crash when they encounter unhandled exceptions. So we will next add code to this application for simulating different error conditions and handling them in the subsequent sections.\nHandling Errors in Route Handler Functions The simplest way of handling errors in Express applications is by putting the error handling logic in the individual route handler functions. 
We can either check for specific error conditions or use a try-catch block for intercepting the error condition before invoking the logic for handling the error.\nExamples of error handling logic could be logging the error stack to a log file or returning a helpful error response.\nAn example of error handling in a route handler function is shown here:\nconst express = require(\u0026#39;express\u0026#39;) const app = express() app.use(\u0026#39;/products\u0026#39;, express.json({ limit: 100 })) // handle post request for path /products app.post(\u0026#39;/products\u0026#39;, (request, response) =\u0026gt; { const name = request.body.name ... ... // Check for error condition  if(name == null){ // Error handling logic: log the error  console.log(\u0026#34;input error\u0026#34;) // Error handling logic: return error response  response .status(400) .json({ message: \u0026#34;Mandatory field: name is missing. \u0026#34; }) }else{ // continue with normal processing  const productCreationResponse = { result: \u0026#34;success\u0026#34;} // return success response  response.json(productCreationResponse) } }) Here we are checking for the error condition by checking for the presence of a mandatory input in the request payload and returning the error as an HTTP error response with error code 400 and an error message as part of the error handling logic.\nHere is one more example of handling an error using a try-catch block:\nconst express = require(\u0026#39;express\u0026#39;) const axios = require(\u0026#34;axios\u0026#34;) const app = express() app.get(\u0026#39;/products\u0026#39;, async (request, response) =\u0026gt; { try{ const apiResponse = await axios.get(\u0026#34;http://localhost:3001/products\u0026#34;) const jsonResponse = apiResponse.data console.log(\u0026#34;response \u0026#34; + jsonResponse) response.send(jsonResponse) } catch(error) { // intercept the error in catch block  // return error response  response .status(500) .json({ message: \u0026#34;Error in
invocation of API: /products\u0026#34; }) } }) Here also we are handling the error in the route handler function. We are intercepting the error in a catch block and returning an error message with an error code of 500 in the HTTP response.\nBut this method of putting error handling logic in all the route handler functions is not clean. We will try to handle this more elegantly using the middleware functions of Express as explained in the subsequent sections.\nDefault Built-in Error Handler of Express When we use the Express framework to build our web applications, we get an error handler by default that catches and processes all the errors thrown in the application.\nLet us check this behavior with the help of this simple Express application with a route that throws an error:\nconst express = require(\u0026#39;express\u0026#39;) const app = express() // handle get request for path /productswitherror app.get(\u0026#39;/productswitherror\u0026#39;, (request, response) =\u0026gt; { // throw an error with status code of 400  let error = new Error(`processing error in request at ${request.url}`) error.statusCode = 400 throw error }) const port = 3000 app.listen(3000, () =\u0026gt; console.log(`Server listening on port ${port}.`)); When we invoke this route with URL /productswitherror, we will get an error with a status code of 400 and an error message: processing error in request .... But we do not have to handle this error since it is handled by the default error handler of the Express framework.\nWhen we call this route either by putting this URL in a browser or by running a CURL command in a terminal window, we will get an error stack contained in an HTML format as output as shown:\nError: processing error in request at /productswitherror at /.../storefront/js/index.js:43:15 at Layer.handle .. 
(/.../storefront/node_modules/express/lib/router/layer.js:95:5) at next (/.../storefront/node_modules/express/lib/router/route.js:137:13) at Route.dispatch (/.../storefront/node_modules/express/lib/router/route.js:112:3) at Layer.handle .. (/.../storefront/node_modules/express/lib/router/layer.js:95:5) at /.../storefront/node_modules/express/lib/router/index.js:281:22 at Function.process_params (/.../storefront/node_modules/express/lib/router/index.js:341:12) at next (/.../storefront/node_modules/express/lib/router/index.js:275:10) at SendStream.error (/.../storefront/node_modules/serve-static/index.js:121:7) at SendStream.emit (node:events:390:28) This is the error message sent by the Express framework\u0026rsquo;s default error handler. Express catches this error for us and responds to the caller with the error’s status code, message, and stack trace (only for non-production environments). But this behavior applies only to synchronous functions.\nHowever, asynchronous functions called from route handlers that throw an error need to be handled differently. Errors from asynchronous functions are not handled by the default error handler in Express and result in the stopping (crashing) of the application.\nTo prevent this behavior, we need to pass the error thrown by any asynchronous function invoked by route handlers and middleware to the next() function as shown below:\nconst asyncFunction = async (request, response, next) =\u0026gt; { try { throw new Error(`processing error in request `) } catch(error) { next(error) } } Here we are catching the error and passing it to the next() function. Now the application will be able to run without interruption and invoke the default error handler or any custom error handler if we have defined it.\nHowever, this default error handler is not very elegant or user-friendly, giving scant information about the error to the end-user.
We will improve this behavior by adding custom error handling functions in the next sections.\nHandling Errors with Error Handling Middleware Functions An Express application is essentially a series of middleware function calls. We define a set of middleware functions and attach them as a stack to one or more route handler functions. We invoke the next middleware function in the stack by calling the next() function.\nThe error handling middleware functions are attached as a separate stack of functions:\nWhen an error occurs, we call the next(error) function and pass the error object as input. The Express framework will process this by skipping all the remaining functions in the middleware function stack and triggering the functions in the error handling middleware function stack.\nThe error handling middleware functions are defined in the same way as other middleware functions, but they accept the error object as the first input parameter, followed by the three input parameters accepted by the other middleware functions: request, response, and next, as shown below:\nconst express = require(\u0026#39;express\u0026#39;) const app = express() const errorHandler = (error, request, response, next) =\u0026gt; { // Error handling middleware functionality } // route handlers app.get(...) app.post(...) 
// attach error handling middleware functions after route handlers app.use(errorHandler) These error-handling middleware functions are attached to the app instance after the route handler functions have been defined.\nThe built-in default error handler of Express described in the previous section is also an error-handling middleware function and is attached at the end of the middleware function stack if we do not define any error-handling middleware function.\nAny error in the route handlers gets propagated through the middleware stack and is handled by the last middleware function, which can be the default error handler or one or more custom error-handling middleware functions if defined.\nCalling the Error Handling Middleware Function When we get an error in the application, the error object is passed to the error-handling middleware by calling the next(error) function as shown below:\nconst express = require(\u0026#39;express\u0026#39;) const axios = require(\u0026#34;axios\u0026#34;) const app = express() const errorHandler = (error, request, response, next) =\u0026gt; { // Error handling middleware functionality  console.log( `error ${error.message}`) // log the error  const status = error.statusCode || 400 // send back an easily understandable error message to the caller  response.status(status).send(error.message) } app.get(\u0026#39;/products\u0026#39;, async (request, response, next) =\u0026gt; { try { const apiResponse = await axios.get(\u0026#34;http://localhost:3001/products\u0026#34;) const jsonResponse = apiResponse.data response.send(jsonResponse) } catch(error) { next(error) // calling next error handling middleware  } }) app.use(errorHandler) As we can see here, the next(error) function takes the error object in the catch block as input, which is passed on to the next error-handling middleware function where we can potentially put the logic to extract relevant information from the error object, log the error, and send back an easily understandable error message to the 
caller.\nAdding Multiple Middleware Functions for Error Handling We can chain multiple error-handling middleware functions, similar to what we do for other middleware functions.\nLet us define two error handling middleware functions and add them to our routes:\nconst express = require(\u0026#39;express\u0026#39;) const axios = require(\u0026#34;axios\u0026#34;) const app = express() const PORT = 3000 // Error handling Middleware function for logging the error message const errorLogger = (error, request, response, next) =\u0026gt; { console.log( `error ${error.message}`) next(error) // calling next middleware } // Error handling Middleware function reads the error message // and sends back a response in JSON format const errorResponder = (error, request, response, next) =\u0026gt; { response.header(\u0026#34;Content-Type\u0026#34;, \u0026#39;application/json\u0026#39;) const status = error.statusCode || 400 response.status(status).send(error.message) } // Fallback Middleware function for returning // 404 error for undefined paths const invalidPathHandler = (request, response, next) =\u0026gt; { response.status(404) response.send(\u0026#39;invalid path\u0026#39;) } // Route with a handler function which throws an error app.get(\u0026#39;/productswitherror\u0026#39;, (request, response) =\u0026gt; { let error = new Error(`processing error in request at ${request.url}`) error.statusCode = 400 throw error }) app.get(\u0026#39;/products\u0026#39;, async (request, response, next) =\u0026gt; { try { const apiResponse = await axios.get(\u0026#34;http://localhost:3001/products\u0026#34;) const jsonResponse = apiResponse.data response.send(jsonResponse) } catch(error) { next(error) // calling next error handling middleware  } }) // Attach the first Error handling Middleware // function defined above (which logs the error) app.use(errorLogger) // Attach the second Error handling Middleware // function defined above (which sends back the response) app.use(errorResponder) // Attach the fallback Middleware // function which sends back the response for invalid 
paths) app.use(invalidPathHandler) app.listen(PORT, () =\u0026gt; { console.log(`Server listening at http://localhost:${PORT}`) }) These error-handling middleware functions perform different tasks:\n errorLogger logs the error message\n errorResponder sends the error response to the caller\n We have then attached these two error-handling middleware functions to the app object, after the definitions of the route handler functions, by calling the use() method on the app object.\nTo test how our application handles errors with the help of these error handling functions, let us invoke the route with the URL: localhost:3000/productswitherror. The error raised from this route causes the first two error handlers to be triggered. The first one logs the error message to the console and the second one sends the error message processing error in request at /productswitherror in the response.\nWe have also added a middleware function invalidPathHandler() at the end of the chain which acts as a fallback function to handle requests whose routes are not defined.\nPlease note that the function invalidPathHandler() is not an error-handling middleware since it does not take an error object as the first parameter. It is a conventional middleware function that gets invoked at the end of the middleware stack.\nWhen we request a non-existent route in the application, for example http://localhost:3000/productswitherrornew, Express does not find any matching routes. So it does not invoke any route handler functions and associated middleware and error handling functions. It invokes only the middleware function invalidPathHandler() at the end, which sends an error message: invalid path with an HTTP status code of 404.\nError Handling while Calling Promise-based Methods Lastly, it will be worthwhile to look at the best practices for handling errors in JavaScript Promise blocks. 
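The skipping behavior described above can be illustrated with a small self-contained sketch. This is our own simplified model of the idea, not Express internals: a function declared with four parameters counts as an error handler, and calling next(error) jumps past the remaining regular middleware straight to the error-handling stack.

```javascript
// Simplified model of the dispatch loop (our own sketch, not Express
// internals): functions with 4 declared parameters are error handlers;
// next(error) skips the remaining regular middleware and dispatches
// only to the error-handling stack.
function run(middlewares, request) {
  let index = 0
  const response = { body: null, send(value) { this.body = value } }
  function next(error) {
    while (index < middlewares.length) {
      const fn = middlewares[index++]
      const isErrorHandler = fn.length === 4 // (error, request, response, next)
      if (error && isErrorHandler) return fn(error, request, response, next)
      if (!error && !isErrorHandler) {
        try { return fn(request, response, next) } catch (e) { return next(e) }
      }
      // otherwise: this function does not apply in the current mode; skip it
    }
  }
  next()
  return response
}
```

Running a chain of one throwing handler, one regular middleware, and two error handlers through this sketch shows the regular middleware being skipped while both error handlers fire in order, mirroring the errorLogger/errorResponder behavior above.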
A Promise is a JavaScript object which represents the eventual completion (or failure) of an asynchronous operation and its resulting value.\nWe can enable Express to catch errors in Promises by providing next as the final catch handler as shown in this example:\napp.get(\u0026#39;/product\u0026#39;, (request, response, next) =\u0026gt; { axios.get(\u0026#34;http://localhost:3001/product\u0026#34;) .then(apiResponse=\u0026gt;apiResponse.data) .then(jsonResponse=\u0026gt;response.send(jsonResponse)) .catch(next) }) Here we are calling a REST API with the axios library, which returns a Promise, and catching any error in the API invocation by providing next as the final catch handler.\nAccording to the Express Docs, from Express 5 onwards, route handlers and middleware functions that return a Promise will call next(value) automatically when they reject or throw an error.\nDeveloping Express Error Handling Middleware with TypeScript TypeScript is an open-source language developed by Microsoft. 
It is a superset of JavaScript with additional capabilities, the most notable being static type definitions, making it an excellent tool for a better and safer development experience.\nLet us first add support for TypeScript to our Node.js project and then see a snippet of the error handling middleware functions written using the TypeScript language.\nInstalling TypeScript and other Configurations For adding TypeScript, we need to perform the following steps:\n Install TypeScript and ts-node with npm:  npm i -D typescript ts-node Create a JSON file named tsconfig.json with the below contents in our project’s root folder to specify different options for compiling the TypeScript code:  { \u0026#34;compilerOptions\u0026#34;: { \u0026#34;module\u0026#34;: \u0026#34;commonjs\u0026#34;, \u0026#34;target\u0026#34;: \u0026#34;es6\u0026#34;, \u0026#34;rootDir\u0026#34;: \u0026#34;./\u0026#34;, \u0026#34;esModuleInterop\u0026#34;: true } } Install the type definitions of the Node APIs and Express from the @types namespace by installing the @types/node and @types/express packages as a development dependency:  npm i -D @types/node @types/express Writing the Express Error Handling Middleware Functions in TypeScript After enabling the project for TypeScript, we have written the same application built earlier in TypeScript. The files for TypeScript are kept under the folder: ts. 
Here is a snippet of the code in file app.ts containing routes and error handling middleware functions:\nimport express, { Request, Response, NextFunction } from \u0026#39;express\u0026#39; import axios from \u0026#39;axios\u0026#39; const app = express() const port: number = 3000 // Error object used in error handling middleware function  class AppError extends Error{ statusCode: number; constructor(statusCode: number, message: string) { super(message); Object.setPrototypeOf(this, new.target.prototype); this.name = Error.name; this.statusCode = statusCode; Error.captureStackTrace(this); } } // Middleware function for logging the request method and request URL  const requestLogger = ( request: Request, response: Response, next: NextFunction) =\u0026gt; { console.log(`${request.method} url:: ${request.url}`); next() } app.use(requestLogger) app.use(\u0026#39;/products\u0026#39;, express.json({ limit: 100 })) // Error handling Middleware functions  // Error handling Middleware function for logging the error message  const errorLogger = ( error: Error, request: Request, response: Response, next: NextFunction) =\u0026gt; { console.log( `error ${error.message}`) next(error) // calling next middleware  } // Error handling Middleware function reads the error message  // and sends back a response in JSON format  const errorResponder = ( error: AppError, request: Request, response: Response, next: NextFunction) =\u0026gt; { response.header(\u0026#34;Content-Type\u0026#34;, \u0026#39;application/json\u0026#39;) const status = error.statusCode || 400 response.status(status).send(error.message) } // Fallback Middleware function for returning  // 404 error for undefined paths  const invalidPathHandler = ( request: Request, response: Response, next: NextFunction) =\u0026gt; { response.status(404) response.send(\u0026#39;invalid path\u0026#39;) } app.get(\u0026#39;product\u0026#39;, (request: Request, response: Response) =\u0026gt; { response.sendFile(\u0026#34;productsample.html\u0026#34;) }) // handle get request for path 
/  app.get(\u0026#39;/\u0026#39;, (request: Request, response: Response) =\u0026gt; { response.send(\u0026#39;response for GET request\u0026#39;); }) const requireJsonContent = ( request: Request, response: Response, next: NextFunction) =\u0026gt; { if (request.headers[\u0026#39;content-type\u0026#39;] !== \u0026#39;application/json\u0026#39;) { response.status(400).send(\u0026#39;Server requires application/json\u0026#39;) } else { next() } } app.get(\u0026#39;/products\u0026#39;, async ( request: Request, response: Response, next: NextFunction) =\u0026gt; { try { const apiResponse = await axios.get(\u0026#34;http://localhost:3001/products\u0026#34;) const jsonResponse = apiResponse.data console.log(\u0026#34;response \u0026#34; + jsonResponse) response.send(jsonResponse) } catch(error) { next(error) } }) app.get(\u0026#39;/product\u0026#39;, ( request: Request, response: Response, next: NextFunction) =\u0026gt; { axios.get(\u0026#34;http://localhost:3001/product\u0026#34;) .then(apiResponse=\u0026gt;response.send(apiResponse.data)) .catch(next) }) app.get(\u0026#39;/productswitherror\u0026#39;, ( request: Request, response: Response) =\u0026gt; { let error: AppError = new AppError(400, `processing error in request at ${request.url}`) throw error }) // Attach the first Error handling Middleware  // function defined above (which logs the error)  app.use(errorLogger) // Attach the second Error handling Middleware  // function defined above (which sends back the response)  app.use(errorResponder) // Attach the fallback Middleware  // function which sends back the response for invalid paths)  app.use(invalidPathHandler) app.listen(port, () =\u0026gt; { console.log(`Server listening at port ${port}.`) } ) Here we have used the express module to create an application as we have seen before. 
With this configuration, the application will run on port 3000 and can be accessed with the URL: http://localhost:3000.\nWe have modified the import statement on the first line to import the TypeScript interfaces that will be used for the request, response, and next parameters inside the Express middleware.\nRunning the Express Application Written in TypeScript We run the Express application written in TypeScript by using the below command:\nnpx ts-node ts/app.ts Running this command will start the HTTP server. We have used npx here, which is a command-line tool that can execute a package from the npm registry without installing that package.\nConclusion Here is a list of the major points for a quick reference:\n  We perform error handling in Express applications by writing middleware functions that handle errors. These error-handling functions take four parameters: the error object as the first parameter, followed by the request, response, and next() parameters accepted by regular middleware functions.\n  Express comes with a default error handler for handling error conditions. 
This is a default middleware function added by Express at the end of the middleware stack.\n  We call the error handling middleware by passing the error object to the next(error) function.\n  We can define a chain of multiple error-handling middleware functions to one or more routes and attach them at the end of Express route definitions.\n  We can enable Express to catch errors in JavaScript Promises by providing next as the final catch handler.\n  We also used TypeScript to author an Express application with route handler and error-handling middleware functions.\n  You can refer to all the source code used in the article on Github.\n","date":"April 20, 2022","image":"https://reflectoring.io/images/stock/0090-404-1200x628-branded_hu09a369bec6cd81282cda28392f89d387_72453_650x0_resize_q90_box.jpg","permalink":"/express-error-handling/","title":"Error Handling in Express"},{"categories":["Java"],"contents":"Project Lombok is a popular library that helps us to write clear, concise, and less repetitive Java code. However, among the developer community, it has been both embraced and criticized for reasons I would like to elaborate here.\nIn this article, we will focus on factors that will help you make an informed decision about using the library effectively and being wary of its consequences.\n Example Code This article is accompanied by a working code example on GitHub. What is Lombok? According to official docs, \u0026ldquo;Project Lombok is a java library that automatically plugs into your editor and build tools, spicing up your Java.\u0026rdquo;\nThis library provides a set of user-friendly annotations that generate the code at compile time, helping the developers save time and space and improving code readability.\nIDE Support All popular IDEs support Lombok. For example, IntelliJ version 2020.3 and above is compatible with Lombok without a plugin. For earlier versions, plugins can be installed from here. 
Once installed, we need to ensure annotation processing is enabled as in the example configuration below.\nAnnotation processing makes it possible for the IDE to evaluate the Lombok annotations and generate the source code from them at compile time.\nFor Eclipse, go to Help menu \u0026gt; Install new Software \u0026gt; Add https://projectlombok.org/p2. Install the Lombok plugin and restart Eclipse.\nSetting Up a Project with Lombok To use the Lombok features in a new or an existing project, add a compile-time dependency to lombok as below. It makes the Lombok libraries available to the compiler but is not a dependency on the final deployable jar:\nWith Maven:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.projectlombok\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;lombok\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.18.20\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;provided\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; With Gradle:\ncompileOnly group: \u0026#39;org.projectlombok\u0026#39;, name: \u0026#39;lombok\u0026#39;, version: \u0026#39;1.18.20\u0026#39; As an example, consider the below Java class:\npublic class Book { private String isbn; private String publication; private String title; private List\u0026lt;Author\u0026gt; authors; public Book( String isbn, String publication, String title, List\u0026lt;Author\u0026gt; authors) { // Constructor logic goes here  } // All getters and setters are explicitly defined here  public String toString() { return \u0026#34;Book(isbn=\u0026#34; + this.getIsbn() + \u0026#34;, publication=\u0026#34; + this.getPublication() + \u0026#34;, title=\u0026#34; + this.getTitle() + \u0026#34;, authors=\u0026#34; + this.getAuthors() + \u0026#34;)\u0026#34;; } } Using Lombok, we can simplify the above plain Java class to this:\n@Getter @Setter @AllArgsConstructor @ToString public class Book { private String isbn; private String publication; 
private String title; private List\u0026lt;Author\u0026gt; authors; } The above code looks much cleaner and easier to write and understand.\nHow Lombok Works All annotations in Java are processed at compile time by a set of annotation processors. The Java specification does not officially allow annotation processors to modify the Abstract Syntax Tree (AST); it only states that they may generate new files and documentation.\nSince the Java Compiler Specification does not prevent annotation processors from modifying source files, Lombok developers have cleverly used this loophole to their advantage. For more information on how annotation processing in Java works, refer here.\nAdvantages of Lombok Let\u0026rsquo;s look at some of the most prominent benefits of using Lombok.\nClean Code With Lombok, we can replace boilerplate code with meaningful annotations. They help the developer focus on business logic. Lombok also provides some annotations that combine multiple other annotations (like @Data combines @ToString, @EqualsAndHashCode, @Getter / @Setter, and @RequiredArgsConstructor), so we don\u0026rsquo;t have to \u0026ldquo;pollute\u0026rdquo; our code with too many annotations.\nSince the code is more concise, modifying and adding new fields doesn\u0026rsquo;t require so much typing. A list of all available annotations is available here.\nSimple Creation of Complex Objects The Builder pattern is used when we need to create objects that are complex and flexible (in constructor arguments). 
With Lombok, this is achieved using @Builder.\nConsider the below example:\n@Builder public class Account { private String acctNo; private String acctName; private String dateOfJoin; private String acctStatus; } Let\u0026rsquo;s use IntelliJ\u0026rsquo;s \u0026ldquo;Delombok\u0026rdquo; feature to understand the code written behind the scenes.\npublic class Account { private String acctNo; private String acctName; private String dateOfJoin; private String acctStatus; Account(String acctNo, String acctName, String dateOfJoin, String acctStatus) { this.acctNo = acctNo; this.acctName = acctName; this.dateOfJoin = dateOfJoin; this.acctStatus = acctStatus; } public static AccountBuilder builder() { return new AccountBuilder(); } public static class AccountBuilder { private String acctNo; private String acctName; private String dateOfJoin; private String acctStatus; AccountBuilder() { } public AccountBuilder acctNo(String acctNo) { this.acctNo = acctNo; return this; } public AccountBuilder acctName(String acctName) { this.acctName = acctName; return this; } public AccountBuilder dateOfJoin(String dateOfJoin) { this.dateOfJoin = dateOfJoin; return this; } public AccountBuilder acctStatus(String acctStatus) { this.acctStatus = acctStatus; return this; } public Account build() { return new Account(acctNo, acctName, dateOfJoin, acctStatus); } public String toString() { return \u0026#34;Account.AccountBuilder(acctNo=\u0026#34; + this.acctNo + \u0026#34;, acctName=\u0026#34; + this.acctName + \u0026#34;, dateOfJoin=\u0026#34; + this.dateOfJoin + \u0026#34;, acctStatus=\u0026#34; + this.acctStatus + \u0026#34;)\u0026#34;; } } } The code written with Lombok is much easier to understand than the verbose delomboked version above. As we can see, all the complexity of creating the Builder class is hidden from the developer, making the code more precise. 
Now, we can create objects easily.\nAccount account = Account.builder().acctName(\u0026#34;Savings\u0026#34;) .acctNo(\u0026#34;A001090\u0026#34;) .build(); Creating Immutable Objects Made Easy Once created, an immutable object cannot be modified. The concept of immutability is vital when creating a Java application. Some of its benefits include thread safety, ease of caching, and ease of object maintainability. To understand why it is a good idea to make classes immutable, refer to this article.\nLombok provides the @Value annotation to create immutable classes:\n@Value public class Person { private String firstName; private String lastName; private String socialSecurityNo; private List\u0026lt;String\u0026gt; hobbies; } The delomboked version is as below:\npublic final class Person { private final String firstName; private final String lastName; private final String socialSecurityNo; private final List\u0026lt;String\u0026gt; hobbies; public Person(String firstName, String lastName, String socialSecurityNo, List\u0026lt;String\u0026gt; hobbies) { this.firstName = firstName; this.lastName = lastName; this.socialSecurityNo = socialSecurityNo; this.hobbies = hobbies; } public String getFirstName() { return this.firstName; } public String getLastName() { return this.lastName; } public String getSocialSecurityNo() { return this.socialSecurityNo; } public List\u0026lt;String\u0026gt; getHobbies() { return this.hobbies; } public boolean equals(final Object o) { // Default equals implementation  } public int hashCode() { // default hashcode implementation  } public String toString() { return \u0026#34;Person(firstName=\u0026#34; + this.getFirstName() + \u0026#34;, lastName=\u0026#34; + this.getLastName() + \u0026#34;, socialSecurityNo=\u0026#34; + this.getSocialSecurityNo() + \u0026#34;, hobbies=\u0026#34; + this.getHobbies() + \u0026#34;)\u0026#34;; } } The @Value annotation ensures the state of the object is unchanged once created:\n it makes the class final\n it makes the fields final\n it generates only getters and not setters\n it creates a constructor that takes all fields as arguments\n In other words, the @Value annotation is a shorthand for using all of these annotations:\n @Getter, @FieldDefaults(makeFinal=true, level=AccessLevel.PRIVATE), @AllArgsConstructor, @ToString, and @EqualsAndHashCode.  We can further enforce immutability in the above example by adding @AllArgsConstructor(access = AccessLevel.PRIVATE) to make the constructor private and force object creation via the Builder pattern.\nIf you\u0026rsquo;re looking for a library that generates immutable objects, you should also have a look at the immutables library.\nCaveats with Lombok Above are some benefits of using Lombok. By now you would have realized the value these annotations can provide to your code. However, in my experience of using Lombok, I have noticed developers misusing these annotations and using them across the whole codebase, making the code messy and prone to errors.\nLet\u0026rsquo;s look at some situations where Lombok could be used incorrectly.\nUsing Lombok with JPA Entities Although using Lombok to generate boilerplate code for entities is attractive, it does not work well with JPA and Hibernate entities. Below are a few examples of what could go wrong when using Lombok with JPA.\nAvoid @ToString The seemingly harmless @ToString could do more harm to our application than we would expect. 
Consider the below entity classes:\n@Entity @Table(name = \u0026#34;BOOK\u0026#34;) @Getter @Setter @ToString public class Book { @Id private long id; private String name; @ManyToMany(cascade = CascadeType.PERSIST, fetch = FetchType.LAZY) @JoinTable(name = \u0026#34;publisher_book\u0026#34;, joinColumns = @JoinColumn(name = \u0026#34;book_id\u0026#34;, referencedColumnName = \u0026#34;id\u0026#34;), inverseJoinColumns = @JoinColumn(name = \u0026#34;publisher_id\u0026#34;, referencedColumnName = \u0026#34;id\u0026#34;)) private Set\u0026lt;Publisher\u0026gt; publishers; } @Entity @Getter @Setter @Builder @ToString public class Publisher implements Serializable { @Id private long id; private String name; @ManyToMany(mappedBy = \u0026#34;publishers\u0026#34;) private Set\u0026lt;Book\u0026gt; books; } As we can see, there is a @ManyToMany relationship that requires a JOIN with another table to fetch data. The Repository class that fetches data from the table is as below:\n@Repository public interface BookRepository extends JpaRepository\u0026lt;Book, Long\u0026gt; { } There are three main problems here:\n In an entity class, not all attributes of an entity are initialized. If an attribute has a FetchType of LAZY, it gets loaded only when used in the application. However, @ToString requires all attributes of an entity and would trigger the lazy loading, making one or multiple database calls. This can unintentionally cause performance issues. Further, if we call toString() on the entity outside of the scope of a transaction, it could lead to a LazyInitializationException. In the case of associations like @ManyToMany between 2 entities, logging the entity data could result in evaluating circular references and causing a StackOverflowError. In the example above, the Book entity will try to fetch all publishers of the book. Each Publisher entity in turn will try to fetch all books of the publisher. This process will keep repeating until it results in an error.  
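One hedged way around these @ToString pitfalls is to write toString() by hand and touch only the basic attributes, never the lazy association. Here is a minimal Lombok-free sketch (our own illustration, not code from the article's example project):

```java
// A Lombok-free sketch (our own illustration): toString() only reads
// basic fields and deliberately leaves out the lazy publishers
// association, so logging an entity can never trigger lazy loading
// or recurse into the other side of the relationship.
class Book {
    private long id;
    private String name;
    // private Set<Publisher> publishers;  // excluded from toString() on purpose

    Book(long id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public String toString() {
        return "Book(id=" + id + ", name=" + name + ")";
    }
}
```

Because the association is never dereferenced, this version is safe to log both inside and outside a transaction.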
Avoid @EqualsAndHashCode Lombok uses all non-final attributes to evaluate and override the default equals and hashCode. This isn\u0026rsquo;t always desirable in the case of entities due to the following reasons:\n Most primary keys in the database are auto-generated by the database during insertion. This can cause issues in the hashCode computation process as the ID is not available before the entity has been persisted, causing unexpected results. Every database record is uniquely identified by its primary key. In such cases using the Lombok implementation of @EqualsAndHashCode might not be ideal.  Although Lombok allows us to include and exclude attributes, for the sake of brevity it might be a better option to override these methods (toString(), equals(), hashCode()) ourselves and not rely on Lombok.\nLombok Hides Coding Violations Consider a snippet of the model class as below:\n@Data @Builder @AllArgsConstructor public class CustomerDetails { private String id; private String name; private Address address; private Gender gender; private String dateOfBirth; private String age; private String socialSecurityNo; private Contact contactDetails; private DriverLicense driverLicense; } For the project, we have configured the static code analyzer Checkstyle that runs as a part of the Maven verify lifecycle. In the case of the above example (that uses Lombok), the code builds without any issues.\nIn contrast, let\u0026rsquo;s replace the same class with its delomboked version. After the annotations get replaced with their corresponding constructors, we see issues flagged by the static code analyzer as below.\nIn my experience, I have seen developers use these annotations to escape such violations, making it difficult to maintain the code.\nConfiguration with Code Coverage Tools Tools such as JaCoCo help create better quality software, as they point out areas of low test coverage in their reports. 
Using Lombok (which generates code behind the scenes) greatly affects its code coverage results. Additional configuration is required to exclude Lombok-generated code.\n@AllArgsConstructor May Introduce Errors When Refactoring Consider an example class:\n@AllArgsConstructor public class Customer { private String id; private String name; private Gender gender; private String dateOfBirth; private String age; private String socialSecurityNo; } Let\u0026rsquo;s create an object of the Customer class:\nCustomer c = new Customer( \u0026#34;C001\u0026#34;, \u0026#34;Bryan Rhodes\u0026#34;, Gender.MALE, \u0026#34;1986/02/02\u0026#34;, \u0026#34;36\u0026#34;, \u0026#34;07807789\u0026#34;); Here, we see that most of the attributes have String as their type. It is easy to mistakenly create an object whose parameters are out of order, like this:\nCustomer c = new Customer( \u0026#34;C001\u0026#34;, \u0026#34;Bryan Rhodes\u0026#34;, Gender.MALE, \u0026#34;36\u0026#34;, \u0026#34;1986/02/02\u0026#34;, \u0026#34;07807789\u0026#34;); If validations are not in place for the attributes, this object might propagate as is in the application. Using @Builder here might avoid such errors.\n@Builder Allows Creation of Invalid Objects Consider a model as below:\n@Builder public class Job { private String id; private JobType jobType; } public enum JobType { PLUMBER, BUILDER, CARPENTER } For this class, we could construct an object as:\nJob job = Job.builder() .id(\u0026#34;5678\u0026#34;) .build(); Although the code compiles, the object job here is in an invalid state because we do not know which JobType it belongs to. Therefore, along with using the @Builder annotation, it is also important to enforce required attributes to have a value. To do this we could consider using the @NonNull annotation. 
With this annotation in place, building an object without a value for the required attribute now fails with an error.\nAn object created with a value for every @NonNull attribute would be considered valid.\nFor more advanced validation scenarios, you could consider using the Bean Validation API.\nApplication Logic Should Not Depend on the Generated Code Apart from following good programming practices, developers try to generalize features to ensure re-usability. However, these features should NEVER depend on the code that Lombok generates.\nFor instance, consider we create a base feature that uses reflection to create objects. The DTOs use @Builder, and we use the Lombok-generated code in it. If someone decides to create new DTOs that use @Builder(setterPrefix = \u0026quot;with\u0026quot;), this could be catastrophic in huge, complex applications, because the feature using reflection will be broken.\nSince Lombok provides a lot of flexibility in the way objects are created, we should be equally responsible and use it appropriately.\nUse @SneakyThrows Cautiously @SneakyThrows can be used to sneakily throw checked exceptions without declaring them in the \u0026ldquo;throws\u0026rdquo; clause. Lombok achieves this by faking out the compiler. It relies on the fact that the forced check applies only to the compiler and not the JVM. 
Therefore, Lombok modifies the generated class file during compilation to bypass this check, thus treating checked exceptions as unchecked.\nTo understand better, let\u0026rsquo;s first consider this example:\npublic interface DataProcessor { void dataProcess(); } Without @SneakyThrows an implementation of DataProcessor would look like this:\npublic class FileDataProcessor implements DataProcessor { @Override public void dataProcess() { try { processFile(); } catch (IOException e) { e.printStackTrace(); } } private String processFile() throws IOException { File file = new ClassPathResource(\u0026#34;sample.txt\u0026#34;).getFile(); log.info(\u0026#34;Check if file exists: {}\u0026#34;, file.exists()); return FileUtils.readFileToString(file, \u0026#34;UTF-8\u0026#34;); } } With @SneakyThrows the code gets simplified:\npublic class FileDataProcessor implements DataProcessor { @Override public void dataProcess() { processFile(); } @SneakyThrows private String processFile() { File file = new ClassPathResource(\u0026#34;sample.txt\u0026#34;).getFile(); log.info(\u0026#34;Check if file exists: {}\u0026#34;, file.exists()); return FileUtils.readFileToString(file, \u0026#34;UTF-8\u0026#34;); } } As we can see, @SneakyThrows avoids the hassle of catching or throwing checked exceptions. In other words, it treats a checked exception like an unchecked one.\nThis can be useful, especially when writing lambda functions, making the code concise and clean.\nHowever, use @SneakyThrows only when you don\u0026rsquo;t intend to process the code selectively depending on the kind of Exception it throws. For instance, if we try to catch IOException after applying @SneakyThrows, we get a compile-time error, because as far as the compiler is concerned no IOException can be thrown in the corresponding try block.\nThe invisible IOException still gets propagated at runtime, and could then be handled down the call stack.\nFurther, we could build logic to read the file content and parse it into dates, which might result in a DateTimeParseException.
Bubbling up such checked exceptions and using @SneakyThrows to escape their handling might make it difficult to trace errors. Therefore, be careful when using this annotation to escape multiple checked exceptions.\nUse Lombok with Caution The power of Lombok should not be underestimated or ignored. However, I would like to summarise the key points that will help you use Lombok in a better way.\n Avoid using Lombok with JPA entities. It will be much easier to generate the code yourself than to debug issues later. When designing POJOs, use only the Lombok annotations you require (use shorthand annotations sparingly). I would recommend using the Delombok feature to better understand the generated code. @Builder gives a lot of flexibility in object creation. This can cause objects to be in an invalid state. Therefore, make sure all the required attributes are assigned values during object creation. DO NOT write code that depends heavily on the background code Lombok generates. When using test coverage tools like Jacoco, Lombok can cause problems since Jacoco cannot distinguish between Lombok-generated code and normal source code, so configure it to exclude the generated code. Use @SneakyThrows for checked exceptions that you don\u0026rsquo;t intend to selectively catch. Otherwise, wrap them in runtime exceptions that you throw instead. Overusing @SneakyThrows in an application could make it difficult to trace and debug errors.  
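The compiler-only nature of checked-exception rules, which @SneakyThrows exploits, can be demonstrated in plain Java with the well-known type-erasure trick. This is a sketch of the mechanism only, not Lombok's actual bytecode-level implementation:

```java
import java.io.IOException;

// Checked-exception rules are enforced by the compiler, not the JVM.
// The generic throw below erases to a plain "throw t", so a checked
// IOException escapes a method that declares no throws clause.
public class SneakyDemo {

    @SuppressWarnings("unchecked")
    static <E extends Throwable> RuntimeException sneakyThrow(Throwable t) throws E {
        throw (E) t; // unchecked cast; erased at runtime
    }

    // no "throws IOException" here, yet an IOException escapes at runtime
    static void readFile() {
        throw SneakyDemo.<RuntimeException>sneakyThrow(new IOException("file missing"));
    }

    public static void main(String[] args) {
        try {
            readFile();
        } catch (Exception e) { // catching IOException here would not compile
            System.out.println(e.getClass().getSimpleName() + ": " + e.getMessage());
        }
    }
}
```

The caller must catch Exception (or Throwable), exactly mirroring the limitation described above: the "invisible" checked exception cannot be caught by its own type.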
","date":"April 10, 2022","image":"https://reflectoring.io/images/stock/0010-gray-lego-1200x628-branded_hu463ec94a0ba62d37586d8dede4e932b0_190778_650x0_resize_q90_box.jpg","permalink":"/when-to-use-lombok/","title":"When Should I Use Project Lombok?"},{"categories":["Spring Boot"],"contents":"In a distributed, fast-paced environment, dev teams often want to find out at what time they deployed the app, what version of the app they deployed, what Git commit was deployed, and more.\nSpring Boot Actuator helps us monitor and manage the application. It exposes various endpoints that provide app health, metrics, and other relevant information.\nIn this article, we will find out how to use Spring Boot Actuator and the Maven/Gradle build plugins to add such information to our projects.\n Example Code This article is accompanied by a working code example on GitHub. Enabling Spring Boot Actuator Spring Boot Actuator is a sub-project of Spring Boot. In this section, we will quickly see how to bootstrap the sample project and enable the /info endpoint. If you want to know more about Spring Boot Actuator, there is already a great tutorial.\nLet\u0026rsquo;s quickly create a Spring Boot project using the Spring Initializr. We will require the following dependencies:\n   Dependency Purpose     Spring Boot Actuator To expose the application management endpoints e.g. info.   Spring Web To enable the web app behavior.    If it helps, here is a link to the pre-populated projects in Maven and Gradle.\nAfter the project is built we will expose the built-in /info endpoint over HTTP. By default the /info web endpoint is disabled. We can simply enable it by adding the the management.endpoints.web.exposure.include property in the application.properties configuration:\nmanagement.endpoints.web.exposure.include=health,info Let\u0026rsquo;s run the Spring Boot application and open the URL http://localhost:8080/actuator/info in a browser. 
Nothing useful will be visible yet, as we still have to make a few config changes. In the next section, we will see how we can add informative build information in this response.\nSecuring Endpoints If you are exposing the endpoints publicly, please make sure to secure them as appropriate. We should not expose any sensitive information unknowingly.\n Spring Boot Application Info Spring collects useful application information from various InfoContributor beans defined in the application context. Below is a summary of the default InfoContributor beans:\n   ID Bean Name Usage     build BuildInfoContributor Exposes build information.   env EnvironmentInfoContributor Exposes any property from the Environment whose name starts with info.   git GitInfoContributor Exposes Git related information.   java JavaInfoContributor Exposes Java runtime information.    By default, the env and java contributors are disabled.\nFirst, we will enable the java contributor by adding the following key-value pair in application.properties:\nmanagement.info.java.enabled=true Let\u0026rsquo;s rerun the application. If we open the actuator /info endpoint again in a browser, we get an output like this:\n{ \u0026#34;java\u0026#34;: { \u0026#34;vendor\u0026#34;: \u0026#34;Eclipse Adoptium\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;11.0.14\u0026#34;, \u0026#34;runtime\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;OpenJDK Runtime Environment\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;11.0.14+9\u0026#34; }, \u0026#34;jvm\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;OpenJDK 64-Bit Server VM\u0026#34;, \u0026#34;vendor\u0026#34;: \u0026#34;Eclipse Adoptium\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;11.0.14+9\u0026#34; } } } You are likely to see different values based on the installed Java version.\nNow, it\u0026rsquo;s time to display environment properties. Spring picks up any property from the Environment whose name starts with info.
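Conceptually, the /info response is just a merge of the contributions from each enabled contributor. The following plain-Java sketch models that assembly; it is a simplified illustration of the idea behind Spring's InfoContributor beans, not the real Actuator API:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified model: each contributor adds its details under its own key,
// and the endpoint returns the merged map.
public class InfoEndpointModel {

    interface InfoContributor {
        void contribute(Map<String, Object> info);
    }

    // stands in for JavaInfoContributor
    static final InfoContributor javaContributor = info ->
            info.put("java", Map.of("version", System.getProperty("java.version")));

    // stands in for EnvironmentInfoContributor: only properties whose
    // names start with "info." are exposed, with the prefix stripped
    static InfoContributor envContributor(Map<String, String> properties) {
        return info -> {
            Map<String, Object> exposed = new LinkedHashMap<>();
            properties.forEach((key, value) -> {
                if (key.startsWith("info.")) {
                    exposed.put(key.substring("info.".length()), value);
                }
            });
            info.put("env", exposed);
        };
    }

    static Map<String, Object> collect(List<InfoContributor> contributors) {
        Map<String, Object> info = new LinkedHashMap<>();
        contributors.forEach(contributor -> contributor.contribute(info));
        return info;
    }

    public static void main(String[] args) {
        Map<String, Object> info = collect(List.of(
                javaContributor,
                envContributor(Map.of(
                        "info.app.website", "reflectoring.io",
                        "server.port", "8080")))); // not exposed: wrong prefix
        System.out.println(info);
    }
}
```

In this model, disabling a contributor simply means leaving it out of the list, which is effectively what the management.info.&lt;id&gt;.enabled properties toggle.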
To see this in action, let\u0026rsquo;s add the following properties in the application.properties file:\nmanagement.info.env.enabled=true info.app.website=reflectoring.io Upon restarting the app, we will start seeing the following information added to the actuator info endpoint:\n{ \u0026#34;app\u0026#34;: { \u0026#34;website\u0026#34;: \u0026#34;reflectoring.io\u0026#34; } } Feel free to add as many info variables as you want :)\nIn the following sections, we will see how to add Git- and build-specific information.\nAdding Build Info Adding useful build information helps to quickly identify the build artifact name, version, time created, etc. It could come in handy to check if the team deployed the relevant version of the app. Spring Boot provides easy ways to add this information using the Maven or Gradle build plugins.\nUsing the Maven Plugin The Spring Boot Maven Plugin comes bundled with plenty of useful features such as creating executable jar or war archives, running the application, etc. It also provides a way to add application build info.\nSpring Boot Actuator will show build details if a valid META-INF/build-info.properties file is present. The Spring Boot Maven plugin has a build-info goal to create this file.\nThis plugin is present by default in the pom.xml if you bootstrapped the project using Spring Initializr.
We just have to add the build-info goal for execution as shown below:\n\u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.6.4\u0026lt;/version\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;build-info\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;/plugin\u0026gt; If we run the command ./mvnw spring-boot:run (for Linux/macOS) or mvnw.bat spring-boot:run (for Windows) now, the required file would be created in target/classes/META-INF/build-info.properties.\nThe file content will be similar to this:\nbuild.artifact=spring-boot-build-info build.group=io.reflectoring build.name=spring-boot-build-info build.time=2022-03-06T05\\:53\\:45.236Z build.version=0.0.1-SNAPSHOT We can also add custom properties to this list using the additionalProperties attribute:\n\u0026lt;execution\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;build-info\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;additionalProperties\u0026gt; \u0026lt;custom.key1\u0026gt;value1\u0026lt;/custom.key1\u0026gt; \u0026lt;custom.key2\u0026gt;value2\u0026lt;/custom.key2\u0026gt; \u0026lt;/additionalProperties\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/execution\u0026gt; If we run the app now and open the http://localhost:8080/actuator/info endpoint in the browser, we will see a response similar to below:\n{ \u0026#34;build\u0026#34;: { \u0026#34;custom\u0026#34;: { \u0026#34;key2\u0026#34;: \u0026#34;value2\u0026#34;, \u0026#34;key1\u0026#34;: \u0026#34;value1\u0026#34; }, \u0026#34;version\u0026#34;: \u0026#34;0.0.1-SNAPSHOT\u0026#34;, \u0026#34;artifact\u0026#34;: \u0026#34;spring-boot-build-info\u0026#34;, \u0026#34;name\u0026#34;: 
\u0026#34;spring-boot-build-info\u0026#34;, \u0026#34;time\u0026#34;: \u0026#34;2022-03-06T06:34:30.306Z\u0026#34;, \u0026#34;group\u0026#34;: \u0026#34;io.reflectoring\u0026#34; } } If you want to exclude any of the properties, you can do so using the excludeInfoProperties configuration. Let\u0026rsquo;s see how to exclude the artifact property:\n\u0026lt;configuration\u0026gt; \u0026lt;excludeInfoProperties\u0026gt; \u0026lt;infoProperty\u0026gt;artifact\u0026lt;/infoProperty\u0026gt; \u0026lt;/excludeInfoProperties\u0026gt; \u0026lt;/configuration\u0026gt; Please refer to the official Spring Boot documentation to learn more.\nNow, it\u0026rsquo;s time to see how we can achieve the same output using the Spring Boot Gradle plugin.\nUsing the Gradle Plugin The easiest way to add the build info is using the plugin DSL. In the build.gradle file, we need to add the following block:\nspringBoot { buildInfo() } If we sync the Gradle project now, we can see a new task bootBuildInfo is available for use. Running the task will generate a similar build/resources/main/META-INF/build-info.properties file with build info derived from the project. Using the DSL, we can customize existing values or add new properties:\nspringBoot { buildInfo { properties { name = \u0026#39;Sample App\u0026#39; additional = [ \u0026#39;customKey\u0026#39;: \u0026#39;customValue\u0026#39; ] } } } Time to run the app using the ./gradlew bootRun (for macOS/Linux) or gradlew.bat bootRun (for Windows) command.
Once the app is running, we can open the http://localhost:8080/actuator/info endpoint in the browser and find a response like this:\n{ \u0026#34;build\u0026#34;: { \u0026#34;customKey\u0026#34;: \u0026#34;customValue\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;0.0.1-SNAPSHOT\u0026#34;, \u0026#34;artifact\u0026#34;: \u0026#34;spring-boot-build-info\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;Sample App\u0026#34;, \u0026#34;time\u0026#34;: \u0026#34;2022-03-06T09:11:53.380Z\u0026#34;, \u0026#34;group\u0026#34;: \u0026#34;io.reflectoring\u0026#34; } } We can exclude any default properties from the generated build information by setting its value to null. For example:\nproperties { group = null } To know more about the plugin, you can refer to the official Spring Boot documentation.\nAdding Git Info Git information comes in handy to quickly identify if the relevant code is present in production or if the distributed deployments are in sync with expectations. Spring Boot can easily include Git properties in the Actuator endpoint using the Maven and Gradle plugins.\nUsing such a plugin, we can generate a git.properties file. The presence of this file will auto-configure the GitProperties bean to be used by the GitInfoContributor bean to collate relevant information.\nBy default, the following information will be exposed:\n git.branch git.commit.id git.commit.time  The following management application properties control the Git-related information:\n   Application Property Purpose     management.info.git.enabled=false Disables the Git information entirely from the info endpoint   management.info.git.mode=full Displays all the properties from the git.properties file    Using the Maven Plugin The Maven Git Commit ID plugin is managed via the spring-boot-starter-parent pom.
To use it, we have to edit the pom.xml as below:\n\u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;pl.project13.maven\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;git-commit-id-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;/plugin\u0026gt; If we run the project and open the /actuator/info endpoint in the browser, it will return the Git-related information:\n{ \u0026#34;git\u0026#34;: { \u0026#34;branch\u0026#34;: \u0026#34;main\u0026#34;, \u0026#34;commit\u0026#34;: { \u0026#34;id\u0026#34;: \u0026#34;5404bdf\u0026#34;, \u0026#34;time\u0026#34;: \u0026#34;2022-03-06T10:34:16Z\u0026#34; } } } We can also inspect the generated file under target/classes/git.properties. Here is what it looks like for me:\n#Generated by Git-Commit-Id-Plugin git.branch=main git.build.host=mylaptop git.build.time=2022-03-06T23\\:22\\:16+0530 git.build.user.email=user@email.com git.build.user.name=user git.build.version=0.0.1-SNAPSHOT git.closest.tag.commit.count= git.closest.tag.name= git.commit.author.time=2022-03-06T22\\:46\\:56+0530 git.commit.committer.time=2022-03-06T22\\:46\\:56+0530 git.commit.id=e9fa20d4914367c1632e3a0eb8ca4d2f32b31a89 git.commit.id.abbrev=e9fa20d git.commit.id.describe=e9fa20d-dirty git.commit.id.describe-short=e9fa20d-dirty git.commit.message.full=Update config git.commit.message.short=Update config git.commit.time=2022-03-06T22\\:46\\:56+0530 git.commit.user.email=saikat@email.com git.commit.user.name=Saikat git.dirty=true git.local.branch.ahead=NO_REMOTE git.local.branch.behind=NO_REMOTE git.remote.origin.url=Unknown git.tags= git.total.commit.count=2 This plugin comes with a lot of configuration options.
For example, to include/exclude specific properties, we can add a configuration section like this:\n\u0026lt;configuration\u0026gt; \u0026lt;excludeProperties\u0026gt; \u0026lt;excludeProperty\u0026gt;time\u0026lt;/excludeProperty\u0026gt; \u0026lt;/excludeProperties\u0026gt; \u0026lt;includeOnlyProperties\u0026gt; \u0026lt;property\u0026gt;git.commit.id\u0026lt;/property\u0026gt; \u0026lt;/includeOnlyProperties\u0026gt; \u0026lt;/configuration\u0026gt; It will generate an output like below:\n{ \u0026#34;git\u0026#34;: { \u0026#34;commit\u0026#34;: { \u0026#34;id\u0026#34;: \u0026#34;5404bdf\u0026#34; } } } Let\u0026rsquo;s now find out what options are available for Gradle users.\nUsing the Gradle Plugin In the build.gradle we will add the gradle-git-properties plugin:\nplugins { id \u0026#39;com.gorylenko.gradle-git-properties\u0026#39; version \u0026#39;2.4.0\u0026#39; } Let\u0026rsquo;s build the Gradle project now. We can see that the build/resources/main/git.properties file is created. And, the actuator info endpoint will display the same data:\n{ \u0026#34;git\u0026#34;: { \u0026#34;branch\u0026#34;: \u0026#34;main\u0026#34;, \u0026#34;commit\u0026#34;: { \u0026#34;id\u0026#34;: \u0026#34;5404bdf\u0026#34;, \u0026#34;time\u0026#34;: \u0026#34;2022-03-06T10:34:16Z\u0026#34; } } } This plugin, too, provides multiple ways to configure the output using the gitProperties attribute. For example, let\u0026rsquo;s limit the keys to be present by adding the following:\ngitProperties { keys = [\u0026#39;git.commit.id\u0026#39;] } Rerunning the app will now show limited Git info:\n{ \u0026#34;git\u0026#34;: { \u0026#34;commit\u0026#34;: { \u0026#34;id\u0026#34;: \u0026#34;5404bdf\u0026#34; } } } Conclusion In this article, we learned how to use Spring Actuator to expose relevant information about our application. We found out how information about the build, the environment, Git, and the Java runtime can be added to the Actuator /info endpoint.
We also looked at how all this information can be configured and controlled by the Maven/Gradle build plugins.\nYou can play around with a complete application illustrating these ideas using the code on GitHub.\n","date":"March 28, 2022","image":"https://reflectoring.io/images/stock/0121-info-1200x628-branded_hu63a91391dfac89694732105449fbe4e1_240885_650x0_resize_q90_box.jpg","permalink":"/spring-boot-info-endpoint/","title":"Exposing a Helpful Info Endpoint with Spring Boot Actuator"},{"categories":["Node"],"contents":"Middleware functions are an integral part of an application built with the Express framework (henceforth referred to as an Express application). They access the HTTP request and response objects and can either terminate the HTTP request or forward it for further processing to another middleware function.\nMiddleware functions are attached to one or more route handlers in an Express application and execute in sequence from the time an HTTP request is received by the application till an HTTP response is sent back to the caller.\nThis capability of executing the Express middleware functions in a chain allows us to create smaller, potentially reusable components based on the single responsibility principle (SRP).\nIn this article, we will understand the following concepts about Express middleware:\n Different types of middleware functions in Express. Create middleware functions using both JavaScript and TypeScript and attach them to one or more Express routes. Use the middleware functions provided by Express and many third-party libraries in our Express applications. Use middleware functions as error handlers.   Example Code This article is accompanied by a working code example on GitHub. Prerequisites A basic understanding of Node.js and components of the Express framework is advisable.\nPlease refer to our earlier article for an introduction to Express.\nWhat is Express Middleware?
Middleware in Express are functions that come into play after the server receives the request and before the response is sent to the client. They are arranged in a chain and are called in sequence.\nWe can use middleware functions for different types of processing tasks required for fulfilling the request like database querying, making API calls, preparing the response, etc., and finally calling the next middleware function in the chain.\nMiddleware functions take three arguments: the request object (request), the response object (response), and optionally the next() middleware function:\nconst express = require(\u0026#39;express\u0026#39;); const app = express(); function middlewareFunction(request, response, next){ ... next() } app.use(middlewareFunction) An exception to this rule is error-handling middleware, which takes four parameters, with the error object as the first. We call app.use() to add a middleware function to our Express application.\nUnder the hood, when we call app.use(), the Express framework adds our middleware function to its internal middleware stack.
Express executes middleware in the order they are added, so if we make the calls in this order:\napp.use(function1) app.use(function2) Express will first execute function1 and then function2.\nMiddleware functions in Express are of the following types:\n Application-level middleware which runs for all routes in an app object Router level middleware which runs for all routes in a router object Built-in middleware provided by Express like express.static, express.json, express.urlencoded Error handling middleware for handling errors Third-party middleware maintained by the community  We will see examples of each of these types in the subsequent sections.\nBasic Setup for Running the Examples We need to first set up a Node.js project for running our examples of using middleware functions in Express.\nLet us create a folder and initialize a Node.js project under it by running the npm init command:\nmkdir storefront cd storefront npm init -y Running these commands will create a Node.js project containing a package.json file.\nWe will next install the Express framework using the npm install command as shown below:\nnpm install express --save When we run this command, it will install the Express framework and also add it as a dependency in our package.json file.\nWe will now create a file named index.js and open the project folder in our favorite code editor. We are using Visual Studio Code as our source-code editor.\nLet us now add the following lines of code to index.js for running a simple HTTP server:\nconst express = require(\u0026#39;express\u0026#39;); const app = express(); // Route for handling get request for path / app.get(\u0026#39;/\u0026#39;, (request, response) =\u0026gt; { response.send(\u0026#39;response for GET request\u0026#39;); }) // Route for handling post request for path /products app.post(\u0026#39;/products\u0026#39;, (request, response) =\u0026gt; { ... response.json(...) 
}) // start the server app.listen(3000, () =\u0026gt; console.log(\u0026#39;Server listening on port 3000.\u0026#39;)) In this code snippet, we are importing the express module and then calling the listen() function on the app handle to start our server.\nWe have also defined two routes which will accept the requests at URLs: / and /products. For an elaborate explanation of routes and handler functions, please refer to our earlier article for an introduction to Express.\nWe can run our application with the node command:\nnode index.js This will start a server that will listen for requests on port 3000. We will now add middleware functions to this application in the following sections.\nUsing Express' Built-in Middleware Built-in middleware functions are bundled with Express so we do not need to install any additional modules for using them.\nExpress provides the following built-in middleware functions:\n   Function Description     express.static serves static assets   express.json parses JSON payloads   express.urlencoded parses URL-encoded payloads   express.raw parses payloads into a Buffer and makes them available under req.body   express.text parses payloads into a string    Let us see some examples of their use.\nUsing express.static for Serving Static Assets We use the express.static built-in middleware function to serve static files such as images, CSS files, and JavaScript files. Here is an example of using express.static to serve our HTML and image files:\nconst express = require(\u0026#39;express\u0026#39;); const app = express(); app.use(express.static(\u0026#39;images\u0026#39;)) app.use(express.static(\u0026#39;htmls\u0026#39;)) app.get(\u0026#39;/product\u0026#39;, (request, response)=\u0026gt;{ response.sendFile(\u0026#34;productsample.html\u0026#34;) }) Here we have defined two static paths named images and htmls to represent two folders of the same name in our root directory.
We have also defined multiple static asset directories by calling the express.static() middleware function multiple times.\nOur root directory structure looks like this:\n. ├── htmls │ └── productsample.html ├── images │ └── sample.jpg ├── index.js ├── node_modules Express looks for the files in the order in which we set the static directories with the express.static middleware function.\nIn our example, we have defined the images directory before htmls. So Express will look for the file: productsample.html in the images directory first. If the file is not found in the images directory, Express looks for the file in the htmls directory.\nNext, we have defined a route with the URL /product to serve the static HTML file productsample.html. The HTML file contains an image referred to only by the image name sample.jpg:\n\u0026lt;html\u0026gt; \u0026lt;body\u0026gt; \u0026lt;h2\u0026gt;My sample product page\u0026lt;/h2\u0026gt; \u0026lt;img src=\u0026#34;sample.jpg\u0026#34; alt=\u0026#34;sample\u0026#34;\u0026gt;\u0026lt;/img\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Express looks up the files relative to the static directory, so the name of the static directory is not part of the URL.\nUsing express.json for Parsing JSON Payloads We use the express.json built-in middleware function to parse JSON content received in incoming requests.\nLet us suppose the route with URL /products in our Express application accepts product data from the request object in JSON format. So we will use Express' built-in middleware express.json for parsing the incoming JSON payload and attach it to our app object as shown in this code snippet:\nconst express = require(\u0026#39;express\u0026#39;); const app = express(); // Attach the express.json middleware to route \u0026#34;/products\u0026#34; app.use(\u0026#39;/products\u0026#39;, express.json({ limit: 100 })) // handle post request for path /products app.post(\u0026#39;/products\u0026#39;, (request, response) =\u0026gt; { ...
... response.json(...) }) Here we are attaching the express.json middleware by calling the use() function on the app object. We have also configured a maximum size of 100 bytes for the JSON request.\nWe have used a slightly different signature of the use() function here than before. The use() function invoked on the app object here takes the path of the route (/products) to which the middleware function will get attached as its first parameter. As a result, this middleware function will be called only for this route.\nNow we can extract the fields from the JSON payload sent in the request body as shown in this route definition:\nconst express = require(\u0026#39;express\u0026#39;) const app = express() // Attach the express.json middleware to route \u0026#34;/products\u0026#34; app.use(\u0026#39;/products\u0026#39;, express.json({ limit: 100 })) // handle post request for path /products app.post(\u0026#39;/products\u0026#39;, (request, response) =\u0026gt; { const products = [] // sample JSON request  // {\u0026#34;name\u0026#34;:\u0026#34;sofa\u0026#34;, \u0026#34;brand\u0026#34;:\u0026#34;century\u0026#34;, \u0026#34;category\u0026#34;:\u0026#34;furniture\u0026#34;}  // JSON payload is parsed to extract  // the fields name, brand, and category  // Extract name of product  const name = request.body.name // Extract brand of product  const brand = request.body.brand // Extract category of product  const category = request.body.category console.log(name + \u0026#34; \u0026#34; + brand + \u0026#34; \u0026#34; + category) ... ... response.json(...)
}) Here we are extracting the contents of the JSON request by calling request.body.FIELD_NAME before using those fields for adding a new product.\nSimilarly, we can use Express' built-in middleware express.urlencoded() to process URL-encoded fields submitted through an HTML form:\napp.use(express.urlencoded({ extended: false })); Then we can use the same code for extracting the fields as we used before for the JSON payload.\nAdding a Middleware Function to a Route Let us now see how to create a middleware function of our own.\nAs an example, let us check for the presence of JSON content in the HTTP POST request body before allowing any further processing and send back an error response if the request body does not contain JSON content.\nOur middleware function for checking for the presence of JSON content looks like this:\nconst requireJsonContent = (request, response, next) =\u0026gt; { if (request.headers[\u0026#39;content-type\u0026#39;] !== \u0026#39;application/json\u0026#39;) { response.status(400).send(\u0026#39;Server requires application/json\u0026#39;) } else { next() } } Here we are checking the value of the content-type header in the request. If the value of the content-type header does not match application/json, we are sending back an error response with status 400 accompanied by an error message, thereby ending the request-response cycle.\nOtherwise, if the content-type header is application/json, the next() function is invoked to call the subsequent middleware present in the chain.\nNext, we will add the middleware function requireJsonContent to our desired route like this:\nconst express = require(\u0026#39;express\u0026#39;) const app = express() // handle post request for path /products app.post(\u0026#39;/products\u0026#39;, requireJsonContent, (request, response) =\u0026gt; { ... ... response.json(...)
}) We can also attach more than one middleware function to a route to apply multiple stages of processing.\nOur route with multiple middleware functions attached will look like this:\nconst express = require(\u0026#39;express\u0026#39;) const app = express() // handle post request for path /products app.post(\u0026#39;/products\u0026#39;, // first function in the chain will check for JSON content  requireJsonContent, // second function will check for valid product category  // in the request if the first function detects JSON  (request, response, next) =\u0026gt; { // Allow to add only products in the category \u0026#34;Electronics\u0026#34;  const category = request.body.category if(category != \u0026#34;Electronics\u0026#34;) { response.status(400).send(\u0026#39;Only products in the Electronics category are allowed\u0026#39;) } else { next() } ... ... // add the product and return a response in JSON  response.json( {productID: \u0026#34;12345\u0026#34;, result: \u0026#34;success\u0026#34;} ); Here we have two middleware functions attached to the route with route path /products.\nThe first middleware function requireJsonContent() will pass the control to the next function in the chain if the content-type header in the HTTP request contains application/json.\nThe second middleware function extracts the category field from the JSON request and sends back an error response if the value of the category field is not Electronics.\nOtherwise, it calls the next() function to process the request further, which, for example, adds the product to a database and sends back a response in JSON format to the caller.\nWe could also have attached our middleware functions by using the use() function of the app object as shown below:\nconst express = require(\u0026#39;express\u0026#39;) const app = express() // first function in the chain will check for JSON content app.use(\u0026#39;/products\u0026#39;, requireJsonContent) // second function will check for valid product category // in the request if the first
function detects JSON app.use(\u0026#39;/products\u0026#39;, (request, response, next) =\u0026gt; { // Allow to add only products in the category \u0026#34;Electronics\u0026#34;  const category = request.body.category if(category != \u0026#34;Electronics\u0026#34;) { response.status(400).send(\u0026#39;Only products in the Electronics category are allowed\u0026#39;) } else { next() } }) // handle post request for path /products app.post(\u0026#39;/products\u0026#39;, (request, response) =\u0026gt; { ... ... response.json( {productID: \u0026#34;12345\u0026#34;, result: \u0026#34;success\u0026#34;}) }) Understanding The next() Function The next() function is a function in the Express router that, when invoked, executes the next middleware in the middleware stack.\nIf the current middleware function does not end the request-response cycle, it must call next() to pass control to the next middleware function. Otherwise, the request will be left hanging.\nWhen we have multiple middleware functions, we need to ensure that each of our middleware functions either calls the next() function or sends back a response. Express will not throw an error if our middleware does not call the next() function and will simply hang.\nThe next() function could be named anything, but by convention it is always named “next”.\nAdding a Middleware Function to All Requests We might also want to perform some common processing for all the routes and specify it in one place instead of repeating it for all the route definitions. Examples of common processing are authentication, logging, common validations, etc.\nLet us suppose we want to print the HTTP method (get, post, etc.) and the URL of every request sent to the Express application.
Our middleware function for printing this information will look like this:\nconst express = require(\u0026#39;express\u0026#39;); const app = express(); const requestLogger = (request, response, next) =\u0026gt; { console.log(`${request.method} url:: ${request.url}`); next() } app.use(requestLogger) The middleware function requestLogger accesses the method and url fields from the request object to print the request URL along with the HTTP method to the console.\nTo apply the middleware function to all routes, we attach the function to the app object that represents the Express application.\nSince we have attached this function to the app object, it will get called for every request to the Express application. Now when we visit http://localhost:3000 or any other route in this application, we can see the HTTP method and URL of the incoming request object in the terminal window.\nAdding a Middleware Function for Error Handling Express comes with a default error handler that takes care of any errors that might be encountered in the application. The default error handler is added as a middleware function at the end of the middleware function stack.\nWe can change this default error handling behavior by adding a custom error handler, which is a middleware function that takes an error parameter in addition to the parameters: request, response, and the next() function.
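Internally, Express tells a regular middleware function apart from an error-handling one by its arity: a function declared with four parameters is treated as an error handler. The following plain-Node sketch illustrates this; the isErrorHandler helper is an illustrative stand-in for the check Express performs internally, not a public Express API:

```javascript
// Express inspects a middleware function's arity (its `length` property)
// to decide whether it is an error handler: four declared parameters
// mark it as error-handling middleware.
const requestLogger = (request, response, next) => next();
const errorResponder = (error, request, response, next) => next(error);

// Illustrative stand-in for the arity check Express performs internally
const isErrorHandler = (middleware) => middleware.length === 4;

console.log(isErrorHandler(requestLogger));  // false
console.log(isErrorHandler(errorResponder)); // true
```

This is why an error handler must keep all four parameters in its signature: dropping an unused next parameter would change the function's arity, and Express would no longer treat it as an error handler.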
The error handling middleware functions are attached after the route definitions.\nThe basic signature of an error-handling middleware function in Express looks like this:\nfunction customErrorHandler(error, request, response, next) { // Error handling middleware functionality  } When we want to call an error-handling middleware, we pass on the error object by calling the next() function with the error argument like this:\nconst errorLogger = (error, request, response, next) =\u0026gt; { console.log( `error ${error.message}`) next(error) // calling next middleware } Let us define two error-handling middleware functions and a catch-all handler for invalid paths and add them to our routes. We have also added a new route that will throw an error as shown below:\n// Error handling Middleware functions const errorLogger = (error, request, response, next) =\u0026gt; { console.log( `error ${error.message}`) next(error) // calling next middleware } const errorResponder = (error, request, response, next) =\u0026gt; { response.header(\u0026#34;Content-Type\u0026#34;, \u0026#39;application/json\u0026#39;) const status = error.statusCode || 500 response.status(status).send(error.message) } const invalidPathHandler = (request, response, next) =\u0026gt; { response.status(404) response.send(\u0026#39;invalid path\u0026#39;) } app.get(\u0026#39;/product\u0026#39;, (request, response)=\u0026gt;{ response.sendFile(\u0026#34;productsample.html\u0026#34;) }) // handle get request for path / app.get(\u0026#39;/\u0026#39;, (request, response) =\u0026gt; { response.send(\u0026#39;response for GET request\u0026#39;); }) app.post(\u0026#39;/products\u0026#39;, requireJsonContent, (request, response) =\u0026gt; { ...
}) app.get(\u0026#39;/productswitherror\u0026#39;, (request, response) =\u0026gt; { let error = new Error(`processing error in request at ${request.url}`) error.statusCode = 400 throw error }) app.use(errorLogger) app.use(errorResponder) app.use(invalidPathHandler) app.listen(PORT, () =\u0026gt; { console.log(`Server listening at http://localhost:${PORT}`) }) These middleware functions perform different tasks: errorLogger logs the error message, errorResponder sends the error response to the client, and invalidPathHandler responds with a message for invalid path when a non-existing route is requested.\nNext, we attached these three middleware functions to the app object by calling the use() method after the route definitions.\nTo test how our application handles errors with the help of these error handling functions, let us invoke the route with URL: localhost:3000/productswitherror.\nNow instead of the default error handler, the first two error handlers get triggered. The first one logs the error message to the console and the second one sends the error message in the response.\nWhen we request a non-existent route, the third handler is invoked giving us an error message: invalid path.\nUsing Third-Party Middleware We can also use third-party middleware to add functionality built by the community to our Express applications. These are usually available as npm modules which we install by running the npm install command in our terminal window.
The following example illustrates installing and loading a third-party middleware named Morgan which is an HTTP request logging middleware for Node.js.\nnpm install morgan After installing the module containing the third-party middleware, we need to load the middleware function in our Express application as shown below:\nconst express = require(\u0026#39;express\u0026#39;) const morgan = require(\u0026#39;morgan\u0026#39;) const app = express() app.use(morgan(\u0026#39;tiny\u0026#39;)) Here we are loading the middleware function morgan by calling require() and then attaching the function to our routes with the use() method of the app instance.\nDeveloping Express Middleware with TypeScript TypeScript is an open-source language developed by Microsoft. It is a superset of JavaScript with additional capabilities, the most notable being static type definitions, making it an excellent tool for a better and safer development experience.\nLet us first add support for TypeScript to our Node.js project and then see a snippet of the middleware functions written using the TypeScript language.\nInstalling TypeScript and other Configurations For adding TypeScript, we need to perform the following steps:\n Install TypeScript and ts-node with npm:  npm i -D typescript ts-node Create a JSON file named tsconfig.json with the below contents in our project’s root folder to specify different options for compiling the TypeScript code:  { \u0026#34;compilerOptions\u0026#34;: { \u0026#34;module\u0026#34;: \u0026#34;commonjs\u0026#34;, \u0026#34;target\u0026#34;: \u0026#34;es6\u0026#34;, \u0026#34;rootDir\u0026#34;: \u0026#34;./\u0026#34;, \u0026#34;esModuleInterop\u0026#34;: true } } Install the type definitions of the Node APIs and Express to be fetched from the @types namespace by installing the @types/node and @types/express packages as a development dependency:  npm i -D @types/node @types/express Writing Express Middleware Functions in TypeScript The Express application is
written in the TypeScript language in a file named app.ts. Here is a snippet of the code:\nimport express, { Request, Response, NextFunction } from \u0026#39;express\u0026#39; import morgan from \u0026#39;morgan\u0026#39; const app = express() const port: number = 3000 // Define the types to be used in the application interface Product { name: string price: number brand: string category?: string } interface ProductCreationResponse { productID: string result: string } // Error object used in error handling middleware function class AppError extends Error { statusCode: number; constructor(statusCode: number, message: string) { super(message); Object.setPrototypeOf(this, new.target.prototype); this.name = Error.name; this.statusCode = statusCode; Error.captureStackTrace(this); } } const requestLogger = (request: Request, response: Response, next: NextFunction) =\u0026gt; { console.log(`${request.method} url:: ${request.url}`); next() } app.use(express.static(\u0026#39;images\u0026#39;)) app.use(express.static(\u0026#39;htmls\u0026#39;)) app.use(requestLogger) app.use(morgan(\u0026#39;tiny\u0026#39;)) app.use(\u0026#39;/products\u0026#39;, express.json({ limit: 100 })) // Error handling Middleware functions const errorLogger = ( error: Error, request: Request, response: Response, next: NextFunction) =\u0026gt; { console.log( `error ${error.message}`) next(error) // calling next middleware  } const errorResponder = ( error: AppError, request: Request, response: Response, next: NextFunction) =\u0026gt; { response.header(\u0026#34;Content-Type\u0026#34;, \u0026#39;application/json\u0026#39;) const status = error.statusCode || 500 response.status(status).send(error.message) } const invalidPathHandler = ( request: Request, response: Response, next: NextFunction) =\u0026gt; { response.status(404) response.send(\u0026#39;invalid path\u0026#39;) } app.get(\u0026#39;/product\u0026#39;, (request: Request, response: Response)=\u0026gt;{
response.sendFile(\u0026#34;productsample.html\u0026#34;) }) // handle get request for path / app.get(\u0026#39;/\u0026#39;, (request: Request, response: Response) =\u0026gt; { response.send(\u0026#39;response for GET request\u0026#39;); }) const requireJsonContent = (request: Request, response: Response, next: NextFunction) =\u0026gt; { if (request.headers[\u0026#39;content-type\u0026#39;] !== \u0026#39;application/json\u0026#39;) { response.status(400).send(\u0026#39;Server requires application/json\u0026#39;) } else { next() } } const addProducts = (request: Request, response: Response, next: NextFunction) =\u0026gt; { let products: Product[] = [] ... ... const productCreationResponse: ProductCreationResponse = {productID: \u0026#34;12345\u0026#34;, result: \u0026#34;success\u0026#34;} response.status(200).json(productCreationResponse) } app.post(\u0026#39;/products\u0026#39;, addProducts) app.get(\u0026#39;/productswitherror\u0026#39;, (request: Request, response: Response) =\u0026gt; { let error: AppError = new AppError(400, `processing error in request at ${request.url}`) throw error }) app.use(errorLogger) app.use(errorResponder) app.use(invalidPathHandler) app.listen(port, () =\u0026gt; { console.log(`Server listening at port ${port}.`) }) Here we have used the express module to create a server as we have seen before. With this configuration, the server will run on port 3000 and can be accessed with the URL: http://localhost:3000.\nWe have modified the import statement on the first line to import the TypeScript types that will be used for the request, response, and next parameters inside the Express middleware.\nNext, we have defined an interface named Product containing the attributes name, price, and brand, plus an optional category.
After that, we defined the handler function for adding products and associated it with the route path /products.\nRunning the Express Application Written in TypeScript We run the Express application written in TypeScript by using the below command:\nnpx ts-node app.ts Running this command will start the HTTP server. We have used npx here, which is a command-line tool that can execute a package from the npm registry without installing that package.\nConclusion Here is a list of the major points for a quick reference:\n  Express middleware refers to a set of functions that execute during the processing of HTTP requests received by an Express application.\n  Middleware functions access the HTTP request and response objects. They either terminate the HTTP request or forward it for further processing to another middleware function.\n  We can add middleware functions to all the routes by using app.use(\u0026lt;middleware function\u0026gt;).\n  We can add middleware functions to selected routes by using app.use(\u0026lt;route url\u0026gt;, \u0026lt;middleware function\u0026gt;).\n  Express comes with built-in middleware functions like:\n express.static for serving static resources like CSS, images, and HTML files. express.json for parsing JSON payloads received in the request body. express.urlencoded for parsing URL encoded payloads received in the request body.    Express middleware functions are also written and distributed as npm modules by the community. These can be integrated into our application as third-party middleware functions.\n  We perform error handling in Express applications by writing middleware functions that handle errors. These error handling functions take the error object as an additional first parameter, before the parameters: request, response, and the next function.\n  Express comes with a default error handler for handling error conditions.
This is a default middleware function added by Express at the end of the middleware stack.\n  We also used TypeScript to define a Node.js server application containing middleware functions.\n  You can refer to all the source code used in the article on GitHub.\n","date":"March 26, 2022","image":"https://reflectoring.io/images/stock/0118-keyboard-1200x628-branded_huf25a9b6a90140c9cfeb91e792ab94429_105919_650x0_resize_q90_box.jpg","permalink":"/express-middleware/","title":"Complete Guide to Express Middleware"},{"categories":["Spring"],"contents":"In this article, we will build a production-grade application with Spring Boot.\nAfter understanding the use case and requirements, we will implement the application layer by layer.\nLet us dive into the requirements for the application.\n Example Code This article is accompanied by a working code example on GitHub. Requirements We need to build an application for the local bookstore that will allow them to keep track of borrowed books.\nThe bookstore wants to have these functionalities in the application:\n The application should be accessible through a web browser. The user has to provide their name, last name, email, and password. Each book has information about an author, date of publication, and the number of copies (instances) in the bookstore. The user can see all available books. The user can borrow a book. Each user can borrow only three books at one point in time. The admin user can see all users that currently own a specific book. The admin user can add a new book into the bookstore. The admin user can delete a book from the bookstore. The admin user can update information about a book in the bookstore.  High-level Architecture Let\u0026rsquo;s take a look at a high-level architecture diagram to see how the application behaves:\nTyping www.bookstore.loc in the browser will guide the user to the homepage.
The homepage triggers the request to load all books available in the bookstore.\nInside the Spring Boot Application box, we can see the layers we will implement through this article.\nController classes accept the request from the browser and send it further down the layers.\nMethods inside the service layer contain all business logic. We will implement most requirements from the chapter above in this layer.\nThe service methods contact the repository layer to access the database.\nWe store data about books and users in the in-memory H2 database.\nSetting up the Project Spring provides the Spring Initializr project, which creates an application skeleton for us. The generated application eases the configuration phase and allows us to dive straight into the code.\nWe will look at two ways of creating a new Spring Boot project:\n Creating the project using Spring Initializr Creating the project using the IntelliJ IDE  Both ways use the Spring Initializr project underneath and you can choose whichever way works best for you.\nCreating the Project with Spring Initializr On the Spring Initializr page, we can create a new Spring Boot project:\nOn this page, we provide the metadata about the application. After defining all necessary information, we can go and add our dependencies.\nBy clicking on the button on the top right corner, we can see the dependency selection screen:\nWe will choose these three dependencies:\n Spring Web Spring Data JPA H2 Database  After selecting the desired dependencies, we can generate our project by clicking the \u0026ldquo;Generate\u0026rdquo; button on the lower-left corner of the screen. Clicking the button will download the zip file onto our machine. After unpacking the zip and importing the project into the IDE, we can start developing.\nCreating the Project with IntelliJ In the IntelliJ IDE, we can go to File -\u0026gt; New -\u0026gt; Project\u0026hellip;.
We will get the next screen:\nAfter defining the project on this screen, we can move forward to the next screen:\nWe choose the same dependencies as above and click the finish button to create the project and start developing.\nProject Dependencies Let\u0026rsquo;s learn a bit about the dependencies that we selected for our new project.\nThe Spring Web Dependency The Spring Web dependency gives us everything we need to build a Spring MVC application.\nSpring MVC is Spring\u0026rsquo;s implementation of the Model-View-Controller design pattern. A controller is the front part of the application that takes incoming requests and relays them to the right destination. The model is the object or collection that holds our data, and the view represents the pages that the browser renders.\nLet us look into the high-level architecture again and determine what Spring Web provides:\nThe Spring Web dependency provides core Spring features (Inversion of Control, Spring MVC, a server container for local running, etc.). With the Spring Web dependency, we can create the controller classes from the image above.\nThe controller does several things:\n Accepts incoming HTTP requests Validates and deserializes input Sends the data to the business logic layer Serializes output from the business logic layer Handles exceptions and returns a correct response to the user  If we run the application, we will see that we can access the application on http://localhost:8080. We won\u0026rsquo;t see much, but the application will be up and running.\nIf you want to dive deeper into the responsibilities of a web controller and how to write tests for them, have a look at the article on testing MVC web controllers.\nThe Spring Data JPA Dependency Building the data access layer can be cumbersome, and Spring Data JPA gives us everything we need to start communicating with the database.\nJPA (Java Persistence API) is a set of concepts that helps us write code against a database.
Since JPA is just a set of specifications, it requires an implementation like Hibernate ORM.\nHibernate is an ORM (Object/Relational Mapping) solution. Hibernate takes care of mapping between Java classes and tables in the database. It allows us to edit data in tables without writing any SQL code.\nLet us look into the application diagram to see which parts Spring Data JPA supports:\nWith Spring Data JPA, we can use the @Entity annotation to create database entities.\nThe database entity is the direct connection between the application and the table in the database. Using annotations like @ManyToMany, @Column, etc., we can define relationships between tables and constraints on columns.\nSpring Data JPA also provides the repository interfaces:\n CrudRepository PagingAndSortingRepository JpaRepository  Repositories are interfaces that hide the logic required for accessing the database. By extending the interface, we can run queries on the database without worrying about connection details.\nWe will talk more about this when we start building the repository layer.\nIf you want to dive deeper into Spring Data JPA repositories and how to write tests for them, have a look at the article on testing JPA queries.\nSpring Data JPA Alternatives Spring Data JPA is only one of many alternatives for building the data access layer.\nTo read more about alternatives to Spring Data JPA, like Spring Data JDBC or Spring Data Neo4J, please refer to the official page.\n The H2 Database Dependency The in-memory H2 database is excellent for fast iteration when we don\u0026rsquo;t care that the data is lost when we shut down the application.\nThis dependency offers us the ability to define the H2 database shown in the image above. We can define it as an in-memory or file-based database.\nIn the development environment, when we want to keep the data for easier access and testing different use cases, we can configure the file-based database.
The data will remain permanent at the desired location. We can access the data each time we start the application.\nThe H2 database We should use the H2 database only in the prototyping phase. For later development and production, we should use something more stable and production-ready (e.g. Oracle, PostgreSQL, MySQL, etc.)\n Project Files The initialization process creates several files and folders. One of those files is the pom.xml.\nThe pom.xml defines all dependencies we are using in our project. Each dependency has its own pom.xml. The inner pom.xml declares what it brings into the application.\nA dependency is a package that contains a piece of code that our project needs to run successfully.\nLet us look into the pom.xml that Spring Boot generated for us:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;project xmlns=\u0026#34;http://maven.apache.org/POM/4.0.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\u0026#34;\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;parent\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-parent\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.6.2\u0026lt;/version\u0026gt; \u0026lt;relativePath/\u0026gt; \u0026lt;!-- lookup parent from repository --\u0026gt; \u0026lt;/parent\u0026gt; \u0026lt;groupId\u0026gt;com.reflectoring\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;beginners-guide\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.0.1-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;name\u0026gt;beginners-guide\u0026lt;/name\u0026gt; \u0026lt;description\u0026gt;beginners-guide\u0026lt;/description\u0026gt; \u0026lt;properties\u0026gt;
\u0026lt;java.version\u0026gt;11\u0026lt;/java.version\u0026gt; \u0026lt;/properties\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-web\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-test\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-data-jpa\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.6.3\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.h2database\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;h2\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.1.210\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; \u0026lt;/project\u0026gt; We can see that most of our dependencies have the keyword starter. The starter keyword means that this dependency is built for Spring Boot and comes with pre-made configurations that we can use out of the box. Before the starter dependency, the user had to provide all dependencies manually. Also, we had to create our configuration for most things. 
The new approach helps us to start the development process much faster.\nIt is important to note that the configurations that come with starter dependencies are not invasive. We can create custom configurations only where we need them.\nIf you want to know more about Spring Boot starters, have a look at the quick guide to building a Spring Boot starter.\nBuilding the Database Entities A database entity represents the direct link with a database table. Entity classes represent columns, relationships between different tables, and constraints.\nWhile creating database entities, we have to think about requirements from the beginning of the article:\n The user has to provide their name, last name, email, and password. Each book has information about the author, date of publication, number of instances, and users that currently own the book. The user can see all books and borrow them.  After sketching which data tables we need, we will create those objects in the Java code.\nDefining the Database Entities Let\u0026rsquo;s take a look at the database diagram for our application:\nWe have three tables in the database:\n user book borrowed_books  The user table contains the columns id, name, lastname, email, and password. The id is our primary key, and the database will autogenerate it. We will see how to do it in the next chapter.\nThe book table has the columns id, title, author, publication, and numberOfInstances in the bookstore.\nThe borrowed_books table represents the many-to-many relationship between user and book. The many-to-many relationship means that one user can borrow several books and that one book can be borrowed by several users at the same time (given there are enough copies of the book).\nImplementing the Book Entity After we have defined the database, we can start implementing entity classes. 
The entity class is the direct connection to the table in the database:\n@Entity(name = \u0026#34;book\u0026#34;) public class Book { //The rest of the class omitted  } The @Entity annotation indicates that the annotated class is a JPA entity. The name attribute inside the annotation defines the table name. Setting the name is not mandatory, but if we do not set it, Spring will assume that the table name is the same as the class name.\nDefining a Primary Key When defining a class as an entity, we need to provide an id column using the @Id annotation. The id column is the primary key of that table.\n@Entity(name = \u0026#34;book\u0026#34;) public class Book { @Id @GeneratedValue(strategy = GenerationType.AUTO) private long id; //The rest of the class omitted  } We need to decide how the id column will be generated. Using the @GeneratedValue annotation we can define several different strategies:\n IDENTITY SEQUENCE TABLE AUTO (chooses automatically between the three options above)  We used the AUTO strategy, which lets Hibernate choose the strategy that the underlying database prefers. Most databases prefer to use the SEQUENCE strategy for their primary key definition.\nIDENTITY Generation Strategy The GenerationType.IDENTITY strategy allows the database to autoincrement the id value when we insert a new row. Let us look into the example of using the identity generation strategy:\n@Entity(name = \u0026#34;book\u0026#34;) public class Book { @Id @GeneratedValue( strategy = GenerationType.IDENTITY) private long id; // Rest of the code omitted } We define the identity generation type in the strategy attribute of the @GeneratedValue. While the identity strategy is highly efficient on the database side, it doesn\u0026rsquo;t perform well with the Hibernate ORM: Hibernate expects every managed entity to have its id set, so it must execute the insert statement immediately to obtain the id, which prevents it from batching inserts.\nThe identity strategy is excellent for fast iteration and the early stages.
When we move to the development environment, we should move to something more stable and with better performance.\nSEQUENCE Generation Strategy The GenerationType.SEQUENCE strategy uses a database sequence to determine which primary key to select next. Let us look at how to define the primary key with sequence generators:\n@Entity(name = \u0026#34;book\u0026#34;) public class Book { @Id @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = \u0026#34;book_generator\u0026#34;) @SequenceGenerator(name = \u0026#34;book_generator\u0026#34;, sequenceName = \u0026#34;book_seq\u0026#34;, initialValue = 10) private long id; } To define the sequence generator, we annotate the field with the @SequenceGenerator annotation. We declare the name for Hibernate, the sequence name, and the initial value. After defining the generator, we need to connect it to the @GeneratedValue annotation by setting its name in the generator attribute.\nTABLE Generation Strategy The GenerationType.TABLE strategy uses a separate table to keep track of which primary key can be next:\n@Entity(name = \u0026#34;book\u0026#34;) public class Book { @Id @GeneratedValue(strategy = GenerationType.TABLE, generator = \u0026#34;book_generator\u0026#34;) @TableGenerator(name = \u0026#34;book_generator\u0026#34;, table = \u0026#34;book_id_table\u0026#34;) private long id; } We need to define the @TableGenerator annotation with the name and table attributes. 
We provide the generator name to the generator attribute inside the @GeneratedValue annotation.\nDefining the Columns We define each table column with the @Column annotation and the name of the column inside the name attribute:\n@Entity(name = \u0026#34;book\u0026#34;) public class Book { @Id @GeneratedValue(strategy = GenerationType.AUTO) private long id; @Column(name = \u0026#34;title\u0026#34;) private String title; @Column(name = \u0026#34;author\u0026#34;) private String author; @Column(name = \u0026#34;publication\u0026#34;) private Date publication; @Column(name = \u0026#34;numberOfInstances\u0026#34;) private int numberOfInstances; //The rest of the class omitted  } If we do not provide the name attribute, Hibernate will assume that the column name is the same as the field name inside the Java class.\nIt is always better to be safe and set the name attribute so that things do not start crashing when someone accidentally changes a field name.\nDefining the Many-to-Many Relationship We define the many-to-many relationship between Book and User with the @ManyToMany annotation.\nEach relationship has two sides:\n the owner - the table that has the foreign key the target - the table to which the owner refers with the foreign key  The Owner of the Relationship One side of that relationship needs to be the owning side and define the join table. We decided that it will be the User side:\n@Entity(name = \u0026#34;_user\u0026#34;) public class User { // Rest of the code omitted  @ManyToMany @JoinTable( name = \u0026#34;borrowed_books\u0026#34;, joinColumns = @JoinColumn(name = \u0026#34;user_id\u0026#34;), inverseJoinColumns = @JoinColumn(name = \u0026#34;book_id\u0026#34;) ) private List\u0026lt;Book\u0026gt; borrowedBooks; // Rest of the code omitted } After setting the @ManyToMany annotation, we need to define the table that will connect the user and book tables. In the @JoinTable annotation, we declare the name, foreign key, and inverse foreign key.
We define the foreign and the inverse foreign key with the @JoinColumn annotation.\nThe Target Side @Entity(name = \u0026#34;book\u0026#34;) public class Book { // Rest of the code omitted  @ManyToMany(mappedBy = \u0026#34;borrowedBooks\u0026#34;) private List\u0026lt;User\u0026gt; users; // Rest of the code omitted } We defined the target side with the @ManyToMany annotation and its mappedBy attribute. We set the mappedBy attribute to the name of the field on the owning side.\nConfiguring the Database We said that the entity class represents the direct link to the table in the database. We have to define a configuration for database location, login information, etc. In Spring Boot applications, the application.properties file (or application.yml, if you prefer YAML over properties) is the place where we set that information.\nEven though the Spring Boot framework comes with the configurations for most things, we sometimes need to tap in and change them. The excellent thing about the provided configurations is that they are non-invasive, and we can change only those things that we need.\nIn this example, we will see how to configure the database. The H2 database can be an in-memory or a persistent file-based database. Let us look at how to set them up.\nDefining the Database URL Let us look at how we define the URL for the in-memory database:\nspring.datasource.url=jdbc:h2:mem:localdb spring.jpa.database-platform=org.hibernate.dialect.H2Dialect spring.datasource.driver-class-name=org.h2.Driver # Rest of the configuration is omitted The jdbc and h2 keywords state that we are accessing an H2 database over JDBC, and the mem keyword that it is an in-memory database. The localdb part is the database name and can be whatever we want.\nThe in-memory database is good for fast iterations and prototyping, but we need something more persistent when we go into full development.
Let us see how to define the file-based H2 database:\nspring.datasource.url=jdbc:h2:file:/Users/mateostjepanovic/Documents/git/code-examples/spring-boot/beginners-guide/src/main/resources/data/demo;AUTO_SERVER=true spring.jpa.database-platform=org.hibernate.dialect.H2Dialect spring.datasource.driver-class-name=org.h2.Driver # Rest of the configuration is omitted After defining the jdbc and the h2 keywords, we note that we want to use a file-based H2 database with the file keyword. The last part of the URL is the absolute path to the folder where we want to save our database.\nThe H2 database Please note that we should use the H2 database only for the development phase. When going to the production environment, move to something more persistent and production-ready like Oracle, PostgreSQL, etc.\n Defining the Database Login Credentials After defining the URL for the database, we need the login information for it:\nspring.datasource.username=username spring.datasource.password=password spring.h2.console.enabled=true # Rest of the configuration is omitted We define the username and password for the database.\nDon\u0026rsquo;t Put Passwords in Your Configuration Files! Please note that it\u0026rsquo;s fine to put the password in the configuration file only because it\u0026rsquo;s for an in-memory H2 database that is only used for local testing. In a production environment, the password and username should be externalized via environment variables or similar. Spring Boot provides first-class support for externalizing configuration.\n Defining Schema Creation We can set the configuration to autocreate the schema for us:\nspring.jpa.hibernate.ddl-auto=create spring.jpa.generate-ddl=true # Rest of the configuration is omitted By setting hibernate.ddl-auto to create, we tell Hibernate that we want to destroy and recreate the schema on each run. 
Only use this setting for testing!\nBuilding the Data Access Layer After creating entities, we can develop the data access layer. The data access layer allows us to use methods to manipulate the data in the database. We will build the data access layer using the repository pattern.\nThe repository pattern is a design pattern that uses interfaces as the connection to the database. The repository interface hides all implementation details of connecting to the database, maintaining the connection, transactions, etc.\nLet us look at how to create the repository for the entity and explain how we use it.\nCreating the Book Repository public interface BookRepository extends JpaRepository\u0026lt;Book, Long\u0026gt; { // Rest of the code is omitted } Extending JpaRepository turns our interface into a repository bean that is added to Spring\u0026rsquo;s ApplicationContext.\nSpring\u0026rsquo;s ApplicationContext contains all the objects that make up our application. Objects in the ApplicationContext are called \u0026ldquo;beans\u0026rdquo; in Spring lingo. If an object is in the ApplicationContext, it can be injected into other beans and thus used by them.\nDependency injection is the pattern where objects do not construct the dependencies they need but let the container (in this case, Spring) do it.\nWhen a class is a Spring bean, we are sure that we will get an instance wherever it is asked for, using the @Autowired annotation or one of the other ways of injection explained in the next chapter.\nOur repository can extend several different interfaces:\n CrudRepository PagingAndSortingRepository JpaRepository  The CrudRepository contains CRUD (Create, Read, Update, Delete) methods. It is the most basic one, and we should use it when we do not need anything besides those four methods.\nThe PagingAndSortingRepository extends the CrudRepository and adds some more functionality. 
Besides CRUD methods, we can fetch results in pages from the database and sort them with simple interface methods.\nThe JpaRepository extends the PagingAndSortingRepository. In addition to the methods from the PagingAndSortingRepository and the CrudRepository, we can flush and delete database records in batches.\nBesides queries from the repository interface, we can create custom queries. We can do it in several ways:\n Creating a native SQL query Creating a JPQL query Creating a named method query  Native SQL Queries public interface BookRepository extends JpaRepository\u0026lt;Book, Long\u0026gt; { @Query(nativeQuery = true, value = \u0026#34;SELECT * FROM Book \u0026#34; + \u0026#34;book WHERE book.currentlyAvailableNumber \u0026gt; 5 \u0026#34;) List\u0026lt;Book\u0026gt; findWithMoreInstancesThenFive(); // Rest of the code is omitted } We create the query by annotating the method with the @Query annotation. If we set the nativeQuery attribute to true, we can leverage the syntax from the underlying database.\nUsing Native Queries Native queries are bound to a chosen database. If we use native queries, we lose one of the main advantages of using JPA, which abstracts away the database specifics. 
If you end up using a lot of native queries, you might be better off using a simpler abstraction like Spring Data JDBC.\n JPQL Queries If we don\u0026rsquo;t want to be bound to the syntax of the underlying database (maybe because we want to support multiple databases), we can use JPQL (Java Persistence Query Language) syntax:\npublic interface BookRepository extends JpaRepository\u0026lt;Book, Long\u0026gt; { @Query( value = \u0026#34;SELECT b FROM book b where b.numberOfInstances \u0026gt; 5\u0026#34;) List\u0026lt;Book\u0026gt; findWithMoreInstancesThenFiveJPQL(); // Rest of the code is omitted } We define the query with the @Query annotation, and the nativeQuery is, by default, set to false.\nThe JPQL syntax allows us to change the underlying database while the query stays the same.\nNamed Method Queries The Spring framework provides one more feature regarding queries. We can build queries by using a special naming convention for our repository methods:\npublic interface BookRepository extends JpaRepository\u0026lt;Book, Long\u0026gt; { List\u0026lt;Book\u0026gt; findAllByNumberOfInstancesGreaterThan(long limit); // Rest of the code is omitted } Using the attribute names and special keywords, we can create queries. Spring Data JPA will generate the proper queries according to our definition.\nBuilding the Business Layer The business layer is the core part of the application.\nThe business layer is where we should write the business logic. It should contain the code for the requirement that one user can borrow a maximum of three books. Each service class contains the business logic for part of the application.\nWe split our business layer in two ways. The first one is that each entity has its service class (e.g., BookService, UserService). After splitting the layer by entities, we split it further by use case. The end product is, for example, the GetBookService class. 
By reading the name of the class, we can conclude that the code for fetching books will be inside this class.\nDefining the Service Class One of the requirements is that the user can borrow three books maximum at one point in time. This check should be done in the business layer.\nLet us look at the implementation class:\n@Service public class UpdateBookService { private final GetUserService getUserService; private final BookRepository bookRepository; public UpdateBookService( GetUserService getUserService, BookRepository bookRepository) { this.getUserService = getUserService; this.bookRepository = bookRepository; } public void borrow(long bookId, long userId){ User user = getUserService.getById(userId); if(user.getBorrowedBooks().stream() .anyMatch(book -\u0026gt; book.getId()== bookId)){ throw new IllegalStateException(\u0026#34;User already borrowed \u0026#34; + \u0026#34;the book\u0026#34;); } if(user.getBorrowedBooks().size() \u0026gt;= 3){ throw new IllegalStateException(\u0026#34;User already has \u0026#34; + \u0026#34;maximum number of books borrowed!\u0026#34;); } Book book = bookRepository.findById(bookId) .orElseThrow(() -\u0026gt; new EntityNotFoundException()); if(book.getNumberOfInstances()-1 \u0026lt; 0){ throw new IllegalStateException(\u0026#34;There are no available\u0026#34; + \u0026#34; books!\u0026#34;); } book.getUsers().add(user); book.setNumberOfInstances(book.getNumberOfInstances()-1); bookRepository.save(book); } // Rest of the code is omitted } The @Service annotation transforms our class into a Spring bean controlled by the ApplicationContext. The primary task of the ApplicationContext is to control the lifecycle of each Spring bean and provide them when they are needed.\nAfter fetching the user that wants to borrow the book, we check if the user has already borrowed the book. If that check passes, we can check our requirement of a maximum of three books at one point in time. 
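The three borrow checks implemented above can be distilled into a small, framework-free helper that is easy to unit test in isolation. A hypothetical sketch (the BorrowRules class and its String return convention are illustration only, not part of the example project):

```java
import java.util.Set;

public class BorrowRules {

    static final int MAX_BORROWED_BOOKS = 3;

    // Returns an error message, or null if borrowing is allowed.
    // Mirrors the three checks in UpdateBookService.borrow().
    static String checkBorrow(Set<Long> borrowedBookIds, long bookId,
                              int availableInstances) {
        if (borrowedBookIds.contains(bookId)) {
            return "User already borrowed the book";
        }
        if (borrowedBookIds.size() >= MAX_BORROWED_BOOKS) {
            return "User already has maximum number of books borrowed!";
        }
        if (availableInstances < 1) {
            return "There are no available books!";
        }
        return null; // borrowing is allowed
    }

    public static void main(String[] args) {
        System.out.println(checkBorrow(Set.of(1L, 2L), 1L, 5));      // already borrowed
        System.out.println(checkBorrow(Set.of(1L, 2L, 3L), 4L, 5));  // limit reached
        System.out.println(checkBorrow(Set.of(1L), 4L, 0));          // none available
        System.out.println(checkBorrow(Set.of(1L), 4L, 2));          // allowed
    }
}
```

Keeping business rules in such pure functions makes them testable without starting a Spring context.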
Before allowing the user to borrow the book, we need to make sure that there is an instance of the book available.\nInjecting the Repository Class We use dependency injection to provide the required beans to the GetBookService class. For now, we only need the BookRepository. Dependency injection was explained in the previous chapter.\nWe can inject the BookRepository using three methods:\n field-based injection setter-based injection constructor-based injection  You can read a bit more about different types of dependency injection in this article.\nField-based Injection Let us look at how to use the field-based injection:\n@Service public class GetBookService { @Autowired private BookRepository bookRepository; // Rest of the code is omitted } Spring recognizes the @Autowired annotation and makes sure that the BookRepository is provided. Note that a field injected this way cannot be marked as final.\nSetter-based Injection @Service public class GetBookService { private BookRepository bookRepository; @Autowired private void setBookRepository(BookRepository bookRepository){ this.bookRepository = bookRepository; } // Rest of the code is omitted } We can put the @Autowired annotation on the setter method. 
The setter-based injection does not allow us to mark the variable as final.\nConstructor-based Injection @Service public class GetBookService { private final BookRepository bookRepository; @Autowired public GetBookService(BookRepository bookRepository) { this.bookRepository = bookRepository; } // Rest of the code is omitted } For constructor-based injection, we set the @Autowired annotation on the constructor.\nLearn about why you should usually choose constructor injection over the other types in this article.\nBuilding a Web Controller After creating entities, repositories, and services, let us create our first controller to solve the requirement that the application should be accessible through the browser.\nThe imported spring-boot-starter-web contains everything that we need for the controller. We have Spring MVC autoconfigured and the local server ready to use.\nWith Spring MVC, we can define a controller with the @Controller or @RestController annotation so it can handle incoming requests.\nCreating an Endpoint With @RestController We are going to create our first endpoint. We want to fetch all books that the bookstore owns. This is the endpoint that will be called from the homepage.\nLet us look into the codebase:\n@RestController @RequestMapping(\u0026#34;/books\u0026#34;) public class BooksRestController { private final GetBookService getBookService; @Autowired public BooksRestController(GetBookService getBookService) { this.getBookService = getBookService; } @GetMapping List\u0026lt;BookResponse\u0026gt; fetchAllBooks(){ return getBookService.getAllBooks(); } // Rest of the code } To make our class a controller bean, we need to annotate it with @RestController or with @Controller. The difference between these two is that @RestController automatically wraps the return object from the methods annotated with @GetMapping, @PostMapping, etc. 
into ResponseEntity\u0026lt;\u0026gt;.\nWe went with @RestController because it gives us cleaner and more readable code.\nThe @RequestMapping(\u0026quot;/books\u0026quot;) annotation maps our bean to this path. If we start the application locally, we can access the endpoint at http://localhost:8080/books.\nWe annotated the fetchAllBooks() method with the @GetMapping annotation to define that this is the GET method. Since we didn\u0026rsquo;t define an additional path on the @GetMapping annotation, we are using the path from the @RequestMapping definition.\nCreating an Endpoint With the @Controller Annotation Instead of @RestController, we can define the controller bean with the @Controller annotation:\n@Controller @RequestMapping(\u0026#34;/controllerBooks\u0026#34;) public class BooksController { private final GetBookService getBookService; @Autowired public BooksController(GetBookService getBookService) { this.getBookService = getBookService; } @GetMapping ResponseEntity\u0026lt;List\u0026lt;BookResponse\u0026gt;\u0026gt; fetchAllBooks(){ return ResponseEntity.ok(getBookService.getAllBooks()); } // Rest of the code omitted  } When using the @Controller annotation, we need to make sure that we wrap the result in the ResponseEntity\u0026lt;\u0026gt; class.\nThis approach gives us more freedom when returning objects than @RestController. Let us imagine we are rewriting some legacy backend code of a Spring Boot project. One of the requirements was that the current frontend would continue working throughout our refactoring. The previous code always returned status code 200, but the body would differ if an error occurred. 
In a scenario like this, we can use the @Controller annotation and control the body and status code that is returned to the user.\nCreating a POST Endpoint The POST method is used when we want to create a new resource in the database.\nWith this POST method, we will cover the requirement \u0026ldquo;The admin user can add a new book to the bookstore\u0026rdquo;.\nNow, let us take a look at how we can create new data:\n@RestController @RequestMapping(\u0026#34;/admin/books\u0026#34;) public class AdminBooksRestController { private final CreateBookService createBookService; @Autowired public AdminBooksRestController(CreateBookService createBookService) { this.createBookService = createBookService; } @PostMapping BookResponse create(@RequestBody BookRequest request){ return createBookService.createBook(request); } // Rest of the code omitted  } @PostMapping defines that this is the POST method and that we want to create a new resource through it.\nThe @RequestBody annotation defines that we are expecting the data inside the HTTP request\u0026rsquo;s body. That data should be deserializable into a BookRequest instance.\nCreating a PUT Endpoint The PUT method is used when we want to update the resource that is already in the database. 
With this endpoint we are allowing the admin user to update information about a book:\n@RestController @RequestMapping(\u0026#34;/admin/books\u0026#34;) public class AdminBooksRestController { private final CreateBookService createBookService; private final UpdateBookService updateBookService; private final DeleteBookService deleteBookService; @Autowired public AdminBooksRestController(CreateBookService createBookService, UpdateBookService updateBookService, DeleteBookService deleteBookService) { this.createBookService = createBookService; this.updateBookService = updateBookService; this.deleteBookService = deleteBookService; } @PutMapping(\u0026#34;/{id}\u0026#34;) BookResponse update(@PathVariable(\u0026#34;id\u0026#34;) long id, @RequestBody BookRequest request){ return updateBookService.updateBook(id,request); } // Rest of the code omitted } In the @PutMapping annotation, we define the path that continues on the one defined with @RequestMapping at the top of the class. The path looks like this: http://localhost:8080/admin/books/{id}.\nThe id variable is called the path variable and we can pass it into the method using the @PathVariable(\u0026quot;id\u0026quot;) annotation on the method argument. 
We need to be careful that the value inside @PathVariable matches the value inside @PutMapping.\nThe request body is mapped with the @RequestBody annotation, and we pass it as a JSON object inside the HTTP request.\nCreating a DELETE Endpoint With the DELETE endpoint, we are meeting the requirement that admin users can delete a book from the bookstore:\n@RestController @RequestMapping(\u0026#34;/admin/books\u0026#34;) public class AdminBooksRestController { private final CreateBookService createBookService; private final UpdateBookService updateBookService; private final DeleteBookService deleteBookService; @Autowired public AdminBooksRestController(CreateBookService createBookService, UpdateBookService updateBookService, DeleteBookService deleteBookService) { this.createBookService = createBookService; this.updateBookService = updateBookService; this.deleteBookService = deleteBookService; } @DeleteMapping(\u0026#34;/{id}\u0026#34;) void delete(@PathVariable(\u0026#34;id\u0026#34;) long id){ deleteBookService.delete(id); } // Rest of the code omitted  } With @DeleteMapping(\u0026quot;/{id}\u0026quot;) we define which resource we want to delete. We can see that the path is the same as in the PUT endpoint but the HTTP method is different. The paths have to be unique for the same HTTP method.\nCalling an Endpoint After building our endpoints, we want to test them and see what we get as a result. Since we don\u0026rsquo;t have any frontend, we can use command-line tools or a graphical tool like Postman. With the cURL command-line tool, we can do the following, for example:\ncurl --location --request GET 'http://localhost:8080/books'\nWe will get a result like the following:\n[ { \u0026#34;title\u0026#34;: \u0026#34;The Sandman Vol. 
1: Preludes \u0026amp; Nocturnes\u0026#34;, \u0026#34;author\u0026#34;: \u0026#34;Neil Gaiman\u0026#34;, \u0026#34;publishedOn\u0026#34;: \u0026#34;19/10/2010\u0026#34;, \u0026#34;currentlyAvailableNumber\u0026#34;: 4 }, { \u0026#34;title\u0026#34;: \u0026#34;The Lord Of The Rings Illustrated Edition\u0026#34;, \u0026#34;author\u0026#34;: \u0026#34;J.R.R. Tolkien\u0026#34;, \u0026#34;publishedOn\u0026#34;: \u0026#34;16/11/2021\u0026#34;, \u0026#34;currentlyAvailableNumber\u0026#34;: 1 } ] Conclusion After deciding which dependencies we needed and generating the project, we looked at how to create a functional application that can store and retrieve data from a database via a REST API.\nWe learned how to build the basic Spring Boot application, and went through several concepts:\n creating the entity creating the repository creating the service creating the controller  Spring Boot provides all the scaffolding for us, and we can focus on building the business logic of our application.\nYou can browse the source code of the Spring Boot application on GitHub.\n","date":"March 21, 2022","image":"https://reflectoring.io/images/stock/0065-java-1200x628-branded_hu49f406cdc895c98f15314e0c34cfd114_116403_650x0_resize_q90_box.jpg","permalink":"/getting-started-with-spring-boot/","title":"Getting Started with Spring Boot"},{"categories":["AWS"],"contents":"Amazon Kinesis is a family of managed services for collecting and processing streaming data in real-time. 
Stream processing platforms are an integral part of the Big Data ecosystem.\nExamples of streaming data are data collected from website click-streams, marketing, and financial information, social media feeds, IoT sensors, and monitoring and operational logs.\nIn this article, we will introduce Amazon Kinesis and understand different aspects of processing streaming data like ingestion, loading, delivery, and performing analytic operations using the different services of the Kinesis family: Kinesis Data Stream, Kinesis Data Firehose, Kinesis Data Analytics, and Kinesis Video Streams.\n Example Code This article is accompanied by a working code example on GitHub. What is Streaming Data? Streaming data is data that is generated continuously (in a stream) by multiple data sources which typically send the data records simultaneously. Due to its continuous nature, streaming data is also called unbounded data as opposed to bounded data handled by batch processing systems.\nStreaming data includes a wide variety of data such as:\n log files generated by customers using their mobile devices or web applications customer activity in e-commerce sites in-game player activity feeds from social media networks real-time market data from financial exchanges location feeds from geospatial services telemetry data from connected devices  Streaming data is processed sequentially and incrementally either by one record at a time or in batches of records aggregated over sliding time windows.\nWhat is Amazon Kinesis? Amazon Kinesis is a fully managed streaming data platform for processing streaming data. It provides four specialized services roughly classified based on the type and stages of processing of streaming data:\n  Kinesis Data Streams (KDS): The Kinesis Data Streams service is used to capture streaming data produced by various data sources in real-time. 
Producer applications write to the Kinesis Data Stream and consumer applications connected to the stream read the data for different types of processing.\n  Kinesis Data Firehose (KDF): With Kinesis Data Firehose, we do not need to write applications or manage resources. We configure data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the specified destination. We can also configure Kinesis Data Firehose to transform the data before delivering it.\n  Kinesis Data Analytics (KDA): With Kinesis Data Analytics we can process and analyze streaming data. It provides an efficient and scalable environment to run applications built using the Apache Flink framework which provides useful operators like map, filter, aggregate, window, etc. for querying streaming data.\n  Kinesis Video Streams (KVS): Kinesis Video Streams is a fully managed service that we can use to stream live media from video or audio capturing devices to the AWS Cloud, or build applications for real-time video processing or batch-oriented video analytics.\n  Let us understand these services in the following sections.\nKinesis Data Streams The Kinesis Data Streams service is used to collect and store streaming data as soon as it is produced (in real-time).\nThe streaming data is collected by producer applications from various data sources and continually pushed to a Kinesis Data Stream. Similarly, the consumer applications read the data from the Kinesis Data Stream and process the data in real-time as shown below:\nExamples of consumer applications are custom applications running on EC2 instances, EMR clusters, Lambda functions, or a Kinesis Data Firehose delivery stream.\nAs part of the processing, the consumer applications can store their results using another AWS service such as DynamoDB, Redshift, or S3. 
The consumer applications process the data in real time or near real time, which makes the Kinesis Data Streams service most useful for building time-sensitive applications like real-time dashboards and anomaly detection.\nAnother common use of Kinesis Data Streams is the real-time aggregation of data followed by loading the aggregated data into a data warehouse or map-reduce cluster.\nThe data is stored in a Kinesis Data Stream for 24 hours by default, but retention can be configured for up to 365 days.\nStreams, Shards, and Records When using Kinesis Data Streams, we first set up a data stream and then build producer applications that push data to the data stream and consumer applications that read and process the data from the data stream:\nThe Kinesis Data Stream is composed of multiple data carriers called shards, as we can see in this diagram. Each shard provides a fixed unit of capacity. The data capacity of a data stream is a function of the number of shards in the stream. The total capacity of the data stream is the sum of the capacities of all the shards it is composed of.\nThe data stored in the shard is called a record.\nEach shard contains a sequence of data records. 
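The capacity arithmetic above can be sketched in plain Java, assuming the standard provisioned-mode per-shard limits (1 MB/s and 1,000 records/s for writes, 2 MB/s for reads); the class and method names here are illustrative:

```java
public class StreamCapacity {

    // Per-shard limits for a provisioned-mode Kinesis Data Stream
    static final int WRITE_MB_PER_SEC_PER_SHARD = 1;
    static final int WRITE_RECORDS_PER_SEC_PER_SHARD = 1_000;
    static final int READ_MB_PER_SEC_PER_SHARD = 2;

    // Total stream capacity is the sum of the capacities of its shards
    static int writeThroughputMbPerSec(int shardCount) {
        return shardCount * WRITE_MB_PER_SEC_PER_SHARD;
    }

    static int readThroughputMbPerSec(int shardCount) {
        return shardCount * READ_MB_PER_SEC_PER_SHARD;
    }

    public static void main(String[] args) {
        int shards = 4;
        System.out.println("Write: " + writeThroughputMbPerSec(shards) + " MB/s");
        System.out.println("Read: " + readThroughputMbPerSec(shards) + " MB/s");
    }
}
```

So a stream with four shards can ingest up to 4 MB/s and serve reads up to 8 MB/s; to scale further, we add shards.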
Each data record has a sequence number that is assigned by the Kinesis Data Stream.\nCreating a Kinesis Data Stream Let us first create our data stream where we can send streaming data from various data sources.\nWe can create a Kinesis data stream either by using the AWS Kinesis Data Streams Management Console, from the AWS Command Line Interface (CLI) or using the CreateStream operation of Kinesis Data Streams API from the AWS SDK.\nWe can also use AWS CloudFormation or AWS CDK to create a data stream as part of an infrastructure-as-code project.\nOur code for creating a data stream with the Kinesis Data Streams API looks like this:\npublic class DataStreamResourceHelper { public static void createDataStream() { KinesisClient kinesisClient = getKinesisClient(); // Prepare Create stream request with stream name and stream mode  CreateStreamRequest createStreamRequest = CreateStreamRequest .builder() .streamName(Constants.MY_DATA_STREAM) .streamModeDetails( StreamModeDetails .builder() .streamMode(StreamMode.ON_DEMAND) .build()) .build(); // Create the data stream  CreateStreamResponse createStreamResponse = kinesisClient.createStream(createStreamRequest); ... ... } private static KinesisClient getKinesisClient() { AwsCredentialsProvider credentialsProvider = ProfileCredentialsProvider .create(Constants.AWS_PROFILE_NAME); KinesisClient kinesisClient = KinesisClient .builder() .credentialsProvider(credentialsProvider) .region(Region.US_EAST_1).build(); return kinesisClient; } } In this code snippet, we are creating a Kinesis Data Stream with ON_DEMAND capacity mode. 
The ON_DEMAND capacity mode is used for unpredictable workloads; it scales the capacity of the data stream automatically in response to varying data traffic.\nWith the Kinesis Data Stream created, we will look at how to add data to this stream in the next sections.\nKinesis Data Stream Records Before adding data, it is also important to understand the structure of data that is added to a Kinesis Data Stream.\nData is written to a Kinesis Data Stream as a record.\nA record in a Kinesis data stream consists of:\n a sequence number, a partition key, and a data blob.  The maximum size of a data blob (the data payload before Base64-encoding) is 1 megabyte (MB).\nA sequence number is a unique identifier for each record. The sequence number is assigned by the Kinesis Data Streams service when a producer application calls the PutRecord() or PutRecords() operation to add data to a Kinesis Data Stream.\nThe partition key is used to segregate and route records to different shards of a stream. We need to specify the partition key in the producer application while adding data to a Kinesis data stream.\nFor example, if we have a data stream with two shards: shard1 and shard2, we can write our producer application to use two partition keys: key1 and key2 so that all records with key1 are added to shard1 and all records with key2 are added to shard2.\nData Ingestion - Writing Data to Kinesis Data Streams Applications that write data to Kinesis Data Streams are called \u0026ldquo;producers\u0026rdquo;. 
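Under the hood, Kinesis routes a record by taking the MD5 hash of its partition key as a 128-bit integer and matching it against each shard's hash key range. A minimal plain-Java sketch of that idea (the even split of the hash space across shards and the class and method names are assumptions for illustration):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PartitionKeyRouting {

    // 2^128: the size of the MD5 hash key space used by Kinesis
    static final BigInteger HASH_SPACE = BigInteger.ONE.shiftLeft(128);

    // Map a partition key to a shard index, assuming the hash key
    // space is split evenly across numShards shards
    static int shardFor(String partitionKey, int numShards) {
        try {
            byte[] md5 = MessageDigest.getInstance("MD5")
                .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
            // Interpret the 16 MD5 bytes as a non-negative 128-bit integer
            BigInteger hash = new BigInteger(1, md5);
            return hash.multiply(BigInteger.valueOf(numShards))
                       .divide(HASH_SPACE)
                       .intValue();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }

    public static void main(String[] args) {
        // Records with the same partition key always land on the same shard
        System.out.println("key1 -> shard " + shardFor("key1", 2));
        System.out.println("key2 -> shard " + shardFor("key2", 2));
    }
}
```

Because the mapping is deterministic, all records with the same partition key end up on the same shard, which is why choosing too few distinct keys can create "hot" shards.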
Producer applications can be custom-built in a supported programming language using AWS SDK or by using the Kinesis Producer Library (KPL).\nWe can also use Kinesis Agent which is a stand-alone application that we can run as an agent on Linux-based server environments such as web servers, log servers, and database servers.\nLet us create a producer application in Java that will use the AWS SDK\u0026rsquo;s PutRecord() operation for adding a single record and the PutRecords() operation for adding multiple records to the Kinesis Data Stream.\nWe have created this producer application as a Maven project and added a Maven dependency in our pom.xml as shown below:\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;software.amazon.awssdk\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;kinesis\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;software.amazon.awssdk\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;bom\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.17.116\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; In the pom.xml, we have added the kinesis library as a dependency after adding bom for AWS SDK.\nHere is the code for adding a single event to the Kinesis Data Stream that we created in the previous step:\npublic class EventSender { private static final Logger logger = Logger .getLogger(EventSender.class.getName()); public static void main(String[] args) { sendEvent(); } public static void sendEvent() { KinesisClient kinesisClient = getKinesisClient(); // Set the partition key  String partitionKey = \u0026#34;partitionKey1\u0026#34;; // Create the data to be sent 
to Kinesis Data Stream in bytes  SdkBytes data = SdkBytes.fromByteBuffer( ByteBuffer.wrap(\u0026#34;Test data\u0026#34;.getBytes())); // Create the request for putRecord method  PutRecordRequest putRecordRequest = PutRecordRequest .builder() .streamName(Constants.MY_DATA_STREAM) .partitionKey(partitionKey) .data(data) .build(); // Call the method to write the record to Kinesis Data Stream  PutRecordResponse putRecordsResult = kinesisClient.putRecord(putRecordRequest); logger.info(\u0026#34;Put Result\u0026#34; + putRecordsResult); kinesisClient.close(); } // Set up the Kinesis client by reading aws credentials  private static KinesisClient getKinesisClient() { AwsCredentialsProvider credentialsProvider = ProfileCredentialsProvider .create(Constants.AWS_PROFILE_NAME); KinesisClient kinesisClient = KinesisClient .builder() .credentialsProvider(credentialsProvider) .region(Region.US_EAST_1).build(); return kinesisClient; } } Here we are first creating the request object for the putRecord() method by specifying the name of the Kinesis Data Stream, partition key, and the data to be sent in bytes. 
Then we have invoked the putRecord() method on the kinesisClient to add a record to the stream.\nRunning this program gives the following output:\nINFO: Put ResultPutRecordResponse(ShardId=shardId-000000000001, SequenceNumber=49626569155656830268862440193769593466823195675894743058) We can see the shardId identifier of the shard where the record is added along with the sequence number of the record.\nLet us next add multiple events to the Kinesis data stream by using the putRecords() method as shown below:\npublic class EventSender { private static final Logger logger = Logger.getLogger(EventSender.class.getName()); public static void main(String[] args) { sendEvents(); } public static void sendEvents() { KinesisClient kinesisClient = getKinesisClient(); String partitionKey = \u0026#34;partitionKey1\u0026#34;; List\u0026lt;PutRecordsRequestEntry\u0026gt; putRecordsRequestEntryList = new ArrayList\u0026lt;\u0026gt;(); // Create collection of 5 PutRecordsRequestEntry objects  // for adding to the Kinesis Data Stream  for (int i = 0; i \u0026lt; 5; i++) { SdkBytes data = ... PutRecordsRequestEntry putRecordsRequestEntry = PutRecordsRequestEntry.builder() .data(data) .partitionKey(partitionKey) .build(); putRecordsRequestEntryList.add(putRecordsRequestEntry); } // Create the request for putRecords method  PutRecordsRequest putRecordsRequest = PutRecordsRequest .builder() .streamName(Constants.MY_DATA_STREAM) .records(putRecordsRequestEntryList) .build(); PutRecordsResponse putRecordsResult = kinesisClient .putRecords(putRecordsRequest); logger.info(\u0026#34;Put records Result\u0026#34; + putRecordsResult); kinesisClient.close(); } // Set up the Kinesis client by reading aws credentials  private static KinesisClient getKinesisClient() { ... ... } } Here we are using the following steps for adding a set of 5 records to the data stream:\n  Creating a collection of PutRecordsRequestEntry objects corresponding to each data record to be put in the data stream. 
In each PutRecordsRequestEntry object, we are specifying the partition key and the data payload in bytes.\n  Creating the PutRecordsRequest object which we will pass as the input parameter to the putRecords() method by specifying the name of the data stream and the collection of data records created in the previous step.\n  Invoking the putRecords() method to add the collection of 5 records to the data stream.\n  Running this program gives the following output:\nResultPutRecordsResponse(FailedRecordCount=0, Records=[ PutRecordsResultEntry(SequenceNumber=49626569155656830268862440193770802392642928158972051474, ShardId=shardId-000000000001), PutRecordsResultEntry(SequenceNumber=49626569155656830268862440193772011318462542788146757650, ShardId=shardId-000000000001), PutRecordsResultEntry(SequenceNumber=49626569155656830268862440193773220244282157417321463826, ShardId=shardId-000000000001), PutRecordsResultEntry(SequenceNumber=49626569155656830268862440193774429170101772046496170002, ShardId=shardId-000000000001), PutRecordsResultEntry(SequenceNumber=49626569155656830268862440193775638095921386675670876178, ShardId=shardId-000000000001)]) In this output, we can see the same shardId for all 5 records, which means they have all been put into the same shard: shardId-000000000001. This is because we have used the same partition key, partitionKey1, for all our records.\nAs mentioned before, apart from the AWS SDK, we can use the Kinesis Producer Library (KPL) or the Kinesis Agent for adding data to a Kinesis Data Stream:\n  Kinesis Producer Library (KPL): The KPL is a library written in C++ for adding data to a Kinesis data stream. It runs as a child process of the main user process, so if the child process stops due to an error while connecting or writing to a Kinesis Data Stream, the main process continues to run.
Please refer to the documentation for guidance on developing producer applications using the Kinesis Producer Library.\n  Amazon Kinesis Agent: Kinesis Agent is a stand-alone application that we can run as an agent on Linux-based server environments such as web servers, log servers, and database servers. The agent continuously monitors a set of files and collects and sends new data to Kinesis Data Streams. Please refer to the official documentation for guidance on configuring Kinesis Agent in a Linux-based server environment.\n  Data Consumption - Reading Data from Kinesis Data Streams With the data ingested in our data stream, let us get into creating a consumer application that can process the data from this data stream.\nEarlier we had created a Kinesis Data Stream and added streaming data to it using the putRecord() and putRecords() operation of the Kinesis Data Streams API. We can also use the Kinesis Data Streams API from the AWS SDK for reading the streaming data from this Kinesis Data Stream as shown below:\npublic class EventConsumer { public static void receiveEvents() { KinesisClient kinesisClient = getKinesisClient(); String shardId = \u0026#34;shardId-000000000001\u0026#34;; // Prepare the shard iterator request with the stream name  // and identifier of the shard to which the record was written  GetShardIteratorRequest getShardIteratorRequest = GetShardIteratorRequest .builder() .streamName(Constants.MY_DATA_STREAM) .shardId(shardId) .shardIteratorType(ShardIteratorType.TRIM_HORIZON.name()) .build(); GetShardIteratorResponse getShardIteratorResponse = kinesisClient .getShardIterator(getShardIteratorRequest); // Get the shard iterator from the Shard Iterator Response  String shardIterator = getShardIteratorResponse.shardIterator(); while (shardIterator != null) { // Prepare the get records request with the shardIterator  GetRecordsRequest getRecordsRequest = GetRecordsRequest .builder() .shardIterator(shardIterator) .limit(5) .build(); // Read the 
records from the shard  GetRecordsResponse getRecordsResponse = kinesisClient.getRecords(getRecordsRequest); List\u0026lt;Record\u0026gt; records = getRecordsResponse.records(); logger.info(\u0026#34;count \u0026#34; + records.size()); // log content of each record  records.forEach(record -\u0026gt; { byte[] dataInBytes = record.data().asByteArray(); logger.info(new String(dataInBytes)); }); shardIterator = getRecordsResponse.nextShardIterator(); } kinesisClient.close(); } // set up the Kinesis Client  private static KinesisClient getKinesisClient() { AwsCredentialsProvider credentialsProvider = ProfileCredentialsProvider .create(Constants.AWS_PROFILE_NAME); KinesisClient kinesisClient = KinesisClient .builder() .credentialsProvider(credentialsProvider) .region(Region.US_EAST_1).build(); return kinesisClient; } } Here we are invoking the getRecords() method on the Kinesis client to read the ingested records from the Kinesis Data Stream. We have provided a shard iterator using the ShardIterator parameter in the request.\nThe shard iterator specifies the position in the shard from which we want to start reading the data records sequentially. We will get an empty list if there are no records available in the portion of the shard that the iterator is pointing to, so we have used a while loop to make repeated calls until we reach the portion of the shard that contains records.\nConsumers of Kinesis Data Streams The Kinesis Data Streams API which we have used so far is a low-level way of reading streaming data. When using it to perform operations on a data stream, we have to take care of polling the stream, checkpointing processed records, running multiple instances, and so on.\nSo in most practical situations, we use one of the following methods for creating consumer applications that read data from the stream:\n  AWS Lambda: We can use an AWS Lambda function to process records in an Amazon Kinesis data stream.
AWS Lambda integrates natively with Amazon Kinesis Data Stream as a consumer to process data ingested through a data stream by taking care of the polling, checkpointing, and error handling functions. Please refer to the AWS Lambda documentation for the steps to configure a Lambda function as a consumer to a Kinesis Data Stream.\n  Kinesis Client Library (KCL): We can build a consumer application for Amazon Kinesis Data Streams using the Kinesis Client Library (KCL). The KCL is different from the Kinesis Data Streams API used earlier. It provides a layer of abstraction around the low-level tasks like connecting to the stream, reading the record from the stream, checkpointing processed records, and reacting to resharding. For information on using KCL, check the documentation for developing KCL consumers.\n  Kinesis Data Firehose: Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon S3, Amazon Redshift, Amazon OpenSearch Service, and Splunk. We can set up the Kinesis Data Stream as a source of streaming data to a Kinesis Firehose delivery stream for delivering after optionally transforming the data to a configured destination. We will explain this mechanism further in the section on Kinesis Data Firehose.\n  Kinesis Data Analytics: Kinesis Data Analytics is another fully managed service from the Kinesis family for processing and analyzing streaming data with helpful programming constructs like windowing, sorting, filtering, etc. We can set up the Kinesis Data Stream as a source of streaming data to a Kinesis Data Analytics application which we will explain in the section on Kinesis Data Analytics.\n  Throughput Limits - Shared vs. 
Enhanced Fan-Out Consumers It is important to understand the throughput limits of a Kinesis Data Stream for designing and operating a highly reliable data streaming system with predictable performance.\nAs explained before, a Kinesis Data Stream is composed of multiple data carriers called shards, each containing a sequence of data records. Each shard provides a fixed unit of capacity, thereby serving as the base throughput unit of a Kinesis data stream. The data capacity of a data stream is a function of the number of shards in the stream.\nA shard supports a write throughput of 1 MB/second and 1,000 records per second, and a read throughput of 2 MB/second.\nWhen multiple consumers read from a shard, this read throughput is shared between them. These consumers are called shared fan-out consumers.\nIf we want dedicated throughput for consumers, we can define them as enhanced fan-out consumers.\nEnhanced fan-out is an optional feature for Kinesis Data Streams consumers that provides each consumer with a dedicated throughput of 2 MB/second per shard. This helps to scale the number of consumers reading from a data stream in parallel while maintaining high performance.\nThese consumers do not have to share their throughput with other consumers receiving data from the stream. Kinesis also pushes data records from the stream to enhanced fan-out consumers instead of requiring them to poll.\nProducer and consumer applications receive throttling errors when writes and reads exceed the shard limits; these errors are typically handled by retrying the request.\nKinesis Data Firehose Kinesis Data Firehose is a fully managed service for delivering streaming data to a destination in near real-time.\nOne or more data producers send their streaming data into a kind of \u0026ldquo;pipe\u0026rdquo; called a delivery stream, which optionally applies some transformation to the data before delivering it to a destination.
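The per-shard limits described above translate into a simple sizing rule for a stream: take the larger of the shard counts implied by the write bandwidth and by the record rate. A back-of-the-envelope sketch (the class and the workload numbers are made up for illustration):

```java
public class ShardCapacityPlanner {

    // Per-shard write limits from the Kinesis documentation
    static final double WRITE_MB_PER_SEC = 1.0;
    static final int WRITE_RECORDS_PER_SEC = 1_000;

    // Minimum shards needed to absorb the expected write load:
    // whichever of the two limits is hit first dictates the count.
    static int requiredShards(double writeMbPerSec, int recordsPerSec) {
        int byThroughput = (int) Math.ceil(writeMbPerSec / WRITE_MB_PER_SEC);
        int byRecordCount = (int) Math.ceil(
                recordsPerSec / (double) WRITE_RECORDS_PER_SEC);
        return Math.max(byThroughput, byRecordCount);
    }

    public static void main(String[] args) {
        // e.g. 4.5 MB/s of writes at 3,500 records/s:
        // 5 shards by bandwidth vs. 4 by record rate -> 5 shards
        System.out.println(requiredShards(4.5, 3_500));
    }
}
```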
An example of a data producer for Kinesis Data Firehose is a web server that sends log data to a delivery stream.\nThe incoming streaming data is buffered in the delivery stream until it reaches a particular size or exceeds a certain time interval, and only then is it delivered to the destination. For this reason, Kinesis Data Firehose is not intended for real-time delivery. It groups incoming streaming data, optionally compressing and/or transforming it with AWS Lambda functions, and then puts the data into a sink which is usually an AWS service like S3, Redshift, or Elasticsearch.\nWe can configure the delivery stream to read streaming data from a Kinesis Data Stream and deliver it to a destination.\nWe also need to do very little programming when using Kinesis Data Firehose, unlike Kinesis Data Streams where we write custom applications for the producers and consumers of a data stream.\nCreating a Kinesis Firehose Delivery Stream Let us take a closer look at the Kinesis Data Firehose service by creating a Firehose delivery stream. We can create a Firehose delivery stream using the AWS management console, the AWS SDK, or infrastructure-as-code tools like AWS CloudFormation and AWS CDK.\nFor our example, let us use the AWS management console for creating the delivery stream as shown below:\nWe configure a delivery stream in Firehose with a source and a destination.\nThe source of a Kinesis Data Firehose delivery stream can be:\n A Kinesis Data Stream\n Direct PUT, which means an application producer sends data to the delivery stream using a direct PUT operation.  Here we have chosen the source as Direct PUT.\nSimilarly, the delivery stream can send data to the following destinations:\n Amazon Simple Storage Service (Amazon S3)\n Amazon Redshift\n Amazon OpenSearch Service\n Splunk, and\n Any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers like Datadog, Dynatrace, LogicMonitor, MongoDB, New Relic, and Sumo Logic.  
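The buffering behavior described above (deliver when either a size threshold or a time interval is reached, whichever comes first) can be modeled with a small sketch. The class, method names, and thresholds are illustrative, not part of any AWS API:

```java
import java.util.ArrayList;
import java.util.List;

public class BufferingSketch {

    // Illustrative model of Firehose buffering: records accumulate until
    // either the buffer size or the buffer interval threshold is reached.
    private final int sizeLimitBytes;
    private final long intervalMillis;
    private final List<byte[]> buffer = new ArrayList<>();
    private int bufferedBytes = 0;
    private long windowStart;

    BufferingSketch(int sizeLimitBytes, long intervalMillis, long now) {
        this.sizeLimitBytes = sizeLimitBytes;
        this.intervalMillis = intervalMillis;
        this.windowStart = now;
    }

    // Returns the flushed batch when a threshold is crossed, else null.
    List<byte[]> add(byte[] record, long now) {
        buffer.add(record);
        bufferedBytes += record.length;
        if (bufferedBytes >= sizeLimitBytes
                || now - windowStart >= intervalMillis) {
            List<byte[]> batch = new ArrayList<>(buffer);
            buffer.clear();
            bufferedBytes = 0;
            windowStart = now;
            return batch;
        }
        return null;
    }
}
```

This "size or time, whichever first" rule is why Firehose delivery is near real-time rather than real-time: a record can sit in the buffer for up to the configured interval before it is shipped.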
For our example, we have chosen S3 as our destination and configured an S3 bucket that will receive the streaming data delivered by our Firehose delivery stream.\nApart from this, we also need to assign an IAM service role to the Kinesis Firehose service with an access policy that allows it to write to the S3 bucket.\nThe delivery stream which we created using this configuration looks like this:\nIn this screenshot, we can observe a few more properties of the delivery stream like Data transformation and Dynamic partitioning, which we will cover in the subsequent sections. We can also see the status of the delivery stream as Active, which means it can receive streaming data. The initial status of the delivery stream is CREATING.\nWe are now ready to send streaming data to our Firehose delivery stream, which will deliver this data to the configured destination: the S3 bucket.\nSending Data to a Kinesis Firehose Delivery Stream We can send data to a Kinesis Data Firehose delivery stream from different types of sources:\n  Kinesis Data Stream: We can configure Kinesis Data Streams to send data records to a Kinesis Data Firehose delivery stream by setting the Kinesis Data Stream as the Source when we are creating the delivery stream.\n  Kinesis Firehose Agent: The Kinesis Firehose Agent is a standalone Java application that collects log data from a server and sends it to Kinesis Data Firehose. We can install this agent on Linux-based servers and configure it by specifying the files to monitor along with the delivery stream to which the log data is to be sent. More details about configuring a Kinesis Firehose Agent can be found in the official documentation.\n  Kinesis Data Firehose API: The Kinesis Data Firehose API from the AWS SDK offers two operations for sending data to the Firehose delivery stream: PutRecord() and PutRecordBatch().
The PutRecord() operation sends one data record while the PutRecordBatch() operation can send multiple data records to the delivery stream in a single invocation. We can use these operations only if the delivery stream is created with the DIRECT PUT option as the Source.\n  Amazon CloudWatch Logs: CloudWatch Logs are used to centrally store and monitor logs from all our systems, applications, and dependent AWS services. We can create a CloudWatch Logs subscription that will send log events to Kinesis Data Firehose. Please refer to the documentation for steps to configure Subscription Filters with Amazon Kinesis Firehose.\n  CloudWatch Events: We can configure Amazon CloudWatch to send events to a Kinesis Data Firehose delivery stream by creating a CloudWatch Events rule with the Firehose delivery stream as a target.\n  AWS IoT as the data source: We can configure AWS IoT to send data to a Kinesis Data Firehose delivery stream by adding a rule action for an AWS IoT rule.\n  For our example, let us use the Kinesis Data Firehose API to send a data record to the Firehose delivery stream. 
A very simplified code snippet for sending a single record to a Kinesis Firehose delivery stream looks like this:\npublic class FirehoseEventSender { private final static Logger logger = Logger.getLogger(FirehoseEventSender.class.getName()); public static void main(String[] args) { new FirehoseEventSender().sendEvent(); } public void sendEvent() { String deliveryStreamName = \u0026#34;PUT-S3-5ZGgA\u0026#34;; String data = \u0026#34;Test data\u0026#34; + \u0026#34;\\n\u0026#34;; // Create a record for sending to Firehose Delivery Stream  Record record = Record .builder() .data(SdkBytes .fromByteArray(data.getBytes())) .build(); // Prepare the request for putRecord operation  PutRecordRequest putRecordRequest = PutRecordRequest .builder() .deliveryStreamName(deliveryStreamName) .record(record) .build(); FirehoseClient firehoseClient = getFirehoseClient(); // Put record into the DeliveryStream  PutRecordResponse putRecordResponse = firehoseClient.putRecord(putRecordRequest); logger.info(\u0026#34;record ID:: \u0026#34; + putRecordResponse.recordId()); firehoseClient.close(); } // Create the FirehoseClient with the AWS credentials  private static FirehoseClient getFirehoseClient() { AwsCredentialsProvider credentialsProvider = ProfileCredentialsProvider .create(Constants.AWS_PROFILE_NAME); FirehoseClient kinesisClient = FirehoseClient .builder() .credentialsProvider(credentialsProvider) .region(Constants.AWS_REGION).build(); return kinesisClient; } } Here we are calling the putRecord() method of the Kinesis Data Firehose API for adding a single record to the delivery stream. The putRecord() method takes an object of type PutRecordRequest as input parameter. 
We have set the name of the delivery stream along with the contents of the data when creating the input parameter object before invoking the putRecord() method.\nData Transformation in a Firehose Delivery Stream We can configure the Kinesis Data Firehose delivery stream to transform and convert streaming data received from the data source before delivering the transformed data to destinations:\nTransforming Incoming Data: We can invoke a Lambda function to transform the data received in the delivery stream. Some ready-to-use blueprints are offered by AWS which we can adapt according to our data format.\nConverting the Format of the Incoming Data Records: We can convert the format of our input data from JSON to Apache Parquet or Apache ORC before storing the data in Amazon S3. Parquet and ORC are columnar data formats that save space and enable faster queries compared to row-oriented formats like JSON. If we want to convert an input format other than JSON, such as comma-separated values (CSV) or structured text, we can use a Lambda function to transform it to JSON.\nThe below figure shows the data transformation and record format conversion options enabled in the AWS management console:\nHere is the snippet of a Lambda function for transforming the streaming data record created using a Lambda blueprint for Kinesis Data Firehose Processing:\nconsole.log(\u0026#39;Loading function\u0026#39;); const validateRecord = (recordElement)=\u0026gt;{ // record is considered valid if contains status field  return recordElement.includes(\u0026#34;status\u0026#34;) } exports.handler = async (event, context) =\u0026gt; { /* Process the list of records and transform them */ const output = event.records.map((record)=\u0026gt;{ const decodedData = Buffer.from(record.data, \u0026#34;base64\u0026#34;).toString(\u0026#34;utf-8\u0026#34;) let isValidRecord = validateRecord(decodedData) if(isValidRecord){ let parsedRecord = JSON.parse(decodedData) // read fields from parsed JSON for some 
more processing  const outputRecord = `status::${parsedRecord.status}` return { recordId: record.recordId, result: \u0026#39;Ok\u0026#39;, // payload is encoded back to base64 before returning the result  data: Buffer.from(outputRecord, \u0026#34;utf-8\u0026#34;).toString(\u0026#34;base64\u0026#34;) } }else{ return { recordId: record.recordId, result: \u0026#39;Dropped\u0026#39;, data: record.data // payload is kept intact  } } }); return { records: output }; }; This Lambda function is written in Node.js. It validates each record by looking for a status field. If the record is valid, it parses the JSON to extract the status field and prepares the response before passing the processed record back to the delivery stream. Invalid records are marked as Dropped and their payload is passed through unchanged.\nData Delivery Format of a Firehose Delivery Stream After our delivery stream receives the streaming data, the data is automatically delivered to the configured destination. Each destination type supported by Kinesis Data Firehose has specific configurations for data delivery.\nFor data delivery to S3, Kinesis Data Firehose concatenates multiple incoming records based on the buffering configuration of our delivery stream and then delivers them to the S3 bucket as a single S3 object. We can also add a record separator at the end of each record before sending it to Kinesis Data Firehose. This helps us divide the delivered Amazon S3 object back into individual records.\nKinesis Data Firehose adds a UTC time prefix in the format YYYY/MM/dd/HH before writing objects to an S3 bucket.
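The default time prefix can be reproduced with a standard DateTimeFormatter, which is handy when a downstream job needs to compute the key prefix for a given delivery hour. The helper class is ours, not part of the SDK:

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class FirehoseS3Prefix {

    // Reproduces the default UTC time prefix (YYYY/MM/dd/HH) that
    // Firehose prepends to object keys in the destination bucket.
    static String timePrefix(ZonedDateTime deliveryTime) {
        return deliveryTime.withZoneSameInstant(ZoneOffset.UTC)
                .format(DateTimeFormatter.ofPattern("yyyy/MM/dd/HH"));
    }

    public static void main(String[] args) {
        ZonedDateTime t = ZonedDateTime
                .of(2022, 2, 22, 3, 15, 0, 0, ZoneOffset.UTC);
        // prints 2022/02/22/03
        System.out.println(timePrefix(t));
    }
}
```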
This prefix creates a logical hierarchy in the bucket, where each / creates a level in the hierarchy.\nFor example, streaming data delivered with the Direct PUT operation results in an object path of the form s3://\u0026lt;S3 bucket name\u0026gt;/2022/02/22/03/ in the S3 bucket configured as the destination of the Firehose delivery stream.\nThe data delivery format for other destinations can be found in the official documentation of Kinesis Data Firehose.\nKinesis Data Analytics Kinesis Data Analytics helps us to transform and analyze streaming data. It does this by providing a fully managed environment for running Flink applications.\nApache Flink is a Big Data processing framework for processing large amounts of data efficiently. It has helpful constructs like windowing, filtering, aggregations, mapping, etc. for performing operations on streaming data.\nThe results of analyzing streaming data can be used in various use cases like performing time-series analytics, feeding real-time dashboards, and creating real-time metrics.\nKinesis Data Analytics sets up the resources to run Flink applications and scales automatically to handle any volume of incoming data.\nThe Difference Between Kinesis Data Streams and Kinesis Data Analytics It is important to note the difference from Kinesis Data Streams, where we can also write consumer applications with custom code for performing any processing on the streaming data. But those applications usually run on server instances like EC2, in an infrastructure provisioned and managed by us.\nKinesis Data Analytics, in contrast, provides an automatically provisioned, auto-scaling environment for running applications built with the Flink framework.\nConsumer applications of Kinesis Data Streams usually write the records to a destination like an S3 bucket or a DynamoDB table after some processing.
Kinesis Data Analytics applications perform queries like aggregations, filtering, etc. by applying different windows on streaming data to identify trends and patterns for real-time alerts and feeds for dashboards.\nKinesis Data Analytics also supports applications built using Java with the open-source Apache Beam libraries and our own custom code.\nStructure of a Flink Application A basic structure of a Flink application is shown below:\nIn this diagram, we can observe the following components of the Flink application:\n Execution Environment: The execution environment of a Flink application is defined in the application main class and creates the data pipeline. The data pipeline contains the business logic and is composed of one or more operators chained together. Data Source: The application consumes data by using a source. A source connector reads data from a Kinesis data stream, an Amazon S3 bucket, etc. Processing Operators: The application processes data by using one or more operators. These processing operators apply transformations to the input data that comes from the data sources. After the transformation, the application forwards the transformed data to the data sinks. Please check out the Flink documentation to see the complete list of DataStream API Operators with code snippets. Data Sink: The application produces data to external sources by using sinks. A sink connector writes data to a Kinesis data stream, a Kinesis Data Firehose delivery stream, an Amazon S3 bucket, etc.  A few basic data sources and sinks are built into Flink and are always available. Examples of predefined data sources are reading from files, and sockets, and ingesting data from collections and iterators. Similarly, examples of predefined data sink include writing to files, to stdout and stderr, and sockets.\nCreating a Flink Application Let us first create a Flink application which we will run using the Kinesis Data Analytics service. 
We can create a Flink application in Java, Scala or Python. We will create the application for our example as a Maven project in Java language and set up the following dependencies:\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.flink\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;flink-streaming-java_2.11\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.14.3\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;provided\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.flink\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;flink-clients_2.11\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.14.3\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;software.amazon.kinesis\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;amazon-kinesis-connector-flink\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.3.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; ... \u0026lt;/dependencies\u0026gt; The dependency: flink-streaming-java_2.11 contains the core API of Flink. We have added the flink-clients_2.11 dependency for running the Flink application locally. For connecting to Kinesis we are using the dependency: amazon-kinesis-connector-flink.\nLet us use a stream of access logs from an Apache HTTP server as our streaming data that we will use for processing by our Flink application. 
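Before building the Flink pipeline, it helps to pin down where the fields we care about sit in an access log line once it is split on whitespace. This standalone sketch (the class name and sample line are ours) extracts the client IP, request path, and HTTP status:

```java
public class AccessLogParser {

    // Whitespace-split positions in an Apache "combined" log line:
    // parts[0] = client IP, parts[6] = request path, parts[8] = HTTP status.
    static String[] parse(String line) {
        String[] parts = line.split("\\s+");
        return new String[] { parts[0], parts[6], parts[8] };
    }

    public static void main(String[] args) {
        String line = "83.149.9.216 - - [17/May/2015:10:05:03 +0000] "
                + "\"GET /presentations/logo.png HTTP/1.1\" 200 171717";
        String[] fields = parse(line);
        System.out.println(fields[0] + " " + fields[1] + " " + fields[2]);
    }
}
```

These are the same indices the flatMap operator in the pipeline below relies on when converting a raw log line into a LogRecord.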
We will first test our application using a text file as a pre-defined source and stdout as a pre-defined sink.\nThe code for processing this stream of access logs is shown below:\npublic class ErrorCounter { private final static Logger logger = Logger.getLogger(ErrorCounter.class.getName()); public static void main(String[] args) throws Exception { // set up the streaming execution environment  final StreamExecutionEnvironment env = StreamExecutionEnvironment .getExecutionEnvironment(); // Create the source of streaming data  DataStream\u0026lt;String\u0026gt; inputStream = createSource(env); // convert string to LogRecord event objects  DataStream\u0026lt;LogRecord\u0026gt; logRecords = mapStringToLogRecord(inputStream); // Filter out error records (with status not equal to 200)  DataStream\u0026lt;LogRecord\u0026gt; errorRecords = filterErrorRecords(logRecords); // Create keyed stream with IP as key  DataStream\u0026lt;LogRecord\u0026gt; keyedStream = assignIPasKey(errorRecords); // convert LogRecord to string to objects  DataStream\u0026lt;String\u0026gt; keyedStreamAsText = mapLogRecordToString(keyedStream); // Create sink  createSink(env, errorRecords); // Execute the job  env.execute(\u0026#34;Error alerts\u0026#34;); } // convert LogRecord to string to objects using the Flink\u0026#39;s flatMap operator  private static DataStream\u0026lt;String\u0026gt; mapLogRecordToString( DataStream\u0026lt;LogRecord\u0026gt; keyedStream) { DataStream\u0026lt;String\u0026gt; keyedStreamAsText = keyedStream.flatMap(new FlatMapFunction\u0026lt;LogRecord, String\u0026gt;() { @Override public void flatMap( LogRecord value, Collector\u0026lt;String\u0026gt; out) throws Exception { out.collect(value.getUrl()+\u0026#34;::\u0026#34; + value.getHttpStatus()); } }); return keyedStreamAsText; } // Create keyed stream with IP as key using Flink\u0026#39;s keyBy operator  private static DataStream\u0026lt;LogRecord\u0026gt; assignIPasKey( DataStream\u0026lt;LogRecord\u0026gt; 
errorRecords) { DataStream\u0026lt;LogRecord\u0026gt; keyedStream = errorRecords.keyBy(value -\u0026gt; value.getIp()); return keyedStream; } // Filter out error records (with status not equal to 200)  // using Flink\u0026#39;s filter operator  private static DataStream\u0026lt;LogRecord\u0026gt; filterErrorRecords( DataStream\u0026lt;LogRecord\u0026gt; logRecords) { DataStream\u0026lt;LogRecord\u0026gt; errorRecords = logRecords.filter(new FilterFunction\u0026lt;LogRecord\u0026gt;() { @Override public boolean filter(LogRecord value) throws Exception { boolean matched = !value.getHttpStatus().equalsIgnoreCase(\u0026#34;200\u0026#34;); return matched; } }); return errorRecords; } // convert string to LogRecord event objects using Flink\u0026#39;s flatMap operator  private static DataStream\u0026lt;LogRecord\u0026gt; mapStringToLogRecord( DataStream\u0026lt;String\u0026gt; inputStream) { DataStream\u0026lt;LogRecord\u0026gt; logRecords = inputStream.flatMap(new FlatMapFunction\u0026lt;String, LogRecord\u0026gt;() { @Override public void flatMap( String value, Collector\u0026lt;LogRecord\u0026gt; out) throws Exception { String[] parts = value.split(\u0026#34;\\\\s+\u0026#34;); LogRecord record = new LogRecord(); record.setIp(parts[0]); record.setHttpStatus(parts[8]); record.setUrl(parts[6]); out.collect(record); } }); return logRecords; } // Set up the text file as a source  private static DataStream\u0026lt;String\u0026gt; createSource( final StreamExecutionEnvironment env) { return env.readTextFile( \u0026#34;\u0026lt;File Path\u0026gt;/apache_access_log\u0026#34;); } // Set up stdout as the sink  private static void createSink( final StreamExecutionEnvironment env, DataStream\u0026lt;LogRecord\u0026gt; input) { input.print(); } } We have used the DataStream API of Flink in this code example for processing the streams of access logs contained in the text file used as a source. 
Here we are first creating the execution environment using the StreamExecutionEnvironment class.\nNext, we are creating the source for the streaming data. We have used a text file containing the log records from an Apache HTTP server as the source here. Some sample log records from the text file are shown here:\n83.149.9.216 .. \u0026#34;GET /.../-search.png HTTP/1.1\u0026#34; 200 .. 83.149.9.216 .. \u0026#34;GET /.../-dashboard3.png HTTP/1.1\u0026#34; 200 83.149.9.216 .. \u0026#34;GET /.../highlight.js HTTP/1.1\u0026#34; 403 .. ... ... After this, we have attached a chain of Flink operators to the source of the streaming data. Our sample code uses operators chained together in the following sequence:\n FlatMap: We use the FlatMap operator to transform each String element into a POJO of type LogRecord. FlatMap functions take elements and transform them into zero, one, or more elements. Filter: We apply the Filter operator to select only the error records, i.e. those with an HTTP status not equal to 200. KeyBy: With the KeyBy operator, we partition the records by IP address for parallel processing. FlatMap: We use another FlatMap operator to transform the POJO of type LogRecord back into a String element.  The result of the last operator is connected to a predefined sink: stdout.\nHere is the output after running this application:\n6\u0026gt; /doc/index.html?org/elasticsearch/action/search/SearchResponse.html::404 4\u0026gt; /presentations/logstash-monitorama-2013/css/fonts/Roboto-Bold.ttf::404 4\u0026gt; /presentations/logstash-monitorama-2013/images/frontend-response-codes.png::310 Configuring a Kinesis Data Stream as a Source and a Sink After testing our data pipeline, we will modify the data source in our code to connect to a Kinesis Data Stream that ingests the streaming data we want to process.
For our example, it will be access logs from an Apache HTTP server ingested in a Kinesis Data Stream using an architecture as shown in this diagram:\nIn this architecture, the access log files from the HTTP server will be uploaded to an S3 bucket. A Lambda trigger attached to the S3 bucket will read the records from the file and add them to a Kinesis Data Stream using the putRecords() operation.\nThe source and the sink connected to the Kinesis Data Stream look like this:\npublic class ErrorCounter { private final static Logger logger = Logger.getLogger(ErrorCounter.class.getName()); public static void main(String[] args) throws Exception { // set up the streaming execution environment  final StreamExecutionEnvironment env = StreamExecutionEnvironment .getExecutionEnvironment(); DataStream\u0026lt;String\u0026gt; inputStream = createSource(env); ... ... ... DataStream\u0026lt;String\u0026gt; keyedStream = ... keyedStream.addSink(createSink()); } // Create Kinesis Data Stream as a source  private static DataStream\u0026lt;String\u0026gt; createSource( final StreamExecutionEnvironment env) { Properties inputProperties = new Properties(); inputProperties.setProperty( ConsumerConfigConstants.AWS_REGION, Constants.AWS_REGION.toString()); inputProperties.setProperty( ConsumerConfigConstants.STREAM_INITIAL_POSITION, \u0026#34;LATEST\u0026#34;); String inputStreamName = \u0026#34;in-app-log-stream\u0026#34;; return env.addSource( new FlinkKinesisConsumer\u0026lt;\u0026gt;( inputStreamName, new SimpleStringSchema(), inputProperties)); } // Create Kinesis Data Stream as a sink  private static FlinkKinesisProducer\u0026lt;String\u0026gt; createSink() { Properties outputProperties = new Properties(); outputProperties.setProperty( ConsumerConfigConstants.AWS_REGION, Constants.AWS_REGION.toString()); FlinkKinesisProducer\u0026lt;String\u0026gt; sink = new FlinkKinesisProducer\u0026lt;\u0026gt;( new SimpleStringSchema(), outputProperties); String outputStreamName =
\u0026#34;log_data_stream\u0026#34;; sink.setDefaultStream(outputStreamName); sink.setDefaultPartition(\u0026#34;0\u0026#34;); return sink; } } Here we have added a Kinesis Data Stream named in-app-log-stream as the source and another Kinesis Data Stream named log_data_stream as the sink.\nWe can also configure destinations where we want Kinesis Data Analytics to send the results.\nKinesis Data Analytics also supports Kinesis Data Firehose and AWS Lambda as destinations. Kinesis Data Firehose can be configured to automatically send the data to destinations like S3, Redshift, OpenSearch, and Splunk.\nNext, we need to compile and package this code to deploy it to the Kinesis Data Analytics service. We will see this in the next section.\nDeploying the Flink Application to Kinesis Data Analytics Kinesis Data Analytics runs Flink applications by creating a job. It looks for the compiled application in an S3 bucket. Since our Flink application is in a Maven project, we will compile and package our application using Maven as shown below:\nmvn package -Dflink.version=1.13.2 Running this command will create a fat \u0026ldquo;uber\u0026rdquo; jar with all the dependencies. We will upload this jar file to an S3 bucket where Kinesis Data Analytics will look for the application code.\nWe will next create an application in Kinesis Data Analytics using the AWS Management Console as shown below:\nThis application is the Kinesis Data Analytics entity that we work with for querying and operating on our streaming data.\nWe configure three primary components in an application:\n Input: In the input configuration, we map the streaming source to an in-application data stream. Data flows from one or more data sources into the in-application data stream. We have configured a Kinesis Data Stream as a data source.
Application code: Location of an S3 bucket containing the compiled Flink application that reads from an in-application data stream associated with a streaming source and writes to an in-application data stream associated with output. Output: One or more in-application streams to store intermediate results. We can then optionally configure an application output to persist data from specific in-application streams to an external destination.  Our application\u0026rsquo;s dependent resources like CloudWatch Log streams and IAM service roles also get created in this step.\nAfter the application is created, we will configure it with the location of the S3 bucket where we uploaded the compiled code of the Flink application as an \u0026ldquo;uber\u0026rdquo; jar earlier.\nRunning the Kinesis Data Analytics Application by Creating a Job We can run our application by choosing Run on our application\u0026rsquo;s page in the AWS console. When we run our Kinesis Data Analytics application, the Kinesis Data Analytics service creates an Apache Flink job.\nThe execution of the job, and the resources it uses, are managed by a Job Manager. The Job Manager separates the execution of the application into tasks, and each task is managed by a Task Manager. To monitor the performance of our application, we can examine the performance of each Task Manager or of the Job Manager as a whole.\nCreating a Flink Application Interactively with Notebooks The Flink application we built earlier was authored separately in a Java IDE (Eclipse) and then packaged and deployed in Kinesis Data Analytics by uploading the compiled artifact (jar file) to an S3 bucket.\nInstead of using an IDE like Eclipse, we can use notebooks, which are more widely used for data science tasks, to author Flink applications.
A notebook is a web-based interactive development environment where data scientists write and execute code and visualize results.\nStudio notebooks provided by Kinesis Data Analytics are powered by Apache Zeppelin and use Apache Flink as the stream processing engine.\nWe can create a Studio notebook in the AWS Management Console as shown below:\nAfter we start the notebook, we can open it in Apache Zeppelin and write code in SQL, Python, or Scala to develop applications against Kinesis Data Streams, Amazon MSK, and S3 using built-in integrations, and against various other streaming data sources with custom connectors.\nUsing Studio notebooks, we model queries on streaming data using the Apache Flink Table API and SQL in SQL, Python, or Scala, or the DataStream API in Scala. After that, we can promote the Studio notebook to a continuously-running, non-interactive Kinesis Data Analytics stream-processing application.\nPlease refer to the official documentation for details about using Studio notebooks.\nKinesis Video Streams Kinesis Video Streams is a fully managed service that we can use to:\n connect and stream video, audio, and other time-encoded data from various capturing devices using an infrastructure provisioned dynamically in the AWS Cloud. securely and durably store media data for a default retention period of 1 day and a maximum of 10 years. build applications that operate on live data streams by consuming the ingested data frame-by-frame, in real-time, for low-latency processing. create batch or ad hoc applications that operate on durably persisted data without strict latency requirements.  Key Concepts: Producer, Consumer, and Kinesis Video Stream The Kinesis Video Streams service is built around the concepts of a producer sending streaming data to a stream and a consumer application reading that data from the stream.\n  Producer: Any source that puts data into a Kinesis video stream.
A producer can be any video-generating device, such as a security camera, a body-worn camera, a smartphone camera, or a dashboard camera. A producer can also send non-video data, such as audio feeds, images, or RADAR data.\n  Kinesis Video Stream: A resource that transports live video data, optionally stores it, and makes the data available for consumption both in real-time and on a batch or ad hoc basis.\n  Consumer: An application that reads data like fragments and frames from a Kinesis Video Stream for viewing, processing, or analysis.\n  Creating a Kinesis Video Stream Let us first create a Kinesis Video Stream using the AWS Management Console:\nHere we have created a new video stream with the default configuration. We can then use the Kinesis Video Streams API to put data into or read data from this video stream.\nSending Media Data to a Kinesis Video Stream Next, we need to configure a producer to put data into this Kinesis Video Stream. The producer runs an application that uses the Kinesis Video Streams Producer SDK to extract the video data in the form of frames from the media source and send it to the Kinesis Video Stream.\nThe Producer SDK is used to build an on-device application that securely connects to a video stream and reliably publishes video and other media data to it.\nIt takes care of all the underlying tasks required to package the frames and fragments generated by the device\u0026rsquo;s media pipeline.
The SDK also handles stream creation, token rotation for secure and uninterrupted streaming, processing acknowledgments returned by Kinesis Video Streams, and other tasks.\nPlease refer to the documentation for details about using the Producer SDK.\nConsuming Media Data from a Kinesis Video Stream We can consume media data either by viewing it in the AWS Kinesis Video Stream console or by creating an application that reads media data from a Kinesis Video Stream.\nThe Kinesis Video Stream Parser Library is a set of tools that can be used in Java applications to consume the MKV data from a Kinesis Video Stream.\nPlease refer to the documentation for details about configuring the Parser library.\nConclusion Here is a list of the major points for a quick reference:\n Streaming data is a pattern of data being generated continuously (in a stream) by multiple data sources which typically send the data records simultaneously. Due to its continuous nature, streaming data is also called unbounded data, as opposed to the bounded data handled by batch processing systems. Amazon Kinesis is a family of managed services for collecting and processing streaming data in real-time. Amazon Kinesis includes the following services, each focusing on different stages of handling streaming data:  Kinesis Data Stream for ingestion and storage of streaming data Kinesis Firehose for delivery of streaming data Kinesis Analytics for running analysis programs over the ingested data for deriving analytical insights Kinesis Video Streams for ingestion, storage, and streaming of media data   The Kinesis Data Streams service is used to collect and process streaming data in real-time. A Kinesis Data Stream is composed of multiple data carriers called shards. Each shard provides a fixed unit of capacity. Kinesis Data Firehose is a fully managed service that is used to deliver streaming data to a destination in near real-time.
The incoming streaming data is buffered in the delivery stream until it reaches a particular size or exceeds a certain time interval before it is delivered to the destination. For this reason, Kinesis Data Firehose is not intended for real-time delivery. Kinesis Data Analytics is used to analyze streaming data in real-time. It provides a fully managed service for running Apache Flink applications. Apache Flink is a Big Data processing framework for building applications that can process large amounts of data efficiently. Kinesis Data Analytics sets up the resources to run Flink applications and scales automatically to handle any volume of incoming data. Kinesis Video Streams is a fully managed AWS service that we can use to ingest streaming video, audio, and other time-encoded data from various media capturing devices using an infrastructure provisioned dynamically in the AWS Cloud.  You can refer to all the source code used in the article on GitHub.\n","date":"March 17, 2022","image":"https://reflectoring.io/images/stock/0120-data-stream-1200x628-branded_hu1a8be14cb26cc63e1ae5be2e641a079f_478220_650x0_resize_q90_box.jpg","permalink":"/processing-streams-with-aws-kinesis/","title":"Processing Streams with Amazon Kinesis"},{"categories":["Software Craft"],"contents":"According to Google\u0026rsquo;s DevOps Research and Assessment (DORA) group, software delivery performance influences organizational performance in general. That means if you\u0026rsquo;re good at delivering software, you\u0026rsquo;re good at business.\nIn this article, we\u0026rsquo;ll discuss why the practice of using feature flags helps you become good at software delivery and then go through different ways of building a homegrown feature flagging solution.
Finally, we\u0026rsquo;ll contrast the homegrown feature flagging solution with using a full-blown feature delivery platform like LaunchDarkly to help you decide whether to make that solution yourself or just buy it.\n Example Code This article is accompanied by a working code example on GitHub. How Do You Become Good at Delivering Software? So how do you become good at delivering software? The DORA group found out that the following metrics have a big impact on software delivery performance:\n Deployment frequency: the frequency in which you deploy a new version of the software you\u0026rsquo;re building. You\u0026rsquo;re good when this is measured in hours, not days or months. Lead time: the time it takes for the customer to request a change until the change is deployed. Since the time to design a solution is often fuzzy, the lead time is often only measured from the moment you start working on implementing the change until the change is deployed. Again, you\u0026rsquo;re good if this is measured in hours. Mean time to restore (MTTR): the mean of the time it takes to restore service after the service was unavailable or impacted in some way. Again, this should be measured in hours. Change failure rate: the percentage of deployments that cause problems and impact the service. You\u0026rsquo;re good if this is below 15%.  These metrics are the so-called \u0026ldquo;DORA metrics\u0026rdquo;. You can read everything about them in the Accelerate book written by some of the DORA researchers.\nIf you want to start one single practice that pushes the needle for all four DORA metrics, you should start using feature flags.\nInstead of deploying a change that is visible to all customers right after deployment, you deploy the change behind a feature flag.\nOnly when you toggle the feature flag will the change become visible to the users. The nice thing is that feature flags don\u0026rsquo;t need to apply to all users at the same time! 
Instead, you could, for example, start by enabling the feature flag just for yourself to test the feature, then enable it for a cohort of friendly users, before finally enabling it for everyone.\nHere\u0026rsquo;s how feature flags improve the DORA metrics:\n Feature flags improve deployment frequency because you can deploy at any time. Even if there is unfinished code in the codebase, it will be hidden behind a feature flag. The main branch is always deployable. Feature flags improve lead time because a change can be deployed even if it\u0026rsquo;s not finished yet, to gather feedback from key users. Feature flags improve the mean time to restore because you can revert a problematic change by just disabling the corresponding feature flag. Feature flags improve change failure rate because they decouple the risk of deployment from the risk of change. A deployment no longer fails and has to be rolled back because of bad features. The deployment is successful even if you have shipped a bad change because you can disable the bad change at any time by flipping a feature flag.  If you\u0026rsquo;re still reading, you should be convinced that using feature flags is a good thing.
But how do we do it?\nLet\u0026rsquo;s explore some ways of implementing feature flags, starting with simple if/else switches and moving up to context-sensitive feature flags that include user information when deciding whether or not to show a feature to a user.\nBuilding a Feature Flag Service For the code examples in this article, we\u0026rsquo;ll be using Java and Spring Boot, but the concepts apply to any programming language and framework.\nWe\u0026rsquo;ll start by building a feature flag service that serves as the single source of truth about the state of our feature flags.\nThe interface looks something like this:\npublic interface FeatureFlagService { Boolean featureOne(); Integer featureTwo(); } It\u0026rsquo;s a rather simple interface with a method for each feature that we want to toggle in our application:\n feature one is a boolean flag that can be either on or off. feature two is a numeric flag that can have no value (null) or a numeric value.  Boolean feature flags are the most common type of feature flag and cover most use cases. I added a numeric flag as a representative of any non-boolean flag, just to show that it\u0026rsquo;s possible and how to implement it.\nWe can use the FeatureFlagService interface in our code to determine if a feature is active or not:\nif(featureFlagService.featureOne()){ // new code } else { // old code } But where does the FeatureFlagService get the current state of the feature flags from?
How does it know which feature flag is holding which value?\nIn the upcoming sections, we\u0026rsquo;ll implement the FeatureFlagService interface in more and more sophisticated ways to unlock more and more feature flagging use cases.\nFeature Flags Backed by Code The most straightforward solution is to implement the FeatureFlagService interface to just return hard-coded values for each feature flag:\npublic class CodeBackedFeatureFlagService implements FeatureFlagService { @Override public Boolean featureOne() { return true; } @Override public Integer featureTwo() { return 42; } } Hard-coding feature flag state defeats the main purpose of feature flags, however. We need to change the code and re-deploy if we want to enable or disable a certain feature.\nDeployment and shipping of features are not decoupled with this solution! We cannot quickly disable a buggy feature in production because we have to re-deploy!\nLet\u0026rsquo;s see how we can externalize the feature flag state from the code.\nFeature Flags Backed by Configuration Properties The next step in the evolution of feature flags is to externalize the feature flag state so we don\u0026rsquo;t have to change the code.\nInstead of hard-coding the feature flag state, we externalize the state in a configuration file. 
With Spring Boot, this configuration file would be the application.yml file, for example:\nfeatures: featureOne: true featureTwo: 42 We can then make use of Spring Boot\u0026rsquo;s configuration properties feature to bind the feature flag state to a Java object:\n@Component @ConfigurationProperties(\u0026#34;features\u0026#34;) public class FeatureProperties { private boolean featureOne; private int featureTwo; // getters and setters omitted } This will create a FeatureProperties bean at runtime that encapsulates the state from the configuration file.\nWe can then inject the FeatureProperties bean in an implementation of the FeatureFlagService interface:\n@Component public class PropertiesBackedFeatureFlagService implements FeatureFlagService { private final FeatureProperties featureProperties; public PropertiesBackedFeatureFlagService(FeatureProperties featureProperties) { this.featureProperties = featureProperties; } @Override public Boolean featureOne() { return featureProperties.getFeatureOne(); } @Override public Integer featureTwo() { return featureProperties.getFeatureTwo(); } } The PropertiesBackedFeatureFlagService ultimately returns the feature flag state from the configuration file.\nWhat did we gain by moving the feature flag state from the code to an external configuration file?\nWe no longer have to change and re-compile the code to change the feature flag state. If we want to change the feature flag state, we could log into a running server, change the values in the configuration file, and re-start the application. We no longer need to deploy.\nHowever, logging into a production server to restart an application is very 90s. We don\u0026rsquo;t want to do that because it\u0026rsquo;s cumbersome and, more importantly, prone to human error. 
Also, in a real-world scenario, we probably have more than one application node and we don\u0026rsquo;t want to repeat the process of changing the configuration file and re-starting the application for each node!\nSo what if we store the feature flag state in a central database?\nDatabase-Backed Feature Flags An implementation of the FeatureFlagService interface that loads the feature flag state from the database might look something like this using Java and Spring\u0026rsquo;s JdbcTemplate:\npublic class DatabaseBackedFeatureFlagService implements FeatureFlagService { private final JdbcTemplate jdbcTemplate; public DatabaseBackedFeatureFlagService(JdbcTemplate jdbcTemplate) { this.jdbcTemplate = jdbcTemplate; } @Override public Boolean featureOne() { return jdbcTemplate.query(\u0026#34;select value from features where feature_key=\u0026#39;FEATURE_ONE\u0026#39;\u0026#34;, resultSet -\u0026gt; { if (!resultSet.next()) { return false; } return Boolean.parseBoolean(resultSet.getString(1)); }); } @Override public Integer featureTwo() { return jdbcTemplate.query(\u0026#34;select value from features where feature_key=\u0026#39;FEATURE_TWO\u0026#39;\u0026#34;, resultSet -\u0026gt; { if (!resultSet.next()) { return null; } return Integer.valueOf(resultSet.getString(1)); }); } } The DatabaseBackedFeatureFlagService requires the database table features to exist. That table has the columns feature_key and value.\nInstead of a relational database like in this example, we could also use a simple key/value store.\nWhen asked for the value of a feature flag, the service makes a call to the database and parses the value into a Boolean or Integer, as required by the feature flag. If there is no row for a flag, featureOne() falls back to false and featureTwo() to null.\nWe finally have a solution that allows us to change the feature flag state on the fly! We can change the value in the database and it\u0026rsquo;s reflected instantly in the application.
If all application nodes are connected to the same database, the new feature flag state is even reflected across our whole fleet of server nodes!\nHowever, our solution only supports simple feature flag state. It can return a Boolean or Integer value for a given feature flag. If we change a feature flag value, it applies to all users. We cannot activate a feature flag for just a subset of users, a very powerful capability that enables testing in production and progressive rollouts to more and more users, among other things.\nFor this, we need context-sensitive feature flags that react to the context of the user.\nContext-Sensitive Feature Flags Let\u0026rsquo;s extend our database-backed solution to make it context-sensitive so that we can target different users with different feature flag values.\nSay we want to support two types of feature rollouts:\n GLOBAL: the feature flag state applies to all users. This is what we\u0026rsquo;ve done in the previous sections and it\u0026rsquo;s actually not context-sensitive at all. PERCENTAGE: the feature flag state applies to a percentage of all users. We can use this for progressive rollouts, where we first enable a feature for a small percentage of users and then slowly increase the percentage (or set it back to 0 if users complain about the feature not working). This rollout strategy is context-sensitive in the sense that it knows which user it\u0026rsquo;s serving.
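The heart of the PERCENTAGE strategy is a deterministic mapping from a user ID to a bucket between 0 and 100: the same user must always land in the same bucket, and the buckets should be evenly distributed. Here is a minimal sketch of that idea (class and method names are illustrative, and the hashing differs slightly from the article's code by reading the hash bytes directly):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PercentageBucket {

    // Maps a user ID deterministically to a bucket in [0, 100).
    static double bucketOf(String userId) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(userId.getBytes(StandardCharsets.UTF_8));
            // Interpret the first 4 bytes of the hash as an unsigned 32-bit number.
            long unsigned = ((long) (hash[0] & 0xFF) << 24)
                    | ((hash[1] & 0xFF) << 16)
                    | ((hash[2] & 0xFF) << 8)
                    | (hash[3] & 0xFF);
            return unsigned / (double) (1L << 32) * 100;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // A feature with a PERCENTAGE rollout is active for a user
    // if their bucket falls below the configured percentage.
    static boolean isActive(String userId, int percentage) {
        return bucketOf(userId) < percentage;
    }

    public static void main(String[] args) {
        // Deterministic: the same user always gets the same bucket.
        System.out.println(bucketOf("alice") == bucketOf("alice"));
        // percentage = 100 means active for everyone, 0 for no one.
        System.out.println(isActive("alice", 100) && !isActive("alice", 0));
    }
}
```

Because the bucket depends only on the user ID, a given user stays consistently in or out of the rollout while the percentage is ramped up.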
A naive implementation of these two rollout strategies might look like the one in this Feature class:\npublic class Feature { public enum RolloutStrategy { GLOBAL, PERCENTAGE; } private final RolloutStrategy rolloutStrategy; private final int percentage; private final String value; private final String defaultValue; public Feature(RolloutStrategy rolloutStrategy, String value, String defaultValue, int percentage) { this.rolloutStrategy = rolloutStrategy; this.percentage = percentage; this.value = value; this.defaultValue = defaultValue; } public boolean evaluateBoolean(String userId) { switch (this.rolloutStrategy) { case GLOBAL: return this.getBooleanValue(); case PERCENTAGE: if (percentageHashCode(userId) \u0026lt;= this.percentage) { return this.getBooleanValue(); } else { return this.getBooleanDefaultValue(); } } return this.getBooleanDefaultValue(); } public Integer evaluateInt(String userId) { switch (this.rolloutStrategy) { case GLOBAL: return this.getIntValue(); case PERCENTAGE: if (percentageHashCode(userId) \u0026lt;= this.percentage) { return this.getIntValue(); } else { return this.getIntDefaultValue(); } } return this.getIntDefaultValue(); } double percentageHashCode(String text) { try { MessageDigest digest = MessageDigest.getInstance(\u0026#34;SHA-256\u0026#34;); byte[] encodedhash = digest.digest( text.getBytes(StandardCharsets.UTF_8)); double INTEGER_RANGE = 1L \u0026lt;\u0026lt; 32; return (((long) Arrays.hashCode(encodedhash) - Integer.MIN_VALUE) / INTEGER_RANGE) * 100; } catch (NoSuchAlgorithmException e) { throw new IllegalStateException(e); } } // getters and setters omitted  } We moved all the logic to calculate the state of a feature flag into the Feature class above. A Feature has the field rolloutStrategy, so we can choose the strategy for each feature. It also has the field percentage which defines the percentage of users for which the feature flag is active when the feature is using a PERCENTAGE rollout strategy. 
The field value contains the value of the feature flag to serve when the feature flag is active, and the field defaultValue contains the value to serve when the feature flag is not active.\nThe fun part is in the methods evaluateBoolean() and evaluateInt() which evaluate the state of a feature flag for a given userId. This userId is the context for which we evaluate the feature flag.\nBoth methods are very similar, with the only difference that one returns a Boolean and the other an Integer. If the rollout strategy of the feature flag is GLOBAL, we just return the value field.\nIf it\u0026rsquo;s a PERCENTAGE rollout strategy, we check if the hashcode of the userId (calculated by the percentageHashCode() method) is below the percentage value to determine if the feature should be active for the user or not and return the value or defaultValue accordingly.\nThis assumes that the percentageHashCode() method returns a different value for each user ID that is well-distributed between 0 and 100. 
It must always return the same value for any given user ID because we don\u0026rsquo;t want the feature state to change between two invocations of the evaluate...() method for the same user.\nWe then make use of the Feature class in a new implementation of the FeatureFlagService interface:\npublic class ContextSensitiveFeatureFlagService implements FeatureFlagService { private final JdbcTemplate jdbcTemplate; private final UserSession userSession; public ContextSensitiveFeatureFlagService(JdbcTemplate jdbcTemplate, UserSession userSession) { this.jdbcTemplate = jdbcTemplate; this.userSession = userSession; } @Override public Boolean featureOne() { Feature feature = getFeatureFromDatabase(\u0026#34;FEATURE_ONE\u0026#34;); if (feature == null) { return Boolean.FALSE; } return feature.evaluateBoolean(userSession.getUsername()); } @Override public Integer featureTwo() { Feature feature = getFeatureFromDatabase(\u0026#34;FEATURE_TWO\u0026#34;); if (feature == null) { return null; } return feature.evaluateInt(userSession.getUsername()); } @Nullable private Feature getFeatureFromDatabase(String featureKey) { return jdbcTemplate.query(\u0026#34;select targeting, value, defaultValue, percentage from features where feature_key=?\u0026#34;, resultSet -\u0026gt; { if (!resultSet.next()) { return null; } RolloutStrategy rolloutStrategy = Enum.valueOf(RolloutStrategy.class, resultSet.getString(1)); String value = resultSet.getString(2); String defaultValue = resultSet.getString(3); int percentage = resultSet.getInt(4); return new Feature(rolloutStrategy, value, defaultValue, percentage); }, featureKey); } } This builds upon the DatabaseBackedFeatureFlagService we\u0026rsquo;ve built before. Instead of returning the feature flag state directly from the database, however, we map it into a Feature object and then ask that Feature object to calculate the feature flag state for a given user ID.\nYou can see that the implementation of both the Feature class and the ContextSensitiveFeatureFlagService contains several special cases.
Actually, I don\u0026rsquo;t guarantee at all that the code above behaves as intended in all cases! Use at your own peril!\nAnd the solution above only provides a solution for global and percentage rollouts. There is a host of other rollout strategies like rolling out by user geography, user behavior, or other demographical attributes. Also, we\u0026rsquo;d like to target specific users by their user ID so we can enable a feature for just ourselves to test in production, for example.\nAlso, the homegrown solution we\u0026rsquo;ve built above doesn\u0026rsquo;t provide a user interface to change feature flag state, yet! If we want to change the state of a feature flag, for example, to change the rollout percentage from 0 to 10 percent, we\u0026rsquo;d have to connect to the database and change it there. It would be nice if we had a UI to do that to make it easier and avoid errors.\nAll this means that you probably shouldn\u0026rsquo;t build a feature flagging solution yourself, at least not if you want to be flexible in your rollout strategies. Instead, you might want to go with a feature flagging framework like Togglz, which supports multiple rollout strategies and can store feature flag state in a database. 
It even provides a (simple) UI to change the state of feature flags.\nOr, you use a feature management service that reduces your custom development to an absolute minimum and takes care of everything for you.\nFeature Flags Backed by a Feature Management Platform So, what would it look like if we delegate the feature flag evaluation to a full-blown feature management service like LaunchDarkly?\nSomething like this:\npublic class LaunchDarklyFeatureFlagService implements FeatureFlagService { private final LDClient launchdarklyClient; private final UserSession userSession; public LaunchDarklyFeatureFlagService(LDClient launchdarklyClient, UserSession userSession) { this.launchdarklyClient = launchdarklyClient; this.userSession = userSession; } @Override public Boolean featureOne() { return launchdarklyClient.boolVariation(\u0026#34;feature-one\u0026#34;, getLaunchdarklyUserFromSession(), false); } @Override public Integer featureTwo() { return launchdarklyClient.intVariation(\u0026#34;feature-two\u0026#34;, getLaunchdarklyUserFromSession(), 0); } private LDUser getLaunchdarklyUserFromSession() { return new LDUser.Builder(userSession.getUsername()) .build(); } } We\u0026rsquo;re making use of LaunchDarkly\u0026rsquo;s Java SDK, which provides the LDClient class.\nTo evaluate the state of a feature flag, we ask that client for the state. We can ask for a boolean value, a numeric value, or other types of values. For context, we pass in an LDUser object that is populated with the name of the user. That way, LaunchDarkly knows for which user it should evaluate the feature flag.\nThe evaluation of the feature flag then happens based on targeting rules that we have previously defined in the LaunchDarkly UI:\nWe can change the targeting rules at any time and the changes will have immediate effect. 
As long as we pass along a unique identifier for each user, LaunchDarkly takes care of resolving the correct feature flag state for that user, taking care of all edge cases for us.\nIf you want to play around with LaunchDarkly, have a look at my tutorial comparing Togglz with LaunchDarkly, where you\u0026rsquo;ll find a step-by-step guide on integrating LaunchDarkly with your codebase.\nConclusion Working with feature flags is fun.\nWe can deploy code with \u0026ldquo;sleeping\u0026rdquo; features and enable them at any time. We gain confidence in deploying because we know the changes we\u0026rsquo;ve made will only be active once we\u0026rsquo;ve activated the feature flag.\nThis confidence makes us better at delivering software, as the DORA research shows without a doubt.\nIt\u0026rsquo;s also fun to build a homegrown solution to support feature flags in our codebase! It\u0026rsquo;s an interesting technical problem to solve.\nBut as soon as we want to include the user context in the decision to serve a certain feature or not, things get complicated and we\u0026rsquo;re likely to get them wrong the first time. So we should bet on solutions like Togglz or LaunchDarkly instead, so we can focus on the code that brings value to our customers.\nYou can browse the code examples from this article on GitHub.\n","date":"March 15, 2022","image":"https://reflectoring.io/images/stock/0039-start-1200x628_hue9ec581f047a135864ef544dc3d56769_76303_650x0_resize_q90_box.jpg","permalink":"/feature-flags-make-or-buy/","title":"Feature Flags: Make or Buy?"},{"categories":["Spring"],"contents":"Most traditional applications deal with blocking calls or, in other words, synchronous calls. 
This means that if we want to access a particular resource in a system where most of the threads are busy, the application blocks new requests until the busy threads have finished processing their current requests.\nIf we want to process Big Data, however, we need to do this with immense speed and agility. That’s when software developers realized that they would need some kind of multi-threaded environment that handles asynchronous and non-blocking calls to make the best use of processing resources.\n Example Code This article is accompanied by a working code example on GitHub. What is a Stream? Before jumping on to the reactive part, we must understand what streams are. A Stream is a sequence of data that is transferred from one system to another. It traditionally operates in a blocking, sequential, and FIFO (first-in-first-out) pattern.\nThis blocking methodology of data streaming often prevents a system from processing real-time data while streaming. Thus, a group of prominent developers realized that they would need an approach to build a \u0026ldquo;reactive\u0026rdquo; systems architecture that would ease the processing of data while streaming. Hence, they signed a manifesto, popularly known as the Reactive Manifesto.\nThe authors of the manifesto stated that a reactive system must be asynchronous software that deals with producers whose single responsibility is to send messages to consumers. They introduced the following features to keep in mind:\n Responsive: Reactive systems must be fast and responsive so that they can provide a consistently high quality of service. Resilient: Reactive systems should be designed to anticipate system failures. Thus, they should stay responsive through replication and isolation. Elastic: Reactive systems must be able to shard or replicate components as required. They should use predictive scaling to anticipate sudden ups and downs in their infrastructure.
Message-driven: Since all the components in a reactive system are supposed to be loosely coupled, they must communicate across their boundaries by asynchronously exchanging messages.  Introducing the Reactive Programming Paradigm Reactive programming is a programming paradigm that helps to implement non-blocking, asynchronous, and event-driven or message-driven data processing. It models data and events as streams that it can observe and react to by processing or transforming the data. Let\u0026rsquo;s talk about the differences between blocking and non-blocking request processing.\nBlocking Request In a conventional MVC application, whenever a request reaches the server, a servlet thread is created and delegates the work to worker threads that perform various operations like I/O, database processing, etc. While the worker threads are busy completing their work, the servlet threads enter a waiting state and the calls remain blocked. This is blocking or synchronous request processing.\nNon-Blocking Request In a non-blocking system, all the incoming requests are accompanied by an event handler and a callback. The request thread delegates the incoming request to a thread pool that manages a fairly small number of threads. The thread pool then delegates the request to its handler function and becomes available to process the next incoming requests from the request thread.\nWhen the handler function completes its work, one of the threads from the pool fetches the response and passes it to the callback function. Thus, the threads in a non-blocking system never go into a waiting state. This increases the throughput and the performance of the application.\nA single request is potentially processed by multiple threads!\nBackpressure Working with reactive code, we often come across the term \u0026ldquo;backpressure\u0026rdquo;. It is an analogy derived from fluid dynamics and literally means the resistance or force that opposes the desired flow of data. 
In Reactive Streams, backpressure defines the mechanism to regulate the data transmission across streams.\nConsider that server A sends 1000 EPS (events per second) to server B, but server B can only process 800 EPS and thus runs a deficit of 200 EPS. Server B would now tend to fall behind as it has to process the deficit data and send it downstream or maybe store it in a database. Thus, server B experiences backpressure and will soon run out of memory and fail.\nThis backpressure can be handled or managed with the following strategies:\n Buffer - We can easily buffer the deficit data and process it later when the server has capacity. But with a huge load of data coming in, this buffer might grow and the server would soon run out of memory. Drop - Dropping, i.e. not processing events, should be the last option. Usually, we can use the concept of data sampling combined with buffering to achieve less data loss. Control - Controlling the producer that sends the data is by far the best option. Reactive Streams provides various options in both push and pull-based streams to control the data that is being produced and sent to the consumer.  Reactive Java Libraries The reactive landscape in Java has evolved a lot in recent years. Before we move on to the Spring Webflux component, let’s take a look at the core reactive libraries written in Java today. Here are the most popular ones:\n RxJava: It comes out of the ReactiveX project, which hosts implementations for multiple programming languages and platforms. ReactiveX is a combination of the best ideas from the Observer pattern, the Iterator pattern, and functional programming. Project Reactor: Reactor is a framework built by Pivotal and powered by Spring. It is considered one of the foundations of the reactive stack in the Spring ecosystem. It implements Reactive API patterns which are based on the Reactive Streams specification. 
Akka Streams: Although it implements the Reactive Streams specification, the Akka Streams API is completely decoupled from the Reactive Streams interfaces. It uses Actors to deal with the streaming data. It is considered a 3rd generation Reactive library. Ratpack: Ratpack is a set of Java libraries used for building scalable and high-performance HTTP applications. It uses Java 8, Netty, and reactive principles to provide a basic implementation of the Reactive Streams API. You can also use Reactor or RxJava along with it. Vert.x: Vert.x is a foundation project by Eclipse which delivers a polyglot event-driven framework for the JVM. It is similar to Ratpack and allows us to use RxJava or its native implementation of the Reactive Streams API.  Spring Webflux is internally built using the core components of Project Reactor and Reactor Netty.\nIntro to the Java 9 Reactive Streams API The whole purpose of Reactive Streams was to introduce a standard for asynchronous stream processing of data with non-blocking backpressure. Hence, Java 9 introduced the Reactive Streams API. It is based upon the Publisher-Subscriber Model (or Producer-Consumer Model) and primarily defines four interfaces:\n  Publisher: It is responsible for preparing and transferring data to subscribers as individual messages. A Publisher can serve multiple subscribers but it has only one method, subscribe().\npublic interface Publisher\u0026lt;T\u0026gt; { public void subscribe(Subscriber\u0026lt;? super T\u0026gt; s); }   Subscriber: A Subscriber is responsible for receiving messages from a Publisher and processing those messages. It acts as a terminal operation in the Streams API. It has four methods to deal with the events received:\n onSubscribe(Subscription s): Gets called automatically when a publisher registers itself and allows the subscription to request data. onNext(T t): Gets called on the subscriber every time it is ready to receive a new message of generic type T. 
onError(Throwable t): Gets called to handle the next steps whenever an error occurs. onComplete(): Gets called when the publisher has successfully finished emitting data.  public interface Subscriber\u0026lt;T\u0026gt; { public void onSubscribe(Subscription s); public void onNext(T t); public void onError(Throwable t); public void onComplete(); }   Subscription: It represents a relationship between the subscriber and publisher. It can be used only once by a single Subscriber. It has methods that allow requesting data and cancelling the demand:\npublic interface Subscription { public void request(long n); public void cancel(); }   Processor: It represents a processing stage that consists of both a Publisher and a Subscriber.\npublic interface Processor\u0026lt;T, R\u0026gt; extends Subscriber\u0026lt;T\u0026gt;, Publisher\u0026lt;R\u0026gt; { }   Introduction to Spring Webflux Spring introduced a Multi-Event Loop model to enable a reactive stack known as WebFlux. It is a fully non-blocking and annotation-based web framework built on Project Reactor which allows building reactive web applications on the HTTP layer. It provides support for popular servers like Netty, Undertow, and Servlet 3.1 containers.\nBefore we get started with Spring Webflux, we must get accustomed to the two publisher implementations that are used heavily in the context of Webflux:\n  Mono: A Publisher that emits 0 or 1 element.\nMono\u0026lt;String\u0026gt; mono = Mono.just(\u0026#34;John\u0026#34;); Mono\u0026lt;Object\u0026gt; monoEmpty = Mono.empty(); Mono\u0026lt;Object\u0026gt; monoError = Mono.error(new Exception());   Flux: A Publisher that emits 0 to N elements and can keep emitting elements forever. 
It returns a sequence of elements and sends a notification when it has completed returning all its elements.\nFlux\u0026lt;Integer\u0026gt; flux = Flux.just(1, 2, 3, 4); Flux\u0026lt;String\u0026gt; fluxString = Flux.fromArray(new String[]{\u0026#34;A\u0026#34;, \u0026#34;B\u0026#34;, \u0026#34;C\u0026#34;}); Flux\u0026lt;String\u0026gt; fluxIterable = Flux.fromIterable(Arrays.asList(\u0026#34;A\u0026#34;, \u0026#34;B\u0026#34;, \u0026#34;C\u0026#34;)); Flux\u0026lt;Integer\u0026gt; fluxRange = Flux.range(2, 5); Flux\u0026lt;Long\u0026gt; fluxLong = Flux.interval(Duration.ofSeconds(10)); // To Stream data and call subscribe method List\u0026lt;String\u0026gt; dataStream = new ArrayList\u0026lt;\u0026gt;(); Flux.just(\u0026#34;X\u0026#34;, \u0026#34;Y\u0026#34;, \u0026#34;Z\u0026#34;) .log() .subscribe(dataStream::add); Once the stream of data is created, it needs to be subscribed to so it starts emitting elements. The data won’t flow or be processed until the subscribe() method is called. Also by using the .log() method above, we can trace and observe all the stream signals. The events are logged into the console.\nReactor also provides operators to work with Mono and Flux objects. Some of them are:\n Map - It is used to transform from one element to another. FlatMap - It flattens a list of Publishers to the values that these publishers emit. The transformation is asynchronous. FlatMapMany - This is a Mono operator which is used to transform a Mono object into a Flux object. DelayElements - It delays the publishing of each element by a defined duration. Concat - It is used to combine the elements emitted by a Publisher by keeping the sequence of the publishers intact. Merge - It is used to combine the publishers without keeping its sequence. Zip - It is used to combine two or more publishers by waiting on all the sources to emit one element and combining these elements into an output value.    
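The JDK ships the four Reactive Streams interfaces described above as nested types of java.util.concurrent.Flow, so we can try the Publisher-Subscriber contract without any Spring dependencies. The following minimal sketch (the FlowDemo class and its collect() helper are our own names, not part of any library) uses the JDK\u0026rsquo;s SubmissionPublisher and shows the request(n) handshake from the backpressure section, where the consumer controls the pace:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {

  // Subscribes to the given publisher and pulls one element at a time
  // via Subscription.request(1) -- the backpressure handshake.
  public static List<String> collect(SubmissionPublisher<String> publisher,
                                     List<String> items) throws InterruptedException {
    List<String> received = new ArrayList<>();
    CountDownLatch done = new CountDownLatch(1);

    publisher.subscribe(new Flow.Subscriber<String>() {
      private Flow.Subscription subscription;

      @Override public void onSubscribe(Flow.Subscription s) {
        subscription = s;
        s.request(1);                 // ask for the first element only
      }

      @Override public void onNext(String item) {
        received.add(item);
        subscription.request(1);      // ready for one more: the consumer sets the pace
      }

      @Override public void onError(Throwable t) { done.countDown(); }

      @Override public void onComplete() { done.countDown(); }
    });

    items.forEach(publisher::submit);  // publish the elements
    publisher.close();                 // triggers onComplete downstream
    done.await();                      // wait until the subscriber has seen everything
    return received;
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(collect(new SubmissionPublisher<>(), List.of("X", "Y", "Z")));
  }
}
```

SubmissionPublisher delivers items to each subscriber in order and on a separate thread, which is why the CountDownLatch is needed before reading the collected list.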
Spring Webflux Dependencies Until now we have spoken a lot about Reactive Streams and Webflux. Let’s get started with the implementation part. We are going to build a REST API using Webflux, and we will use MongoDB as our database to store data. We will build a user management service to store and retrieve users.\nLet’s initialize the Spring Boot application by defining a skeleton project in Spring Initializr:\nWe have added the Spring Reactive Web dependency, Spring Data Reactive MongoDB to reactively connect to MongoDB, Lombok, and Spring DevTools. The use of Lombok is optional, as it\u0026rsquo;s a convenience library that helps us reduce boilerplate code such as getters, setters, and constructors, just by annotating our entities with Lombok annotations. The same goes for Spring DevTools.\nData Model Let’s start by defining the User entity that we will be using throughout our implementation:\n@ToString @EqualsAndHashCode(of = {\u0026#34;id\u0026#34;,\u0026#34;name\u0026#34;,\u0026#34;department\u0026#34;}) @AllArgsConstructor @NoArgsConstructor @Data @Document(value = \u0026#34;users\u0026#34;) public class User { @Id private String id; private String name; private int age; private double salary; private String department; } We are using the Lombok annotations to generate getters, setters, the toString() and equals()/hashCode() methods, and constructors to reduce boilerplate implementations. 
We have also used @Document to mark it as a MongoDB entity.\nPersistence Layer - Defining Repositories Next, we will define our Repository layer using the ReactiveMongoRepository interface.\n@Repository public interface UserRepository extends ReactiveMongoRepository\u0026lt;User, String\u0026gt; { } Service Layer Now we will define the Service that would make calls to MongoDB using Repository and pass the data on to the web layer:\n@Service @Slf4j @RequiredArgsConstructor @Transactional public class UserService { private final ReactiveMongoTemplate reactiveMongoTemplate; private final UserRepository userRepository; public Mono\u0026lt;User\u0026gt; createUser(User user){ return userRepository.save(user); } public Flux\u0026lt;User\u0026gt; getAllUsers(){ return userRepository.findAll(); } public Mono\u0026lt;User\u0026gt; findById(String userId){ return userRepository.findById(userId); } public Mono\u0026lt;User\u0026gt; updateUser(String userId, User user){ return userRepository.findById(userId) .flatMap(dbUser -\u0026gt; { dbUser.setAge(user.getAge()); dbUser.setSalary(user.getSalary()); return userRepository.save(dbUser); }); } public Mono\u0026lt;User\u0026gt; deleteUser(String userId){ return userRepository.findById(userId) .flatMap(existingUser -\u0026gt; userRepository.delete(existingUser) .then(Mono.just(existingUser))); } public Flux\u0026lt;User\u0026gt; fetchUsers(String name) { Query query = new Query() .with(Sort .by(Collections.singletonList(Sort.Order.asc(\u0026#34;age\u0026#34;))) ); query.addCriteria(Criteria .where(\u0026#34;name\u0026#34;) .regex(name) ); return reactiveMongoTemplate .find(query, User.class); } } We have defined service methods to save, update, fetch, search and delete a user. 
We have primarily used the UserRepository to store and retrieve data from MongoDB, but we have also used a ReactiveMongoTemplate with a Query to search for users whose names match a given regex string.\nWeb Layer Now that we have covered the layers that store and retrieve data, let’s focus on the web layer. Spring Webflux supports two programming models:\n Annotation-based Reactive components Functional Routing and Handling  Annotation-based Reactive Components Let’s first look into the annotation-based components. We can simply create a UserController and annotate its routes and methods:\n@RequiredArgsConstructor @RestController @RequestMapping(\u0026#34;/users\u0026#34;) public class UserController { private final UserService userService; @PostMapping @ResponseStatus(HttpStatus.CREATED) public Mono\u0026lt;User\u0026gt; create(@RequestBody User user){ return userService.createUser(user); } @GetMapping public Flux\u0026lt;User\u0026gt; getAllUsers(){ return userService.getAllUsers(); } @GetMapping(\u0026#34;/{userId}\u0026#34;) public Mono\u0026lt;ResponseEntity\u0026lt;User\u0026gt;\u0026gt; getUserById(@PathVariable String userId){ Mono\u0026lt;User\u0026gt; user = userService.findById(userId); return user.map(ResponseEntity::ok) .defaultIfEmpty(ResponseEntity.notFound().build()); } @PutMapping(\u0026#34;/{userId}\u0026#34;) public Mono\u0026lt;ResponseEntity\u0026lt;User\u0026gt;\u0026gt; updateUserById(@PathVariable String userId, @RequestBody User user){ return userService.updateUser(userId,user) .map(ResponseEntity::ok) .defaultIfEmpty(ResponseEntity.badRequest().build()); } @DeleteMapping(\u0026#34;/{userId}\u0026#34;) public Mono\u0026lt;ResponseEntity\u0026lt;Void\u0026gt;\u0026gt; deleteUserById(@PathVariable String userId){ return userService.deleteUser(userId) .map( r -\u0026gt; ResponseEntity.ok().\u0026lt;Void\u0026gt;build()) .defaultIfEmpty(ResponseEntity.notFound().build()); } @GetMapping(\u0026#34;/search\u0026#34;) public Flux\u0026lt;User\u0026gt; 
searchUsers(@RequestParam(\u0026#34;name\u0026#34;) String name) { return userService.fetchUsers(name); } } This almost looks the same as a controller defined in Spring MVC. But the major difference between Spring MVC and Spring Webflux lies in how the request and response are handled, using the non-blocking publishers Mono and Flux.\nWe don’t need to call subscribe() in the controller, as Spring’s internal classes will call it for us at the right time.\nDo Not Block! We must make sure that we don’t use any blocking methods throughout the lifecycle of an API. Otherwise, we lose the main advantage of reactive programming!\n Functional Routing and Handling Initially, the Spring Functional Web Framework was built and designed for Spring Webflux, but later it was also introduced in Spring MVC. We use functions for routing and handling requests. This introduces an alternative programming model to the one provided by the Spring annotation-based framework.\nFirst of all, we will define a handler function that accepts a ServerRequest as an incoming argument and returns a Mono of ServerResponse as the response of that functional method. 
Let’s name the handler class as UserHandler:\n@Component @RequiredArgsConstructor public class UserHandler { private final UserService userService; public Mono\u0026lt;ServerResponse\u0026gt; getAllUsers(ServerRequest request) { return ServerResponse .ok() .contentType(MediaType.APPLICATION_JSON) .body(userService.getAllUsers(), User.class); } public Mono\u0026lt;ServerResponse\u0026gt; getUserById(ServerRequest request) { return userService .findById(request.pathVariable(\u0026#34;userId\u0026#34;)) .flatMap(user -\u0026gt; ServerResponse .ok() .contentType(MediaType.APPLICATION_JSON) .body(user, User.class) ) .switchIfEmpty(ServerResponse.notFound().build()); } public Mono\u0026lt;ServerResponse\u0026gt; create(ServerRequest request) { Mono\u0026lt;User\u0026gt; user = request.bodyToMono(User.class); return user .flatMap(u -\u0026gt; ServerResponse .status(HttpStatus.CREATED) .contentType(MediaType.APPLICATION_JSON) .body(userService.createUser(u), User.class) ); } public Mono\u0026lt;ServerResponse\u0026gt; updateUserById(ServerRequest request) { String id = request.pathVariable(\u0026#34;userId\u0026#34;); Mono\u0026lt;User\u0026gt; updatedUser = request.bodyToMono(User.class); return updatedUser .flatMap(u -\u0026gt; ServerResponse .ok() .contentType(MediaType.APPLICATION_JSON) .body(userService.updateUser(id, u), User.class) ); } public Mono\u0026lt;ServerResponse\u0026gt; deleteUserById(ServerRequest request){ return userService.deleteUser(request.pathVariable(\u0026#34;userId\u0026#34;)) .flatMap(u -\u0026gt; ServerResponse.ok().body(u, User.class)) .switchIfEmpty(ServerResponse.notFound().build()); } } Next, we will define the router function. Router functions usually evaluate the request and choose the appropriate handler function. They serve as an alternate to the @RequestMapping annotation. 
So we will define this RouterFunction and annotate it with @Bean within a @Configuration class to inject it into the Spring application context:\n@Configuration public class RouterConfig { @Bean RouterFunction\u0026lt;ServerResponse\u0026gt; routes(UserHandler handler) { return route(GET(\u0026#34;/handler/users\u0026#34;).and(accept(MediaType.APPLICATION_JSON)), handler::getAllUsers) .andRoute(GET(\u0026#34;/handler/users/{userId}\u0026#34;).and(contentType(MediaType.APPLICATION_JSON)), handler::getUserById) .andRoute(POST(\u0026#34;/handler/users\u0026#34;).and(accept(MediaType.APPLICATION_JSON)), handler::create) .andRoute(PUT(\u0026#34;/handler/users/{userId}\u0026#34;).and(contentType(MediaType.APPLICATION_JSON)), handler::updateUserById) .andRoute(DELETE(\u0026#34;/handler/users/{userId}\u0026#34;).and(accept(MediaType.APPLICATION_JSON)), handler::deleteUserById); } } Finally, we will define some properties as part of application.yaml in order to configure our database connection, the API base path, and logging levels:\nspring: application: name: spring-webflux-guide webflux: base-path: /api data: mongodb: authentication-database: admin uri: mongodb://localhost:27017/test database: test logging: level: io: reflectoring: DEBUG org: springframework: web: INFO data: mongodb: core: ReactiveMongoTemplate: DEBUG reactor: netty: http: client: DEBUG This constitutes our basic non-blocking REST API using Spring Webflux. This works as the Publisher-Subscriber model that we talked about earlier in this article.\nServer-Sent Events Server-Sent Events (SSE) is an HTTP standard that provides the capability for servers to push streaming data to the web client. The flow is unidirectional from server to client and the client receives updates whenever the server pushes some data. This kind of mechanism is often used for real-time messaging, streaming, or notification events. For multiplexed and bidirectional streaming, we usually use WebSockets. 
But SSE is mostly used for the following use cases:\n Receiving a live feed from the server whenever there is a new or updated record. Receiving message notifications without repeatedly polling the server. Subscribing to a feed of news, stocks, or cryptocurrency.  The biggest limitation of SSE is that it’s unidirectional, and hence information can’t be passed from the client to the server. Spring Webflux allows us to define server streaming events which can send events at a given interval. The web client initiates the REST API call and keeps it open until the event stream is closed.\nThe server-sent event response has the content type text/event-stream. We can define a server-sent event streaming endpoint using WebFlux by simply returning a Flux and specifying the content type as text/event-stream. So let’s add this method to our existing UserController:\n@GetMapping(value = \u0026#34;/stream\u0026#34;, produces = MediaType.TEXT_EVENT_STREAM_VALUE) public Flux\u0026lt;User\u0026gt; streamAllUsers() { return userService .getAllUsers() .flatMap(user -\u0026gt; Flux .zip(Flux.interval(Duration.ofSeconds(2)), Flux.fromStream(Stream.generate(() -\u0026gt; user)) ) .map(Tuple2::getT2) ); } Here, we will stream all the users in our system every 2 seconds. 
This serves the whole list of updated users from MongoDB every interval.\nWebflux Internals Traditionally, Spring MVC uses the Tomcat server for servlet stack applications, whereas Spring WebFlux uses Reactor Netty by default for reactive stack applications.\nReactor Netty is an asynchronous, event-driven network application framework built on top of Netty which provides non-blocking and backpressure-ready network engines for HTTP, TCP, and UDP clients and servers.\nSpring WebFlux automatically configures Reactor Netty as the default server, but if we want to override the default configuration, we can simply do that by defining properties under the server prefix.\nserver: port: 9000 http2: enabled: true We can override other parts of the default server configuration in the same way, using properties that start with the server prefix.\nConclusion Spring WebFlux or reactive non-blocking applications usually do not make an application run faster. The essential benefit is the ability to scale an application with a small, fixed number of threads and lower memory requirements while at the same time making the best use of the available processing power. This often makes a service more resilient under load as it can scale predictably.\nWebFlux is a good fit for highly concurrent applications that need to process a huge number of requests with as few resources as possible. WebFlux is also relevant for applications that need scalability or that stream request data in real time. 
While implementing a microservice with WebFlux, we must take into account that the entire flow uses reactive and asynchronous programming and that none of the operations block.\nIf you want to learn how to build a client to a reactive server, have a look at our WebClient article.\nYou can refer to all the source code used in the article on GitHub.\n","date":"March 10, 2022","image":"https://reflectoring.io/images/stock/0120-stream-1200x628_hua2546abb4041b122c99634e85915ced3_400096_650x0_resize_q90_box.jpg","permalink":"/getting-started-with-spring-webflux/","title":"Getting Started with Spring WebFlux"},{"categories":["Spring"],"contents":"Spring Shell allows us to build a command-line (shell) application using the Spring framework and all the advantages it provides.\n Example Code This article is accompanied by a working code example on GitHub. What Is a Shell Anyways? A shell provides us with an interface to a system (usually an operating system) to which we give commands and parameters. 
The shell in turn does some useful tasks for us and provides an output.\nCreating a Basic Shell First, we have to get the Spring Shell dependency from Maven Central, which has everything we need.\nIn Gradle, the dependency will look something like this:\ndependencies { implementation \u0026#39;org.springframework.shell:spring-shell-starter:2.0.1.RELEASE\u0026#39; } Then, since it\u0026rsquo;s a Spring Boot application, our main class has to be annotated with @SpringBootApplication:\n@SpringBootApplication public class SpringShellApplication { public static void main(String[] args) { SpringApplication.run(SpringShellApplication.class, args); } } Now, let\u0026rsquo;s create our first shell command, which simulates an SSH command:\n@ShellComponent public class SSHCommand { Logger log = Logger.getLogger(SSHCommand.class.getName()); @ShellMethod(value = \u0026#34;connect to remote server\u0026#34;) public void ssh(@ShellOption(value = \u0026#34;-s\u0026#34;) String remoteServer) { log.info(format(\u0026#34;Logged to machine \u0026#39;%s\u0026#39;\u0026#34;, remoteServer)); } } The annotation @ShellComponent tells Spring Shell that the annotated class may contain shell methods, which are annotated with @ShellMethod.\nAs for the @ShellMethod annotation, it\u0026rsquo;s used to mark a method as invokable via Spring Shell. 
We can also see the value property, which is used to describe the command.\nThe @ShellOption annotation simply states that this command takes a parameter named -s.\nAs a result, when we run the application, we get a shell that has a command called ssh which takes a parameter -s; all it does is log the passed parameter value to the command line.\nshell:\u0026gt;ssh -s my-machine 2022-02-11 15:44:04.065 INFO 5648 --- [ main] j.t.springshell.command.SSHCommand : Logged to machine \u0026#39;my-machine\u0026#39;shell:\u0026gt; Modifying the Command Name The default naming convention for Spring Shell, as we\u0026rsquo;ve seen, is taking the method name ssh and turning it into the command name.\n If we wrote the name in camel case, Spring would turn the camelCase humps into \u0026ldquo;-\u0026rdquo;. So customSsh would translate to custom-ssh.  We can also add a name of our own using the key property of the @ShellMethod annotation:\n@ShellComponent public class SSHCommand { ... @ShellMethod(key = \u0026#34;my-ssh\u0026#34;, value = \u0026#34;connect to remote server\u0026#34;) public void ssh(@ShellOption(value = \u0026#34;-s\u0026#34;) String remoteServer) { log.info(format(\u0026#34;Logged to machine \u0026#39;%s\u0026#39;\u0026#34;, remoteServer)); } } Working with Command Parameters Commands can take parameters as input from the user. Spring Shell offers a simple and easy way to introduce parameters.\nParameter Naming As we\u0026rsquo;ve seen from the previous example, command parameters are expressed through method parameters.\nWe can specify the name of the parameter using the value property of the @ShellOption annotation.\nIf we don\u0026rsquo;t specify the value, however, Spring Shell derives a default name from the method parameter name, with the camelCase humps separated by \u0026ldquo;-\u0026rdquo; and prefixed by ShellMethod.prefix().\nThe default value for @ShellMethod.prefix() is \u0026ldquo;\u0026ndash;\u0026rdquo;:\n@ShellComponent public class SSHCommand { ... 
@ShellMethod(key = \u0026#34;my-ssh\u0026#34;, prefix = \u0026#34;-\u0026#34;, value = \u0026#34;connect to remote server\u0026#34;) public void ssh(@ShellOption String remoteServer) { log.info(format(\u0026#34;Logged to machine \u0026#39;%s\u0026#39;\u0026#34;, remoteServer)); } } Then, our command would be something like:\nCool Machine==\u0026gt; my-ssh -remote-server test 2022-02-27 12:39:14.800 INFO 11704 --- [ main] i.r.springshell.command.SSHCommand : Logged to machine \u0026#39;test\u0026#39; Declaring Default Parameter Values We can assign default values to parameters in case the user doesn\u0026rsquo;t specify any. Doing this also allows the user to treat those parameters as optional:\n@ShellComponent public class SSHCommand { ... @ShellMethod(value = \u0026#34;connect to remote server\u0026#34;) public void ssh(@ShellOption(value = \u0026#34;--s\u0026#34;, defaultValue = \u0026#34;default-server\u0026#34;) String remoteServer) { log.info(format(\u0026#34;Logged to machine \u0026#39;%s\u0026#39;\u0026#34;, remoteServer)); } } Typing only ssh to the console will give us:\nshell:\u0026gt;ssh 2022-02-11 19:55:05.133 INFO 4700 --- [ main] j.t.springshell.command.SSHCommand : Logged to machine \u0026#39;default-server\u0026#39; Multi-valued Parameters We can specify multiple values for a single parameter by using the arity() attribute of the @ShellOption annotation. Simply use a collection or array for the parameter type, and specify how many values are expected:\n@ShellComponent public class SSHCommand { ... 
@ShellMethod(value = \u0026#34;add keys\u0026#34;) public void sshAdd(@ShellOption(value = \u0026#34;--k\u0026#34;, arity = 2) String[] keys) { log.info(format(\u0026#34;Adding keys \u0026#39;%s\u0026#39; \u0026#39;%s\u0026#39;\u0026#34;, keys[0], keys[1])); } } Let\u0026rsquo;s try the command out in the shell:\nshell:\u0026gt;ssh-add --k test1 test2 2022-02-12 18:27:00.301 INFO 4928 --- [ main] j.t.springshell.command.SSHCommand : Adding keys \u0026#39;test1\u0026#39; \u0026#39;test2\u0026#39; Working with Boolean Parameters Boolean parameters receive special treatment from command-line utilities. The absence of the parameter in the command indicates a false value. On the other hand, its presence indicates a true value:\n@ShellComponent public class SSHCommand { ... @ShellMethod(value = \u0026#34;sign in\u0026#34;) public void sshLogin(@ShellOption(value = \u0026#34;--r\u0026#34;) boolean rememberMe) { log.info(format(\u0026#34;remember me option is \u0026#39;%s\u0026#39;\u0026#34;, rememberMe)); } } Let\u0026rsquo;s check it out in the command line:\nshell:\u0026gt;ssh-login --r 2022-02-12 18:41:34.903 INFO 10044 --- [ main] j.t.springshell.command.SSHCommand : remember me option is \u0026#39;true\u0026#39; shell:\u0026gt;ssh-login 2022-02-12 18:41:44.606 INFO 10044 --- [ main] j.t.springshell.command.SSHCommand : remember me option is \u0026#39;false\u0026#39; Validating Command Parameters Spring Shell integrates with the Bean Validation API to provide us with automatic and self-documenting constraints on command parameters. Validation annotations found on command parameters as well as annotations at the method level will trigger validation before the command executes.\nLet\u0026rsquo;s try this in action by adding a @Size annotation to the method parameter:\n@ShellComponent public class SSHCommand { ... 
@ShellMethod(value = \u0026#34;ssh agent\u0026#34;) public void sshAgent( @ShellOption(value = \u0026#34;--a\u0026#34;) @Size(min = 2, max = 10) String agent) { log.info(format(\u0026#34;adding agent \u0026#39;%s\u0026#39;\u0026#34;, agent)); } } Now, if we try to pass a parameter value with a length of 1, we will get an error stating the reason:\nshell:\u0026gt;ssh-agent --a t The following constraints were not met: --a string : size must be between 2 and 10 (You passed \u0026#39;t\u0026#39;) Note that the @Size annotation is a part of Jakarta Bean Validation, which offers many more validation options like @NotEmpty and @Max.\nDynamic Command Availability Some commands only make sense when certain preconditions are met. For example, a sign-out command should be available only if a sign-in command has been issued, and if the user tries to run the sign-out command we want to warn them that it\u0026rsquo;s not possible.\nSpring Shell offers us three ways to achieve our goal.\nCreate a Method to Check Availability Spring Shell checks our class for a method with a special name and with a return type of Availability.\nThe special name has to be in the format commandToCheckAvailability:\n@ShellComponent public class SSHLoggingCommand { Logger log = Logger.getLogger(SSHLoggingCommand.class.getName()); private boolean signedIn; @ShellMethod(value = \u0026#34;sign in\u0026#34;) public void signIn() { this.signedIn = true; log.info(\u0026#34;Signed In!\u0026#34;); } @ShellMethod(value = \u0026#34;sign out\u0026#34;) public void signOut() { this.signedIn = false; log.info(\u0026#34;Signed out!\u0026#34;); } // note the naming  public Availability signOutAvailability() { return signedIn ? 
Availability.available() : Availability.unavailable(\u0026#34;Must be signed in first\u0026#34;); } } So if we try to run the sign-out command without first signing in, we will get the following message:\nshell:\u0026gt;sign-out Command \u0026#39;sign-out\u0026#39; exists but is not currently available because Must be signed in firstDetails of the error have been omitted. You can use the stacktrace command to print the full stacktrace.shell:\u0026gt; Specifying the Name of the Availability Method This approach uses the @ShellMethodAvailability annotation, in which we specify the name of the method we want to use for the availability check:\n@ShellComponent public class SSHLoggingCommand { ... @ShellMethod(value = \u0026#34;sign out\u0026#34;) @ShellMethodAvailability(\u0026#34;signOutCheck\u0026#34;) public void signOut() { this.signedIn = false; log.info(\u0026#34;Signed out!\u0026#34;); } public Availability signOutCheck() { return signedIn ? Availability.available() : Availability.unavailable(\u0026#34;Must be signed in first\u0026#34;); } } One Availability Method for Multiple Commands Spring Shell also enables us to attach several commands to a single availability method.\nWe are going to use the annotation ShellMethodAvailability with an array of the command names (not method names):\n@ShellComponent public class SSHLoggingCommand { ... @ShellMethod(value = \u0026#34;sign out\u0026#34;) public void signOut() { this.signedIn = false; log.info(\u0026#34;Signed out!\u0026#34;); } @ShellMethod(value = \u0026#34;Change password\u0026#34;) public void changePass(@ShellOption String newPass) { log.info(format(\u0026#34;Changed password to \u0026#39;%s\u0026#39;\u0026#34;, newPass)); } @ShellMethodAvailability({\u0026#34;sign-out\u0026#34;, \u0026#34;change-pass\u0026#34;}) public Availability signOutCheck() { return signedIn ? 
Availability.available() : Availability.unavailable(\u0026#34;Must be signed in first\u0026#34;); } } Other Cool Features in Spring Shell Since Spring Shell builds on top of JLine it inherits a lot of its features. Let\u0026rsquo;s look at some of them:\nTab Completion Spring Shell allows us to use tab completion with command names and even with parameter names. Since this feature is a part of JLine we can use it out of the box with no need for any configuration.\nBuilt-in Commands Spring Shell offers us a set of useful built-in commands. Let\u0026rsquo;s take a look at two important ones:\n  help: lists all the commands known to the shell, including the built-in commands and commands we wrote.\n  script: accepts a local file as an argument and will replay commands found there, one at a time.\n  Styling the Shell We can do so by registering a bean of type PromptProvider which includes information on how to render the Shell prompt.\nFor example, let\u0026rsquo;s change the prompt text to Cool Machine==\u0026gt;  with a green color for the text:\n@Component public class CustomPromptProvider implements PromptProvider { @Override public AttributedString getPrompt() { return new AttributedString( \u0026#34;Cool Machine\u0026#34; + \u0026#34;==\u0026gt; \u0026#34;, AttributedStyle.DEFAULT.background(AttributedStyle.GREEN)); } } This will give us a prompt like this\n2022-02-26 23:37:49.949 INFO 6560 --- [ main] i.r.springshell.SpringShellApplication : Started SpringShellApplication in 2.267 seconds (JVM running for 3.77) Cool Machine==\u0026gt; Running the Shell from the Jar File After obtaining the JAR file we run it using the command java -jar our-spring-shell-jar-name.jar. 
This will open our shell in the command line and have it ready for us to type in commands.\nSummary   The Shell allows us to interface with a system using commands.\n  Spring Shell introduces a simple and quick way to build a Shell leveraging all the good sides of the Spring framework.\n  The three main building blocks of Spring Shell are\n @ShellComponent @ShellMethod @ShellOption.    Spring Shell is built on top of JLine, which offers useful features like tab completion and built-in commands.\n  We can choose to make some commands available based on certain conditions.\n  We can style the command line as we like.\n  ","date":"March 9, 2022","image":"https://reflectoring.io/images/stock/0119-keyboard-coffee-1200-628_hu4748980c1b678a48732277e8abc1521f_175131_650x0_resize_q90_box.jpg","permalink":"/spring-shell/","title":"Create Command-line Applications with Spring Shell"},{"categories":["Java"],"contents":"In this article, we will learn how to use CompletableFuture to increase the performance of our application. We\u0026rsquo;ll start by looking at the Future interface and its limitations and then discuss how we can instead use the CompletableFuture class to overcome these limitations.\nWe will do this by building a simple application that tries to categorize a list of bank transactions using a remote service. Let\u0026rsquo;s begin our journey!\nWhat Is a Future? Future is a Java interface that was introduced in Java 5 to represent a value that will be available in the future. The advantages of using a Future are significant: we can run a very intensive computation asynchronously without blocking the current thread, which can do other useful work in the meantime.\nWe can think of it as going to a restaurant. While the chef is preparing our dinner, we can do other things, like talking to friends or drinking a glass of wine, and once the chef has finished the preparation, we can finally eat.
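The restaurant analogy translates into code roughly like this (a sketch; the task name, timing, and "pasta" value are invented for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class RestaurantDemo {

    // The "chef" prepares dinner asynchronously on a worker thread.
    static String orderDinner() throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> dinner = executor.submit(() -> {
            Thread.sleep(500); // simulate preparation time
            return "pasta";
        });
        // ...meanwhile the current thread is free to do something else.
        System.out.println("Talking to friends...");
        // get() blocks only if the dinner is not ready yet.
        String meal = dinner.get();
        executor.shutdown();
        return meal;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Eating " + orderDinner());
    }
}
```

Here get() returns immediately if the "dinner" is already prepared and blocks otherwise.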
Another advantage is that using the Future interface is much more developer-friendly than working directly with threads.\nCompletableFuture vs. Future In this section we will look at some limitations of the Future interface and how we can solve them by using the CompletableFuture class.\nDefining a Timeout The Future interface provides the get() method (and an overload that throws a TimeoutException after a given timeout) to retrieve the result of the computation, but if the computation takes too long we have no way to complete the future manually with a fallback value.\nTo understand better, let\u0026rsquo;s look at some code:\nclass Demo { public static void main(String[] args) throws ExecutionException, InterruptedException { ExecutorService executor = Executors.newSingleThreadExecutor(); Future\u0026lt;String\u0026gt; stringFuture = executor.submit(() -\u0026gt; neverEndingComputation()); System.out.println(\u0026#34;The result is: \u0026#34; + stringFuture.get()); } } We have created an instance of ExecutorService that we will use to submit a task that never ends - we call it neverEndingComputation().\nAfter that we want to print the value of the stringFuture variable on the console by invoking the get() method. This method waits if necessary for the computation to complete, and then retrieves its result. But because neverEndingComputation() never ends, the result will never be printed on the console, and we don\u0026rsquo;t have any way to complete the future manually by passing a value.\nNow let\u0026rsquo;s see how to overcome this limitation by using the class CompletableFuture.
We will use the same scenario, but in this case, we will provide our value by using the method complete() of the CompletableFuture class.\nclass Demo { public static void main(String[] args) { CompletableFuture\u0026lt;String\u0026gt; stringCompletableFuture = CompletableFuture.supplyAsync(() -\u0026gt; neverEndingComputation()); stringCompletableFuture.complete(\u0026#34;Completed\u0026#34;); System.out.println(\u0026#34;Is the stringCompletableFuture done ? \u0026#34; + stringCompletableFuture.isDone()); } } Here we are creating a CompletableFuture of type String by calling the method supplyAsync(), which takes a Supplier as an argument.\nIn the end, we are testing if stringCompletableFuture really has a value by using the method isDone(), which returns true if completed in any fashion: normally, exceptionally, or via cancellation. The output of the main() method is:\nIs the stringCompletableFuture done ? true Combining Asynchronous Operations Let\u0026rsquo;s imagine that we need to call two remote APIs, firstApiCall() and secondApiCall(). The result of the first API will be the input for the second API. By using the Future interface there is no way to combine these two operations asynchronously:\nclass Demo { public static void main(String[] args) throws ExecutionException, InterruptedException { ExecutorService executor = Executors.newSingleThreadExecutor(); Future\u0026lt;String\u0026gt; firstApiCallResult = executor.submit( () -\u0026gt; firstApiCall(someValue) ); String stringResult = firstApiCallResult.get(); Future\u0026lt;String\u0026gt; secondApiCallResult = executor.submit( () -\u0026gt; secondApiCall(stringResult) ); } } In the code example above, we call the first API by submitting a task on the ExecutorService that returns a Future. We need to pass this value to the second API, but the only way to retrieve the value is by using the Future\u0026rsquo;s get() method that we discussed earlier, and by using it we block the main thread.
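To make the sequential cost of this Future-based approach concrete, here is a runnable sketch in which firstApiCall() and secondApiCall() are stubbed with short sleeps (the stub bodies and the "value" input are invented for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class BlockingCombinationDemo {

    static String firstApiCall(String input) throws InterruptedException {
        Thread.sleep(200); // stand-in for a remote call
        return input + "-first";
    }

    static String secondApiCall(String input) throws InterruptedException {
        Thread.sleep(200); // stand-in for a remote call
        return input + "-second";
    }

    static String callBoth(String someValue) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> first = executor.submit(() -> firstApiCall(someValue));
        String stringResult = first.get();  // blocks the main thread
        Future<String> second = executor.submit(() -> secondApiCall(stringResult));
        String result = second.get();       // blocks the main thread again
        executor.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callBoth("value")); // prints "value-first-second"
    }
}
```

The two get() calls are where the main thread sits idle, so the total time is roughly the sum of both call durations.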
Now we have to wait until the first API returns the result before doing anything else.\nBy using the CompletableFuture class we don\u0026rsquo;t need to block the main thread anymore, but we can asynchronously combine more operations:\nclass Demo { public static void main(String[] args) { var finalResult = CompletableFuture.supplyAsync( () -\u0026gt; firstApiCall(someValue) ) .thenApply(firstApiResult -\u0026gt; secondApiCall(firstApiResult)); } } We are using the method supplyAsync() of the CompletableFuture class, which returns a new CompletableFuture that is asynchronously completed by a task running in the ForkJoinPool.commonPool() with the value obtained by calling the given Supplier. After that we take the result of firstApiCall() and, using the method thenApply(), pass it to the second API by invoking secondApiCall().\nReacting to Completion Without Blocking the Thread Using the Future interface we don\u0026rsquo;t have a way to react to the completion of an operation asynchronously. The only way to get the value is by using the get() method, which blocks the thread until the result is returned:\nclass Demo { public static void main(String[] args) throws ExecutionException, InterruptedException { ExecutorService executor = Executors.newSingleThreadExecutor(); Future\u0026lt;String\u0026gt; stringFuture = executor.submit(() -\u0026gt; \u0026#34;hello future\u0026#34;); String uppercase = stringFuture.get().toUpperCase(); System.out.println(\u0026#34;The result is: \u0026#34; + uppercase); } } The code above creates a Future by returning a String value. Then we transform it to uppercase by first calling the get() method and then the toUpperCase() method of the String class.\nUsing CompletableFuture we can now create a pipeline of asynchronous operations.
Let\u0026rsquo;s see a simple example of how to do it:\nclass Demo { public static void main(String[] args) { CompletableFuture.supplyAsync(() -\u0026gt; \u0026#34;hello completable future\u0026#34;) .thenApply(String::toUpperCase) .thenAccept(System.out::println); } } In the example above we can see how simple it is to create such a pipeline. First, we call the supplyAsync() method, which takes a Supplier and returns a new CompletableFuture. Then we transform the result to an uppercase string by calling the thenApply() method. In the end, we just print the value on the console using thenAccept(), which takes a Consumer as an argument.\nIf we step back for a moment, we realize that working with CompletableFuture is very similar to Java Streams.\nPerformance Gains with CompletableFuture In this section we will build a simple application that takes a list of bank transactions and calls an external service to categorize each transaction based on the description. We will simulate the call of the external service by using a method that adds some delay before returning the category of the transaction.
In the next sections we will incrementally change the implementation of our client application to improve the performance by using CompletableFuture.\nSynchronous Implementation Let\u0026rsquo;s start implementing our categorization service that declares a method called categorizeTransaction:\npublic class CategorizationService { public static Category categorizeTransaction(Transaction transaction) { delay(); return new Category(\u0026#34;Category_\u0026#34; + transaction.getId()); } public static void delay() { try { Thread.sleep(1000L); } catch (InterruptedException e) { throw new RuntimeException(e); } } } public class Category { private final String category; public Category(String category) { this.category = category; } @Override public String toString() { return \u0026#34;Category{\u0026#34; + \u0026#34;category=\u0026#39;\u0026#34; + category + \u0026#39;\\\u0026#39;\u0026#39; + \u0026#39;}\u0026#39;; } } public class Transaction { private String id; private String description; public Transaction(String id, String description) { this.id = id; this.description = description; } public String getId() { return id; } public void setId(String id) { this.id = id; } public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } } In the code above we have a class called Transaction that has an id and a description field.\nWe will pass an instance of this class to the static method categorizeTransaction(Transaction transaction) of our CategorizationService, which will return an instance of the class Category.\nBefore returning the result, the categorizeTransaction() method waits for one second and then returns a Category object that has a field of type String called category.
The category field is just the concatenation of the String \u0026quot;Category_\u0026quot; with the id field from the Transaction class.\nTo test this implementation we will build a client application that tries to categorize three transactions, as follows:\npublic class Demo { public static void main(String[] args) { long start = System.currentTimeMillis(); var categories = Stream.of( new Transaction(\u0026#34;1\u0026#34;, \u0026#34;description 1\u0026#34;), new Transaction(\u0026#34;2\u0026#34;, \u0026#34;description 2\u0026#34;), new Transaction(\u0026#34;3\u0026#34;, \u0026#34;description 3\u0026#34;)) .map(CategorizationService::categorizeTransaction) .collect(Collectors.toList()); long end = System.currentTimeMillis(); System.out.printf(\u0026#34;The operation took %s ms%n\u0026#34;, end - start); System.out.println(\u0026#34;Categories are: \u0026#34; + categories); } } After running the code, it prints on the console the total time taken to categorize the three transactions, and on my machine it says:\nThe operation took 3039 ms Categories are: [Category{category=\u0026#39;Category_1\u0026#39;}, Category{category=\u0026#39;Category_2\u0026#39;}, Category{category=\u0026#39;Category_3\u0026#39;}] The program takes 3 seconds to complete because we are categorizing each transaction in sequence and the time needed to categorize one transaction is one second.
In the next section, we will try to refactor our client application using a parallel stream.\nParallel Stream Implementation Using a parallel stream, our client application will look like this:\npublic class Demo { public static void main(String[] args) { long start = System.currentTimeMillis(); var categories = Stream.of( new Transaction(\u0026#34;1\u0026#34;, \u0026#34;description 1\u0026#34;), new Transaction(\u0026#34;2\u0026#34;, \u0026#34;description 2\u0026#34;), new Transaction(\u0026#34;3\u0026#34;, \u0026#34;description 3\u0026#34;)) .parallel() .map(CategorizationService::categorizeTransaction) .collect(Collectors.toList()); long end = System.currentTimeMillis(); System.out.printf(\u0026#34;The operation took %s ms%n\u0026#34;, end - start); System.out.println(\u0026#34;Categories are: \u0026#34; + categories); } } It\u0026rsquo;s almost identical to before, apart from the fact that here we use the parallel() method to parallelize the computation. If we run this program now, it will print the following output:\nThe operation took 1037 ms Categories are: [Category{category=\u0026#39;Category_1\u0026#39;}, Category{category=\u0026#39;Category_2\u0026#39;}, Category{category=\u0026#39;Category_3\u0026#39;}] The difference is huge! Now our application runs almost three times faster, but this is not the whole story.\nThis solution can scale until we reach the limit of the number of processors.
After that the performance doesn\u0026rsquo;t change because internally the parallel stream uses a thread pool (the common ForkJoinPool) whose fixed size is derived from Runtime.getRuntime().availableProcessors().\nOn my machine, I have 8 processors, so if we run the code above with ten transactions it should take at least 2 seconds:\nThe operation took 2030 ms Categories are: [Category{category=\u0026#39;Category_1\u0026#39;}, Category{category=\u0026#39;Category_2\u0026#39;}, Category{category=\u0026#39;Category_3\u0026#39;}, Category{category=\u0026#39;Category_4\u0026#39;}, Category{category=\u0026#39;Category_5\u0026#39;}, Category{category=\u0026#39;Category_6\u0026#39;}, Category{category=\u0026#39;Category_7\u0026#39;}, Category{category=\u0026#39;Category_8\u0026#39;}, Category{category=\u0026#39;Category_9\u0026#39;}, Category{category=\u0026#39;Category_10\u0026#39;}] We see that the operation took 2030 ms, as predicted. Can we do something to increase the performance of our application even more?
YES!\nIncreasing Performance Using CompletableFuture Now we will refactor our client application to take advantage of CompletableFuture:\npublic class Demo { public static void main(String[] args) { Executor executor = Executors.newFixedThreadPool(10); long start = System.currentTimeMillis(); var futureCategories = Stream.of( new Transaction(\u0026#34;1\u0026#34;, \u0026#34;description 1\u0026#34;), new Transaction(\u0026#34;2\u0026#34;, \u0026#34;description 2\u0026#34;), new Transaction(\u0026#34;3\u0026#34;, \u0026#34;description 3\u0026#34;), new Transaction(\u0026#34;4\u0026#34;, \u0026#34;description 4\u0026#34;), new Transaction(\u0026#34;5\u0026#34;, \u0026#34;description 5\u0026#34;), new Transaction(\u0026#34;6\u0026#34;, \u0026#34;description 6\u0026#34;), new Transaction(\u0026#34;7\u0026#34;, \u0026#34;description 7\u0026#34;), new Transaction(\u0026#34;8\u0026#34;, \u0026#34;description 8\u0026#34;), new Transaction(\u0026#34;9\u0026#34;, \u0026#34;description 9\u0026#34;), new Transaction(\u0026#34;10\u0026#34;, \u0026#34;description 10\u0026#34;) ) .map(transaction -\u0026gt; CompletableFuture.supplyAsync( () -\u0026gt; CategorizationService.categorizeTransaction(transaction), executor) ) .collect(toList()); var categories = futureCategories.stream() .map(CompletableFuture::join) .collect(toList()); long end = System.currentTimeMillis(); System.out.printf(\u0026#34;The operation took %s ms%n\u0026#34;, end - start); System.out.println(\u0026#34;Categories are: \u0026#34; + categories); } } Our client application calls the categorization service by using the method supplyAsync(), which takes a Supplier and an Executor as arguments. Here we can now pass a custom Executor with a pool of ten threads to make the computation finish even faster than before.\nWith 10 threads, we expect that the operation should take around one second.
Indeed, the output confirms the expected result :\nThe operation took 1040 ms Categories are: [Category{category=\u0026#39;Category_1\u0026#39;}, Category{category=\u0026#39;Category_2\u0026#39;}, Category{category=\u0026#39;Category_3\u0026#39;}, Category{category=\u0026#39;Category_4\u0026#39;}, Category{category=\u0026#39;Category_5\u0026#39;}, Category{category=\u0026#39;Category_6\u0026#39;}, Category{category=\u0026#39;Category_7\u0026#39;}, Category{category=\u0026#39;Category_8\u0026#39;}, Category{category=\u0026#39;Category_9\u0026#39;}, Category{category=\u0026#39;Category_10\u0026#39;}] Conclusion In this article, we learned how to use the Future interface in Java and its limitations. We learned how to overcome these limitations by using the CompletableFuture class. After that, we analyzed a demo application, and step by step using the potential offered by CompletableFuture we refactored it for better performance.\n","date":"February 21, 2022","image":"https://reflectoring.io/images/stock/0117-future-1200x628-branded_hub111088ce133503489c85678f8ae0f75_83677_650x0_resize_q90_box.jpg","permalink":"/java-completablefuture/","title":"Improving Performance with Java's CompletableFuture"},{"categories":["Node"],"contents":"A module system allows us to split up our code in different parts or to include code written by other developers.\nSince the very beginning of NodeJS, the CommonJS module system is the default module system within the ecosystem. However, recently a new module system was added to NodeJS - ES modules.\nWe are going to have a look at both of them, discuss why we need a new module system in the first place and when to use which.\n Example Code This article is accompanied by a working code example on GitHub. Why Do We Need a Module System in NodeJS? Usually, we want to split up our code into different files as soon as our code base grows. This way, we can not only organize and reuse code in a structured manner. 
We can also control in which file which part of the code is accessible.\nWhile this is a fundamental part in most programming languages, this was not the case in JavaScript. Everything we write in JavaScript is global by default. This hasn\u0026rsquo;t been a huge issue in the early beginnings of the language. As soon as developers began to write full-blown applications in JavaScript, however, it got them into real trouble.\nThis is why the NodeJS creators initially decided to include a default module system, which is CommonJS.\nCommonJS: The Default NodeJS Module System In NodeJS each .js file is handled as a separate CommonJS module. This means that variables, functions, classes, etc. are not accessible to other files by default. You need to explicitly tell the module system which parts of your code should be exported.\nThis is done via the module.exports object or the exports shortcut, which are both available in every CommonJS module. Whenever you want to import code into a file, you use the require() function. Let\u0026rsquo;s see how this all works together.\nImporting Core NodeJS Modules Without writing or installing any module, you can just start by importing any of NodeJS\u0026rsquo;s built-in modules:\nconst http = require(\u0026#34;http\u0026#34;); const server = http.createServer(function (_req, res) { res.writeHead(200); res.end(\u0026#34;Hello, World!\u0026#34;); }); server.listen(8080); Here we import the http module in order to create a simple NodeJS server. The http module is identified by require() via the string \u0026ldquo;http\u0026rdquo; which always points to the NodeJS internal module.\nNote how the result of require(\u0026quot;http\u0026quot;) is handled like every other function invocation. It is basically written to the local constant http. We can name it however we want to.\nImporting NPM Dependencies The same way, we can import and use modules from NPM packages (i.e. 
from the node_modules folder):\nconst chalk = require(\u0026#34;chalk\u0026#34;); // don\u0026#39;t forget to run npm install  console.log(chalk.blue(\u0026#34;Hello world printed in blue\u0026#34;)); Exporting and Importing Your Own Code To import our own code, we first need to tell CommonJS which aspects of our code should be accessible by other modules. Let\u0026rsquo;s assume we want to write our own logging module to make logs look a bit more colorful:\n// logger.js const chalk = require(\u0026#34;chalk\u0026#34;); exports.logInfo = function (message) { console.log(chalk.blue(message)); }; exports.logError = function logError(message) { console.log(chalk.red(message)); }; exports.defaultMessage = \u0026#34;Hello World\u0026#34;; Again, we import chalk which will colorize the log output. Then we add logInfo() and logError() to the existing exports object, which makes them accessible to other modules. Also, we add defaultMessage with the string \u0026ldquo;Hello World\u0026rdquo; only to demonstrate that exports can have various types.\nNow we want to use those exported artifacts in our index file:\n// index.js const logger = require(\u0026#34;./logger\u0026#34;); logger.logInfo(`${logger.defaultMessage}printed in blue`); logger.logError(\u0026#34;some error message printed in red\u0026#34;); As you can see, require() now receives a relative file path and returns whatever was put into the exports object.\nUsing module.exports Instead of exports The exports object is read-only, which means it will always remain the same object instance and cannot be overwritten. However, it is only a shortcut to the exports property of the module object. 
We could rewrite our logger module like this:\n// logger.js const chalk = require(\u0026#34;chalk\u0026#34;); function info(message) { console.log(chalk.blue(message)); } function error(message) { console.log(chalk.red(message)); } const defaultMessage = \u0026#34;Hello World\u0026#34;; module.exports = { logInfo: info, logError: error, defaultMessage, }; Now, instead of assigning functions directly to an object, we first declare everything and then create our own object, which is assigned to module.exports.\nNote that we have rewritten the internal function names from logInfo and logError to info and error respectively. This way, we can truly separate the internal from the external API. However, the code is often simpler and more approachable if we keep internal and external naming the same.\nWhere Do module.exports and require() Come From? Although at first glance it may seem like module.exports, exports, and require are global, they actually are not. CommonJS wraps your code in a function like this:\n(function(exports, require, module, __filename, __dirname) { // your code lives here }); This way, those keywords are always module-specific. Have a look into the NodeJS modules documentation to get a better understanding of the different function parameters.\n Importing Only Specific Properties Typically, we only need certain aspects of the code we import. In this case, we can make use of JavaScript\u0026rsquo;s destructuring feature:\n// index.js const { logError } = require(\u0026#34;./logger\u0026#34;); logError(\u0026#34;some error message printed in red\u0026#34;); This basically says \u0026ldquo;give me the property logError of the logger object and assign it to a local constant with the same name\u0026rdquo;. This might make our code look a bit cleaner.\nExporting Not Only Objects So far, we only exported objects. What if we want to export something different? No problem. We can assign any type to module.exports.
For example, we can rewrite our logger to be a class:\n// logger.js const chalk = require(\u0026#34;chalk\u0026#34;); class Logger { static defaultMessage = \u0026#34;Hello World\u0026#34;; static info(message) { console.log(chalk.blue(message)); } static error(message) { console.log(chalk.red(message)); } } module.exports = Logger; As we changed the function names a bit, we need to modify our index file:\n// index.js const Logger = require(\u0026#34;./logger\u0026#34;); Logger.info(`${Logger.defaultMessage} printed in blue`); Logger.error(\u0026#34;some error message printed in red\u0026#34;); We also clarify that we are using a class by capitalizing its name.\nLooks like we can now write clean and modular NodeJS code with the help of CommonJS. Why on earth do we need any other module system? Rest assured that there is a good reason for this.\nES Modules: The ECMAScript Standard So, why would we need another option for imports?\nAs we already learned, CommonJS was initially chosen to be the default module system for NodeJS. At this time there was no such thing as a built-in module system in JavaScript. Thanks to the enormous growth of worldwide JavaScript usage, the language evolved a lot.\nSince the 2015 edition of the underlying ECMAScript standard (ES2015) we actually have a standardized module system in the language itself, which is simply called ES Modules.\nIt took a while before the browser vendors and the NodeJS maintainers actually fully implemented the standard. This was finally the case for NodeJS with version 14, when it first got stable. So, let\u0026rsquo;s just dive into it!\nExport with ES Modules To keep the examples comparable, we stay with our logging example.
We need to rewrite our Logger class example like this:\n// logger.mjs import chalk from \u0026#34;chalk\u0026#34;; export class Logger { static defaultMessage = \u0026#34;Hello World\u0026#34;; static info(message) { console.log(chalk.blue(message)); } static error(message) { console.log(chalk.red(message)); } } Instead of the require() function for importing modules, we now use a specific import syntax.\nAlso, instead of a specific module object, we now use the export keyword in front of our class declaration. This tells the compiler which parts of the file should be accessible to other files.\nImport with ES Modules We need to change our index file as well:\n// index.mjs import { Logger } from \u0026#34;./logger.mjs\u0026#34;; Logger.info(`${Logger.defaultMessage} printed in blue`); Logger.error(\u0026#34;some error message printed in red\u0026#34;); Note how we use a slightly different import syntax compared to the logger file. Similar to the above-mentioned object destructuring, we explicitly choose the property we want to import from the logger module. While this was more of a special case with CommonJS, this is much more often seen with ES Modules.\nExports vs. Default Exports One reason why this might be seen more often is the way JavaScript distinguishes between named and default exports. We as implementers may choose one specific declaration to be the default export of our module:\nexport default class Logger {...} If we put the default keyword behind any export, we basically say \u0026ldquo;treat this as the thing every module gets, if it doesn\u0026rsquo;t ask for something specific\u0026rdquo;. We can (but are not forced to) import it by leaving out the curly brackets:\nimport Logger from \u0026#34;./logger.mjs\u0026#34;; As a consequence, we cannot declare more than one part of our code as the default export.\nHowever, we might declare no default at all. In this case, we cannot use the default import syntax.
The most obvious solution is then to explicitly specify what we want to import, just the way we have seen above.\nNamespace Imports There is another import option. We can simply say \u0026ldquo;give me everything the module exports and give it the namespace xyz\u0026rdquo;. To demonstrate this, we move the defaultMessage from the class to an exported constant declaration.\n// logger.mjs import chalk from \u0026#34;chalk\u0026#34;; export const defaultMessage = \u0026#34;Hello World\u0026#34;; export class Logger { static info(message) { console.log(chalk.blue(message)); } static error(message) { console.log(chalk.red(message)); } } Now we export two declarations from our file: defaultMessage and Logger, neither of them is a default export. If we still want to import all of it, we would use a namespace import:\n// index.mjs import * as LoggerModule from \u0026#34;./logger.mjs\u0026#34; LoggerModule.Logger.info(`${LoggerModule.defaultMessage} printed in blue`); LoggerModule.Logger.error(\u0026#34;some error message printed in red\u0026#34;); This way, everything from logger.mjs is put into a namespace with the name LoggerModule. Often we see this syntax as a fallback solution for imports from non-ES-Module files.\nNamed Default Imports The default import we used above actually is also a named import:\nimport Logger from \u0026#34;./logger.mjs\u0026#34;; Under the hood, this is a shortcut for:\nimport { default as Logger } from \u0026#34;./logger.mjs\u0026#34;; Anyway, most times we use the shortcut as it is simpler to read and follow.\n Importing CommonJS Modules from ES Modules Currently, we might quickly run into the need to import CommonJS modules as many NPM packages are not available as ES Modules. This is not an issue at all. NodeJS allows us to import CommonJS modules from ES Modules.
If we would like to import our CommonJS class export example from above, our ES Module import would look like this:\n// index.mjs import Logger from \u0026#34;./logger.js\u0026#34;; Logger.info(`${Logger.defaultMessage}printed in blue`); Logger.error(\u0026#34;some error message printed in red\u0026#34;); In this case, module.exports is simply treated as the default export which you might import as such.\nDifferences Between CommonJS and ES Modules There are a few key differences which you need to keep in mind when working with the two different NodeJS module systems. We are going to highlight the most important ones here.\nFile Extensions As you might already have noticed, in all of our ES modules imports we explicitly added the file extension to all file imports. This is mandatory for ES Modules (as opposed to e.g. CommonJS, Webpack or TypeScript).\nThis is significant as NodeJS distinguishes between CommonJS modules and ES Modules via the file extension. By default, files with the .js extension will be treated as CommonJS modules, while files with the .mjs extension are treated as ES Modules.\nHowever, you might want to configure your NodeJS project to use ES Modules as the default module system. Please consult the NodeJS documentation on file extensions to find out how to correctly configure your project.\nAs we already have seen, ES Modules can import CommonJS modules. Vice versa is not the case. CommonJS modules cannot import ES Modules. You are not able to import .mjs files from .js files. This is due to the different nature of the two systems.\nDynamic vs. Static The two module systems do not only have a different syntax. They also differ in the way how imports and exports are treated.\nCommonJS imports are dynamically resolved at runtime. The require() function is simply run at the time our code executes. As a consequence, you can call it everywhere in your code.\nWith ES Modules, imports are static, which means they are executed at parse time. 
This is why imports are \u0026ldquo;hoisted\u0026rdquo;. They are implicitly moved to the top of the file. Therefore, we cannot use the import syntax we have seen above just in the middle of our code. The upside of this is that errors can be caught upfront and developer tools can better support us with writing valid code.\nThere might be cases where we really need to dynamically import modules at runtime. There is a solution: the dynamic import() function. As we really should treat this as a special use case, we do not cover it in this article. You may consult the NodeJS documentation if you want to know more.\nWhen to Use Which? We have now learned about the two module system options in NodeJS. We have seen how we can create and import modules in CommonJS. We have also seen how to accomplish the same things with ES Modules.\nNow you might wonder which module system you should use. Of course, the answer is: it depends. My personal advice is the following:\nIf you are starting a new project, use ES Modules. It has been standardized for many years now. NodeJS has had stable support for it since version 14, which was released in April 2020. You can find a lot of documentation and examples out there. Many package maintainers have already published their libraries with ES Modules support. There is no reason not to use it.\nThings may be different if you are maintaining an existing NodeJS project which uses CommonJS. The most important fact is that currently there is no pressure to migrate your existing code. CommonJS is still the default module system of NodeJS and there are no signs that this will change soon. However, you might migrate to the ES Modules syntax while using CommonJS under the hood. This can be accomplished by tools like Babel or TypeScript and allows you to more easily switch to ES Modules at a later point in time.\nWhatever you choose, you won\u0026rsquo;t make a huge mistake. 
Both are valid options, and this is the beauty of the JavaScript ecosystem. As we have just seen, it has evolved a lot in the past decade, and you have options for nearly anything you want to achieve.\n","date":"February 18, 2022","image":"https://reflectoring.io/images/stock/0118-module-1200x628-branded_hu63a6c159580aa499b481ef25b7df6540_73843_650x0_resize_q90_box.jpg","permalink":"/nodejs-modules-imports/","title":"CommonJS vs. ES Modules: Modules and Imports in NodeJS"},{"categories":["Node"],"contents":"Express is a web application framework for Node.js. We can use this framework to build APIs, serve web pages and other static assets, and use it as a lightweight HTTP server and backend for our applications.\nIn this article, we will introduce the Express framework and learn to use it to build HTTP servers, REST APIs, and web pages using both JavaScript and TypeScript.\n Example Code This article is accompanied by a working code example on GitHub. Introducing Node.js A basic understanding of Node.js is essential for working with Express.\nNode.js is an open-source runtime environment for executing server-side JavaScript applications. A defining feature of the Node.js runtime is its non-blocking, event-driven input/output (I/O) request processing model.\nNode.js uses the V8 JavaScript engine developed by Google, which also powers the Google Chrome web browser. This makes the runtime very fast and hence enables fast processing of requests.\nTo use Express, we have to first install Node.js and npm in our development environment. npm is a JavaScript package manager. npm is bundled with Node.js by default.\nWe can refer to the npm site for the installation instructions for npm. Similarly, we can find the installation instructions for Node.js on its official website.\nWhat is Express? Express is a popular Node.js framework for authoring web applications. 
Express provides methods to specify the function to be called for a particular HTTP verb (GET, POST, PUT, etc.) and URL pattern (\u0026ldquo;Route\u0026rdquo;).\nA typical Express application looks like this:\n// Import the express function const express = require(\u0026#39;express\u0026#39;) const app = express() // Define middleware for all routes app.use((request, response, next) =\u0026gt; { console.log(request) next()}) // Define route for GET request on path \u0026#39;/\u0026#39; app.get(\u0026#39;/\u0026#39;, (request, response) =\u0026gt; { response.send(\u0026#39;response for GET request\u0026#39;); }); // Start the server on port 3000 app.listen( 3000, () =\u0026gt; console.log(`Server listening on port 3000.`)); When we run this application in Node.js, we will have an HTTP server listening on port 3000 which can respond to a GET request to the URL http://localhost:3000/ with a text message: response for GET request.\nWe can observe the following components in this application:\n A server that listens for HTTP requests on a port The app object created by calling the Express function Routes that define URLs or paths to receive the HTTP request with different HTTP verbs Handler functions associated with each route are called by the framework when a request is received on a particular route. 
Middleware functions that perform processing on the request in different stages of a request handling pipeline  While Express itself is fairly minimalist, there is a wealth of utilities created in the community in the form of middleware packages that can address almost any web development problem.\nInstalling Express Let us start by first installing Express.\nBefore that let us create a folder and initialize a Node.js project under it by running the npm init command:\nmkdir storefront cd storefront npm init -y Running these commands will create a Node.js project containing a package.json file resulting in this output:\nWrote to /.../storefront/package.json : { \u0026#34;name\u0026#34;: \u0026#34;storefront\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;1.0.0\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;main\u0026#34;: \u0026#34;index.js\u0026#34;, \u0026#34;scripts\u0026#34;: { \u0026#34;test\u0026#34;: \u0026#34;echo \\\u0026#34;Error: no test specified\\\u0026#34; \u0026amp;\u0026amp; exit 1\u0026#34; }, \u0026#34;keywords\u0026#34;: [], \u0026#34;author\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;license\u0026#34;: \u0026#34;ISC\u0026#34; } The Express framework is published as a Node.js module and made available through the npm registry.\nInstallation of the framework is done using the npm install command as shown below:\nnpm install express --save Running this command will install the Express framework and add it as a dependency in the dependencies list in a package.json file as shown below:\n{ \u0026#34;name\u0026#34;: \u0026#34;storefront\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;1.0.0\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;main\u0026#34;: \u0026#34;index.js\u0026#34;, \u0026#34;scripts\u0026#34;: { \u0026#34;test\u0026#34;: \u0026#34;echo \\\u0026#34;Error: no test specified\\\u0026#34; \u0026amp;\u0026amp; exit 1\u0026#34; }, \u0026#34;keywords\u0026#34;: [], 
\u0026#34;author\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;license\u0026#34;: \u0026#34;ISC\u0026#34;, \u0026#34;dependencies\u0026#34;: { \u0026#34;express\u0026#34;: \u0026#34;^4.17.2\u0026#34; } } In this package.json file, we can see the Express framework added as a dependency: \u0026quot;express\u0026quot;: \u0026quot;^4.17.2\u0026quot;.\nRunning a Simple Web Server Now that Express is installed, let us create a new file named index.js and open the project folder in our favorite code editor. We are using Visual Studio Code as our source-code editor.\nLet us now add the following lines of code to index.js:\nconst express = require(\u0026#39;express\u0026#39;); const app = express(); // start the server app.listen(3000, () =\u0026gt; console.log(\u0026#39;Server listening on port 3000.\u0026#39;)); The first line here is importing the express module from the Express framework package we installed earlier. This module is a function, which we are running on the second line to assign its handle to a variable named app. Next, we are calling the listen() function on the app handle to start our server.\nThe listen() function takes a port number as the first parameter on which the server will listen for the requests from clients.\nThe second parameter to the listen() function is optional. It is a function that runs after the server starts up. Here we are setting the port number as 3000 and a function which will print a message to the console about the server starting up.\nLet us run our application with the node command:\nnode index.js We can see the message in our listen() function appearing in our terminal window:\nServer listening on port 3000. Our server is running now and listening for requests on port 3000. When we visit the URL localhost:3000 in our web browser, we will get a message: Cannot GET /. 
This means that the server recognizes it as an HTTP GET request on the root path / but fails to give any response.\nWe will fix this in the next section where we will add some routes to our server which will enable it to give appropriate responses by detecting the request path sent in the browser URL.\nAdding our First Route for Handling Requests A route in Express helps us to determine how our application will respond to a client request sent to a particular URL or path made with a specific HTTP request method like GET, POST, PUT, and so on.\nWe define a route by associating it with one or more callback functions called handler functions, which are executed when the application receives a request to the specified route (endpoint) and the HTTP method is matched.\nLet us now add a route that will enable our Express application to handle a GET request sent to the root path /:\nconst express = require(\u0026#39;express\u0026#39;); const app = express(); // handle get request app.get(\u0026#39;/\u0026#39;, (request, response) =\u0026gt; { // send back a response in plain text  response.send(\u0026#39;response for GET request\u0026#39;); }); // start the server app.listen(3000, () =\u0026gt; console.log(\u0026#39;Server listening on port 3000.\u0026#39;)); We have added the route just after the declaration of the app variable. In this route, we tell our Express server how to handle a GET request sent to the server.\nThe app.get() function takes two parameters:\n  Route Path: The route path is sent as the first parameter. It is in the form of a URL that will be matched with the URL of the HTTP request received by the server. In this case, we are using the route path /, which is the root of our website. This route will match GET requests sent to the URL: localhost:3000. 
Instead of using fixed URLs, we can also use string patterns, or regular expressions to define route paths.\n  Handler Function: The second parameter is a function with two arguments: request, and response, also called the Handler Function. The first argument of the handler function: request represents the HTTP request that was sent to the server. We can use this object to extract information about the HTTP request like request headers, and request parameters sent as a query string, path parameters, request body, etc. The second argument: response represents the HTTP response that we will be sending back to the client.\n  Here, we are calling the send() method on the response object to send back a response in plain text: response for GET request.\nAdding Parameters to Routes A route as we saw earlier is identified by a route path in combination with a request method which defines the endpoints at which requests can be made.\nRoute paths are often accompanied by route parameters and take this form: /products/:brand\nLet us define a route containing a route parameter as shown below. For simplicity of this example, we are reading from a hardcoded in-memory products array. 
In a real-world application, we will want to replace the hardcoded data with data residing in a database:\nlet products = [ {\u0026#34;name\u0026#34;:\u0026#34;television\u0026#34;, \u0026#34;price\u0026#34;:112.34, \u0026#34;brand\u0026#34;:\u0026#34;samsung\u0026#34;}, {\u0026#34;name\u0026#34;:\u0026#34;washing machine\u0026#34;, \u0026#34;price\u0026#34;: 345.34, \u0026#34;brand\u0026#34;: \u0026#34;LG\u0026#34;}, {\u0026#34;name\u0026#34;:\u0026#34;Macbook\u0026#34;, \u0026#34;price\u0026#34;: 3454.34, \u0026#34;brand\u0026#34;: \u0026#34;Apple\u0026#34;} ]; // handle get request for fetching products // belonging to a particular brand app.get(\u0026#39;/products/:brand\u0026#39;, (request, response) =\u0026gt; { // read the captured value of route parameter named: brand  const brand = request.params.brand console.log(`brand ${brand}`) const productsFiltered = products.filter(product=\u0026gt; product.brand == brand) response.json(productsFiltered) }); Here we have used a route parameter named brand. Route parameters are named URL segments that are used to capture the values specified at their position in the URL. The captured values are populated in the request.params object, with the name of the route parameter specified in the path as their respective keys.\nIn this example, the name of the route parameter is brand and is read with the construct request.params.brand.\nModularizing Routes with Express Router Defining all the routes in a single file becomes unwieldy in real-life projects. We can add modularity to the routes with the help of Express\u0026rsquo;s Router class. This class can be used to create modular route handlers.\nAn instance of the Router class is a complete middleware and routing system. Let us define our routes in a separate file and name it routes.js. 
We will define our routes using the Router class like this:\n// routes.js const express = require(\u0026#39;express\u0026#39;) const router = express.Router() // handle get request for path /products router.get(\u0026#39;/products\u0026#39;, (request, response) =\u0026gt; { ... }); // handle get request for path /products/:brand router.get(\u0026#39;/products/:brand\u0026#39;, (request, response) =\u0026gt; { ... ... }); module.exports = router We will next define our server in another file: server.js and import the routes defined in the file: routes.js. The server code looks much more concise like this:\n// server.js const express = require(\u0026#39;express\u0026#39;) const routes = require(\u0026#39;./routes\u0026#39;); const app = express() const PORT = process.env.PORT || 3000 app.use(routes) app.listen(PORT, () =\u0026gt; { console.log(`Server listening at http://localhost:${PORT}`) }) We have also used an environment variable to define the server port which will default to 3000 if the port is not supplied.\nLet us run this file with the node command:\nnode server.js We will use this file: server.js henceforth to run our HTTP server instead of index.js.\nAdding Middleware for Processing Requests Middleware in Express are functions that come into play after the server receives the request and before the response is sent to the client. They are arranged in a chain and are called in sequence.\nWe can use middleware functions for different types of processing tasks required for fulfilling the request like database querying, making API calls, preparing the response, etc, and finally calling the next middleware function in the chain.\nMiddleware functions take three arguments: the request object (request), the response object (response), and optionally the next() middleware function :\nfunction middlewareFunction(request, response, next){ ... 
next() } Middleware functions in Express are of the following types:\n Application-level middleware which runs for all routes in an app object Router level middleware which runs for all routes in a router object Built-in middleware provided by Express like express.static, express.json, express.urlencoded Error handling middleware for handling errors Third-party middleware maintained by the community  Adding Application-Level Middleware for Processing All Requests We will define our middleware functions in a file: middleware.js.\nLet us define a simple middleware function which prints the request to the console:\nconst requestLogger = (request, response, next) =\u0026gt; { console.log(request); next(); }; As we can see the middleware function takes the request and the response objects as the first two parameters and the next() function as the third parameter.\nLet us attach this middleware function to the app object by calling the use() method:\nconst express = require(\u0026#39;express\u0026#39;); const app = express(); const requestLogger = (request, response, next) =\u0026gt; { console.log(request); next(); }; app.use(requestLogger); Since we have attached this function to the app object, it will get called for every call to the express application. Now when we visit http://localhost:3000, we can see the output of the incoming request object in the terminal window.\nUsing Express' Built-in Middleware for some more Processing Express also offers middleware functions called built-in middleware.\nTo demonstrate the use of Express' built-in middleware, let us create a route for the HTTP POST method for adding a new product. The handler function for this route will accept product data from the request object in JSON format. 
As such, we require a JSON parser to parse the fields of the new product.\nFor this we will use Express' built-in middleware for parsing JSON and attach it to our router object like this:\n// routes.js const express = require(\u0026#39;express\u0026#39;) const { requireJsonContent } = require(\u0026#39;./middleware\u0026#39;) const router = express.Router() // use express\u0026#39; json middleware and // set the body size limit of the JSON payload to 100 bytes router.use(express.json({ limit: 100 })) We have also configured a maximum size of 100 bytes for the JSON request.\nNow we can extract the fields from the JSON payload sent in the request body as shown in this route definition:\n// routes.js const express = require(\u0026#39;express\u0026#39;) const router = express.Router() let products = [] // handle post request for path /products router.post(\u0026#39;/products\u0026#39;, (request, response) =\u0026gt; { // sample JSON request  // {\u0026#34;name\u0026#34;:\u0026#34;furniture\u0026#34;, \u0026#34;brand\u0026#34;:\u0026#34;century\u0026#34;, \u0026#34;price\u0026#34;:1067.67}  // Extract name of product  const name = request.body.name const brand = request.body.brand console.log(name + \u0026#34; \u0026#34; + brand) products.push({ name: request.body.name, brand: request.body.brand, price: request.body.price }) const productCreationResponse = { productID: \u0026#34;12345\u0026#34;, result: \u0026#34;success\u0026#34; } response.json(productCreationResponse) }) Here we are extracting the contents of the JSON request by calling request.body.FIELD_NAME before using those fields for adding a new product.\nSimilarly, we will use Express' urlencoded() middleware to process URL-encoded fields submitted through an HTTP form:\napp.use(express.urlencoded({ extended: false })); Adding Middleware for a Single Route Next, let us define another middleware function that will apply to a specific route only. 
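To make the chaining model described earlier concrete, here is a heavily simplified, hand-rolled sketch of how a middleware pipeline dispatches its functions in sequence. This is an illustration of the mechanism only, not Express internals, and all names in it (runPipeline, dispatch) are made up for this example:

```javascript
// Simplified middleware pipeline: each function receives the request,
// the response, and a next() callback that advances to the next function.
function runPipeline(middlewares, request, response) {
  function dispatch(index) {
    if (index >= middlewares.length) return;
    // next() simply invokes the following middleware in the chain
    middlewares[index](request, response, () => dispatch(index + 1));
  }
  dispatch(0);
}

// Usage: a logger followed by a handler that "sends" the response
const calls = [];
const request = { url: '/products' };
const response = {};
runPipeline(
  [
    (req, res, next) => { calls.push('logger'); next(); },
    (req, res, next) => { calls.push('handler'); res.body = 'done'; },
  ],
  request,
  response
);
// calls is now ['logger', 'handler'] and response.body is 'done'
```

A middleware that does not call next() ends the chain, which is exactly why a handler that sends the response typically omits it.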
We will attach this to the route instead of the app object.\nAs an example, let us validate the existence of JSON content in the HTTP POST request before performing any further processing, and send back an error response if JSON content is not received.\nOur middleware function for performing this check will look like this:\n// middleware.js  const requireJsonContent = (request, response, next) =\u0026gt; { if (request.headers[\u0026#39;content-type\u0026#39;] !== \u0026#39;application/json\u0026#39;) { response.status(400).send(\u0026#39;Server requires application/json\u0026#39;) } else { next() } } module.exports = { requireJsonContent } Here we are checking for the existence of a content-type header with a value of application/json in the request. We are sending back an error response with status 400 accompanied by an error message if this header is not present. Otherwise, the next() function is invoked to call the subsequent middleware present in the chain.\nOur route for the HTTP POST method with the requireJsonContent() middleware function attached will look like this:\n// handle post request for path /products router.post(\u0026#39;/products\u0026#39;, // first function in the chain will check for JSON content  requireJsonContent, // second function will process the request if first function detects JSON  (request, response) =\u0026gt; { // process json request  ... ... response.json({productID: \u0026#34;12345\u0026#34;, result: \u0026#34;success\u0026#34;}) }); Here we have two middleware functions attached to the route with route path /products.\nThe first middleware function requireJsonContent() will pass the control to the next function in the chain if the content-type header in the HTTP request contains application/json. 
The second middleware function processes the request further and sends back a response in JSON format to the caller.\nAdding Error Handling Middleware Express comes with a default error handler that takes care of any errors that might be encountered in the app. This default error handler is a middleware function that is added at the end of the middleware function stack.\nWhen an error is encountered in synchronous code, Express catches it automatically. Here is an example of a route handler function where we simulate an error condition by throwing an error:\nconst express = require(\u0026#39;express\u0026#39;) const router = express.Router() router.get(\u0026#39;/productswitherror\u0026#39;, (request, response) =\u0026gt; { let err = new Error(\u0026#34;processing error\u0026#34;) err.statusCode = 400 throw err }); Here we are throwing an error with status code 400 and an error message: processing error.\nWhen this route is invoked with URL: localhost:3000/productswitherror, Express catches this error for us and responds with the error’s status code, message, and the stack trace of the error (for non-production environments) as shown below:\nError: processing error at ...storefront/routes.js:68:9 at Layer.handle [as handle_request] (...storefront/node_modules/express/lib/router/layer.js:95:5) at next (...storefront/node_modules/express/lib/router/route.js:137:13) at Route.dispatch (...storefront/node_modules/express/lib/router/route.js:112:3) at Layer.handle [as handle_request] (...storefront/node_modules/express/lib/router/layer.js:95:5) at ...storefront/node_modules/express/lib/router/index.js:281:22 ... ... 
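Express can catch errors thrown from synchronous handlers because, roughly speaking, each handler is invoked inside a try/catch and anything thrown is forwarded to the error-handling chain. A much-simplified sketch of that idea follows; it is our own illustration with made-up names (invokeHandler), not the actual Express implementation:

```javascript
// Simplified: invoke a route handler and route any synchronously thrown
// error to an error handler instead of crashing the process.
function invokeHandler(handler, request, response, errorHandler) {
  try {
    handler(request, response);
  } catch (err) {
    // Forward the thrown error, mirroring what Express does by default
    errorHandler(err, request, response);
  }
}

// Usage: a handler that throws, and an error handler that records the error
const response = { statusCode: 200, body: null };
invokeHandler(
  (req, res) => {
    const err = new Error('processing error');
    err.statusCode = 400;
    throw err;
  },
  { url: '/productswitherror' },
  response,
  (err, req, res) => {
    res.statusCode = err.statusCode;
    res.body = err.message;
  }
);
// response.statusCode is now 400 and response.body is 'processing error'
```

Note that a try/catch like this only helps with synchronous errors; in Express 4, errors raised in asynchronous callbacks are not caught this way and must be passed to next(err) explicitly.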
We can change this default error handling behavior by adding a custom error handler.\nThe custom error handling in Express works by adding an error parameter into a middleware function in addition to the parameters: request, response, and the next() function.\nThe basic signature of Express Middleware which handles errors appears as:\nfunction customErrorHandler(err, request, response, next) { // Error handling middleware functionality here  } When we want to call an error-handling middleware, we pass on the error object by calling the next() function like this:\nconst errorLogger = (err, request, response, next) =\u0026gt; { console.log( `error ${err.message}`) next(err) // calling next middleware } Let us define three middleware error handling functions in a separate file: errormiddleware.js as shown below:\n// errormiddleware.js const errorLogger = (err, request, response, next) =\u0026gt; { console.log( `error ${err.message}`) next(err) // calling next middleware } const errorResponder = (err, request, response, next) =\u0026gt; { response.header(\u0026#34;Content-Type\u0026#34;, \u0026#39;application/json\u0026#39;) response.status(err.statusCode).send(err.message) } const invalidPathHandler = (request, response, next) =\u0026gt; { response.status(404) response.send(\u0026#39;invalid path\u0026#39;) } module.exports = { errorLogger, errorResponder, invalidPathHandler } These middleware error handling functions perform different tasks: one of them logs the error message, the second sends the error response to the client, and the third one responds with a message for invalid path when a non-existing route is requested.\nNext, let us import these error handling middleware functions into our server.js file and attach them to our application:\n// server.js const express = require(\u0026#39;express\u0026#39;) const routes = require(\u0026#39;./routes\u0026#39;) const { errorLogger, errorResponder, invalidPathHandler } = 
require(\u0026#39;./errormiddleware\u0026#39;) const { requestLogger } = require(\u0026#39;./middleware\u0026#39;) const app = express() const PORT = process.env.PORT || 3000 app.use(requestLogger) app.use(routes) // adding the error handlers app.use(errorLogger) app.use(errorResponder) app.use(invalidPathHandler) app.listen(PORT, () =\u0026gt; { console.log(`Server listening at http://localhost:${PORT}`) }) Here we have attached the three middleware functions for handling errors to the app object by calling the use() method.\nTo test how our application handles errors with the help of these error handling functions, let us invoke the same route we invoked earlier with URL: localhost:3000/productswitherror.\nNow instead of the default error handler, the first two error handlers get triggered. The first one logs the error message to the console and the second one sends the error message in the response.\nWhen we request a non-existent route, the third error handler is invoked, giving us an error message: invalid path.\nCreating Dynamic HTML with a Template Engine We can create dynamic HTML pages using Express from our server-side applications by configuring a template engine.\nA template engine works by creating a template file with placeholders mapped to variables. We assign values to the variables declared in our template file in our application, which will then return a response to the web browser, often dynamically creating an HTML page for the browser to display by inserting the retrieved data into placeholders.\nLet us generate HTML for a home page using the Pug template engine. 
For that we need to first install the Pug template engine using npm:\nnpm install pug --save Next, we will set the following properties in our app object defined in the server.js file to render the template files:\n// server.js const express = require(\u0026#39;express\u0026#39;) const app = express() app.set(\u0026#39;view engine\u0026#39;, \u0026#39;pug\u0026#39;) app.set(\u0026#39;views\u0026#39;, \u0026#39;./views\u0026#39;) The views property defines the directory where the template files are located. Let us define a folder named views in the root project directory and create a template file named home.pug with the following contents:\nhtml head title= title body h1= message div p Generated by express at span= sysdate This is a Pug template with three placeholders represented by the variables: title, message, and sysdate.\nWe set the values of these variables in a handler function associated with a route as shown below:\nconst express = require(\u0026#39;express\u0026#39;) const router = express.Router() router.get(\u0026#39;/home\u0026#39;, (request, response) =\u0026gt; { response.render(\u0026#34;home\u0026#34;, { title: \u0026#34;Home\u0026#34;, message: \u0026#34;My home page\u0026#34;, sysdate: new Date().toLocaleString() }) }) Here we are invoking the render() method on the response object to render the template named home and assign the values of the three variables in the template file. When we browse the route with URL: http://localhost:3000/home, we can see the HTML rendered from the template in the browser.\nOther than Pug, some other template engines supported by Express are Mustache and EJS. The complete list can be found on the Express website.\nDeveloping Express Applications with TypeScript So far we have written all our code in JavaScript. However, a major downside of JavaScript is the lack of support for types like string, number, etc. The types are interpreted at runtime. 
This means that unintentional type-related errors can only be detected during runtime, making it unfavorable for building enterprise applications. The TypeScript language seeks to address this limitation.\nTypeScript is an open-source language developed by Microsoft. It is a superset of JavaScript with additional capabilities, most notably static type definitions, making it an excellent tool for a better and safer development experience.\nLet us look at the steps for building an Express application using the TypeScript language.\nInstalling TypeScript and other Configurations We will enrich the project we have used till now to add support for TypeScript, starting with the installation of TypeScript.\nWe will install TypeScript as an npm package called typescript along with another package: ts-node:\nnpm i -D typescript ts-node The typescript package transforms the code written in TypeScript language to JavaScript using a process called transcompiling or transpiling.\nThe ts-node npm package enables running TypeScript files from the command line in Node.js environments.\nThe -D, also known as the --save-dev option, means that both the packages are installed as development dependencies. After the installation, we will have the devDependencies property inside the package.json populated with these packages as shown below:\n{ \u0026#34;name\u0026#34;: \u0026#34;storefront\u0026#34;, ... \u0026#34;devDependencies\u0026#34;: { \u0026#34;ts-node\u0026#34;: \u0026#34;^10.5.0\u0026#34;, \u0026#34;typescript\u0026#34;: \u0026#34;^4.5.5\u0026#34; } } Next, let us create a JSON file named tsconfig.json in our project’s root folder. 
We can define different options for compiling the TypeScript code inside the project as shown here:\n{ \u0026#34;compilerOptions\u0026#34;: { \u0026#34;module\u0026#34;: \u0026#34;commonjs\u0026#34;, \u0026#34;target\u0026#34;: \u0026#34;es6\u0026#34;, \u0026#34;rootDir\u0026#34;: \u0026#34;./\u0026#34;, \u0026#34;esModuleInterop\u0026#34;: true } } Here we have specified four basic compiler options for the module system to be used in the compiled JavaScript code, targeted JavaScript version of the compiled code, root location of typescript files inside the project, and a flag that enables default imports for TypeScript modules with export = syntax.\nNext, we will need the type definitions of the Node APIs and Express to be fetched from the @types namespace. For this we will need to install the @types/node and @types/express packages as a development dependency:\nnpm i -D @types/node @types/express Our setup for TypeScript is now complete with the options for transpiling the TypeScript set and the types from Node.js and Express framework installed. We will use this setup to create our server and routes in TypeScript in the next sections.\nRunning the Server Created with TypeScript Let us create a file named app.ts which will contain the code written in TypeScript for running the server application in the root directory. The TypeScript code for running the server application looks like this:\nimport express from \u0026#39;express\u0026#39;; const app = express(); const port = 3000; app.listen(port, () =\u0026gt; { console.log(`Server listening at port ${port}.`); }); Here we have used the express module to create a server as we have seen before. 
With this configuration, the server will run on port 3000 and can be accessed with the URL: http://localhost:3000.\nLet us next install the utility package Nodemon as another development dependency, which will speed up development by automatically restarting the server after each change:\nnpm i -D nodemon We will next add a script named serve with nodemon app.ts command inside the scripts property in our project\u0026rsquo;s package.json file:\n\u0026#34;scripts\u0026#34;: { \u0026#34;serve\u0026#34;: \u0026#34;nodemon app.ts\u0026#34; } This script is used to start the server. The ts-node package installed earlier makes this possible under the hood, as normally we will not be able to run TypeScript files from the command line.\nNow we can start our server by running the following command:\nnpm run serve The output in the console after running the server looks like this:\n[nodemon] 2.0.15 [nodemon] to restart at any time, enter `rs` [nodemon] watching path(s): *.* [nodemon] watching extensions: ts,json [nodemon] starting `ts-node app.ts` Server listening at port 3000. We can choose not to use Nodemon and instead run the application using the below command:\nnpx ts-node app.ts Running this command will start the server and result in a similar output as before. 
We have used npx here which is a command-line tool that can execute a package from the npm registry without installing that package.\nAdding a Route with a Handler Function in TypeScript Let us now modify the TypeScript code written in the earlier section to add a route for defining a REST API as shown below:\nimport express, { Request, Response, NextFunction } from \u0026#39;express\u0026#39;; const app = express(); const port = 3000; // Define a type for Product interface Product { name: string; price: number; brand: string; }; // Define a handler function const getProducts = ( request: Request, response: Response, next: NextFunction) =\u0026gt; { // Defining a hardcoded array of product entities  let products: Product[] = [ {\u0026#34;name\u0026#34;:\u0026#34;television\u0026#34;, \u0026#34;price\u0026#34;:112.34, \u0026#34;brand\u0026#34;:\u0026#34;samsung\u0026#34;}, {\u0026#34;name\u0026#34;:\u0026#34;washing machine\u0026#34;, \u0026#34;price\u0026#34;: 345.34, \u0026#34;brand\u0026#34;: \u0026#34;LG\u0026#34;}, {\u0026#34;name\u0026#34;:\u0026#34;Macbook\u0026#34;, \u0026#34;price\u0026#34;: 3454.34, \u0026#34;brand\u0026#34;: \u0026#34;Apple\u0026#34;} ] // sending a JSON response  response.status(200).json(products); } // Define the route with route path \u0026#39;/products\u0026#39; app.get(\u0026#39;/products\u0026#39;, getProducts); // Start the server app.listen(port, () =\u0026gt; { console.log(`Server listening at port ${port}.`); }); We have modified the import statement on the first line to import the TypeScript interfaces that will be used for the request, response, and next parameters inside the Express middleware.\nNext, we have defined a type named Product containing attributes: name, price, and brand. 
Then we defined the handler function for returning an array of products and finally associated it with the route path /products.\nWe can now access the URL: http://localhost:3000/products from the browser or run a curl command and get a JSON response containing the products array.\nConclusion Here is a list of the major points for a quick reference:\n  Express is a lightweight framework for building web applications on Node.js\n  Express is installed as an npm module in a Node.js project\n  We define routes in Express by associating handler functions with URL paths, also called route paths.\n  We use one or more middleware functions to perform intermediate processing between the time the request is received and the response is sent.\n  Express comes with a default error handler for handling error conditions. Beyond this, we can define custom error handlers as middleware functions.\n  We can create dynamic HTML pages using Express from our server-side applications by configuring template engines like Pug, Mustache, and EJS.\n  In this article, we built a web application containing GET and POST endpoints for a REST API and another endpoint for rendering an HTML page.\n  We also used TypeScript to define a Node.js server application containing an endpoint for a REST API.\n  The code of our web application is distributed across the following files:\n routes.js contains all the route handler functions for the REST API along with another route to render the dynamic HTML based on a Pug template. middleware.js contains all the middleware functions. errormiddleware.js contains all the custom error handlers. server.js which uses functions from the above files and runs the Express application. app.ts which contains the code written in TypeScript for running a server application with a REST API endpoint.    
You can refer to all the source code used in the article on Github.\n","date":"February 15, 2022","image":"https://reflectoring.io/images/stock/0118-keyboard-1200x628-branded_huf25a9b6a90140c9cfeb91e792ab94429_105919_650x0_resize_q90_box.jpg","permalink":"/getting-started-with-express/","title":"Getting Started with Express"},{"categories":["AWS"],"contents":"Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables decoupling and communication between the components of a distributed system. We can send, store, and receive messages at any volume, without losing messages or requiring other systems to be available.\nBeing fully managed, Amazon SQS also eliminates the additional overhead associated with managing and operating message-oriented middleware, thereby empowering developers to focus on application development instead of managing infrastructure.\nIn this article, we will introduce Amazon SQS, understand its core concepts of the queue and sending and receiving messages and work through some examples.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS SQS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and others, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\nAlso, check out the sample chapters from the book about deploying a Spring Boot application with CDK and how to design a CDK project.\n  Example Code This article is accompanied by a working code example on GitHub. What is Message Queueing? Message queueing is an asynchronous style of communication between two or more processes.\nMessages and queues are the basic components of a message queueing system.\nPrograms communicate with each other by sending data in the form of messages which are placed in a storage called a queue, instead of calling each other directly. 
The receiver programs retrieve the message from the queue and do the processing without any knowledge of the producer programs.\nThis allows the communicating programs to run independently of each other, at different speeds and times, in different processes, and without having a direct connection between them.\nCore Concepts of Amazon SQS The Amazon Simple Queue Service (SQS) is a fully managed distributed message queueing system. The queue provided by the SQS service redundantly stores the messages across multiple Amazon SQS servers. Let us look at some of its core concepts.\nStandard Queues vs. FIFO Queues Amazon SQS provides two types of message queues:\nStandard queues: They provide maximum throughput, best-effort ordering, and at-least-once delivery. The standard queue is the default queue type in SQS. When using standard queues, we should design our applications to be idempotent so that there is no negative impact when processing the same message more than once.\nFIFO queues: FIFO (First-In-First-Out) queues are used for messaging when the order of operations and events exchanged between applications is important, or in situations where we want to avoid processing duplicate messages. FIFO queues guarantee that messages are processed exactly once, in the exact order that they are sent.\nOrdering and Deduplication (Exactly-Once Delivery) in FIFO Queues A FIFO queue preserves the order in which messages are sent and received and a message is delivered exactly once.\nThe messages are ordered based on message group ID. 
If multiple hosts send messages with the same message group ID to a FIFO queue, Amazon SQS stores the messages in the order in which they arrive for processing.\nTo make sure that Amazon SQS preserves the order in which messages are sent and received, each producer should use a unique message group ID to send all its messages.\nMessages that belong to the same message group are always processed one by one, in a strict order relative to the message group.\nFIFO queues also help us to avoid sending duplicate messages to a queue. If we send the same message within the 5-minute deduplication interval, it is not added to the queue. We can configure deduplication in two ways:\n  Enabling Content-Based Deduplication: When this property is enabled for a queue, SQS uses a SHA-256 hash to generate the message deduplication ID using the contents in the body of the message.\n  Providing the Message Deduplication ID: When a message with a particular message deduplication ID is sent, any messages subsequently sent with the same message deduplication ID are accepted successfully but are not delivered during the 5-minute deduplication interval.\n  Queue Configurations After creating the queue, we need to configure the queue with specific attributes based on our message processing requirements. Let us look at some of the properties which we configure:\nDead-letter Queue: A dead-letter queue is a queue that one or more source queues can use for messages that are not consumed successfully. 
They are useful for debugging our applications or messaging system because they let us isolate unconsumed messages to determine why their processing does not succeed.\nDead-letter Queue Redrive: We use this configuration to define the time after which unconsumed messages are moved out of an existing dead-letter queue back to their source queues.\nVisibility Timeout: The visibility timeout is a period during which a message received from a queue by one consumer is not visible to the other message consumers. Amazon SQS prevents other consumers from receiving and processing the message during the visibility timeout period.\nMessage Retention Period: The amount of time for which a message remains in the queue. The messages in the queue should be received and processed before this time is crossed. They are automatically deleted from the queue once the message retention period has expired.\nDelaySeconds: The length of time for which the delivery of all messages in the queue is delayed.\nMaximumMessageSize: The limit on the size of a message in bytes that can be sent to SQS before being rejected.\nReceiveMessageWaitTimeSeconds: The length of time for which a message receiver waits for a message to arrive. This value defaults to 0 and can take any value from 0 to 20 seconds.\nShort and long polling: Amazon SQS uses short polling and long polling mechanisms to receive messages from a queue. Short polling returns immediately, even if the message queue being polled is empty, while long polling does not return a response until a message arrives in the message queue, or the long polling period expires. The SQS client uses short polling by default. Long polling is preferable to short polling in most cases.\nCreating a Standard SQS Queue We can use the Amazon SQS console to create standard queues and FIFO queues. 
The console provides default values for all settings except for the queue name.\nHowever, for our examples, we will use the AWS SDK for Java to create our queues and send and receive messages.\nThe AWS SDK for Java simplifies the use of AWS Services by providing a set of libraries that are based on common design patterns familiar to Java developers.\nLet us first set up the AWS SDK for Java by adding the following Maven dependency in pom.xml:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;software.amazon.awssdk\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;sqs\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;software.amazon.awssdk\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;bom\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.17.116\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; We will next create a standard queue with the AWS Java SDK as shown below:\npublic class ResourceHelper { private static Logger logger = Logger.getLogger(ResourceHelper.class.getName()); public static void main(String[] args) { createStandardQueue(); } public static void createStandardQueue() { SqsClient sqsClient = getSQSClient(); // Define the request for creating a  // standard queue with default parameters  CreateQueueRequest createQueueRequest = CreateQueueRequest.builder() .queueName(\u0026#34;myqueue\u0026#34;) .build(); // Create the queue  sqsClient.createQueue(createQueueRequest); } private static SqsClient getSQSClient() { AwsCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create(\u0026#34;\u0026lt;Profile\u0026gt;\u0026#34;); SqsClient sqsClient = SqsClient .builder() .credentialsProvider(credentialsProvider) 
.region(Region.US_EAST_1).build(); return sqsClient; } } We have defined an SQS queue with a default configuration and set the name of the queue as myqueue. The queue name is unique for our AWS account and region.\nRunning this program will create a standard type SQS queue of name myqueue with a default configuration. We can see the queue we just created in the aws console:\nSending a Message to a Standard SQS Queue We can send a message to an SQS queue either from the AWS console or from an application using the AWS SDK.\nLet us send a message to the queue that we created earlier from a Java program as shown below:\nimport software.amazon.awssdk.auth.credentials.AwsCredentialsProvider; import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.sqs.SqsClient; import software.amazon.awssdk.services.sqs.model.MessageAttributeValue; import software.amazon.awssdk.services.sqs.model.SendMessageRequest; import software.amazon.awssdk.services.sqs.model.SendMessageResponse; public class MessageSender { private static Logger logger = Logger .getLogger(MessageSender.class.getName()); public static void main(String[] args) { sendMessage(); } public static void sendMessage() { SqsClient sqsClient = getSQSClient(); final String queueURL = \u0026#34;https://sqs.us-east-1.amazonaws.com/\u0026#34; + AppConfig.ACCOUNT_NO + \u0026#34;/myqueue\u0026#34;; // Prepare request for sending message  // with queueUrl and messageBody  SendMessageRequest sendMessageRequest = SendMessageRequest .builder() .queueUrl(queueURL) .messageBody(\u0026#34;Test message\u0026#34;) .build(); // Send message and get the messageId in return  SendMessageResponse sendMessageResponse = sqsClient .sendMessage(sendMessageRequest); logger.info(\u0026#34;message id: \u0026#34;+ sendMessageResponse.messageId()); sqsClient.close(); } private static SqsClient getSQSClient() { ... 
} } Here we are first establishing a connection with the AWS SQS service using the SqsClient class. After that, the message to be sent is constructed with the SendMessageRequest class by specifying the URL of the queue and the message body.\nThen the message is sent by invoking the sendMessage() method on the SqsClient instance.\nWhen we run this program we can see the message ID in the output:\nINFO: message id: fa5fd857-59b4-4a9a-ba54-a5ab98ee82f9 This message ID returned in the sendMessage() response is assigned by SQS and is useful for identifying messages.\nWe can also send multiple messages in a single request using the sendMessageBatch() method of the SqsClient class.\nCreating a First-In-First-Out (FIFO) SQS Queue Let us now create a FIFO queue that we can use for sending non-duplicate messages in a fixed sequence. We will do this in the createFifoQueue() method as shown here:\npublic class ResourceHelper { private static Logger logger = Logger.getLogger(ResourceHelper.class.getName()); public static void main(String[] args) { createFifoQueue(); } public static void createFifoQueue() { SqsClient sqsClient = getSQSClient(); // Define attributes of FIFO queue in an attribute map  Map\u0026lt;QueueAttributeName, String\u0026gt; attributeMap = new HashMap\u0026lt;QueueAttributeName, String\u0026gt;(); // FIFO_QUEUE attribute is set to true mark the queue as FIFO  attributeMap.put( QueueAttributeName.FIFO_QUEUE, \u0026#34;true\u0026#34;); // Scope of DEDUPLICATION is set to messageGroup  attributeMap.put( QueueAttributeName.DEDUPLICATION_SCOPE, \u0026#34;messageGroup\u0026#34;); // CONTENT_BASED_DEDUPLICATION is disabled  attributeMap.put( QueueAttributeName.CONTENT_BASED_DEDUPLICATION, \u0026#34;false\u0026#34;); // Prepare the queue creation request and end the name of the queue with fifo  CreateQueueRequest createQueueRequest = CreateQueueRequest.builder() .queueName(\u0026#34;myfifoqueue.fifo\u0026#34;) .attributes(attributeMap) .build(); // Create the FIFO 
queue  CreateQueueResponse createQueueResponse = sqsClient.createQueue(createQueueRequest); // URL of the queue is returned in the response  logger.info(\u0026#34;url \u0026#34; + createQueueResponse.queueUrl()); } private static String getQueueArn( final String queueName, final String region) { return \u0026#34;arn:aws:sqs:\u0026#34; + region + \u0026#34;:\u0026#34; + AppConfig.ACCOUNT_NO + \u0026#34;:\u0026#34; + queueName; } private static SqsClient getSQSClient() { ... } } As we can see, we have defined a queue with the name myfifoqueue.fifo. The name of FIFO queues must end with .fifo.\nWe have set the property: contentBasedDeduplication of our FIFO queue to false which means that SQS will not detect messages sent to the queue as duplicate by checking their content.\nInstead, SQS will look for a property named messageDeduplicationId in the message which we need to explicitly send when sending messages to a FIFO queue. SQS will treat messages with the same value of the property: messageDeduplicationId as duplicate.\nFurther, the deduplicationScope property of the queue is set to MESSAGE_GROUP which indicates the message group as the scope for identifying duplicate messages. The deduplicationScope property can alternately be set to QUEUE.\nSending a Message to a FIFO Queue As explained earlier, a FIFO queue preserves the order in which messages are sent and received.\nTo check this behavior, let us send five messages to the FIFO queue, we created earlier :\npublic class MessageSender { private static Logger logger = Logger.getLogger(MessageSender.class.getName()); public static void sendMessageToFifo() { SqsClient sqsClient = getSQSClient(); Map\u0026lt;String, MessageAttributeValue\u0026gt; messageAttributes = new HashMap\u0026lt;String, MessageAttributeValue\u0026gt;(); ... 
final String queueURL = \u0026#34;https://sqs.us-east-1.amazonaws.com/\u0026#34; + AppConfig.ACCOUNT_NO + \u0026#34;/myfifoqueue.fifo\u0026#34;; // List of deduplicate IDs to be sent with different messages  List\u0026lt;String\u0026gt; dedupIds = List.of(\u0026#34;dedupid1\u0026#34;, \u0026#34;dedupid2\u0026#34;, \u0026#34;dedupid3\u0026#34;, \u0026#34;dedupid2\u0026#34;, \u0026#34;dedupid1\u0026#34;); String messageGroupId = \u0026#34;signup\u0026#34;; // List of messages to be sent. 2 of them are duplicates  List\u0026lt;String\u0026gt; messages = List.of( \u0026#34;My fifo message1\u0026#34;, \u0026#34;My fifo message2\u0026#34;, \u0026#34;My fifo message3\u0026#34;, \u0026#34;My fifo message2\u0026#34;, // Duplicate message  \u0026#34;My fifo message1\u0026#34;); // Duplicate message  short loop = 0; // sending the above messages in sequence.  // Duplicate messages will be sent but will not be received.  for (String message : messages) { // message is identified as duplicate  // if deduplication id is already used  SendMessageRequest sendMessageRequest = SendMessageRequest.builder() .queueUrl(queueURL) .messageBody(message) .messageAttributes(messageAttributes) .messageDeduplicationId(dedupIds.get(loop)) .messageGroupId(messageGroupId) .build(); SendMessageResponse sendMessageResponse = sqsClient .sendMessage(sendMessageRequest); logger.info(\u0026#34;message id: \u0026#34; + sendMessageResponse.messageId()); loop += 1; } sqsClient.close(); } } A sample of the output generated by running this program is this:\nmessage id and sequence no.: 9529ddac-8946-4fee-a2dc-7be428666b63 | 18867399222923248640 message id and sequence no.: 2ba4d7dd-877c-4982-b41e-817c99633fc4 | 18867399223023088896 message id and sequence no.: ad354de3-3a89-4400-83b8-89a892c30526 | 18867399223104239872 message id and sequence no.: 2ba4d7dd-877c-4982-b41e-817c99633fc4 | 18867399223023088896 message id and sequence no.: 9529ddac-8946-4fee-a2dc-7be428666b63 | 18867399222923248640 When SQS 
accepts the message, it returns a sequence number along with a message identifier. The sequence number, as we can see, is a large, non-consecutive number that Amazon SQS assigns to each message.\nWe are sending five messages to the queue myfifoqueue.fifo with two of them being duplicates. Since we set the contentBasedDeduplication property to false when creating this queue, SQS determines duplicate messages by checking the value of the messageDeduplicationId property in the message.\nThe messages My fifo message1 and My fifo message2 are each sent twice with the same messageDeduplicationId while My fifo message3 is sent only once.\nAlthough we have sent five messages to the queue, we will only receive three unique messages, in the same order, when we consume the messages from the queue.\nWith the messages residing in the queue, we will look at how to consume messages from an SQS queue in the next section.\nConsuming Messages from a Queue Now let us read the messages we sent to the queue from a different consumer program. As explained earlier, in keeping with the asynchronous programming model, the consumer program is independent of the sender program. 
The sender program does not wait for the consumer program to read the message before completion.\nWe retrieve messages that are currently in the queue by calling the receiveMessage() method of the SqsClient class as shown here:\npublic class MessageReceiver { public static void receiveMessage() { SqsClient sqsClient = getSQSClient(); final String queueURL = \u0026#34;https://sqs.us-east-1.amazonaws.com/\u0026#34; + AppConfig.ACCOUNT_NO + \u0026#34;/myqueue\u0026#34;; // long polling and wait for waitTimeSeconds before timing out  ReceiveMessageRequest receiveMessageRequest = ReceiveMessageRequest .builder() .queueUrl(queueURL) .waitTimeSeconds(20) .messageAttributeNames(\u0026#34;trace-id\u0026#34;) .build(); List\u0026lt;Message\u0026gt; messages = sqsClient .receiveMessage(receiveMessageRequest) .messages(); } private static SqsClient getSQSClient() { ... } } Here we have enabled long polling for receiving the SQS messages by setting the wait time to 20 seconds on the ReceiveMessageRequest object which we have supplied as the parameter to the receiveMessage() method of the SqsClient class.\nThe receiveMessage() method returns the messages from the queue as a list of Message objects. We need to call this method in a loop to always get new messages as they come in. 
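The difference between short and long polling described earlier can be illustrated with plain Java. This is only an analogy using an in-memory BlockingQueue standing in for SQS, not the SQS API itself: a short poll returns immediately, possibly empty-handed, while a long poll blocks until a message arrives or the wait time expires.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Analogy only: an in-memory queue stands in for SQS to illustrate
// short-polling vs. long-polling semantics.
public class PollingDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Short polling: returns immediately, even when the queue is empty.
        String shortPollResult = queue.poll();
        System.out.println("short poll on empty queue: " + shortPollResult); // null

        // A producer thread delivers a message after a short delay.
        new Thread(() -> {
            try {
                Thread.sleep(500);
                queue.put("Test message");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        // Long polling: blocks up to the wait time (here 20 seconds,
        // like waitTimeSeconds(20)) until a message arrives.
        String longPollResult = queue.poll(20, TimeUnit.SECONDS);
        System.out.println("long poll result: " + longPollResult); // Test message
    }
}
```

The long poll returns as soon as the message is put into the queue, well before the 20-second wait expires, which is why long polling is usually cheaper than repeatedly issuing empty short polls.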
There are libraries available for sending and consuming messages from an SQS queue.\nSpring Cloud AWS Messaging is one such library that simplifies the publication and consumption of messages over Amazon SQS.\nPlease check our earlier article on Spring Cloud SQS which illustrates integrating with Amazon SQS along with sending and receiving messages in an application.\nDeleting Messages from a Queue with ReceiptHandle We get an identifier called receiptHandle when we receive a message from an SQS queue.\nWe use this receiptHandle identifier to delete a message from a queue as shown in this example :\npublic class MessageReceiver { public static void receiveFifoMessage() throws InterruptedException { SqsClient sqsClient = getSQSClient(); final String queueURL = \u0026#34;https://sqs.us-east-1.amazonaws.com/\u0026#34; + AppConfig.ACCOUNT_NO + \u0026#34;/myfifoqueue.fifo\u0026#34;; // long polling and wait for waitTimeSeconds before timing out  ReceiveMessageRequest receiveMessageRequest = ReceiveMessageRequest .builder() .queueUrl(queueURL) .waitTimeSeconds(20) .messageAttributeNames(\u0026#34;trace-id\u0026#34;) // returns the trace Id  .build(); while (true) { Thread.sleep(20000l); List\u0026lt;Message\u0026gt; messages = sqsClient .receiveMessage(receiveMessageRequest) .messages(); messages.stream().forEach(msg -\u0026gt; { // Get the receipt handle of the message received  String receiptHandle = msg.receiptHandle(); // Create the delete request with the receipt handle  DeleteMessageRequest deleteMessageRequest = DeleteMessageRequest .builder() .queueUrl(queueURL) .receiptHandle(receiptHandle) .build(); // Delete the message  DeleteMessageResponse deleteMessageResponse = sqsClient .deleteMessage(deleteMessageRequest); }); } } private static SqsClient getSQSClient() { ... } } Here in the receiveFifoMessage() method, we are using long polling to receive messages from the queue. 
We have also added an interval of 20 seconds (using Thread.sleep()) to delay the subsequent reading of the messages from the queue. After receiving the message from the SQS queue, we are getting the receiptHandle identifier of the message and using it to delete this message from the queue.\nThe receiptHandle identifier is associated with a specific instance of receiving a message. It is different each time we receive the message in case we receive the message more than once. So we must use the most recently received receiptHandle for the message for sending deletion requests.\nFor standard queues, it is possible to receive a message even after we have deleted it because of the distributed nature of the underlying storage. We should ensure that our application is idempotent to handle this scenario.\nIf we do not delete a message after consuming it, the message will remain in the queue and will be received again after the visibility timeout has expired.\nOtherwise, the messages left in a queue are deleted automatically after the expiry of the retention period configured for the queue.\nHandling Messaging Failures with an SQS Dead Letter Queue (DLQ) Sometimes, messages cannot be processed because of errors within the producer or consumer application. We can isolate the messages which failed processing by moving them to a separate queue called Dead Letter Queue (DLQ).\nAfter we have fixed the consumer application or when the consumer application is available to consume the message, we can move the messages back to the source queue using the dead-letter queue redrive capability.\nA dead-letter queue is a queue that one or more source queues can use for messages that are not consumed successfully.\nAmazon SQS does not create the dead-letter queue automatically. We must first create the queue before using it as a dead-letter queue. 
With this understanding, let us update the queue creation method that we defined earlier using AWS SDK:\npublic class ResourceHelper { private static Logger logger = Logger.getLogger(ResourceHelper.class.getName()); public static void main(String[] args) { createStandardQueue(); } public static void createStandardQueue() { SqsClient sqsClient = getSQSClient(); String dlqName = \u0026#34;mydlq\u0026#34;; CreateQueueRequest createQueueRequest = CreateQueueRequest .builder() .queueName(dlqName) .build(); // Create dead letter queue  CreateQueueResponse createQueueResponse = sqsClient.createQueue(createQueueRequest); String dlqArn = getQueueArn(dlqName, \u0026#34;us-east-1\u0026#34;); Map\u0026lt;QueueAttributeName, String\u0026gt; attributeMap = new HashMap\u0026lt;QueueAttributeName, String\u0026gt;(); attributeMap.put(QueueAttributeName.REDRIVE_POLICY, \u0026#34;{\\\u0026#34;maxReceiveCount\\\u0026#34;:10,\\\u0026#34;deadLetterTargetArn\\\u0026#34;:\\\u0026#34;\u0026#34; + dlqArn + \u0026#34;\\\u0026#34;}\u0026#34;); // Prepare request for creating the standard queue  createQueueRequest = CreateQueueRequest.builder() .queueName(\u0026#34;myqueue\u0026#34;) .attributes(attributeMap) .build(); // create the queue  createQueueResponse = sqsClient.createQueue(createQueueRequest); logger.info(\u0026#34;Queue URL \u0026#34; + createQueueResponse.queueUrl()); } private static String getQueueArn() { ... ... } } Here we have first defined a standard queue named mydlq for using it as the dead-letter queue.\nThe redrive policy of an SQS queue is used to specify the source queue, the dead-letter queue, and the conditions under which Amazon SQS will move messages if the consumer of the source queue fails to process a message a specified number of times. 
The maxReceiveCount is the number of times a consumer tries to receive a message from a queue without deleting it, before the message is moved to the dead-letter queue.\nAccordingly, we have defined the redrive policy in the attribute map when creating the source queue, with a maxReceiveCount value of 10 and the Amazon Resource Name (ARN) of the dead-letter queue.\nTrigger an AWS Lambda Function by Incoming Messages in the Queue AWS Lambda is a serverless, event-driven compute service which we can use to run code for any type of application or backend service without provisioning or managing servers.\nWe can trigger a Lambda function from many AWS services and only pay for what we use.\nWe can attach SQS standard and FIFO queues to an AWS Lambda function as an event source. The Lambda function will get triggered whenever messages are put in the queue, and will read and process the messages in the queue.\nThe Lambda service will poll the queue and invoke the Lambda function, passing an event parameter that contains the messages in the queue.\nLambda functions support many language runtimes like Node.js, Python, C#, and Java.\nLet us attach the following lambda function to our standard queue created earlier to process SQS messages:\nexports.handler = async function(event, context) { event.Records.forEach(record =\u0026gt; { const { body } = record; console.log(body); }); return {}; } This function is written in JavaScript and uses the Node.js runtime during execution in AWS Lambda. A handler function named handler() is exported that takes an event object and a context object as parameters and prints the message received from the SQS queue in the console. The handler function in the Lambda is the method that processes events. 
Lambda runs the handler method when the function is invoked.\nWe will also need to create an execution role for the Lambda function with the following IAM policy attached:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Sid\u0026#34;: \u0026#34;VisualEditor0\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;sqs:DeleteMessage\u0026#34;, \u0026#34;sqs:ReceiveMessage\u0026#34;, \u0026#34;sqs:GetQueueAttributes\u0026#34; ], \u0026#34;Resource\u0026#34;: [ \u0026#34;arn:aws:sqs:us-east-1:\u0026lt;account-no\u0026gt;:myqueue\u0026#34; ] } ] } For processing messages from the queue, the lambda function needs permissions for DeleteMessage, ReceiveMessage, GetQueueAttributes on our SQS queue and an AWS managed policy: AWSLambdaBasicExecutionRole for permission for writing to CloudWatch logs.\nLet us create this lambda function from the AWS console as shown here:\nLet us run our sendMessage() method to send a message to the queue where the lambda function is attached. Since the Lambda function is attached to be triggered by messages in the queue, we can see the message sent by the sendMessage() method in the CloudWatch console:\nWe can see the message: Test message which was sent to the SQS queue, printed by the lambda receiver function in the CloudWatch console.\nWe can also specify a queue to act as a dead-letter queue for messages that our Lambda function fails to process.\nSending Message Metadata with Message Attributes Message attributes are structured metadata that can be attached and sent together with the message to SQS.\nMessage Metadata are of two kinds:\n  Message Attributes: These are custom metadata usually added and extracted by our applications for general-purpose use cases. 
Each message can have up to 10 attributes.\n  Message System Attributes: These are used to store metadata for other AWS services like AWS X-Ray.\n  Let us modify our earlier example of sending a message by adding a message attribute to be sent with the message:\npublic class MessageSender { private static final String TRACE_ID_NAME = \u0026#34;trace-id\u0026#34;; private static Logger logger = Logger.getLogger(MessageSender.class.getName()); public static void main(String[] args) { sendMessage(); } public static void sendMessage() { SqsClient sqsClient = getSQSClient(); Map\u0026lt;String, MessageAttributeValue\u0026gt; messageAttributes = new HashMap\u0026lt;String, MessageAttributeValue\u0026gt;(); // generates a UUID as the traceId  String traceId = UUID.randomUUID().toString(); // add traceId as a message attribute  messageAttributes.put(TRACE_ID_NAME, MessageAttributeValue.builder() .dataType(\u0026#34;String\u0026#34;) .stringValue(traceId) .build()); final String queueURL = \u0026#34;https://sqs.us-east-1.amazonaws.com/\u0026#34; + AppConfig.ACCOUNT_NO + \u0026#34;/myqueue\u0026#34;; SendMessageRequest sendMessageRequest = SendMessageRequest.builder() .queueUrl(queueURL) .messageBody(\u0026#34;Test message\u0026#34;) .messageAttributes(messageAttributes) .build(); SendMessageResponse sendMessageResponse = sqsClient .sendMessage(sendMessageRequest); logger.info(\u0026#34;message id: \u0026#34; + sendMessageResponse.messageId()); sqsClient.close(); } private static SqsClient getSQSClient() { ... 
} } In this example, we have added a message attribute named traceId which will be of String type.\nDefining an SQS Queue as an SNS Topic Subscriber Amazon Simple Notification Service (SNS) is a fully managed publish/subscribe messaging service that allows us to fan out messages from a logical access point called \u0026ldquo;topic\u0026rdquo; to multiple recipients at the same time.\nSNS topics support different subscription types like SQS queues, AWS Lambda functions, HTTP endpoints, email addresses, SMS, and mobile push where we can publish messages.\nWe can subscribe multiple Amazon SQS queues to an Amazon Simple Notification Service (Amazon SNS) topic. When we publish a message to a topic, Amazon SNS sends the message to each of the subscribed queues.\nLet us update our ResourceHelper class by adding a method to create an SNS topic along with a subscription to the SQS Standard Queue created earlier:\npublic class ResourceHelper { private static Logger logger = Logger.getLogger(ResourceHelper.class.getName()); public static void main(String[] args) { createSNSTopicWithSubscription(); } public static void createSNSTopicWithSubscription() { SnsClient snsClient = getSNSClient(); // Prepare the request for creating an SNS topic  CreateTopicRequest createTopicRequest = CreateTopicRequest .builder() .name(\u0026#34;mytopic\u0026#34;) .build(); // Create the topic  CreateTopicResponse createTopicResponse = snsClient.createTopic(createTopicRequest); String topicArn = createTopicResponse.topicArn(); String queueArn = getQueueArn(\u0026#34;myqueue\u0026#34;, \u0026#34;us-east-1\u0026#34;); // Prepare the SubscribeRequest for subscribing  // endpoint of protocol sqs to the topic of topicArn  SubscribeRequest subscribeRequest = SubscribeRequest.builder() .protocol(\u0026#34;sqs\u0026#34;) .topicArn(topicArn) .endpoint(queueArn) .build(); SubscribeResponse subscribeResponse = snsClient.subscribe(subscribeRequest); logger.info(\u0026#34;subscriptionArn \u0026#34; + 
subscribeResponse.subscriptionArn()); } private static SnsClient getSNSClient() { AwsCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create(\u0026#34;\u0026lt;Profile\u0026gt;\u0026#34;); // Construct the SnsClient with AWS account credentials  SnsClient snsClient = SnsClient .builder() .credentialsProvider(credentialsProvider) .region(Region.US_EAST_1).build(); return snsClient; } } Here we have first created an SNS topic named mytopic. Then we have created a subscription by adding the SQS queue as a subscriber to the topic.\nLet us now publish a message to this SNS topic using the AWS Java SDK as shown below:\npublic class MessageSender { private static Logger logger = Logger.getLogger(MessageSender.class.getName()); public static void main(String[] args) { sendMessageToSnsTopic(); } public static void sendMessageToSnsTopic() { SnsClient snsClient = getSNSClient(); final String topicArn = \u0026#34;arn:aws:sns:us-east-1:\u0026#34; + AppConfig.ACCOUNT_NO + \u0026#34;:mytopic\u0026#34;; // Build the publish request with the  // SNS Topic Arn and the message body  PublishRequest publishRequest = PublishRequest .builder() .topicArn(topicArn) .message(\u0026#34;Test message published to topic\u0026#34;) .build(); // Publish the message to the SNS topic  PublishResponse publishResponse = snsClient.publish(publishRequest); logger.info(\u0026#34;message id: \u0026#34; + publishResponse.messageId()); snsClient.close(); } private static SnsClient getSNSClient() { ... } } Here we have set up the SNS client using our AWS account credentials and invoked the publish method on the SnsClient instance to publish a message to the topic. The SQS queue, being a subscriber to the topic, receives the message from it.\nSecurity and Access Control SQS comes with many security features designed for least privilege access and protecting data integrity.
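For instance, a producer application that should only be able to publish to our queue can be restricted to just that with a narrow identity policy. The sketch below follows the same shape as the Lambda execution role policy shown earlier; the account number is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:SendMessage",
        "sqs:GetQueueUrl"
      ],
      "Resource": [
        "arn:aws:sqs:us-east-1:<account-no>:myqueue"
      ]
    }
  ]
}
```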
SQS requires three types of roles for access to the different components of the producer-consumer model:\nAdministrators: Administrators need access to control queue policies and to create, modify, and delete queues.\nProducers: They need access to send messages to queues.\nConsumers: They need access to receive and delete messages from queues.\nWe should define IAM roles that grant these three types of SQS access to applications or services.\nSQS also supports encryption at rest for encrypting messages stored in the queue:\n SSE-KMS: Server-side encryption with encryption keys managed in AWS Key Management Service SSE-SQS: Server-side encryption with encryption keys managed in SQS  SSE encrypts the body of the message when the message is received by SQS. The message is stored in encrypted form and SQS decrypts messages when they are sent to an authorized consumer.\nWe can also enforce encryption of data in transit by allowing only connections to SQS over HTTPS (TLS) by configuring this condition in the queue policy.\nConclusion Here is a list of the major points for a quick reference:\n  Message queueing is an asynchronous style of communication between two or more processes.\n  Messages and queues are the basic components of a message queuing system.\n  Amazon Simple Queue Service (SQS) is a fully managed message queuing service with which we can send, store, and receive messages to enable asynchronous communication between decoupled systems.\n  SQS provides two types of queues: Standard Queue and First-In-First-Out (FIFO) Queue.\n  Standard queues are more performant but do not preserve message ordering.\n  FIFO queues preserve the order of messages that are sent with the same message group identifier, and also do not allow duplicate messages to be sent.\n  We used the AWS Java SDK to build queues, topics, and subscriptions and also for sending and receiving messages from the queue.
Other than the AWS SDK, we can use Infrastructure as Code (IaC) services of AWS like CloudFormation or the AWS Cloud Development Kit (CDK) for a supported language to build queues and topics.\n  We defined a dead-letter queue (DLQ) to receive messages that have failed processing due to an erroneous condition in the producer or consumer program.\n  We also defined a Lambda function that gets triggered by messages in the queue.\n  Finally, we defined an SQS queue as a subscription endpoint to an SNS topic to implement a publish-subscribe pattern of asynchronous communication.\n  You can refer to all the source code used in the article on GitHub.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS SQS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and others, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\nAlso, check out the sample chapters from the book about deploying a Spring Boot application with CDK and how to design a CDK project.\n ","date":"February 10, 2022","image":"https://reflectoring.io/images/stock/0117-queue-1200x628-branded_hu88ffcb943027ab1241b6b9f65033c311_123865_650x0_resize_q90_box.jpg","permalink":"/getting-started-with-aws-sqs/","title":"Getting Started with Amazon SQS"},{"categories":["Java"],"contents":"An annotation is a construct associated with Java source code elements such as classes, methods, and variables. Annotations provide information to a program at compile time or at runtime, based on which the program can take further action.
An annotation processor processes these annotations at compile time or runtime to provide functionality such as code generation, error checking, etc.\nThe java.lang package provides some core annotations and also gives us the capability to create our custom annotations that can be processed with annotation processors.\nIn this article, we will discuss the topic of annotations and demonstrate the power of annotation processing with a real-world example.\n Example Code This article is accompanied by a working code example on GitHub. Annotation Basics An annotation is preceded by the @ symbol. Some common examples of annotations are @Override and @SuppressWarnings. These are built-in annotations provided by Java through the java.lang package. We can further extend the core functionality to provide our custom annotations.\nAn annotation by itself does not perform any action. It simply provides information that can be used at compile time or runtime to perform further processing.\nLet\u0026rsquo;s look at the @Override annotation as an example:\npublic class ParentClass { public String getName() {...} } public class ChildClass extends ParentClass { @Override public String getname() {...} } We use the @Override annotation to mark a method that exists in a parent class, but that we want to override in a child class. The above program throws an error during compile time because the getname() method in ChildClass is annotated with @Override even though it doesn\u0026rsquo;t override a method from ParentClass (because there is no getname() method in ParentClass).\nBy adding the @Override annotation in ChildClass, the compiler can enforce the rule that the overriding method in the child class should have the same case-sensitive name as that in the parent class, and so the program would throw an error at compile time, thereby catching an error which could have gone undetected even at runtime.\nStandard Annotations Below are some of the most common annotations available to us. 
These are standard annotations that Java provides as part of the java.lang package. To see their full effect, it would be best to run the code snippets from the command line since most IDEs provide their custom options that alter warning levels.\n@SuppressWarnings We can use the @SuppressWarnings annotation to indicate that warnings on code compilation should be ignored. We may want to suppress warnings that clutter up the build output. @SuppressWarnings(\u0026quot;unchecked\u0026quot;), for example, suppresses warnings associated with raw types.\nLet\u0026rsquo;s look at an example where we might want to use @SuppressWarnings:\npublic class SuppressWarningsDemo { public static void main(String[] args) { SuppressWarningsDemo swDemo = new SuppressWarningsDemo(); swDemo.testSuppressWarning(); } public void testSuppressWarning() { Map testMap = new HashMap(); testMap.put(1, \u0026#34;Item_1\u0026#34;); testMap.put(2, \u0026#34;Item_2\u0026#34;); testMap.put(3, \u0026#34;Item_3\u0026#34;); } } If we run this program from the command line using the compiler switch -Xlint:unchecked to receive the full warning list, we get the following message:\njavac -Xlint:unchecked ./com/reflectoring/SuppressWarningsDemo.java Warning: unchecked call to put(K,V) as a member of the raw type Map The above code block is an example of legacy Java code (before Java 5), where we could have collections in which we could accidentally store mixed types of objects. Generics were introduced in Java 5 to add compile time type checking. So to get this legacy code to compile without warnings we would change:\nMap testMap = new HashMap(); to\nMap\u0026lt;Integer, String\u0026gt; testMap = new HashMap\u0026lt;\u0026gt;(); If we had a large legacy code base, we wouldn\u0026rsquo;t want to go in and make lots of code changes since it would mean a lot of QA regression testing.
So we might want to add the @SuppressWarnings annotation to the class so that the logs are not cluttered up with redundant warning messages. We would add the code as below:\n@SuppressWarnings({\u0026#34;rawtypes\u0026#34;, \u0026#34;unchecked\u0026#34;}) public class SuppressWarningsDemo { ... } Now if we compile the program, the console is free of warnings.\n@Deprecated We can use the @Deprecated annotation to mark that a method or type has been replaced with newer functionality.\nIDEs make use of annotation processing to throw a warning at compile time, usually indicating the deprecated method with a strike-through, to tell the developer that they shouldn\u0026rsquo;t use this method or type anymore.\nThe following class declares a deprecated method:\npublic class DeprecatedDemo { @Deprecated(since = \u0026#34;4.5\u0026#34;, forRemoval = true) public void testLegacyFunction() { System.out.println(\u0026#34;This is a legacy function\u0026#34;); } } The attribute since in the annotation tells us in which version the element was deprecated, and forRemoval indicates if the element is going to be removed in the next version.\nNow, calling the legacy method as below will trigger a compile time warning indicating that the method call needs to be replaced:\n./com/reflectoring/DeprecatedDemoTest.java:8: warning: [removal] testLegacyFunction() in DeprecatedDemo has been deprecated and marked for removal demo.testLegacyFunction(); ^ 1 warning @Override We already had a look at the @Override annotation above. We can use it to indicate that a method will be overriding the method with the same signature in a parent class.
It is used to throw compile time errors in cases such as typos in letter-casing, as in this code example:\npublic class Employee { public void getEmployeeStatus(){ System.out.println(\u0026#34;This is the Base Employee class\u0026#34;); } } public class Manager extends Employee { public void getemployeeStatus(){ System.out.println(\u0026#34;This is the Manager class\u0026#34;); } } We intended to override the getEmployeeStatus() method but we misspelled the method name. This can lead to serious bugs. The program above would compile and run without issue, and the bug would go uncaught.\nIf we add the annotation @Override to the getemployeeStatus() method, we get a compile time error, which forces us to correct the typo right away:\n./com/reflectoring/Manager.java:5: error: method does not override or implement a method from a supertype @Override ^ 1 error @FunctionalInterface  The @FunctionalInterface annotation is used to indicate that an interface cannot have more than one abstract method. The compiler throws an error in case there is more than one abstract method. Functional interfaces were introduced in Java 8 to implement lambda expressions and to ensure that they didn\u0026rsquo;t make use of more than one abstract method.\nEven without the @FunctionalInterface annotation, the compiler will throw an error if we try to use an interface with more than one abstract method as the target of a lambda expression. So why do we need @FunctionalInterface if it is not mandatory?\nLet us take the example of the code below:\n@FunctionalInterface interface Print { void printString(String testString); } If we add another method printString2() to the Print interface, the compiler or the IDE will throw an error and this will be obvious right away.\nNow, what if the Print interface was in a separate module, and there was no @FunctionalInterface annotation? The developers of that other module could easily add another function to the interface and break your code.
Further, now we have to figure out which of the two is the right function in our case. By adding the @FunctionalInterface annotation we get an immediate warning in the IDE, such as this:\nMultiple non-overriding abstract methods found in interface com.reflectoring.Print So it is good practice to always include the @FunctionalInterface annotation if the interface should be usable as a lambda.\n@SafeVarargs The varargs functionality allows the creation of methods with variable arguments. Before Java 5, the only option to create methods with optional parameters was to create multiple methods, each with a different number of parameters. Varargs allows us to create a single method to handle optional parameters with syntax as below:\n// we can do this: void printStrings(String... stringList) // instead of having to do this: void printStrings(String string1, String string2) However, warnings are thrown when generics are used in the arguments. @SafeVarargs allows for suppression of these warnings:\npackage com.reflectoring; import java.util.Arrays; import java.util.List; public class SafeVarargsTest { private void printString(String test1, String test2) { System.out.println(test1); System.out.println(test2); } private void printStringVarargs(String... tests) { for (String test : tests) { System.out.println(test); } } private void printStringSafeVarargs(List\u0026lt;String\u0026gt;...
testStringLists) { for (List\u0026lt;String\u0026gt; testStringList : testStringLists) { for (String testString : testStringList) { System.out.println(testString); } } } public static void main(String[] args) { SafeVarargsTest test = new SafeVarargsTest(); test.printString(\u0026#34;String1\u0026#34;, \u0026#34;String2\u0026#34;); test.printString(\u0026#34;*******\u0026#34;); test.printStringVarargs(\u0026#34;String1\u0026#34;, \u0026#34;String2\u0026#34;); test.printString(\u0026#34;*******\u0026#34;); List\u0026lt;String\u0026gt; testStringList1 = Arrays.asList(\u0026#34;One\u0026#34;, \u0026#34;Two\u0026#34;); List\u0026lt;String\u0026gt; testStringList2 = Arrays.asList(\u0026#34;Three\u0026#34;, \u0026#34;Four\u0026#34;); test.printStringSafeVarargs(testStringList1, testStringList2); } } In the above code, printString() and printStringVarargs() achieve the same result. Compiling the code, however, produces a warning for printStringSafeVarargs() since it used generics:\njavac -Xlint:unchecked ./com/reflectoring/SafeVarargsTest.java ./com/reflectoring/SafeVarargsTest.java:28: warning: [unchecked] Possible heap pollution from parameterized vararg type List\u0026lt;String\u0026gt; private void printStringSafeVarargs(List\u0026lt;String\u0026gt;... testStringLists) { ^ ./com/reflectoring/SafeVarargsTest.java:52: warning: [unchecked] unchecked generic array creation for varargs parameter of type List\u0026lt;String\u0026gt;[] test.printStringSafeVarargs(testStringList1, testStringList2); ^ 2 warnings By adding the SafeVarargs annotation as below, we can get rid of the warning:\n@SafeVarargs private void printStringSafeVarargs(List\u0026lt;String\u0026gt;... testStringLists) { Custom Annotations These are annotations that are custom-created to serve a particular purpose. We can create them ourselves. 
We can use custom annotations to\n reduce repetition, automate the generation of boilerplate code, catch errors at compile time such as potential null pointer checks, customize runtime behavior based on the presence of a custom annotation.  An example of a custom annotation would be this @Company annotation:\n@Company{ name=\u0026#34;ABC\u0026#34; city=\u0026#34;XYZ\u0026#34; } public class CustomAnnotatedEmployee { ... } When creating multiple instances of the CustomAnnotatedEmployee class, all instances would contain the same company name and city, so wouldn\u0026rsquo;t need to add that information to the constructor anymore.\nTo create a custom annotation we need to declare it with the @interface keyword:\npublic @interface Company{ } To specify information about the scope of the annotation and the area it targets, such as compile time or runtime, we need to add meta annotations to the custom annotation.\nFor example, to specify that the annotation applies to classes only, we need to add @Target(ElementType.TYPE), which specifies that this annotation only applies to classes, and @Retention(RetentionPolicy.RUNTIME), which specifies that this annotation must be available at runtime. We will discuss further details about meta annotations once we get this basic example running.\nWith the meta annotations, our annotation looks like this:\n@Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) public @interface Company{ } Next, we need to add the fields to the custom annotation. In this case, we need name and city. 
So we add it as below:\n@Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) public @interface Company{ String name() default \u0026#34;ABC\u0026#34;; String city() default \u0026#34;XYZ\u0026#34;; } Putting it all together, we can create a CustomAnnotatedEmployee class and apply the annotation to it as below:\n@Company public class CustomAnnotatedEmployee { private int id; private String name; public CustomAnnotatedEmployee(int id, String name) { this.id = id; this.name = name; } public void getEmployeeDetails(){ System.out.println(\u0026#34;Employee Id: \u0026#34; + id); System.out.println(\u0026#34;Employee Name: \u0026#34; + name); } } Now we can create a test class to read the @Company annotation at runtime:\nimport java.lang.annotation.Annotation; public class TestCustomAnnotatedEmployee { public static void main(String[] args) { CustomAnnotatedEmployee employee = new CustomAnnotatedEmployee(1, \u0026#34;John Doe\u0026#34;); employee.getEmployeeDetails(); Annotation companyAnnotation = employee .getClass() .getAnnotation(Company.class); Company company = (Company)companyAnnotation; System.out.println(\u0026#34;Company Name: \u0026#34; + company.name()); System.out.println(\u0026#34;Company City: \u0026#34; + company.city()); } } This would give the output below:\nEmployee Id: 1 Employee Name: John Doe Company Name: ABC Company City: XYZ So by introspecting the annotation at runtime we can access some common information of all employees and avoid a lot of repetition if we had to construct a lot of objects.\nMeta Annotations Meta annotations are annotations applied to other annotations that provide information about the annotation to the compiler or the runtime environment.\nMeta annotations can answer the following questions about an annotation:\n Can the annotation be inherited by child classes? Does the annotation need to show up in the documentation? Can the annotation be applied multiple times to the same element? 
What specific element does the annotation apply to, such as class, method, field, etc.? Is the annotation being processed at compile time or runtime?  @Inherited By default, an annotation is not inherited from a parent class to a child class. Applying the @Inherited meta annotation to an annotation allows it to be inherited:\n@Inherited @Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) public @interface Company{ String name() default \u0026#34;ABC\u0026#34;; String city() default \u0026#34;XYZ\u0026#34;; } @Company public class CustomAnnotatedEmployee { private int id; private String name; public CustomAnnotatedEmployee(int id, String name) { this.id = id; this.name = name; } public void getEmployeeDetails(){ System.out.println(\u0026#34;Employee Id: \u0026#34; + id); System.out.println(\u0026#34;Employee Name: \u0026#34; + name); } } public class CustomAnnotatedManager extends CustomAnnotatedEmployee{ public CustomAnnotatedManager(int id, String name) { super(id, name); } } Since CustomAnnotatedEmployee has the @Company annotation and CustomAnnotatedManager inherits from it, the CustomAnnotatedManager class does not need to include it.\nNow if we run the test for the Manager class, we still get access to the annotation information, even though the Manager class does not have the annotation:\npublic class TestCustomAnnotatedManager { public static void main(String[] args) { CustomAnnotatedManager manager = new CustomAnnotatedManager(1, \u0026#34;John Doe\u0026#34;); manager.getEmployeeDetails(); Annotation companyAnnotation = manager .getClass() .getAnnotation(Company.class); Company company = (Company)companyAnnotation; System.out.println(\u0026#34;Company Name: \u0026#34; + company.name()); System.out.println(\u0026#34;Company City: \u0026#34; + company.city()); } } @Documented @Documented ensures that custom annotations show up in the JavaDocs.\nNormally, when we run JavaDoc on the class CustomAnnotatedManager the annotation information would not 
show up in the documentation. But when we use the @Documented annotation, it will:\n@Inherited @Documented @Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) public @interface Company{ String name() default \u0026quot;ABC\u0026quot;; String city() default \u0026quot;XYZ\u0026quot;; } @Repeatable @Repeatable allows multiple repeating custom annotations on a method, class, or field. To use the @Repeatable annotation we need to wrap the annotation in a container class which refers to it as an array:\n@Target(ElementType.TYPE) @Repeatable(RepeatableCompanies.class) @Retention(RetentionPolicy.RUNTIME) public @interface RepeatableCompany { String name() default \u0026#34;Name_1\u0026#34;; String city() default \u0026#34;City_1\u0026#34;; } @Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) public @interface RepeatableCompanies { RepeatableCompany[] value() default{}; } We declare our main class as below:\n@RepeatableCompany @RepeatableCompany(name = \u0026#34;Name_2\u0026#34;, city = \u0026#34;City_2\u0026#34;) public class RepeatedAnnotatedEmployee { } If we run a test on it as below:\npublic class TestRepeatedAnnotation { public static void main(String[] args) { RepeatableCompany[] repeatableCompanies = RepeatedAnnotatedEmployee.class .getAnnotationsByType(RepeatableCompany.class); for (RepeatableCompany repeatableCompany : repeatableCompanies) { System.out.println(\u0026#34;Name: \u0026#34; + repeatableCompany.name()); System.out.println(\u0026#34;City: \u0026#34; + repeatableCompany.city()); } } } We get the following output which displays the value of multiple @RepeatableCompany annotations:\nName: Name_1 City: City_1 Name: Name_2 City: City_2 @Target @Target specifies on which elements the annotation can be used, for example in the above example the annotation @Company was defined only for TYPE and so it could only be applied to a class.\nLet\u0026rsquo;s see what happens if we apply the @Company annotation to a method:\n@Company public class 
Employee { @Company public void getEmployeeStatus(){ System.out.println(\u0026#34;This is the Base Employee class\u0026#34;); } } If we apply the @Company annotation to the method getEmployeeStatus() as above, we get a compiler error stating: '@Company' not applicable to method.\nThe various self-explanatory target types are:\n ElementType.ANNOTATION_TYPE ElementType.CONSTRUCTOR ElementType.FIELD ElementType.LOCAL_VARIABLE ElementType.METHOD ElementType.PACKAGE ElementType.PARAMETER ElementType.TYPE  @Retention @Retention specifies when the annotation is discarded.\n  SOURCE - The annotation is discarded by the compiler and is not recorded in the class file.\n  CLASS - The annotation is recorded in the class file at compile time but is not available at runtime. This is the default retention policy.\n  RUNTIME - The annotation is recorded in the class file and retained at runtime.\n  If we need an annotation to only provide error checking at compile time, as @Override does, we would use SOURCE. If we need an annotation to provide functionality at runtime, such as @Test in JUnit, we would use RUNTIME.
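Because the @Retention meta annotation itself is retained at runtime, we can inspect the declared retention policy of any annotation type reflectively. The helper below is a small illustrative sketch (the class and method names are our own, not part of the article's example code):

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class RetentionInspector {

    // Reads the @Retention meta annotation declared on an annotation type
    static RetentionPolicy retentionOf(Class<? extends Annotation> type) {
        Retention retention = type.getAnnotation(Retention.class);
        return retention.value();
    }

    public static void main(String[] args) {
        // @Override is declared with @Retention(RetentionPolicy.SOURCE)
        System.out.println("@Override: " + retentionOf(Override.class));
        // @FunctionalInterface is declared with @Retention(RetentionPolicy.RUNTIME)
        System.out.println("@FunctionalInterface: " + retentionOf(FunctionalInterface.class));
    }
}
```

Running this prints SOURCE for @Override and RUNTIME for @FunctionalInterface, confirming the retention policies described above.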
To see a real example, create the following annotations in 3 separate files:\n@Target(ElementType.TYPE) @Retention(RetentionPolicy.CLASS) public @interface ClassRetention { } @Target(ElementType.TYPE) @Retention(RetentionPolicy.SOURCE) public @interface SourceRetention { } @Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) public @interface RuntimeRetention { } Now create a class that uses all 3 annotations:\n@SourceRetention @RuntimeRetention @ClassRetention public class EmployeeRetentionAnnotation { } To verify that only the runtime annotation is available at runtime, run a test as follows:\npublic class RetentionTest { public static void main(String[] args) { SourceRetention[] sourceRetention = new EmployeeRetentionAnnotation() .getClass() .getAnnotationsByType(SourceRetention.class); System.out.println(\u0026#34;Source Retentions at runtime: \u0026#34; + sourceRetention.length); RuntimeRetention[] runtimeRetention = new EmployeeRetentionAnnotation() .getClass() .getAnnotationsByType(RuntimeRetention.class); System.out.println(\u0026#34;Runtime Retentions at runtime: \u0026#34; + runtimeRetention.length); ClassRetention[] classRetention = new EmployeeRetentionAnnotation() .getClass() .getAnnotationsByType(ClassRetention.class); System.out.println(\u0026#34;Class Retentions at runtime: \u0026#34; + classRetention.length); } } The output would be as follows:\nSource Retentions at runtime: 0 Runtime Retentions at runtime: 1 Class Retentions at runtime: 0 So we verified that only the RUNTIME annotation gets processed at runtime.\nAnnotation Categories Annotation categories distinguish annotations based on the number of parameters that we pass into them. By categorizing annotations as parameter-less, single value, or multi-value, we can more easily think and talk about annotations.\nMarker Annotations Marker annotations do not contain any members or data. 
We can use the isAnnotationPresent() method at runtime to determine the presence or absence of a marker annotation and make decisions based on the presence of the annotation.\nFor example, if our company had several clients with different data transfer mechanisms, we could annotate the class with an annotation indicating the method of data transfer as below:\n@Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) public @interface CSV { } The client class could use the annotation as below:\n@CSV public class XYZClient { ... } We can process the annotation as follows:\npublic class TestMarkerAnnotation { public static void main(String[] args) { XYZClient client = new XYZClient(); Class clientClass = client.getClass(); if (clientClass.isAnnotationPresent(CSV.class)){ System.out.println(\u0026#34;Write client data to CSV.\u0026#34;); } else { System.out.println(\u0026#34;Write client data to Excel file.\u0026#34;); } } } Based on whether the @CSV annotation exists or not, we can decide whether to write out the information to CSV or an Excel file. The above program would produce this output:\nWrite client data to CSV. Single-Value Annotations Single-value annotations contain only one member and the parameter is the value of the member. 
The single member has to be named value.\nLet\u0026rsquo;s create a SingleValueAnnotationCompany annotation that uses only the value field for the name, as below:\n@Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) public @interface SingleValueAnnotationCompany { String value() default \u0026#34;ABC\u0026#34;; } Create a class that uses the annotation as below:\n@SingleValueAnnotationCompany(\u0026#34;XYZ\u0026#34;) public class SingleValueAnnotatedEmployee { private int id; private String name; public SingleValueAnnotatedEmployee(int id, String name) { this.id = id; this.name = name; } public void getEmployeeDetails(){ System.out.println(\u0026#34;Employee Id: \u0026#34; + id); System.out.println(\u0026#34;Employee Name: \u0026#34; + name); } } Run a test as below:\npublic class TestSingleValueAnnotatedEmployee { public static void main(String[] args) { SingleValueAnnotatedEmployee employee = new SingleValueAnnotatedEmployee(1, \u0026#34;John Doe\u0026#34;); employee.getEmployeeDetails(); Annotation companyAnnotation = employee .getClass() .getAnnotation(SingleValueAnnotationCompany.class); SingleValueAnnotationCompany company = (SingleValueAnnotationCompany)companyAnnotation; System.out.println(\u0026#34;Company Name: \u0026#34; + company.value()); } } The single value \u0026lsquo;XYZ\u0026rsquo; overrides the default annotation value and the output is as below:\nEmployee Id: 1 Employee Name: John Doe Company Name: XYZ Full Annotations They consist of multiple name value pairs. For example Company(name=\u0026quot;ABC\u0026quot;, city=\u0026quot;XYZ\u0026quot;). Considering our original Company example:\n@Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) public @interface Company{ String name() default \u0026#34;ABC\u0026#34;; String city() default \u0026#34;XYZ\u0026#34;; } Let\u0026rsquo;s create the MultiValueAnnotatedEmployee class as below. Specify the parameters and values as below. 
The default values will be overwritten.\n@Company(name = \u0026#34;AAA\u0026#34;, city = \u0026#34;ZZZ\u0026#34;) public class MultiValueAnnotatedEmployee { } Run a test as below:\npublic class TestMultiValueAnnotatedEmployee { public static void main(String[] args) { MultiValueAnnotatedEmployee employee = new MultiValueAnnotatedEmployee(); Annotation companyAnnotation = employee.getClass().getAnnotation(Company.class); Company company = (Company)companyAnnotation; System.out.println(\u0026#34;Company Name: \u0026#34; + company.name()); System.out.println(\u0026#34;Company City: \u0026#34; + company.city()); } } The output is as below, and has overridden the default annotation values:\nCompany Name: AAA Company City: ZZZ Building a Real-World Annotation Processor For our real-world annotation processor example, we are going to do a simple simulation of the annotation @Test in JUnit. By marking our functions with the @Test annotation we can determine at runtime which of the methods in a test class need to be run as tests.\nWe first create the annotation as a marker annotation for methods:\n@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD) public @interface Test { } Next, we create a class AnnotatedMethods, to which we will apply the @Test annotations to the method test1(). This will enable the method to be executed at runtime. 
The method test2() does not have an annotation, and should not be executed at runtime.\npublic class AnnotatedMethods { @Test public void test1() { System.out.println(\u0026#34;This is the first test\u0026#34;); } public void test2() { System.out.println(\u0026#34;This is the second test\u0026#34;); } } Now we create the test to run the AnnotatedMethods class:\nimport java.lang.annotation.Annotation; import java.lang.reflect.Method; public class TestAnnotatedMethods { public static void main(String[] args) throws Exception { Class\u0026lt;AnnotatedMethods\u0026gt; annotatedMethodsClass = AnnotatedMethods.class; for (Method method : annotatedMethodsClass.getDeclaredMethods()) { Annotation annotation = method.getAnnotation(Test.class); Test test = (Test) annotation; // If the annotation is not null  if (test != null) { try { method.invoke(annotatedMethodsClass .getDeclaredConstructor() .newInstance()); } catch (Throwable ex) { System.out.println(ex.getCause()); } } } } } By calling getDeclaredMethods(), we\u0026rsquo;re getting the methods of our AnnotatedMethods class. Then, we\u0026rsquo;re iterating through the methods and checking whether each method is annotated with the @Test annotation. Finally, we perform a runtime invocation of the methods that were identified as being annotated with @Test.\nWe want to verify that the test1() method will run since it is annotated with @Test, and test2() will not run since it is not annotated with @Test.\nThe output is:\nThis is the first test So we verified that test2(), which did not have the @Test annotation, did not have its output printed.\nConclusion We did an overview of annotations, followed by a simple real-world example of annotation processing.\nWe can further use the power of annotation processing to perform more complex automated tasks such as creating builder source files for a set of POJOs at compile time. 
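To make the code-generation idea concrete, here is a small hand-written sketch of the kind of builder such an annotation processor could emit for a simple POJO. The Employee and EmployeeBuilder names are invented for illustration; a real processor would derive them from the annotated source class:

```java
// Hypothetical target POJO (names invented for illustration)
class Employee {
    private final int id;
    private final String name;

    Employee(int id, String name) {
        this.id = id;
        this.name = name;
    }

    int getId() { return id; }
    String getName() { return name; }
}

// The kind of builder an annotation processor could generate at compile time
class EmployeeBuilder {
    private int id;
    private String name;

    // Each setter returns the builder itself so calls can be chained
    EmployeeBuilder id(int id) { this.id = id; return this; }
    EmployeeBuilder name(String name) { this.name = name; return this; }

    // build() assembles the final, fully-initialized object
    Employee build() { return new Employee(id, name); }
}
```

A caller would then write new EmployeeBuilder().id(1).name("John Doe").build() instead of juggling constructor overloads.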
A builder is a design pattern in Java that is used to provide a better alternative to constructors when there is a large number of parameters involved or there is a need for multiple constructors with optional parameters. If we had a few dozen POJOs, the code generation capabilities of the annotation processor would save us a lot of time by creating the corresponding builder files at compile time.\nBy fully leveraging the power of annotation processing we will be able to skip a lot of repetition and save a lot of time.\nYou can play around with the code examples from this article on GitHub.\n","date":"February 5, 2022","image":"https://reflectoring.io/images/stock/0116-post-its-1200x628-branded_hufcc297de820f3cb09743a12e8a176428_69308_650x0_resize_q90_box.jpg","permalink":"/java-annotation-processing/","title":"An Introduction to Annotations and Annotation Processing in Java"},{"categories":["Spring Boot"],"contents":"NullPointerExceptions (often shortened as \u0026ldquo;NPE\u0026rdquo;) are a nightmare for every Java programmer.\nWe can find plenty of articles on the internet explaining how to write null-safe code. Null-safety ensures that we have added proper checks in the code to guarantee the object reference cannot be null or possible safety measures are taken when an object is null, after all.\nSince NullPointerException is a runtime exception, it would be hard to figure out such cases during code compilation. Java\u0026rsquo;s type system does not have a way to quickly eliminate the dangerous null object references.\nLuckily, Spring Framework offers some annotations to solve this exact problem. In this article, we will learn how to use these annotations to write null-safe code using Spring Boot.\n Example Code This article is accompanied by a working code example on GitHub. Null-Safety Annotations in Spring Under the org.springframework.lang Spring core package, there are 4 such annotations:\n @NonNull, @NonNullFields, @Nullable, and @NonNullApi.  
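For contrast, the traditional way to defend against null is an explicit runtime guard, for example with the JDK's Objects.requireNonNull(). This is a plain-Java sketch (the Greeter class is made up for illustration) and is independent of Spring's annotations, which surface the problem earlier, at compile time:

```java
import java.util.Objects;

class Greeter {
    private final String name;

    Greeter(String name) {
        // Fails fast with a NullPointerException and a clear message,
        // but only at runtime -- the compiler cannot warn the caller
        this.name = Objects.requireNonNull(name, "name must not be null");
    }

    String greet() {
        return "Hello, " + name;
    }
}
```

Such guards catch the problem only when the code actually runs; the annotations below let the tooling flag it while we are still writing the code.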
Popular IDEs like Eclipse and IntelliJ IDEA can understand these annotations. They can warn developers of potential issues at compile time.\nWe are going to use IntelliJ IDEA in this tutorial. Let us find out more with some code examples.\nTo create the base project, we can use the Spring Initializr. The Spring Boot starter is all we need; there is no need to add any extra dependencies.\nIDE Configuration Please note that not all development tools can show these compilation warnings. If you don\u0026rsquo;t see the relevant warning, check the compiler settings in your IDE.\nIntelliJ For IntelliJ, we can activate the annotation checking under \u0026lsquo;Build, Execution, Deployment -\u0026gt; Compiler\u0026rsquo;:\nEclipse For Eclipse, we can find the settings under \u0026lsquo;Java -\u0026gt; Compiler -\u0026gt; Errors/Warnings\u0026rsquo;:\nExample Code Let\u0026rsquo;s use a plain Employee class to understand the annotations:\npackage io.reflectoring.nullsafety; // imports  class Employee { String id; String name; LocalDate joiningDate; String pastEmployment; // standard constructor, getters, setters } @NonNull In most cases, the id field (in the Employee class) is going to be a non-nullable value. So, to avoid any potential NullPointerException, we can mark this field as @NonNull:\nclass Employee { @NonNull String id; //... } Now, if we accidentally try to set the value of id as null anywhere in the code, the IDE will show a compilation warning:\nThe @NonNull annotation can be used at the method, parameter, or field level.\nAt this point, you might be thinking \u0026ldquo;what if a class has more than one non-null field?\u0026rdquo;. 
Would it not be too wordy if we have to add a @NonNull annotation before each of these?\nWe can solve this problem by using the @NonNullFields annotation.\nHere is a quick summary for @NonNull:\n   Annotated element Effect     field Shows a warning when the field is null   parameter Shows a warning when the parameter is null   method Shows a warning when the method returns null   package Not Applicable    @NonNullFields Let us create a package-info.java file to apply the non-null field checks at the package level. This file will contain the root package name with @NonNullFields annotation:\n@NonNullFields package io.reflectoring.nullsafety; import org.springframework.lang.NonNullFields; Now, we no longer need to annotate the fields with the @NonNull annotation. Because by default, all fields of classes in that package are now treated as non-null. And, we will still see the same warning as before:\nAnother point to note here is if there are any uninitialized fields, then we will see a warning to initialize those:\nHere is a quick summary for @NonNullFields:\n   Annotated element Effect     field Not Applicable   parameter Not Applicable   method Not Applicable   package Shows a warning if any of the fields are null for the applied package    @NonNullApi By now, you might have spotted another requirement, i.e., to have similar checks for method parameters or return values. Here @NonNullApi will come to our rescue.\nSimilar to @NonNullFields, we can use a package-info.java file and add the @NonNullApi annotation for the intended package:\n@NonNullApi package io.reflectoring.nullsafety; import org.springframework.lang.NonNullApi; Now, if we write code where the method is returning null:\npackage io.reflectoring.nullsafety; // imports  class Employee { String getPastEmployment() { return null; } //... 
} We can see the IDE is now warning us about the non-nullable return value:\nHere is a quick summary for @NonNullApi:\n   Annotated element Effect     field Not Applicable   parameter Not Applicable   method Not Applicable   package Shows a warning if any of the parameters or return values are null for the applied package    @Nullable But here is a catch. There could be scenarios where a particular field can be null (no matter how much we want to avoid it).\nFor example, the pastEmployment field could be nullable in the Employee class (for someone who hasn\u0026rsquo;t had previous employment). But as per our safety checks, the IDE thinks it cannot be.\nWe can express our intention using the @Nullable annotation on the field. This will tell the IDE that the field can be null in some cases, so no need to trigger an alarm. As the JavaDoc suggests:\n Can be used in association with @NonNullApi or @NonNullFields to override the default non-nullable semantic to nullable.\n Similar to NonNull, the Nullable annotation can be applied to the method, parameter, or field level.\nWe can now mark the pastEmployment field as nullable:\npackage io.reflectoring.nullsafety; // imports  class Employee { @Nullable String pastEmployment; @Nullable String getPastEmployment() { return pastEmployment; } //... } Here is a quick summary for @Nullable:\n   Annotated element Effect     field Indicates that the field can be null   parameter Indicates that the parameter can be null   method Indicates that the method can return null   package Not Applicable    Automated Build Checks So far, we are discussing how modern IDEs make it easier to write null-safe code. However, if we want to have some automated code checks in our build pipeline, that\u0026rsquo;s also doable to some extent.\nSpotBugs (the reincarnation of the famous but abandoned FindBugs project) offers a Maven/Gradle plugin that can detect code smells due to nullability. 
Let\u0026rsquo;s see how we can use it.\nFor a Maven project, we need to update the pom.xml to add the SpotBugs Maven Plugin:\n\u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;com.github.spotbugs\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spotbugs-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;4.5.2.0\u0026lt;/version\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;!-- overwrite dependency on spotbugs if you want to specify the version of spotbugs --\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.github.spotbugs\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spotbugs\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;4.5.3\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/plugin\u0026gt; After building the project, we can use the following goals from this plugin:\n the spotbugs goal analyzes the target project. the check goal runs the spotbugs goal and makes the build fail if it finds any bugs.  If you use Gradle instead of Maven, you can configure the SpotBugs Gradle Plugin in your build.gradle file:\ndependencies { spotbugsPlugins \u0026#39;com.h3xstream.findsecbugs:findsecbugs-plugin:1.11.0\u0026#39; } spotbugs { toolVersion = \u0026#39;4.5.3\u0026#39; } Once the project is updated, we can run the check using the gradle check command.\nSpotBugs provides a few rules to flag potential issues by processing the @NonNull annotation during Maven build. 
You can go through the detailed list of bug descriptions.\nFor example, if any of the methods annotated with @NonNull is accidentally returning null, then the SpotBugs check will fail with an error similar to this:\n[ERROR] High: io.reflectoring.nullsafety.Employee.getJoiningDate() may return null, but is declared @Nonnull [io.reflectoring.nullsafety.Employee] At Employee.java:[line 36] NP_NONNULL_RETURN_VIOLATION Conclusion These annotations are indeed a boon for Java programmers to reduce the possibility of a NullPointerException arising during runtime. Please bear in mind this does not guarantee complete null safety, however.\nKotlin uses these annotations to infer the nullability of the Spring API.\nI hope you are now ready to write null-safe code in Spring Boot!\n","date":"February 1, 2022","image":"https://reflectoring.io/images/stock/0116-shield-1200x628-branded_huf59ea85e18d7ebdb2031719895d53f7d_161606_650x0_resize_q90_box.jpg","permalink":"/spring-boot-null-safety-annotations/","title":"Protect Your Code from NullPointerExceptions with Spring's Null-Safety Annotations"},{"categories":["AWS"],"contents":"Infrastructure as Code (IaC) is the managing and provisioning of infrastructure through code instead of through a manual process.\nAWS provides native support for IaC through the CloudFormation service. With CloudFormation, teams can define declarative templates that specify the infrastructure required to deploy their solutions.\nAWS Cloud Development Kit (CDK) is a framework for defining cloud infrastructure with the expressive power of a programming language and provisioning it through AWS CloudFormation.\nIn this article, we will introduce AWS CDK, understand its core concepts and work through some examples.\nCheck Out the Book!  
This article gives only a first impression of what you can do with AWS CDK.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\nAlso check out the sample chapters from the book about deploying a Spring Boot application with CDK and how to design a CDK project.\n  Example Code This article is accompanied by a working code example on GitHub. What is AWS CDK The AWS Cloud Development Kit (AWS CDK) is an open-source framework for defining cloud infrastructure as code with a set of supported programming languages. It is designed to support multiple programming languages. The core of the system is written in TypeScript, and bindings for other languages can be added.\nAWS CDK comes with a Command Line Interface (CLI) to interact with CDK applications for performing different tasks like:\n listing the infrastructure stacks defined in the CDK app synthesizing the stacks into CloudFormation templates determining the differences between running stack instances and the stacks defined in our CDK code, and deploying changes in stacks to any AWS Region.  Primer on CloudFormation - the Engine underneath CDK The CDK is built on top of the AWS CloudFormation service and uses it as the engine for provisioning AWS resources. So it is very important to have a good understanding of CloudFormation when working with CDK.\nAWS CloudFormation is an infrastructure as code (IaC) service for modeling, provisioning, and managing AWS and third-party resources.\nWe work with templates and stacks when using AWS CloudFormation. We create templates in YAML or JSON format to describe our AWS resources with their properties. 
A sample template for hosting a web application might look like this:\nResources: WebServer: Type: \u0026#39;AWS::EC2::Instance\u0026#39; Properties: SecurityGroups: - !Ref WebServerSecurityGroup KeyName: mykey ImageId: \u0026#39;ami-08e4e35cccc6189f4\u0026#39; Database: Type: AWS::RDS::DBInstance Properties: AllocatedStorage: 20 ... Engine: \u0026#39;mysql\u0026#39; WebServerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: SecurityGroupIngress: - CidrIp: 0.0.0.0/0 FromPort: 80 IpProtocol: tcp This template specifies the resources that we want for hosting a website:\n an Amazon EC2 instance an RDS MySQL database for storage An Amazon EC2 security group to control firewall settings for the Amazon EC2 instance.  You can browse the CloudFormation reference documentation for a list of all the resources that are available for use in CloudFormation templates.\nA CloudFormation stack is a collection of AWS resources that we can create, update, or delete as a single unit. The stack in our example includes all the resources required to run the web application: such as a web server, a database, and firewall rules.\nWhen creating a stack, CloudFormation provisions the resources that are described in our template by making underlying service calls to AWS.\nAWS CDK allows us to define our infrastructure in our favorite programming language instead of using a declarative language like JSON or YAML as in CloudFormation.\nSetting up the Prerequisites for CDK To work through some examples, let us first set up our development environment for writing AWS CDK apps. We need to complete the following activities for working with CDK:\n  Configure programmatic access to an AWS account: We will need access to an AWS account where our infrastructure will be created. We need access keys to make programmatic calls to AWS. 
We can create access keys from the AWS IAM console and set that up in our credentials file.\n  Install the CDK Toolkit: The AWS CDK Toolkit is the primary tool for interacting with the AWS CDK app through the CLI command cdk. It is an open-source project in GitHub. Among its capabilities are producing and deploying the AWS CloudFormation templates generated by the AWS CDK.\nWe can install the AWS CDK globally with npm:\nnpm install -g aws-cdk This will install the latest version of the CDK toolkit in our environment which we can verify with:\ncdk --version   Set up language-specific prerequisites: CDK supports multiple languages. We will be using Java in our examples here. We can create AWS CDK applications in Java using the language\u0026rsquo;s familiar tools like the JDK (Oracle\u0026rsquo;s, or an OpenJDK distribution such as Amazon Corretto) and Apache Maven. Prerequisites for other languages can be found in the official documentation.\n  Creating a New CDK Project Let\u0026rsquo;s create a new CDK project using the CDK CLI using the cdk init command:\nmkdir cdk-app cd cdk-app cdk init --language java Here we have created an empty directory cdk-app and used the cdk init command to create a Maven-based CDK project in Java language.\nRunning the cdk init command also displays the important CDK commands as shown here:\nApplying project template app for java # Welcome to your CDK Java project! ... ... 
## Useful commands * `mvn package` compile and run tests * `cdk ls` list all stacks in the app * `cdk synth` emits the synthesized CloudFormation template * `cdk deploy` deploy this stack to your default AWS account/region * `cdk diff` compare deployed stack with current state * `cdk docs` open CDK documentation The following files are generated by the CDK toolkit arranged in this folder structure:\n├── README.md ├── cdk.json ├── pom.xml └── src ├── main │ └── java │ └── com │ └── myorg │ ├── CdkAppApp.java │ └── CdkAppStack.java └── test └── java └── com └── myorg └── CdkAppTest.java We can see two Java classes CdkAppApp and CdkAppStack have been generated along with a test class in a Maven project. The CdkAppApp class contains the main() method and is the entry point of the application. Its name is a bit funny because it\u0026rsquo;s generated from the folder name cdk-app and CDK automatically adds the App suffix again.\nWe will understand more about the function of the App and the Stack classes and build on this further to define our infrastructure resources in the following sections.\nIntroducing Constructs - the Basic Building Blocks Before working any further with the files generated in our project, we need to understand the concept of constructs, which are the basic building blocks of an AWS CDK application.\nConstructs are reusable components in which we bundle a bunch of infrastructure resources that can be further composed together for building more complex pieces of infrastructure.\nA construct can represent a single AWS resource, such as an Amazon S3 bucket, or it can be a higher-level abstraction consisting of multiple AWS-related resources. 
Constructs are represented as a tree starting with a root construct and multiple child constructs arranged in a hierarchy.\nIn all CDK-supported languages, a construct is represented as a base class from which all other types of constructs inherit.\nStructure of a CDK Application A CDK project is composed of an App construct and one or more constructs of type Stack. When we generated the project by running cdk init, one App and one Stack construct were generated.\nThe App Construct - the CDK Application The App is a construct that represents an entire CDK app. This construct is normally the root of the construct tree. We define an App instance as the entry point of our CDK application and then define the constructs where the App is used as the parent scope.\nWe use the App construct to define one or more stacks within the scope of an application as shown in this code snippet:\npublic class MyCdkApp { public static void main(final String[] args) { App app = new App(); new MyFirstStack(app, \u0026#34;myStack\u0026#34;, StackProps.builder() .env(Environment.builder() .account(\u0026#34;********\u0026#34;) .region(\u0026#34;us-east-1\u0026#34;) .build()) .build()); app.synth(); } } In this example, the App instantiates a stack named myStack and sets the AWS account and region where the resources will be provisioned.\nThe Stack Construct - Unit of Deployment A stack is the unit of deployment in the AWS CDK. All AWS resources defined within the scope of a stack are provisioned as a single unit. 
We can define any number of stacks within a CDK app.\nFor example, the following code defines an AWS CDK app with two stacks:\npublic class MyCdkApp { public static void main(final String[] args) { App app = new App(); new MyFirstStack(app, \u0026#34;stack1\u0026#34;); new MySecondStack(app, \u0026#34;stack2\u0026#34;); app.synth(); } } Here we are defining two stacks named stack1 and stack2 and calling the synth() method on the app instance to generate the CloudFormation template. The call to app.synth() always has to be the last step in a CDK app. In this step, CDK \u0026ldquo;synthesizes\u0026rdquo; CloudFormation templates (i.e. JSON files) from the CDK code.\nDefining the Infrastructure with CDK After understanding the App and the Stack constructs, let us return to the project we generated earlier for creating our infrastructure resources.\nWe will first change the App class in our project to specify the stack properties: the AWS account and the region where we want to create our infrastructure. 
We do this by specifying these values in an environment object as shown here:\npublic class CdkAppApp { public static void main(final String[] args) { App app = new App(); new CdkAppStack(app, \u0026#34;CdkAppStack\u0026#34;, StackProps.builder() .env(Environment.builder() .account(\u0026#34;**********\u0026#34;) .region(\u0026#34;us-east-1\u0026#34;) .build()) .build()); app.synth(); } } We have defined the region as us-east-1 along with our AWS account in the env() method.\nNext, we will modify our stack class to define some infrastructure resources: an AWS EC2 instance with a security group in the default VPC of the AWS account:\npublic class CdkAppStack extends Stack { public CdkAppStack(final Construct scope, final String id) { this(scope, id, null); } public CdkAppStack( final Construct scope, final String id, final StackProps props) { super(scope, id, props); // Look up the default VPC  IVpc vpc = Vpc.fromLookup( this, \u0026#34;vpc\u0026#34;, VpcLookupOptions .builder() .isDefault(true) .build()); // Create a SecurityGroup which will allow all outbound traffic  SecurityGroup securityGroup = SecurityGroup .Builder .create(this, \u0026#34;sg\u0026#34;) .vpc(vpc) .allowAllOutbound(true) .build(); // Create EC2 instance of type T2.micro  Instance.Builder.create(this, \u0026#34;Instance\u0026#34;) .vpc(vpc) .instanceType(InstanceType.of( InstanceClass.BURSTABLE2, InstanceSize.MICRO)) .machineImage(MachineImage.latestAmazonLinux()) .blockDevices(List.of( BlockDevice.builder() .deviceName(\u0026#34;/dev/sda1\u0026#34;) .volume(BlockDeviceVolume.ebs(50)) .build(), BlockDevice.builder() .deviceName(\u0026#34;/dev/sdm\u0026#34;) .volume(BlockDeviceVolume.ebs(100)) .build())) .securityGroup(securityGroup) .build(); } } In this code snippet, we are first looking up the default VPC in our AWS account. After that, we are creating a security group in this VPC that will allow all outbound traffic. 
Finally, we are creating the EC2 instance with properties: instanceType, machineImage, blockDevices, and securityGroup and put it into the security group defined earlier.\nSynthesizing a Cloudformation Template Synthesizing is the process of executing our CDK app to generate the equivalent of our CDK code as a CloudFormation template. We do this by running the synth command as follows:\ncdk synth If our app contained more than one Stack, we need to specify which Stack(s) to synthesize. We don\u0026rsquo;t have to specify the Stack if it contains only one Stack.\nThe cdk synth command executes our app, which causes the resources defined in it to be translated into an AWS CloudFormation template. The output of cdk synth is a YAML-format template. The beginning of our app\u0026rsquo;s output is shown below:\n\u0026gt; cdk synth Resources: sg29196201: Type: AWS::EC2::SecurityGroup Properties: ... ... InstanceC1063A87: Type: AWS::EC2::Instance Properties: AvailabilityZone: us-east-1a BlockDeviceMappings: - DeviceName: /dev/sda1 ... ... InstanceType: t2.micro SecurityGroupIds: ... ... The output is the CloudFormation template containing the resources defined in the stack under our CDK app.\nDeploying the Cloudformation Template At last, we proceed to deploy the CDK app with the deploy command when the actual resources are provisioned in AWS. Let us run the deploy command by specifying our AWS credentials stored under a profile created in our environment:\ncdk deploy --profile pratikpoc The output of the deploy command looks like this:\n✨ Synthesis time: 8.18s This deployment will make potentially sensitive changes according to your current security approval level (--require-approval broadening). 
Please confirm you intend to make the following modifications: IAM Statement Changes ┌───┬──────────────────────────────┬────────┬────────────────┬───────────────────────────┬───────────┐ │ │ Resource │ Effect │ Action │ Principal │ Condition │ ├───┼──────────────────────────────┼────────┼────────────────┼───────────────────────────┼───────────┤ │ + │ ${Instance/InstanceRole.Arn} │ Allow │ sts:AssumeRole │ Service:ec2.amazonaws.com │ │ └───┴──────────────────────────────┴────────┴────────────────┴───────────────────────────┴───────────┘ Security Group Changes ┌───┬───────────────┬─────┬────────────┬─────────────────┐ │ │ Group │ Dir │ Protocol │ Peer │ ├───┼───────────────┼─────┼────────────┼─────────────────┤ │ + │ ${sg.GroupId} │ Out │ Everything │ Everyone (IPv4) │ └───┴───────────────┴─────┴────────────┴─────────────────┘ (NOTE: There may be security-related changes not in this list. See https://github.com/aws/aws-cdk/issues/1299) Do you wish to deploy these changes (y/n)? y CdkAppStack: deploying... [0%] start: Publishing 7815fc615f7d50b22e75cf1d134480a5d44b5b8b995b780207e963a44f27e61b:675153449441-us-east-1 [100%] success: Published 7815fc615f7d50b22e75cf1d134480a5d44b5b8b995b780207e963a44f27e61b:675153449441-us-east-1 CdkAppStack: creating CloudFormation changeset... ✅ CdkAppStack ✨ Deployment time: 253.98s Stack ARN: arn:aws:cloudformation:us-east-1:675153449441:stack/CdkAppStack/b9ab5740-7919-11ec-9cad-0a05d9e5c641 ✨ Total time: 262.16s CDK first creates a changeset of the resources that need to change and then we can confirm whether we want to proceed or not.\nDestroying the Infrastructure When we no longer need the infrastructure, we can dispose of all the provisioned resources by running the cdk destroy command:\n\u0026gt; cdk destroy --profile pratikpoc Are you sure you want to delete: CdkAppStack (y/n)? y CdkAppStack: destroying... 
✅ CdkAppStack: destroyed As a result of running the destroy command, all the resources under the stack are destroyed as a single unit.\nConstruct Library and the Construct Hub The AWS CDK contains the AWS Construct Library, which includes constructs that represent all the resources available on AWS. This library has three levels of constructs:\n  Level 1 (L1) Constructs: These are low-level constructs, also called CFN Resources, which directly represent all resources available in AWS CloudFormation. They are named CfnXyz, where Xyz is the name of the resource. We have to configure all the properties of the L1 constructs. For example, we will define an EC2 instance with the CfnInstance class and configure all its properties.\n  Level 2 (L2) Constructs: These are slightly higher-level, more opinionated constructs than the L1 constructs. L2 constructs have some defaults so that we don\u0026rsquo;t have to set certain properties in our CDK apps. The Instance class that we used in our example to provision an EC2 instance is an L2 construct and comes with default properties set.\n  Level 3 (L3) Constructs: These constructs are also called \u0026ldquo;patterns\u0026rdquo;. They are designed to help us complete common tasks in AWS, often involving multiple kinds of resources. 
For example, the aws-ecs-patterns provides higher-level Amazon ECS constructs which follow common architectural patterns for application and network Load Balanced Services, Queue Processing Services, and Scheduled Tasks (cron jobs).\n  Similarly, the Construct Hub is a resource to help us discover additional constructs from AWS, third parties, and the open-source CDK community.\nWriting Custom Constructs We can also write our own constructs by extending the Construct base class as shown here:\npublic class MyStorageBucket extends Construct { public MyStorageBucket(final Construct scope, final String id) { super(scope, id); Bucket bucket = new Bucket(this, \u0026#34;mybucket\u0026#34;); LifecycleRule lifecycleRule = LifecycleRule.builder() .abortIncompleteMultipartUploadAfter(Duration.minutes(30)) .enabled(false) .expiration(Duration.minutes(30)) .expiredObjectDeleteMarker(false) .id(\u0026#34;myrule\u0026#34;) .build(); bucket.addLifecycleRule(lifecycleRule); } } This construct can be used for creating an S3 bucket construct with a lifecycle rule attached.\nWe can also create constructs by composing multiple lower-level constructs. This way we can define reusable components and share them with other teams like any other code.\nFor example, in an organization setup, a team can define a construct to enforce security best practices for an AWS resource like EC2 or S3 and share it with other teams in the organization. Other teams can now use this construct when provisioning their AWS resources without breaking the organization\u0026rsquo;s security policies.\nConclusion Here is a list of the major points for a quick reference:\n AWS Cloud Development Kit (CDK) is a framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation. Multiple programming languages are supported by CDK. Constructs are the basic building blocks of CDK. The App construct represents the main construct of a CDK application. 
We define the resources that we want to provision in the Stack construct. There are three levels of constructs: L1, L2, and L3 (from low to high abstraction). The Construct Hub is a resource to help us discover additional constructs from AWS, third parties, and the open-source CDK community. We can create our own constructs, usually by composing lower-level constructs. This way we can define reusable components and share them with other teams like any other code. As with all frameworks, AWS CDK has recommended best practices that should be followed for building CDK applications. Important cdk commands: cdk init app --language java // Generate the CDK project cdk synth // Generate the CloudFormation Template cdk diff // Finding the difference between deployed resources and new resources cdk deploy // Deploy the app to provision the resources cdk destroy // Dispose of the infrastructure   You can refer to all the source code used in the article on GitHub.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS CDK.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\nAlso check out the sample chapters from the book about deploying a Spring Boot application with CDK and how to design a CDK project.\n ","date":"January 23, 2022","image":"https://reflectoring.io/images/stock/0061-cloud-1200x628-branded_hu34d6aa247e0bb2675461b5a0146d87a8_82985_650x0_resize_q90_box.jpg","permalink":"/getting-started-with-aws-cdk/","title":"Getting Started with AWS CDK"},{"categories":["Java"],"contents":"Immutability means that an object\u0026rsquo;s state is constant after initialization. It cannot change afterward.\nWhen we pass an object into a method, we pass the reference to that object. 
The parameter of the method and the original object now reference the same value on the heap.\nThis can cause multiple side effects. For example, in a multi-threaded system, one thread can change the value behind the reference, and it will cause other threads to misbehave. If you want to learn more about the reasons why we should make objects immutable, read the article about the advantages of immutables.\nThe Immutables library generates classes that are immutable, thread-safe, and null-safe, and helps us avoid these side effects. Aside from creating immutable classes, the library helps us write readable and clean code.\nLet us go through several examples showing key functionalities and how to use them properly.\n Example Code This article is accompanied by a working code example on GitHub. Setting up Immutables with Maven Adding the Immutables library is as simple as can be. We just need to add the dependency:\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.immutables\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;value\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.8.8\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; Example Use Case Let us start building a webpage for creating and reading news articles. There are two entities that we want to write:\n User Article  Each user can write multiple articles, and each article has to have an author of type User.
We won\u0026rsquo;t go into more details about the logic of the application.\nThe User Entity public class UserWithoutImmutable { private final long id; private final String name; private final String lastname; private final String email; private final String password; private final Role role; private List\u0026lt;ArticleWithoutImmutable\u0026gt; articles; private UserWithoutImmutable( long id, String name, String lastname, String email, String password, Role role, List\u0026lt;ArticleWithoutImmutable\u0026gt; articles) { this.id = id; this.name = name; this.lastname = lastname; this.email = email; this.password = password; this.role = role; this.articles = new ArrayList\u0026lt;\u0026gt;(articles); } public long getId() { return id; } public String getName() { return name; } public String getLastname() { return lastname; } public String getEmail() { return email; } public String getPassword() { return password; } public Role getRole() { return role; } public List\u0026lt;ArticleWithoutImmutable\u0026gt; getArticles() { return articles; } public UserWithoutImmutable addArticle( ArticleWithoutImmutable article) { this.articles.add(article); return this; } public UserWithoutImmutable addArticles( List\u0026lt;ArticleWithoutImmutable\u0026gt; articles) { this.articles.addAll(articles); return this; } @Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; UserWithoutImmutable that = (UserWithoutImmutable) o; return id == that.id \u0026amp;\u0026amp; email.equals(that.email) \u0026amp;\u0026amp; password.equals(that.password); } @Override public int hashCode() { return Objects.hash(id, email, password); } @Override public String toString() { return \u0026#34;UserWithoutImmutable{\u0026#34; + \u0026#34;id=\u0026#34; + id + \u0026#34;, name=\u0026#39;\u0026#34; + name + \u0026#39;\\\u0026#39;\u0026#39; + \u0026#34;, lastname=\u0026#39;\u0026#34; + lastname + \u0026#39;\\\u0026#39;\u0026#39; + 
\u0026#34;, role= \u0026#39;\u0026#34; + role + \u0026#39;\\\u0026#39;\u0026#39; + \u0026#34;, email=\u0026#39;\u0026#34; + email + \u0026#39;\\\u0026#39;\u0026#39; + \u0026#34;, password= *****\u0026#39;\u0026#34; + \u0026#34;, articles=\u0026#34; + articles + \u0026#39;}\u0026#39;; } public static UserWithoutImmutableBuilder builder() { return new UserWithoutImmutableBuilder(); } public static class UserWithoutImmutableBuilder { private long id; private String name; private String lastname; private Role role; private String email; private String password; private List\u0026lt;ArticleWithoutImmutable\u0026gt; articles; public UserWithoutImmutableBuilder id(long id) { this.id = id; return this; } public UserWithoutImmutableBuilder name(String name) { this.name = name; return this; } public UserWithoutImmutableBuilder lastname(String lastname) { this.lastname = lastname; return this; } public UserWithoutImmutableBuilder role(Role role) { this.role = role; return this; } public UserWithoutImmutableBuilder email(String email) { this.email = email; return this; } public UserWithoutImmutableBuilder password(String password) { this.password = password; return this; } public UserWithoutImmutableBuilder articles( List\u0026lt;ArticleWithoutImmutable\u0026gt; articles) { this.articles = articles; return this; } public UserWithoutImmutable build() { return new UserWithoutImmutable(id, name, lastname, email, password, role, articles); } } } The code shows a manually created User class. 
Each user has a couple of attributes and a list of articles they wrote.\nWe can see how much code is needed to write a POJO (Plain old Java object) class that doesn\u0026rsquo;t contain any business logic.\nWe added the builder pattern for easier object initialization.\nThe Article Entity public class ArticleWithoutImmutable { private final long id; private final String title; private final String content; private final long userId; private ArticleWithoutImmutable(long id, String title, String content, long userId) { this.id = id; this.title = title; this.content = content; this.userId = userId; } public long getId() { return id; } public String getTitle() { return title; } public String getContent() { return content; } public long getUserId() { return userId; } @Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; ArticleWithoutImmutable that = (ArticleWithoutImmutable) o; return id == that.id \u0026amp;\u0026amp; Objects.equals(title, that.title) \u0026amp;\u0026amp; Objects.equals(content, that.content); } @Override public int hashCode() { return Objects.hash(id, title, content); } public static ArticleWithoutImmutableBuilder builder() { return new ArticleWithoutImmutableBuilder(); } public static class ArticleWithoutImmutableBuilder { private long id; private String title; private String content; private long userId; public ArticleWithoutImmutableBuilder id(long id) { this.id = id; return this; } public ArticleWithoutImmutableBuilder title(String title) { this.title = title; return this; } public ArticleWithoutImmutableBuilder content( String content) { this.content = content; return this; } public ArticleWithoutImmutableBuilder userId(Long userId) { this.userId = userId; return this; } public ArticleWithoutImmutable build() { return new ArticleWithoutImmutable(id, title, content, userId); } } } We built the Article entity by hand to present how much code we needed for a relatively 
simple entity class.\nThe article class is a standard POJO (Plain old Java object) class that doesn\u0026rsquo;t contain any business logic.\nCreating a Basic Immutable Entity Let\u0026rsquo;s now look at how the Immutables library makes it simple to create an immutable entity without all that boilerplate code. Let us only look at the Article entity, because it will be very similar for the User entity.\nImmutable Article Definition In the standard article implementation, we saw how much code we need for creating a simple POJO class with a builder. Thankfully, with Immutables, we can get all that for free by annotating an abstract class:\n@Value.Immutable public abstract class Article { abstract long getId(); abstract String getTitle(); abstract String getContent(); abstract long getUserId(); } The @Value.Immutable annotation instructs the annotation processor that it should generate an implementation for this class. This annotation will create the builder that we defined in the manual implementation.\nIt is important to mention that we can place the @Value.Immutable annotation on a class, an interface, or an annotation type.\nImmutable Article Implementation Let\u0026rsquo;s look at what the Immutables library generates from the definition above:\n@Generated(from = \u0026#34;Article\u0026#34;, generator = \u0026#34;Immutables\u0026#34;) @SuppressWarnings({\u0026#34;all\u0026#34;}) @javax.annotation.processing.Generated( \u0026#34;org.immutables.processor.ProxyProcessor\u0026#34;) public final class ImmutableArticle extends Article { private final long id; private final String title; private final String content; private final long userId; private ImmutableArticle( long id, String title, String content, long userId) { this.id = id; this.title = title; this.content = content; this.userId = userId; } @Override long getId() { return id; } @Override String getTitle() { return title; } @Override String getContent() { return content; } @Override long getUserId() { return 
userId; } public final ImmutableArticle withId(long value) { if (this.id == value) return this; return new ImmutableArticle(value, this.title, this.content, this.userId); } public final ImmutableArticle withTitle(String value) { String newValue = Objects.requireNonNull(value, \u0026#34;title\u0026#34;); if (this.title.equals(newValue)) return this; return new ImmutableArticle(this.id, newValue, this.content, this.userId); } public final ImmutableArticle withContent(String value) { String newValue = Objects.requireNonNull(value, \u0026#34;content\u0026#34;); if (this.content.equals(newValue)) return this; return new ImmutableArticle(this.id, this.title, newValue, this.userId); } public final ImmutableArticle withUserId(long value) { if (this.userId == value) return this; return new ImmutableArticle(this.id, this.title, this.content, value); } @Override public boolean equals(Object another) { // Implementation omitted  } private boolean equalTo(ImmutableArticle another) { // Implementation omitted  } @Override public int hashCode() { // Implementation omitted  } @Override public String toString() { // Implementation omitted  } public static ImmutableArticle copyOf(Article instance) { if (instance instanceof ImmutableArticle) { return (ImmutableArticle) instance; } return ImmutableArticle.builder() .from(instance) .build(); } public static ImmutableArticle.Builder builder() { return new ImmutableArticle.Builder(); } @Generated(from = \u0026#34;Article\u0026#34;, generator = \u0026#34;Immutables\u0026#34;) public static final class Builder { // Implementation omitted  } } The annotation processor generates the implementation class from the skeleton that we defined. 
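The generated class above exposes with* (\u0026ldquo;wither\u0026rdquo;) methods and a copyOf factory. To illustrate the semantics they follow without running the annotation processor, here is a small hand-rolled sketch; MiniArticle is a hypothetical stand-in for the generated ImmutableArticle, not code produced by the library:

```java
import java.util.Objects;

// A hand-rolled illustration of the "wither" semantics that the
// Immutables annotation processor generates; MiniArticle is a
// hypothetical stand-in, not generated code.
public final class MiniArticle {
    private final long id;
    private final String title;

    public MiniArticle(long id, String title) {
        this.id = id;
        this.title = Objects.requireNonNull(title, "title");
    }

    public long getId() { return id; }
    public String getTitle() { return title; }

    // Returns the same instance when the value is unchanged,
    // otherwise a new immutable copy with the field replaced.
    public MiniArticle withTitle(String value) {
        String newValue = Objects.requireNonNull(value, "title");
        if (this.title.equals(newValue)) return this;
        return new MiniArticle(this.id, newValue);
    }

    public static void main(String[] args) {
        MiniArticle draft = new MiniArticle(1, "Draft");
        MiniArticle published = draft.withTitle("Final");
        // draft is untouched; published is a separate instance
        System.out.println(draft.getTitle() + " -> " + published.getTitle());
    }
}
```

Because the original instance is never mutated, it can be shared freely across threads; every change produces a fresh object.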
The naming convention is \u0026ldquo;Immutable\u0026rdquo; followed by the name of the annotated class.\nThe implementation class contains each of the methods we defined on the annotated class or interface, backed by attribute values.\nIf we name our methods get*, the implementation will strip the \u0026ldquo;get\u0026rdquo; part and take the rest as the attribute name. Every other naming will take the full method name as the attribute name.\nIn the basic implementation, there is no constructor. The annotation processor generates a builder by default. We omitted the implementation code for the builder class to save some space. If you want to look into the implementation details, please refer to the GitHub repo.\nFor working with the immutable objects, the annotation processor creates with* (\u0026ldquo;wither\u0026rdquo;) methods that help us build a new object from the current one. Each attribute has its own with* method.\nWe can see how easy it is to create a class that provides us with all the perks of immutability. We didn\u0026rsquo;t have to write any boilerplate code.\nUsing a Builder Even though the constructor is the standard way of creating an object instance, the builder pattern makes things easier. The builder pattern allows optional and default attributes.\nDefault Builder The Immutables library comes with the builder pattern by default. We don\u0026rsquo;t need to add anything specific to the class definition:\n@Value.Immutable public abstract class Article { abstract long getId(); abstract String getTitle(); abstract String getContent(); abstract long getUserId(); } The class definition is the same as in our previous examples. The @Value.Immutable annotation defines the builder on this entity.\nStrict Builder The builder class is not immutable by default.
If we want to use an immutable builder, we can use the strict builder:\n@Value.Immutable @Value.Style(strictBuilder = true) abstract class StrictBuilderArticle { abstract long getId(); abstract String getTitle(); abstract String getContent(); } The @Value.Style annotation is a meta-annotation for defining what the annotation processor will generate. We set the strictBuilder attribute to true, meaning that the generated builder should be strict.\nA strict builder means that we cannot set a value for the same attribute twice during the building steps. We are making the builder implementation immutable:\npublic class BuildersService { public static StrictBuilderArticle createStrictArticle() { return ImmutableStrictBuilderArticle.builder() .id(0) .id(1) .build(); } } Here, we are setting the id attribute twice, producing the following error:\nException in thread \u0026#34;main\u0026#34; java.lang.IllegalStateException: Builder of StrictBuilderArticle is strict, attribute is already set: id If we were to use a regular builder, the code above wouldn\u0026rsquo;t throw this error.\nStaged Builder If we want to make sure that all required attributes are provided to the builder before we create the actual instance, we can use a staged builder:\n@Value.Immutable @Value.Style(stagedBuilder = true) abstract class StagedBuilderArticle { abstract long getId(); abstract String getTitle(); abstract String getContent(); } We use the @Value.Style annotation to tell the annotation processor that we need the staged builder generated:\npublic class BuildersService { public static StagedBuilderArticle createStagedArticle() { return ImmutableStagedBuilderArticle.builder() .id(0) .title(\u0026#34;Lorem ipsum article!\u0026#34;) .build(); } } In this example, we are not setting the content attribute, producing the following compile-time error:\nNo candidates found for method call ImmutableStagedBuilderArticle.builder() .id(0).title(\u0026#34;Lorem ipsum article!\u0026#34;).build() The error shows that
we cannot call the build() method if we don\u0026rsquo;t set all required attributes.\nIt is important to mention that the staged builder is a strict builder by implication.\nUsing a Constructor We might be using libraries that need a constructor for object creation (e.g., Hibernate). As mentioned, the Immutables library creates a builder by default, leaving the constructor in the private scope.\nLet\u0026rsquo;s look at how to define a class that generates a constructor for us, instead:\n@Value.Immutable public abstract class ConstructorArticle { @Value.Parameter public abstract long getId(); @Value.Parameter public abstract String getTitle(); @Value.Parameter public abstract String getContent(); } By setting the @Value.Immutable annotation, we declare that we are building an immutable class.\nTo define the constructor, we need to annotate each attribute that should be part of that constructor with the @Value.Parameter annotation.\nIf we look into the generated implementation, we will see that the constructor has public scope.\nUsing the of() Method By default, the Immutables library provides the of() method to create a new immutable object:\npublic class ConstructorService { public static ConstructorArticle createConstructorArticle() { return ImmutableConstructorArticle.of(0, \u0026#34;Lorem ipsum article!\u0026#34;, \u0026#34;Lorem ipsum...\u0026#34;); } } Using the new Keyword If we want to use a plain public constructor with the new keyword, we need to define it through the @Value.Style annotation:\n@Value.Immutable @Value.Style(of = \u0026#34;new\u0026#34;) public abstract class PlainPublicConstructorArticle { @Value.Parameter public abstract long getId(); @Value.Parameter public abstract String getTitle(); @Value.Parameter public abstract String getContent(); } First, we define that our class should be immutable.
Then we annotate which attributes should be part of the public constructor.\nThe last thing that we need to do is to add the @Value.Style(of = \u0026quot;new\u0026quot;) annotation to the class definition.\nAfter defining the @Value.Style annotation, we can create the instance using the new keyword:\npublic class ConstructorService { public static PlainPublicConstructorArticle createPlainPublicConstructorArticle() { return new ImmutablePlainPublicConstructorArticle(0, \u0026#34;Lorem ipsum\u0026#34;, \u0026#34;Lorem ipsum...\u0026#34;); } } The article is created using the new keyword.\nOptional and Default Attributes All attributes in the immutable class are mandatory by default. If we want to create a field where we can omit the value, we can approach it in two different ways:\n use Java\u0026rsquo;s Optional type use a default provider  Optional Attributes The Immutables library supports Java\u0026rsquo;s Optional type. If we want to make some fields optional, we can just wrap them into an Optional object:\n@Value.Immutable abstract class OptionalArticle { abstract Optional\u0026lt;Long\u0026gt; getId(); abstract Optional\u0026lt;String\u0026gt; getTitle(); abstract Optional\u0026lt;String\u0026gt; getContent(); } By wrapping each attribute into an Optional, we are sure that the code will not fail if we don\u0026rsquo;t provide the value.\nWe need to be careful not to overuse this approach. We should wrap only those attributes that should be optional.
Everything else, by default, should go as a mandatory attribute.\nDefault Attributes Default Attribute on the Class If we want to provide default values to the attributes that are not set using the builder or the constructor, we can use the @Value.Default annotation:\n@Value.Immutable abstract class DefaultArticle { abstract Long getId(); @Value.Default String getTitle() { return \u0026#34;Default title!\u0026#34;; } abstract String getContent(); } The methods annotated with the @Value.Default annotation should then return the default value.\nDefault Attribute on the Interface We can also provide a default value for an attribute defined in an interface. We use the same @Value.Default annotation as in the previous example:\n@Value.Immutable interface DefaultArticleInterface { Long getId(); @Value.Default default String getTitle() { return \u0026#34;Default title!\u0026#34;; } String getContent(); } Since we are working with an interface, the method annotated with the @Value.Default annotation has to have the default keyword.\nDerived and Lazy Attributes Derived Attributes If we need to create a default value from other attributes, we can use the @Value.Derived annotation:\n@Value.Immutable abstract class DerivedArticle { abstract Long getId(); abstract String getTitle(); abstract String getContent(); @Value.Derived String getSummary() { String summary = getContent().substring(0, getContent().length() \u0026gt; 50 ? 50 : getContent().length()); return summary.length() == getContent().length() ? summary : summary + \u0026#34;...\u0026#34;; } } Again, we first annotated the abstract class with the @Value.Immutable annotation.\nThe summary attribute should be derived from the value of the content attribute. We want to take only the first fifty characters from the content.
After creating the method for getting the summary, we need to annotate it with the @Value.Derived annotation.\nLazy Attributes Deriving a value can be an expensive operation, so we might want to do it only once, and only when it is needed. To do this, we can use the @Value.Lazy annotation:\n@Value.Immutable abstract class LazyArticle { abstract Long getId(); abstract String getTitle(); abstract String getContent(); @Value.Lazy String summary() { String summary = getContent().substring(0, getContent().length() \u0026gt; 50 ? 50 : getContent().length()); return summary.length() == getContent().length() ? summary : summary + \u0026#34;...\u0026#34;; } } After annotating the method with @Value.Lazy, we are sure that this value will be computed only the first time it is used.\nWorking with Collections The User Entity Our user entity has a list of articles. When I started writing this article, I was wondering how collections behave with immutability.\n@Value.Immutable public abstract class User { public abstract long getId(); public abstract String getName(); public abstract String getLastname(); public abstract String getEmail(); public abstract String getPassword(); public abstract List\u0026lt;Article\u0026gt; getArticles(); } The User entity was built like any other immutable entity we created in this article.
We annotated the class with the @Value.Immutable annotation and created abstract methods for the attributes that we wanted.\nAdding to a Collection Let us see how, and when, we can add values to the articles list inside the user entity:\npublic class CollectionsService { public static void main(String[] args) { Article article1 = ...; Article article2 = ...; Article article3 = ...; User user = ImmutableUser.builder() .id(1L) .name(\u0026#34;Mateo\u0026#34;) .lastname(\u0026#34;Stjepanovic\u0026#34;) .email(\u0026#34;mock@mock.com\u0026#34;) .password(\u0026#34;mock\u0026#34;) .addArticles(article1) .addArticles(article2) .build(); user.getArticles().add(article3); } } After creating several articles, we can move on to user creation. The Immutables library provided us with the method addArticles(). The method allows us to add articles one by one, even when we use the strict builder.\nBut what happens when we try to add a new article to an already built user?\nException in thread \u0026#34;main\u0026#34; java.lang.UnsupportedOperationException at java.base/java.util.Collections$UnmodifiableCollection.add(Collections.java:1060) at com.reflectoring.io.immutables.collections.CollectionsService.main(CollectionsService.java:45) After adding the new article to the already built user, we get an UnsupportedOperationException. After building, the list is immutable, and we cannot add anything new to it. If we want to expand this list, we need to create a new user.\nStyles The @Value.Style is the annotation with which we control what code the annotation processor will generate.
So far, we have used the @Value.Style annotation to generate the standard constructor format.\nWe can use the annotation on several levels:\n on the package level on the top class level on the nested class level on the annotation level  The package level annotation will apply the style to the whole package.\nThe class level will take effect on the class where we placed it and on all nested classes.\nUsed on an annotation as a meta-annotation, all classes annotated with that annotation will use the given style. The next section shows how to create and use the meta-annotation.\nThere are several things that we need to be aware of:\n If several styles apply to the same class, one of them will be selected nondeterministically. Styles are never merged. Styles are a powerful tool, and we need to be careful when using them. Styles are cached. When changing something on a style, we need to rebuild the project or even restart the IDE.  Note: Using one or more meta-annotations instead of class-level or package-level styles will result in easier maintenance and upgrades.\nCreating a Style Meta Annotation Let\u0026rsquo;s look at how to define a new meta-annotation with a given style:\n@Target({ElementType.PACKAGE, ElementType.TYPE}) @Retention(RetentionPolicy.CLASS) @Value.Style( of = \u0026#34;new\u0026#34;, strictBuilder = true, allParameters = true, visibility = Value.Style.ImplementationVisibility.PUBLIC ) public @interface CustomStyle { } After defining @Target and @Retention as usual with an annotation, we come to the @Value.Style annotation. The first value defines that we want to use the new keyword. The next thing that we define is that we want to use the strictBuilder and that all attributes should be treated as if they were annotated with the @Value.Parameter annotation.
The last style defined is that the implementation visibility will be public.\nUsing a Style Meta Annotation After defining the new style meta-annotation, we can use it as we would use the standard @Value.Style annotation:\n@Value.Immutable @CustomStyle abstract class StylesArticle { abstract long getId(); abstract String getTitle(); abstract String getContent(); } The @CustomStyle annotation will create everything that we defined in the previous section.\nFor more information about style possibilities, please refer to the official documentation.\nConclusion We saw how the Immutables library helps us build immutable, thread-safe, and null-safe domain objects. It helps us build clean and readable POJO classes.\nSince it is a powerful tool, we need to be careful how we use it. We can easily stray down the wrong path and overuse its features. For example, derived attributes can end up in cycles which would break our code. The style definition can cause unexpected behavior in the code generation process if we are not careful enough. We can get nondeterministic behavior that we don\u0026rsquo;t want to experience.\nThe last thing that I want to point out is the @Value.Style annotation. The @Value.Immutable annotation tells what will be generated, while the @Value.Style tells how it will be generated. This annotation can be a slippery slope, and we need to be careful and go outside of the default settings only when we are certain that we need to.\nFor deeper reading on the Immutables library, please refer to the official page.\nYou can check out the code from the examples on GitHub.\n","date":"January 17, 2022","image":"https://reflectoring.io/images/stock/0065-java-1200x628-branded_hu49f406cdc895c98f15314e0c34cfd114_116403_650x0_resize_q90_box.jpg","permalink":"/immutables-library/","title":"Complete Guide to the Immutables Java Library"},{"categories":["Java"],"contents":"Collections are containers that group multiple items in a single unit.
For example, a collection can represent a stack of books, products of a category, a queue of text messages, etc.\nThey are an essential feature of almost all programming languages, most of which support different types of collections such as List, Set, Queue, Stack, etc.\nJava also supports a rich set of collections packaged in the Java Collections Framework.\nIn this article, we will look at some examples of performing common operations on collections like addition (joining), splitting, finding the union, and the intersection of two or more collections.\n Example Code This article is accompanied by a working code example on GitHub. Java Collections Framework A Collections Framework is a unified architecture for representing and manipulating collections and is one of the core parts of the Java programming language. It provides a set of interfaces and classes to implement various data structures and algorithms along with several methods to perform various operations on collections.\nThe Collection interface is the root interface of the Collections Framework hierarchy.\nJava does not provide direct implementations of the Collection interface but provides implementations of its subinterfaces like List, Set, and Queue.\nThe official documentation of the Java Collection Interface is the go-to guide for everything related to collections. Here, we will cover only the methods to perform common operations on one or more collections.\nWe have divided the common operations on collections, which we will look at here, into two groups:\n Logical Operations: AND, OR, NOT, and XOR between two collections Other Operations on Collections based on class methods of the Collection and Stream classes.  
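It is worth noting that the bulk operations of java.util.Set (addAll, retainAll, removeAll) map directly onto three of these logical operations. Here is a minimal sketch; the class name SetOps and its method names are illustrative, not part of the article's CollectionHelper:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Illustrative helper (name SetOps is hypothetical): expressing OR, AND,
// and NOT via the bulk operations of java.util.Set.
public class SetOps {

    // OR: union via addAll
    public static Set<Integer> or(List<Integer> a, List<Integer> b) {
        Set<Integer> result = new LinkedHashSet<>(a);
        result.addAll(b);
        return result;
    }

    // AND: intersection via retainAll
    public static Set<Integer> and(List<Integer> a, List<Integer> b) {
        Set<Integer> result = new LinkedHashSet<>(a);
        result.retainAll(b);
        return result;
    }

    // NOT: difference via removeAll
    public static Set<Integer> not(List<Integer> a, List<Integer> b) {
        Set<Integer> result = new LinkedHashSet<>(a);
        result.removeAll(b);
        return result;
    }

    public static void main(String[] args) {
        List<Integer> a = Arrays.asList(9, 8, 5, 4, 7);
        List<Integer> b = Arrays.asList(1, 3, 99, 4, 7);
        System.out.println(or(a, b));  // [9, 8, 5, 4, 7, 1, 3, 99]
        System.out.println(and(a, b)); // [4, 7]
        System.out.println(not(a, b)); // [9, 8, 5]
    }
}
```

Using LinkedHashSet keeps the insertion order of the first collection, matching the tables in the sections that follow. The stream-based variants shown below additionally preserve duplicates where a List result is wanted.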
Logical Operations on Collections We will look at the following logical operations between two collections:\n OR: for getting a union of elements in two collections AND: for getting an intersection of elements in two collections XOR: exclusive OR for finding mismatched elements from two collections NOT: for finding elements of one collection not present in a second collection  OR - Union of Two Collections The union of two collections A and B is a set containing all elements that are in A or B or both:\n   Collection Elements     A [9, 8, 5, 4, 7]   B [1, 3, 99, 4, 7]   A OR B [9, 8, 5, 4, 7, 1, 3, 99]    We can find the union of two collections by using a collection of type Set, which can hold only distinct elements:\npublic class CollectionHelper { public List\u0026lt;Integer\u0026gt; union( final List\u0026lt;Integer\u0026gt; collA, final List\u0026lt;Integer\u0026gt; collB){ Set\u0026lt;Integer\u0026gt; set = new LinkedHashSet\u0026lt;\u0026gt;(); // add all elements of collection A  set.addAll(collA); // add all elements of collection B  set.addAll(collB); return new ArrayList\u0026lt;\u0026gt;(set); } } Here we are first adding all the elements of each collection to a Set, which by definition does not contain any duplicate elements.\nWe have used the LinkedHashSet implementation of the Set interface to preserve the order of the elements in the resulting collection.\nAND - Intersection of Two Collections The intersection of two collections contains only those elements that are in both collections:\n   Collection Elements     A [9, 8, 5, 4, 7]   B [1, 3, 99, 4, 7]   A AND B [4, 7]    We will use Java\u0026rsquo;s Stream API for finding the intersection of two collections:\npublic class CollectionHelper { public List\u0026lt;Integer\u0026gt; intersection( final List\u0026lt;Integer\u0026gt; collA, final List\u0026lt;Integer\u0026gt; collB){ List\u0026lt;Integer\u0026gt; intersectElements = collA .stream() .filter(collB ::
contains) .collect(Collectors.toList()); if(!intersectElements.isEmpty()) { return intersectElements; } else { return Collections.emptyList(); } } } To find the intersection of two collections, we stream the first collection and use the filter() method to keep only the elements that are also present in the second collection.\nXOR - Finding Different Elements from Two Collections XOR (eXclusive OR) is a boolean logic operation that returns 0 or false if the bits are the same and 1 or true for different bits. With collections, the XOR operation will contain all elements that are in one of the collections, but not in both:\n   Collection Elements     A [1, 2, 3, 4, 5, 6]   B [3, 4, 5, 6, 7, 8, 9]   A XOR B [1, 2, 7, 8, 9]    The Java code for an XOR operation may look something like this:\npublic class CollectionHelper { public List\u0026lt;Integer\u0026gt; xor(final List\u0026lt;Integer\u0026gt; collA, final List\u0026lt;Integer\u0026gt; collB){ // Filter elements of A not in B  List\u0026lt;Integer\u0026gt; listOfAnotInB = collA .stream() .filter(element-\u0026gt;{ return !collB.contains(element); }) .collect(Collectors.toList()); // Filter elements of B not in A  List\u0026lt;Integer\u0026gt; listOfBnotInA = collB .stream() .filter(element-\u0026gt;{ return !collA.contains(element); }) .collect(Collectors.toList()); // Concatenate the two filtered lists  return Stream.concat( listOfAnotInB.stream(), listOfBnotInA.stream()) .collect(Collectors.toList()); } } Here we are first using the filter() method of the Stream interface to include only the elements in the first collection which are not present in the second collection.
Then we perform a similar operation on the second collection to include only the elements which are not present in the first collection, followed by concatenating the two filtered collections.\nNOT - Elements of One Collection Not Present in the Second Collection We use the NOT operation to select elements from one collection which are not present in the second collection, as shown in this example:\n   Collection Elements     A [1, 2, 3, 4, 5, 6]   B [3, 4, 5, 6, 7, 8, 9]   A NOT B [1, 2]   B NOT A [7, 8, 9]    To calculate this in Java, we can again take advantage of filtering:\npublic class CollectionHelper { public List\u0026lt;Integer\u0026gt; not(final List\u0026lt;Integer\u0026gt; collA, final List\u0026lt;Integer\u0026gt; collB){ List\u0026lt;Integer\u0026gt; notList = collA .stream() .filter(element-\u0026gt;{ return !collB.contains(element); }) .collect(Collectors.toList()); return notList; } } Here we are using the filter() method to include only the elements in the first collection which are not present in the second collection.\nOther Common Operations on Collections We will now look at some more operations on collections, mainly involving splitting and joining.\nSplitting a Collection into Two Parts Splitting a collection into multiple sub-collections is a very common task when building applications.\nWe want to have a result like this:\n   Collection Elements     A [9, 8, 5, 4, 7, 15, 15]   First half of A [9, 8, 5, 4]   Second half of A [7, 15, 15]    In this example, we are splitting a collection from the center into two sublists:\nclass CollectionHelper { public \u0026lt;T\u0026gt; List\u0026lt;T\u0026gt;[] split(List\u0026lt;T\u0026gt; listToSplit){ // determine the endpoints to use in `list.subList()` method  int[] endpoints = {0, (listToSplit.size() + 1)/2, listToSplit.size()}; List\u0026lt;List\u0026lt;T\u0026gt;\u0026gt; sublists = IntStream.rangeClosed(0, 1) .mapToObj( i -\u0026gt; listToSplit .subList( endpoints[i], endpoints[i + 
1])) .collect(Collectors.toList()); // return an array containing both lists  return new List[] {sublists.get(0), sublists.get(1)}; } } Here we have used the subList() method of the List interface to split the list passed as input into two sublists and returned the output as an array of List elements.\nSplitting a Collection into n Equal Parts We can generalize the previous method to partition a collection into equal parts, each of a specified chunk size:\n   Collection Elements     A [9, 8, 5, 4, 7, 15, 15]   First chunk of size 2 [9, 8]   Second chunk of size 2 [5, 4]   Third chunk of size 2 [7, 15]   Fourth chunk of size 2 [15]    The code for this looks like this:\npublic class CollectionHelper { // partition collection into size equal to chunkSize  public Collection\u0026lt;List\u0026lt;Integer\u0026gt;\u0026gt; partition( final List\u0026lt;Integer\u0026gt; collA, final int chunkSize){ final AtomicInteger counter = new AtomicInteger(); final Collection\u0026lt;List\u0026lt;Integer\u0026gt;\u0026gt; result = collA .stream() .collect( Collectors.groupingBy( it -\u0026gt; counter.getAndIncrement() / chunkSize)) .values(); return result; } } Removing Duplicates from a Collection Removing duplicate elements from a collection is another frequently used operation in applications:\n   Collection Elements     A [9, 8, 5, 4, 7, 15, 15]   After removal of duplicates [9, 8, 5, 4, 7, 15]    In this example, the removeDuplicates() method removes any values that exist more than once in the collection, leaving only one instance of each value in the output:\npublic class CollectionHelper { public List\u0026lt;Integer\u0026gt; removeDuplicates(final List\u0026lt;Integer\u0026gt; collA){ List\u0026lt;Integer\u0026gt; listWithoutDuplicates = new ArrayList\u0026lt;\u0026gt;( new LinkedHashSet\u0026lt;\u0026gt;(collA)); return listWithoutDuplicates; } } Concatenating (Joining) Two or More Collections Sometimes, we want to join two or more collections into a single big collection:\n  
 Collection Elements     A [9, 8, 5, 4]   B [1, 3, 99, 4, 7]   Concatenation of A and B [9, 8, 5, 4, 1, 3, 99, 4, 7]    The Stream class, introduced in Java 8, provides useful methods for supporting sequential and parallel aggregate operations. In this example, we are performing the concatenation of elements from two collections using the Stream class:\npublic class CollectionHelper { public List\u0026lt;Integer\u0026gt; add(final List\u0026lt;Integer\u0026gt; collA, final List\u0026lt;Integer\u0026gt; collB){ return Stream.concat( collA.stream(), collB.stream()) .collect(Collectors.toList()); } } Here we are concatenating two collections in the add() method of the CollectionHelper class. For adding, we have used the concat() method of the Stream class. We can also extend this method to join more than two collections at a time.\nJoining Collections by Applying a Condition If we only want to concatenate values for which a condition is true (for example, they have to be \u0026gt; 2), it would look like this:\n   Collection Elements     A [9, 8, 5, 4]   B [1, 3, 99, 4, 7]   Concatenation of A and B for elements \u0026gt; 2 [9, 8, 5, 4, 3, 99, 4, 7]    To code this, we can enrich the previous example further to concatenate elements of a collection only if they meet certain criteria, as shown below:\npublic class CollectionHelper { public List\u0026lt;Integer\u0026gt; addWithFilter( final List\u0026lt;Integer\u0026gt; collA, final List\u0026lt;Integer\u0026gt; collB){ return Stream.concat( collA.stream(), collB.stream()) .filter(element -\u0026gt; element \u0026gt; 2) .collect(Collectors.toList()); } } Here we are concatenating two collections in the addWithFilter() method. In addition to the concat() method, we are also applying the filter() method of the Stream class to concatenate only elements that are greater than 2.\nConclusion In this tutorial, we wrote methods in Java to perform many common operations between two or more collections. 
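As a final recap, here is a small standalone driver that exercises a few of the helpers we wrote, using the sample data from the tables above (the helper bodies are restated so the snippet can run on its own; the class name is made up for demonstration):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CollectionOpsDemo {

    // LinkedHashSet drops duplicates while keeping encounter order.
    static List<Integer> removeDuplicates(List<Integer> collA) {
        return new ArrayList<>(new LinkedHashSet<>(collA));
    }

    // Group consecutive elements into chunks of chunkSize via a running counter.
    static Collection<List<Integer>> partition(List<Integer> collA, int chunkSize) {
        AtomicInteger counter = new AtomicInteger();
        return collA.stream()
                .collect(Collectors.groupingBy(
                        it -> counter.getAndIncrement() / chunkSize))
                .values();
    }

    // Concatenate two lists, keeping only elements greater than 2.
    static List<Integer> addWithFilter(List<Integer> collA, List<Integer> collB) {
        return Stream.concat(collA.stream(), collB.stream())
                .filter(element -> element > 2)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(removeDuplicates(List.of(9, 8, 5, 4, 7, 15, 15)));
        // prints [9, 8, 5, 4, 7, 15]
        System.out.println(partition(List.of(9, 8, 5, 4, 7, 15, 15), 2));
        // chunks of size 2: [9, 8], [5, 4], [7, 15], [15]
        System.out.println(addWithFilter(List.of(9, 8, 5, 4), List.of(1, 3, 99, 4, 7)));
        // prints [9, 8, 5, 4, 3, 99, 4, 7]
    }
}
```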
Similar operations on collections are also available in open-source libraries like the Guava Library and Apache Commons Collections.\nWhen creating Java applications, we can use a judicious mix of methods available in open-source libraries and custom functions to work with collections efficiently.\nYou can refer to all the source code used in the article on GitHub.\n","date":"January 15, 2022","image":"https://reflectoring.io/images/stock/0074-stack-1200x628-branded_hu068f2b0d815bda96ddb686d2b65ba146_143922_650x0_resize_q90_box.jpg","permalink":"/common-operations-on-java-collections/","title":"Common Operations on Java Collections"},{"categories":["Spring Boot"],"contents":"In a distributed system, many services can be involved in creating a response to a single request. It’s essential, and not only for debugging purposes, that the path of such a request can be traced through all involved services. This tutorial gives an overview of the traceability problem in distributed systems and provides a complete guide on how to implement tracing with Spring Boot, OpenTelemetry, and Jaeger.\n Example Code This article is accompanied by a working code example on GitHub. Spans and Traces Even in a monolithic system, tracing a bug can be hard enough. To find the root cause of an error, you search through the log files of the application servers around the point in time the error occurred and hope that you find a stack trace that explains the error. Ideally, the error message contains a correlation ID that uniquely identifies the error, so that you can just search for that correlation ID in the log files. It’s a plus when the log files are structured and aggregated in a central, searchable log service like Logz.io.\nIn a distributed system, tracing gets even harder since many different services running on different machines may be involved in responding to a single request. Here, a central log server and a correlation ID are not negotiable. 
But the correlation ID can now come from any of a set of distributed services.\nAs an example for this article, let’s have a look at a distributed system with two services:\nThe browser makes a request to the API service to get a detail view of a customer and display it to the user. The API service can\u0026rsquo;t answer that request by itself and has to make two calls to the customer service to get the names and addresses of the customers, respectively.\nThis is just a simple example for this article. In the real world, there can be dozens of services involved in answering a request.\nEach \u0026ldquo;hop\u0026rdquo; from one service to the next is called a \u0026ldquo;span\u0026rdquo;. All spans that are involved in responding to a request to the end-user together make up a \u0026ldquo;trace\u0026rdquo;.\nEach span and trace gets a unique id. The first span of a trace often re-uses the trace ID as the span ID. Each service is expected to pass the trace ID to the next service it calls so that the next service can use the same trace ID as a correlation ID in its logs. This propagation of the trace ID is usually done via an HTTP header.\nIn addition to using trace and span IDs in logs, to correlate log output from different services, we can send those traces and spans to a central tracing server that allows us to analyze traces. That\u0026rsquo;s what we\u0026rsquo;re going to do in the rest of this article.\nThe Tracing Setup Let\u0026rsquo;s have a look at what we\u0026rsquo;re going to build in this article:\nWe have the API and customer service that we mentioned above. The API service depends on the customer service to provide customer data. Both services are Spring Boot applications.\nUltimately, we want to use Jaeger as the tool to analyze our traces. Jaeger (German for \u0026ldquo;hunter\u0026rdquo;) provides a user interface that allows us to query for and analyze traces. In this article, we are going to use a managed Jaeger instance provided by Logz.io. 
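To make the span and trace mechanics described above concrete, here is a tiny plain-Java sketch of the propagation idea. The header name and ID formats are simplified assumptions for illustration only; in the real setup, Spring Cloud Sleuth generates and propagates standards-compliant IDs for us:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class TracePropagationSketch {

    static final String TRACE_HEADER = "X-Trace-Id"; // hypothetical header name

    // Entry service: no incoming trace ID, so it starts a new trace
    // and puts the trace ID into the outgoing headers.
    static Map<String, String> startTrace() {
        Map<String, String> headers = new HashMap<>();
        headers.put(TRACE_HEADER, UUID.randomUUID().toString().replace("-", ""));
        return headers;
    }

    // Downstream hop: reuse the incoming trace ID, create a fresh span ID.
    // Returns {traceId, spanId} for this hop.
    static String[] newSpan(Map<String, String> incomingHeaders) {
        String traceId = incomingHeaders.get(TRACE_HEADER);
        String spanId = UUID.randomUUID().toString().substring(0, 16); // simplified
        return new String[]{traceId, spanId};
    }

    public static void main(String[] args) {
        Map<String, String> headers = startTrace(); // API service starts the trace
        String[] span1 = newSpan(headers);          // customer-service call #1
        String[] span2 = newSpan(headers);          // customer-service call #2
        // Both spans share the trace ID but get their own span IDs:
        System.out.println(span1[0].equals(span2[0])); // prints true
        System.out.println(span1[1].equals(span2[1])); // prints false
    }
}
```

This is exactly the pattern visible later in the log output: one shared trace ID per request, a distinct span ID per hop.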
We\u0026rsquo;ll need to get the traces from our Spring Boot applications to Jaeger, somehow.\nTo get the traces and spans to Jaeger, we make a detour through an OpenTelemetry Collector. OpenTelemetry is a project that aims to provide a ubiquitous standard for tracing use cases. The collector aggregates the traces from our services and forwards them to Jaeger.\nTo propagate traces between our Spring Boot services, we\u0026rsquo;re using Spring Cloud Sleuth. To send the traces to the OpenTelemetry Collector, we\u0026rsquo;re using Spring Cloud Sleuth OTel, an extension to Sleuth.\nThe Example Application Before we go into the details of setting up tracing, let\u0026rsquo;s have a look at the example application I\u0026rsquo;ve built for this tutorial. You can look up the working code on GitHub.\nAPI Service The API service provides a REST API to get customer data. For this, it exposes the endpoint /customers/{id} implemented in this REST controller:\n@RestController public class Controller { private CustomerClient customerClient; private AddressClient addressClient; private Logger logger = LoggerFactory.getLogger(Controller.class); @Autowired public Controller(CustomerClient customerClient, AddressClient addressClient) { this.customerClient = customerClient; this.addressClient = addressClient; } @GetMapping(path = \u0026#34;customers/{id}\u0026#34;) public CustomerAndAddress getCustomerWithAddress(@PathVariable(\u0026#34;id\u0026#34;) long customerId) { logger.info(\u0026#34;COLLECTING CUSTOMER AND ADDRESS WITH ID {} FROM UPSTREAM SERVICE\u0026#34;, customerId); Customer customer = customerClient.getCustomer(customerId); Address address = addressClient.getAddressForCustomerId(customerId); return new CustomerAndAddress(customer, address); } } This is a pretty standard REST controller. 
The interesting bit is that it\u0026rsquo;s making use of an AddressClient and a CustomerClient to call the customer service to get the customer addresses and names, respectively.\nLet\u0026rsquo;s take a look at one of these clients:\n@Component public class CustomerClient { private static final Logger logger = LoggerFactory.getLogger(CustomerClient.class); private RestTemplate restTemplate; private String baseUrl; public CustomerClient( RestTemplate restTemplate, @Value(\u0026#34;${customerClient.baseUrl}\u0026#34;) String baseUrl) { this.restTemplate = restTemplate; this.baseUrl = baseUrl; } Customer getCustomer(@PathVariable(\u0026#34;id\u0026#34;) long id) { String url = String.format(\u0026#34;%s/customers/%d\u0026#34;, baseUrl, id); return restTemplate.getForObject(url, Customer.class); } } The CustomerClient uses a plain RestTemplate to make REST calls to the customer service. No magic here. The base URL to the customer service is made configurable through Spring\u0026rsquo;s @Value annotation. To configure the base URL, we add it to the service\u0026rsquo;s application.yml file:\nserver: port: 8080 addressClient: baseUrl: http://customer-service:8081  customerClient: baseUrl: http://customer-service:8081 Both base URLs for the addressClient and the customerClient are pointing to the customer service, which we\u0026rsquo;re going to run in Docker later. To make the whole setup work locally, we configured the API service to run on port 8080 and the customer service to run on port 8081.\nFinally, to make the service runnable in Docker, we create a Dockerfile:\nFROM adoptopenjdk/openjdk11:alpine-jre ARG JAR_FILE=target/*.jar COPY ${JAR_FILE} application.jar EXPOSE 8080 ENTRYPOINT [\u0026#34;java\u0026#34;,\u0026#34;-jar\u0026#34;,\u0026#34;/application.jar\u0026#34;] After building the service with ./mvnw package, we can now run docker build to package the service in a Docker container.\nCustomer Service The customer service looks very similar. 
It has a REST controller that provides the /customers/{id} and /addresses/{id} endpoints, that return the customer name and address for a given customer ID:\n@RestController public class Controller { private Logger logger = LoggerFactory.getLogger(Controller.class); @GetMapping(path = \u0026#34;customers/{id}\u0026#34;) public ResponseEntity\u0026lt;Customer\u0026gt; getCustomer(@PathVariable(\u0026#34;id\u0026#34;) long customerId) { logger.info(\u0026#34;GETTING CUSTOMER WITH ID {}\u0026#34;, customerId); Customer customer = // ... get customer from \u0026#34;database\u0026#34;  return new ResponseEntity\u0026lt;\u0026gt;(customer, HttpStatus.OK); } @GetMapping(path = \u0026#34;addresses/{id}\u0026#34;) public ResponseEntity\u0026lt;Address\u0026gt; getAddress(@PathVariable(\u0026#34;id\u0026#34;) long customerId) { logger.info(\u0026#34;GETTING ADDRESS FOR CUSTOMER WITH ID {}\u0026#34;, customerId); Address address = // ... get address from \u0026#34;database\u0026#34;  return new ResponseEntity\u0026lt;\u0026gt;(address, HttpStatus.OK); } } In the example implementation on GitHub, the controller has a hard-coded list of customer names and addresses in memory and returns one of those.\nThe customer service\u0026rsquo;s application.yml file looks like this:\nserver.port: 8081 As mentioned above, we change the port of the customer service to 8081 so it doesn\u0026rsquo;t clash with the API service on port 8080 when we run both services locally.\nThe Dockerfile of the customer service looks exactly like the Dockerfile of the API service:\nFROM adoptopenjdk/openjdk11:alpine-jre ARG JAR_FILE=target/*.jar COPY ${JAR_FILE} application.jar EXPOSE 8080 ENTRYPOINT [\u0026#34;java\u0026#34;,\u0026#34;-jar\u0026#34;,\u0026#34;/application.jar\u0026#34;] Configuring Spring Boot to Send Traces to an OpenTelemetry Collector Next, we\u0026rsquo;re going to add Spring Cloud Sleuth to our Spring Boot services and configure it to send traces to our OpenTelemetry Collector.\nFirst, 
we need to add some configuration to each of our services' pom.xml:\n\u0026lt;properties\u0026gt; \u0026lt;release.train.version\u0026gt;2020.0.4\u0026lt;/release.train.version\u0026gt; \u0026lt;spring-cloud-sleuth-otel.version\u0026gt;1.0.0-M12\u0026lt;/spring-cloud-sleuth-otel.version\u0026gt; \u0026lt;/properties\u0026gt; \u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-dependencies\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${release.train.version}\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-sleuth-otel-dependencies\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${spring-cloud-sleuth-otel.version}\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; \u0026lt;repositories\u0026gt; \u0026lt;repository\u0026gt; \u0026lt;id\u0026gt;spring-milestones\u0026lt;/id\u0026gt; \u0026lt;url\u0026gt;https://repo.spring.io/milestone\u0026lt;/url\u0026gt; \u0026lt;/repository\u0026gt; \u0026lt;/repositories\u0026gt; \u0026lt;pluginRepositories\u0026gt; \u0026lt;pluginRepository\u0026gt; \u0026lt;id\u0026gt;spring-milestones\u0026lt;/id\u0026gt; \u0026lt;url\u0026gt;https://repo.spring.io/milestone\u0026lt;/url\u0026gt; \u0026lt;/pluginRepository\u0026gt; \u0026lt;/pluginRepositories\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; 
\u0026lt;artifactId\u0026gt;spring-boot-starter-web\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-actuator\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-starter-sleuth\u0026lt;/artifactId\u0026gt; \u0026lt;exclusions\u0026gt; \u0026lt;exclusion\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-sleuth-brave\u0026lt;/artifactId\u0026gt; \u0026lt;/exclusion\u0026gt; \u0026lt;/exclusions\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-sleuth-otel-autoconfigure\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.opentelemetry\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;opentelemetry-exporter-otlp-trace\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.grpc\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;grpc-okhttp\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.42.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; This is the whole boilerplate to add Spring Cloud Sleuth including the OpenTelemetry support.\nImportant to note is that we have to exclude spring-cloud-sleuth-brave from the spring-cloud-starter-sleuth dependency and instead add in the spring-cloud-sleuth-otel-autoconfigure dependency. 
This replaces the default tracing implementation based on Brave with the implementation based on OpenTelemetry.\nAlso, we have to add the opentelemetry-exporter-otlp-trace and grpc-okhttp dependencies to make the OpenTelemetry Exporter work. The OpenTelemetry Exporter is the component in Spring Cloud Sleuth OTel that sends traces to an OpenTelemetry Collector.\nBy now, the setup will already propagate trace IDs across service boundaries. I.e. Sleuth automatically configures the RestTemplate used in the API service to add the trace ID in an HTTP header and the customer service will automatically read this header and attach the trace ID to the threads that are processing incoming requests.\nAfter this is done, we need to update our services' application.yml files:\nspring: application: name: api-service # or \u0026#34;customer-service\u0026#34;  sleuth: otel: exporter: otlp: endpoint: http://collector:4317 We set the spring.application.name property to the name of the respective service. Spring Cloud Sleuth will use this name in the traces it sends, so it\u0026rsquo;s kind of important if we want to know which services were involved in a specific trace.\nWe also set the spring.sleuth.otel.exporter.otlp.endpoint property to point to our OpenTelemetry collector (we\u0026rsquo;ll later start the collector in Docker). Sleuth will now send the traces in OpenTelemetry format to that endpoint.\nWith this configuration done, we\u0026rsquo;re ready to combine all the pieces and run everything on our local machines in Docker.\nRunning Everything in Docker To test the setup, we run everything in Docker Compose: the API service, the customer service, and the OpenTelemetry Collector. 
For this, we create a docker-compose.yml file with the following content:\nservices: api-service: build: api-service/  image: api-service:latest ports: - \u0026#34;8080:8080\u0026#34; customer-service: build: ./customer-service/  image: customer-service:latest ports: - \u0026#34;8081:8081\u0026#34; collector: image: logzio/otel-collector-traces environment: - LOGZIO_REGION=${LOGZIO_REGION}  - LOGZIO_TRACES_TOKEN=${LOGZIO_TRACES_TOKEN}  ports: - \u0026#34;1777:1777\u0026#34; - \u0026#34;9411:9411\u0026#34; - \u0026#34;9943:9943\u0026#34; - \u0026#34;6831:6831\u0026#34; - \u0026#34;6832:6832\u0026#34; - \u0026#34;14250:14250\u0026#34; - \u0026#34;14268:14268\u0026#34; - \u0026#34;4317:4317\u0026#34; - \u0026#34;55681:55681\u0026#34; - \u0026#34;8888:8888\u0026#34; This will spin up both our Spring Boot services using Docker\u0026rsquo;s build command. It requires that we run the docker-compose command from the parent directory that contains both the api-service and the customer-service sub-directories. Don\u0026rsquo;t forget to run ./mvnw clean package before running docker-compose, because otherwise, you might start an old version of our services.\nAdditionally, we include a collector service based on the logzio/otel-collector-traces Docker image provided by Logz.io. This image contains an OpenTelemetry Collector that is preconfigured to send the traces to Logz.io. It requires the environment variables LOGZIO_REGION and LOGZIO_TRACES_TOKEN, which you will get in the \u0026ldquo;Tracing\u0026rdquo; section of your Logz.io account. You can clone the example code from GitHub and register for a free Logz.io trial if you want to play along.\nIf we run LOGZIO_REGION=... LOGZIO_TRACES_TOKEN=... docker-compose up now, Docker will start all three components locally and we\u0026rsquo;re ready to generate and analyze some traces!\nAnalyzing Traces in Jaeger With the Docker Compose stack up and running, we can now hit the API service\u0026rsquo;s endpoint. 
You can type http://localhost:8080/customers/1 into your browser to call the API service and the API service will, in turn, call the customer service to get the names and addresses. Your browser should show something like this:\n{ \u0026#34;customer\u0026#34;: { \u0026#34;id\u0026#34;: 1, \u0026#34;name\u0026#34;: \u0026#34;Yukiko Yawn\u0026#34; }, \u0026#34;address\u0026#34;: { \u0026#34;id\u0026#34;: 1, \u0026#34;street\u0026#34;: \u0026#34;Cambridge Road\u0026#34; } } If you look at the log output from the docker-compose command, you should also see some activity there. It will show something like this:\napi-service_1 | INFO [api-service,e9d9d371ac07ea32bdb12c4d898535ee,a96ea4b352976715] : COLLECTING CUSTOMER AND ADDRESS WITH ID 1 FROM UPSTREAM SERVICE customer-service_1 | INFO [customer-service,e9d9d371ac07ea32bdb12c4d898535ee,f69c5aa9ddf8624c] : GETTING CUSTOMER WITH ID 1 customer-service_1 | INFO [customer-service,e9d9d371ac07ea32bdb12c4d898535ee,dd27f1fefaf7b9aa] : GETTING ADDRESS FOR CUSTOMER WITH ID 1 The logs show that the API service has received the request from our browser and created the trace ID e9d9... and the span ID a96e.... The following log events show that the customer service has received two requests to get the customer name and address and that it\u0026rsquo;s using the same trace ID in the logs, but a different span ID each time.\nAfter a minute or so, we should also see the traces in the Logz.io Jaeger dashboard and we can now run some queries.\nBrowsing Traces In the Jaeger UI, we can now browse the traces and will see something like this:\nThis is exactly what we expected: the API service received an HTTP GET request and then made two consecutive calls to the customer service. We can see that the API service made the first call to the customer service approximately 2ms after it got the request from the browser and that the customer service took 1.35ms to respond. 
This gives great visibility to where our services spend their time!\nClicking on one of the elements of the trace, we can expand it and view all the tags that Spring Cloud Sleuth has added to the trace:\nIf we want, we can add custom tags to our traces using Spring Cloud Sleuth\u0026rsquo;s tagging feature.\nThe tags are indexed and searchable in Jaeger, making for a very convenient way to investigate issues in a distributed system.\nLet\u0026rsquo;s look at a few tracing use cases.\nFinding Long-running Spans Imagine that users are complaining about slowly loading pages but every user is complaining about a different page so we don\u0026rsquo;t know what\u0026rsquo;s causing this performance issue.\nThe Jaeger UI allows us to search for traces that have been longer than a given time. We can search for all traces that have taken longer than 1000ms, for example. Drilling down into one of the long-running traces of our example app, we might get a result like this:\nThis shows very clearly that the most time in this trace is spent in the second call to the customer service, so we can focus our investigation on that code to improve it. And indeed, I had added a Thread.sleep() to that piece of code.\nFinding Traces with Errors Say a user is complaining about getting errors on a certain page of the application but to render that page the application is calling a lot of other services and we want to know which service is responsible for the error.\nIn the Jaeger UI, we can search for http.status_code=500 and will see something like this:\nThis shows clearly that the call to http://customer-service:8081/customers/1 is the culprit and we can focus on that code to fix the error.\nFinding Traces that Involve a specific Controller Another use case for tracing is to help make decisions for future development. Say we want to make a change to the REST API of our customer service and want to notify the teams that are using this API so they know about the upcoming change. 
We can search for service=customer-service mvc.controller.class=Controller to get a list of all traces that go through this REST controller.\nWe would see at a glance which other services we would need to notify about the upcoming API changes. This requires that all of those other services are sending their traces to Jaeger, of course.\nConclusion Above, we have discussed a few tracing use cases, but there are a lot more in real distributed systems.\nTracing is a very powerful tool that makes the chaos of distributed systems a little more manageable. You\u0026rsquo;ll get the most out of it if all your services are instrumented properly and are sending traces to a central tracing dashboard like Jaeger.\nTo save the hassle of installing and running your own Jaeger instance, you can use one managed in the cloud by a provider like Logz.io, as I did in this article.\n","date":"January 9, 2022","image":"https://reflectoring.io/images/stock/0115-footsteps-1200x628-branded_hua899b6193371243242cc9f28efcd32ad_86611_650x0_resize_q90_box.jpg","permalink":"/spring-boot-tracing/","title":"Tracing with Spring Boot, OpenTelemetry, and Jaeger"},{"categories":["Meta"],"contents":"It\u0026rsquo;s the time of the year again when I reflect about the Reflectoring year (pun intended). In this article, I share a bit about my successes and failures as a creator of code, text, and (coming up) a SaaS product.\nAs always, I\u0026rsquo;m probably the one who benefits most from this account, because it allows me to reflect on my projects and make decisions for the next year.\nProjects Let\u0026rsquo;s start with the projects I\u0026rsquo;ve been working on in 2021. Each of those projects is a side project next to my full-time software developer job in my new(ish) home in Sydney, Australia.\nReflectoring Blog This blog isn\u0026rsquo;t just me anymore. Over the course of the year, 17 other people have contributed 39 articles to the blog. 
I wrote another 22, making a total of 61 articles in 2021. That\u0026rsquo;s more than the goal of 1 article per week that I set at the beginning of the year.\nI\u0026rsquo;ve paid about $5,000 to those authors for their articles. Well deserved! Thanks to everyone who contributed!\nOne author stands out: Pratik has contributed 18 articles over the course of the year! That\u0026rsquo;s insane! It\u0026rsquo;s been really great to have a consistent writer like him on board and to see him grow to become an even better writer!\nThat investment didn\u0026rsquo;t pay off as much as I wanted, though. The number of visitors \u0026ldquo;only\u0026rdquo; grew from 1.1 million in 2020 to 1.4 million in 2021.\nTo monetize the blog, I tried a different tactic regarding advertisement this year. Previously, I was running subtle ads managed by the CarbonAds network. That netted about $200 a month. But since CarbonAds sits between the advertiser and myself, they take quite a chunk out of it. And with 150,000 monthly visitors, this chunk feels quite big.\nSo I tried direct advertisement. I cold-emailed 10 or so tech companies whose products I\u0026rsquo;m interested in and whose products I think my readers might be interested in and sent them a link to my advertisement page.\nAnd with two of them I made a deal! One deal with LaunchDarkly for 12 months of exclusive ads and 12 sponsored blog posts about feature flagging. And another deal with Logz.io for 3 sponsored blog posts about logging and tracing.\nI feel very lucky to have these sponsoring partners. I get paid to write about topics I\u0026rsquo;m interested in, those articles will help my readers, and my sponsoring partners will get some attention. That\u0026rsquo;s what I call a win/win/win situation.\nTogether, I made about $15,000 from these partnerships. That\u0026rsquo;s quite a bit more than the $200 per month that I made with CarbonAds! 
But it\u0026rsquo;s also more work, so it\u0026rsquo;s not really comparable - I have yet to write 7 more blog posts to fulfill my part of the bargain!\nOn top of that, I made about $1,600 with affiliate links to Philip\u0026rsquo;s Testing Spring Boot Applications Masterclass, which I placed in some of my top-performing Spring Boot articles (by the way, the previous link is an affiliate link :) ).\nLesson: cut out the middleman. If you are interested in placing ads or are looking for affiliates, let me know and we\u0026rsquo;ll have a chat!\nThe plan for next year is to invest in even more authors and to find one or two editors who can help me get their articles over the finish line. More high-quality articles mean more visitors to the blog, which means more ad revenue and more conversions to any products I offer.\nNewsletter The newsletter got a complete overhaul in 2021. Previously, the newsletter pretty much only contained a link to the latest article published on the blog. Not very valuable. A newsfeed app can do that job better than my newsletter.\nSo in late 2020, I started to add value to the newsletter by including what I called an \u0026ldquo;inspirational nugget\u0026rdquo; about some \u0026ldquo;softer\u0026rdquo; topics like productivity or habits with each newsletter. I also increased the frequency of the newsletter from one every other week to one per week (more than 50 weeks in a row without missing one!).\nI changed the wording on the newsletter subscription form a bit, too, to clarify the value proposition.\nThe newsletter started 2021 with 2,300 subscribers and ended the year with 4,600 subscribers. The first 2,300 subscribers took me 3 years and the last 2,300 took me only a year! Nice to see growth on that front.\nWriting about those \u0026ldquo;soft\u0026rdquo; topics helped me clear my mind, too. 
I started the year writing some haphazard productivity tips and ended the year with a collection of some pretty cool insights.\nFor 2022, I\u0026rsquo;m going to give the newsletter a rebranding to make it its own \u0026ldquo;product\u0026rdquo; apart from my blog. I\u0026rsquo;ll continue with the \u0026ldquo;soft\u0026rdquo; topics, but I\u0026rsquo;ll probably reduce the frequency to biweekly again, because it was quite taxing to come up with something worthwhile every week. I\u0026rsquo;m loosely planning to polish last year\u0026rsquo;s content into book form and publish the book chapters in the newsletter. Now I just have to find the time for writing that book\u0026hellip;\nI also started doing newsletter ads. Altogether, I made $400 with paid newsletter ads. I realized that it\u0026rsquo;s a lot of work to get newsletter sponsors, so I decided not to spend too much time on it, because it\u0026rsquo;s not worth my while. Anyhow, if you have a product that is interesting to my newsletter audience of software engineers, let me know!\nGet Your Hands Dirty on Clean Architecture My book \u0026ldquo;Get Your Hands Dirty on Clean Architecture\u0026rdquo; is still my flagship product. I published it back in 2019 and it\u0026rsquo;s been selling pretty consistently since then. It\u0026rsquo;s also getting great reviews on Amazon, Goodreads, and Gumroad.\nI made the book available on Gumroad this year, mainly to get to know Gumroad as a platform. It\u0026rsquo;s also still available on Leanpub, but Gumroad provides ratings and some other community features I haven\u0026rsquo;t used yet, so I wanted to give it a go.\nAlso, the team from educative.io asked me if they could make a course from the book. For a couple of months now, the course has been live on educative under the name \u0026ldquo;Hexagonal Software Architecture for Web Applications\u0026rdquo;. 
It\u0026rsquo;s only making $60 or so per month, though (I only get 30% royalties \u0026hellip; I could have gotten 70% if I had created the course myself instead of letting them do it, but I didn\u0026rsquo;t have the time).\nAnyway, with the sales from educative, Gumroad, Leanpub, and the print version with Packt (which was translated into Korean last year!), the book netted a total of about $19,000 in 2021. Not bad for a book that runs on autopilot most of the time.\nIf you haven\u0026rsquo;t read the book yet, make sure you do. It provides very valuable software architecture insights that were eye-opening to me when I wrote it.\nStratospheric Back in 2020, I started a project with Philip \u0026amp; Bjoern to write a book about AWS and Spring Boot, because there was no book about this combination of technologies out there yet. In August 2021, we finally released version 1.0 of the book.\nIt\u0026rsquo;s been a great experience to work with those two. There was virtually no argument, no bad blood, no mismatched expectations. But it took longer than expected (which was to be expected, I guess, because projects always take longer than expected).\nThe result is \u0026ldquo;Stratospheric - From Zero to Production with Spring Boot and AWS\u0026rdquo;, in which we show how to deploy and run a Spring app on ECS. I learned a lot while writing this book (which was the main driver for me). If you\u0026rsquo;re interested in Spring Boot and AWS, make sure to take a look.\nThe book made me about $4,000 in 2021 (one third of the proceeds, because we\u0026rsquo;re three authors). This includes a sponsorship from AWS through the GitHub sponsors program, which was unexpected, but welcome.\nWe\u0026rsquo;re planning to add a video course in 2022. That\u0026rsquo;s going to be another insane amount of work, but video editing is something I wanted to learn anyway, so there\u0026rsquo;s synergy.\nBlogtrack Then there is my neglected stepchild project - Blogtrack. 
I started coding it almost 2 years ago, but the other projects were always more important, so it never took off. After finishing the Stratospheric book, Philip joined me as a co-founder, we put in some renewed effort, and we\u0026rsquo;re finally going to start a public beta in January - with a brand new landing page and everything.\nIf you have a blog (that already has some content) and want to get some insights into your analytics, sign up for the newsletter on https://blogtrack.io.\nI\u0026rsquo;m not sure where the journey will go - the feature set is still kind of unpolished - so we need a lot of feedback from users to make it the best tool possible to help them grow their blogs.\nAs for income, it has only produced costs so far. Most of it went to paying a developer to flesh out some features and the rest to running AWS servers. This year I want Blogtrack to earn back its monthly costs, at least.\nSummary It was a lot of work nurturing these side projects next to my day job, but I\u0026rsquo;ve been rewarded with some fun partnerships and opportunities, and of course with some money (if my crappy accounting is correct, about $55,000 of revenue with $36,000 of that being profit - if I wasn\u0026rsquo;t living in Sydney, that would be a lot of money\u0026hellip;).\nIt helped that my day job doesn\u0026rsquo;t require me to go to the office anymore. This saves me 1.5 hours of commute time every day, which I can now share between my family and my side projects.\nAnd the good thing is that although I have felt stressed sometimes, I have never felt burned out. 
I guess the good Sydney weather plays its part in my mental wellbeing, as I\u0026rsquo;ve taken every opportunity to grab my laptop and sit outside for a bit of work.\nHabits that got me through the year What also helped to keep me sane through all the work and the general shitshow that is the world these days, is sticking to some habits I\u0026rsquo;ve built up over the years and starting some new ones.\nI\u0026rsquo;m sharing those habits here because they have been greatly valuable for me and I can imagine they might be valuable for others, too.\nGetting !@#$ Done in the Morning I started this habit when I began writing my book \u0026ldquo;Get Your Hands Dirty on Clean Architecture\u0026rdquo;. I had to make some time for writing the book. So, I started to get up at 6 am - before the family is awake - and write a couple hundred words before getting ready for my day job at 7.30 or so. I also started to go to bed earlier, naturally.\nThis morning time has become very enjoyable for me. I have a sense of accomplishment before the day even really starts. It\u0026rsquo;s so enjoyable that I\u0026rsquo;ve become a bit compulsive about it. When I sleep in late because I went to bed too late, for example, I feel bad about having \u0026ldquo;lost\u0026rdquo; an hour of \u0026ldquo;me time\u0026rdquo; before the family wakes up.\nToday, I\u0026rsquo;m still getting up at 6 am almost every day to work on one project or another to get !@#$ done before the actual day starts. And I enjoy it a lot.\nShaping Tomorrow Since I\u0026rsquo;m working on different projects, some mornings I couldn\u0026rsquo;t decide what to work on. Should I continue writing that blog post? Or should I review my authors' article submissions? Or write the next newsletter? Or write a page for the book?\nA couple of times this decision was just too hard for me first thing in the morning and I ended up procrastinating and not doing anything. 
Starting the day with an hour of active procrastination doesn\u0026rsquo;t really set you up for a good day.\nSo, I started \u0026ldquo;shaping\u0026rdquo; the next day the night before (I wrote about it in my newsletter). Every evening before going to bed, I take my trusted e-ink tablet and write down what I want to work on tomorrow. I pick one or two things to do for my day job and one or two things to do for my side projects.\nIn the morning, when I sit down at my desk, I already know what to work on and just get started. No more freezing up in decision procrastination.\nReading I\u0026rsquo;ve been reading fiction novels since my early teens. I always have a sci-fi or fantasy novel lying next to my bed. In the evening, I make it about 5 pages in before I get too sleepy to continue. It takes quite a long time to get through a book this way.\nWhen I wanted to read a non-fiction book, I would replace that fiction novel with a non-fiction book. That means I would get through 1 or 2 non-fiction books per year, tops.\nNow, I\u0026rsquo;m taking reading non-fiction more seriously. I have a stack of books waiting for me at any time, so when I\u0026rsquo;m done with one, I can pick the next depending on what I feel like.\nI also read a chapter or so in every lunch break, and more on weekends in my leisure time. What motivates me to read is that I \u0026ldquo;collect\u0026rdquo; book notes on my e-ink tablet (and in a paper notebook before that). Finishing a book gives me a sense of accomplishment not only because I finished the book, but also because I have recorded all the insights that book gave me.\nLater, I transcribe these hand-written notes into more structured book notes (see next section on taking notes). This makes them searchable and usable for any of my projects. The ideas for many of my newsletter episodes have come from these book notes.\nSimilar to getting up early, I have become a bit compulsive about taking notes when reading a non-fiction book. 
If there\u0026rsquo;s no pen or paper around and I can\u0026rsquo;t take notes, I don\u0026rsquo;t read.\nReading has become an activity I very much look forward to. I really enjoy taking notes and thinking through the things I just read, and then thinking through them again when I transcribe the notes into electronic form weeks or even months later.\nTaking Notes As I mentioned above, I\u0026rsquo;ve become somewhat of a note-taking maniac. I\u0026rsquo;ve made it a habit to write everything down in my Obsidian vault. It\u0026rsquo;s basically just a bunch of Markdown files, hierarchically organized into folders and linked with hyperlinks.\nI\u0026rsquo;m using Obsidian for pretty much every writing task these days. I\u0026rsquo;m writing this blog post in Obsidian right now. I will later copy it into my blog and publish it from there. But I will still have the raw version of this blog post in my notes, just in case it sparks some inspiration for another idea when I look through my notes.\nWriting blog posts is only one of the things I\u0026rsquo;m doing in my notes. Others are:\n a note for each person I\u0026rsquo;m interacting with (at work and in my side projects) containing that person\u0026rsquo;s expectations toward me, my expectations toward them, feedback notes, etc. a note for each recurring meeting with a section for each time the meeting took place and my notes of that meeting a \u0026ldquo;project log\u0026rdquo; note for each project that I\u0026rsquo;m working on, where I add a new line with today\u0026rsquo;s date whenever I did some task for that project, and where I keep a list of the very next tasks that need my attention book notes transcribed from my hand-written notes as discussed in the section about reading above - one note for each idea from the book  I only started the \u0026ldquo;project log\u0026rdquo; notes two months or so ago, and they are a lifesaver. 
They make it easier to get back into the context of a project, saving a lot of time! They also make it easy to talk with others about my work, for example with my manager, to make sure he knows what I\u0026rsquo;m doing.\nVideo Calls with Peers Aside from my teammates in my day job, I didn\u0026rsquo;t see a lot of people last year. With lockdowns coming and going and coming again, that\u0026rsquo;s no wonder (at least that\u0026rsquo;s my excuse - I\u0026rsquo;m not sure I would have seen more people if there hadn\u0026rsquo;t been any lockdowns).\nAnyway, I enjoyed meeting people online a lot, especially like-minded creators with similar (but different) problems and ideas. I meet with Philip \u0026amp; Bjoern almost every 2 weeks to discuss the progress of the Stratospheric project. I\u0026rsquo;ve also started to meet up every couple of weeks with Matthias (who builds GetTheAudience.com) and Felix (who builds devops-metrics.com) to just talk about our side businesses.\nAlso, I joined Monica Lent\u0026rsquo;s Blogging for Devs community last year and took part in an \u0026ldquo;Accelerator\u0026rdquo; program where I met with a group of other bloggers every week for two months. This created the accountability I needed to write those cold emails to get the advertisement deals I wrote about earlier. Lots of opportunity here!\nMission for 2022: Simplify! Recently, I\u0026rsquo;ve thought hard about what I\u0026rsquo;m trying to achieve with all my side projects. Sure, I want to make some money on the side (and potentially get to a point where I can reduce my day job and get more control over my own time). But there has to be more to it than that.\nI\u0026rsquo;ve come to realize that the common theme across all my work, be it in my day job or my side projects, is to simplify things.\nI hate complexity with a passion. 
That\u0026rsquo;s what made me write \u0026ldquo;Simplicity nerd\u0026rdquo; into my Twitter bio.\n In my day job, I write simple code that is easy to understand and maintain. With my blog, I write simple text that makes it easy for readers (future me included) to understand a certain concept. Same with my books, just with a bigger concept. With my newsletter, I write about simple ideas that make it easy to grow as a person and as a developer. With Blogtrack, I organize information in a simple way to make it easy for bloggers to grow their blogs.  Things have to be as simple as possible. My brain is just not smart enough for complex things.\nMy mission is to simplify things. For myself and others. And that\u0026rsquo;s what I\u0026rsquo;m going to do in 2022 and beyond.\n","date":"January 2, 2022","image":"https://reflectoring.io/images/stock/0115-2021-1200x628-branded_hu899dfbba2165ac8841569ac659fabf20_112336_650x0_resize_q90_box.jpg","permalink":"/review-2021/","title":"Reflectoring Review 2021 - Growing Slowly"},{"categories":["Spring Boot"],"contents":"REST-styled APIs are all around us. Many applications need to invoke REST APIs for some or all of their functions. Hence, to function gracefully, applications need to consume APIs elegantly and consistently.\nRestTemplate is a class within the Spring framework that helps us to do just that. In this tutorial, we will understand how to use RestTemplate for invoking REST APIs of different shapes.\n Example Code This article is accompanied by a working code example on GitHub. What is Spring RestTemplate? 
According to the official documentation, RestTemplate is a synchronous client to perform HTTP requests.\nIt is a higher-level API since it performs HTTP requests by using an HTTP client library like the JDK HttpURLConnection, Apache HttpClient, and others.\nThe HTTP client library takes care of all the low-level details of communication over HTTP, while the RestTemplate adds the capability of transforming requests and responses in JSON or XML to and from Java objects.\nBy default, RestTemplate uses the class java.net.HttpURLConnection as the HTTP client. However, we can switch to another HTTP client library, as we will see in a later section.\nSome Useful Methods of RestTemplate Before looking at the examples, it will be helpful to take a look at the important methods of the RestTemplate class.\nRestTemplate provides higher-level methods for each of the HTTP methods which make it easy to invoke RESTful services.\nThe names of most of the methods are based on a naming convention:\n the first part in the name indicates the HTTP method being invoked the second part in the name indicates the returned element.  
For example, the method getForObject() will perform a GET and return an object.\ngetForEntity(): executes a GET request and returns an object of the ResponseEntity class that contains both the status code and the resource as an object.\ngetForObject(): similar to getForEntity(), but returns the resource directly.\nexchange(): executes a specified HTTP method, such as GET, POST, PUT, etc., and returns a ResponseEntity containing both the HTTP status code and the resource as an object.\nexecute(): similar to the exchange() method, but takes additional parameters: RequestCallback and ResponseExtractor.\nheadForHeaders(): executes a HEAD request and returns all HTTP headers for the specified URL.\noptionsForAllow(): executes an OPTIONS request and uses the Allow header to return the HTTP methods that are allowed under the specified URL.\ndelete(): deletes the resource at the given URL using the HTTP DELETE method.\nput(): updates a resource for a given URL using the HTTP PUT method.\npostForObject(): creates a new resource using the HTTP POST method and returns the response as an object.\npostForLocation(): creates a new resource using the HTTP POST method and returns the location of the newly created resource.\nFor additional information on the methods of RestTemplate, please refer to the Javadoc.\nWe will see how to use the above methods of RestTemplate with the help of some examples in subsequent sections.\nProject Setup for Running the Examples To work with the examples of using RestTemplate, let us first create a Spring Boot project with the help of the Spring Boot Initializr, and then open the project in our favorite IDE. We have added the web dependency to the Maven pom.xml.\nThe dependency spring-boot-starter-web is a starter for building web applications. 
This dependency brings in the RestTemplate class.\nWe will use this POJO class Product in most of the examples:\npublic class Product { public Product(String name, String brand, Double price, String sku) { super(); id = UUID.randomUUID().toString(); this.name = name; this.brand = brand; this.price = price; this.sku = sku; } private String id; private String name; private String brand; private Double price; private String sku; ... } We have also built a minimal REST web service with the following @RestController:\n@RestController public class ProductController { // Wrap in ArrayList so the POST and DELETE handlers can modify the list  private List\u0026lt;Product\u0026gt; products = new ArrayList\u0026lt;\u0026gt;(List.of( new Product(\u0026#34;Television\u0026#34;, \u0026#34;Samsung\u0026#34;,1145.67,\u0026#34;S001\u0026#34;), new Product(\u0026#34;Washing Machine\u0026#34;, \u0026#34;LG\u0026#34;,114.67,\u0026#34;L001\u0026#34;), new Product(\u0026#34;Laptop\u0026#34;, \u0026#34;Apple\u0026#34;,11453.67,\u0026#34;A001\u0026#34;))); @GetMapping(value=\u0026#34;/products/{id}\u0026#34;, produces=MediaType.APPLICATION_XML_VALUE) public @ResponseBody Product fetchProducts( @PathVariable(\u0026#34;id\u0026#34;) String productId){ return products.get(1); } @GetMapping(\u0026#34;/products\u0026#34;) public List\u0026lt;Product\u0026gt; fetchProducts(){ return products; } @PostMapping(\u0026#34;/products\u0026#34;) public ResponseEntity\u0026lt;String\u0026gt; createProduct( @RequestBody Product product){ // Create product with ID  String productID = UUID.randomUUID().toString(); product.setId(productID); products.add(product); return ResponseEntity.ok().body( \u0026#34;{\\\u0026#34;productID\\\u0026#34;:\\\u0026#34;\u0026#34;+productID+\u0026#34;\\\u0026#34;}\u0026#34;); } @PutMapping(\u0026#34;/products\u0026#34;) public ResponseEntity\u0026lt;String\u0026gt; updateProduct( @RequestBody Product product){ products.set(1, product); // Update product. 
Return success or failure without response body  return ResponseEntity.ok().build(); } @DeleteMapping(\u0026#34;/products\u0026#34;) public ResponseEntity\u0026lt;String\u0026gt; deleteProduct( @RequestBody Product product){ products.remove(1); // Delete product. Return success or failure without response body  return ResponseEntity.ok().build(); } } The REST web service contains methods to create, read, update, and delete product resources and supports the HTTP verbs GET, POST, PUT, and DELETE.\nWhen we run our example, this web service will be available at the endpoint http://localhost:8080/products.\nWe will consume all these APIs using RestTemplate in the following sections.\nMaking an HTTP GET Request to Obtain the JSON Response The simplest way of using RestTemplate is to invoke an HTTP GET request and fetch the response body as a raw JSON string, as shown in this example:\npublic class RestConsumer { public void getProductAsJson() { RestTemplate restTemplate = new RestTemplate(); String resourceUrl = \u0026#34;http://localhost:8080/products\u0026#34;; // Fetch JSON response as String wrapped in ResponseEntity  ResponseEntity\u0026lt;String\u0026gt; response = restTemplate.getForEntity(resourceUrl, String.class); String productsJson = response.getBody(); System.out.println(productsJson); } } Here we are using the getForEntity() method of the RestTemplate class to invoke the API and get the response as a JSON string. We need to work further with the JSON response to extract the individual fields with the help of JSON parsing libraries like Jackson.\nWe prefer to work with raw JSON responses when we are interested only in a small subset of an HTTP response composed of many fields.\nMaking an HTTP GET Request to Obtain the Response as a POJO A variation of the earlier method is to get the response as a POJO class. 
In this case, we need to create a POJO class to map the API response.\npublic class RestConsumer { public void getProducts() { RestTemplate restTemplate = new RestTemplate(); String resourceUrl = \u0026#34;http://localhost:8080/products\u0026#34;; // Fetch response as List wrapped in ResponseEntity  ResponseEntity\u0026lt;List\u0026gt; response = restTemplate.getForEntity(resourceUrl, List.class); List\u0026lt;Product\u0026gt; products = response.getBody(); System.out.println(products); } } Here too we are calling the getForEntity() method to receive the response as a List. Note that since we pass the raw List.class, the message converter has no element type information, so the list items are deserialized as generic key-value maps rather than Product objects; exchange() with a ParameterizedTypeReference would give us a properly typed list.\nInstead of the getForEntity() method, we could have used the getForObject() method as shown below:\npublic class RestConsumer { public void getProductObjects() { RestTemplate restTemplate = new RestTemplate(); String resourceUrl = \u0026#34;http://localhost:8080/products\u0026#34;; // Fetching response as Object  List\u0026lt;?\u0026gt; products = restTemplate.getForObject(resourceUrl, List.class); System.out.println(products); } } Instead of the ResponseEntity object, we directly get back the response object.\nWhile getForObject() looks better at first glance, getForEntity() returns additional important metadata like the response headers and the HTTP status code in the ResponseEntity object.\nMaking an HTTP POST Request After the GET methods, let us look at an example of making a POST request with the RestTemplate.\nWe are invoking an HTTP POST method on a REST API with the postForObject() method:
in HttpEntity for HTTP POST request  String productCreateResponse = restTemplate .postForObject(resourceUrl, request, String.class); System.out.println(productCreateResponse); } } Here the postForObject() method takes the request body in the form of an HttpEntity, which is constructed with the Product object representing the request body.\nUsing exchange() for POST In the earlier examples, we saw separate methods for making API calls like postForObject() for HTTP POST and getForEntity() for GET. The RestTemplate class has similar methods for other HTTP verbs like PUT, DELETE, and PATCH.\nThe exchange() method, in contrast, is more generalized and can be used for different HTTP verbs. The HTTP verb is sent as a parameter, as shown in this example:\npublic class RestConsumer { public void createProductWithExchange() { RestTemplate restTemplate = new RestTemplate(); String resourceUrl = \u0026#34;http://localhost:8080/products\u0026#34;; // Create the request body by wrapping  // the object in HttpEntity  HttpEntity\u0026lt;Product\u0026gt; request = new HttpEntity\u0026lt;Product\u0026gt;( new Product(\u0026#34;Television\u0026#34;, \u0026#34;Samsung\u0026#34;,1145.67,\u0026#34;S001\u0026#34;)); ResponseEntity\u0026lt;String\u0026gt; productCreateResponse = restTemplate .exchange(resourceUrl, HttpMethod.POST, request, String.class); System.out.println(productCreateResponse); } } Here we are making the POST request by sending HttpMethod.POST as a parameter in addition to the request body and the response type POJO.\nUsing exchange() for PUT with an Empty Response Body Here is another example of using exchange() to make a PUT request which returns an empty response body:\npublic class RestConsumer { public void updateProductWithExchange() { RestTemplate restTemplate = new RestTemplate(); String resourceUrl = \u0026#34;http://localhost:8080/products\u0026#34;; // Create the request body by wrapping  // the object in HttpEntity  
HttpEntity\u0026lt;Product\u0026gt; request = new HttpEntity\u0026lt;Product\u0026gt;( new Product(\u0026#34;Television\u0026#34;, \u0026#34;Samsung\u0026#34;,1145.67,\u0026#34;S001\u0026#34;)); // Send the PUT method as a method parameter  restTemplate.exchange( resourceUrl, HttpMethod.PUT, request, Void.class); } } Here we are sending HttpMethod.PUT as a parameter to the exchange() method. Since the REST API returns an empty body, we use the Void class to represent it.\nUsing execute() for Downloading Large Files The execute() method, in contrast to exchange(), is the most generalized way to perform a request, with full control over request preparation and response extraction via callback interfaces.\nWe will use the execute() method for downloading large files.\nThe execute() method takes a callback parameter for creating the request and a response extractor callback for processing the response, as shown in this example:\npublic class RestConsumer { public void getProductAsStream() { final Product fetchProductRequest = new Product(\u0026#34;Television\u0026#34;, \u0026#34;Samsung\u0026#34;,1145.67,\u0026#34;S001\u0026#34;); RestTemplate restTemplate = new RestTemplate(); String resourceUrl = \u0026#34;http://localhost:8080/products\u0026#34;; // Set HTTP headers in the request callback  RequestCallback requestCallback = request -\u0026gt; { ObjectMapper mapper = new ObjectMapper(); mapper.writeValue(request.getBody(), fetchProductRequest); request.getHeaders() .setAccept(Arrays.asList( MediaType.APPLICATION_OCTET_STREAM, MediaType.ALL)); }; // Processing the response. Here we are extracting the  // response and copying the file to a folder on the server.  
ResponseExtractor\u0026lt;Void\u0026gt; responseExtractor = response -\u0026gt; { Path path = Paths.get(\u0026#34;some/path\u0026#34;); Files.copy(response.getBody(), path); return null; }; restTemplate.execute(resourceUrl, HttpMethod.GET, requestCallback, responseExtractor ); } } Here we are passing a request callback and a response extractor to the execute() method. The request callback is used to prepare the HTTP request by setting different HTTP headers like Content-Type and Authorization.\nThe responseExtractor used here extracts the response and creates a file in a folder on the server.\nInvoking APIs with application/form Type Input Another class of APIs takes HTTP form data as input. To call these APIs, we need to set the Content-Type header to application/x-www-form-urlencoded in addition to setting the request body. This allows us to send a large query string containing name and value pairs separated by \u0026amp; to the server.\nWe send the form variables by wrapping them in a LinkedMultiValueMap object and use it to create the HttpEntity, as shown in this example:\npublic class RestConsumer { public void submitProductForm() { RestTemplate restTemplate = new RestTemplate(); String resourceUrl = \u0026#34;http://localhost:8080/products\u0026#34;; HttpHeaders headers = new HttpHeaders(); headers.setContentType(MediaType.APPLICATION_FORM_URLENCODED); // Set the form inputs in a MultiValueMap  MultiValueMap\u0026lt;String, String\u0026gt; map = new LinkedMultiValueMap\u0026lt;\u0026gt;(); map.add(\u0026#34;sku\u0026#34;, \u0026#34;S34455\u0026#34;); map.add(\u0026#34;name\u0026#34;, \u0026#34;Television\u0026#34;); map.add(\u0026#34;brand\u0026#34;, \u0026#34;Samsung\u0026#34;); // Create the request body by wrapping  // the MultiValueMap in HttpEntity  HttpEntity\u0026lt;MultiValueMap\u0026lt;String, String\u0026gt;\u0026gt; request = new HttpEntity\u0026lt;\u0026gt;(map, headers); ResponseEntity\u0026lt;String\u0026gt; response =
restTemplate.postForEntity( resourceUrl+\u0026#34;/form\u0026#34;, request, String.class); System.out.println(response.getBody()); } } Here we have sent three form variables sku, name, and brand in the request by first adding them to a MultiValueMap and then wrapping the map in an HttpEntity. After that, we invoke the postForEntity() method to get the response in a ResponseEntity object.\nConfiguring the HTTP Client in RestTemplate In its simplest form, RestTemplate is created as a new instance of the class with its no-argument constructor, as seen in the examples so far.\nAs explained earlier, RestTemplate uses the class java.net.HttpURLConnection as the HTTP client by default. However, we can switch to a different HTTP client library like Apache HttpComponents, Netty, OkHttp, etc. We do this by calling the setRequestFactory() method or by passing a ClientHttpRequestFactory to the constructor.\nIn the example below, we are configuring the RestTemplate to use the Apache HttpClient library. For this, we first need to add the client library as a dependency.\nLet us add a dependency on the httpclient module from the Apache HttpComponents project:\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-web\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.httpcomponents\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;httpclient\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; Here we can see the dependency on httpclient added to our Maven pom.xml.\nNext we will configure the HTTP client with settings like connect timeout, socket read timeout, pooled connection limit, idle connection timeout, etc., as shown below:\nimport org.springframework.http.client.ClientHttpRequestFactory; import org.springframework.http.client.HttpComponentsClientHttpRequestFactory; import 
org.springframework.web.client.RestTemplate; public class RestConsumer { private ClientHttpRequestFactory getClientHttpRequestFactory() { // Create a request factory backed by Apache HttpClient  HttpComponentsClientHttpRequestFactory clientHttpRequestFactory = new HttpComponentsClientHttpRequestFactory(); int connectTimeout = 5000; int readTimeout = 5000; clientHttpRequestFactory.setConnectTimeout(connectTimeout); clientHttpRequestFactory.setReadTimeout(readTimeout); return clientHttpRequestFactory; } public void fetchProducts() { RestTemplate restTemplate = new RestTemplate( getClientHttpRequestFactory()); ... ... } } In this example, we have set the HTTP connection timeout and the socket read timeout to 5 seconds each. This allows us to fine-tune the behavior of the HTTP connection.\nOther than the default HttpURLConnection and Apache HttpClient, Spring also supports Netty and OkHttp client libraries through the ClientHttpRequestFactory abstraction.\nAttaching an ErrorHandler to RestTemplate RestTemplate comes with a default error handler which throws the following exceptions:\n HTTP status 4xx: HttpClientErrorException HTTP status 5xx: HttpServerErrorException unknown HTTP status: UnknownHttpStatusCodeException  These exceptions are subclasses of RestClientResponseException, which is a subclass of RuntimeException. So if we do not catch them, they will bubble up to the top layer.\nThe following is a sample of an error produced by the default error handler when the service responds with an HTTP status of 404:\nDefault error handler::org.springframework.web.client.DefaultResponseErrorHandler@30b7c004 ... ... 
...org.springframework.web.client.RestTemplate - Response 404 NOT_FOUND Exception in thread \u0026#34;main\u0026#34; org.springframework.web.client .HttpClientErrorException$NotFound: 404 : \u0026#34;{\u0026#34;timestamp\u0026#34;:\u0026#34;2021-12-20T07:20:34.865+00:00\u0026#34;,\u0026#34;status\u0026#34;:404, \u0026#34;error\u0026#34;:\u0026#34;Not Found\u0026#34;,\u0026#34;path\u0026#34;:\u0026#34;/product/error\u0026#34;}\u0026#34; at org.springframework.web.client.HttpClientErrorException .create(HttpClientErrorException.java:113) ... at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:122) at org.springframework.web.client.ResponseErrorHandler .handleError(ResponseErrorHandler.java:63) RestTemplate allows us to attach a custom error handler. Our custom error handler looks like this:\n// Custom runtime exception public class RestServiceException extends RuntimeException { private String serviceName; private HttpStatus statusCode; private String error; public RestServiceException( String serviceName, HttpStatus statusCode, String error) { super(); this.serviceName = serviceName; this.statusCode = statusCode; this.error = error; } } // Error POJO public class RestTemplateError { private String timestamp; private String status; private String error; private String path; ... ... 
} // Custom error handler public class CustomErrorHandler implements ResponseErrorHandler{ @Override public boolean hasError(ClientHttpResponse response) throws IOException { return ( response.getStatusCode().series() == HttpStatus.Series.CLIENT_ERROR || response.getStatusCode().series() == HttpStatus.Series.SERVER_ERROR ); } @Override public void handleError(ClientHttpResponse response) throws IOException { if (response.getStatusCode().is4xxClientError() || response.getStatusCode().is5xxServerError()) { try (BufferedReader reader = new BufferedReader( new InputStreamReader(response.getBody()))) { String httpBodyResponse = reader.lines() .collect(Collectors.joining(\u0026#34;\u0026#34;)); ObjectMapper mapper = new ObjectMapper(); RestTemplateError restTemplateError = mapper .readValue(httpBodyResponse, RestTemplateError.class); throw new RestServiceException( restTemplateError.getPath(), response.getStatusCode(), restTemplateError.getError()); } } } } The CustomErrorHandler class implements the ResponseErrorHandler interface. It also uses an error POJO: RestTemplateError and a runtime exception class RestServiceException.\nWe override two methods of the ResponseErrorHandler interface: hasError() and handleError(). The error handling logic is in the handleError() method. 
In this method, we extract the service path and error message from the error response body, returned as JSON, with the Jackson ObjectMapper.\nThe response with our custom error handler looks like this:\nerror occured: [Not Found] in service:: /product/error The output is more elegant and can be produced in a format compatible with our logging systems for further diagnosis.\nWhen using RestTemplate in Spring Boot applications, we can use an auto-configured RestTemplateBuilder to create RestTemplate instances as shown in this code snippet:\n@Service public class InventoryServiceClient { private RestTemplate restTemplate; public InventoryServiceClient(RestTemplateBuilder builder) { restTemplate = builder.errorHandler( new CustomErrorHandler()) .build(); ... ... } } Here the RestTemplateBuilder auto-configured by Spring is injected into the class and used to attach the CustomErrorHandler we created earlier.\nAttaching MessageConverters to the RestTemplate REST APIs can serve resources in multiple formats (XML, JSON, etc.) at the same URI following a principle called content negotiation. REST clients request the format they can support by sending the Accept header in the request. Similarly, the Content-Type header is used to specify the format of the request body.\nObjects passed to the methods of RestTemplate are converted to HTTP requests by instances of the HttpMessageConverter interface. The same converters also convert HTTP responses to Java objects.\nWe can write our own converter and register it with RestTemplate to request specific representations of a resource. 
In this example, we are requesting the XML representation of the Product resource:\npublic class RestConsumer { public void getProductAsXML() { RestTemplate restTemplate = new RestTemplate(); restTemplate.setMessageConverters(getXmlMessageConverter()); HttpHeaders headers = new HttpHeaders(); headers.setAccept( Collections.singletonList(MediaType.APPLICATION_XML)); HttpEntity\u0026lt;String\u0026gt; entity = new HttpEntity\u0026lt;\u0026gt;(headers); String productID = \u0026#34;P123445\u0026#34;; String resourceUrl = \u0026#34;http://localhost:8080/products/\u0026#34;+productID; ResponseEntity\u0026lt;Product\u0026gt; response = restTemplate.exchange( resourceUrl, HttpMethod.GET, entity, Product.class, \u0026#34;1\u0026#34;); Product resource = response.getBody(); } private List\u0026lt;HttpMessageConverter\u0026lt;?\u0026gt;\u0026gt; getXmlMessageConverter() { XStreamMarshaller marshaller = new XStreamMarshaller(); marshaller.setAnnotatedClasses(Product.class); MarshallingHttpMessageConverter marshallingConverter = new MarshallingHttpMessageConverter(marshaller); List\u0026lt;HttpMessageConverter\u0026lt;?\u0026gt;\u0026gt; converters = new ArrayList\u0026lt;\u0026gt;(); converters.add(marshallingConverter); return converters; } } Here we have set up the RestTemplate with a MarshallingHttpMessageConverter backed by an XStreamMarshaller since we are consuming the XML representation of the Product resource.\nComparison with Other HTTP Clients As briefly mentioned in the beginning, RestTemplate is a higher-level construct that makes use of a lower-level HTTP client.\nStarting with Spring 5, the RestTemplate class is in maintenance mode. The non-blocking WebClient is provided by the Spring framework as a modern alternative to RestTemplate.\nWebClient offers support for both synchronous and asynchronous HTTP requests and streaming scenarios. 
Therefore, RestTemplate will be marked as deprecated in a future version of the Spring Framework and will not receive any new features.\nRestTemplate is based on a thread-per-request model. Every request to RestTemplate blocks until the response is received. As a result, applications using RestTemplate will not scale well with an increasing number of concurrent users.\nThe official Spring documentation also advocates using WebClient instead of RestTemplate.\nHowever, RestTemplate is still the preferred choice for applications stuck with an older version (\u0026lt; 5.0) of Spring or those evolving from a substantial legacy codebase.\nConclusion Here is a list of the major points for a quick reference:\n RestTemplate is a synchronous client for making REST API calls over HTTP. RestTemplate has generalized methods like execute() and exchange() which take the HTTP method as a parameter. The execute() method is the most generalized since it takes request and response callbacks which can be used to add further customizations to the request and response processing. RestTemplate also has dedicated methods for the different HTTP methods, like getForObject() and getForEntity(). We have the option of getting the response body either in raw JSON format, which needs to be further processed with a JSON parser, or as a structured POJO that can be directly used in the application. The request body is sent by wrapping the POJOs in an HttpEntity instance. RestTemplate can be customized with an HTTP client library, an error handler, and message converters. Lastly, calling RestTemplate methods blocks the request thread till the response is received. The reactive WebClient is advised for new applications.  
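The blocking, synchronous model summarized above can be illustrated without Spring at all. The following is a minimal sketch using only the JDK's HttpClient (Java 11+) and its built-in test HTTP server; the /products endpoint and JSON payload are invented for the example and are not part of the article's code:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.time.Duration;

public class BlockingHttpSketch {

    public static String fetchProducts(String baseUrl) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                // analogous to setConnectTimeout() on the request factory
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/products"))
                // analogous to the socket read timeout
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();
        // send() blocks the calling thread until the response arrives,
        // just like a RestTemplate call does
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    public static void main(String[] args) throws Exception {
        // Tiny in-process server standing in for the remote REST API
        byte[] body = "[{\"name\":\"television\"}]".getBytes(StandardCharsets.UTF_8);
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/products", exchange -> {
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        try {
            String json = fetchProducts("http://localhost:" + server.getAddress().getPort());
            System.out.println(json);
        } finally {
            server.stop(0);
        }
    }
}
```

The send() call parks the calling thread until the response is read, which is the same characteristic that limits RestTemplate's scalability under many concurrent users.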
You can refer to all the source code used in the article on Github.\n","date":"December 29, 2021","image":"https://reflectoring.io/images/stock/0074-stack-1200x628-branded_hu068f2b0d815bda96ddb686d2b65ba146_143922_650x0_resize_q90_box.jpg","permalink":"/spring-resttemplate/","title":"Complete Guide to Spring RestTemplate"},{"categories":["Simplify"],"contents":"It\u0026rsquo;s not so long ago that I learned the term \u0026ldquo;scrappy\u0026rdquo;. Someone was looking to build a \u0026ldquo;scrappy\u0026rdquo; team of software engineers to build the next big thing, and build it fast.\nNot being an English native speaker, I had to look up the word \u0026ldquo;scrappy\u0026rdquo; and was confronted with two definitions:\n \u0026ldquo;consisting of disorganized, untidy, or incomplete parts.\u0026rdquo; \u0026ldquo;determined, argumentative, or pugnacious\u0026rdquo;  Why would anyone want to be part of a scrappy team? I don\u0026rsquo;t enjoy working in a disorganized team and even less in a team with argumentative people. Do you?\nFalse Scrappiness The intended definition of \u0026ldquo;scrappy\u0026rdquo;, of course, was to describe the spirit of iterating quickly and only doing the necessary to get to product-market fit and make the product successful.\nAnd those are good and valid agile principles to build upon.\nI feel, however, that terms like \u0026ldquo;scrappy\u0026rdquo;, \u0026ldquo;iterating quickly\u0026rdquo;, or even \u0026ldquo;agile\u0026rdquo; are more often than not an excuse for writing crappy code.\nIn reality, it often means that the team skips engineering best practices to get ahead faster:\n we don\u0026rsquo;t write abstractions because it costs time. we don\u0026rsquo;t write tests because the code evolves so fast that it\u0026rsquo;s not worth the time. 
we throw junior developers in at the deep end to develop important code without enough guidance, because it\u0026rsquo;s going to be rewritten by \u0026ldquo;real developers\u0026rdquo; later, anyway.  It\u0026rsquo;s all about saving time. And they\u0026rsquo;re all false assumptions.\nWith a small investment of time now, we can avoid an enormous investment of time later, be it to clean up the mess we made or to live with the mess (which usually costs more time than cleaning it up).\nHere are a couple of things you can do to avoid false scrappiness.\nStart Clean When starting a new project or codebase, start clean. Set the expectations toward the new codebase. Establish the rules of engagement with the codebase. Write about the decisions you made.\nLet everyone know that this code is worth caring about because that means that working on the code will be faster and less error-prone (your managers will love that argument).\nManage the Architecture Before starting to code, think about the architecture. It doesn\u0026rsquo;t matter if it\u0026rsquo;s a small codebase. Every codebase has some structure. The question is whether this structure is hidden in heaps of code without abstractions or visible in plain sight to help the next developer working on the next feature.\nWhat components will the codebase have? How do they communicate? A simple boxes-and-arrows diagram goes a long way.\nOnce the code has been written, write tests that assert that the dependencies between those components go in the right direction (you can get some inspiration from this article about clean boundaries). Avoiding unwanted dependencies is like vaccinating your codebase against sprawl.\nPair Up Pair up with different developers to spread the vision of the codebase. Documenting architecture is one thing. 
But sitting next to someone with a vision while coding is quite a bit more impactful.\nPairing up will avoid long code review cycles and establish a common understanding of the codebase at the same time. Especially junior developers benefit from pairing, of course, but even experienced developers will learn a thing or two.\nIn any case, after a few pairing sessions, the team will have a shared vision and can communicate better because of their common understanding.\nBe Proactive Friends don\u0026rsquo;t let friends write crappy code.\nIf you see something that\u0026rsquo;s not to your standard, show how it can be done better. Don\u0026rsquo;t be pedantic about it, however. Don\u0026rsquo;t block people\u0026rsquo;s code because of minor style issues. Do block people\u0026rsquo;s code because of major architecture issues.\nIn any case, speak up if something doesn\u0026rsquo;t feel right.\nDon\u0026rsquo;t Wait for Later We can prevent a big investment of time to clean up messy code LATER by investing just a little time NOW. Sadly, it seems like we humans are blind to the future, as is evident in the way we handle climate change and even the current epidemic.\nBe scrappy, but responsibly so.\n","date":"December 12, 2021","image":"https://reflectoring.io/images/stock/0114-scrap-1200x628-branded_hu6a7dd4bb95d6cf47d7225edabcf544f3_330186_650x0_resize_q90_box.jpg","permalink":"/be-scrappy-not-crappy/","title":"Be Scrappy, Not Crappy"},{"categories":["Spring Boot"],"contents":"Internationalization is the process of making an application adaptable to multiple languages and regions without major changes in the source code.\nIn this tutorial, we will understand the concepts of internationalization, and illustrate how to internationalize a Spring Boot application.\n Example Code This article is accompanied by a working code example on GitHub. Internationalization (i18n) vs. 
Localization (l10n) Internationalization is a mechanism to create multilingual software that can be adapted to different languages and regions.\nAn internationalized application has the following characteristics:\n The application can be adapted to run in multiple regions by adding region or language-specific configuration data. Text elements like information messages and the user interface labels, are stored outside the source code and retrieved at runtime. Supporting new languages does not require code changes. Culturally-dependent data like dates and currencies are displayed in formats of the end user\u0026rsquo;s region and language.  Internationalization is also abbreviated as i18n because there is a total of 18 characters between the first letter i and the last letter n.\nThe following figures illustrate a website supporting internationalization.\nAmazon e-commerce site in German language from www.amazon.de:\nAmazon e-commerce site in French language from www.amazon.fr:\nIn these screenshots, we can observe that the content of the Amazon website is being rendered in the French and German languages depending on whether the HTTP URL used in the browser ends with .fr or .de.\nInternationalization is most often a one-time process undertaken during the initial stages of design and development.\nA related term: Localization is the process of adapting the internationalized application to a specific language and region by adding region-specific text and components.\nFor example, when we add support for the French language, we are localizing the application for French. Without localization, the text will be shown in the default English language to the user who is viewing the website from a non-English region.\nLocalization is usually conducted by translators on the user-facing components of the software. 
It also refers to localizing time and date differences, currency, culturally appropriate images, symbols, spelling, and other locale-specific components (including the right-to-left (RTL) languages like Arabic).\nUnlike internationalization, localization is the process of adding language files and region-specific content every time we add support for a new language.\nLocalization is also abbreviated as l10n because there is a total of 10 characters between the first letter l and the last letter n.\nIntroducing the Locale A locale is a fundamental concept in internationalization. It represents a user\u0026rsquo;s language, geographical region, and any specific variant like dialect.\nWe use the locale of a user to tailor the information displayed to the user according to the user\u0026rsquo;s language or region. These operations are called locale-sensitive. For example, we can display a date formatted according to the locale of the user as dd/MM/yy or MM/dd/yy, or display a number with a locale-specific decimal separator like a comma (3,14 in French) or a dot (3.14 in the US).\nJava provides the Locale class for working with internationalization use cases. The Locale class is used by many classes in Java containing locale-sensitive functions, like the NumberFormat class used for formatting numbers.\nWe will see the use of the locale to perform various kinds of locale-sensitive operations in the following sections, using classes provided by Java as well as helper classes like resolvers and interceptors in the Spring framework.\nCreating the Spring Boot Application for Internationalization To work with some examples of internationalization, let us first create a Spring Boot project with the help of the Spring Boot Initializr, and then open the project in our favorite IDE. 
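Before any Spring wiring, the locale-sensitive operations described above can be sketched with plain JDK classes. This is a minimal illustration; the LocaleSketch class and its method names are made up for this example:

```java
import java.text.NumberFormat;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.FormatStyle;
import java.util.Locale;

public class LocaleSketch {

    // Format a number with the locale's decimal separator
    public static String formatPrice(Locale locale, double price) {
        return NumberFormat.getInstance(locale).format(price);
    }

    // Format a date in the locale's medium style
    public static String formatDate(Locale locale, LocalDate date) {
        return DateTimeFormatter.ofLocalizedDate(FormatStyle.MEDIUM)
                .withLocale(locale)
                .format(date);
    }

    public static void main(String[] args) {
        System.out.println(formatPrice(Locale.US, 3.14));     // 3.14
        System.out.println(formatPrice(Locale.FRANCE, 3.14)); // 3,14
        // exact date output depends on the JDK's locale data
        System.out.println(formatDate(Locale.GERMANY, LocalDate.of(2021, 12, 4)));
    }
}
```

NumberFormat picks the decimal separator from the supplied Locale, so the same value renders differently for Locale.US and Locale.FRANCE without any change in the code.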
We don\u0026rsquo;t need to add any extra dependencies to the Maven pom.xml since the internationalization support is part of the core module of the Spring framework.\nWe will next create a web application with this project using the Spring Web MVC framework which will render an HTML page in different languages depending on the user\u0026rsquo;s language selection.\nSteps for Internationalization Internationalization of applications broadly follows the below steps:\n Resolving the user\u0026rsquo;s preferred locale from the incoming request, either in the form of a request parameter, cookies, or a request header. Intercepting the change of locale in the incoming request and storing it in the user\u0026rsquo;s session or cookies. Defining locale-specific resources, for example, language files for supported languages. Mapping the region and language-sensitive elements in the view (HTML page, mobile app UI, etc.) to elements capable of reading content at runtime based on the user\u0026rsquo;s language and region.  Let us look at these steps in detail in the following sections.\nResolving the Locale with LocaleResolver This is invariably the first step for internationalization: identify the locale of a user.\nWe use the LocaleResolver interface for resolving the locale of a user from the incoming request.\nSpring provides the following implementations of the LocaleResolver interface that determine the current locale based on the session, cookies, or the Accept-Language header, or set the locale to a fixed value:\n FixedLocaleResolver: mostly used for debugging purposes. It resolves the locale to a fixed language mentioned in the application.properties file. AcceptHeaderLocaleResolver: resolves the locale using the Accept-Language HTTP header retrieved from an HTTP request.  Sometimes web applications provide options to the users to select a preferred language. After a user selects a language, it is remembered for subsequent user interactions. 
These scenarios of remembering a locale selected by a user are handled with the following implementations of LocaleResolver:\n SessionLocaleResolver: stores the locale selected by a user in an attribute of the user\u0026rsquo;s HttpSession and resolves the locale by reading that attribute from the HttpSession for all subsequent requests from the same user. CookieLocaleResolver: stores the locale selected by a user in a cookie on the user\u0026rsquo;s machine and resolves the locale by reading that cookie for all subsequent requests from the same user.  Let us update our application by adding a LocaleResolver bean to our Spring configuration class:\n@Configuration public class MessageConfig implements WebMvcConfigurer{ @Bean public LocaleResolver localeResolver() { SessionLocaleResolver slr = new SessionLocaleResolver(); slr.setDefaultLocale(Locale.US); slr.setLocaleAttributeName(\u0026#34;session.current.locale\u0026#34;); slr.setTimeZoneAttributeName(\u0026#34;session.current.timezone\u0026#34;); return slr; } } Here we have configured a SessionLocaleResolver that will store the locale in the session. The default locale is set to US. We have also set the names of the session attributes that will store the current locale and time zone.\nIntercepting the Locale Change with LocaleChangeInterceptor Next, our application will need to detect any change in the user\u0026rsquo;s locale and then switch to the new locale.\nThis function is performed with the help of the LocaleChangeInterceptor class.\nThe LocaleChangeInterceptor class is a specialization of the HandlerInterceptor component of the Spring MVC framework which is used for changing the current locale on every request, via a configurable request parameter (default parameter name: locale).\nLet\u0026rsquo;s add a LocaleChangeInterceptor bean to our Spring configuration class:\n@Configuration public class MessageConfig implements WebMvcConfigurer{ ... 
@Bean public LocaleChangeInterceptor localeChangeInterceptor() { LocaleChangeInterceptor localeChangeInterceptor = new LocaleChangeInterceptor(); localeChangeInterceptor.setParamName(\u0026#34;language\u0026#34;); return localeChangeInterceptor; } @Override public void addInterceptors(InterceptorRegistry registry) { registry.addInterceptor(localeChangeInterceptor()); } } Here we have defined the LocaleChangeInterceptor bean in our Spring configuration class MessageConfig; it will switch to a new locale based on the value of the language parameter appended to an HTTP request URL.\nFor example, the application will use a German locale when the HTTP URL of the web application is http://localhost:8080/index?language=de, based on the value de of the request parameter language. Similarly, the application will switch to a French locale when the HTTP URL of the web application is http://localhost:8080/index?language=fr.\nWe have also added this interceptor bean to the InterceptorRegistry.\nThe MessageConfig configuration class in this example also implements the WebMvcConfigurer interface which defines the callback methods to customize the default Java-based configuration for Spring MVC.\nConfiguring the Resource Bundles Now, we will create the resource bundles for defining various texts for the corresponding locales that we want to support in our application.\nA resource bundle in the Java platform is a set of properties files with the same base name and a language-specific suffix.\nFor example, if we create messages_en.properties and messages_de.properties, they together form a resource bundle with a base name of messages.\nThe resource bundle should also have a default properties file with the same name as its base name, which is used as the fallback if a specific locale is not supported.\nThe following diagram shows the properties files of a resource bundle with a base name of language/messages:\nHere, we can see resource bundles for three languages: English, 
French, and German with English being the default.\nEach resource bundle contains the same items, but the items are translated for the locale represented by that resource bundle.\nFor example, both messages.properties and messages_de.properties contain a text with the key label.title that is used as the title of a page as shown below:\nLabel in English defined in messages.properties:\nlabel.title = List of Products Label in German defined in messages_de.properties:\nlabel.title = Produktliste In messages.properties the text contains \u0026lsquo;List of Products\u0026rsquo; and in messages_de.properties it contains the German translation Produktliste.\nSpring provides the ResourceBundleMessageSource class which is an implementation of the MessageSource interface and accesses the Java resource bundles using specified base names.\nWhen configuring the MessageSource, we define the path for storing the message files for the supported languages in a Spring configuration class as shown in this code snippet:\n@Configuration public class MessageConfig implements WebMvcConfigurer{ @Bean(\u0026#34;messageSource\u0026#34;) public MessageSource messageSource() { ResourceBundleMessageSource messageSource = new ResourceBundleMessageSource(); messageSource.setBasenames(\u0026#34;language/messages\u0026#34;); messageSource.setDefaultEncoding(\u0026#34;UTF-8\u0026#34;); return messageSource; } ... 
} Here we have defined the base name of our resource bundle as language/messages.\nAlternatively, we can configure the MessageSource in our application.properties file:\nspring.messages.basename=language/messages Internationalizing the View Now it is time to internationalize the view which will render in the language of the user\u0026rsquo;s chosen locale.\nOne of the common techniques of internationalizing an application is using placeholders for text in our user interface code instead of hardcoding the text in a particular language.\nAt runtime, the placeholder will be replaced by the text corresponding to the language of the user viewing the website. The view in our application will be defined in HTML where we will use Thymeleaf tags for the labels instead of hardcoding fixed text.\nThymeleaf is a Java template engine for processing and creating HTML, XML, JavaScript, CSS, and plain text.\nSpring Boot provides auto-configuration for Thymeleaf when we add the Thymeleaf starter dependency to Maven\u0026rsquo;s pom.xml:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-thymeleaf\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Adding the spring-boot-starter-thymeleaf dependency configures the necessary defaults including the path for HTML files for the view. By default, the HTML files are placed in the resources/templates location. 
We have created an HTML file index.html in the same path.\nHere is the Thymeleaf HTML code to display the value associated with the key label.title in our resource bundle configured to a MessageSource bean in the Spring configuration class:\n\u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta http-equiv=\u0026#34;Content-Type\u0026#34; content=\u0026#34;text/html; charset=utf-8\u0026#34; /\u0026gt; \u0026lt;title data-th-text=\u0026#34;#{label.title}\u0026#34;\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;h2 data-th-text=\u0026#34;#{label.title}\u0026#34;\u0026gt;\u0026lt;/h2\u0026gt; ... ... \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; In this HTML code snippet, we are using Thymeleaf tags for the text of the HTML page title and header. The data-th-text=\u0026#34;#{key from properties file}\u0026#34; tag attribute is used to display values from property files configured as a MessageSource bean in the Spring configuration class in the previous section.\nThe values of the text for the key label.title for different locales are in the resource bundles for three languages: English, French, and German with English being the default:\nLabel in English defined in messages.properties:\nlabel.title = List of Products Label in French defined in messages_fr.properties:\nlabel.title = Liste des produits Label in German defined in messages_de.properties:\nlabel.title = Produktliste In messages.properties, we have assigned \u0026lsquo;List of Products\u0026rsquo; as the value of the key label.title, and the French and German translations of the \u0026lsquo;List of Products\u0026rsquo; text are in messages_fr.properties and messages_de.properties for the same key.\nWe can similarly define the remaining HTML labels in the resource bundles:\nThe text for the English language is defined in the default message file messages.properties:\nlabel.product.name = Product Name label.product.price = Price label.product.lastUpdated = Last Updated label.title = 
List of Products label.chooseLang = Choose language ... Similarly, the text for the French language is defined in messages_fr.properties:\nlabel.product.name = Nom du produit label.product.price = Prix label.product.lastUpdated = Dernière mise à jour label.title = Liste des produits label.chooseLang = Choisissez la langue ... As we can see from these resource bundles for French and English (used as default), the keys for the values that will be localized are the same in every file.\nIf a key does not exist in a requested locale, then the application will fall back to the value of the key defined in the default locale. For example, if we do not define a key in the French language, then the text will be displayed in the English language.\nAdding the Spring MVC Components Finally, we will add the controller class for Spring MVC by annotating it with the @Controller annotation. This will mark the class as a Spring Controller which will contain the endpoints:\n@Controller public class ProductsController { @GetMapping(\u0026#34;/index\u0026#34;) public ModelAndView index() { ModelAndView modelAndView = new ModelAndView(); modelAndView.setViewName(\u0026#34;index\u0026#34;); List\u0026lt;Product\u0026gt; products = fetchProducts(); modelAndView.addObject(\u0026#34;products\u0026#34;, products); return modelAndView; } /** * Dummy method to simulate fetching products from a data source. 
* * @return */ private List\u0026lt;Product\u0026gt; fetchProducts() { Locale locale = LocaleContextHolder.getLocale(); List\u0026lt;Product\u0026gt; products = new ArrayList\u0026lt;Product\u0026gt;(); Product product = new Product(); product.setName(\u0026#34;television\u0026#34;); product.setPrice(localizePrice(locale, 15678.43)); product.setLastUpdated(localizeDate(locale, LocalDate.of(2021, Month.SEPTEMBER, 22))); products.add(product); product = new Product(); product.setName(\u0026#34;washingmachine\u0026#34;); product.setPrice(localizePrice(locale, 152637.76)); product.setLastUpdated(localizeDate(locale, LocalDate.of(2021, Month.SEPTEMBER, 20))); products.add(product); return products; } private String localizeDate(final Locale locale, final LocalDate date ) { String localizedDate = DateTimeFormatter.ISO_LOCAL_DATE.format(date); return localizedDate; } private String localizePrice(final Locale locale, final Double price ) { NumberFormat numberFormat=NumberFormat.getInstance(locale); String localizedPrice = numberFormat.format(price); return localizedPrice; } } Here we have added ProductsController as the controller class. We have added the index method where we are populating the model for a collection of products. The view name is set to index which maps to the view index.html.\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta http-equiv=\u0026#34;Content-Type\u0026#34; content=\u0026#34;text/html; charset=utf-8\u0026#34; /\u0026gt; \u0026lt;title data-th-text=\u0026#34;#{label.title}\u0026#34;\u0026gt;\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; ... ... \u0026lt;table border=\u0026#34;1\u0026#34;\u0026gt; ... ... 
\u0026lt;tr th:each=\u0026#34;product: ${products}\u0026#34;\u0026gt; \u0026lt;td data-th-text=\u0026#34;#{__${product.name}__}\u0026#34;\u0026gt;\u0026lt;/td\u0026gt; \u0026lt;td data-th-text=\u0026#34;${product.price}\u0026#34; /\u0026gt; \u0026lt;td data-th-text=\u0026#34;${product.lastUpdated}\u0026#34; /\u0026gt; \u0026lt;/tr\u0026gt; \u0026lt;/table\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; In this index.html, we have used the data-th-text attribute to read the values from our resource bundles based on the user\u0026rsquo;s locale.\nRunning the Internationalized Application Next, we run the application and open the URL http://localhost:8080/index in the browser. The website is rendered in the default locale with links for changing the language of the page to English, French, or German.\nWhen we click on the links, the page is refreshed with the text elements rendered in the language the user selected by clicking the link:\nThe links are formed by appending the parameter language to the URL. The locale is switched with the help of the LocaleChangeInterceptor defined in our Spring configuration class MessageConfig, which switches to a new locale based on the value of the language parameter appended to an HTTP request URL like http://localhost:8080/index?language=de, as explained in a previous section.\nConclusion Here is a list of the major points for a quick reference:\n Internationalization is a mechanism to create multilingual software that can be adapted to different languages and regions. A related term: Localization is the process of adapting the internationalized application to a specific language and region by adding region-specific text and components. A locale in the context of internationalization represents a user\u0026rsquo;s language, geographical region, and any specific variant like dialect. 
Language-specific text is defined in a resource bundle, which is a set of properties files with the same base name and a language-specific suffix. Spring Boot uses the ResourceBundleMessageSource to access the Java resource bundles using specified base names. The user\u0026rsquo;s locale is resolved from the incoming request through the LocaleResolver interface, and a change in the locale is intercepted by the LocaleChangeInterceptor class.  You can refer to all the source code used in the article on Github.\n","date":"December 4, 2021","image":"https://reflectoring.io/images/stock/0113-flags-1200x628-branded_huaafcef19c0da8c565f62c79af2f0f229_274569_650x0_resize_q90_box.jpg","permalink":"/spring-boot-internationalization/","title":"How to Internationalize a Spring Boot Application"},{"categories":["Java"],"contents":"A lot has changed in Java from its beginnings in 1995 until today. Java 8 was a revolutionary release that put Java back on the pedestal of the best programming languages.\nWe will go through most of the changes in the Java language that happened from Java 8 in 2014 until today. We will try to be as brief as possible on every feature. The intention is to have a reference for all features between Java 8 and Java 17 inclusive.\n Example Code This article is accompanied by a working code example on GitHub. Java 8 The main changes of the Java 8 release were these:\n Lambda Expression and Stream API Method Reference Default Methods Type Annotations Repeating Annotations Method Parameter Reflection  Lambda Expressions and Stream API Java was always known for having a lot of boilerplate code. With the release of Java 8, this statement became a little less valid. The Stream API and lambda expressions are new features that move us closer to functional programming.\nIn our examples, we will see how we use lambdas and streams in different scenarios.\nThe World Before Lambda Expressions We own a car dealership business. 
To discard all the paperwork, we want to create a piece of software that finds all currently available cars that have run less than 50,000 km.\nLet us take a look at how we would implement a function for something like this in a naive way:\npublic class LambdaExpressions { public static List\u0026lt;Car\u0026gt; findCarsOldWay(List\u0026lt;Car\u0026gt; cars) { List\u0026lt;Car\u0026gt; selectedCars = new ArrayList\u0026lt;\u0026gt;(); for (Car car : cars) { if (car.kilometers \u0026lt; 50000) { selectedCars.add(car); } } return selectedCars; } } To implement this, we are creating a static function that accepts a List of cars. It should return a filtered list according to a specified condition.\nUsing a Stream and a Lambda Expression We have the same problem as in the previous example.\nOur client wants to find all cars with the same criteria.\nLet us see a solution that uses the stream API and a lambda expression:\npublic class LambdaExpressions { public static List\u0026lt;Car\u0026gt; findCarsUsingLambda(List\u0026lt;Car\u0026gt; cars) { return cars.stream().filter(car -\u0026gt; car.kilometers \u0026lt; 50000) .collect(Collectors.toList()); } } We first turn the list of cars into a stream by calling the stream() method. Inside the filter() method we are setting our condition. We are evaluating every entry against the desired condition. We are keeping only those entries that have less than 50,000 kilometers. The last thing that we need to do is to collect the result into a list.\nMore about lambda expressions can be found in the docs.\nMethod Reference Without Method Reference We still own a car dealership shop, and we want to print out all the cars in the shop. For that, we will use a method reference.\nA method reference allows us to call functions in classes using the special syntax ::. 
There are four kinds of method references:\n Reference to a static method Reference to an instance method on an object Reference to an instance method on a type Reference to a constructor  Let us see how to do it using the standard method call:\npublic class MethodReference { List\u0026lt;String\u0026gt; withoutMethodReference = cars.stream().map(car -\u0026gt; car.toString()) .collect(Collectors.toList()); } We are using a lambda expression to call the toString() method on each car.\nUsing a Method Reference Now, let us see how to use a method reference in the same situation:\npublic class MethodReference { List\u0026lt;String\u0026gt; methodReference = cars.stream().map(Car::toString) .collect(Collectors.toList()); } This time, instead of writing a lambda expression, we pass a method reference to map(). We can see how it is more concise and easier to read.\nTo read more about method references please look at the docs.\nDefault Methods Let us imagine that we have a simple method log(String message) that prints log messages on invocation. We realized that we want to add timestamps to messages so that logs are easily searchable. We don\u0026rsquo;t want our clients to break after we introduce this change. We will do this using a default method implementation on an interface.\nDefault method implementation is the feature that allows us to create a fallback implementation of an interface method.\nUse Case Let us see how our contract looks:\npublic class DefaultMethods { public interface Logging { void log(String message); } public class LoggingImplementation implements Logging { @Override public void log(String message) { System.out.println(message); } } } We are creating a simple interface with just one method and implementing it in the LoggingImplementation class.\nAdding a New Method We will add a new method to the interface. 
The method accepts a second argument called date that represents the timestamp.\npublic class DefaultMethods { public interface Logging { void log(String message); void log(String message, Date date); } } We are adding a new method but not implementing it in all client classes. The compiler will fail with an error:\nClass \u0026#39;LoggingImplementation\u0026#39; must either be declared abstract or implement abstract method \u0026#39;log(String, Date)\u0026#39; in \u0026#39;Logging\u0026#39;. Using Default Methods After adding the new method to the interface, the compiler reported an error. We are going to solve this using a default method implementation for the new method.\nLet us look at how to create a default method implementation:\npublic class DefaultMethods { public interface Logging { void log(String message); default void log(String message, Date date) { System.out.println(date.toString() + \u0026#34;: \u0026#34; + message); } } } Putting the default keyword allows us to add the implementation of the method inside the interface. Now, our LoggingImplementation class does not fail with a compiler error even though we didn\u0026rsquo;t implement this new method inside of it.\nTo read more about default methods please refer to the docs.\nType Annotations Type annotations are one more feature introduced in Java 8. Even though we had annotations available before, now we can use them wherever we use a type. This means that we can use them on:\n a local variable definition constructor calls type casting generics throw clauses and more  Tools like IDEs can then read these annotations and show warnings or errors based on the annotations.\nLocal Variable Definition Let us see how to ensure that our local variable doesn\u0026rsquo;t end up as a null value:\npublic class TypeAnnotations { public static void main(String[] args) { @NotNull String userName = args[0]; } } We are using the annotation on a local variable definition here. 
A compile-time annotation processor could now read the @NotNull annotation and throw an error when the string is null.\nConstructor Call We want to make sure that we cannot create an empty ArrayList:\npublic class TypeAnnotations { public static void main(String[] args) { List\u0026lt;String\u0026gt; request = new @NotEmpty ArrayList\u0026lt;\u0026gt;(Arrays.stream(args).collect( Collectors.toList())); } } This is a perfect example of how to use type annotations on a constructor. Again, an annotation processor can evaluate the annotation and check that the array list is not empty.\nGeneric Type One of our requirements is that each email has to be in the format \u0026lt;name\u0026gt;@\u0026lt;company\u0026gt;.com. If we use type annotations, we can do it easily:\npublic class TypeAnnotations { public static void main(String[] args) { List\u0026lt;@Email String\u0026gt; emails; } } This is a definition of a list of email addresses. We use the @Email annotation to express that every record inside this list must be in the desired format.\nA tool could use reflection to evaluate the annotation and check that each of the elements in the list is a valid email address.\nFor more information about type annotations please refer to the docs.\nRepeating Annotations Let us imagine we have an application with fully implemented security. It has different levels of authorization. Even though we implemented everything carefully, we want to make sure that we log every unauthorized action. On each unauthorized action, we send an email to the owner of the company and our security admin group email. 
Repeating annotations are the way to go in this example.\nRepeating annotations allow us to place the same annotation multiple times on the same construct.\nCreating a Repeating Annotation For the example, we are going to create a repeating annotation called @Notify:\npublic class RepeatingAnnotations { @Repeatable(Notifications.class) public @interface Notify { String email(); } public @interface Notifications { Notify[] value(); } } We create @Notify as a regular annotation, but we add the @Repeatable (meta-)annotation to it. Additionally, we have to create a \u0026ldquo;container\u0026rdquo; annotation Notifications that contains an array of Notify objects. An annotation processor can now get access to all repeating Notify annotations through the container annotation Notifications.\nPlease note that this is a mock annotation just for demonstration purposes. This annotation will not send emails without an annotation processor that reads it and then sends emails.\nUsing Repeating Annotations We can add a repeating annotation multiple times to the same construct:\n@Notify(email = \u0026#34;admin@company.com\u0026#34;) @Notify(email = \u0026#34;owner@company.com\u0026#34;) public class UserNotAllowedForThisActionException extends RuntimeException { final String user; public UserNotAllowedForThisActionException(String user) { this.user = user; } } We have our custom exception class that we will throw whenever a user tries to do something that they are not allowed to do. Our annotations on this class say that we want to notify two email addresses when code throws this exception.\nTo read more about repeating annotations please refer to the docs.\nJava 9 Java 9 introduced these main features:\n Java Module System Try-with-resources Diamond Syntax with Inner Anonymous Classes Private Interface Methods  Java Module System A module is a group of packages, their dependencies, and resources. 
It provides a broader set of functionalities than packages.\nWhen creating a new module, we need to provide several attributes:\n Name Dependencies Public Packages - by default, all packages are module private Services Offered Services Consumed Reflection Permissions  Without going into too much detail, let us create our first module. Inside our example, we will show several options and keywords that one can use when creating a module.\nCreating Modules Inside IntelliJ First, we will go with a simple example. We will build a Hello World application where we print \u0026ldquo;Hello\u0026rdquo; from one module, and we call the second module to print \u0026ldquo;World!\u0026rdquo;.\nSince we are working in IntelliJ IDEA, there is something that we need to understand first. IntelliJ IDEA has the concept of modules. For it to work, each Java module has to correspond to one IntelliJ module.\nWe have two modules: hello.module and world.module. They correspond to the hello and world IntelliJ modules, respectively. Inside each of them, we have created the module-info.java file. This file defines our Java module. Inside, we declare which packages we need to export and on which modules we are dependent.\nDefining our First Module We are using the hello module to print the word \u0026ldquo;Hello\u0026rdquo;. Inside, we call the method in the world module, which will print \u0026ldquo;World!\u0026rdquo;. The first thing that we need to do is to declare the export of the package containing our World.class inside module-info.java:\nmodule world.module { exports com.reflectoring.io.app.world; } We use the keyword module with the module name to reference the module.\nThe next keyword that we use is exports. 
It tells the module system that we are making our com.reflectoring.io.app.world package visible outside of our module.\nThere are several other keywords that can be used:\n requires requires transitive exports to uses provides with open opens opens to  Out of these, we will show only the requires declaration. Others can be found in the docs.\nDefining our Second Module After we created and exported the world module, we can proceed with creating the hello module:\nmodule hello.module { requires world.module; } We define dependencies using the requires keyword. Here we are referencing the previously created world.module. Packages that are not exported are, by default, module private and cannot be seen from outside of the module.\nTo read more about the Java module system please refer to the docs.\nTry-with-resources Try-with-resources is a feature that enables us to declare resources in a try block that are released automatically by the JVM after the code has run. The only condition is that the declared resource implements the AutoCloseable interface.\nClosing a Resource Manually We want to read text using BufferedReader. BufferedReader is a closeable resource, so we need to make sure that it is properly closed after use. Before try-with-resources we would do it like this:\npublic class TryWithResources { public static void main(String[] args) { BufferedReader br = new BufferedReader( new StringReader(\u0026#34;Hello world example!\u0026#34;)); try { System.out.println(br.readLine()); } catch (IOException e) { e.printStackTrace(); } finally { try { br.close(); } catch (IOException e) { e.printStackTrace(); } } } } In the finally block, we would call close(). The finally block ensures that the reader is always properly closed.\nClosing a Resource with try-with-resources Java 7 introduced the try-with-resources feature that enables us to declare our resource inside the try definition, which ensures that the resource is closed without a finally block. Java 9 improved the feature: we can now use an existing effectively final variable directly in the try. 
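As a side note (this is our own illustration, not from the article's example repository), several resources can be declared in a single try; they are closed automatically in the reverse order of their declaration. A minimal sketch:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class MultipleResources {

    // Reads one line from each of two readers; both readers are closed
    // automatically when the try block exits, in reverse declaration order.
    public static String readBoth() throws IOException {
        try (BufferedReader first = new BufferedReader(new StringReader("Hello"));
             BufferedReader second = new BufferedReader(new StringReader("world"))) {
            return first.readLine() + " " + second.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readBoth()); // prints "Hello world"
    }
}
```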
Let us take a look at an example of using the BufferedReader to read a string:\npublic class TryWithResources { public static void main(String[] args) { final BufferedReader br3 = new BufferedReader( new StringReader(\u0026#34;Hello world example3!\u0026#34;)); try (br3) { System.out.println(br3.readLine()); } catch (IOException e) { System.out.println(\u0026#34;Error happened!\u0026#34;); } } } Inside the try definition, we are using our previously created, effectively final reader directly, which is the improvement Java 9 brought. Now we know that our reader will get closed every time.\nTo read more about the try-with-resources feature please refer to the docs.\nDiamond Syntax with Inner Anonymous Classes Before Java 9 we couldn\u0026rsquo;t use the diamond operator inside an inner anonymous class.\nFor our example, we will create an abstract class, StringAppender. The class has only one method that appends two strings with - between them. We will use an anonymous class to provide the implementation for the append() method:\npublic class DiamondOperator { StringAppender\u0026lt;String\u0026gt; appending = new StringAppender\u0026lt;\u0026gt;() { @Override public String append(String a, String b) { return new StringBuilder(a).append(\u0026#34;-\u0026#34;).append(b).toString(); } }; public abstract static class StringAppender\u0026lt;T\u0026gt; { public abstract T append(String a, String b); } } We use the diamond operator to omit the type on the constructor call new StringAppender\u0026lt;\u0026gt;(). 
Since we are using Java 8, in this example we will get a compiler error:\njava: cannot infer type arguments for com.reflectoring.io.java9.DiamondOperator.StringAppender\u0026lt;T\u0026gt; reason: \u0026#39;\u0026lt;\u0026gt;\u0026#39; with anonymous inner classes is not supported in -source 8 (use -source 9 or higher to enable \u0026#39;\u0026lt;\u0026gt;\u0026#39; with anonymous inner classes) In Java 9, this compiler error no longer occurs.\nPrivate Interface Methods We already mentioned how we use default methods in interfaces.\nHow do we split the implementation into several methods? When working with classes, we can achieve it using private methods. Could that be the solution in our case?\nAs of Java 9, yes. We can create private methods inside our interfaces.\nUsage of Private Interface Methods For our example, we want to print out a set of names.\nThe interface containing this functionality has a default method defined. We decided that, if the client doesn\u0026rsquo;t provide an implementation, we should provide a set of predefined names that we read from the resources folder:\npublic class PrivateInterfaceMethods { public static void main(String[] args) { TestingNames names = new TestingNames(); System.out.println(names.fetchInitialData()); } public static class TestingNames implements NamesInterface { public TestingNames() { } } public interface NamesInterface { default List\u0026lt;String\u0026gt; fetchInitialData() { try (BufferedReader br = new BufferedReader( new InputStreamReader(this.getClass() .getResourceAsStream(\u0026#34;/names.txt\u0026#34;)))) { return readNames(br); } catch (IOException e) { e.printStackTrace(); return null; } } private List\u0026lt;String\u0026gt; readNames(BufferedReader br) throws IOException { ArrayList\u0026lt;String\u0026gt; names = new ArrayList\u0026lt;\u0026gt;(); String name; while ((name = br.readLine()) != null) { names.add(name); } return names; } } } We are using BufferedReader to read the file containing default 
names that we share with the client. To encapsulate our code and, possibly, make it reusable in other methods, we decided to move the code for reading the names and saving them into a List to a separate method. This method is private, and now we can use it anywhere inside our interface.\nAs mentioned, the main benefit of this feature in Java 9 is better encapsulation and reusability of the code.\nJava 10 Local Variable Type Inference Java always required explicit types on local variables.\nWhen writing and reading code, we always know which type we expect. On the other hand, a lot of the code is just type declarations that add little information.\nThe var keyword allows us to omit the type on the left-hand side of our statements.\nOld Way Let us look into the example here. We want to create a small set of people, put everything in one list, and then go through that list in a for loop to print out their first and last names:\npublic class LocalTypeVar { public void explicitTypes() { Person Roland = new Person(\u0026#34;Roland\u0026#34;, \u0026#34;Deschain\u0026#34;); Person Susan = new Person(\u0026#34;Susan\u0026#34;, \u0026#34;Delgado\u0026#34;); Person Eddie = new Person(\u0026#34;Eddie\u0026#34;, \u0026#34;Dean\u0026#34;); Person Detta = new Person(\u0026#34;Detta\u0026#34;, \u0026#34;Walker\u0026#34;); Person Jake = new Person(\u0026#34;Jake\u0026#34;, \u0026#34;Chambers\u0026#34;); List\u0026lt;Person\u0026gt; persons = List.of(Roland, Susan, Eddie, Detta, Jake); for (Person person : persons) { System.out.println(person.name + \u0026#34; - \u0026#34; + person.lastname); } } } This is the type of code that we can see in most cases in Java. We use explicit types to make sure that we know what the method expects.\nImplicit Typing with var Now, we will look into the same example, but using the var keyword that Java 10 introduced. We still want to create several person objects and put them in a list. 
After that, we will go through that list and print out the name of each person:\npublic class LocalTypeVar { public void varTypes() { var Roland = new Person(\u0026#34;Roland\u0026#34;, \u0026#34;Deschain\u0026#34;); var Susan = new Person(\u0026#34;Susan\u0026#34;, \u0026#34;Delgado\u0026#34;); var Eddie = new Person(\u0026#34;Eddie\u0026#34;, \u0026#34;Dean\u0026#34;); var Detta = new Person(\u0026#34;Detta\u0026#34;, \u0026#34;Walker\u0026#34;); var Jake = new Person(\u0026#34;Jake\u0026#34;, \u0026#34;Chambers\u0026#34;); var persons = List.of(Roland, Susan, Eddie, Detta, Jake); for (var person : persons) { System.out.println(person.name + \u0026#34; - \u0026#34; + person.lastname); } } } We can see some of the most typical examples of using var on local variables. First, we use it for defining local variables. It can be a standalone object or even a list.\nFor more details about local variable type inference please visit the docs.\nJava 11 Local Variable Type in Lambda Expressions Java 11 introduced an improvement to the previously mentioned local variable type inference. 
This allows us to use var inside lambda expressions.\nWe will, again, create several persons, collect them into a list, and filter out entries that don\u0026rsquo;t have an \u0026lsquo;a\u0026rsquo; inside their name:\npublic class LocalTypeVarLambda { public void explicitTypes() { var Roland = new Person(\u0026#34;Roland\u0026#34;, \u0026#34;Deschain\u0026#34;); var Susan = new Person(\u0026#34;Susan\u0026#34;, \u0026#34;Delgado\u0026#34;); var Eddie = new Person(\u0026#34;Eddie\u0026#34;, \u0026#34;Dean\u0026#34;); var Detta = new Person(\u0026#34;Detta\u0026#34;, \u0026#34;Walker\u0026#34;); var Jake = new Person(\u0026#34;Jake\u0026#34;, \u0026#34;Chambers\u0026#34;); var filteredPersons = List.of(Roland, Susan, Eddie, Detta, Jake) .stream() .filter((var x) -\u0026gt; x.name.contains(\u0026#34;a\u0026#34;)) .collect(Collectors.toList()); System.out.println(filteredPersons); } } Inside the filter() method we are using var to infer the type instead of explicitly mentioning it.\nPlease note that the inferred type is the same whether we write var or omit the parameter type entirely. The advantage of an explicit var is that we can attach annotations such as @NotNull to the lambda parameter.\nJava 14 Switch Expressions Switch expressions allow us to omit the break statement in every case block. 
This helps with the readability and understanding of the code.\nIn this section, we will see several ways to use switch expressions.\nOld Way of Switch Statements We have a method where a client provides the desired month, and we return the number of days in that month.\nThe first thing that comes to our mind is to build it with switch-case statements:\npublic class SwitchExpression { public static void main(String[] args) { int days = 0; Month month = Month.APRIL; switch (month) { case JANUARY, MARCH, MAY, JULY, AUGUST, OCTOBER, DECEMBER : days = 31; break; case FEBRUARY : days = 28; break; case APRIL, JUNE, SEPTEMBER, NOVEMBER : days = 30; break; default: throw new IllegalStateException(); } } } We need to make sure that we put a break statement inside each case block. Forgetting it results in fall-through: after the first match, execution continues into the following case blocks.\nUsing Switch Expressions We will look into the same method as before. The user wants to send the month and get the number of days in that month:\npublic class SwitchExpression { public static void main(String[] args) { int days = 0; Month month = Month.APRIL; days = switch (month) { case JANUARY, MARCH, MAY, JULY, AUGUST, OCTOBER, DECEMBER -\u0026gt; 31; case FEBRUARY -\u0026gt; 28; case APRIL, JUNE, SEPTEMBER, NOVEMBER -\u0026gt; 30; default -\u0026gt; throw new IllegalStateException(); }; } } We are using a slightly different notation in the case block: -\u0026gt; instead of the colon. Even though there is no break statement, execution still leaves the switch after the first matching case.\nThis will do the same thing as the code shown in the previous example.\nThe yield Keyword The logic inside the case block can be a bit more complicated than just returning a value. 
For example, we want to log which month the user sent us:\npublic class SwitchExpression { public static void main(String[] args) { int days = 0; Month month = Month.APRIL; days = switch (month) { case JANUARY, MARCH, MAY, JULY, AUGUST, OCTOBER, DECEMBER -\u0026gt; { System.out.println(month); yield 31; } case FEBRUARY -\u0026gt; { System.out.println(month); yield 28; } case APRIL, JUNE, SEPTEMBER, NOVEMBER -\u0026gt; { System.out.println(month); yield 30; } default -\u0026gt; throw new IllegalStateException(); }; } } In a multi-line code block, we have to use the yield keyword to return a value from a case block.\nTo read more about using switch expressions please refer to the docs.\nJava 15 Text Blocks Text blocks are an improvement in the formatting of String variables. From Java 15, we can write a String that spans several lines as regular text.\nExample Without Using Text Blocks We want to send an HTML document via email. We are storing the email template in a variable:\npublic class TextBlocks { public static void main(String[] args) { System.out.println( \u0026#34;\u0026lt;!DOCTYPE html\u0026gt;\\n\u0026#34; + \u0026#34;\u0026lt;html\u0026gt;\\n\u0026#34; + \u0026#34; \u0026lt;head\u0026gt;\\n\u0026#34; + \u0026#34; \u0026lt;title\u0026gt;Example\u0026lt;/title\u0026gt;\\n\u0026#34; + \u0026#34; \u0026lt;/head\u0026gt;\\n\u0026#34; + \u0026#34; \u0026lt;body\u0026gt;\\n\u0026#34; + \u0026#34; \u0026lt;p\u0026gt;This is an example of a simple HTML \u0026#34; + \u0026#34;page with one paragraph.\u0026lt;/p\u0026gt;\\n\u0026#34; + \u0026#34; \u0026lt;/body\u0026gt;\\n\u0026#34; + \u0026#34;\u0026lt;/html\u0026gt;\\n\u0026#34;); } } We are formatting our string like in the example above. We need to take care of newlines and concatenate all the lines into a single string.\nExample of Using Text Blocks Let us look into the same example of an HTML template for email. We want to send an example email with some straightforward HTML formatting. 
This time we will use a text block:\npublic class TextBlocks { public static void main(String[] args) { System.out.println( \u0026#34;\u0026#34;\u0026#34; \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;Example\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;p\u0026gt;This is an example of a simple HTML page with one paragraph.\u0026lt;/p\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; \u0026#34;\u0026#34;\u0026#34; ); } } We use a special syntax for the opening and closing quotes: \u0026quot;\u0026quot;\u0026quot;. This allows us to treat our string as if we were writing it in a .txt file.\nThere are some rules that we need to abide by when using a text block. We need to make sure that we put a new line after our opening quotes, or the compiler will throw an error:\nIllegal text block start: missing new line after opening quotes. If we want to end our string with \\n we can do it by putting a new line before the closing \u0026quot;\u0026quot;\u0026quot;, like in the example above.\nTo read more about text blocks please refer to the docs.\nJava 16 Pattern Matching for instanceof Pattern matching for instanceof allows us to cast our variable inline and use it inside the desired if-else block without an explicit cast.\nExample Without Pattern Matching We have a base class called Vehicle and two classes that extend it: Car and Bicycle. 
We omitted the code for this, and you can look it up in the GitHub repo.\nOur algorithm for calculating prices depends on the type of the vehicle that is passed to it:\npublic class PatternMatching { public static double priceOld(Vehicle v) { if (v instanceof Car) { Car c = (Car) v; return 10000 - c.kilometers * 0.01 - (Calendar.getInstance().get(Calendar.YEAR) - c.year) * 100; } else if (v instanceof Bicycle) { Bicycle b = (Bicycle) v; return 1000 + b.wheelSize * 10; } else throw new IllegalArgumentException(); } } Since we are not using pattern matching, we need to cast the vehicle into the correct type inside each if-else block. As we can see, it is a typical example of the boilerplate code for which Java is famous.\nUsing Pattern Matching Let\u0026rsquo;s see how we can discard the boilerplate from the example above:\npublic class PatternMatching { public static double price(Vehicle v) { if (v instanceof Car c) { return 10000 - c.kilometers * 0.01 - (Calendar.getInstance().get(Calendar.YEAR) - c.year) * 100; } else if (v instanceof Bicycle b) { return 1000 + b.wheelSize * 10; } else throw new IllegalArgumentException(); } } One thing to note is the scope of the cast variable. It\u0026rsquo;s visible only within the corresponding if branch.\nFor more information about pattern matching for instanceof please refer to the docs.\nRecords How many POJOs (Plain Old Java Objects) have you written?\nWell, I can answer for myself: \u0026ldquo;Too many!\u0026rdquo;.\nJava has had a bad reputation for boilerplate code. Lombok allowed us to stop worrying about getters, setters, etc. 
Java 16 finally introduced records to remove a lot of boilerplate code.\nA record class is nothing more than a regular POJO for which most of the code is generated from the definition.\nPlain Old Java Object Definition Let us look at an example of a POJO class before Java 16 introduced records:\npublic class Vehicle { String code; String engineType; public String getCode() { return code; } public void setCode(String code) { this.code = code; } public String getEngineType() { return engineType; } public void setEngineType(String engineType) { this.engineType = engineType; } public Vehicle(String code, String engineType) { this.code = code; this.engineType = engineType; } @Override public boolean equals(Object o) ... @Override public int hashCode() ... @Override public String toString() ... } There are almost 50 lines of code for an object that contains only two properties. The IDE generated this code, but still, it is there and has to be maintained.\nRecord Definition The definition of a vehicle record with the same two properties can be done in just one line:\npublic record VehicleRecord(String code, String engineType) {} This one line gives us accessors, a constructor, equals(), hashCode(), and toString() equivalent to the example above. Note that records are immutable, so there are no setters. One thing to note is that a record class is implicitly final, and we need to comply with that. That means we cannot extend a record class, but most other things are available to us.\nTo read more about record classes please refer to the docs.\nJava 17 Sealed Classes The final modifier on a class doesn\u0026rsquo;t allow anyone to extend it. What about when we want to extend a class but only allow it for some classes?\nWe are back at our car dealership business. We are so proud of our algorithm for calculating prices that we want to expose it. We don\u0026rsquo;t want anyone using our Vehicle representation, though. It is valid just for our business. We can see a bit of a problem here. 
We need to expose the class but also constrain it.\nThis is where Java 17 comes into play with sealed classes. A sealed class allows us to make a class final for everyone except explicitly mentioned classes.\npublic sealed class Vehicle permits Bicycle, Car {...} We added the sealed modifier to our Vehicle class, and we had to add the permits keyword with a list of the classes that we allow to extend it. After this change, we are still getting errors from the compiler.\nThere is one more thing that we need to do here.\nWe need to add the final, sealed, or non-sealed modifier to the classes that extend our class.\npublic final class Bicycle extends Vehicle {...} Constraints Several constraints have to be met for the sealed class to work:\n Permitted subclasses must be accessible by the sealed class at compile time Permitted subclasses must directly extend the sealed class Permitted subclasses must have one of the following modifiers:  final sealed non-sealed   Permitted subclasses must be in the same Java module (or, if the sealed class is in the unnamed module, in the same package)  More details about sealed classes can be found in the docs.\n","date":"November 29, 2021","image":"https://reflectoring.io/images/stock/0065-java-1200x628-branded_hu49f406cdc895c98f15314e0c34cfd114_116403_650x0_resize_q90_box.jpg","permalink":"/java-release-notes/","title":"Java Features from Java 8 to Java 17"},{"categories":["Java"],"contents":"A hash is a piece of text computed with a cryptographic hashing function. It is used for various purposes, mainly in the security realm, like securely storing sensitive information and safeguarding data integrity.\nIn this post, we will illustrate the creation of common types of hashes in Java along with examples of using hashes for generating checksums of data files and for storing sensitive data like passwords and secrets.\n Example Code This article is accompanied by a working code example on GitHub. 
Features of Hash Functions Most cryptographic hash functions take a string of any arbitrary length as input and produce the hash as a fixed-length value.\nA hashing function is a one-way function, that is, a function for which it is practically infeasible to invert or reverse the computation to produce the original plain text from the hashed output.\nApart from being produced by a unidirectional function, some of the essential features of a hash are:\n The size of the hash is always fixed and does not depend on the size of the input data. A hash of data is effectively unique: for a good hashing function it is computationally infeasible to find two distinct inputs that produce the same hash. If it does happen, it\u0026rsquo;s called a collision. Collision resistance is one of the measures of the strength of a hashing function.  Hash Types We will look at the following types of hash in this post:\n MD5 Message Digest Secure Hash Algorithm (SHA) Password-Based Key Derivative Function with Hmac-SHA1 (PBKDF2WithHmacSHA1)  MD5 Message Digest Algorithm MD5 is defined in RFC 1321 as a hashing algorithm to turn inputs of any arbitrary length into a hash value of the fixed length of 128 bits (16 bytes).\nThe example below uses the MD5 hashing algorithm to produce a hash value from a String:\nimport java.security.MessageDigest; public class HashCreator { public String createMD5Hash(final String input) throws NoSuchAlgorithmException { String hashtext = null; MessageDigest md = MessageDigest.getInstance(\u0026#34;MD5\u0026#34;); // Compute message digest of the input  byte[] messageDigest = md.digest(input.getBytes()); hashtext = convertToHex(messageDigest); return hashtext; } private String convertToHex(final byte[] messageDigest) { BigInteger bigint = new BigInteger(1, messageDigest); String hexText = bigint.toString(16); while (hexText.length() \u0026lt; 32) { hexText = \u0026#34;0\u0026#34;.concat(hexText); } return hexText; } } Here we have used the digest() method of the MessageDigest class from the java.security package to 
create the MD5 hash in bytes and then converted those bytes to hex format to generate the hash as text.\nSome sample hashes generated as output of this program look like this:\n   Input Hash     aristotle 51434272DDCB40E9CA2E2A3AE6231FA9   MyPassword 48503DFD58720BD5FF35C102065A52D7   password123 482C811DA5D5B4BC6D497FFA98491E38    The MD5 hashing function has been found to suffer from extensive vulnerabilities. However, it remains suitable for other non-cryptographic purposes, for example for determining the partition key for a particular record in a partitioned database.\nMD5 is a preferred hashing function in situations that require lower computational resources than the more recent Secure Hash Algorithm (SHA) family covered in the next section.\nSecure Hash Algorithm (SHA) The SHA (Secure Hash Algorithm) is a family of cryptographic hash functions very similar to MD5 except that it generates stronger hashes.\nWe will use the same MessageDigest class as before to produce a hash value using the SHA-256 hashing algorithm:\npublic class HashCreator { public String createSHAHash(final String input) throws NoSuchAlgorithmException { String hashtext = null; MessageDigest md = MessageDigest.getInstance(\u0026#34;SHA-256\u0026#34;); byte[] messageDigest = md.digest(input.getBytes(StandardCharsets.UTF_8)); hashtext = convertToHex(messageDigest); return hashtext; } private String convertToHex(final byte[] messageDigest) { BigInteger bigint = new BigInteger(1, messageDigest); String hexText = bigint.toString(16); while (hexText.length() \u0026lt; 64) { hexText = \u0026#34;0\u0026#34;.concat(hexText); } return hexText; } } Other than the name of the algorithm and the padding length (64 hex characters for the 32-byte SHA-256 digest), the program is the same as before. 
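The digest-then-hex-encode flow described above can be condensed into a small self-contained sketch; the class and method names here are illustrative and not part of the article's HashCreator:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha256Example {

    // Digest the input with SHA-256 and hex-encode the 32-byte result
    public static String sha256Hex(String input) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
        // signum 1 makes BigInteger treat the digest bytes as an unsigned value
        String hex = new BigInteger(1, digest).toString(16);
        // pad to 64 hex characters in case the digest has leading zero bytes
        while (hex.length() < 64) {
            hex = "0" + hex;
        }
        return hex;
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // "abc" is a widely published SHA-256 test vector
        System.out.println(sha256Hex("abc"));
    }
}
```

Swapping "SHA-256" for any other supported algorithm name ("MD5", "SHA3-256", and so on) yields the corresponding hash, with only the padding length needing adjustment.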
Some sample hashes generated as output of this program look like this:\n   Input Hash     aristotle 9280c8db01b05444ff6a26c52efbe639b4879a1c49bfe0e2afdc686e93d01bcb   MyPassword dc1e7c03e162397b355b6f1c895dfdf3790d98c10b920c55e91272b8eecada2a   password123 ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f    As we can see, the hashes produced by SHA-256 are 32 bytes in length. Similarly, SHA-512 produces hashes of length 64 bytes.\nJava supports the following SHA-2 algorithms:\n SHA-224 SHA-256 SHA-384 SHA-512 SHA-512/224 SHA-512/256  SHA-3 is considered more secure than SHA-2 for the same hash length. Java supports the following SHA-3 algorithms from Java 9 onwards:\n SHA3-224 SHA3-256 SHA3-384 SHA3-512  Here are some sample hashes generated as output using SHA3-224 as the hashing function:\n   Input Hash     aristotle d796985fc3189fd402ad5ef7608c001310b525c3f495b93a632ad392   MyPassword 5dbf252c33ce297399aefedee5db51559d956744290e9aaba31069f2   password123 cc782e5480878ba3fb6bb07905fdcf4a00e056adb957ae8a03c53a52    We will encounter a NoSuchAlgorithmException exception if we try to use an unsupported algorithm.\nSecuring a Hash with a Salt A salt is a random piece of data that is used as an input in addition to the data that is passed into the hashing function. 
The goal of salting is to defend against dictionary attacks or attacks against hashed passwords using a rainbow table.\nLet us create a salted MD5 hash by enriching the hash generation method we used in the earlier section:\npublic class HashCreator { public String createPasswordHashWithSalt(final String textToHash) { try { byte[] salt = createSalt(); return createSaltedHash(textToHash, salt); } catch (Exception e) { e.printStackTrace(); } return null; } private String createSaltedHash(String textToHash, byte[] salt) throws NoSuchAlgorithmException { String saltedHash = null; // Create MessageDigest instance for MD5  MessageDigest md = MessageDigest.getInstance(\u0026#34;MD5\u0026#34;); //Add salted bytes to digest  md.update(salt); //Get the hash\u0026#39;s bytes  byte[] bytes = md.digest(textToHash.getBytes()); //Convert it to hexadecimal format to  //get complete salted hash in hex format  saltedHash = convertToHex(bytes); return saltedHash; } //Create salt  private byte[] createSalt() throws NoSuchAlgorithmException, NoSuchProviderException { //Always use a SecureRandom generator for random salt  SecureRandom sr = SecureRandom.getInstance(\u0026#34;SHA1PRNG\u0026#34;, \u0026#34;SUN\u0026#34;); //Create array for salt  byte[] salt = new byte[16]; //Get a random salt  sr.nextBytes(salt); //return salt  return salt; } } Here we are generating a random salt using Java\u0026rsquo;s SecureRandom class. We are then using this salt to update the MessageDigest instance before calling the digest method on the instance to generate the salted hash.\nPassword Based Key Derivative Function with HmacSHA1 (PBKDF2WithHmacSHA1) PBKDF2WithHmacSHA1 is best understood by breaking it into its component parts :\n PBKDF2 Hmac SHA1  Any cryptographic hash function can be used for the calculation of an HMAC (hash-based message authentication code). 
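Java exposes HMAC computation through the javax.crypto.Mac class. A minimal sketch is shown below; the algorithm names are standard JCA names, while the key and message values are made-up placeholders:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;

public class HmacExample {

    // Compute an HMAC of the message under the given secret key and
    // return it hex-encoded; algorithm is e.g. "HmacSHA1" or "HmacSHA256"
    public static String hmacHex(String algorithm, String key, String message)
            throws GeneralSecurityException {
        Mac mac = Mac.getInstance(algorithm);
        mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), algorithm));
        byte[] tag = mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, tag).toString(16);
    }

    public static void main(String[] args) throws GeneralSecurityException {
        // the same key and message always produce the same tag
        System.out.println(hmacHex("HmacSHA256", "some-secret-key", "hello"));
    }
}
```

Unlike a plain hash, an HMAC can only be verified by a party that knows the secret key, which is what makes it a message authentication code rather than just a digest.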
The resulting MAC algorithm is termed HMAC-MD5 or HMAC-SHA1 accordingly.\nIn the earlier sections, we have seen that the MD5 and SHA algorithms generate hashes which can be made more secure with the help of a salt. But due to the ever-improving computation capabilities of the hardware, hashes can still be cracked with brute force attacks. We can mitigate this by making the brute force attack slower.\nThe PBKDF2WithHmacSHA1 algorithm uses the same concept. It slows down the hashing method to delay the attacks but still fast enough to not cause any significant delay in generating the hash for normal use cases.\nAn example of generating the hash with PBKDF2WithHmacSHA1 is given below:\npublic class HashCreator { public String generateStrongPasswordHash(final String password) throws NoSuchAlgorithmException, InvalidKeySpecException, NoSuchProviderException { int iterations = 1000; byte[] salt = createSalt(); byte[] hash = createPBEHash(password, iterations, salt, 64); // prepend iterations and salt to the hash  return iterations + \u0026#34;:\u0026#34; + convertToHex(salt) + \u0026#34;:\u0026#34; + convertToHex(hash); } //Create salt  private byte[] createSalt() throws NoSuchAlgorithmException, NoSuchProviderException { //Always use a SecureRandom generator for random salt  SecureRandom sr = SecureRandom.getInstance(\u0026#34;SHA1PRNG\u0026#34;, \u0026#34;SUN\u0026#34;); //Create array for salt  byte[] salt = new byte[16]; //Get a random salt  sr.nextBytes(salt); //return salt  return salt; } //Create hash of password with salt, iterations, and keylength  private byte[] createPBEHash( final String password, final int iterations, final byte[] salt, final int keyLength) throws NoSuchAlgorithmException, InvalidKeySpecException { PBEKeySpec spec = new PBEKeySpec(password.toCharArray(), salt, iterations, keyLength * 8); SecretKeyFactory skf = SecretKeyFactory .getInstance(\u0026#34;PBKDF2WithHmacSHA1\u0026#34;); return skf.generateSecret(spec).getEncoded(); } } Here we have 
configured the algorithm with 1000 iterations and a random salt of length 16. The iterations and salt value is prepended to the hash in the last step. We will need these values for verifying the hash as explained below.\nThis algorithm is used for hashing passwords before storing them in secure storage.\nA sample password hash generated with this program looks like this:\n1000:de4239996e6112a67fb89361def4933f:a7983b33763eb754faaf4c87f735b76c5a1410bb4a81f2a3f23c8159eab67569916e3a86197cc2c2c16d4af616705282a828e0990a53e15be6b82cfa343c70ef If we observe the hash closely, we can see the password hash is composed of three parts containing the number of iterations, salt, and the hash which are separated by :.\nWe will now verify this hash using the below program:\npublic class HashCreator { private boolean validatePassword(final String originalPassword, final String storedPasswordHash) throws NoSuchAlgorithmException, InvalidKeySpecException { // Split the string by :  String[] parts = storedPasswordHash.split(\u0026#34;:\u0026#34;); // Extract iterations, salt, and hash  // from the stored password hash  int iterations = Integer.valueOf(parts[0]); byte[] salt = convertToBytes(parts[1]); byte[] hash = convertToBytes(parts[2]); byte[] originalPasswordHash = createPBEHash( originalPassword, iterations, salt, hash.length); int diff = hash.length ^ originalPasswordHash.length; for (int i = 0; i \u0026lt; hash.length \u0026amp;\u0026amp; i \u0026lt; originalPasswordHash.length; i++) { diff |= hash[i] ^ originalPasswordHash[i]; } return diff == 0; } //Create hash of password with salt, iterations, and keylength  private byte[] createPBEHash( final String password, final int iterations, final byte[] salt, final int keyLength) throws NoSuchAlgorithmException, InvalidKeySpecException { PBEKeySpec spec = new PBEKeySpec(password.toCharArray(), salt, iterations, keyLength * 8); SecretKeyFactory skf = SecretKeyFactory .getInstance(\u0026#34;PBKDF2WithHmacSHA1\u0026#34;); return 
skf.generateSecret(spec).getEncoded(); } } The validatePassword method in this code snippet takes the password in plain text which we want to verify against the stored hash of the password generated in the previous step.\nIn the first step, we have split the stored hash to extract the iterations, salt, and the hash and then used these values to regenerate the hash for comparing with the stored hash of the original password.\nGenerating a Checksum for Verifying Data Integrity Another common utility of hashes is for verifying whether the data (or file) at rest or during transit between two environments has been tampered with, a concept known as data integrity.\nSince the hash function always produces the same output for the same given input, we can compare a hash of the source file with a newly created hash of the destination file to check that it is intact and unmodified.\nFor this, we generate a hash of the data called the checksum before storing or transferring. We generate the hash again before using the data. If the two hashes match, we determine that the integrity check is passed and the data has not been tampered with.\nHere is a code snippet for generating a checksum of a file:\npublic class HashCreator { public String createChecksum(final String filePath) throws FileNotFoundException, IOException, NoSuchAlgorithmException { MessageDigest md = MessageDigest.getInstance(\u0026#34;SHA-256\u0026#34;); try (DigestInputStream dis = new DigestInputStream( new FileInputStream(filePath), md)) { while (dis.read() != -1) ; md = dis.getMessageDigest(); } String checksum = convertToHex(md.digest()); return checksum; } } The createChecksum() method in this code snippet generates a SHA-256 hash of a file stored in a disk. A sample checksum for textual data stored in a csv file looks like this:\nbcd7affc0dd150c42505513681c01bf6e07a039c592569588e73876d52f0fa27 The hash is generated again before using the data. 
If the two hashes match, we determine that the integrity check is passed and the data in the file has not been tampered with.\nMD5 hashes are also used to generate checksums of files because of their higher computation speed.\nSome Other Uses for Hashes Finding Duplicates: A simple rule of hashing is that the same input generates the same hash. Thus, if two hashes are the same, the inputs are almost certainly the same.\nData Structures: Hash tables are extensively used in data structures. Almost all data structures that support key-value pairs use hash tables. For example, HashMap and HashSet in Java, and map and unordered_map in C++, use hash tables.\nConclusion In this post, we looked at the different types of hashes and how they can be generated in Java applications.\nHere are some key points from the post:\n A hash is a piece of text computed with a hashing function that is a one-way function for which it is practically infeasible to reverse the computation to produce the original plain text from the hashed output. Ideally, no two distinct data sets should produce the same hash; when two distinct inputs do produce the same hash, it is called a collision. Collision resistance is one of the measures of the strength of a hashing function. The SHA (Secure Hash Algorithm) family of cryptographic hash functions generates stronger hashes than those generated by MD5. We can make a hash more secure by adding a random piece of data called a salt to the data that is fed into the hashing function. The goal of salting is to defend against dictionary attacks or attacks against hashed passwords using a rainbow table. We also saw the usage of hashes for verifying the data integrity of files during transfer and for storing sensitive data like passwords.  
You can refer to all the source code used in the article on Github.\n","date":"November 21, 2021","image":"https://reflectoring.io/images/stock/0044-lock-1200x628-branded_hufda82673b597e36c6f6f4e174d972b96_267480_650x0_resize_q90_box.jpg","permalink":"/creating-hashes-in-java/","title":"Creating Hashes in Java"},{"categories":["Java"],"contents":"Hypertext Transfer Protocol (HTTP) is an application-layer protocol for transmitting hypermedia documents, such as HTML, and API payloads in a standard format like JSON and XML.\nIt is a commonly used protocol for communication between applications that publish their capabilities in the form of REST APIs. Applications built with Java rely on some form of HTTP client to make API invocations on other applications.\nA wide array of alternatives exists for choosing an HTTP client. This article provides an overview of some of the major libraries which are used as HTTP clients in Java applications for making HTTP calls.\n Example Code This article is accompanied by a working code example on GitHub. Overview of HTTP Clients We will look at the following HTTP clients in this post:\n \u0026lsquo;HttpClient\u0026rsquo; included from Java 11 for applications written in Java 11 and above Apache HttpClient from the Apache HttpComponents project OkHttpClient from Square Spring WebClient for Spring Boot applications  To cover the most common scenarios, we will look at examples of sending an asynchronous HTTP GET request and a synchronous POST request for each type of client.\nFor HTTP GET requests, we will invoke the API: https://weatherbit-v1-mashape.p.rapidapi.com/forecast/3hourly?lat=35.5\u0026amp;lon=-78.5 with API keys created from the API portal. These values are stored in a constants file URLConstants.java. 
The API key and value will be sent as a request header along with the HTTP GET requests.\nOther APIs will have different controls for access and the corresponding HTTP clients need to be adapted accordingly.\nFor HTTP POST requests, we will invoke the API: https://reqbin.com/echo/post/json which takes a JSON body in the request.\nWe can observe a common pattern of steps among all the HTTP clients during their usage in our examples:\n Create an instance of the HTTP client. Create a request object for sending the HTTP request. Make the HTTP call either synchronous or asynchronous. Process the HTTP response received in the previous step.  Let us look at each type of client and understand how to use them in our applications:\nNative HttpClient for Applications in Java 11 and Above The native HttpClient was introduced as an incubator module in Java 9 and then made generally available in Java 11 as a part of JEP 321.\nHTTPClient replaces the legacy HttpUrlConnection class present in the JDK since the early versions of Java.\nSome of its features include:\n Support for HTTP/1.1, HTTP/2, and Web Socket. Support for synchronous and asynchronous programming models. Handling of request and response bodies as reactive streams. Support for cookies.  
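The examples that follow reference a URLConstants holder for the endpoint URL and the API key header. A sketch of what it might contain is shown below; the header name and key value are placeholders, not real credentials:

```java
// Hypothetical constants holder mirroring the URLConstants.java
// mentioned above; substitute a real key from the API portal.
public final class URLConstants {

    public static final String URL =
        "https://weatherbit-v1-mashape.p.rapidapi.com/forecast/3hourly?lat=35.5&lon=-78.5";

    // header name and value assumed for illustration
    public static final String API_KEY_NAME = "x-rapidapi-key";
    public static final String API_KEY_VALUE = "your-api-key-here";

    private URLConstants() {
        // no instances; this class only holds constants
    }

    public static void main(String[] args) {
        System.out.println(URL);
    }
}
```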
Asynchronous GET Request An example of using HttpClient for making an asynchronous GET request is shown below:\nimport java.net.URI; import java.net.URISyntaxException; import java.net.http.HttpClient; import java.net.http.HttpClient.Redirect; import java.net.http.HttpClient.Version; import java.net.http.HttpRequest; import java.net.http.HttpResponse; import java.net.http.HttpResponse.BodyHandlers; public class HttpClientApp { public void invoke() throws URISyntaxException { HttpClient client = HttpClient.newBuilder() .version(Version.HTTP_2) .followRedirects(Redirect.NORMAL) .build(); HttpRequest request = HttpRequest.newBuilder() .uri(new URI(URLConstants.URL)) .GET() .header(URLConstants.API_KEY_NAME, URLConstants.API_KEY_VALUE) .timeout(Duration.ofSeconds(10)) .build(); client.sendAsync(request, BodyHandlers.ofString()) .thenApply(HttpResponse::body) .thenAccept(System.out::println) .join(); } } Here we have used the builder pattern to create an instance of HttpClient and HttpRequest and then made an asynchronous call to the REST API. When creating the request, we have set the HTTP method as GET by calling the GET() method and also set the API URL and API key in the header along with a timeout value of 10 seconds.\nSynchronous POST Request For HTTP POST and PUT, we call the methods POST(BodyPublisher body) and PUT(BodyPublisher body) on the builder. 
The BodyPublisher parameter has several out-of-the-box implementations which simplify sending the request body.\npublic class HttpClientApp { public void invokePost() { try { String requestBody = prepareRequest(); HttpClient client = HttpClient.newHttpClient(); HttpRequest request = HttpRequest .newBuilder() .uri(URI.create(\u0026#34;https://reqbin.com/echo/post/json\u0026#34;)) .POST(HttpRequest.BodyPublishers.ofString(requestBody)) .header(\u0026#34;Accept\u0026#34;, \u0026#34;application/json\u0026#34;) .build(); HttpResponse\u0026lt;String\u0026gt; response = client.send(request, HttpResponse.BodyHandlers.ofString()); System.out.println(response.body()); } catch (IOException | InterruptedException e) { e.printStackTrace(); } } private String prepareRequest() throws JsonProcessingException { var values = new HashMap\u0026lt;String, String\u0026gt;() { { put(\u0026#34;Id\u0026#34;, \u0026#34;12345\u0026#34;); put(\u0026#34;Customer\u0026#34;, \u0026#34;Roger Moose\u0026#34;); put(\u0026#34;Quantity\u0026#34;, \u0026#34;3\u0026#34;); put(\u0026#34;Price\u0026#34;,\u0026#34;167.35\u0026#34;); } }; var objectMapper = new ObjectMapper(); String requestBody = objectMapper.writeValueAsString(values); return requestBody; } } Here we have created a JSON string in the prepareRequest() method for sending the request body in the HTTP POST() method.\nNext, we are using the builder pattern to create an instance of HttpRequest and then making a synchronous call to the REST API.\nWhen creating the request, we have set the HTTP method as POST by calling the POST() method and also set the API URL and body of the request by wrapping the JSON string in a BodyPublisher instance.\nThe response is extracted from the HTTP response by using a BodyHandler instance.\nUse of HttpClient is preferred if our application is built using Java 11 and above.\nApache HttpComponents HttpComponents is a project under the Apache Software Foundation and contains a toolset of low-level Java components 
for working with HTTP. The components under this project are divided into :\n HttpCore: A set of low-level HTTP transport components that can be used to build custom client and server-side HTTP services. HttpClient: An HTTP-compliant HTTP agent implementation based on HttpCore. It also provides reusable components for client-side authentication, HTTP state management, and HTTP connection management.  Dependency For API invocation with HttpClient, first we need to include the Apache HTTP Client 5 libraries using our dependency manager:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.httpcomponents.client5\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;httpclient5\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.1.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Here we have added the httpclient5 as a Maven dependency in our pom.xml.\nAsynchronous GET Request A common way to make asynchronous REST API invocation with the Apache HttpClient is shown below:\npublic class ApacheHttpClientApp { public void invoke() { try( CloseableHttpAsyncClient client = HttpAsyncClients.createDefault();) { client.start(); final SimpleHttpRequest request = SimpleRequestBuilder .get() .setUri(URLConstants.URL) .addHeader( URLConstants.API_KEY_NAME, URLConstants.API_KEY_VALUE) .build(); Future\u0026lt;SimpleHttpResponse\u0026gt; future = client.execute(request, new FutureCallback\u0026lt;SimpleHttpResponse\u0026gt;() { @Override public void completed(SimpleHttpResponse result) { String response = result.getBodyText(); System.out.println(\u0026#34;response::\u0026#34;+response); } @Override public void failed(Exception ex) { System.out.println(\u0026#34;response::\u0026#34;+ex); } @Override public void cancelled() { // do nothing  } }); HttpResponse response = future.get(); // Get HttpResponse Status  System.out.println(response.getCode()); // 200  System.out.println(response.getReasonPhrase()); // OK  } catch (InterruptedException | 
ExecutionException | IOException e) { e.printStackTrace(); } } } Here we are creating the client by instantiating the CloseableHttpAsyncClient with default parameters within an extended try block.\nAfter that, we start the client.\nNext, we are creating the request using SimpleHttpRequest and making the asynchronous call by calling the execute() method and attaching a FutureCallback class to capture and process the HTTP response.\nSynchronous POST Request Let us now make a synchronous POST Request with Apache HttpClient:\npublic class ApacheHttpClientApp { public void invokePost() { StringEntity stringEntity = new StringEntity(prepareRequest()); HttpPost httpPost = new HttpPost(\u0026#34;https://reqbin.com/echo/post/json\u0026#34;); httpPost.setEntity(stringEntity); httpPost.setHeader(\u0026#34;Accept\u0026#34;, \u0026#34;application/json\u0026#34;); httpPost.setHeader(\u0026#34;Content-type\u0026#34;, \u0026#34;application/json\u0026#34;); try( CloseableHttpClient httpClient = HttpClients.createDefault(); CloseableHttpResponse response = httpClient.execute(httpPost);) { // Get HttpResponse Status  System.out.println(response.getCode()); // 200  System.out.println(response.getReasonPhrase()); // OK  HttpEntity entity = response.getEntity(); if (entity != null) { // return it as a String  String result = EntityUtils.toString(entity); System.out.println(result); } } catch (ParseException | IOException e) { e.printStackTrace(); } } private String prepareRequest() { var values = new HashMap\u0026lt;String, String\u0026gt;() { { put(\u0026#34;Id\u0026#34;, \u0026#34;12345\u0026#34;); put(\u0026#34;Customer\u0026#34;, \u0026#34;Roger Moose\u0026#34;); put(\u0026#34;Quantity\u0026#34;, \u0026#34;3\u0026#34;); put(\u0026#34;Price\u0026#34;,\u0026#34;167.35\u0026#34;); } }; var objectMapper = new ObjectMapper(); String requestBody; try { requestBody = objectMapper.writeValueAsString(values); } catch (JsonProcessingException e) { e.printStackTrace(); } return requestBody; } 
} Here we have created a JSON string in the prepareRequest method for sending the request body in the HTTP POST method.\nNext, we are creating the request by wrapping the JSON string in a StringEntity class and setting it in the HttpPost class.\nWe are making a synchronous call to the API by invoking the execute() method on the CloseableHttpClient class which takes the HttpPost object populated with the StringEntity instance as the input parameter.\nThe response is extracted from the CloseableHttpResponse object returned by the execute() method.\nThe Apache HttpClient is preferred when we need extreme flexibility in configuring the behavior for example providing support for mutual TLS.\nOkHttpClient OkHttpClient is an open-source library originally released in 2013 by Square.\nDependency For API invocation with OkHttpClient, we need to include the okhttp libraries using our dependency manager:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.squareup.okhttp3\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;okhttp\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;4.9.2\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Here we have added the okhttp module as a Maven dependency in our pom.xml.\nAsynchronous GET Request The below code fragment illustrates the execution of the HTTP GET request using the OkHttpClient API:\npublic class OkHttpClientApp { public void invoke() throws URISyntaxException, IOException { OkHttpClient client = new OkHttpClient.Builder() .readTimeout(1000, TimeUnit.MILLISECONDS) .writeTimeout(1000, TimeUnit.MILLISECONDS) .build(); Request request = new Request.Builder() .url(URLConstants.URL) .get() .addHeader(URLConstants.API_KEY_NAME, URLConstants.API_KEY_VALUE) .build(); Call call = client.newCall(request); call.enqueue(new Callback() { public void onResponse(Call call, Response response) throws IOException { System.out.println(response.body().string()); } public void onFailure(Call call, IOException e) { // error  } 
}); } } Here we are customizing the client by using the builder pattern to set the timeout values of read and write operations.\nNext, we are creating the request using the Request.Builder for setting the API URL and API keys in the HTTP request header. Then we make an asynchronous HTTP call on the client and receive the response by attaching a Callback handler.\nSynchronous POST Request The below code illustrates executing a synchronous HTTP POST request using the OkHttpClient API:\npublic class OkHttpClientApp { public void invokePost() throws URISyntaxException, IOException { OkHttpClient client = new OkHttpClient.Builder() .readTimeout(1000, TimeUnit.MILLISECONDS) .writeTimeout(1000, TimeUnit.MILLISECONDS) .build(); //1. Create JSON Request for sending in the POST method  String requestBody = prepareRequest(); //2. Create Request Body  RequestBody body = RequestBody.create( requestBody, MediaType.parse(\u0026#34;application/json\u0026#34;)); //3. Create HTTP request  Request request = new Request.Builder() .url(\u0026#34;https://reqbin.com/echo/post/json\u0026#34;) .post(body) .addHeader(URLConstants.API_KEY_NAME, URLConstants.API_KEY_VALUE) .build(); //4. 
Synchronous call to the REST API  Response response = client.newCall(request).execute(); System.out.println(response.body().string()); } // Create JSON string with Jackson library  private String prepareRequest() throws JsonProcessingException { var values = new HashMap\u0026lt;String, String\u0026gt;() { { put(\u0026#34;Id\u0026#34;, \u0026#34;12345\u0026#34;); put(\u0026#34;Customer\u0026#34;, \u0026#34;Roger Moose\u0026#34;); put(\u0026#34;Quantity\u0026#34;, \u0026#34;3\u0026#34;); put(\u0026#34;Price\u0026#34;, \u0026#34;167.35\u0026#34;); } }; var objectMapper = new ObjectMapper(); String requestBody = objectMapper.writeValueAsString(values); return requestBody; } } Here we have created a JSON string in the prepareRequest() method for sending the request body in the HTTP POST method.\nNext, we are creating the request using the Request.Builder for setting the API URL and API keys in the HTTP request header.\nWe are then setting this in the OkHttpClient request while creating the request using the Request.Builder before making a synchronous call to the API by invoking the newCall() method on the OkHttpClient.\nOkHttp performs best when we create a single OkHttpClient instance and reuse it for all HTTP calls in the application. Popular HTTP clients like Retrofit and Picasso used in Android applications use OkHttp underneath.\nSpring WebClient Spring WebClient is an asynchronous, reactive HTTP client introduced in Spring 5 in the Spring WebFlux project to replace the older RestTemplate for making REST API calls in applications built with the Spring Boot framework. 
It supports synchronous, asynchronous, and streaming scenarios.\nDependency For using WebClient, we need to add a dependency on the Spring WebFlux starter module:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-webflux\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.3.5.RELEASE\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Here we have added a Maven dependency on spring-boot-starter-webflux in pom.xml. Spring WebFlux is part of Spring 5 and provides support for reactive programming in web applications.\nAsynchronous GET Request This is an example of an asynchronous GET request made with the WebClient:\npublic class WebClientApp { public void invoke() { WebClient client = WebClient.create(); client .get() .uri(URLConstants.URL) .header(URLConstants.API_KEY_NAME, URLConstants.API_KEY_VALUE) .retrieve() .bodyToMono(String.class) .subscribe(result-\u0026gt;System.out.println(result)); } } In this code fragment, we first create the client with default settings. Next, we call the get() method on the client for the HTTP GET request and uri and header methods for setting the API endpoint URL and access control header.\nThe retrieve() method called next in the chain is used to make the API call and get the response body which is converted to Mono with the bodyToMono() method. We finally subscribe in a non-blocking way on the Mono wrapper returned by the bodyToMono() method using the subscribe() method.\nSynchronous POST Request Although Spring WebClient is asynchronous, we can still make a synchronous call by calling the block() method which blocks the thread until the end of execution. 
We get the result after the method execution.\nLet us see an example of a synchronous POST request made with the WebClient:\npublic class WebClientApp { public void invokePost() { WebClient client = WebClient.create(); String result = client .post() .uri(\u0026#34;https://reqbin.com/echo/post/json\u0026#34;) .body(BodyInserters.fromValue(prepareRequest())) .exchange() .flatMap(response -\u0026gt; response.bodyToMono(String.class)) .block(); System.out.println(\u0026#34;result::\u0026#34; + result); } private String prepareRequest() { var values = new HashMap\u0026lt;String, String\u0026gt;() { { put(\u0026#34;Id\u0026#34;, \u0026#34;12345\u0026#34;); put(\u0026#34;Customer\u0026#34;, \u0026#34;Roger Moose\u0026#34;); put(\u0026#34;Quantity\u0026#34;, \u0026#34;3\u0026#34;); put(\u0026#34;Price\u0026#34;, \u0026#34;167.35\u0026#34;); } }; var objectMapper = new ObjectMapper(); String requestBody; try { requestBody = objectMapper.writeValueAsString(values); } catch (JsonProcessingException e) { e.printStackTrace(); return null; } return requestBody; } } Here we have created a JSON string in the prepareRequest() method and then sent this string as the request body in the HTTP POST method.\nWe have used the exchange() method to call the API here. The exchange() method provides more control in contrast to the retrieve() method used previously by providing access to the response from the HTTP client.\nPlease refer to an earlier post for a more elaborate explanation of using Spring WebClient.\nApache HttpClient vs. OkHttpClient vs. Spring WebClient - Which Client to Use? In this post, we looked at the commonly used HTTP clients in Java applications. We also explored the usage of each of those clients with the help of examples of making HTTP GET and POST requests. 
Here is a summary of the important points:\nIf we do not want to add any external libraries, Java\u0026rsquo;s native HttpClient is the first choice for Java 11+ applications.\nSpring WebClient is the preferred choice for Spring Boot applications, especially if we are using reactive APIs.\nApache HttpClient is a good fit when we want maximum customization and flexibility for configuring the HTTP client. It also has the most documentation available on various sites on the internet compared to the other libraries, due to its widespread use in the community.\nSquare\u0026rsquo;s OkHttpClient is recommended when we are open to using an external client library. It is feature-rich, highly configurable, and has APIs that are easier to use compared to the other libraries, as we saw in the examples earlier.\nYou can refer to all the source code used in the article on Github.\n","date":"November 9, 2021","image":"https://reflectoring.io/images/stock/0046-rack-1200x628-branded_hu38983fac43ab7b5246a0712a5f744c11_252723_650x0_resize_q90_box.jpg","permalink":"/comparison-of-java-http-clients/","title":"Comparison of Java HTTP Clients"},{"categories":["Software Craft"],"contents":"Let\u0026rsquo;s talk about what it means to find the one best way of doing something. We\u0026rsquo;ve been trying to answer this question since the dawn of someone teaching anyone how to do a task. Quite simply, it only makes sense to teach the best way you know how to do something if you\u0026rsquo;re going to teach it at all.\nWhat\u0026rsquo;s the \u0026ldquo;One Best Way\u0026rdquo;? When we take a step back to consider what goes into declaring the best way, we have to look at what this meant 100 years ago. Indeed, within the last century, this dilemma was being pondered by Frederick Taylor, and Frank and Lillian Gilbreth, among others. 
With mass production slowly gaining traction, finding a \u0026lsquo;one best way\u0026rsquo; wasn\u0026rsquo;t simply an academic thought experiment but a million-dollar question.\nTaylor felt that you should find the most efficient worker in the factory and train everybody else to do things the same way that that worker did. This has become known as \u0026ldquo;scientific management\u0026rdquo; or \u0026ldquo;Taylorism\u0026rdquo;.\nThe Gilbreths thought that maybe there was a little more to it and that maybe there wasn\u0026rsquo;t just a single one best way.\nFrank and especially Lillian Gilbreth were also doing some studies that sound modern to us about worker satisfaction, psychological fitness for jobs, feedback loops and fatigue reduction.\nThese discussions were happening in parallel to Taylor\u0026rsquo;s experiments in different silos within Ford Motor Company, essentially reinventing the wheel that scientific management was still building.\nThe emphasis of the conversation was very much on manufacturing physical goods as quickly and efficiently as possible. While today\u0026rsquo;s \u0026lsquo;One Best Way\u0026rsquo; discussion centres more on the production of digital goods and services, one salient point stands: if you\u0026rsquo;re going to teach someone to do a task, you\u0026rsquo;re going to set the standards for what\u0026rsquo;s good enough, and you\u0026rsquo;re going to push for the best way to be the cheapest way to make a buck. None of this is inherently wrong. The pursuit of maximizing our output levels is a natural byproduct of industry. It\u0026rsquo;s just how business works.\nThe one best way should be least expensive not only in terms of money but in terms of time, effort, fatigue and materials. In essence, you should be saving on all of that wherever it\u0026rsquo;s possible. So, the one best way is cheaper, and it\u0026rsquo;s teachable. 
We have to have a working cohort that shares enough of the base understanding to produce a predictable product.\nAdopting the One Best Way It\u0026rsquo;s really difficult to teach a very heterogeneous group of people to do something consistently and reliably. It\u0026rsquo;s one thing to say that \u0026lsquo;all of our developers follow this code style\u0026rsquo;, but if you tried to make people in marketing or sales complete all of their work using JIRA with tickets and Git commits, productivity would plummet. It’s vital to have some level of uniformity underlying the work that you\u0026rsquo;re doing, but not to the overarching detriment of your team.\nOne way that we get people to adopt this uniformity is to punish failure. This phenomenon is partially why parking fines have proven to be an overwhelmingly successful method of deterring long stays in loading bays. But a better way is to make failure harder than success.\nPoka-Yoke (\u0026ldquo;mistake-proofing\u0026rdquo;) - it\u0026rsquo;s not just a great sounding word, but a part of the Toyota method to prevent incorrect operation by the user.\nAs an example, to start your car, your foot has to be pressing down on the brake. It\u0026rsquo;s not a technical requirement to start the engine, but it\u0026rsquo;s a safety interlock. If your foot isn\u0026rsquo;t on the brake, then maybe the car shouldn\u0026rsquo;t start, maybe the dog has climbed across the dashboard, or maybe you\u0026rsquo;re not paying attention to what you\u0026rsquo;re doing. So, cars make it hard to do the wrong thing by saying “your foot must be on the brake, and then you can start the car”. When we\u0026rsquo;re thinking about how to do things the one best way, it\u0026rsquo;s really useful to think about not doing things the worst way and disincentivizing doing things the wrong way.\nContinuous Improvement in Agile Workflows Making failure hard in Agile workflows can be highly effective in building team-wide habits. 
Error-proofing is also incredibly important when it comes to building resilient continuous improvement (CI) practices. Once you embrace the principle at the heart of Poka-Yoke that mistakes are inevitable, but errors can be eliminated, then you can take an honest look at the way your processes are built to construct the bespoke \u0026lsquo;one best way\u0026rsquo; for any given process. Here\u0026rsquo;s a blueprint that you can leverage within your teams to strengthen agile workflows the Poka-Yoke way:\nCreate a Committee To fully capture the impact of current processes around error handling, your implementation team needs to be composed of cross-functional members. You\u0026rsquo;ll need engineering, QA, product and design as a minimum in order to accurately report on the \u0026lsquo;blast radius\u0026rsquo; at hand. It can be tempting to operate with space between our silos but space is dangerous; space is the act of not learning, and the world is not holding still for that.\nYou probably remember buying computers from companies that no longer make computers, not because we didn\u0026rsquo;t have the technological know-how, but because they stopped innovating. They built space between their processes, and over time, the reciprocation of feedback came too late in the game.\nProvide Proper Context We need to solicit honest feedback and create a blame-free environment. Encourage everyone to bring their learnings to every discussion and make the review process a safe space for everyone to discuss the performance at its worst.\nOne of the things that I find in parenting is the limitations that positive reinforcement can pose. Say for example, I want you to wash your hands, and I go check and see if the towel is wet after you\u0026rsquo;ve washed your hands and see if you\u0026rsquo;ve dried it off. Well, maybe you wash your hands, and maybe you just got the towel wet. 
I\u0026rsquo;m incentivizing the wrong thing because I\u0026rsquo;m not really testing what I care about. It\u0026rsquo;s really important to have measurements so that you can get this feedback, but it\u0026rsquo;s also really important to be aware that your observation of the system changes the system.\nSpecify Your Metrics The act of observing can flip a beam to a particle and make people shape their metrics into what you want to see. I think we\u0026rsquo;ve all experienced this at work where somebody\u0026rsquo;s like \u0026lsquo;well, I get graded on the number of articles I write, so I\u0026rsquo;m just going to write a bunch of low-quality articles\u0026rsquo;. I don\u0026rsquo;t really care if they even get published; you just measured me on how much I\u0026rsquo;m writing.\nSo when we\u0026rsquo;re thinking about how to make continuous improvement, it\u0026rsquo;s really important that we measure something that shows what we care about, and I really feel like the Accelerate book and the DORA report are a great place to look for things that you can measure that are useful proxies for what you want to be doing.\nImplementation Through Validation Change is something that we automatically resist because it\u0026rsquo;s expensive: we know how to do the old way, and we don\u0026rsquo;t know how to do the new one, and that\u0026rsquo;s pretty terrifying for us. So what do we do? First, we measure what matters. You know this problem; whatever you measure becomes the goal. Monitor for improvements in productivity, reduction in issues and overall adoption. Make it easy for your team to determine whether the change was effective.\nRemain Indefinitely Iterative Remember to ask yourself, \u0026lsquo;where\u0026rsquo;s the feedback loop?\u0026rsquo; in your proposed solution. If we operate with a waterfall method and information only flows one way, it can\u0026rsquo;t really come back to improve anything. 
The waterfall is about top-down, being taught, and the one best way. But if we improve the feedback loop, we get continuous improvement. We get a fountain that can power itself at least for a while, and the richer and more useful our feedback loop is, the faster we can improve, the more continuous our improvement is.\nKeep Changing The one best way isn\u0026rsquo;t any particular way, but rather it\u0026rsquo;s the act of learning and doing. Continual improvement is something that is really hard to do because, quite simply, change is hard. The only way to be right, to make continuous improvement, is to keep changing. Keep changing mindfully and in view of the feedback that you\u0026rsquo;re getting, but keep changing all the time.\nCatch my talk on the One Best Way from this year’s Spring One conference right here.\n","date":"November 7, 2021","image":"https://reflectoring.io/images/stock/0100-motor-1200x628-branded_hu27daf92d9dece49b58b30c88717afe92_170013_650x0_resize_q90_box.jpg","permalink":"/one-best-way/","title":"One Best Way - Continuous Improvement in Software Engineering"},{"categories":["Spring Boot"],"contents":"Feature flags are a great tool to improve confidence in deployments and to avoid impacting customers with unintended changes.\nInstead of deploying a new feature directly to production, we \u0026ldquo;hide\u0026rdquo; it behind an if/else statement in our code that evaluates a feature flag. Only if the feature flag is enabled will the user see the change in production.\nBy default, feature flags are disabled so that we can deploy with the confidence of knowing that nothing will change for the users until we flip the switch.\nSometimes, however, new features are a bit bigger and a single if/else statement is not the right tool to feature flag the change. 
Instead, we want to replace a whole method, object, or even a whole module with the flip of a feature flag.\nThis tutorial introduces several ways of feature flagging code in a Spring Boot app.\nIf you are interested in feature flags in general, I recently wrote about using different feature flagging tools and how to do zero-downtime database changes with feature flags.\n Example Code This article is accompanied by a working code example on GitHub. Simple if/else Let\u0026rsquo;s start with the simplest way of feature flagging a change: the if/else statement.\nSay we have a method Service.doSomething() that should return a different value depending on a feature flag. This is what it would look like:\n@Component class Service { private final FeatureFlagService featureFlagService; public Service(FeatureFlagService featureFlagService) { this.featureFlagService = featureFlagService; } public String doSomething() { if (featureFlagService.isNewServiceEnabled()) { return \u0026#34;new value\u0026#34;; } else { return \u0026#34;old value\u0026#34;; } } } We have a FeatureFlagService that we can ask if a certain feature flag is enabled. This service is backed by a feature flagging tool like LaunchDarkly or Togglz or it may be a homegrown implementation.\nIn our code, we simply ask the FeatureFlagService if a certain feature is enabled, and return a value depending on whether the feature is enabled or not.\nThat\u0026rsquo;s pretty straightforward and doesn\u0026rsquo;t even rely on any specific Spring Boot features. Many new changes are small enough to be introduced with a simple if/else block.\nSometimes, however, a change is bigger than that. 
We would have to add multiple if/else blocks across the codebase and that would unnecessarily pollute the code.\nIn this case, we might want to replace a whole method instead.\nReplacing a Method If we have a bigger feature or simply don\u0026rsquo;t want to sprinkle feature flags all over the code of a long method, we can replace a whole method with a new method.\nIf you want to play along, have a look at the code on GitHub.\nSay we have a class called OldService that implements two methods:\n@Component class OldService { public String doSomething() { return \u0026#34;old value\u0026#34;; } public int doSomethingElse() { return 2; } } We want to replace the doSomething() method with a new method that is only active behind a feature flag.\nIntroduce an Interface The first thing we do is to introduce an interface for the method(s) that we want to make feature flaggable:\ninterface Service { String doSomething(); } @Component class OldService implements Service { @Override public String doSomething() { return \u0026#34;old value\u0026#34;; } public int doSomethingElse() { return 2; } } Notice that the interface only declares the doSomething() method and not the other method, because we only want to make this one method flaggable.\nPut the New Feature Behind the Interface Then, we create a class called NewService that implements this interface as well:\n@Component class NewService implements Service { @Override public String doSomething() { return \u0026#34;new value\u0026#34;; } } This class defines the new behavior we want to see, i.e. 
the behavior that will be activated when we activate the feature flag.\nNow we have two classes OldService and NewService implementing the doSomething() method and we want to toggle between those two implementations with a feature flag.\nImplement a Feature Flag Proxy For this, we introduce a third class named FeatureFlaggedService that also implements our Service interface:\n@Component @Primary class FeatureFlaggedService implements Service { private final FeatureFlagService featureFlagService; private final NewService newService; private final OldService oldService; public FeatureFlaggedService( FeatureFlagService featureFlagService, NewService newService, OldService oldService) { this.featureFlagService = featureFlagService; this.newService = newService; this.oldService = oldService; } @Override public String doSomething() { if (featureFlagService.isNewServiceEnabled()) { return newService.doSomething(); } else { return oldService.doSomething(); } } } This class takes an instance of OldService and an instance of NewService and acts as a proxy for the doSomething() method.\nIf the feature flag is enabled, FeatureFlaggedService.doSomething() will call the NewService.doSomething(), otherwise it will stick to the old service\u0026rsquo;s implementation OldService.doSomething().\nReplacing a Method in Action To demonstrate how we would use this code in a Spring Boot project, have a look at the following integration test:\n@SpringBootTest public class ReplaceMethodTest { @MockBean private FeatureFlagService featureFlagService; @Autowired private Service service; @Autowired private OldService oldService; @BeforeEach void resetMocks() { Mockito.reset(featureFlagService); } @Test void oldServiceTest() { given(featureFlagService.isNewServiceEnabled()).willReturn(false); assertThat(service.doSomething()).isEqualTo(\u0026#34;old value\u0026#34;); assertThat(oldService.doSomethingElse()).isEqualTo(2); } @Test void newServiceTest() { 
given(featureFlagService.isNewServiceEnabled()).willReturn(true); assertThat(service.doSomething()).isEqualTo(\u0026#34;new value\u0026#34;); // doSomethingElse() is not behind a feature flag, so it  // should return the same value independent of the feature flag  assertThat(oldService.doSomethingElse()).isEqualTo(2); } } In this test, we mock the FeatureFlagService so that we can define the feature flag state to be either enabled or disabled.\nWe let Spring autowire a bean of type Service and a bean of type OldService.\nThe injected Service bean will be backed by the FeatureFlaggedService bean because we have marked it as @Primary above. That means Spring will pick the FeatureFlaggedService bean over the OldService and NewService beans, which are also implementations of Service and which are also available in the application context (because they are both annotated with @Component above).\nIn oldServiceTest(), we disable the feature flag and make sure that service.doSomething() returns the value calculated by the OldService bean.\nIn newServiceTest(), we enable the feature flag and assert that service.doSomething() now returns the value calculated by the NewService bean. 
We also check that oldService.doSomethingElse() still returns the old value, because this method is not backed by the feature flag and thus shouldn\u0026rsquo;t be affected by it.\nTo recap, we can introduce an interface for the method(s) that we want to put behind a feature flag and implement a \u0026ldquo;proxy\u0026rdquo; bean that switches between two (or more) implementations of that interface.\nSometimes, changes are even bigger and we would like to replace a whole bean instead of just a method or two, though.\nReplacing a Spring Bean If we want to replace a whole bean depending on a feature flag evaluation, we could use the method described above and create a proxy for all methods of the bean.\nHowever, that would require a lot of boilerplate code, especially if we\u0026rsquo;re using this pattern with multiple different services.\nWith the FactoryBean concept, Spring provides a more elegant mechanism to replace a whole bean.\nAgain, we have two beans, OldService and NewService implementing the Service interface:\nWe now want to completely replace the OldService bean with the NewService bean depending on the value of a feature flag. And we want to be able to do this in an ad-hoc fashion, without having to restart the application!\nIf you want to have a look at the code, it\u0026rsquo;s on GitHub.\nImplementing a FeatureFlagFactoryBean We\u0026rsquo;ll take advantage of Spring\u0026rsquo;s FactoryBean concept to replace one bean with another.\nA FactoryBean is a special bean in Spring\u0026rsquo;s application context. 
Instead of contributing itself to the application context, as normal beans annotated with @Component or @Bean do, it contributes a bean of type \u0026lt;T\u0026gt; to the application context.\nEach time a bean of type \u0026lt;T\u0026gt; is required by another bean in the application context, Spring will ask the FactoryBean for that bean.\nWe can leverage that to check for the feature flag value each time the FactoryBean is asked for a bean of type Service, and then return the NewService or OldService bean depending on the feature flag value.\nThe implementation of our FactoryBean looks like this:\npublic class FeatureFlagFactoryBean\u0026lt;T\u0026gt; implements FactoryBean\u0026lt;T\u0026gt; { private final Class\u0026lt;T\u0026gt; targetClass; private final Supplier\u0026lt;Boolean\u0026gt; featureFlagEvaluation; private final T beanWhenTrue; private final T beanWhenFalse; public FeatureFlagFactoryBean( Class\u0026lt;T\u0026gt; targetClass, Supplier\u0026lt;Boolean\u0026gt; featureFlagEvaluation, T beanWhenTrue, T beanWhenFalse) { this.targetClass = targetClass; this.featureFlagEvaluation = featureFlagEvaluation; this.beanWhenTrue = beanWhenTrue; this.beanWhenFalse = beanWhenFalse; } @Override public T getObject() { InvocationHandler invocationHandler = (proxy, method, args) -\u0026gt; { if (featureFlagEvaluation.get()) { return method.invoke(beanWhenTrue, args); } else { return method.invoke(beanWhenFalse, args); } }; Object proxy = Proxy.newProxyInstance( targetClass.getClassLoader(), new Class[]{targetClass}, invocationHandler); return (T) proxy; } @Override public Class\u0026lt;?\u0026gt; getObjectType() { return targetClass; } } Let\u0026rsquo;s look at what the code does:\n We implement the FactoryBean\u0026lt;T\u0026gt; interface, which requires us to implement the getObject() and getObjectType() methods. In the constructor, we pass a Supplier\u0026lt;Boolean\u0026gt; that evaluates if a feature flag is true or false. 
We must pass a callback like this instead of just passing the value of the feature flag because the feature flag value can change over time! In the constructor, we also pass two beans of type \u0026lt;T\u0026gt;: one to use when the feature flag is true (beanWhenTrue), another for when it\u0026rsquo;s false (beanWhenFalse). The interesting bit happens in the getObject() method: here we use Java\u0026rsquo;s built-in Proxy feature to create a proxy for the interface of type T. Every time a method on the proxy gets called, it decides based on the feature flag which of the beans to call the method on.  The TL;DR is that the FeatureFlagFactoryBean returns a proxy that forwards method calls to one of two beans, depending on a feature flag. This works for all methods declared on the generic interface of type \u0026lt;T\u0026gt;.\nAdding the Proxy to the Application Context Now we have to put our new FeatureFlagFactoryBean into action.\nInstead of adding our OldService and NewService beans to Spring\u0026rsquo;s application context, we will add a single factory bean like this:\n@Component class FeatureFlaggedService extends FeatureFlagFactoryBean\u0026lt;Service\u0026gt; { public FeatureFlaggedService(FeatureFlagService featureFlagService) { super( Service.class, featureFlagService::isNewServiceEnabled, new NewService(), new OldService()); } } We implement a bean called FeatureFlaggedService that extends our FeatureFlagFactoryBean from above. 
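The Proxy mechanics at the core of the FeatureFlagFactoryBean can be tried out in plain Java, without any Spring machinery. The following is a minimal, self-contained sketch (the class name FeatureFlagProxyDemo and the flagged() helper are ours, not part of the example above):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

public class FeatureFlagProxyDemo {

    interface Service {
        String doSomething();
    }

    // Returns a proxy for the Service interface that forwards each call
    // to one of two implementations, re-evaluating the flag on every call.
    static Service flagged(Supplier<Boolean> flag, Service whenTrue, Service whenFalse) {
        InvocationHandler handler = (proxy, method, args) ->
                method.invoke(flag.get() ? whenTrue : whenFalse, args);
        return (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(),
                new Class<?>[]{Service.class},
                handler);
    }

    public static void main(String[] args) {
        AtomicBoolean enabled = new AtomicBoolean(false);
        Service service = flagged(enabled::get,
                () -> "new value",   // stands in for NewService
                () -> "old value");  // stands in for OldService

        System.out.println(service.doSomething()); // prints "old value"
        enabled.set(true);                         // flip the flag at runtime
        System.out.println(service.doSomething()); // prints "new value"
    }
}
```

Because the Supplier is consulted on every invocation, flipping the flag changes the behavior of an already-injected proxy immediately, which is exactly why no application restart is needed.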
It\u0026rsquo;s typed with \u0026lt;Service\u0026gt;, so that the factory bean knows which interface to proxy.\nIn the constructor, we pass the feature flag evaluation function, a NewService instance for when the feature flag is true, and an OldService instance for when the feature flag is false.\nNote that the NewService and OldService classes are no longer annotated with @Component, so that our factory bean is the only place that adds them to Spring\u0026rsquo;s application context.\nReplacing a Spring Bean in Action To show how this works in action, let\u0026rsquo;s take a look at this integration test:\n@SpringBootTest public class ReplaceBeanTest { @MockBean private FeatureFlagService featureFlagService; @Autowired private Service service; @BeforeEach void resetMocks() { Mockito.reset(featureFlagService); } @Test void oldServiceTest() { given(featureFlagService.isNewServiceEnabled()).willReturn(false); assertThat(service.doSomething()).isEqualTo(\u0026#34;old value\u0026#34;); } @Test void newServiceTest() { given(featureFlagService.isNewServiceEnabled()).willReturn(true); assertThat(service.doSomething()).isEqualTo(\u0026#34;new value\u0026#34;); } } We let Spring inject a bean of type Service into the test. 
This bean will be backed by the proxy generated by our FeatureFlagFactoryBean.\nIn oldServiceTest() we disable the feature flag and assert that the doSomething() method returns the value provided by OldService.\nIn newServiceTest() we enable the feature flag and assert that the doSomething() method returns the value provided by NewService.\nMake Features Evident in Your Code This article has shown that you don\u0026rsquo;t need to sprinkle messy if/else statements all over your codebase to implement feature flags.\nInstead, make the features evident in your code by creating interfaces and implementing them in different versions.\nThis allows for simple code, easy switching between implementations, easier-to-understand code, quick cleanup of feature flags, and fewer headaches when deploying features into production.\nThe code from this article (and other articles on feature flags) is available on GitHub for browsing and forking.\n","date":"November 6, 2021","image":"https://reflectoring.io/images/stock/0112-decision-1200x628-branded_hu7f90dfae195e1917856533d5015220f4_81515_650x0_resize_q90_box.jpg","permalink":"/spring-boot-feature-flags/","title":"Feature Flags with Spring Boot"},{"categories":["Software Craft"],"contents":"In discussions around software development, it\u0026rsquo;s almost impossible to avoid quoting a law or two.\n\u0026ldquo;This won\u0026rsquo;t work because of \u0026lsquo;The Law of X\u0026rsquo;!\u0026rdquo; you might have heard people say. Or \u0026ldquo;Don\u0026rsquo;t you know \u0026lsquo;The Y Principle\u0026rsquo;? What kind of software developer are you?\u0026rdquo;.\nThere are many laws and principles to quote and most of them are based on truth. Applying them blindly using absolute statements like above is a sure path toward bruised egos and failure, however.\nThis article enumerates some of the most popular laws and principles that can be applied to software development. 
For each law, we will quickly discuss its main proposition and then explore how we can apply it to software development (and maybe when we shouldn\u0026rsquo;t).\nPareto Principle (80/20 Rule) What Does It Mean? The Pareto Principle states that more often than not 80% of the results come from 20% of the causes. The numbers 80 and 20 are not exact by any means, but the general idea of the principle is that the results are often not evenly distributed.\nWe can observe this rule in many areas of life, for example:\n the world\u0026rsquo;s richest 20% make 80% of the world\u0026rsquo;s income, 80% of crimes are committed by 20% of the criminals, and since 2020 we know that 80% of virus transmissions come from 20% of the infected population.  How Does It Help in Software Development? The main benefit we can take from the Pareto Principle is focus. It can help us to focus on the important things (the 20%) instead of wasting time and effort on the unimportant things (the other 80%). The unimportant things often seem important to us because there are so many (and they seem urgent). But the best results are often achieved by focusing on the important few.\nIn software development, we can use it to put our focus on building the right features, for example:\n focus on the 20% of product features that make up 80% of the product\u0026rsquo;s value, focus on the 20% of the bugs that cause 80% of user frustration, focus on the 80% of product features that take 20% of the total time to build, \u0026hellip;  Just asking \u0026ldquo;what is the most important thing to build right now?\u0026rdquo; can help to build the next most important thing instead of the next most urgent thing.\nModern development methodologies like Agile and DevOps help in gaining that focus, by the way! Quick iterations with regular user feedback allow for data-driven decisions on what is important. 
Practices like trunk-based development with feature-flagging (for example with LaunchDarkly) help software teams to get there.\nBroken Windows Theory What Does It Mean? A broken window invites vandalism so that it doesn\u0026rsquo;t take long until all windows are broken.\nIn general: chaos invites more chaos.\nIf our environment is pristine, we are motivated to keep it that way. The more chaos creeps into the environment, the lower our threshold becomes to add to the chaos. After all, there is already chaos \u0026hellip; who cares if we add a bit more to it?\nThe main benefit we can take from this rule is that we should be aware of the chaos around us. If it reaches a level where people get so used to it that they don\u0026rsquo;t care about it anymore, it might be best to bring some order into the chaos.\nHow Does It Help in Software Development? In software development, we can apply it to code quality: every code smell we let into our codebase reduces our threshold to add more code smells. We should Start Clean and keep the code base clean to keep this from happening. The reason that many codebases are so hard to understand and maintain is that a Broken Window has crept in and hasn\u0026rsquo;t been fixed quickly enough.\nWe can apply the principle to test coverage as well: as soon as a certain amount of code has crept into the codebase that is not covered with tests, more uncovered code will be added. This is an argument to maintain 100% code coverage (of the code that should be covered) so we can see the cracks before a window breaks.\nOccam\u0026rsquo;s Razor What Does It Mean? 
A philosophical razor is a principle that helps to explain certain things by eliminating (or \u0026ldquo;shaving off\u0026rdquo;) unlikely explanations.\nOccam\u0026rsquo;s Razor states that if there are multiple hypotheses, we should choose the hypothesis with the fewest assumptions (which will most likely be the hypothesis with the simplest explanation).\nHow Does It Help in Software Development? We can apply Occam\u0026rsquo;s Razor in incident analysis. You probably have been there: a user reported an issue with your app, but you have no clue what caused the issue. So you\u0026rsquo;re searching through logs and metrics, trying to find the root cause.\nThe next time a user reports an error, maintain an incident investigation document. Write down your hypotheses for what caused the issue. Then, for each hypothesis, list the facts and assumptions. If an assumption proves true, label it as a fact. If an assumption proves false, remove it from the document or label it as false. At any time, you can now focus your time on the most probable hypothesis, instead of wasting time chasing red herrings.\nDunning-Kruger Effect What Does It Mean? The Dunning-Kruger Effect states that inexperienced people tend to overestimate their abilities and experienced people tend to underestimate their abilities.\nIf you\u0026rsquo;re bad at something, you think you\u0026rsquo;re good at it. If you\u0026rsquo;re good at something, you think you\u0026rsquo;re bad at it - this can result in Impostor Syndrome which makes you doubt your own abilities so much that you\u0026rsquo;re uncomfortable among other people with similar skill - unnecessarily afraid to be exposed as a fraud.\nHow Does It Help in Software Development? Being aware of this cognitive bias is a good step in the right direction already. 
It will help you evaluate your own skills better so that you can either ask for help, or overcome your self-doubts and do it yourself.\nA practice that helps to dull the Dunning-Kruger Effect and Impostor Syndrome is pair or mob programming. Instead of working by yourself, basking in your self-doubts or thoughts of superiority, you work closely with other people, exchanging ideas, learning and teaching while you work.\nThis only works in a safe environment, though. In an environment where individualism is glorified, pair or mob programming can lead to increased self-doubts or increased delusions of superiority.\nPeter Principle What Does It Mean? The Peter Principle states that you are promoted as long as you are successful until you end up with a job in which you are incompetent. Since you are not successful anymore, you will not be promoted any more, meaning you will live with a job that doesn\u0026rsquo;t bring you satisfaction or success, often for the rest of your working life.\nA grim outlook.\nHow Does It Help in Software Development? In software development, the Peter Principle often applies when you switch roles from a developer career into a management career. Being a good developer doesn\u0026rsquo;t necessarily mean that you are a good manager, however. Or you might be a good manager, but just don\u0026rsquo;t derive the satisfaction from the manager job that you got from the developer job, meaning that you don\u0026rsquo;t put all your effort into it (this was the case for me). In any case, you\u0026rsquo;re miserable and don\u0026rsquo;t see any future growth in the career path ahead of you.\nIn this case, take a step back and decide what you want your career to look like. Then, switch roles (or companies, if need be) to get the role you want.\nParkinson\u0026rsquo;s Law What Does It Mean? Parkinson\u0026rsquo;s Law states that work will always fill the time that is allotted for it. 
If your project has a deadline in two weeks, the project will not be finished before then. It may take longer, yes, but never less than the time we allotted for it, because we\u0026rsquo;re filling the time with unnecessary work or procrastination.\nHow Does It Help in Software Development? The main drivers of Parkinson\u0026rsquo;s Law are:\n procrastination (\u0026ldquo;the deadline is so far away, so I don\u0026rsquo;t need to hustle right now\u0026hellip;\u0026rdquo;), and scope creep (\u0026ldquo;sure, we can add this little feature, it won\u0026rsquo;t cost us too much time\u0026hellip;\u0026rdquo;).  To fight procrastination, we can set deadlines in days instead of weeks or months. What needs to be done in the next 2-3 days to move towards the goal? A (healthy!) deadline can give us the right amount of motivation to not fall into a procrastination slump.\nTo keep scope creep at bay, we should have a very clear picture of what we\u0026rsquo;re trying to achieve with the project. What are the metrics for success? Does this new feature add to those metrics? If it does, we should add it, as long as everybody understands that the work will take longer. If the new feature doesn\u0026rsquo;t match the mission statement, leave it be.\nHofstadter\u0026rsquo;s Law What Does It Mean? Hofstadter\u0026rsquo;s law states that \u0026ldquo;It always takes longer than you expect, even when you take into account Hofstadter\u0026rsquo;s Law\u0026rdquo;.\nEven when you know about this law, and increase the allotment of time for a project, it will still take longer than you expect. This is closely related to Parkinson\u0026rsquo;s Law, which says that work will always fill the time allotted for it. Only that Hofstadter\u0026rsquo;s law says that it fills more than the time allotted.\nThis law is backed by psychology. 
We\u0026rsquo;re prone to the so-called \u0026ldquo;Planning Fallacy\u0026rdquo; that states that when estimating work we usually don\u0026rsquo;t take all available information into account, even if we think we did. Our estimates are almost always subjective and very seldom correct.\nHow Does It Help in Software Development? In software development (and in any other project-based work, really), our human optimism gets the best of us. Estimates are almost always too optimistic.\nTo reduce the effect of Hofstadter\u0026rsquo;s law, we can try to make an estimate as objective as possible.\nWrite down assumptions and facts about the project. Mark each item as an assumption or a fact to make the quality of the data visible and manage expectations.\nDon\u0026rsquo;t rely on gut feel, because it\u0026rsquo;s different for each person. Write down estimates to get your brain thinking about them. Compare them with estimates from other people and then discuss the differences.\nEven then, it\u0026rsquo;s still just an estimate that very likely does not reflect reality. If an estimate is not based on statistics or other historical data, it has a very low value, so it\u0026rsquo;s always good to manage expectations with whoever asked you for an estimate - it\u0026rsquo;s always going to be wrong. It\u0026rsquo;s just going to be less wrong if you make it as objective as possible.\nConway\u0026rsquo;s Law What Does It Mean? Conway\u0026rsquo;s Law states that any system created by an organization will resemble this organization\u0026rsquo;s team and communication structure. The system will have interfaces where the teams building the system have interfaces. If you have 10 teams working on a system, you\u0026rsquo;ll most likely get 10 subsystems that communicate with each other.\nHow Does It Help in Software Development? 
We can apply what is called the Inverse Conway Maneuver: create the organizational structure that best supports the architecture of the system we want to build.\nDon\u0026rsquo;t have a fixed team structure, but instead be flexible enough to create and disband teams as is best for the current state of the system.\nMurphy\u0026rsquo;s Law What Does It Mean? Murphy\u0026rsquo;s well-known law says that whatever can go wrong, will go wrong. It\u0026rsquo;s often cited after something unexpected happened.\nHow Does It Help in Software Development? Software development is a profession where a lot of things go wrong. The main source of things going wrong is bugs. There is no software that doesn\u0026rsquo;t have bugs or incidents that test the users' patience.\nWe can defend against Murphy\u0026rsquo;s Law by building habits into our daily software development practices that reduce the effect of bugs. We can\u0026rsquo;t avoid bugs altogether, but we can and should reduce their impact on the users.\nThe most helpful practice to fight Murphy’s Law is feature flagging. If we use a feature flagging platform like LaunchDarkly, we can deploy a change into production behind a feature flag. Then, we can use a targeted rollout to activate the flag for internal dogfooding before activating it for a small number of friendly beta users and finally releasing it to all users. This way, we can get feedback about the change from increasingly critical user groups. If a change goes wrong (and it will, at some point), the impact is minimal, because only a small user group will be affected by it. And, the flag can be quickly toggled off.\nBrooks\u0026rsquo;s Law What Does It Mean? 
In the classic book \u0026ldquo;The Mythical Man Month\u0026rdquo;, Fred Brooks famously states that adding manpower to a late project makes it later.\nEven though the book is talking about software projects, it applies to most kinds of projects, even outside of software development.\nThe reason that adding people doesn\u0026rsquo;t increase the velocity of a project is that projects have a communication overhead that grows quadratically with each person that is added to the project: n people have n(n-1)/2 possible communication paths. Where 2 people have 1 communication path, 5 people already have 10 possible communication paths. It takes time for new people to settle in and identify the communication paths they need, which is why a late project will be later when adding new people to the project.\nHow Does It Help in Software Development? Pretty simple. Change the deadline instead of adding people to an already late project.\nBe realistic about the expectations of adding new people to a software project. Adding people to a project probably increases the velocity at some point, but not always, and certainly not immediately. People and teams need time to settle into a working routine and at some point work just can\u0026rsquo;t be parallelized enough so adding more people doesn\u0026rsquo;t make sense. Think hard about what tasks a new person should do and what you expect when adding that person to a project.\nPostel\u0026rsquo;s Law What Does It Mean? Postel\u0026rsquo;s law is also called the robustness principle and it states that you should \u0026ldquo;be conservative in what you do and liberal in what you accept from others\u0026rdquo;.\nIn other words, you can accept data in many different forms to make your software as flexible as possible, but you should be very careful in working with that data, so as not to compromise your software due to invalid or hostile data.\nHow Does It Help in Software Development? 
This law originates from software development, so it\u0026rsquo;s very directly applicable.\nInterfaces between your software and other software or humans should allow different forms of input for robustness:\n for backwards compatibility, a new version of the interface should accept the data in the form of the old version as well as the new, for better user experience, a form in a UI should accept data in different formats so that the user doesn\u0026rsquo;t have to worry about the format.  However, if we are liberal in accepting data in different formats, we have to be conservative in processing this data. We have to vet it for invalid values and make sure that we don\u0026rsquo;t compromise the security of our system by allowing too many different formats. SQL injection is one possible attack which is enabled by being too liberal with user input.\nKerckhoffs\u0026rsquo;s Principle What Does It Mean? Kerckhoffs\u0026rsquo;s principle states that a crypto system should be secure, even if its method is public knowledge. Only the key you use to decrypt something should need to be private.\nHow Does It Help in Software Development? It\u0026rsquo;s simple, really. Never trust a crypto system that requires its method to be private. This is called \u0026ldquo;security by obscurity\u0026rdquo;. A system like that is inherently insecure. Once the method is exposed to the public, it\u0026rsquo;s vulnerable to attacks.\nInstead, rely on publicly vetted and trusted symmetric and asymmetric encryption systems, implemented in open-source packages that can be publicly reviewed. Everyone who wants to know how they work internally can just look at the code and validate if they\u0026rsquo;re secure.\nLinus\u0026rsquo;s Law What Does It Mean? In his book \u0026ldquo;The Cathedral \u0026amp; the Bazaar\u0026rdquo; about the development of the Linux Kernel, Eric Raymond wrote that \u0026ldquo;given enough eyeballs, all bugs are shallow\u0026rdquo;. 
He called this \u0026ldquo;Linus\u0026rsquo;s Law\u0026rdquo; in honor of Linus Torvalds.\nThe meaning is that bugs in code can be better exposed if many people look at the code than if few people look at the code.\nHow Does It Help in Software Development? If you want to get rid of bugs, have other people look at your code.\nA common practice that stems from the open source community is to have a developer raise a pull request with the code changes, and then have other developers review that pull request before it is merged into the main branch. This practice has found its way into closed-source development as well, but according to Linus\u0026rsquo;s law, pull requests are less helpful in a closed-source environment (where only a few people look at it) than in an open-source environment (where potentially a lot of contributors look at it).\nOther practices to add more eyeballs to code are pair programming and mob programming. At least in a closed-source environment, these are more effective in avoiding bugs than a pull request review, because everyone takes part in the inception of the code, which gives everyone a better context to understand the code and potential bugs.\nWirth\u0026rsquo;s Law What Does It Mean? Wirth\u0026rsquo;s law states that software is getting slower more rapidly than hardware is getting faster.\nHow Does It Help in Software Development? Don\u0026rsquo;t rely on the hardware being powerful enough to run badly-performing code. Instead, write code that is optimized to perform well.\nThis has to be balanced against the adage of Knuth\u0026rsquo;s optimization principle, which says that \u0026ldquo;premature optimization is the root of all evil\u0026rdquo;. Don\u0026rsquo;t spend more energy on making code run fast than you spend on building new features for your users.\nAs so often, this is a balancing act.\nKnuth\u0026rsquo;s Optimization Principle What Does It Mean? 
In one of his works, Donald Knuth wrote the sentence \u0026ldquo;premature optimization is the root of all evil\u0026rdquo;, which is often taken out of context and used as an excuse not to care about optimizing code at all.\nHow Does It Help in Software Development? According to Knuth\u0026rsquo;s law, we should not waste effort to optimize code prematurely. Yet, according to Wirth\u0026rsquo;s law, we also should not rely on hardware being fast enough to execute badly optimized code.\nIn the end, this is what I take away from these principles:\n optimize code where it can be done easily and without much effort: for example, write a couple of lines of extra code to avoid going through a loop of potentially a lot of items optimize code in code paths that are executed all the time other than that, don\u0026rsquo;t put a lot of effort in optimizing code, unless you\u0026rsquo;ve identified a performance bottleneck.  Stay Doubtful Laws and principles are good to have. They allow us to evaluate certain situations from a perspective that we might not have had without them.\nBlindly applying laws and principles to every situation won\u0026rsquo;t work, however. Every situation brings subtleties that may mean that a certain principle cannot or should not be applied.\nStay doubtful about the principles and laws you encounter. 
The world is not black and white.\n","date":"October 31, 2021","image":"https://reflectoring.io/images/stock/0111-hammer-1200x628-branded_hu6c46c3bc856ee9a8d73e468669803d36_111272_650x0_resize_q90_box.jpg","permalink":"/laws-and-principles-of-software-development/","title":"Laws and Principles of Software Development"},{"categories":["Java"],"contents":"In this series so far, we have learned how to use the Resilience4j Retry, RateLimiter, TimeLimiter, Bulkhead, and CircuitBreaker core modules and also seen the Spring Boot support for the Retry and the RateLimiter modules.\nIn this article, we\u0026rsquo;ll focus on the TimeLimiter and see how the Spring Boot support makes it simple and more convenient to implement time limiting in our applications.\n Example Code This article is accompanied by a working code example on GitHub. High-level Overview If you haven\u0026rsquo;t read the previous article on the TimeLimiter, check out the \u0026ldquo;What is Time Limiting?\u0026quot;, \u0026ldquo;When to Use TimeLimiter?\u0026quot;, and \u0026ldquo;Resilience4j TimeLimiter Concepts\u0026rdquo; sections for a quick intro.\nYou can find out how to set up Maven or Gradle for your project here.\nUsing the Spring Boot Resilience4j TimeLimiter Module We will use the same example as the previous articles in this series. Assume that we are building a website for an airline to allow its customers to search for and book flights. Our service talks to a remote service encapsulated by the class FlightSearchService.\nLet\u0026rsquo;s see how to use the various features available in the TimeLimiter module. This mainly involves configuring the TimeLimiter instance in the application.yml file and adding the @TimeLimiter annotation on the Spring @Service component that invokes the remote operation.\nBasic Example Let\u0026rsquo;s say we want to set a time limit of 2s for the flight search call. 
In other words, if the call doesn\u0026rsquo;t complete within 2s, we want to be notified through an error.\nFirst, we will configure the TimeLimiter instance in the application.yml file:\nresilience4j: timelimiter: instances: basicExample: timeoutDuration: 2s Next, let\u0026rsquo;s add the @TimeLimiter annotation on the method in the bean that calls the remote service:\n@TimeLimiter(name = \u0026#34;basicExample\u0026#34;) CompletableFuture\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; basicExample(SearchRequest request) { return CompletableFuture.supplyAsync(() -\u0026gt; remoteSearchService.searchFlights(request)); } Here, we can see that the remote operation is being invoked asynchronously, with the basicExample() method returning a CompletableFuture to its caller.\nFinally, let\u0026rsquo;s call the time-limited basicExample() method from a different bean:\nSearchRequest request = new SearchRequest(\u0026#34;NYC\u0026#34;, \u0026#34;LAX\u0026#34;, \u0026#34;10/30/2021\u0026#34;); System.out.println(\u0026#34;Calling search; current thread = \u0026#34; + Thread.currentThread().getName()); CompletableFuture\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; results = service.basicExample(request); results.whenComplete((result, ex) -\u0026gt; { if (ex != null) { System.out.println(\u0026#34;Exception \u0026#34; + ex.getMessage() + \u0026#34; on thread \u0026#34; + Thread.currentThread().getName() + \u0026#34; at \u0026#34; + LocalDateTime.now().format(formatter)); } if (result != null) { System.out.println(result + \u0026#34; on thread \u0026#34; + Thread.currentThread().getName()); } }); Here\u0026rsquo;s sample output for a successful flight search that took less than the 2s timeoutDuration we specified:\nCalling search; current thread = main Searching for flights; current time = 13:13:55 705; current thread = ForkJoinPool.commonPool-worker-3 Flight search successful at 13:13:56 716 [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;10/30/2021\u0026#39;, 
from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] on thread ForkJoinPool.commonPool-worker-3 The output shows that the search was called from the main thread, and executed on a different thread.\nAnd this is sample output for a flight search that timed out:\nCalling search; current thread = main Searching for flights; current time = 13:16:03 710; current thread = ForkJoinPool.commonPool-worker-3 Exception java.util.concurrent.TimeoutException: TimeLimiter \u0026#39;timeoutExample\u0026#39; recorded a timeout exception. on thread pool-2-thread-1 at 13:16:04 215 java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException: TimeLimiter \u0026#39;timeoutExample\u0026#39; recorded a timeout exception. at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ... other lines omitted ... Flight search successful at 13:16:04 719 The timestamps and thread names above show that the caller got a TimeoutException even as the asynchronous operation finished later on a different thread.\nSpecifying a Fallback Method Sometimes we may want to take a default action when a request times out. For example, if we are not able to fetch a value from a remote service in time, we may want to return a default value or some data from a local cache.\nWe can do this by specifying a fallbackMethod in the @TimeLimiter annotation:\n@TimeLimiter(name = \u0026#34;fallbackExample\u0026#34;, fallbackMethod = \u0026#34;localCacheFlightSearch\u0026#34;) CompletableFuture\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; fallbackExample(SearchRequest request) { return CompletableFuture.supplyAsync(() -\u0026gt; remoteSearchService.searchFlights(request)); } The fallback method should be defined in the same bean as the time-limited method. 
It should have the same method signature as the original method with one additional parameter - the Exception that caused the original one to fail:\nprivate CompletableFuture\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; localCacheFlightSearch(SearchRequest request, TimeoutException rnp) { // fetch results from the cache  return results; } Here\u0026rsquo;s sample output showing the results being fetched from a cache:\nCalling search; current thread = main Searching for flights; current time = 08:58:25 461; current thread = ForkJoinPool.commonPool-worker-3 TimeLimiter \u0026#39;fallbackExample\u0026#39; recorded a timeout exception. Returning search results from cache [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;10/30/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] on thread pool-2-thread-2 Flight search successful at 08:58:26 464 TimeLimiter Events The TimeLimiter has an EventPublisher which generates events of the types TimeLimiterOnSuccessEvent, TimeLimiterOnErrorEvent, and TimeLimiterOnTimeoutEvent. We can listen to these events and log them, for example.\nHowever, since we don\u0026rsquo;t have a reference to the TimeLimiter instance when working with Spring Boot Resilience4j, this requires a little more work. 
The idea is still the same, but how we get a reference to the TimeLimiterRegistry and then the TimeLimiter instance itself is a bit different.\nFirst, we @Autowire a TimeLimiterRegistry into the bean that invokes the remote operation:\n@Service public class TimeLimitingService { @Autowired private FlightSearchService remoteSearchService; @Autowired private TimeLimiterRegistry timeLimiterRegistry; // other lines omitted } Then we add a @PostConstruct method which sets up the onSuccess, onError, and onTimeout event handlers:\n@PostConstruct void postConstruct() { EventPublisher eventPublisher = timeLimiterRegistry.timeLimiter(\u0026#34;eventsExample\u0026#34;).getEventPublisher(); eventPublisher.onSuccess(System.out::println); eventPublisher.onError(System.out::println); eventPublisher.onTimeout(System.out::println); } Here, we fetched the TimeLimiter instance by name from the TimeLimiterRegistry and then got the EventPublisher from the TimeLimiter instance.\nInstead of the @PostConstruct method, we could have also done the same in the constructor of TimeLimitingService.\nNow, the sample output shows details of the events:\nSearching for flights; current time = 13:27:22 979; current thread = ForkJoinPool.commonPool-worker-9 Flight search successful 2021-10-03T13:27:22.987258: TimeLimiter \u0026#39;eventsExample\u0026#39; recorded a successful call. Search 3 successful, found 2 flights Searching for flights; current time = 13:27:23 279; current thread = ForkJoinPool.commonPool-worker-7 Flight search successful 2021-10-03T13:27:23.280146: TimeLimiter \u0026#39;eventsExample\u0026#39; recorded a successful call. ... other lines omitted ... 2021-10-03T13:27:24.290485: TimeLimiter \u0026#39;eventsExample\u0026#39; recorded a timeout exception. ... other lines omitted ... 
Searching for flights; current time = 13:27:24 334; current thread = ForkJoinPool.commonPool-worker-3 Flight search successful TimeLimiter Metrics Spring Boot Resilience4j makes the details about the last one hundred time limit events available through Actuator endpoints:\n /actuator/timelimiters /actuator/timelimiterevents /actuator/metrics/resilience4j.timelimiter.calls  Let\u0026rsquo;s look at the data returned by doing a curl to these endpoints.\n/timelimiters Endpoint This endpoint lists the names of all the time-limiter instances available:\n$ curl http://localhost:8080/actuator/timelimiters { \u0026#34;timeLimiters\u0026#34;: [ \u0026#34;basicExample\u0026#34;, \u0026#34;eventsExample\u0026#34;, \u0026#34;timeoutExample\u0026#34; ] } timelimiterevents Endpoint This endpoint provides details about the last 100 time limit events in the application:\n$ curl http://localhost:8080/actuator/timelimiterevents { \u0026#34;timeLimiterEvents\u0026#34;: [ { \u0026#34;timeLimiterName\u0026#34;: \u0026#34;eventsExample\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;SUCCESS\u0026#34;, \u0026#34;creationTime\u0026#34;: \u0026#34;2021-10-07T08:19:45.958112\u0026#34; }, { \u0026#34;timeLimiterName\u0026#34;: \u0026#34;eventsExample\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;SUCCESS\u0026#34;, \u0026#34;creationTime\u0026#34;: \u0026#34;2021-10-07T08:19:46.079618\u0026#34; }, ... other lines omitted ... 
{ \u0026#34;timeLimiterName\u0026#34;: \u0026#34;eventsExample\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;TIMEOUT\u0026#34;, \u0026#34;creationTime\u0026#34;: \u0026#34;2021-10-07T08:19:47.908422\u0026#34; }, { \u0026#34;timeLimiterName\u0026#34;: \u0026#34;eventsExample\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;TIMEOUT\u0026#34;, \u0026#34;creationTime\u0026#34;: \u0026#34;2021-10-07T08:19:47.909806\u0026#34; } ] } Under the timelimiterevents endpoint, there are two more endpoints available: /actuator/timelimiterevents/{timelimiterName} and /actuator/timelimiterevents/{timeLimiterName}/{type}. These provide similar data as the above one, but we can filter further by the timeLimiterName and type (SUCCESS/TIMEOUT).\ncalls Endpoint This endpoint exposes the resilience4j.timelimiter.calls metric:\n$ curl http://localhost:8080/actuator/metrics/resilience4j.timelimiter.calls { \u0026#34;name\u0026#34;: \u0026#34;resilience4j.timelimiter.calls\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;The number of successful calls\u0026#34;, \u0026#34;baseUnit\u0026#34;: null, \u0026#34;measurements\u0026#34;: [ { \u0026#34;statistic\u0026#34;: \u0026#34;COUNT\u0026#34;, \u0026#34;value\u0026#34;: 12 } ], \u0026#34;availableTags\u0026#34;: [ { \u0026#34;tag\u0026#34;: \u0026#34;kind\u0026#34;, \u0026#34;values\u0026#34;: [ \u0026#34;timeout\u0026#34;, \u0026#34;successful\u0026#34;, \u0026#34;failed\u0026#34; ] }, { \u0026#34;tag\u0026#34;: \u0026#34;name\u0026#34;, \u0026#34;values\u0026#34;: [ \u0026#34;eventsExample\u0026#34;, \u0026#34;basicExample\u0026#34;, \u0026#34;timeoutExample\u0026#34; ] } ] } Conclusion In this article, we learned how we can use Resilience4j\u0026rsquo;s TimeLimiter module to set a time limit on asynchronous, non-blocking operations. 
We learned when to use it and how to configure it with some practical examples.\nYou can play around with a complete application illustrating these ideas using the code on GitHub.\n","date":"October 24, 2021","image":"https://reflectoring.io/images/stock/0079-stopwatch-1200x628-branded_hud3c835126b8b54498dc5975d82508778_152822_650x0_resize_q90_box.jpg","permalink":"/time-limiting-with-springboot-resilience4j/","title":"Timeouts with Spring Boot and Resilience4j"},{"categories":["Java"],"contents":"In tests, we need to add assertions to make sure that a result is the expected result. For this, we can make use of the AssertJ assertion library.\nTo assert that an object equals the expected object, we can simply write assertThat(actualObject).isEqualTo(expectedObject).\nWhen we\u0026rsquo;re working with lists, however, things quickly get complicated. How can we extract certain elements out of a list to assert them?\nThis article shows how to work with lists in AssertJ.\nLet’s start with setting it up.\n Example Code This article is accompanied by a working code example on GitHub. 
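Before diving into AssertJ's API, it may help to see in plain Java why list assertions are harder than single-object assertions. The sketch below is purely illustrative and not part of AssertJ (the class and helper method are hypothetical names): whole-object equality is a single equals() call, while an element-level check needs a predicate over the elements, which is exactly the gap AssertJ's fluent list methods fill.

```java
import java.util.List;

// Illustrative sketch only: shows the predicate-based check that
// AssertJ wraps in assertThat(list).filteredOn(...).isNotEmpty().
public class ListAssertionSketch {

    // Hypothetical helper: does any element match the expected name?
    static boolean hasPersonNamed(List<String> names, String expected) {
        return names.stream().anyMatch(expected::equals);
    }

    public static void main(String[] args) {
        List<String> names = List.of("Tony", "Carol");

        // Whole-list equality: order and content must match exactly.
        System.out.println(names.equals(List.of("Tony", "Carol"))); // true

        // Element-level assertion: needs filtering over the elements.
        System.out.println(hasPersonNamed(names, "Tony")); // true
    }
}
```

With AssertJ on the classpath, the same intent reads fluently as a chained assertion instead of hand-rolled stream code, which is what the following sections demonstrate.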
Setting up AssertJ Maven Setup If you are using Maven and not using Spring or Spring Boot dependencies, you can just import the assertj-core dependency into your project:\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.assertj\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;assertj-core\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;3.20.2\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; If you are using Spring Boot, you can import spring-boot-starter-test as a dependency and start writing your unit test:\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-test\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.5.4\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; Gradle Setup If you like Gradle more, or your project just uses Gradle as a build tool, you can import assertj-core like this:\ndependencies { testImplementation \u0026#39;org.assertj:assertj-core:3.11.1\u0026#39; } Or, if you are working with Spring:\ndependencies { testImplementation \u0026#39;org.springframework.boot:spring-boot-starter-test\u0026#39; } Example Use Case For this article, we will build a backend for a simple gym buddy app. We will pick a set of workouts that we want to do, add several sets and the number of reps on each set. Also, we will add friends as our gym buddies and see their workout sessions. You can see the example code on GitHub.\nFiltering Lists The main issue with asserting lists is to get the correct elements of the list to assert against. 
AssertJ provides some filtering options that we\u0026rsquo;re going to explore.\nFiltering with Basic Conditions Let\u0026rsquo;s say we want to fetch all persons currently in the application and assert that there is a person named “Tony”:\n@Test void checkIfTonyIsInList_basicFiltering(){ assertThat(personService.getAll()) .filteredOn(person -\u0026gt; person.getName().equals(\u0026#34;Tony\u0026#34;)).isNotEmpty(); } To do this, we used filteredOn() with a predicate. Predicates use lambda expression syntax and are easy to write ad-hoc.\nFiltering with Multiple Basic Conditions Let\u0026rsquo;s combine multiple conditions.\nFrom the list of all persons, we want to make sure that there is only one person who\n has the letter \u0026ldquo;o\u0026rdquo; in their name, and has more than one friend:  @Test void filterOnNameContainsOAndNumberOfFriends_complexFiltering(){ assertThat(personService.getAll()) .filteredOn(person -\u0026gt; person.getName().contains(\u0026#34;o\u0026#34;) \u0026amp;\u0026amp; person.getFriends().size() \u0026gt; 1) .hasSize(1); } The implementation is pretty straightforward, but you can see that, with more complex conditions, our filtering statement will grow ever bigger. 
This could cause issues like lack of readability with more than two conditions.\nFiltering on Nested Properties How can we assert on something that is a property of a property of an object that we have in the list?\nNow, we want to assert that there are four persons in the application that had their workout done today:\n@Test void filterOnAllSessionsThatAreFromToday_nestedFiltering() { assertThat(personService.getAll()) .map(person -\u0026gt; person.getSessions() .stream() .filter(session -\u0026gt; session.getStart().isAfter(LocalDateTime.now().minusHours(1))) .count()) .filteredOn(sessions -\u0026gt; sessions \u0026gt; 0) .hasSize(4); } The entities were modeled so that the session contains the time, and we are provided with a list of persons where each of them contains a list of sessions.\nTo answer this, we count all sessions that were done today and group them by their owners. Then, we could use predicate filtering to assert that four persons have at least one workout session done today. We will look at how to make this more readable using other AssertJ features.\nField Filtering AssertJ provides us a more elegant way to assert on the list. We call this field filtering. In the next examples, we will see how we can use field filtering and what the upsides and downsides of using it are.\nField Filtering with Basic Condition Previously, we wanted to assert that there is a person in our application that is named “Tony”. This example will show us how we can do this using field filtering:\n@Test void checkIfTonyIsInList_basicFieldFiltering(){ assertThat(personService.getAll()) .filteredOn(\u0026#34;name\u0026#34;, \u0026#34;Tony\u0026#34;) .isNotEmpty(); } Again, we are using filteredOn(). But this time there is no predicate. We are providing just the name of the property as a method argument. The name of the property is hard-coded as a string and this can cause problems in the future. 
If someone changes the name of the property to something else, and forgets to change the test as well, the test will fail with: java.lang.IllegalArgumentException: Cannot locate field “attribute_name” on class “class_name”.\nField Filtering with Complex Conditions Now, we want to assert that only Tony or Carol have more than one gym buddy:\n@Test void filterOnNameContainsOAndNumberOfFriends_complexFieldFiltering() { assertThat(personService.getAll()) .filteredOn(\u0026#34;name\u0026#34;, in(\u0026#34;Tony\u0026#34;,\u0026#34;Carol\u0026#34;)) .filteredOn(person -\u0026gt; person.getFriends().size() \u0026gt; 1) .hasSize(1); } For the first filter, we use field filtering as in the previous example. Here we can see the usage of in() to check if our property value is part of the provided list.\nAside from in(), we can use:\n notIn(): to check if an item is not in a list not(): to check if an item does not equal the provided value.  One more thing that we notice is that we cannot do any complex filtering using field filters. That is why the second part of our chained filters is filtering using predicates.\nHandling Null Values Now, one more thing that we need to go over is the behavior of these two types of filtering when it comes to null values in some properties.\nPredicate Filtering with Null Values We want to assert that there is no workout session for Tony inside our application. 
Since we want to check behavior with null values, we will set the person property to null for Tony’s sessions.\nFirst, let us go with predicate filtering:\n@Test void checkIfTonyIsInList_NullValue_basicFiltering(){ List\u0026lt;Session\u0026gt; sessions = sessionService.getAll().stream().map( session -\u0026gt; { if(session.getPerson().getName().equals(\u0026#34;Tony\u0026#34;)){ return new Session.SessionBuilder() .id(session.getId()) .start(session.getStart()) .end(session.getEnd()) .workouts(session.getWorkouts()) .person(null) .build(); } return session; }) .collect(Collectors.toList()); assertThat(sessions) .filteredOn(session -\u0026gt; session.getPerson().getName().equals(\u0026#34;Tony\u0026#34;)).isEmpty(); // \u0026lt;-- NullPointer! } The first thing that we do is to replace all of Tony’s sessions with a new session where the person property is set to null. After that, we use standard predicate filtering, as explained above. Running this piece of code will throw a NullPointerException since we call getName() on a null object.\nField Filtering with Null Values Here, we want to do the same thing as above. We want to assert that there is no workout session for Tony in our application:\n@Test void checkIfTonyIsInList_NullValue_basicFieldFiltering(){ List\u0026lt;Session\u0026gt; sessions = sessionService.getAll().stream().map( session -\u0026gt; { if(session.getPerson().getName().equals(\u0026#34;Tony\u0026#34;)){ return new Session.SessionBuilder() .id(session.getId()) .start(session.getStart()) .end(session.getEnd()) .workouts(session.getWorkouts()) .person(null) .build(); } return session; }) .collect(Collectors.toList()); assertThat(sessions).filteredOn(\u0026#34;person.name\u0026#34;,\u0026#34;Tony\u0026#34;).isEmpty(); // \u0026lt;-- no NullPointer! } After setting person properties to null for all Tony’s sessions, we do field filtering on person.name. In this example, we will not face a NullPointerException. 
Field filtering is null-safe: Tony\u0026rsquo;s sessions no longer match, so the filtered list is empty and the isEmpty() assertion passes.\nUsing Custom Conditions The next feature that we want to go through is creating custom conditions. We will keep a separate package for custom conditions, so that we have them all in one place. Each condition should have a meaningful name, so it is easier to follow. We can use custom conditions for basic checks, but that would be a bit of an overkill. In those cases we can always use a predicate or field filtering.\nCreating Ad-Hoc Conditions Again, we will use the same example as before. We assert that there is only one person who has the letter \u0026ldquo;o\u0026rdquo; inside their name and more than one friend. We already showed this example using a predicate and something similar using field filtering. Let us go through it once again:\n@Test void filterOnNameContainsOAndNumberOfFriends_customConditionFiltering(){ Condition\u0026lt;Person\u0026gt; nameAndFriendsCondition = new Condition\u0026lt;\u0026gt;(){ @Override public boolean matches(Person person){ return person.getName().contains(\u0026#34;o\u0026#34;) \u0026amp;\u0026amp; person.getFriends().size() \u0026gt; 1; } }; assertThat(personService.getAll()) .filteredOn(nameAndFriendsCondition) .hasSize(1); } Here we created the custom condition nameAndFriendsCondition. The filtering code is the same as with predicate filtering. We created the condition inside our test method using an anonymous class. This approach works well when you know you will have just a couple of custom conditions and you will not need to share them with other tests.\nCreating a Condition in a Separate Class This example is similar to predicate filtering on nested properties. We are trying to assert that there are four persons in our application who had their workout session today. 
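Before writing the real condition, it helps to see what a Condition boils down to. The sketch below uses a simplified stand-in for AssertJ's Condition type and hypothetical data; filteredOn(condition) is then just a stream filter that checks each element in turn:

```java
import java.util.List;

public class ConditionSketch {

    // Simplified stand-in for AssertJ's Condition<T>: one matches() method.
    interface Condition<T> {
        boolean matches(T value);
    }

    record Person(String name, List<String> friends) {}

    // A reusable condition living in "its own class", as the article suggests.
    static class HasManyFriendsCondition implements Condition<Person> {
        @Override
        public boolean matches(Person person) {
            return person.friends().size() > 1;
        }
    }

    // filteredOn(condition): the condition is checked against each element, one by one.
    static <T> long countMatching(List<T> items, Condition<T> condition) {
        return items.stream().filter(condition::matches).count();
    }

    static long demo() {
        List<Person> people = List.of(
                new Person("Tony", List.of("Bruce", "Carol")),
                new Person("Carol", List.of("Natalia")));
        return countMatching(people, new HasManyFriendsCondition());
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 1: only Tony has more than one friend
    }
}
```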
Let us first check how we create this condition:\npublic class SessionStartedTodayCondition extends Condition\u0026lt;Person\u0026gt; { @Override public boolean matches(Person person){ return person.getSessions().stream() .anyMatch(session -\u0026gt; session.getStart().isAfter(LocalDateTime.now().minusHours(1))); } } An important note is that this condition is created as its own class in a separate package, so we can share it between different tests.\nThe only thing that we needed to do is to extend the Condition class and override its matches() method. Inside that method, we write the filtering logic that returns a boolean value depending on our condition.\nOur next example shows the usage of the created condition:\n@Test void filterOnAllSessionsThatAreFromToday_customConditionFiltering() { Condition\u0026lt;Person\u0026gt; sessionStartedToday = new SessionStartedTodayCondition(); assertThat(personService.getAll()) .filteredOn(sessionStartedToday) .hasSize(4); } We first need to create an instance of our condition. Then, we call filteredOn() with the given condition as the parameter. Note that the condition is evaluated against each element of the list, one by one.\nExtracting Fields Assume we want to check whether all the desired values of an object’s property are in our list. We can use field filtering, as explained in previous examples, but there is one other way to do it.\nChecking a Single Property Using Field Extracting We want to check that Tony, Bruce, Carol, and Natalia are in our list of persons and that neither Peter nor Steve is on the same list. 
Our next examples will show how to use field extracting with single values:\n@Test void checkByName_UsingExtracting(){ assertThat(personService.getAll()) .extracting(\u0026#34;name\u0026#34;) .contains(\u0026#34;Tony\u0026#34;,\u0026#34;Bruce\u0026#34;,\u0026#34;Carol\u0026#34;,\u0026#34;Natalia\u0026#34;) .doesNotContain(\u0026#34;Peter\u0026#34;,\u0026#34;Steve\u0026#34;); } We are calling extracting() with the name of the property as a parameter. On that, we call contains() method to check if the list of extracted names contains provided values. After that, we call doesNotContain() to assert that there are no Peter or Steve in our list of names.\nWith field extracting, we face the downside of hard-coded values for property names.\nChecking Multiple Properties Using Field Extracting Now, we know that there are Tony, Bruce, Carol and Natalia on our list of persons. But, are they the ones that we really need? Can we specify a bit more who they are?\nLet us agree that name and last name are enough to distinguish two persons in our application. We want to find out if our application contains Tony Stark, Carol Danvers, Bruce Banner, and Natalia Romanova. Also, we want to make sure that Peter Parker and Steve Rogers are not among people in this list:\n@Test void checkByNameAndLastname_UsingExtracting(){ assertThat(personService.getAll()) .extracting(\u0026#34;name\u0026#34;,\u0026#34;lastname\u0026#34;) .contains(tuple(\u0026#34;Tony\u0026#34;,\u0026#34;Stark\u0026#34;), tuple(\u0026#34;Carol\u0026#34;, \u0026#34;Danvers\u0026#34;), tuple(\u0026#34;Bruce\u0026#34;, \u0026#34;Banner\u0026#34;),tuple(\u0026#34;Natalia\u0026#34;,\u0026#34;Romanova\u0026#34;)) .doesNotContain(tuple(\u0026#34;Peter\u0026#34;, \u0026#34;Parker\u0026#34;), tuple(\u0026#34;Steve\u0026#34;,\u0026#34;Rogers\u0026#34;)); } We implemented it, again, using extracting(), but this time we wanted to extract two properties at the same time. 
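Extracting several properties at once essentially maps each element to a tuple of values; the membership checks then run against those tuples. Here is a plain-Java sketch of that idea (hypothetical Person record; pairs are modeled as lists since the JDK has no tuple type):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class TupleExtractingSketch {

    record Person(String name, String lastname) {}

    static final List<Person> PEOPLE = List.of(
            new Person("Tony", "Stark"),
            new Person("Carol", "Danvers"),
            new Person("Bruce", "Banner"),
            new Person("Natalia", "Romanova"));

    // extracting("name", "lastname") maps every element to a (name, lastname) pair;
    // contains()/doesNotContain() are then simple membership checks on those pairs.
    static Set<List<String>> nameAndLastnameTuples() {
        return PEOPLE.stream()
                .map(p -> List.of(p.name(), p.lastname()))
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Set<List<String>> tuples = nameAndLastnameTuples();
        System.out.println(tuples.contains(List.of("Tony", "Stark")));   // true
        System.out.println(tuples.contains(List.of("Peter", "Parker"))); // false
    }
}
```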
In contains() and doesNotContain() we are using tuple() to represent a tuple of name and last name.\nExtracting Null Values We want to check if Bruce, Carol, and Natalia are part of our list, but first, we need to exclude Tony and let all of his sessions have a null value as person property:\n@Test void checkByNestedAtrribute_PersonIsNUll_UsingExtracting(){ List\u0026lt;Session\u0026gt; sessions = sessionService.getAll().stream().map( session -\u0026gt; { if(session.getPerson().getName().equals(\u0026#34;Tony\u0026#34;)){ return new Session.SessionBuilder() .id(session.getId()) .start(session.getStart()) .end(session.getEnd()) .workouts(session.getWorkouts()) .person(null) .build(); } return session; } ).collect(Collectors.toList()); assertThat(sessions) .filteredOn(session -\u0026gt; session.getStart().isAfter(LocalDateTime.now().minusHours(1))) .extracting(\u0026#34;person.name\u0026#34;) .contains(\u0026#34;Bruce\u0026#34;,\u0026#34;Carol\u0026#34;,\u0026#34;Natalia\u0026#34;); } Extracting properties on null values behaves the same as in field filtering. All properties that we try to extract from null object are considered null. No NullPointerException is thrown in this case.\nFlatmap and Method Call Extracting We saw in this example that finding persons who had their workout session done today was pretty complex. Let’s find out a better way of asserting the list inside the list.\nFlatmap Extracting on Basic Properties Explaining flatmap is best done on actual example. In our use case, we want to assert that Tony, Carol, Bruce, and Natalia have at least one workout session that started today. 
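Under the hood this is a flatMap over nested lists. As a conceptual primer, here is a sketch with a hypothetical simplified model in which each session directly carries its owner's name, sidestepping the bidirectional person-session mapping of the article:

```java
import java.time.LocalDateTime;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class FlatExtractingSketch {

    // Hypothetical simplified model.
    record Session(String ownerName, LocalDateTime start) {}
    record Person(String name, List<Session> sessions) {}

    // flatExtracting("sessions") is a flatMap from persons to sessions; a later
    // extracting("person.name") maps the surviving sessions back to owner names.
    static Set<String> ownersOfRecentSessions(List<Person> people, LocalDateTime cutoff) {
        return people.stream()
                .flatMap(p -> p.sessions().stream()) // list of persons -> list of sessions
                .filter(s -> s.start().isAfter(cutoff))
                .map(Session::ownerName)
                .collect(Collectors.toSet());
    }

    static Set<String> demo() {
        LocalDateTime now = LocalDateTime.now();
        List<Person> people = List.of(
                new Person("Tony", List.of(new Session("Tony", now))),
                new Person("Bruce", List.of(new Session("Bruce", now.minusDays(2)))));
        return ownersOfRecentSessions(people, now.minusHours(1));
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [Tony]
    }
}
```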
Let’s see how it is done using flatmap extracting:\n@Test void filterOnAllSessionsThatAreFromToday_flatMapExtracting(){ assertThat(personService.getAll()) .flatExtracting(\u0026#34;sessions\u0026#34;) .filteredOn(session -\u0026gt; ((Session)session).getStart().isAfter(LocalDateTime.now().minusHours(1))) .extracting(\u0026#34;person.name\u0026#34;) .contains(\u0026#34;Tony\u0026#34;, \u0026#34;Carol\u0026#34;,\u0026#34;Bruce\u0026#34;,\u0026#34;Natalia\u0026#34;); } After fetching all persons we want to find sessions that started today. In our example, we start by calling flatExtracting() on the session property of a person. Now, our list is changed from list of persons to list of sessions, and we are doing our further assertion on that new list. Since we have the list of sessions that started today, we can extract names of persons that own that session, and assert the desired values are among them.\nFlatmap Extracting Using Extractor If we want to have a more complex extractor and reuse it across our code, we can implement an extractor class:\npublic class PersonExtractors { public PersonExtractors(){} public static Function\u0026lt;Person, List\u0026lt;Session\u0026gt;\u0026gt; sessions(){ return new PersonSessionExtractor(); } private static class PersonSessionExtractor implements Function\u0026lt;Person, List\u0026lt;Session\u0026gt;\u0026gt; { @Override public List\u0026lt;Session\u0026gt; apply(Person person) { return person.getSessions(); } } } We need to create a class that will have a static method that returns a Java Function. It will return a static object that implements the Function interface and where we set our desired input type and desired output type. In our use case, we are taking one person and returning a list of sessions to that person. 
Inside that new static function, we override the apply() method.\nLet’s see an example of how to use the extractor class:\n@Test void filterOnAllSessionsThatAreFromToday_flatMapExtractingMethod(){ assertThat(personService.getAll()) .flatExtracting(PersonExtractors.sessions()) .filteredOn(session -\u0026gt; session.getStart().isAfter(LocalDateTime.now().minusHours(1))) .extracting(\u0026#34;person.name\u0026#34;) .contains(\u0026#34;Tony\u0026#34;, \u0026#34;Carol\u0026#34;,\u0026#34;Bruce\u0026#34;,\u0026#34;Natalia\u0026#34;); } The extracting itself is done inside the flatExtracting() method, into which we pass the static function PersonExtractors.sessions().\nMethod Call Extracting Instead of asserting on properties of the objects in the list, sometimes we want to assert on the results of calling a method on those objects. A new list is created from those results, and our assertion continues on that list.\nLet’s say we want to check how many sessions lasted less than two hours. We don’t save that value in the database, so it is not a field of the entity. Our next test shows that use case:\n@Test void filterOnAllSesionThatAreFomToday_methodCallExtractingMethod(){ assertThat(sessionService.getAll()) .extractingResultOf(\u0026#34;getDurationInMinutes\u0026#34;, Long.class) .filteredOn(duration -\u0026gt; duration \u0026lt; 120l) .hasSize(1); } After fetching all sessions, we call the method getDurationInMinutes() using extractingResultOf(). This method has to exist in the class of the objects we are filtering on. After that, we get the list of results of that method; in our use case, a list of durations of all sessions. Now, we can filter on that list and assert that there is only one session shorter than two hours. We passed a second argument to extractingResultOf() that represents the type we expect back. If we don’t provide it, the method will return values of type Object.\nConclusion AssertJ provides comprehensive functionality for asserting lists. 
We can split them into two groups:\n Filtering lists and asserting on the filtered list Extracting properties from items in the list and asserting on those  This makes working with lists in tests much simpler.\n","date":"October 4, 2021","image":"https://reflectoring.io/images/stock/0019-magnifying-glass-1200x628-branded_hudd3c41ec99aefbb7f273ca91d0ef6792_109335_650x0_resize_q90_box.jpg","permalink":"/assertj-lists/","title":"Asserting Lists with AssertJ"},{"categories":["Software Craft"],"contents":"Compound interest is both a powerful and apt analogy for when it comes to devising your workflows. Improving a process' efficiency and efficacy by 2% can feel negligible. But when the procedures you\u0026rsquo;re optimizing are daily habits, the overall impact of these changes can produce staggering results.\nWith the right combination of alterations, tooling and automation, you can alleviate your pressures along with freeing up capacity within your team, all without compromising on the quality of your output.\nWhen discussing productivity, it\u0026rsquo;s almost impossible to go without mentioning James Clear and his seminal New York Times bestseller, Atomic Habits. The work has been heralded as a user manual in crafting routines that extract the most value out of your daily activities. The core principles boil down to the idea of reducing the friction between tasks to aid your transition into a flow that overtime becomes second nature.\nWith development time having a clear impact on the velocity of the entire process of building and shipping successful software products, it\u0026rsquo;s no mystery that it\u0026rsquo;s written about and pondered ad nauseum.\nThere isn\u0026rsquo;t six months that goes by without a new “hack”, “plug-in” or “extension” that claims to be the ultimate secret weapon to gaming your brain. 
While these solutions often do indeed deliver on their promises of being able to help you boost your productivity, it\u0026rsquo;s usually due to them abiding by one core principle - being easily repeatable.\nEat Sleep Code Repeat One of the fundamentals behind Clear\u0026rsquo;s formula for success is repetition. The idea that you can program yourself to repeat the same habits again and again will naturally yield incremental improvements over the long-term. It\u0026rsquo;s simple math and the reason why CI/CD and iterative development processes are responsible for some of the most successful products on the market.\nThe magic to this approach isn\u0026rsquo;t simply to do the same thing repeatedly but recognize the natural deterrents that crop up in our day-to-day actions that can make a routine more complicated than it has to be.\nAfter all, when you\u0026rsquo;re training for a marathon, you don\u0026rsquo;t simply lace up your shoes and set out on a 26.2-mile jog. You\u0026rsquo;ve got to train up, get your sleep the night before, and create the optimum conditions to succeed.\nConsider how you can make the actions that you want to do the most repeatable as possible. Context switching is often the biggest enemy of long stretches of productive development time. To combat its detrimental effects on your daily development efforts, you can go beyond blocking out some time and hoping for the best.\nAs an inspiration, here are three habits for productive software development.\nChange Your IDE to Suit Your Project Type Your current set-up might be the most familiar, but it potentially could be slightly unfit for purpose. As humans, we\u0026rsquo;re notorious creatures of habit and gravitate towards what we know rather than making an honest, objective assessment of whether or not our choice is the best tool for the job.\nWhen you start a new project, use it as an opportunity to rethink your setup. 
There won\u0026rsquo;t always be a lot that you can change, given that as a team, you need to be working with collaboration in mind, but it\u0026rsquo;s definitely worth taking a minute to rethink your habits to suit the work you\u0026rsquo;re doing.\nCombine Your Knowledge Repository with Your IDE Whenever you start a new project, you can always expect to lose some time getting up to speed and completing all of the peripheral tasks that fall outside of actual build time and code reviews. But the trend in terms of time loss often tracks as an exponential upwards curve, with the result that the average engineer loses nearly half of their time to task switching as soon as they have three projects on their plate.\nWith this in mind, think about how often you actually need to leave your editor. Built-in knowledge repositories like Foam and Org Mode can help you build even more processes within your editor environment than previously possible. When it comes down to it, typing an extra command or looking at a different part of the screen is less likely to interfere with your workflow than exiting into an entirely different view.\nThe same goes for your note-taking app. If you haven’t already switched to Obsidian or Bear, then it’s definitely worth considering. Going with a markdown-first application for your notes will make documentation a breeze.\nUse a Stream Deck to Switch Windows Quickly Now that you\u0026rsquo;ve chosen the three windows that you\u0026rsquo;ll be switching between, albeit as few times as possible, you can now look at the sort of hardware that\u0026rsquo;s going to be the most useful. 
Multiple engineers can attest to the noticeable differences they\u0026rsquo;ve experienced ever since adopting a streaming deck for general everyday working purposes:\nI am experimenting using a Stream Deck to speed coding of qualitative data in @VerbiSoftware pic.twitter.com/uJGlTPPW1C\n\u0026mdash; Paul Manson (@paulonabike) August 25, 2020  If a deck isn\u0026rsquo;t for you, perhaps think about a set of hotkeys or trackpad shortcuts that you can program to create a sense of continuity when context switching can\u0026rsquo;t be avoided.\nOptimizing your workspace may appear to be an art form, particularly when said set-up results in a delectably smooth deployment. However, it can, in fact, be boiled down to a precise science. Using a series of integrations, LaunchDarkly Software Engineer Dan O\u0026rsquo;Brien explained in a recent talk how he uses feature flagging to enhance his workflows inside Trello and Jira to minimize the friction he experiences when jumping between tasks ultimately.\nOne Size Does Not Fit All While all these tactics can produce noticeable improvements, the essential point to focus on is optimizing your workflow for you. If a habit can\u0026rsquo;t fit naturally into your routine, it might not be a good fit for you.\nRemember to focus on making your processes easy to repeat and replicate across different areas in your organization. Practice realism to ensure you don\u0026rsquo;t end up setting yourself up for failure and, most importantly, reward your efforts.\nEven if you didn\u0026rsquo;t manage to sprint the entire 26 miles on your first go, as long as you\u0026rsquo;re ending the day in a better place than it started, you\u0026rsquo;ll be able to carry forward this trend of continuous improvement and reap the benefits in no time.\nDo you have your own stories around how you\u0026rsquo;ve worked to perfect your workflows and achieve peak productivity across your team? 
If so, we\u0026rsquo;d love to have you join us at our third annual conference, Trajectory, which takes place November 9-10, 2021. Registrations are now open! Click here to sign up for your place and join us for this year\u0026rsquo;s virtual event.\n","date":"September 25, 2021","image":"https://reflectoring.io/images/stock/0112-ide-1200x628-branded_hu3b7dcb6bd35b7043d8f1c81be3dcbca2_169620_650x0_resize_q90_box.jpg","permalink":"/atomic-habits-in-software-development/","title":"Atomic Habits in Software Development"},{"categories":["Spring Boot"],"contents":"Whenever we make a change in our database schema, we also have to make a change in the code that uses that database schema.\nWhen we add a new column to the database, we need to change the code to use that new column.\nWhen we delete a column from the database, we need to change the code to not use that column anymore.\nIn this tutorial, we\u0026rsquo;ll discuss how we can coordinate the code changes with the database changes and deploy them to our production environment without a downtime. We\u0026rsquo;ll go through an example use case step by step and use feature flags to help us.\n Example Code This article is accompanied by a working code example on GitHub. The Problem: Coordinating Database Changes with Code Changes If we release both the change of the database and the change of the code at the same time, we double the risk that something goes wrong. We have coupled the risk of the database change with the risk of the code change.\nUsually, our application runs on multiple nodes and during a new release, the new code is deployed to one node at a time. This is often called a \u0026ldquo;rolling deployment\u0026rdquo; or \u0026ldquo;round-robin release\u0026rdquo; with the goal of zero downtime. During the deployment, there will be nodes running with the old code that is not compatible with the new database schema! 
How can we handle this?\nWhat do we do when the deployment of the code change failed because we have introduced a bug? We have to roll back to the old version of the code. But the old version of the code may not be compatible with the database anymore, because we have already applied the database change! So we have to roll back the database change, too! The rollback in itself bears some risk of failure because a rollback is often not a well-planned and well-rehearsed activity. How can we improve this situation?\nThe answer to these questions is to decouple the database changes from the code changes using feature flags.\nWith feature flags, we can deploy database changes and code any time we want, and activate them at any time after the deployment.\nThis tutorial provides a step-by-step guide on how to release database changes and the corresponding code changes safely and with no downtime using Spring Boot, Flyway, and feature flags implemented with a feature flagging platform like LaunchDarkly.\nExample Use Case: Splitting One Database Column into Two As the example use case we\u0026rsquo;re going to split a database column into two.\nInitially, our application looks like this:\nWe have a CustomerController that provides a REST API for our Customer entities. It uses the CustomerRepository, which is a Spring Data repository that maps entries in the CUSTOMER database table to objects of type Customer. The CUSTOMER table has the columns id and address for our example.\nThe address column contains both the street name and street number in the same field. 
Imagine that due to some new requirements, we have to split up the address column into two columns: streetNumber and street.\nIn the end, we want the application to look like this:\nIn this guide, we\u0026rsquo;ll go through all the changes we need to do to the database and the code and how to release them as safely as possible using feature flags and multiple deployments.\nStep 1: Decouple Database Changes from Code Changes Before we even start with changing code or the database schema, we\u0026rsquo;ll want to decouple the execution of database changes from the deployment of a Spring Boot app.\nBy default, Flyway executes database migration on application startup. This is very convenient but gives us little control. What if the database change is incompatible with the old code? During the rolling deployment, there may be nodes with the old codes still using the database!\nWe want full control over when we execute our database schema changes! With a little tweak to our Spring Boot application, we can achieve this.\nFirst, we disable Flyway\u0026rsquo;s default to execute database migrations on startup:\n@Configuration class FlywayConfiguration { private final static Logger logger = LoggerFactory.getLogger(FlywayConfiguration.class); @Bean FlywayMigrationStrategy flywayStrategy() { return flyway -\u0026gt; logger.info(\u0026#34;Flyway migration on startup is disabled! 
Call the endpoint /flywayMigrate instead.\u0026#34;); } } Instead of executing all database migrations that haven\u0026rsquo;t been executed, yet, it will now just print a line to the log saying that we should call an HTTP endpoint instead.\nBut we also have to implement this HTTP endpoint:\n@RestController class FlywayController { private final Flyway flyway; public FlywayController(Flyway flyway) { this.flyway = flyway; } @PostMapping(\u0026#34;/flywayMigrate\u0026#34;) String flywayMigrate() { flyway.migrate(); return \u0026#34;success\u0026#34;; } } Whenever we call /flywayMigrate via HTTP POST now, Flyway will run all migration scripts that haven\u0026rsquo;t been executed, yet. Note that you should protect this endpoint in a real application, so that not everyone can call it.\nWith this change in place, we can deploy a new version of the code without being forced to change the database schema at the same time. We\u0026rsquo;ll make use of that in the next step.\nStep 2: Deploy the New Code Behind a Feature Flag Next, we write the code that we need to work with the new database schema:\nSince we\u0026rsquo;re going to change the structure of the CUSTOMER database table, we create the class NewCustomer that maps to the new columns of the table (i.e. streetNumber and street instead of just address). We also create NewCustomerRepository as a new Spring Data repository that binds to the same table as the CustomerRepository but uses the NewCustomer class to map database rows into Java.\nNote that we have deployed the new code, but haven\u0026rsquo;t activated it yet. It can\u0026rsquo;t work, yet, because the database still is in the old state.\nInstead, we\u0026rsquo;ve hidden it behind feature flags. 
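The idea behind disabling startup migrations and exposing an explicit /flywayMigrate trigger can be sketched without Spring or Flyway. The class below is a hypothetical stand-in, not Flyway's API: pending migrations are registered but only run when migrate() is called, each at most once:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GatedMigrationRunner {

    // Versioned scripts, applied in registration order, each at most once --
    // and only when migrate() is called explicitly, never on startup.
    private final Map<String, Runnable> migrations = new LinkedHashMap<>();
    private final List<String> applied = new ArrayList<>();

    void register(String version, Runnable script) {
        migrations.put(version, script);
    }

    // The /flywayMigrate endpoint would delegate to something like this.
    List<String> migrate() {
        List<String> executedNow = new ArrayList<>();
        migrations.forEach((version, script) -> {
            if (!applied.contains(version)) {
                script.run();
                applied.add(version);
                executedNow.add(version);
            }
        });
        return executedNow;
    }

    static boolean demo() {
        GatedMigrationRunner runner = new GatedMigrationRunner();
        runner.register("V2__add_street_columns", () -> { /* ALTER TABLE ... */ });
        // Startup: nothing runs. After the rolling deployment finishes, we trigger:
        boolean firstRun = runner.migrate().size() == 1;
        boolean secondRun = runner.migrate().isEmpty(); // already applied, a no-op
        return firstRun && secondRun;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```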
In the CustomerController we now have code that looks something like this:\n@PostMapping(\u0026#34;/customers/create\u0026#34;) String createCustomer() { if (featureFlagService.writeToNewCustomerSchema()) { NewCustomer customer = new NewCustomer(\u0026#34;Bob\u0026#34;, \u0026#34;Builder\u0026#34;, \u0026#34;Build Street\u0026#34;, \u0026#34;21\u0026#34;); newCustomerRepository.save(customer); } else { OldCustomer customer = new OldCustomer(\u0026#34;Bob\u0026#34;, \u0026#34;Builder\u0026#34;, \u0026#34;21 Build Street\u0026#34;); oldCustomerRepository.save(customer); } return \u0026#34;customer created\u0026#34;; } @GetMapping(\u0026#34;/customers/{id}}\u0026#34;) String getCustomer(@PathVariable(\u0026#34;id\u0026#34;) Long id) { if (featureFlagService.readFromNewCustomerSchema()) { Optional\u0026lt;NewCustomer\u0026gt; customer = newCustomerRepository.findById(id); return customer.get().toString(); } else { Optional\u0026lt;OldCustomer\u0026gt; customer = oldCustomerRepository.findById(id); return customer.get().toString(); } } With a feature flagging tool like LaunchDarkly, we have created two feature flags:\nThe boolean flag featureFlagService.writeToNewCustomerSchema() defines whether the write path to the new database schema is active. This feature flag is currently still disabled because we haven\u0026rsquo;t updated the database schema yet.\nThe boolean flag featureFlagService.readFromNewCustomerSchema() defines whether the read path from the new database schema is active. This feature flag is also disabled for now.\nWith the help of feature flags, we have deployed the new code without even touching the database, yet, which we will do in the next step.\nStep 3: Add the New Database Columns With the deployment of the new code in the previous step, we have also deployed a new SQL script for Flyway to execute. After successful deployment, we can now call the /flywayMigrate endpoint that we prepared in step 1. 
This will execute the SQL script and update the database schema with the new streetNumber and street fields:\nThese new columns will be empty for now. Note that we have kept the existing address column untouched for now. In the end state, we\u0026rsquo;ll want to remove this column, but we have to migrate the data into the new columns first.\nThe feature flags are still disabled for now, so that both reads and writes go into the old address database column.\nStep 4: Activate Writes into the New Database Columns Next, we activate the writeToNewCustomerSchema feature flag so that the application now writes to the new database columns but still reads from the old one:\nEvery time the application now writes a new customer to the database, it uses the new code. Note that the new code will still fill the old address column in addition to the new columns streetNumber and street for backwards compatibility because the old code is still responsible for reading from the database.\nWe can\u0026rsquo;t switch the new code to read data from the database, yet, because the new columns will be empty for most customers. The new columns will fill up slowly over time as the new code is being used to write data to the database.\nTo fill the new columns for all customers, we need to run a migration.\nStep 5: Migrate Data into the New Database Columns Next, we\u0026rsquo;re going to run a migration that goes through all customers in the database whose streetNumber and street fields are still empty, reads the address field, and migrates it into the new fields:\nThis migration can be an SQL script, some custom code, or actual people looking at the customer data one by one and making the migration manually. It depends on the use case, data quality, and complexity of the migration task to decide the best way.\nData Migrations with Flyway?  Note that the type of migration we're talking about in this section is usually not a task for Flyway. 
Flyway is for executing scripts that migrate the database schema from one state to another. Migrating data is a very different task.  Yes, Flyway can be used for migrating data. After all, a data migration can very well just be an SQL script. However, a data migration can cause issues like long-running queries and table locks, which should not happen in the context of a Flyway migration because we have little control over it there.  Step 6: Activate Reads from the New Database Columns Now that all the customer data is migrated into the new data structure, we can activate the feature flag to use the new code to read from the database:\nThe new code is now being used to write and read from the database. The old code and the old address database column are both not used anymore.\nStep 7: Remove the Old Code and Database Column The last step is to clean up:\nWe can remove the old code that isn\u0026rsquo;t used anymore. And we can run another Flyway migration that removes the old address column from the database.\nWe should also remove the feature flags from the code now because we\u0026rsquo;re no longer using the old code. If we don\u0026rsquo;t remove the old code, we\u0026rsquo;ll accrue technical debt that will make the code harder to understand for the next person. When using feature flags at scale across a whole organization, a feature flagging platform like LaunchDarkly can help with this, because it\u0026rsquo;s tracking the usage of feature flags across the codebase.\nWe can now also rename the NewCustomerRepository to CustomerRepository and NewCustomer to Customer to make the code clean and understandable once more.\nDeploy with Confidence The 7 steps above will be spread out across multiple deployments of the application. 
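To recap the choreography, the read/write flags and the step-5 backfill can be simulated end to end in memory. This is a hedged sketch, not the article's actual code: maps stand in for the two table layouts, flags are plain booleans rather than a flag platform, and the address split naively assumes the "21 Build Street" shape (real data may need manual review):

```java
import java.util.HashMap;
import java.util.Map;

public class FlagChoreographySketch {

    static final Map<Long, String> oldColumn = new HashMap<>();    // single "address" column
    static final Map<Long, String[]> newColumns = new HashMap<>(); // {streetNumber, street}

    static boolean writeToNewSchema = false;
    static boolean readFromNewSchema = false;

    static void createCustomer(long id, String street, String number) {
        oldColumn.put(id, number + " " + street); // old column stays filled for old readers
        if (writeToNewSchema) {
            newColumns.put(id, new String[] {number, street});
        }
    }

    static String getAddress(long id) {
        if (readFromNewSchema) {
            String[] row = newColumns.get(id);
            return row[0] + " " + row[1];
        }
        return oldColumn.get(id);
    }

    // Step 5: backfill rows that so far only exist in the old column.
    static void migrateExistingRows() {
        oldColumn.forEach((id, address) -> newColumns.computeIfAbsent(id, k -> {
            int space = address.indexOf(' ');
            return new String[] {address.substring(0, space), address.substring(space + 1)};
        }));
    }

    static boolean demo() {
        createCustomer(1L, "Build Street", "21"); // step 2: new code deployed, flags off
        writeToNewSchema = true;                  // step 4: dual writes on
        createCustomer(2L, "Main Street", "7");
        migrateExistingRows();                    // step 5: backfill customer 1
        readFromNewSchema = true;                 // step 6: reads switch over
        return "21 Build Street".equals(getAddress(1L))
                && "7 Main Street".equals(getAddress(2L));
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```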
Some of them can be combined into a single deployment, but there will be at least two deployments: one to deploy the new code and the feature flags, and one to remove the old code and the feature flags.\nThe feature flags give us a lot of flexibility and confidence in database changes like in the use case we discussed above. Feature flags allow us to decouple the code changes from the database changes. Without feature flags, we can only activate new code by deploying a new version of the application, which makes scenarios that require backwards compatibility with an old database schema a lot harder to manage (and riskier!).\nIf you want to learn more about feature flagging, make sure to read my tutorial about LaunchDarkly and Togglz, two of the most popular feature flagging tools in the JVM world.\n","date":"September 22, 2021","image":"https://reflectoring.io/images/stock/0039-start-1200x628-branded_hu0e786b71aef533dc2d1f5d8371554774_82130_650x0_resize_q90_box.jpg","permalink":"/zero-downtime-deployments-with-feature-flags/","title":"Zero Downtime Database Changes with Feature Flags - Step by Step"},{"categories":["Spring Boot"],"contents":"Scheduling is the process of executing a piece of logic at a specific time in the future. Scheduled jobs are a piece of business logic that should run on a timer. Spring allows us to run scheduled jobs in the Spring container by using some simple annotations.\nIn this article, we will illustrate how to configure and run scheduled jobs in Spring Boot applications.\n Example Code This article is accompanied by a working code example on GitHub. Creating the Spring Boot Application for Scheduling To work with some examples, let us first create a Spring Boot project with the help of the Spring boot Initializr, and then open the project in our favorite IDE. 
We have not added any dependencies to Maven pom.xml since the scheduler is part of the core module of the Spring framework.\nEnabling Scheduling Scheduling is not enabled by default. Before adding any scheduled jobs, we need to enable scheduling explicitly by adding the @EnableScheduling annotation:\nimport org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.scheduling.annotation.EnableScheduling; @SpringBootApplication @EnableScheduling public class JobschedulingApplication { public static void main(String[] args) { SpringApplication.run(JobschedulingApplication.class, args); } } Here we have added the @EnableScheduling annotation to our application class JobschedulingApplication to enable scheduling.\nAs a best practice, we should move this annotation to a dedicated class under a package that contains the code for our scheduled jobs:\nimport org.springframework.scheduling.annotation.EnableScheduling; @EnableScheduling public class SchedulerConfig { } The scheduling will now only be activated when we load the SchedulerConfig class into the application, providing better modularization.\nWhen the @EnableScheduling annotation is processed, Spring scans the application packages to find all the Spring beans whose methods are decorated with @Scheduled and sets up their execution schedule.\nEnabling Scheduling Based on a Property We would also like to disable scheduling while running tests. For this, we need to add a condition to our SchedulerConfig class. 
Let us add the @ConditionalOnProperty annotation with the name of the property we want to use to control scheduling:\nimport org.springframework.boot.autoconfigure.condition.ConditionalOnProperty; import org.springframework.context.annotation.Configuration; import org.springframework.scheduling.annotation.EnableScheduling; @Configuration @EnableScheduling @ConditionalOnProperty(name = \u0026#34;scheduler.enabled\u0026#34;, matchIfMissing = true) public class SchedulerConfig { } Here we have specified the property name as scheduler.enabled. We want to enable scheduling by default. For this, we have also set the value of matchIfMissing to true, which means we do not have to set this property to enable scheduling but have to set it explicitly to disable the scheduler.\nAdding Scheduled Jobs After enabling scheduling, we will add jobs to our application for scheduling. We can turn any method in a Spring bean into a scheduled job by adding the @Scheduled annotation to it.\n@Scheduled is a method-level annotation applied at runtime to mark the method to be scheduled. It takes one of the attributes cron, fixedDelay, or fixedRate for specifying the schedule of execution in different formats.\nThe annotated method needs to fulfill two conditions:\n The method should have a void return type. For methods that have a return type, the returned value is ignored when invoked through the scheduler. The method should not accept any input parameters.  In the next sections, we will examine different options for configuring the scheduler to trigger the scheduled jobs.\nRunning the Job with Fixed Delay We use the fixedDelay attribute to configure a job to run with a fixed delay, which means the interval between the end of the previous job and the beginning of the new job is fixed.\nThe new job will always wait for the previous job to finish. 
It should be used in situations where method invocations need to happen in a sequence.\nIn this example, we are computing the price of a product by executing the method in a Spring bean with a fixed delay:\n@Service public class PricingEngine { static final Logger LOGGER = Logger.getLogger(PricingEngine.class.getName()); private Double price; public Double getProductPrice() { return price; } @Scheduled(fixedDelay = 2000) public void computePrice() throws InterruptedException { ... ... LOGGER.info(\u0026#34;computing price at \u0026#34;+ LocalDateTime.now().toEpochSecond(ZoneOffset.UTC)); // added sleep to simulate method  // which takes longer to execute.  Thread.sleep(4000); } } Here we have scheduled the execution of the computePrice method with a fixed delay by setting the fixedDelay attribute to 2000 milliseconds or 2 seconds.\nWe also make the method sleep for 4 seconds with Thread.sleep() to simulate a method that takes longer to execute than the delay interval. The next execution will start only after the previous execution ends, i.e. after at least 4 seconds, even though the delay interval of 2 seconds has elapsed.\nRunning the Job at Fixed Rate We use the fixedRate attribute to execute a job at a fixed interval of time. It should be used in situations where method invocations are independent. The execution time of the method is not taken into consideration when deciding when to start the next job.\nIn this example, we are refreshing the pricing parameters by executing a method at a fixed rate:\n@Service public class PricingEngine { static final Logger LOGGER = Logger.getLogger(PricingEngine.class.getName()); @Scheduled(fixedRate = 3000) @Async public void refreshPricingParameters() { ... ... 
LOGGER.info(\u0026#34;computing price at \u0026#34;+ LocalDateTime.now().toEpochSecond(ZoneOffset.UTC)); } } @Configuration @EnableScheduling @EnableAsync @ConditionalOnProperty(name=\u0026#34;scheduler.enabled\u0026#34;, matchIfMissing = true) public class SchedulerConfig { } Here we have annotated the refreshPricingParameters method with the @Scheduled annotation and set the fixedRate attribute to 3000 milliseconds or 3 seconds. This will trigger the method every 3 seconds.\nWe have also added an @Async annotation to the method and @EnableAsync to the configuration class SchedulerConfig.\nThe @Async annotation over a method allows it to execute in a separate thread. As a result, when the previous execution of the method takes longer than the fixed-rate interval, the subsequent invocation of the method will trigger even if the previous invocation is still executing.\nThis will allow multiple executions of the method to run in parallel for the overlapped time interval.\nWithout the @Async annotation, the method will always execute after the previous execution is completed, even if the fixed-rate interval has expired.\nThe reason scheduled tasks do not run in parallel by default is that the thread pool for scheduled tasks has a default size of 1. So instead of using the @Async annotation, we can also set the property spring.task.scheduling.pool.size to a higher value to allow multiple executions of a method to run in parallel during the overlapped time interval.\nDelaying the First Execution with Initial Delay With both fixedDelay and fixedRate, the first invocation of the method starts immediately after the application context is initialized. 
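Spring's default scheduler is backed by a java.util.concurrent.ScheduledExecutorService, whose scheduleWithFixedDelay() and scheduleAtFixedRate() methods mirror the fixedDelay and fixedRate attributes (both also accept an initial-delay parameter). The fixed-delay semantics can be demonstrated with the JDK alone; the timings below are illustrative choices, not values from the article:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedDelayDemo {

    // Runs a task of `taskMillis` duration with a fixed delay of `delayMillis`
    // and returns the gaps (in ms) between successive task START times.
    static List<Long> startGapsMillis(long taskMillis, long delayMillis, int runs)
            throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        List<Long> starts = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(runs);
        scheduler.scheduleWithFixedDelay(() -> {
            starts.add(System.nanoTime() / 1_000_000);
            try {
                Thread.sleep(taskMillis); // simulate a slow task
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            done.countDown();
        }, 0, delayMillis, TimeUnit.MILLISECONDS);
        done.await();
        scheduler.shutdownNow();
        List<Long> gaps = new ArrayList<>();
        for (int i = 1; i < Math.min(runs, starts.size()); i++) {
            gaps.add(starts.get(i) - starts.get(i - 1));
        }
        return gaps;
    }

    public static void main(String[] args) throws Exception {
        // Task takes ~200 ms, delay is 100 ms: successive starts are ~300 ms apart
        // (end of previous run + delay), unlike a fixed-rate schedule aiming at 100 ms.
        System.out.println(startGapsMillis(200, 100, 3));
    }
}
```

Because the delay is counted from the *end* of the previous run, each gap between starts is at least the task duration plus the delay, which is exactly the behavior described for fixedDelay above.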
However, we can choose to delay the first execution of the method by specifying the interval using the initialDelay attribute as shown below:\n@Service public class PricingEngine { static final Logger LOGGER = Logger.getLogger(PricingEngine.class.getName()); @Scheduled(initialDelay = 2000, fixedRate = 3000) @Async public void refreshPricingParameters() { Random random = new Random(); price = random.nextDouble() * 100; LOGGER.info(\u0026#34;computing price at \u0026#34;+ LocalDateTime.now().toEpochSecond(ZoneOffset.UTC)); } } Here we have set the initialDelay to delay the first execution of the method by 2000 milliseconds or 2 seconds.\nSpecifying Intervals in ISO Duration Format So far in our examples, we have specified the time interval in milliseconds. Specifying larger intervals like hours or days in milliseconds, which is most often the case in real situations, is difficult to read.\nSo instead of specifying a large value like 7200000 for 2 hours, we can specify the time in the ISO duration format like PT02H.\nThe @Scheduled annotation provides the attributes fixedRateString and fixedDelayString that take the interval in the ISO duration format as shown in this code example:\n@Service public class PricingEngine { static final Logger LOGGER = Logger.getLogger(PricingEngine.class.getName()); private Double price; public Double getProductPrice() { return price; } @Scheduled(fixedDelayString = \u0026#34;PT02S\u0026#34;) public void computePrice() throws InterruptedException { Random random = new Random(); price = random.nextDouble() * 100; LOGGER.info(\u0026#34;computing price at \u0026#34;+ LocalDateTime.now().toEpochSecond(ZoneOffset.UTC)); Thread.sleep(4000); } } Here we have set the value of fixedDelayString as PT02S to specify a fixed delay of at least 2 seconds between successive invocations. 
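These interval strings are standard ISO-8601 durations, the same format that java.time.Duration understands, so we can sanity-check a value like PT02S or PT02H before putting it into an annotation:

```java
import java.time.Duration;

public class DurationFormats {
    public static void main(String[] args) {
        // "PT02S" = 2 seconds, "PT02H" = 2 hours, "P1D" = 1 day
        System.out.println(Duration.parse("PT02S").toMillis()); // 2000
        System.out.println(Duration.parse("PT02H").toMillis()); // 7200000
        System.out.println(Duration.parse("P1D").toHours());    // 24
    }
}
```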
Similarly, we can use fixedRateString to specify a fixed rate in this format.\nExternalizing the Interval to a Properties File We can also reference a property value from our properties file as the value of fixedDelayString or fixedRateString attributes to externalize the interval values as shown below:\n@Service public class PricingEngine { static final Logger LOGGER = Logger.getLogger(PricingEngine.class.getName()); private Double price; public Double getProductPrice() { return price; } @Scheduled(fixedDelayString = \u0026#34;${interval}\u0026#34;) public void computePrice() throws InterruptedException { Random random = new Random(); price = random.nextDouble() * 100; LOGGER.info(\u0026#34;computing price at \u0026#34;+ LocalDateTime.now().toEpochSecond(ZoneOffset.UTC)); Thread.sleep(4000); } } interval=PT02S Here we have set the fixed delay interval as a property in our application.properties file. The property named interval is set to 2 seconds in the duration format PT02S.\nUsing Cron Expressions to Define the Interval We can also specify the time interval in UNIX style cron-like expression for more complex scheduling requirements as shown in this example:\n@Service public class PricingEngine { ... ... @Scheduled(cron = \u0026#34;${interval-in-cron}\u0026#34;) public void computePrice() throws InterruptedException { ... ... LOGGER.info(\u0026#34;computing price at \u0026#34;+ LocalDateTime.now().toEpochSecond(ZoneOffset.UTC)); } } interval-in-cron=0 * * * * * Here we have specified the interval using a cron expression externalized to a property named interval-in-cron defined in our application.properties file.\nA cron expression is a string of six to seven fields separated by white space to represent triggers on the second, minute, hour, day of the month, month, day of the week, and optionally the year. 
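To make the expression above concrete: interval-in-cron=0 * * * * * fires at second 0 of every minute. For that particular expression, the next trigger time can be computed by hand with java.time; this is only an illustration of the semantics, not how Spring evaluates cron expressions:

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

public class NextMinuteTrigger {

    // Next time matching "0 * * * * *": second 0 of the next minute
    // (or this instant, if we are already exactly at second 0).
    static LocalDateTime next(LocalDateTime now) {
        LocalDateTime truncated = now.truncatedTo(ChronoUnit.MINUTES);
        return truncated.equals(now) ? now : truncated.plusMinutes(1);
    }

    public static void main(String[] args) {
        LocalDateTime t = LocalDateTime.of(2021, 9, 19, 10, 15, 30);
        System.out.println(next(t)); // 2021-09-19T10:16
    }
}
```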
However, the cron expression in Spring Scheduler comprises six fields as shown below:\n┌───────────── second (0-59) │ ┌───────────── minute (0 - 59) │ │ ┌───────────── hour (0 - 23) │ │ │ ┌───────────── day of the month (1 - 31) │ │ │ │ ┌───────────── month (1 - 12) (or JAN-DEC) │ │ │ │ │ ┌───────────── day of the week (0 - 7) │ │ │ │ │ │ (or MON-SUN -- 0 or 7 is Sunday) │ │ │ │ │ │ * * * * * * For example, the cron expression 0 15 10 * * * triggers a run at 10:15 a.m. every day (0th second, 15th minute, 10th hour, every day). * indicates that the cron expression matches all values of the field. For example, * in the minute field means every minute.\nExpressions such as 0 0 * * * * are hard to read. To improve readability, Spring supports macros to represent commonly used sequences like in the following code sample:\n@Service public class PricingEngine { ... ... @Scheduled(cron = \u0026#34;@hourly\u0026#34;) public void computePrice() throws InterruptedException { ... ... LOGGER.info(\u0026#34;computing price at \u0026#34;+ LocalDateTime.now().toEpochSecond(ZoneOffset.UTC)); } } Here we have specified an hourly interval with the cron macro @hourly instead of the less readable cron expression 0 0 * * * *.\nSpring provides the following macros:\n @hourly, @yearly, @monthly, @weekly, and @daily  Deploying Multiple Scheduler Instances with ShedLock As we have seen so far with Spring Scheduler, it is very easy to schedule jobs by attaching the @Scheduled annotation to methods in Spring beans. However, in distributed environments, when we deploy multiple instances of our application, Spring cannot synchronize the scheduler across instances. Instead, it executes the jobs simultaneously on every node.\nShedLock is a library that ensures that a scheduled task deployed across multiple instances is executed at most once at the same time. 
It uses a locking mechanism: it acquires a lock on one instance of the executing job, which prevents the execution of another instance of the same job.\nShedLock uses an external data store shared across multiple instances, like Mongo, any JDBC database, Redis, Hazelcast, ZooKeeper, or others, for coordination.\nShedLock is designed to be used in situations where we have scheduled tasks that are not ready to be executed in parallel but can be safely executed repeatedly. Moreover, the locks are time-based and ShedLock assumes that clocks on the nodes are synchronized.\nLet us modify our example by adding the dependencies:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;net.javacrumbs.shedlock\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;shedlock-spring\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;4.27.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;net.javacrumbs.shedlock\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;shedlock-provider-jdbc-template\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;4.27.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.h2database\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;h2\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;runtime\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; We have added a dependency on the core module shedlock-spring along with dependencies on shedlock-provider-jdbc-template for the JDBC-based lock provider and on the h2 database to be used as the shared database. 
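The time-based locking idea can be sketched without any database: an instance may take the lock only if no holder's lock-until timestamp lies in the future, and on acquisition it moves that timestamp forward by the lock-at-most-for interval. This is a simplified, hypothetical model of the behavior, not ShedLock's actual implementation (where the timestamp lives in the shared data store):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TimeBasedLock {

    // lock name -> lock_until; in ShedLock this lives in the shared database
    private final Map<String, Instant> lockUntil = new ConcurrentHashMap<>();

    // Try to acquire the named lock at `now`, holding it for at most `lockAtMostFor`.
    // Returns true when this caller may run the task.
    boolean tryAcquire(String name, Instant now, Duration lockAtMostFor) {
        Instant newUntil = now.plus(lockAtMostFor);
        // merge() is atomic per key: keep the existing timestamp if it is still
        // in the future (another instance holds the lock), otherwise take the lock.
        Instant result = lockUntil.merge(name, newUntil,
                (existing, proposed) -> existing.isAfter(now) ? existing : proposed);
        return result.equals(newUntil);
    }

    public static void main(String[] args) {
        TimeBasedLock lock = new TimeBasedLock();
        Instant t0 = Instant.parse("2021-09-19T10:00:00Z");
        Duration tenMinutes = Duration.ofMinutes(10);
        System.out.println(lock.tryAcquire("myscheduledTask", t0, tenMinutes));                // true
        System.out.println(lock.tryAcquire("myscheduledTask", t0.plusSeconds(5), tenMinutes)); // false: still locked
        System.out.println(lock.tryAcquire("myscheduledTask", t0.plus(tenMinutes), tenMinutes)); // true: lock expired
    }
}
```

The expiry-based release is also why synchronized clocks matter: a node with a fast clock would see other nodes' locks as already expired.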
In production scenarios, we should use a persistent database like MySQL, Postgres, etc.\nNext we update our scheduler configuration to integrate the library with Spring:\n@Configuration @EnableScheduling @EnableSchedulerLock(defaultLockAtMostFor = \u0026#34;10m\u0026#34;) @EnableAsync @ConditionalOnProperty(name=\u0026#34;scheduler.enabled\u0026#34;, matchIfMissing = true) public class SchedulerConfig { @Bean public LockProvider lockProvider(DataSource dataSource) { return new JdbcTemplateLockProvider( JdbcTemplateLockProvider.Configuration.builder() .withJdbcTemplate(new JdbcTemplate(dataSource)) .usingDbTime() // Works on Postgres, MySQL, MariaDb, MS SQL, Oracle, DB2, HSQL and H2  .build() ); } } Here we have enabled schedule locking by using the @EnableSchedulerLock annotation. We have also configured the LockProvider by creating an instance of JdbcTemplateLockProvider which is connected to a datasource with the in-memory H2 database.\nNext, we will create a table that will be used as the shared database.\nDROP TABLE IF EXISTS shedlock; CREATE TABLE shedlock( name VARCHAR(64) NOT NULL, lock_until TIMESTAMP(3) NOT NULL, locked_at TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3), locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name) ); Finally, we will annotate our scheduled jobs by applying the @SchedulerLock annotation:\n@Service public class PricingEngine { static final Logger LOGGER = Logger.getLogger(PricingEngine.class.getName()); @Scheduled(cron = \u0026#34;${interval-in-cron}\u0026#34;) @SchedulerLock(name = \u0026#34;myscheduledTask\u0026#34;) public void computePrice() throws InterruptedException { Random random = new Random(); price = random.nextDouble() * 100; LOGGER.info(\u0026#34;computing price at \u0026#34;+ LocalDateTime.now().toEpochSecond(ZoneOffset.UTC)); Thread.sleep(4000); } ... ... } Here we have added the @SchedulerLock annotation to the computePrice() method. 
Only methods annotated with the @SchedulerLock annotation are locked; the library ignores all other scheduled tasks. We have also specified a name for the lock as myscheduledTask. Only one task with the same name can execute at the same time.\nWhen to Use the Distributed Job Scheduler Quartz Quartz Scheduler is an open-source distributed job scheduler that provides many enterprise-class features like support for JTA transactions and clustering.\nAmong its main capabilities is job persistence support to an external database, which is very useful for resuming failed jobs as well as for reporting purposes.\nClustering is another key feature of Quartz that can be used for fail-safety and/or load balancing.\nSpring Scheduler is preferred when we want to implement a simple form of job scheduling, like executing methods on a bean every X seconds or on a cron schedule, without worrying about any side effects of restarting jobs after failures.\nOn the other hand, if we need clustering along with support for job persistence, then Quartz is a better alternative.\nConclusion Here is a list of major points from the tutorial for quick reference:\n Scheduling is part of the core module, so we do not need to add any dependencies. Scheduling is not enabled by default. We explicitly enable scheduling by adding the @EnableScheduling annotation to a Spring configuration class. We can make the scheduling conditional on a property so that we can enable and disable scheduling by setting the property. We create scheduled jobs by decorating a method with the @Scheduled annotation. Only methods with a void return type and zero parameters can be turned into scheduled jobs by adding the @Scheduled annotation. We set the execution interval by specifying the fixedRate or fixedDelay attribute in the @Scheduled annotation. We can choose to delay the first execution of the method by specifying the interval using the initialDelay attribute. 
We can deploy multiple scheduler instances using the ShedLock library, which uses a locking mechanism in a shared database to ensure that only one instance runs at a time. We can use a distributed job scheduler like Quartz to address more complex scheduling scenarios like resuming failed jobs and reporting.  You can refer to all the source code used in the article on GitHub.\n","date":"September 19, 2021","image":"https://reflectoring.io/images/stock/0111-clock-1200x628-branded_hu11424c7716805d3162fd43f6bfa1fe41_91574_650x0_resize_q90_box.jpg","permalink":"/spring-scheduler/","title":"Running Scheduled Jobs in Spring Boot"},{"categories":["Simplify"],"contents":"How many words are you reading every day?\nAs a software developer, I read a lot. And I\u0026rsquo;m pretty sure that I should read even more. I need to stay up-to-date on my emails, a lot of Slack channels and, most importantly, on the things that other teams within the company are doing (i.e. read a lot of internal blog posts and other updates in our Confluence instance).\nYour situation is probably similar. But how much of the information you read every day do you retain in memory?\nIf I were to guess the ratio of information I retain to all information that I read, I would say it\u0026rsquo;s something around 5%. That means 95% of what I read every day is lost. And those 95% split into two categories:\n information that I shouldn\u0026rsquo;t have read in the first place because it brings no value to me, and information that I should have read more intently, to keep the knowledge in memory.\nThe first category I can avoid by being selective. Unsubscribe from newsletters I don\u0026rsquo;t read, create email filters to highlight important emails, leave some Slack channels, and so on.  The second category I can avoid by reading intentionally. 
Some things that I do are:\nMaintain a Reading List I maintain a list of things I need to read on a Trello board with a comment on the card that answers the question of why I should read it.\nWhen I have reading time, I can check if the answer is still valid and then either read it or scrap it.\nTrello also provides the nice functionality that you can forward emails to it, which I use to re-read and answer emails at a later time.\nPrime Your Brain Ask yourself some questions you want answered before starting to read something.\nFor example, I ask myself the question \u0026ldquo;What do I expect to get out of this text?\u0026rdquo;. This helps me to pick out the important information.\nIt also helps to find the sections in the text that are most interesting to me and allows me to skim through the document more purposefully.\nIf I realize that my questions are not answered at all, I may decide to stop reading the document altogether.\nSchedule Reading Time Actively scheduling some time for reading each day helps to stay on top of the deluge of information.\nIf it ain\u0026rsquo;t scheduled, it ain\u0026rsquo;t happening.\nI read a nonfiction book for 20 minutes in my lunch break every day. And I know I should schedule some reading time for work, as well\u0026hellip; .\nTake Notes Taking notes helps to internalize the knowledge. It forces you to translate the information you read into your own words, increasing memory retention.\nI like to take notes the old-fashioned way on paper and then transfer them into a digital, searchable version. This act of transferring the notes greatly increases retention, at least for my sorry brain.\nGamify It I \u0026ldquo;collect\u0026rdquo; book notes. When I\u0026rsquo;m reading nonfiction books, I publish my book notes on the blog.\nThey are the worst-performing pages on my blog (no one reads them). 
But the point is that the process of publishing the notes gives me a sense of accomplishment that motivates me to take notes and even re-visit them to get them published. I\u0026rsquo;m basically tricking my brain into having fun reading.\nPurposeful Reading We\u0026rsquo;re all flooded with information every day. What you can\u0026rsquo;t (or don\u0026rsquo;t want to) say \u0026ldquo;no\u0026rdquo; to, you have to read in a way that makes the most of the time you\u0026rsquo;re investing.\nDon\u0026rsquo;t just consume emails, documents, articles, and the like. Either read them with purpose, or don\u0026rsquo;t read them at all.\n","date":"September 12, 2021","image":"https://reflectoring.io/images/stock/0110-reading-1200x628-branded_huaab211db23d9740f3915c248d3eabfd7_219228_650x0_resize_q90_box.jpg","permalink":"/read-intentionally/","title":"Read Intentionally"},{"categories":["Spring Boot","AWS"],"contents":"Metrics provide a quantifiable measure of specific attributes of an application. A collection of different metrics give intelligent insights into the health and performance of an application.\nAmazon CloudWatch is a monitoring and observability service in the AWS cloud platform. One of its main features is collecting metrics and storing the metrics in a time-series database.\nIn this article, we will generate different types of application metrics in a Spring Boot web application and send those metrics to Amazon CloudWatch.\nAmazon CloudWatch will store the metrics data and help us to derive insights about our application by visualizing the metric data in graphs.\nCheck Out the Book!  
This article gives only a first impression of what you can do with Amazon CloudWatch.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n  Example Code This article is accompanied by a working code example on GitHub. What is Amazon CloudWatch? Amazon CloudWatch is a dimensional time-series service in the AWS cloud platform. It provides the following features:\n Collecting and monitoring logs. Storing metrics from AWS resources, and applications running in AWS or outside AWS. Providing system-wide visualization with graphs and statistics. Creating alarms that watch a single or multiple CloudWatch metrics and perform some actions based on the value of the metric.  We will use only the metrics storing and visualization capability of CloudWatch here for the metrics generated by a Spring Boot application.\nHence it will be worthwhile to introduce a few concepts important for creating the metrics in our application:\nMetric: Metric is a fundamental concept in CloudWatch. It is associated with one or more measures of any application attribute at any point in time and is represented by a series of data points with a timestamp.\nNamespace: A namespace is a container for CloudWatch metrics. We specify a namespace for each data point published to CloudWatch.\nDimension: A dimension is a name/value pair that is part of the identity of a metric. We can assign up to 10 dimensions to a metric.\nMetrics are uniquely defined by a name, a namespace, and zero or more dimensions. 
Each data point in a metric has a timestamp, and optionally a unit of measure.\nWhen we choose CloudWatch to monitor our application, the data about certain attributes of the application is sent to CloudWatch as a data point for a metric at regular intervals.\nPlease refer to the official documentation or the Stratospheric book for a more elaborate explanation of Amazon CloudWatch concepts and capabilities.\nIn the subsequent sections, we will create a Spring Boot application, generate some metrics in the application, and ship them to Amazon CloudWatch. After the metrics are published in CloudWatch, we will visualize them using CloudWatch graphs.\nAn Example Application for Capturing Metrics With this basic understanding of Amazon CloudWatch, let us now create a web application with the Spring Boot framework for creating our metrics.\nLet us first create a Spring Boot project with the help of the Spring Boot Initializr, and then open the project in our favorite IDE. We have added dependencies on the web and lombok modules in our Maven pom.xml.\nOur web application will have a REST API for fetching products in an online shopping application. 
We have created our API in the following class using the annotations from the Spring Web dependency:\n@RestController @Slf4j public class ProductController { @GetMapping(\u0026#34;/products\u0026#34;) @ResponseBody public List\u0026lt;Product\u0026gt; fetchProducts() { List\u0026lt;Product\u0026gt; products = fetchProductsFromStore(); return products; } /** * Dummy method to fetch products from any datastore * */ private List\u0026lt;Product\u0026gt; fetchProductsFromStore(){ List\u0026lt;Product\u0026gt; products = new ArrayList\u0026lt;Product\u0026gt;(); products.add(Product.builder().name(\u0026#34;Television\u0026#34;).build()); products.add(Product.builder().name(\u0026#34;Book\u0026#34;).build()); return products; } } The fetch products API, created with the fetchProducts() method in the ProductController class, will accept HTTP GET requests at http://localhost:8080/products and respond with a JSON representation of a list of products.\nIn the next sections, we will enrich this application to capture three metrics, each with a specific purpose:\n Measure the number of HTTP requests for the fetch products API. Track the fluctuation in the price of a product. Measure the total execution time of the fetch products API.  Publishing Metrics with the CloudWatch SDK The simplest way for an application to send metrics to CloudWatch is by using the AWS Java SDK. 
The below code shows a service class for sending metrics to CloudWatch using AWS Java SDK:\n@Configuration public class AppConfig { @Bean public CloudWatchAsyncClient cloudWatchAsyncClient() { return CloudWatchAsyncClient .builder() .region(Region.US_EAST_1) .credentialsProvider( ProfileCredentialsProvider .create(\u0026#34;pratikpoc\u0026#34;)) .build(); } } @Service public class MetricPublisher { private CloudWatchAsyncClient cloudWatchAsyncClient; @Autowired public MetricPublisher(CloudWatchAsyncClient cloudWatchAsyncClient) { super(); this.cloudWatchAsyncClient = cloudWatchAsyncClient; } public void putMetricData(final String nameSpace, final String metricName, final Double dataPoint, final List\u0026lt;MetricTag\u0026gt; metricTags) { try { List\u0026lt;Dimension\u0026gt; dimensions = metricTags .stream() .map((metricTag)-\u0026gt;{ return Dimension .builder() .name(metricTag.getName()) .value(metricTag.getValue()) .build(); }).collect(Collectors.toList()); // Set an Instant object  String time = ZonedDateTime .now(ZoneOffset.UTC) .format(DateTimeFormatter.ISO_INSTANT); Instant instant = Instant.parse(time); MetricDatum datum = MetricDatum .builder() .metricName(metricName) .unit(StandardUnit.NONE) .value(dataPoint) .timestamp(instant) .dimensions(dimensions) .build(); PutMetricDataRequest request = PutMetricDataRequest .builder() .namespace(nameSpace) .metricData(datum) .build(); cloudWatchAsyncClient.putMetricData(request); } catch (CloudWatchException e) { System.err.println(e.awsErrorDetails().errorMessage()); } } } public class MetricTag { private String name; private String value; public MetricTag(String name, String value) { super(); this.name = name; this.value = value; } // Getters  ... ... } In this code snippet, we are establishing the connection to Amazon CloudWatch by setting up the CloudWatchAsyncClient with our AWS profile credentials. 
The request for sending the metric is created in the putMetricData() method.\nThe metric is created by specifying its name, the namespace under which it will be created, and one or more associated tags called dimensions.\nPublishing Metrics with Micrometer We will make use of the Micrometer library, instead of the AWS Java SDK, to create our metrics and send them to Amazon CloudWatch.\nMicrometer acts as a facade to different monitoring systems by providing a tool-agnostic interface for collecting metrics from our application and publishing the metrics to our target metrics collector:\nThis enables us to support multiple metrics collectors and switch between them with minimal configuration changes.\nMicrometer MeterRegistry and Meters MeterRegistry and Meter are the two central concepts in Micrometer. A Meter is the interface for collecting metrics about an application. Meters in Micrometer are created from and held in a MeterRegistry. Sample code for instantiating a MeterRegistry looks like this:\nMeterRegistry registry = new SimpleMeterRegistry(); SimpleMeterRegistry is a default implementation of MeterRegistry bundled with Micrometer. It holds the latest value of each meter in memory and does not export the data to any metrics collector. The diagram here shows the hierarchy and relationships of important classes and interfaces of Micrometer.\nWe can see different types of Meters and MeterRegistries in this diagram.\nMeterRegistry represents the monitoring system where we want to push the metrics for storage and visualization.\nEach supported monitoring system has an implementation of MeterRegistry. 
For example, for sending metrics to Amazon CloudWatch we will use CloudWatchMeterRegistry.\nEach meter type gets converted into one or more metrics in a format compatible with the target monitoring system like Amazon CloudWatch in our application.\nMicrometer comes with the following set of Meters:\n Timer, Counter, Gauge, DistributionSummary, LongTaskTimer, FunctionCounter, FunctionTimer, and TimeGauge.  From these, we will use Timer, Counter, Gauge in our application.\nLet us understand the kind of measures they can be typically used for:\n  Counter: Counter is used to measure numerical values which only increase. They can be used to count requests served, tasks completed, errors that occurred, etc.\n  Gauge: A Gauge represents a numerical value that can both increase and decrease. Gauge is used to measure values like current CPU usage, cache size, the number of messages in a queue, etc.\n  Timer: Timer is used for measuring short-duration latencies, and the frequency of such events. All implementations of Timer report at least the total time and count of events as separate time series.\n  Spring Boot Integration with Micrometer Coming back to our application, we will first integrate Micrometer with our Spring Boot application to produce these metrics. We do this by first adding a dependency on Micrometer core library named micrometer-core :\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.micrometer\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;micrometer-core\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; This library provides classes for creating the meters and pushing the metrics to the target monitoring system.\nWe next add the dependency for the target monitoring system. 
We are using Amazon CloudWatch so we will add a dependency to micrometer-registry-cloudwatch2 module in our project:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.micrometer\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;micrometer-registry-cloudwatch2\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; This module uses the AWS Java SDK version 2 to integrate with Amazon CloudWatch. An earlier version of the module named micrometer-registry-cloudwatch uses the AWS Java SDK version 1. Version 2 is the recommended version to use.\nThis library does the transformation from Micrometer meters to the format of the target monitoring system. Here the micrometer-registry-cloudwatch2 library converts Micrometer meters to CloudWatch metrics.\nCreating the MeterRegistry We will now create the MeterRegistry implementation for Amazon CloudWatch to create our Meters and push the metrics to Amazon CloudWatch. We do this in a Spring configuration class as shown here:\n@Configuration public class AppConfig { @Bean public CloudWatchAsyncClient cloudWatchAsyncClient() { return CloudWatchAsyncClient .builder() .region(Region.US_EAST_1) .credentialsProvider( ProfileCredentialsProvider .create(\u0026#34;pratikpoc\u0026#34;)) .build(); } @Bean public MeterRegistry getMeterRegistry() { CloudWatchConfig cloudWatchConfig = setupCloudWatchConfig(); CloudWatchMeterRegistry cloudWatchMeterRegistry = new CloudWatchMeterRegistry( cloudWatchConfig, Clock.SYSTEM, cloudWatchAsyncClient()); return cloudWatchMeterRegistry; } private CloudWatchConfig setupCloudWatchConfig() { CloudWatchConfig cloudWatchConfig = new CloudWatchConfig() { private Map\u0026lt;String, String\u0026gt; configuration = Map.of( \u0026#34;cloudwatch.namespace\u0026#34;, \u0026#34;productsApp\u0026#34;, \u0026#34;cloudwatch.step\u0026#34;, Duration.ofMinutes(1).toString()); @Override public String get(String key) { return configuration.get(key); } }; return cloudWatchConfig; } } In this code snippet, we 
have defined CloudWatchMeterRegistry as a Spring bean. To create our registry, we first create a new CloudWatchConfig which is initialized with two configuration properties: cloudwatch.namespace and cloudwatch.step so that it publishes all metrics to the productsApp namespace every minute.\nAfter configuring the MeterRegistry, we will look at how we register and update our meters in the next sections.\nWe will register three meters:\n Counter to measure the count of views of the product list page. Gauge to track the price of a product. Timer to record the execution time of the fetchProducts() method.  Registering and Incrementing a Counter We want to count the number of views of the products list page in our application. We do this by updating the meter of type counter since this measure always goes up. In our application we register the counter for page views in the constructor and increment the counter when the API is invoked as shown in the code snippet below:\n@RestController @Slf4j public class ProductController { private Counter pageViewsCounter; private MeterRegistry meterRegistry; @Autowired ProductController(MeterRegistry meterRegistry, PricingEngine pricingEngine){ this.meterRegistry = meterRegistry; pageViewsCounter = meterRegistry .counter(\u0026#34;PAGE_VIEWS.ProductList\u0026#34;); } @GetMapping(\u0026#34;/products\u0026#34;) @ResponseBody public List\u0026lt;Product\u0026gt; fetchProducts() { long startTime = System.currentTimeMillis(); List\u0026lt;Product\u0026gt; products = fetchProductsFromStore(); // increment page views counter  pageViewsCounter.increment(); return products; } private List\u0026lt;Product\u0026gt; fetchProductsFromStore(){ List\u0026lt;Product\u0026gt; products = new ArrayList\u0026lt;Product\u0026gt;(); products.add(Product.builder().name(\u0026#34;Television\u0026#34;).build()); products.add(Product.builder().name(\u0026#34;Book\u0026#34;).build()); return products; } } Here we are registering the meter of type counter by
calling the counter() method on our CloudWatchMeterRegistry object created in the previous section. This method accepts the name of the meter as a parameter.\nRegistering and Recording a Timer Now we want to record the time taken to execute the API for fetching products. This is a measure of short-duration latency, so we will make use of a meter of type Timer.\nWe will register the Timer by calling the timer() method on the registry object in the constructor of our controller class as shown here:\n@RestController @Slf4j public class ProductController { private Timer productTimer; private MeterRegistry meterRegistry; @Autowired ProductController(MeterRegistry meterRegistry, PricingEngine pricingEngine){ this.meterRegistry = meterRegistry; productTimer = meterRegistry .timer(\u0026#34;execution.time.fetchProducts\u0026#34;); } @GetMapping(\u0026#34;/products\u0026#34;) @ResponseBody public List\u0026lt;Product\u0026gt; fetchProducts() { long startTime = System.currentTimeMillis(); List\u0026lt;Product\u0026gt; products = fetchProductsFromStore(); // record time to execute the method  productTimer.record(Duration .ofMillis(System.currentTimeMillis() - startTime)); return products; } private List\u0026lt;Product\u0026gt; fetchProductsFromStore(){ List\u0026lt;Product\u0026gt; products = new ArrayList\u0026lt;Product\u0026gt;(); // Fetch products from database or external API  return products; } } We have set the name of the Timer as execution.time.fetchProducts when registering in the constructor. In the fetchProducts() method body we record the execution time by calling the record() method.\nRegistering and Updating a Gauge We will next register a meter of type Gauge to track the price of a product. For our example, we are using a fictitious pricing engine to compute the price at regular intervals. We have used a simple Java method for the pricing engine but in real life, it could be a sophisticated rules-based component.
The price can go up and down, so Gauge is an appropriate meter to track this measure.\nWe are constructing the Gauge using the fluent builder interface of the Gauge as shown below:\n@RestController @Slf4j public class ProductController { private Gauge priceGauge; private MeterRegistry meterRegistry; private PricingEngine pricingEngine; @Autowired ProductController(MeterRegistry meterRegistry, PricingEngine pricingEngine){ this.meterRegistry = meterRegistry; this.pricingEngine = pricingEngine; priceGauge = Gauge .builder(\u0026#34;product.price\u0026#34;, pricingEngine , (pe)-\u0026gt;{ return pe != null? pe.getProductPrice() : null;} ) .description(\u0026#34;Product price\u0026#34;) .register(meterRegistry); } ... ... } @Service public class PricingEngine { private Double price; public Double getProductPrice() { return price; } @Scheduled(fixedRate = 70000) public void computePrice() { Random random = new Random(); price = random.nextDouble() * 100; } } As we can see in this code snippet, the price is computed every 70000 milliseconds as specified by the @Scheduled annotation on the computePrice() method.\nWe have already set up the gauge during registration to track the price automatically by specifying the function getProductPrice.\nVisualizing the Metrics in CloudWatch Let us open the AWS CloudWatch console to see the metrics we published in CloudWatch. Our metrics will be grouped under the namespace productsApp which we had configured in our application when generating the metrics.\nThe namespace we have used to create our metrics appears under the custom namespaces section as can be seen in this screenshot:\nHere we can see our namespace productsApp containing 6 metrics.
Let us get inside the namespace to see the list of metrics as shown below:\nThese are the metrics for each of the meters (Counter, Timer, and Gauge) of Micrometer which we had registered and updated in the application in the earlier sections:\n   Micrometer Meter Meter Type CloudWatch Metric     product.price Gauge product.price.value   PAGE_VIEWS.ProductList Counter PAGE_VIEWS.ProductList.count   execution.time.fetchProducts Timer execution.time.fetchProducts.avg execution.time.fetchProducts.count execution.time.fetchProducts.max execution.time.fetchProducts.sum    The metric values rendered in the CloudWatch graph are shown below:\nThe Gauge for tracking the price of a product is mapped to 1 metric named product.price.value.\nThe Counter for measuring the number of page views of a web page showing the list of products is mapped to 1 metric named PAGE_VIEWS.ProductList.count. We measured this in our application by incrementing the meter for page views on every invocation of the fetchProducts API.\nThe Timer meter for measuring the execution time of the fetchProducts API is mapped to 4 metrics named execution.time.fetchProducts.avg, execution.time.fetchProducts.count, execution.time.fetchProducts.max, and execution.time.fetchProducts.sum, representing the average, count, maximum, and sum of the API\u0026rsquo;s execution times during an interval.\nGenerating JVM and System Metrics with Actuator We can use the Spring Boot Actuator module to generate useful JVM and system metrics. Spring Boot\u0026rsquo;s Actuator provides dependency management and auto-configuration for Micrometer.
So when we add the Actuator dependency, we can remove the dependency on Micrometer\u0026rsquo;s core module micrometer-core:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-actuator\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.micrometer\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;micrometer-registry-cloudwatch2\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Spring Boot provides automatic meter registration for a wide variety of technologies. In most situations, the out-of-the-box defaults provide sensible metrics that can be published to any of the supported monitoring systems.\nFor sending the metrics to CloudWatch we need to add two properties to our application.properties:\nmanagement.metrics.export.cloudwatch.namespace=productsApp management.metrics.export.cloudwatch.batchSize=10 Here we have added a property for the namespace under which the metrics will be collected in CloudWatch. The batchSize property specifies the number of metrics sent to CloudWatch in a single batch. Auto-configuration will enable JVM metrics using core Micrometer classes. JVM metrics are published under meter names starting with \u0026ldquo;jvm.\u0026rdquo; as shown below:\nJVM metrics provide the following information:\n Memory and buffer pool details Garbage collection statistics Thread utilization The number of classes loaded and unloaded  Auto-configuration will also enable system metrics using core Micrometer classes.
System metrics are published under the meter names starting with \u0026ldquo;system.\u0026rdquo; and \u0026ldquo;process.\u0026rdquo;:\nSystem metrics include the following information:\n CPU metrics File descriptor metrics Uptime metrics (both the amount of time the application has been running as well as a fixed gauge of the absolute start time)  Using the Metrics to Configure Alarms Alarms are one of the key components of any monitoring solution. Without going too deep, we will only look at how we can make use of the metrics from our application to set up an alarm. A metric alarm watches a single CloudWatch metric and performs one or more actions based on the value of the metric.\nWe will create an alarm to monitor the fetch products API. If the API execution time exceeds a particular band, we want to send an email to notify interested parties to take remedial actions.\nThe diagram here shows the sequence of steps to create this alarm to watch over the metric for the execution time of the fetch products API:\nHere we are creating the alarm to watch over the metric named execution.time.fetchProducts.max. We have set up the condition for triggering the alarm as \u0026ldquo;execution.time.fetchProducts.max is outside the band (width: 2) for 1 datapoint within 5 minutes\u0026rdquo;. When the alarm is triggered, the action is set to fire a notification to an SNS topic, where we have subscribed to an endpoint to send an email.\nFor more details on creating alarms with CloudWatch, have a look at the Stratospheric book.\nConclusion Here is a list of important points from the article for quick reference:\n Micrometer is used as a facade to publish metrics from our application to different monitoring systems. Micrometer works as a flexible layer of abstraction between our code and the monitoring systems so that we can easily swap or combine them. MeterRegistry and Meter are two important concepts in Micrometer.
Counter, Timer, and Gauge are the three commonly used types of Meter. Each monitoring system supported by Micrometer has an implementation of MeterRegistry. The meter types are converted to one or more time-series metrics at the time of publishing to the target monitoring system. Amazon CloudWatch is a monitoring and observability service in AWS Cloud. Namespace, metric, and dimension are three important concepts in Amazon CloudWatch. A metric in CloudWatch is uniquely identified by its name, namespace, and dimension.  You can refer to all the source code used in the article on Github.\nCheck Out the Book!  This article gives only a first impression of what you can do with Amazon CloudWatch.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"September 7, 2021","image":"https://reflectoring.io/images/stock/0032-dashboard-1200x628-branded_hu32014b78b20b83682c90e2a7c4ea87ba_153646_650x0_resize_q90_box.jpg","permalink":"/spring-aws-cloudwatch/","title":"Publishing Metrics from Spring Boot to Amazon CloudWatch"},{"categories":["Simplify"],"contents":"We like telling other people that we\u0026rsquo;re working hard. Hard work gains us respect. We just have to push through to get the results we want. Hard work is glorified.\nImagine someone has had a great success and you ask them about the work they\u0026rsquo;ve done to get that success. They say \u0026ldquo;Nah, the work was easy\u0026rdquo;. In your mind, this exchange probably devalues the work they\u0026rsquo;ve done and the success they\u0026rsquo;ve earned.\nEvery fiber in our bodies wants to resist hard work. So why do we glorify it like that? 
We don\u0026rsquo;t want to get out of bed in the morning, we don\u0026rsquo;t want to write that long-overdue email, and we don\u0026rsquo;t want to start on that new piece of work that\u0026rsquo;s been waiting in our inbox. What if the results of hard work can be earned with easy work, too? Maybe even better results?\nHere are some ideas on how to make things easy.\nPrepare Triggers To make it easy to establish habits that help you reach your goals, you can set up triggers for those habits.\nHave a pitcher of water waiting for you on the desk in the morning so that it\u0026rsquo;s easy to drink enough water.\nStart the day in your sports clothes so that it\u0026rsquo;s easy to go for a run before work.\nPut your goals on sticky notes on the wall next to your desk to have them in mind every day.\nThink about what you need to do to achieve your goals and then what triggers you can place throughout the day to make it easy to do those things.\nInvert the Problem Instead of asking \u0026ldquo;How do I solve this problem?\u0026rdquo; ask \u0026ldquo;How can I make this easy?\u0026rdquo; and \u0026ldquo;How can I make this enjoyable?\u0026rdquo;.\nSpend some time answering these two questions before throwing yourself at your work. It will work wonders on your productivity.\nDon\u0026rsquo;t Be a Complainer People who complain about everything absolve themselves of any responsibility. The other team is too uncooperative, the code is too complicated, the customer is too choosy. The work is too hard.\nComplaints invite more complaints. They put you in a mood where you don\u0026rsquo;t want to work at all anymore. They make things harder for you.\nSwitch to a mindset of gratefulness, instead. 
Look for the things you\u0026rsquo;re grateful for, and the negative thoughts won\u0026rsquo;t occupy your mind as much anymore, letting you see solutions to make the work easier.\nExploit Daily Habits Like compound interest drastically increases your wealth over time, establishing daily habits will drastically increase your output over time. Try to set up daily habits of making things easy.\nInstead of pushing through, make it a habit to split your work into units with breaks in between. Make it a habit to do the most important thing first thing in the morning. Make it a habit to spend 30 minutes each day on that important project to make progress on it every day!\nA framework of daily habits makes work easy. Which habits would make your work easy?\nAutomate Automating tasks is something that we, as software developers, are very aware of, but often don\u0026rsquo;t do enough. We automate software releases and deployments, we automate alerting mechanisms to be notified when something is wrong, and we\u0026rsquo;re encoding automations into software every day.\nAutomation is the ultimate tool to make things easy. Think about this the next time you\u0026rsquo;re doing a task a second or third time.\nHard Isn\u0026rsquo;t Better Than Easy! Making work easy doesn\u0026rsquo;t mean that the results of the work are worse. Yes, it\u0026rsquo;s easier to write a rough draft with a lot of errors than a polished piece of text. But writing the rough draft first will make it a lot easier to get to the polished text.\nMany of the ideas above come from the book \u0026ldquo;Effortless\u0026rdquo; by Greg McKeown. 
Give it a read if you\u0026rsquo;re looking for more inspiration on easy work.\n","date":"September 5, 2021","image":"https://reflectoring.io/images/stock/0109-baloons-1200x628-branded_hu15b32ef1896cb8d755f95ed8d26a8506_191601_650x0_resize_q90_box.jpg","permalink":"/make-it-easy/","title":"Make it Easy"},{"categories":["Spring Boot"],"contents":"In this series so far, we\u0026rsquo;ve learned how to use the Resilience4j Retry, RateLimiter, TimeLimiter, Bulkhead, Circuitbreaker core modules and seen its Spring Boot support for the Retry module.\nIn this article, we\u0026rsquo;ll focus on the RateLimiter and see how the Spring Boot support makes it simple and more convenient to implement rate-limiting in our applications.\n Example Code This article is accompanied by a working code example on GitHub. High-level Overview If you haven\u0026rsquo;t read the previous article on RateLimiter, check out the \u0026ldquo;What is Rate Limiting?\u0026quot;, \u0026ldquo;When to Use RateLimiter?\u0026quot;, and \u0026ldquo;Resilience4j RateLimiter Concepts\u0026rdquo; sections for a quick intro.\nYou can find out how to set up Maven or Gradle for your project here.\nUsing the Spring Boot Resilience4j RateLimiter Module Assume that we are building a website for an airline to allow its customers to search for and book flights. Our service talks to a remote service encapsulated by the class FlightSearchService.\nLet\u0026rsquo;s see how to use the various features available in the RateLimiter module. This mainly involves configuring the RateLimiter instance in the application.yml file and adding the @RateLimiter annotation on the Spring @Service component that invokes the remote operation.\nIn production, we\u0026rsquo;d configure the RateLimiter based on our contract with the remote service. 
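To see what limitForPeriod and limitRefreshPeriod mean in practice, here is a simplified, stdlib-only sketch of the permit model (a toy illustration, not Resilience4j's actual implementation): each refresh period starts with limitForPeriod permits, and a call beyond that budget is rejected until the next period begins.

```java
// Toy permit model (NOT Resilience4j's implementation): permits are
// replenished to limitForPeriod at every limitRefreshPeriod boundary.
class ToyRateLimiter {
    private final int limitForPeriod;
    private final long refreshPeriodMillis;
    private long windowStart;
    private int permits;

    ToyRateLimiter(int limitForPeriod, long refreshPeriodMillis, long nowMillis) {
        this.limitForPeriod = limitForPeriod;
        this.refreshPeriodMillis = refreshPeriodMillis;
        this.windowStart = nowMillis;
        this.permits = limitForPeriod;
    }

    // Try to acquire one permit at the given (simulated) time.
    boolean acquire(long nowMillis) {
        if (nowMillis - windowStart >= refreshPeriodMillis) {
            windowStart = nowMillis;      // start a new window
            permits = limitForPeriod;     // refresh the permit budget
        }
        if (permits > 0) {
            permits--;
            return true;
        }
        return false;  // the real RateLimiter would wait up to timeoutDuration here
    }
}

class RateLimiterSketch {
    public static void main(String[] args) {
        // limitForPeriod=2, limitRefreshPeriod=1s => at most 2 requests per second
        ToyRateLimiter limiter = new ToyRateLimiter(2, 1000, 0);
        System.out.println(limiter.acquire(10));   // true
        System.out.println(limiter.acquire(20));   // true
        System.out.println(limiter.acquire(30));   // false: window exhausted
        System.out.println(limiter.acquire(1005)); // true: new window started
    }
}
```

With limitForPeriod=2 and limitRefreshPeriod=1s this yields the 2 rps behavior used in the examples below; the real RateLimiter additionally parks the calling thread for up to timeoutDuration before failing with RequestNotPermitted.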
However, in these examples, we\u0026rsquo;ll set the limitForPeriod, limitRefreshPeriod, and the timeoutDuration to low values so we can see the RateLimiter in action.\nBasic Example Suppose our contract with the airline\u0026rsquo;s service says that we can call their search API at 2 rps (requests per second). Then we would configure the RateLimiter like this:\nratelimiter: instances: basic: limitForPeriod: 2 limitRefreshPeriod: 1s timeoutDuration: 1s The limitForPeriod and limitRefreshPeriod configurations together determine the rate (2rps). The timeoutDuration configuration specifies the time we are willing to wait to acquire permission from the RateLimiter before erroring out.\nNext, we annotate the method in the bean that calls the remote service:\n@RateLimiter(name = \u0026#34;basic\u0026#34;) List\u0026lt;Flight\u0026gt; basicExample(SearchRequest request) { return remoteSearchService.searchFlights(request); } Finally, we call the decorated method on this @Service from another bean (like a @Controller):\nfor (int i=0; i\u0026lt;3; i++) { System.out.println(service.basicExample(request)); } The timestamps in the sample output show two requests being made every second:\nSearching for flights; current time = 19:51:09 777 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 19:51:09 803 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 19:51:10 096 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... 
}] Searching for flights; current time = 19:51:10 097 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] If we exceed the limit, the RateLimiter parks the thread. If there are no permits available within the 1s timeoutDuration we specified, we get a RequestNotPermitted exception:\nio.github.resilience4j.ratelimiter.RequestNotPermitted: RateLimiter \u0026#39;timeoutExample\u0026#39; does not permit further calls at io.github.resilience4j.ratelimiter.RequestNotPermitted.createRequestNotPermitted(RequestNotPermitted.java:43) at io.github.resilience4j.ratelimiter.RateLimiter.waitForPermission(RateLimiter.java:591) ... other lines omitted ... Applying Multiple Rate Limits Suppose the airline\u0026rsquo;s flight search had multiple rate limits: 2 rps and 40 rpm (requests per minute).\nLet\u0026rsquo;s first configure the two RateLimiters:\nratelimiter: instances: multipleRateLimiters_rps_limiter: limitForPeriod: 2 limitRefreshPeriod: 1s timeoutDuration: 2s multipleRateLimiters_rpm_limiter: limitForPeriod: 40 limitRefreshPeriod: 1m timeoutDuration: 2s Intuitively, we might think that we can add both annotations to the method that calls the remote service:\n@RateLimiter(name = \u0026#34;multipleRateLimiters_rps_limiter\u0026#34;) @RateLimiter(name = \u0026#34;multipleRateLimiters_rpm_limiter\u0026#34;) List\u0026lt;Flight\u0026gt; multipleRateLimitsExample2(SearchRequest request) { return remoteSearchService.searchFlights(request); } However, this approach does not work. Since the @RateLimiter annotation is not a repeatable annotation, the compiler does not allow it to be added multiple times to the same method:\njava: io.github.resilience4j.ratelimiter.annotation.RateLimiter is not a repeatable annotation type There is a feature request open for a long time in the Resilience4j GitHub to add support for this kind of use case.
In the future, we may have a new repeatable annotation, but how do we solve our problem in the meantime?\nLet\u0026rsquo;s try another approach. We\u0026rsquo;ll have 2 separate methods - one for our rps RateLimiter and one for the rpm RateLimiter.\nWe\u0026rsquo;ll then call the rpm @RateLimiter annotated method from the rps @RateLimiter annotated one:\n@RateLimiter(name = \u0026#34;multipleRateLimiters_rps_limiter\u0026#34;) List\u0026lt;Flight\u0026gt; rpsLimitedSearch(SearchRequest request) { return rpmLimitedSearch(request); } @RateLimiter(name = \u0026#34;multipleRateLimiters_rpm_limiter\u0026#34;) List\u0026lt;Flight\u0026gt; rpmLimitedSearch(SearchRequest request) { return remoteSearchService.searchFlights(request); } If we run this, we\u0026rsquo;ll find that this approach doesn\u0026rsquo;t work either. Only the first @RateLimiter is applied and not the second one.\nThis is because when a Spring bean calls another method defined in the same bean, the call does not go through the Spring proxy, and thus the annotation is not evaluated.
It would just be a call from one method in the target object to another one in the same object.\nTo get around this, let\u0026rsquo;s move the rpm-rate-limited search into a new Spring bean:\n@Component class RPMRateLimitedFlightSearchSearch { @RateLimiter(name = \u0026#34;multipleRateLimiters_rpm_limiter\u0026#34;) List\u0026lt;Flight\u0026gt; searchFlights(SearchRequest request, FlightSearchService remoteSearchService) { return remoteSearchService.searchFlights(request); } } Now, we autowire this bean into the one calling the remote service:\n@Service public class RateLimitingService { @Autowired private FlightSearchService remoteSearchService; @Autowired private RPMRateLimitedFlightSearchSearch rpmRateLimitedFlightSearchSearch; // other lines omitted } Finally, we can call one method from the other:\n@RateLimiter(name = \u0026#34;multipleRateLimiters_rps_limiter\u0026#34;) List\u0026lt;Flight\u0026gt; multipleRateLimitsExample(SearchRequest request) { return rpmRateLimitedFlightSearchSearch.searchFlights(request, remoteSearchService); } Let\u0026rsquo;s call the multipleRateLimitsExample() method more than 40 times:\nfor (int i=0; i\u0026lt;45; i++) { try { System.out.println(service.multipleRateLimitsExample(request)); } catch (Exception e) { e.printStackTrace(); } } The timestamps in the first part of the output show 2 requests being made every second:\nSearching for flights; current time = 16:45:11 710 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 16:45:11 723 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ...
}] Searching for flights; current time = 16:45:12 430 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 16:45:12 460 Flight search successful ....................... other lines omitted ....................... Searching for flights; current time = 16:45:30 431 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] io.github.resilience4j.ratelimiter.RequestNotPermitted: RateLimiter \u0026#39;multipleRateLimiters_rpm_limiter\u0026#39; does not permit further calls And the last part of the output above shows the 41st request being throttled due to the 40 rpm rate limit.\nChanging Limits at Runtime Sometimes, we may want to change at runtime the values we configured for limitForPeriod and timeoutDuration. For example, the remote service may have specified different rate limits based on the time of day or normal hours vs. peak hours, etc.\nWe can do this by calling the changeLimitForPeriod() and changeTimeoutDuration() methods on the RateLimiter, just as we did when working with the RateLimiter core module.\nWhat\u0026rsquo;s different is how we obtain a reference to the RateLimiter. 
When working with Spring Boot Resilience4j, we usually only use the @RateLimiter annotation and don\u0026rsquo;t deal with the RateLimiter instance itself.\nFirst, we inject the RateLimiterRegistry into the bean that calls the remote service:\n@Service public class RateLimitingService { @Autowired private FlightSearchService remoteSearchService; @Autowired private RateLimiterRegistry registry; // other lines omitted } Next, we add a method that fetches the RateLimiter by name from this registry and changes the values on it:\nvoid updateRateLimits(String rateLimiterName, int newLimitForPeriod, Duration newTimeoutDuration) { io.github.resilience4j.ratelimiter.RateLimiter limiter = registry.rateLimiter(rateLimiterName); limiter.changeLimitForPeriod(newLimitForPeriod); limiter.changeTimeoutDuration(newTimeoutDuration); } Now, we can change the limitForPeriod and timeoutDuration values at runtime by calling this method from other beans:\nservice.updateRateLimits(\u0026#34;changeLimitsExample\u0026#34;, 2, Duration.ofSeconds(2)); The sample output shows requests going through at 1 rps initially and then at 2 rps after the change:\nSearching for flights; current time = 18:43:49 420 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 18:43:50 236 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 18:43:51 236 Flight search successful ... other lines omitted.... Rate limits changed Searching for flights; current time = 18:43:56 240 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ...
}] Searching for flights; current time = 18:43:56 241 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 18:43:57 237 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 18:43:57 237 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] ... other lines omitted .... Using RateLimiter and Retry Together Let\u0026rsquo;s say we want to retry the search when a RequestNotPermitted exception occurs since it\u0026rsquo;s a transient error.\nFirst, we\u0026rsquo;d configure the Retry and RateLimiter instances:\nresilience4j: retry: instances: retryAndRateLimitExample: maxRetryAttempts: 2 waitDuration: 1s ratelimiter: instances: retryAndRateLimitExample: limitForPeriod: 1 limitRefreshPeriod: 1s timeoutDuration: 250ms We can then apply both the @Retry and the @RateLimiter annotations:\n@Retry(name = \u0026#34;retryAndRateLimitExample\u0026#34;) @RateLimiter(name = \u0026#34;retryAndRateLimitExample\u0026#34;) public List\u0026lt;Flight\u0026gt; retryAndRateLimit(SearchRequest request) { return remoteSearchService.searchFlights(request); } The sample output shows the second call getting throttled and then succeeding during the retry:\nSearching for flights; current time = 18:35:04 192 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Retry \u0026#39;retryAndRateLimitExample\u0026#39;, waiting PT1S until attempt \u0026#39;1\u0026#39;.
Last attempt failed with exception \u0026#39;io.github.resilience4j.ratelimiter.RequestNotPermitted: RateLimiter \u0026#39;retryAndRateLimitExample\u0026#39; does not permit further calls\u0026#39;. Searching for flights; current time = 18:35:05 475 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] When a method has both the @RateLimiter and @Retry annotations, Spring Boot Resilience4j applies them in this order: Retry ( RateLimiter (method) ).\nSpecifying a Fallback Method Sometimes we may want to take a default action when a request gets throttled. In other words, if the thread is unable to acquire permission in time and a RequestNotPermitted exception occurs, we may want to return a default value or some data from a local cache.\nWe can do this by specifying a fallbackMethod in the @RateLimiter annotation:\n@RateLimiter(name = \u0026#34;fallbackExample\u0026#34;, fallbackMethod = \u0026#34;localCacheFlightSearch\u0026#34;) public List\u0026lt;Flight\u0026gt; fallbackExample(SearchRequest request) { return remoteSearchService.searchFlights(request); } The fallback method should be defined in the same class as the rate-limited method. It should have the same method signature as the original method with one additional parameter - the Exception that caused the original one to fail:\nprivate List\u0026lt;Flight\u0026gt; localCacheFlightSearch(SearchRequest request, RequestNotPermitted rnp) { // fetch results from the cache  return results; } RateLimiter Events The RateLimiter has an EventPublisher which generates events of the types RateLimiterOnSuccessEvent and RateLimiterOnFailureEvent to indicate if acquiring permission was successful or not.
We can listen to these and log them, for example.\nSince we don\u0026rsquo;t have a reference to the RateLimiter instance when working with Spring Boot Resilience4j, this requires a little more work. The idea is still the same, but how we get a reference to the RateLimiterRegistry and then the RateLimiter instance itself is a bit different.\nFirst, we @Autowire a RateLimiterRegistry into the bean that invokes the remote operation:\n@Service public class RateLimitingService { @Autowired private FlightSearchService remoteSearchService; @Autowired private RateLimiterRegistry registry; // other lines omitted } Then we add a @PostConstruct method which sets up the onSuccess and onFailure event handlers:\n@PostConstruct public void postConstruct() { EventPublisher eventPublisher = registry .rateLimiter(\u0026#34;rateLimiterEventsExample\u0026#34;) .getEventPublisher(); eventPublisher.onSuccess(System.out::println); eventPublisher.onFailure(System.out::println); } Here, we fetched the RateLimiter instance by name from the RateLimiterRegistry and then got the EventPublisher from the RateLimiter instance.\nInstead of the @PostConstruct method, we could have also done the same in the constructor of RateLimitingService.\nNow, the sample output shows details of the events:\nRateLimiterEvent{type=SUCCESSFUL_ACQUIRE, rateLimiterName=\u0026#39;rateLimiterEventsExample\u0026#39;, creationTime=2021-08-29T18:52:19.229460} Searching for flights; current time = 18:52:19 241 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/15/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... 
}] RateLimiterEvent{type=FAILED_ACQUIRE, rateLimiterName=\u0026#39;rateLimiterEventsExample\u0026#39;, creationTime=2021-08-29T18:52:19.329324} RateLimiter \u0026#39;rateLimiterEventsExample\u0026#39; does not permit further calls Actuator Endpoints Spring Boot Resilience4j makes the details about the last 100 rate limit events available through the Actuator endpoint /actuator/ratelimiterevents. Apart from this, it exposes a few other endpoints:\n /actuator/ratelimiters /actuator/metrics/resilience4j.ratelimiter.available.permissions /actuator/metrics/resilience4j.ratelimiter.waiting_threads  Let\u0026rsquo;s look at the data returned by doing a curl to these endpoints.\nRatelimiters Endpoint This endpoint lists the names of all the rate-limiter instances available:\n$ curl http://localhost:8080/actuator/ratelimiters { \u0026#34;rateLimiters\u0026#34;: [ \u0026#34;basicExample\u0026#34;, \u0026#34;changeLimitsExample\u0026#34;, \u0026#34;multipleRateLimiters_rpm_limiter\u0026#34;, \u0026#34;multipleRateLimiters_rps_limiter\u0026#34;, \u0026#34;rateLimiterEventsExample\u0026#34;, \u0026#34;retryAndRateLimitExample\u0026#34;, \u0026#34;timeoutExample\u0026#34;, \u0026#34;fallbackExample\u0026#34; ] } Permissions Endpoint This endpoint exposes the resilience4j.ratelimiter.available.permissions metric:\n$ curl http://localhost:8080/actuator/metrics/resilience4j.ratelimiter.available.permissions { \u0026#34;name\u0026#34;: \u0026#34;resilience4j.ratelimiter.available.permissions\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;The number of available permissions\u0026#34;, \u0026#34;baseUnit\u0026#34;: null, \u0026#34;measurements\u0026#34;: [ { \u0026#34;statistic\u0026#34;: \u0026#34;VALUE\u0026#34;, \u0026#34;value\u0026#34;: 48 } ], \u0026#34;availableTags\u0026#34;: [ { \u0026#34;tag\u0026#34;: \u0026#34;name\u0026#34;, \u0026#34;values\u0026#34;: [ \u0026#34;multipleRateLimiters_rps_limiter\u0026#34;, ... other lines omitted ... 
] } ] } Waiting Threads Endpoint This endpoint exposes the resilience4j.ratelimiter.waiting_threads metric:\n$ curl http://localhost:8080/actuator/metrics/resilience4j.ratelimiter.waiting_threads { \u0026#34;name\u0026#34;: \u0026#34;resilience4j.ratelimiter.waiting_threads\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;The number of waiting threads\u0026#34;, \u0026#34;baseUnit\u0026#34;: null, \u0026#34;measurements\u0026#34;: [ { \u0026#34;statistic\u0026#34;: \u0026#34;VALUE\u0026#34;, \u0026#34;value\u0026#34;: 0 } ], \u0026#34;availableTags\u0026#34;: [ { \u0026#34;tag\u0026#34;: \u0026#34;name\u0026#34;, \u0026#34;values\u0026#34;: [ \u0026#34;multipleRateLimiters_rps_limiter\u0026#34;, ... other lines omitted ... ] } ] } Conclusion In this article, we learned how we can use Resilience4j RateLimiter\u0026rsquo;s built-in Spring Boot support to implement client-side rate-limiting. We looked at the different ways to configure it with practical examples.\nFor a deeper understanding of Resilience4j RateLimiter concepts and some good practices to follow when implementing rate-limiting in general, check out the related, previous article in this series.\nYou can play around with a complete application illustrating these ideas using the code on GitHub.\n","date":"September 2, 2021","image":"https://reflectoring.io/images/stock/0108-speed-limit-1200x628-branded_hu0f4048910dd781cfc13d6156f43e1822_180547_650x0_resize_q90_box.jpg","permalink":"/rate-limiting-with-springboot-resilience4j/","title":"Rate-Limiting with Spring Boot and Resilience4j"},{"categories":["Spring Boot"],"contents":"Logging is the ultimate resource for investigating incidents and learning about what is happening within your application. Every application has logs of some type.\nOften, however, those logs are messy and it takes a lot of effort to analyze them.
In this article, we\u0026rsquo;re going to look at how we can make use of structured logging to greatly increase the value of our logs.\nWe\u0026rsquo;ll go through some very hands-on tips on what to do to improve the value of an application\u0026rsquo;s log data and use Logz.io as a logging platform to query the logs.\n Example Code This article is accompanied by a working code example on GitHub. What are Structured Logs? \u0026ldquo;Normal\u0026rdquo; logs are unstructured. They usually contain a message string:\n2021-08-08 18:04:14.721 INFO 12402 --- [ main] i.r.s.StructuredLoggingApplication : Started StructuredLoggingApplication in 0.395 seconds (JVM running for 0.552) This message contains all the information that we want to have when we\u0026rsquo;re investigating an incident or analyzing an issue:\n the date of the log event the name of the logger that created the log event, and the log message itself.  All the information is in that log message, but it\u0026rsquo;s hard to query for this information! Since all the information is in a single string, this string has to be parsed and searched if we want to get specific information out of our logs.\nIf we want to view only the logs of a specific logger, for example, the log server would have to parse all the log messages, check them for a certain pattern that identifies the logger, and then filter the log messages according to the desired logger.\nStructured logs contain the same information but in, well, structured form instead of an unstructured string. 
Often, structured logs are presented in JSON:\n{ \u0026#34;timestamp\u0026#34;: \u0026#34;2021-08-08 18:04:14.721\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;INFO\u0026#34;, \u0026#34;logger\u0026#34;: \u0026#34;io.reflectoring....StructuredLoggingApplication\u0026#34;, \u0026#34;thread\u0026#34;: \u0026#34;main\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;Started StructuredLoggingApplication ...\u0026#34; } This JSON structure allows log servers to efficiently store and, more importantly, retrieve the logs.\nThe logs can now easily be filtered by timestamp or logger, for example, and the search is much more efficient than parsing strings for certain patterns.\nBut the value of structured logs doesn\u0026rsquo;t end here: we can add any custom fields to our structured log events that we wish! We can add contextual information that can help us identify issues, or we can add metrics to the logs.\nWith all the data that we now have at our fingertips we can create powerful log queries and dashboards and we\u0026rsquo;ll find the information we need even when we\u0026rsquo;ve just been woken up in the middle of a night to investigate an incident.\nLet\u0026rsquo;s now look into a few use cases that show the power of structured logging.\nAdd a Code Path to All Log Events The first thing we\u0026rsquo;re going to look at is code paths. Each application usually has a couple of different paths that incoming requests can take through the application. Consider this diagram:\nThis example has (at least) three different code paths that an incoming request can take:\n User code path: Users are using the application from their browser. The browser sends requests to a web controller and the controller calls the domain code. 3rd party system code path: The application\u0026rsquo;s HTTP API is also called from a 3rd party system. In this example, the 3rd party system calls the same web controller as the user\u0026rsquo;s browser. 
Timer code path: As many applications do, this application has some scheduled tasks that are triggered by a timer.  Each of these code paths can have different characteristics. The domain service is involved in all three code paths. During an incident that involves an error in the domain service, it will help greatly to know which code path has led to the error!\nIf we didn\u0026rsquo;t know the code path, we\u0026rsquo;d be tempted to make guesses during an incident investigation that lead nowhere.\nSo, we should add the code path to the logs! Here\u0026rsquo;s how we can do this with Spring Boot.\nAdding the Code Path for Incoming Web Requests In Java, the SLF4J logging library provides the MDC class (Mapped Diagnostic Context). This class allows us to add custom fields to all log events that are emitted in the same thread.\nTo add a custom field for each incoming web request, we need to build an interceptor that adds the codePath field at the start of each request, before our web controller code is even executed.\nWe can do this by implementing the HandlerInterceptor interface:\npublic class LoggingInterceptor implements HandlerInterceptor { @Override public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception { if (request.getHeader(\u0026#34;X-CUSTOM-HEADER\u0026#34;) != null) { MDC.put(\u0026#34;codePath\u0026#34;, \u0026#34;3rdParty\u0026#34;); } else { MDC.put(\u0026#34;codePath\u0026#34;, \u0026#34;user\u0026#34;); } return true; } @Override public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) { MDC.remove(\u0026#34;codePath\u0026#34;); } } In the preHandle() method, we call MDC.put() to add the codePath field to all log events.
If the request contains a header that identifies that the request comes from the 3rd party system, we set the code path to 3rdParty, otherwise, we assume the request is coming from a user\u0026rsquo;s browser.\nDepending on the application, the logic might be vastly different here, of course, this is just an example.\nIn the postHandle() method we shouldn\u0026rsquo;t forget to call MDC.remove() to remove all previously set fields again because otherwise, the thread would still keep those fields, even when it goes back to a thread pool, and the next request served by that thread might still have those fields set to the wrong values.\nTo activate the interceptor, we need to add it to the InterceptorRegistry:\n@Component public class WebConfigurer implements WebMvcConfigurer { @Override public void addInterceptors(InterceptorRegistry registry) { registry.addInterceptor(new LoggingInterceptor()); } } That\u0026rsquo;s it. All log events that are emitted in the thread of an incoming web request now have the codePath field.\nIf any request creates and starts a child thread, make sure to call MDC.put() at the start of the new thread\u0026rsquo;s life, as well.\nCheck out the log querying section to see how we can use the code path in log queries.\nAdding the Code Path in a Scheduled Job In Spring Boot, we can easily create scheduled jobs by using the @Scheduled and @EnableScheduling annotations.\nTo add the code path to the logs, we need to make sure to call MDC.put() as the first thing in the scheduled method:\n@Component public class Timer { private final DomainService domainService; private static final Logger logger = LoggerFactory.getLogger(Timer.class); public Timer(DomainService domainService) { this.domainService = domainService; } @Scheduled(fixedDelay = 5000) void scheduledHello() { MDC.put(\u0026#34;codePath\u0026#34;, \u0026#34;timer\u0026#34;); logger.info(\u0026#34;log event from timer\u0026#34;); // do some actual work  
MDC.remove(\u0026#34;codePath\u0026#34;); } } This way, all log events emitted from the thread that executes the scheduled method will contain the field codePath. We could also create our own @Job annotation or similar that does that job for us, but that is outside of the scope of this article.\nTo make the logs from a scheduled job even more valuable, we could add additional fields:\n job_status: A status indicating whether the job was successful or not. job_id: The ID of the job that was executed. job_records_processed: If the job does some batch processing, it could log the number of records processed. \u0026hellip;  With these fields in the logs, we can query the log server for a lot of useful information!\nAdd a User ID to User-Initiated Log Events The bulk of work in a typical web application is done in web requests that come from a user\u0026rsquo;s browser and trigger a thread in the application that creates a response for the browser.\nImagine some error happened and the stack trace in the logs reveals that it has something to do with a specific user configuration. 
But we don\u0026rsquo;t know which user the request was coming from!\nTo alleviate this, it\u0026rsquo;s immensely helpful to have some kind of user ID in all log events that have been triggered by a user.\nSince we know that incoming web requests are mostly coming directly from a user\u0026rsquo;s browser, we can add the username field in the same LoggingInterceptor that we\u0026rsquo;ve created to add the codePath field:\npublic class LoggingInterceptor implements HandlerInterceptor { @Override public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception { Object principal = SecurityContextHolder.getContext().getAuthentication().getPrincipal(); if (principal instanceof UserDetails) { String username = ((UserDetails) principal).getUsername(); MDC.put(\u0026#34;username\u0026#34;, username); } else { String username = principal.toString(); MDC.put(\u0026#34;username\u0026#34;, username); } return true; } @Override public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) { MDC.remove(\u0026#34;username\u0026#34;); } } This code assumes we\u0026rsquo;re using Spring Security to manage access to our web application. We\u0026rsquo;re using the SecurityContextHolder to get a hold of the Principal and extract a user name from this to pass it into MDC.put().\nEvery log event emitted from the thread serving the request will now contain the username field with the name of the user.\nWith that field, we can now filter the logs for requests of specific users. 
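MDC can carry fields like username across a whole request because each thread holds its own map of context values. The following stripped-down, framework-free sketch illustrates that per-thread mechanism; it is an illustration of the idea, not SLF4J's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of the per-thread context idea behind SLF4J's MDC.
public class MiniMdc {
    // Each thread gets its own map, so fields never leak between threads.
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) {
        CONTEXT.get().put(key, value);
    }

    public static String get(String key) {
        return CONTEXT.get().get(key);
    }

    public static void remove(String key) {
        CONTEXT.get().remove(key);
    }

    public static void main(String[] args) throws InterruptedException {
        put("username", "alice");

        // Another thread has its own, independent context:
        Thread other = new Thread(() -> {
            put("username", "bob");
            System.out.println("other thread sees: " + get("username"));
        });
        other.start();
        other.join();

        // The main thread's value is untouched by the other thread:
        System.out.println("main thread sees: " + get("username"));
    }
}
```

This per-thread storage is also why the postHandle() cleanup matters: a pooled thread outlives the request, so a field that isn't removed would leak into the next request served by that thread.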
If a user reports an issue, we can filter the logs for their name and reduce the logs we have to sift through immensely.\nDepending on regulations, you might want to log a more opaque user ID instead of the user name.\nCheck out the log querying section to see how we can use the user ID to query logs.\nAdd a Root Cause to Error Log Events When there is an error in our application, we usually log a stack trace. The stack trace helps us to identify the root cause of the error. Without the stack trace, we wouldn\u0026rsquo;t know which code was responsible for the error!\nBut stack traces are very unwieldy if we want to run statistics on the errors in our application. Say we want to know how many errors our application logs in total each day and how many of those are caused by which root cause exception. We\u0026rsquo;d have to export all stack traces from the logs and do some manual filtering magic on them to get an answer to that question!\nIf we add the custom field rootCause to each error log event, however, we can filter the log events by that field and then create a histogram or a pie chart of the different root causes in the UI of the log server without even exporting the data.\nA way of doing this in Spring Boot is to create an @ExceptionHandler:\n@ControllerAdvice public class WebExceptionHandler { private static final Logger logger = LoggerFactory.getLogger(WebExceptionHandler.class); @ExceptionHandler(Exception.class) @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR) public void internalServerError(Exception e) { MDC.put(\u0026#34;rootCause\u0026#34;, getRootCause(e).getClass().getName()); logger.error(\u0026#34;returning 500 (internal server error).\u0026#34;, e); MDC.remove(\u0026#34;rootCause\u0026#34;); } private Throwable getRootCause(Exception e) { Throwable rootCause = e; while (rootCause.getCause() != null \u0026amp;\u0026amp; rootCause.getCause() != rootCause) { rootCause = rootCause.getCause(); } return rootCause; } } We create a class annotated with @ControllerAdvice,
which means that it\u0026rsquo;s valid across all our web controllers.\nWithin the class, we create a method annotated with @ExceptionHandler. This method is called for all exceptions that bubble up to any of our web controllers. It sets the rootCause MDC field to the fully-qualified name of the exception class that caused the error and then logs the stack trace of the exception.\nThat\u0026rsquo;s it. All the log events that print a stack trace will now have a field rootCause and we can filter by this field to learn about the error distribution in our application.\nCheck out the log querying section to see how we can create a chart with the error distribution of our application.\nAdd a Trace ID to all Log Events If we\u0026rsquo;re running more than one service, for example in a microservice environment, things can quickly get complicated when analyzing an error. One service calls another, which calls another service and it\u0026rsquo;s very hard (if at all possible) to trace an error in one service to an error in another service.\nA trace ID helps to connect log events in one service and log events in another service:\nIn the example diagram above, Service 1 is called and generates the trace ID \u0026ldquo;1234\u0026rdquo;. It then calls Services 2 and 3, propagating the same trace ID to them, so that they can add the same trace ID to their log events, making it possible to connect log events across all services by searching for a specific trace ID.\nFor each outgoing request, Service 1 also creates a unique \u0026ldquo;span ID\u0026rdquo;. While a trace spans the whole request/response cycle of Service 1, a span only spans the request/response cycle between one service and another.\nWe could implement a tracing mechanism like this ourselves, but there are tracing standards and tools that use these standards to integrate into tracing systems like Logz.io\u0026rsquo;s distributed tracing feature.\nSo, we\u0026rsquo;ll stick to using a standard tool for this. 
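The IDs themselves are nothing magical: a trace ID is just a random identifier generated where a request enters the system and passed along on every outgoing call, while each hop gets a fresh span ID. A minimal, framework-free sketch of the generation step is shown below; the 16-byte trace ID and 8-byte span ID sizes follow the W3C Trace Context convention and are an assumption here, not something this article prescribes:

```java
import java.security.SecureRandom;

// Minimal sketch of trace/span ID generation (hex-encoded random bytes).
// Real propagation, header handling, and sampling are what tracing tools do.
public class TraceIds {
    private static final SecureRandom RANDOM = new SecureRandom();

    static String randomHex(int numBytes) {
        byte[] bytes = new byte[numBytes];
        RANDOM.nextBytes(bytes);
        StringBuilder sb = new StringBuilder(numBytes * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    public static String newTraceId() { return randomHex(16); } // 32 hex chars
    public static String newSpanId()  { return randomHex(8); }  // 16 hex chars

    public static void main(String[] args) {
        String traceId = newTraceId();
        // The same trace ID travels with every downstream call;
        // each service-to-service hop gets its own span ID:
        System.out.println("traceId=" + traceId + " spanId=" + newSpanId());
        System.out.println("traceId=" + traceId + " spanId=" + newSpanId());
    }
}
```

Generating the IDs is the easy part; propagating them across HTTP calls, thread pools, and message queues reliably is why we hand this off to a standard tool.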
In the Spring Boot world, this is Spring Cloud Sleuth, which we can add to our application by simply adding it to our pom.xml:\n\u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-dependencies\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2020.0.3\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-starter-sleuth\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; This automatically adds trace and span IDs to our logs and propagates them from one service to the next via request headers when using supported HTTP clients. You can read more about Spring Cloud Sleuth in the article \u0026ldquo;Tracing in Distributed Systems with Spring Cloud Sleuth\u0026rdquo;.\nAdd Durations of Certain Code Paths The total duration our application requires to answer a request is an important metric. If it\u0026rsquo;s too slow users are getting frustrated.\nUsually, it\u0026rsquo;s a good idea to expose the request duration as a metric and create dashboards that show histograms and percentiles of the request duration so that we know the health of our application at a glance and maybe even get alerted when a certain threshold is breached.\nWe\u0026rsquo;re not looking at the dashboards all the time, however, and we might be interested not only in the total request duration but in the duration of certain code paths. 
When analyzing logs to investigate an issue, it can be an important clue to know how long a certain path in the code took to execute.\nIn Java, we might do something like this:\nvoid callThirdPartyService() throws InterruptedException { logger.info(\u0026#34;log event from the domain service\u0026#34;); Instant start = Instant.now(); Thread.sleep(2000); // simulating an expensive operation  Duration duration = Duration.between(start, Instant.now()); MDC.put(\u0026#34;thirdPartyCallDuration\u0026#34;, String.valueOf(duration.toMillis())); logger.info(\u0026#34;call to third-party service successful!\u0026#34;); MDC.remove(\u0026#34;thirdPartyCallDuration\u0026#34;); } Say we\u0026rsquo;re calling a third-party service and would like to add the duration in milliseconds to the logs. Using Instant.now() and Duration.between(), we calculate the duration, add it to the MDC, and then create a log event.\nThis log event will now have the field thirdPartyCallDuration which we can filter and search for in our logs. We might, for example, search for instances where this call took especially long.
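One caveat when logging durations with java.time: Duration.getNano() returns only the nanoseconds-of-second component, not the total elapsed time, so a call that took a little over two seconds would log a value that represents just the fractional part. Duration.toMillis() gives the total, as this small stdlib check shows:

```java
import java.time.Duration;

public class DurationPitfall {
    public static void main(String[] args) {
        // A call that took 2 seconds and 5 milliseconds:
        Duration duration = Duration.ofMillis(2005);

        // getNano() returns only the nanoseconds-of-second part (the 5 ms),
        // silently dropping the 2 whole seconds:
        System.out.println(duration.getNano());  // 5000000

        // toMillis() returns the total elapsed time in milliseconds:
        System.out.println(duration.toMillis()); // 2005
    }
}
```

Logging the total in a single, consistent unit also keeps the field directly comparable and sortable in the log server.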
Then, we could use the user ID or trace ID, which we also have as fields on the log event to figure out a pattern when this takes especially long.\nCheck out the log querying section to see how we can filter for long queries using Logz.io.\nQuerying Structured Logs in Logz.io If we have set up logging to Logz.io as described in the article about per-environment logging, we can now query the logs in the Kibana UI provided by Logz.io.\nError Distribution We can, for example, query for all log events that have a value in the rootCause field:\n_exists_: \u0026#34;rootCause\u0026#34; This will bring up a list of error events that have a root cause.\nWe can also create a Visualization in the Logz.io UI to show the distribution of errors in a given time frame:\nThis chart shows that almost half of the errors are caused by a ThingyException, so it might be a good idea to check if this exception can be avoided somehow. If it can\u0026rsquo;t be avoided, we should log it on WARN instead of ERROR to keep the error logs clean.\nError Distribution Across a Code Path Say, for example, that users are complaining that scheduled jobs aren\u0026rsquo;t working correctly. If we have added a job_status field to the scheduled method code, we can filter the logs by those jobs that have failed:\njob_status: \u0026#34;ERROR\u0026#34; To get a more high-level view, we can create another pie chart visualization that shows the distribution of job_status and rootCause:\nWe can now see that the majority of our scheduled jobs are failing! We should add some alerting around this! We can also see which exceptions are the root causes of most of the failed jobs and start to investigate.\nChecking for a User\u0026rsquo;s Errors Or, let\u0026rsquo;s say that the user with the username \u0026ldquo;user\u0026rdquo; has raised a support request specifying a rough date and time when it happened.
We can filter the logs using the query username: user to only show the logs for that user and can quickly zero in on the cause of the user\u0026rsquo;s issue.\nWe can also extend the query to show only log events of that user that have a rootCause to directly learn about what went wrong when.\nusername: \u0026#34;user\u0026#34; AND _exists_: \u0026#34;rootCause\u0026#34; Structure Your Logs This article showed just a few examples of how we can add structure to our log events and make use of that structure while querying the logs. Anything that should later be searchable in the logs should be a custom field in the log events. The fields that make sense to add to the log events highly depend on the application we\u0026rsquo;re building, so make sure to think about what information would help you to analyze the logs when you\u0026rsquo;re writing code.\nYou can find the code samples discussed in this article on GitHub.\n","date":"August 30, 2021","image":"https://reflectoring.io/images/stock/0107-puzzle-1200x628-branded_hu54061c11751e36c4d330c77baa0f8ec2_367477_650x0_resize_q90_box.jpg","permalink":"/structured-logging/","title":"Saving Time with Structured Logging"},{"categories":["Spring Boot"],"contents":"Bean Validation is the de-facto standard for implementing validation logic in the Java ecosystem. It\u0026rsquo;s well integrated with Spring and Spring Boot.\nHowever, there are some pitfalls. This tutorial goes over all major validation use cases and sports code examples for each.\n Example Code This article is accompanied by a working code example on GitHub. Using the Spring Boot Validation Starter Spring Boot\u0026rsquo;s Bean Validation support comes with the validation starter, which we can include into our project (Gradle notation):\nimplementation(\u0026#39;org.springframework.boot:spring-boot-starter-validation\u0026#39;) It\u0026rsquo;s not necessary to add the version number since the Spring Dependency Management Gradle plugin does that for us. 
If you\u0026rsquo;re not using the plugin, you can find the most recent version here.\nHowever, if we have also included the web starter, the validation starter comes for free:\nimplementation(\u0026#39;org.springframework.boot:spring-boot-starter-web\u0026#39;) Note that the validation starter does no more than adding a dependency to a compatible version of hibernate validator, which is the most widely used implementation of the Bean Validation specification.\nBean Validation Basics Very basically, Bean Validation works by defining constraints on the fields of a class by annotating them with certain annotations.\nCommon Validation Annotations Some of the most common validation annotations are:\n @NotNull: to say that a field must not be null. @NotEmpty: to say that a list field must not be empty. @NotBlank: to say that a string field must not be the empty string (i.e. it must have at least one character). @Min and @Max: to say that a numerical field is only valid when its value is above or below a certain value. @Pattern: to say that a string field is only valid when it matches a certain regular expression. @Email: to say that a string field must be a valid email address.  An example of such a class would look like this:\nclass Customer { @Email private String email; @NotBlank private String name; // ... } Validator To validate if an object is valid, we pass it into a Validator which checks if the constraints are satisfied:\nSet\u0026lt;ConstraintViolation\u0026lt;Customer\u0026gt;\u0026gt; violations = validator.validate(customer); if (!violations.isEmpty()) { throw new ConstraintViolationException(violations); } More about using a Validator in the section about validating programmatically.\n@Validated and @Valid In many cases, however, Spring does the validation for us. We don\u0026rsquo;t even need to create a validator object ourselves. Instead, we can let Spring know that we want to have a certain object validated.
This works by using the @Validated and @Valid annotations.\nThe @Validated annotation is a class-level annotation that we can use to tell Spring to validate parameters that are passed into a method of the annotated class. We\u0026rsquo;ll learn more about how to use it in the section about validating path variables and request parameters.\nWe can put the @Valid annotation on method parameters and fields to tell Spring that we want a method parameter or field to be validated. We\u0026rsquo;ll learn all about this annotation in the section about validating a request body.\nValidating Input to a Spring MVC Controller Let\u0026rsquo;s say we have implemented a Spring REST controller and want to validate the input that\u0026rsquo;s passed in by a client. There are three things we can validate for any incoming HTTP request:\n the request body, variables within the path (e.g. id in /foos/{id}), and query parameters.  Let\u0026rsquo;s look at each of those in more detail.\nValidating a Request Body In POST and PUT requests, it\u0026rsquo;s common to pass a JSON payload within the request body. Spring automatically maps the incoming JSON to a Java object. Now, we want to check if the incoming Java object meets our requirements.\nThis is our incoming payload class:\nclass Input { @Min(1) @Max(10) private int numberBetweenOneAndTen; @Pattern(regexp = \u0026#34;^[0-9]{1,3}\\\\.[0-9]{1,3}\\\\.[0-9]{1,3}\\\\.[0-9]{1,3}$\u0026#34;) private String ipAddress; // ... } We have an int field that must have a value between 1 and 10, inclusively, as defined by the @Min and @Max annotations.
We also have a String field that must contain an IP address, as defined by the regex in the @Pattern annotation (the regex actually still allows invalid IP addresses with octets greater than 255, but we\u0026rsquo;re going to fix that later in the tutorial, when we\u0026rsquo;re building a custom validator).\nTo validate the request body of an incoming HTTP request, we annotate the request body with the @Valid annotation in a REST controller:\n@RestController class ValidateRequestBodyController { @PostMapping(\u0026#34;/validateBody\u0026#34;) ResponseEntity\u0026lt;String\u0026gt; validateBody(@Valid @RequestBody Input input) { return ResponseEntity.ok(\u0026#34;valid\u0026#34;); } } We simply have added the @Valid annotation to the Input parameter, which is also annotated with @RequestBody to mark that it should be read from the request body. By doing this, we\u0026rsquo;re telling Spring to pass the object to a Validator before doing anything else.\nUse @Valid on Complex Types If the Input class contains a field with another complex type that should be validated, this field, too, needs to be annotated with @Valid.  If the validation fails, it will trigger a MethodArgumentNotValidException. 
By default, Spring will translate this exception to a HTTP status 400 (Bad Request).\nWe can verify this behavior with an integration test:\n@ExtendWith(SpringExtension.class) @WebMvcTest(controllers = ValidateRequestBodyController.class) class ValidateRequestBodyControllerTest { @Autowired private MockMvc mvc; @Autowired private ObjectMapper objectMapper; @Test void whenInputIsInvalid_thenReturnsStatus400() throws Exception { Input input = invalidInput(); String body = objectMapper.writeValueAsString(input); mvc.perform(post(\u0026#34;/validateBody\u0026#34;) .contentType(\u0026#34;application/json\u0026#34;) .content(body)) .andExpect(status().isBadRequest()); } } You can find more details about testing Spring MVC controllers in my article about the @WebMvcTest annotation.\nValidating Path Variables and Request Parameters Validating path variables and request parameters works a little differently.\nWe\u0026rsquo;re not validating complex Java objects in this case, since path variables and request parameters are primitive types like int or their counterpart objects like Integer or String.\nInstead of annotating a class field like above, we\u0026rsquo;re adding a constraint annotation (in this case @Min) directly to the method parameter in the Spring controller:\n@RestController @Validated class ValidateParametersController { @GetMapping(\u0026#34;/validatePathVariable/{id}\u0026#34;) ResponseEntity\u0026lt;String\u0026gt; validatePathVariable( @PathVariable(\u0026#34;id\u0026#34;) @Min(5) int id) { return ResponseEntity.ok(\u0026#34;valid\u0026#34;); } @GetMapping(\u0026#34;/validateRequestParameter\u0026#34;) ResponseEntity\u0026lt;String\u0026gt; validateRequestParameter( @RequestParam(\u0026#34;param\u0026#34;) @Min(5) int param) { return ResponseEntity.ok(\u0026#34;valid\u0026#34;); } } Note that we have to add Spring\u0026rsquo;s @Validated annotation to the controller at class level to tell Spring to evaluate the constraint annotations on method 
parameters.\nThe @Validated annotation is only evaluated on class level in this case, even though it\u0026rsquo;s allowed to be used on methods (we\u0026rsquo;ll learn why it\u0026rsquo;s allowed on method level when discussing validation groups later).\nIn contrast to request body validation, a failed validation will trigger a ConstraintViolationException instead of a MethodArgumentNotValidException. Spring does not register a default exception handler for this exception, so it will by default cause a response with HTTP status 500 (Internal Server Error).\nIf we want to return an HTTP status 400 instead (which makes sense, since the client provided an invalid parameter, making it a bad request), we can add a custom exception handler to our controller:\n@RestController @Validated class ValidateParametersController { // request mapping method omitted  @ExceptionHandler(ConstraintViolationException.class) @ResponseStatus(HttpStatus.BAD_REQUEST) ResponseEntity\u0026lt;String\u0026gt; handleConstraintViolationException(ConstraintViolationException e) { return new ResponseEntity\u0026lt;\u0026gt;(\u0026#34;not valid due to validation error: \u0026#34; + e.getMessage(), HttpStatus.BAD_REQUEST); } } Later in this tutorial we will look at how to return a structured error response that contains details on all failed validations for the client to inspect.\nWe can verify the validation behavior with an integration test:\n@ExtendWith(SpringExtension.class) @WebMvcTest(controllers = ValidateParametersController.class) class ValidateParametersControllerTest { @Autowired private MockMvc mvc; @Test void whenPathVariableIsInvalid_thenReturnsStatus400() throws Exception { mvc.perform(get(\u0026#34;/validatePathVariable/3\u0026#34;)) .andExpect(status().isBadRequest()); } @Test void whenRequestParameterIsInvalid_thenReturnsStatus400() throws Exception { mvc.perform(get(\u0026#34;/validateRequestParameter\u0026#34;) .param(\u0026#34;param\u0026#34;, \u0026#34;3\u0026#34;)) 
.andExpect(status().isBadRequest()); } } Validating Input to a Spring Service Method Instead of (or in addition to) validating input on the controller level, we can also validate the input to any Spring component. In order to do this, we use a combination of the @Validated and @Valid annotations:\n@Service @Validated class ValidatingService{ void validateInput(@Valid Input input){ // do something  } } Again, the @Validated annotation is only evaluated on class level, so don\u0026rsquo;t put it on a method in this use case.\nHere\u0026rsquo;s a test verifying the validation behavior:\n@ExtendWith(SpringExtension.class) @SpringBootTest class ValidatingServiceTest { @Autowired private ValidatingService service; @Test void whenInputIsInvalid_thenThrowsException(){ Input input = invalidInput(); assertThrows(ConstraintViolationException.class, () -\u0026gt; { service.validateInput(input); }); } } Validating JPA Entities The last line of defense for validation is the persistence layer. By default, Spring Data uses Hibernate underneath, which supports Bean Validation out of the box.\nIs the Persistence Layer the right Place for Validation?  We usually don't want to do validation as late as in the persistence layer because it means that the business code above has worked with potentially invalid objects which may lead to unforeseen errors. More on this topic in my article about Bean Validation anti-patterns.  Let\u0026rsquo;s say we want to store objects of our Input class to the database. First, we add the necessary JPA annotation @Entity and add an ID field:\n@Entity public class Input { @Id @GeneratedValue private Long id; @Min(1) @Max(10) private int numberBetweenOneAndTen; @Pattern(regexp = \u0026#34;^[0-9]{1,3}\\\\.[0-9]{1,3}\\\\.[0-9]{1,3}\\\\.[0-9]{1,3}$\u0026#34;) private String ipAddress; // ...  
} Then, we create a Spring Data repository that provides us with methods to persist and query for Input objects:\npublic interface ValidatingRepository extends CrudRepository\u0026lt;Input, Long\u0026gt; {} By default, any time we use the repository to store an Input object whose constraint annotations are violated, we\u0026rsquo;ll get a ConstraintViolationException, as this integration test demonstrates:\n@ExtendWith(SpringExtension.class) @DataJpaTest class ValidatingRepositoryTest { @Autowired private ValidatingRepository repository; @Autowired private EntityManager entityManager; @Test void whenInputIsInvalid_thenThrowsException() { Input input = invalidInput(); assertThrows(ConstraintViolationException.class, () -\u0026gt; { repository.save(input); entityManager.flush(); }); } } You can find more details about testing Spring Data repositories in my article about the @DataJpaTest annotation.\nNote that Bean Validation is only triggered by Hibernate once the EntityManager is flushed. Hibernate flushes the EntityManager automatically under certain circumstances, but in the case of our integration test we have to do this by hand.\nIf for any reason we want to disable Bean Validation in our Spring Data repositories, we can set the Spring Boot property spring.jpa.properties.javax.persistence.validation.mode to none.\nA Custom Validator with Spring Boot If the available constraint annotations do not suffice for our use cases, we might want to create one ourselves.\nIn the Input class from above, we used a regular expression to validate that a String is a valid IP address. However, the regular expression is not complete: it allows octets with values greater than 255 (i.e. 
\u0026ldquo;111.111.111.333\u0026rdquo; would be considered valid).\nLet\u0026rsquo;s fix this by implementing a validator that performs this check in Java instead of with a regular expression (yes, I know that we could just use a more complex regular expression to achieve the same result, but we like to implement validations in Java, don\u0026rsquo;t we?).\nFirst, we create the custom constraint annotation IpAddress:\n@Target({ FIELD }) @Retention(RUNTIME) @Constraint(validatedBy = IpAddressValidator.class) @Documented public @interface IpAddress { String message() default \u0026#34;{IpAddress.invalid}\u0026#34;; Class\u0026lt;?\u0026gt;[] groups() default { }; Class\u0026lt;? extends Payload\u0026gt;[] payload() default { }; } A custom constraint annotation needs all of the following:\n the parameter message, pointing to a property key in ValidationMessages.properties, which is used to resolve a message in case of violation, the parameter groups, allowing us to define under which circumstances this validation is to be triggered (we\u0026rsquo;re going to talk about validation groups later), the parameter payload, allowing us to define a payload to be passed with this validation (since this is a rarely used feature, we\u0026rsquo;ll not cover it in this tutorial), and a @Constraint annotation pointing to an implementation of the ConstraintValidator interface.  
The validator implementation looks like this:\nclass IpAddressValidator implements ConstraintValidator\u0026lt;IpAddress, String\u0026gt; { @Override public boolean isValid(String value, ConstraintValidatorContext context) { Pattern pattern = Pattern.compile(\u0026#34;^([0-9]{1,3})\\\\.([0-9]{1,3})\\\\.([0-9]{1,3})\\\\.([0-9]{1,3})$\u0026#34;); Matcher matcher = pattern.matcher(value); try { if (!matcher.matches()) { return false; } else { for (int i = 1; i \u0026lt;= 4; i++) { int octet = Integer.valueOf(matcher.group(i)); if (octet \u0026gt; 255) { return false; } } return true; } } catch (Exception e) { return false; } } } We can now use the @IpAddress annotation just like any other constraint annotation:\nclass InputWithCustomValidator { @IpAddress private String ipAddress; // ...  } Validating Programmatically There may be cases when we want to invoke validation programmatically instead of relying on Spring\u0026rsquo;s built-in Bean Validation support. In this case, we can use the Bean Validation API directly.\nWe create a Validator by hand and invoke it to trigger a validation:\nclass ProgrammaticallyValidatingService { void validateInput(Input input) { ValidatorFactory factory = Validation.buildDefaultValidatorFactory(); Validator validator = factory.getValidator(); Set\u0026lt;ConstraintViolation\u0026lt;Input\u0026gt;\u0026gt; violations = validator.validate(input); if (!violations.isEmpty()) { throw new ConstraintViolationException(violations); } } } This requires no Spring support whatsoever.\nHowever, Spring Boot provides us with a pre-configured Validator instance. 
We can inject this instance into our service and use this instance instead of creating one by hand:\n@Service class ProgrammaticallyValidatingService { private Validator validator; ProgrammaticallyValidatingService(Validator validator) { this.validator = validator; } void validateInputWithInjectedValidator(Input input) { Set\u0026lt;ConstraintViolation\u0026lt;Input\u0026gt;\u0026gt; violations = validator.validate(input); if (!violations.isEmpty()) { throw new ConstraintViolationException(violations); } } } When this service is instantiated by Spring, it will automatically have a Validator instance injected into the constructor.\nThe following unit test proves that both methods above work as expected:\n@ExtendWith(SpringExtension.class) @SpringBootTest class ProgrammaticallyValidatingServiceTest { @Autowired private ProgrammaticallyValidatingService service; @Test void whenInputIsInvalid_thenThrowsException(){ Input input = invalidInput(); assertThrows(ConstraintViolationException.class, () -\u0026gt; { service.validateInput(input); }); } @Test void givenInjectedValidator_whenInputIsInvalid_thenThrowsException(){ Input input = invalidInput(); assertThrows(ConstraintViolationException.class, () -\u0026gt; { service.validateInputWithInjectedValidator(input); }); } } Using Validation Groups to Validate Objects Differently for Different Use Cases Often, certain objects are shared between different use cases.\nLet\u0026rsquo;s take the typical CRUD operations, for example: the \u0026ldquo;Create\u0026rdquo; use case and the \u0026ldquo;Update\u0026rdquo; use case will most probably both take the same object type as input. However, there may be validations that should be triggered under different circumstances:\n only in the \u0026ldquo;Create\u0026rdquo; use case, only in the \u0026ldquo;Update\u0026rdquo; use case, or in both use cases.  
The Bean Validation feature that allows us to implement validation rules like this is called \u0026ldquo;Validation Groups\u0026rdquo;.\nWe have already seen that all constraint annotations must have a groups field. This can be used to pass any classes that each define a certain validation group that should be triggered.\nFor our CRUD example, we simply define two marker interfaces OnCreate and OnUpdate:\ninterface OnCreate {} interface OnUpdate {} We can then use these marker interfaces with any constraint annotation like this:\nclass InputWithGroups { @Null(groups = OnCreate.class) @NotNull(groups = OnUpdate.class) private Long id; // ...  } This will make sure that the ID is empty in our \u0026ldquo;Create\u0026rdquo; use case and that it\u0026rsquo;s not empty in our \u0026ldquo;Update\u0026rdquo; use case.\nSpring supports validation groups with the @Validated annotation:\n@Service @Validated class ValidatingServiceWithGroups { @Validated(OnCreate.class) void validateForCreate(@Valid InputWithGroups input){ // do something  } @Validated(OnUpdate.class) void validateForUpdate(@Valid InputWithGroups input){ // do something  } } Note that the @Validated annotation must again be applied to the whole class. 
To define which validation group should be active, it must also be applied at method level.\nTo make certain that the above works as expected, we can implement a unit test:\n@ExtendWith(SpringExtension.class) @SpringBootTest class ValidatingServiceWithGroupsTest { @Autowired private ValidatingServiceWithGroups service; @Test void whenInputIsInvalidForCreate_thenThrowsException() { InputWithGroups input = validInput(); input.setId(42L); assertThrows(ConstraintViolationException.class, () -\u0026gt; { service.validateForCreate(input); }); } @Test void whenInputIsInvalidForUpdate_thenThrowsException() { InputWithGroups input = validInput(); input.setId(null); assertThrows(ConstraintViolationException.class, () -\u0026gt; { service.validateForUpdate(input); }); } } Careful with Validation Groups Using validation groups can easily become an anti-pattern since we're mixing concerns. With validation groups the validated entity has to know the validation rules for all the use cases (groups) it is used in. More on this topic in my article about Bean Validation anti-patterns.  Handling Validation Errors When a validation fails, we want to return a meaningful error message to the client. In order to enable the client to display a helpful error message, we should return a data structure that contains an error message for each validation that failed.\nFirst, we need to define that data structure. We\u0026rsquo;ll call it ValidationErrorResponse and it contains a list of Violation objects:\npublic class ValidationErrorResponse { private List\u0026lt;Violation\u0026gt; violations = new ArrayList\u0026lt;\u0026gt;(); // ... } public class Violation { private final String fieldName; private final String message; // ... } Then, we create a global ControllerAdvice that handles all ConstraintViolationExceptions that bubble up to the controller level. 
In order to catch validation errors for request bodies as well, we will also handle MethodArgumentNotValidExceptions:\n@ControllerAdvice class ErrorHandlingControllerAdvice { @ExceptionHandler(ConstraintViolationException.class) @ResponseStatus(HttpStatus.BAD_REQUEST) @ResponseBody ValidationErrorResponse onConstraintValidationException( ConstraintViolationException e) { ValidationErrorResponse error = new ValidationErrorResponse(); for (ConstraintViolation violation : e.getConstraintViolations()) { error.getViolations().add( new Violation(violation.getPropertyPath().toString(), violation.getMessage())); } return error; } @ExceptionHandler(MethodArgumentNotValidException.class) @ResponseStatus(HttpStatus.BAD_REQUEST) @ResponseBody ValidationErrorResponse onMethodArgumentNotValidException( MethodArgumentNotValidException e) { ValidationErrorResponse error = new ValidationErrorResponse(); for (FieldError fieldError : e.getBindingResult().getFieldErrors()) { error.getViolations().add( new Violation(fieldError.getField(), fieldError.getDefaultMessage())); } return error; } } What we\u0026rsquo;re doing here is simply reading information about the violations out of the exceptions and translating them into our ValidationErrorResponse data structure.\nNote the @ControllerAdvice annotation which makes the exception handler methods available globally to all controllers within the application context.\nConclusion In this tutorial, we\u0026rsquo;ve gone through all major validation features we might need when building an application with Spring Boot.\nIf you want to get your hands dirty on the example code, have a look at the github repository.\nUpdate History  2021-08-05: updated and polished the article a bit. 2018-10-25: added a word of caution on using bean validation in the persistence layer (see this thread on Twitter).  
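With these exception handlers in place, an invalid request body produces a response built from our ValidationErrorResponse class. As a sketch (the exact message text depends on the validator's configured messages), a violation of the @Min(1) constraint on numberBetweenOneAndTen might be rendered like this:

```json
{
  "violations": [
    {
      "fieldName": "numberBetweenOneAndTen",
      "message": "must be greater than or equal to 1"
    }
  ]
}
```

A client can iterate over the violations array and display each message next to the corresponding form field.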
","date":"August 5, 2021","image":"https://reflectoring.io/images/stock/0051-stop-1200x628-branded_hu8c71944083c02ce8637d75428e8551b3_133770_650x0_resize_q90_box.jpg","permalink":"/bean-validation-with-spring-boot/","title":"Validation with Spring Boot - the Complete Guide"},{"categories":["Software Craft"],"contents":"Protecting a web application against various security threats and attacks is vital for the health and reputation of any web application. Cross-Site Request Forgery (CSRF or XSRF) is a type of attack on websites.\nWith a successful CSRF attack, an attacker can mislead an authenticated user in a website to perform actions with inputs set by the attacker.\nThis can have serious consequences like the loss of user confidence in the website and even fraud or theft of financial resources if the website under attack belongs to any financial realm.\nIn this article, we will understand:\n What constitutes a Cross-Site Request Forgery (CSRF) attack How attackers craft a CSRF attack What makes websites vulnerable to a CSRF attack What are some methods to secure websites from CSRF attack   Example Code This article is accompanied by a working code example on GitHub. What is CSRF? Modern websites often need to fetch data from other websites for various purposes. For example, the website might call a Google Map API to display a map of the user\u0026rsquo;s current location or render a video from YouTube. These are examples of cross-site requests and can also be a potential target of CSRF attacks.\nCSRF attacks target websites that trust some form of authentication by users before they perform any actions. For example, a user logs into an e-commerce site and makes a payment after purchasing goods. The trust is established when the user is authenticated during login and the payment function in the website uses this trust to identify the user.\nAttackers exploit this trust and send forged requests on behalf of the authenticated user. 
This illustration shows the making of a CSRF attack:\nAs represented in this diagram, a Cross-Site Request Forgery attack is roughly composed of two parts:\n  Cross-Site: The user is logged into a website and is tricked into clicking a link in a different website that belongs to the attacker. The link is crafted by the attacker in a way that it will submit a request to the website the user is logged in to. This represents the \u0026ldquo;cross-site\u0026rdquo; part of CSRF.\n  Request Forgery: The request sent to the user\u0026rsquo;s website is forged with values crafted by the attacker. When the victim user opens the link in the same browser, a forged request is sent to the website with values set by the attacker along with all the cookies that the victim has associated with that website.\n  CSRF is a common form of attack and has ranked several times in the OWASP Top Ten (Open Web Application Security Project). The OWASP Top Ten represent a broad consensus about the most critical security risks to web applications.\nThe OWASP website defines CSRF as:\n Cross-Site Request Forgery (CSRF) is an attack that forces an end user to execute unwanted actions on a web application in which they’re currently authenticated. With a little help from social engineering (such as sending a link via email or chat), an attacker may trick the users of a web application into executing actions of the attacker’s choosing.\n Example of a CSRF Attack Let us now understand the anatomy of a CSRF attack with the help of an example:\n Suppose a user logs in to a website www.myfriendlybank.com from a login page. The website is vulnerable to CSRF attacks. The web application for the website authenticates the user and sends back a cookie in the response. The web application populates the cookie with the information that the user is authenticated. As part of a web browser\u0026rsquo;s behavior concerning cookie handling, it will send this cookie to the server for all subsequent interactions. 
The user next visits a malicious website without logging out of myfriendlybank.com. This malicious site contains a banner that looks like this:  The HTML used to create the banner has the below contents:\n\u0026lt;h1\u0026gt;Congratulations. You just won a bonus of 1 million dollars!!!\u0026lt;/h1\u0026gt; \u0026lt;form action=\u0026#34;http://myfriendlybank.com/account/transfer\u0026#34; method=\u0026#34;post\u0026#34;\u0026gt; \u0026lt;input type=\u0026#34;hidden\u0026#34; name=\u0026#34;TransferAccount\u0026#34; value=\u0026#34;9876865434\u0026#34; /\u0026gt; \u0026lt;input type=\u0026#34;hidden\u0026#34; name=\u0026#34;Amount\u0026#34; value=\u0026#34;1000\u0026#34; /\u0026gt; \u0026lt;input type=\u0026#34;submit\u0026#34; value=\u0026#34;Click here to claim your bonus\u0026#34;/\u0026gt; \u0026lt;/form\u0026gt; We can notice in this HTML that the form action posts to the vulnerable website myfriendlybank.com instead of the malicious website. In this example, the attacker sets the request parameters: TransferAccount and Amount to values that are unknown to the actual user.\n The user is enticed to claim the bonus by visiting the malicious website and clicking the submit button.\n  On form submit after the user clicks the submit button, the browser sends the user\u0026rsquo;s authentication cookie to the web application that was received after login to the website in step 2.\n  Since the website is vulnerable to CSRF attacks, the forged request with the user\u0026rsquo;s authentication cookie is processed. Forged requests can be sent for all actions that an authenticated user is allowed to do on the website. 
In this example, the forged request transfers the amount to the attacker\u0026rsquo;s account.\n  Although this example requires the user to click the submit button, the malicious website could have run JavaScript to submit the form without the user knowing anything about it.\nThis example can be extended to scenarios where an attacker can perform additional damaging actions like changing the user\u0026rsquo;s password and registered email address, which could block their access completely, depending on the user\u0026rsquo;s permissions in the website.\nHow Does CSRF Work? As explained earlier, a CSRF attack leverages the implicit trust placed in user session cookies by many web applications.\nIn these applications, once the user authenticates, a session cookie is created and all subsequent transactions for that session are authenticated using that cookie, including potential actions initiated by an attacker by \u0026ldquo;riding\u0026rdquo; the existing session cookie. For this reason, CSRF is also called \u0026ldquo;Session Riding\u0026rdquo;.\nRiding the Session Cookie A CSRF attack exploits the behavior of a type of cookie called a session cookie, shared between a browser and server. HTTP requests are stateless, so the server cannot distinguish between two requests sent by a browser.\nBut there are many scenarios where we want the server to be able to relate one HTTP request with another. For example, a login request followed by a request to check account balance or transfer funds. The server will only allow these requests if the login request was successful. We say that this group of requests belongs to a session.\nCookies are used to hold this session information. The server packages the session information for a particular client in a cookie and sends it to the client\u0026rsquo;s browser. 
For each new request, the browser re-identifies itself by sending the cookie (with the session key) back to the server.\nThe attacker hijacks (or rides) this cookie to trick the user into sending requests crafted by the attacker to the server.\nConstructing a CSRF Attack The broad sequence of steps followed by the attacker to construct a CSRF attack include the following:\n Identifying and exploring the vulnerable website for functions of interest that can be exploited Building an Exploit URL Creating an Inducement for the Victim to open the Exploit URL  Let us understand each step in greater detail.\nIdentifying and Exploring the Vulnerable Website Before planning a CSRF attack, the attacker needs to identify pieces of functionality that are of interest - for example, fund transfers. The attacker also needs to know a valid URL in the website, along with the corresponding patterns of valid requests accepted by the URL.\nThis URL should cause a state-changing action in the target application. Some examples of state-changing actions are:\n Update account balance Create a customer record Transfer money  In contrast to state-changing actions, an inquiry does not change any state in the server. For example, viewing a user profile or viewing the account balance.\nThe attacker also needs to find the right values for the URL parameters. Otherwise, the target application might reject the forged request.\nSome common techniques used to explore the vulnerable website are:\n View HTML Source: Check the HTML source of web pages to identify links or buttons that contain functions of interest. Web Application Debugging Tools: Analyze the information exchanged between the client and the server using web application debugging tools such as WebScarab, and Tamper Dev. Network Sniffing Tools: Analyze the information exchanged between the client and the server with a network sniffing tool such as Wireshark.  
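The session-riding mechanism described above can be sketched in a few lines of Java. This is a simplified model of our own (the class and method names are illustrative, not from any framework): the server resolves a user purely from the session cookie, so it cannot tell a legitimate request from a forged one sent by the same browser.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of a server-side session store: the server trusts any
// request that carries a known session cookie.
class SessionStore {
    private final Map<String, String> sessions = new HashMap<>();

    // Called after a successful login: associate a session ID with a user.
    void put(String sessionId, String username) {
        sessions.put(sessionId, username);
    }

    // Called for every subsequent request: the server only sees the cookie
    // value, not who (or what) actually triggered the request.
    String userFor(String sessionCookie) {
        return sessions.get(sessionCookie);
    }
}

public class SessionRidingDemo {
    public static void main(String[] args) {
        SessionStore store = new SessionStore();
        store.put("abc123", "alice");

        // A legitimate request from Alice's browser carries her cookie...
        System.out.println(store.userFor("abc123")); // prints "alice"

        // ...and a forged request triggered by the attacker's page is sent by
        // the same browser with the same cookie, so it also resolves to
        // "alice" -- the server cannot tell the difference.
        System.out.println(store.userFor("abc123")); // prints "alice"
    }
}
```

This is exactly the gap that the defenses below close: the server must demand something the attacker's page cannot supply.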
For example, let us assume that the attacker has identified a website at https://myfriendlybank.com to try a CSRF attack. The attacker explored this website using the above techniques and found a CSRF-vulnerable URL https://myfriendlybank.com/account/transfer that is used to transfer funds.\nBuilding an Exploit URL The attacker will next try to build an exploit URL to share with the victim. Let us assume that the transfer function in the application is built using a GET method to submit a transfer request. Accordingly, a legitimate request to transfer 100 USD to another account with account number 1234567 will look like this:\nGET https://myfriendlybank.com/account/transfer?amount=100\u0026amp;accountNumber=1234567 The attacker will create an exploit URL to transfer 15,000 USD to an account with account number 4567876, probably belonging to the attacker:\nGET https://myfriendlybank.com/account/transfer?amount=15000\u0026amp;accountNumber=4567876 If the victim clicks this exploit URL, 15,000 USD will get transferred to the attacker\u0026rsquo;s account.\nTrick the Victim into Clicking the Exploit URL The attacker then creates an enticement and uses social engineering methods to trick the victim user into clicking the malicious URL. Some examples of these methods are:\n including the exploit URL in HTML image elements placing the exploit URL on pages that are often accessed by the victim user while being logged into the application sending the exploit URL through email.  The following is an example of an image with an exploit URL:\n\u0026lt;img src=“http://myfriendlybank.com/account/transfer?amount=5000\u0026amp;accountNumber=425654” width=“0” height=“0”\u0026gt; This scenario includes an image tag with zero dimensions embedded in an attacker-crafted email sent to the victim user. 
Upon receiving and opening the email, the victim user\u0026rsquo;s browser will load the HTML containing the image.\nThe IMG tag of the image will make a GET request to the link in its src attribute. Since browsers send cookies with requests by default, the request is authenticated, even though it is sent from a different origin than the bank’s website.\nAs a result, without the victim user\u0026rsquo;s permission, a forged request crafted by the attacker is sent to the web application at myfriendlybank.com.\nIf the victim user has an active session opened with myfriendlybank.com, the application would treat this as an authorized account transfer request coming from the victim user. It would then transfer an amount of 5000 to the account 425654 specified by the attacker.\nPreventing CSRF Attacks To prevent CSRF attacks, web applications need to build mechanisms to distinguish a legitimate request by a trusted user from a forged request crafted by an attacker but sent by the trusted user.\nAll the defenses against CSRF attacks are built around this principle of sending something in the request that the forged request is unable to provide. Let us look at a few of those.\nIdentifying Legitimate Requests with a CSRF Token An (anti-)CSRF token is a type of server-side CSRF protection. It is a random string shared between the user’s browser and the web application. The CSRF token is usually stored in a session variable or data store. On an HTML page, it is typically included in a hidden field or an HTTP request header that is sent with the request.\nAn attacker creating a forged request will not have any knowledge about the CSRF token. 
So the web application will reject any request that does not carry a matching value of the CSRF token it had shared with the browser.\nThere are two common implementation techniques for CSRF tokens:\n Synchronizer Token Pattern, where the web application is stateful and stores the token Double Submit Cookie, where the web application is stateless  Synchronizer Token Pattern A random token is generated by the web application and sent to the browser. The token can be generated once per user session or for each request. Per-request tokens are more secure than per-session tokens, as the time range for an attacker to exploit stolen tokens is minimal.\nAs we can see in this sequence diagram, when the input form is requested, it is initialized with a random token generated by the web application. The web application stores the generated token either in a data store or in memory in an HTTP session.\nWhen the input form is submitted, the token is sent as a request parameter. On receiving the request, the web application matches the token received as a request parameter with the token stored in the token store. The request is processed only if the two values match.\nDouble Submit Cookie Pattern When using the Double Submit Cookie pattern, the token is not stored by the web application. Instead, the web application sets the token in a cookie. The browser should be able to read the token from the cookie and send it as a request parameter in subsequent requests.\nIn this sequence diagram, when the input form is requested, the web application generates a random token and sets it in a cookie. The browser reads the token from the cookie and sends it as a request parameter when submitting the form.\nOn receiving the request, the web application verifies that the cookie value and the value sent as a request parameter match. 
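Both token patterns boil down to generating an unguessable random string and later comparing two copies of it. Here is a minimal sketch in plain Java (the class and method names are our own; real frameworks such as Spring Security handle this for us). Note the constant-time comparison, which avoids leaking information through timing differences:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class CsrfTokens {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate an unguessable random token, e.g. once per session or per request.
    static String newToken() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Synchronizer token pattern: compare the token from the request with the
    // one stored server-side. Double submit cookie pattern: compare the token
    // from the request parameter with the one from the cookie. Either way,
    // MessageDigest.isEqual performs a constant-time comparison.
    static boolean tokensMatch(String expected, String fromRequest) {
        if (expected == null || fromRequest == null) {
            return false;
        }
        return MessageDigest.isEqual(expected.getBytes(), fromRequest.getBytes());
    }

    public static void main(String[] args) {
        String stored = newToken();   // kept in the session (or in a cookie)
        String submitted = stored;    // echoed back by the legitimate form
        System.out.println(tokensMatch(stored, submitted));  // true
        System.out.println(tokensMatch(stored, newToken())); // false: forged token
    }
}
```

A forged request fails the check because the attacker's page has no way to read the stored token and include it in the request.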
If both values match, the web application accepts it as a legitimate request and processes it.\nThis cookie must be stored separately from the cookie used as a session identifier.\nUsing the SameSite Flag in Cookies The SameSite flag in cookies is a relatively new method of preventing CSRF attacks and improving web application security. In an earlier example, we saw that the website controlled by the attacker could send a request to https://myfriendlybank.com/ together with a session cookie. This session cookie is unique for every user, so the web application uses it to distinguish between users and determine if they are logged in.\nIf the session cookie is marked as a SameSite cookie, it is only sent along with requests that originate from the same domain. Therefore, when http://myfriendlybank.com wants to make a POST request to http://myfriendlybank.com/transfer, it is allowed.\nHowever, the website controlled by the attacker with a domain like http://malicious.com/ cannot send HTTP requests to http://myfriendlybank.com/transfer. This is because the request originates from a different domain, and thus the session cookie is not sent with it.\nDefenses Against CSRF As users, we can defend ourselves from falling victim to a CSRF attack by cultivating two simple web browsing habits:\n We should log off from a website after using it. This will invalidate the session cookies that the attacker needs to execute the forged request in the exploit URL. We should use different browsers, for example, one browser for accessing sensitive sites and another browser for random surfing. This will prevent the session cookies set in sensitive sites from being used for CSRF attacks launched from a page opened from a different browser.  
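For illustration, a server might mark its session cookie as SameSite with a response header like the following (the cookie name and value are placeholders):

```http
Set-Cookie: JSESSIONID=a1b2c3d4e5; Path=/; Secure; HttpOnly; SameSite=Strict
```

With SameSite=Strict the browser never attaches the cookie to cross-site requests; with SameSite=Lax (the default in modern browsers) it still attaches the cookie to top-level GET navigations, which is usually enough to protect state-changing POST requests.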
As developers, in addition to the CSRF token described earlier, we can use the following best practices:\n Configure a lower session timeout value to invalidate the session after a period of inactivity. Log the user out after a period of inactivity and invalidate the session cookie. Seek confirmation from the user before processing any state-changing action with a confirmation dialog or a captcha. Make it difficult for an attacker to know the structure of the URLs to attack.  Example of CSRF Protection in a Node.js Application This is an example of implementing CSRF protection in a web application written in Node.js using the express framework. We have used the npm library csurf, which provides the middleware for CSRF token creation and validation:\nconst express = require(\u0026#39;express\u0026#39;); const csrf = require(\u0026#39;csurf\u0026#39;); const cookieParser = require(\u0026#39;cookie-parser\u0026#39;); // Implement the double submit cookie pattern // and store the token secret in a cookie var csrfProtection = csrf({ cookie: true }); var parseForm = express.urlencoded({ extended: false }); var app = express(); app.set(\u0026#39;view engine\u0026#39;,\u0026#39;ejs\u0026#39;) app.use(cookieParser()); // render the input form app.get(\u0026#39;/transfer\u0026#39;, csrfProtection, function (req, res) { // pass the csrfToken to the view  res.render(\u0026#39;transfer\u0026#39;, { csrfToken: req.csrfToken() }); }); // post the form to this URL app.post(\u0026#39;/process\u0026#39;, parseForm, csrfProtection, function (req, res) { res.send(\u0026#39;Transfer Successful!!\u0026#39;); }); app.listen(3000, (err) =\u0026gt; { if (err) console.log(err); console.log(\u0026#39;Server listening on 3000\u0026#39;); }); In this code block, we initialize the csrf library by setting the value of cookie to true. This means that the random token for the user will be stored in a cookie instead of the HTTP session. 
Storing the random token in a cookie implements the double submit cookie pattern explained earlier.\nThe HTML page below is rendered for the GET request. The random token is generated in this step:\n\u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;CSRF Token Demo\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;form action=\u0026#34;process\u0026#34; method=\u0026#34;POST\u0026#34;\u0026gt; \u0026lt;input type=\u0026#34;hidden\u0026#34; name=\u0026#34;_csrf\u0026#34; value=\u0026#34;\u0026lt;%= csrfToken %\u0026gt;\u0026#34;\u0026gt; \u0026lt;div\u0026gt; \u0026lt;label\u0026gt;Amount:\u0026lt;/label\u0026gt;\u0026lt;input type=\u0026#34;text\u0026#34; name=\u0026#34;amount\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;br/\u0026gt; \u0026lt;div\u0026gt; \u0026lt;label\u0026gt;Transfer To:\u0026lt;/label\u0026gt;\u0026lt;input type=\u0026#34;text\u0026#34; name=\u0026#34;account\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;br/\u0026gt; \u0026lt;div\u0026gt; \u0026lt;input type=\u0026#34;submit\u0026#34; value=\u0026#34;Transfer\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/form\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; We can see in this HTML snippet that the random token is set in a hidden field named _csrf.\nAfter we set up and run the application, we can test a valid request by loading the HTML form at the URL http://localhost:3000/transfer:\nThe form is loaded with the CSRF token set in a hidden field. When we submit the form after providing the values of the amount and account, the request is sent with the CSRF token and is processed successfully.\nNext, we can try to send a request from Postman (or any other HTTP request tool) to simulate a forged request in a CSRF attack. 
The results are shown in this screenshot:\nSince our code is protected with a CSRF token, the request is denied by the web application with an error: ForbiddenError: invalid csrf token.\nIf we are using Ajax with JSON requests, then it is not possible to submit the CSRF token within an HTTP request parameter. In this situation, we include the token within an HTTP request header.\nLibraries for CSRF protection similar to csurf are available in other languages. We should prefer to use a vetted library or framework instead of building our own for CSRF prevention. Some other examples are CSRFGuard and Spring Security.\nConclusion CSRF attacks comprise a good percentage of web-based attacks. It is crucial to be aware of the vulnerabilities that could make our website a potential target for CSRF attacks and prevent these attacks by building proper CSRF defenses in our application.\nHere is a list of important points from the article for quick reference:\n A CSRF attack leverages the implicit trust placed in user session cookies by many web applications. To prevent CSRF attacks, web applications need to build mechanisms to distinguish a legitimate request from a trusted user of a website from a forged request crafted by an attacker but sent by the trusted user. An (anti-)CSRF token is a random string shared between the user’s browser and the web application and is a common type of server-side CSRF protection. 
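Sending the token in a header can be sketched with Java's HttpClient API. The header name CSRF-Token is one of several names csurf accepts; the URL and JSON body below are illustrative:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch: attaching the CSRF token as a request header for a JSON request,
// instead of as a form field. URL, token, and body are hypothetical.
public class CsrfHeaderDemo {

    static HttpRequest withCsrfToken(String url, String token, String jsonBody) {
        return HttpRequest.newBuilder(URI.create(url))
            .header("CSRF-Token", token)        // token travels in a header
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
            .build();
    }

    public static void main(String[] args) {
        HttpRequest request = withCsrfToken(
            "http://localhost:3000/process", "random-token", "{\"amount\":100}");
        System.out.println(request.headers().firstValue("CSRF-Token").orElse(""));
    }
}
```

The server-side middleware then reads the header instead of the `_csrf` form field when validating the request.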
There are two common implementation techniques of CSRF tokens, known as:   Synchronizer Token Pattern Double Submit Cookie  You can refer to all the source code used in the article on GitHub.\n","date":"July 31, 2021","image":"https://reflectoring.io/images/stock/0106-hacker-1200x628-branded_hu544bd940684ea1934c7bdfae73a2a70b_119798_650x0_resize_q90_box.jpg","permalink":"/complete-guide-to-csrf/","title":"Complete Guide to CSRF/XSRF (Cross-Site Request Forgery)"},{"categories":["Spring Boot"],"contents":"In this series so far, we have learned how to use the Resilience4j Retry, RateLimiter, TimeLimiter, Bulkhead, and CircuitBreaker core modules. We\u0026rsquo;ll continue the series exploring Resilience4j\u0026rsquo;s built-in support for Spring Boot applications, and in this article, we\u0026rsquo;ll focus on Retry.\nWe will walk through many of the same examples as in the previous articles in this series and some new ones and understand how the Spring support makes Resilience4j usage more convenient.\n Example Code This article is accompanied by a working code example on GitHub. High-level Overview On a high level, when we work with resilience4j-spring-boot2, we do the following steps:\n Add the Spring Boot Resilience4j starter as a dependency to our project Configure the Resilience4j instance Use the Resilience4j instance  Let\u0026rsquo;s look at each of these steps briefly.\nStep 1: Adding the Resilience4j Spring Boot Starter Adding the Spring Boot Resilience4j starter to our project is like adding any other library dependency. 
Here\u0026rsquo;s the snippet for Maven\u0026rsquo;s pom.xml:\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.github.resilience4j\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;resilience4j-spring-boot2\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.7.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; In addition, we need to add dependencies to Spring Boot Actuator and Spring Boot AOP:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-actuator\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.4.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-aop\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.4.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; If we were using Gradle, we\u0026rsquo;d add the below snippet to the build.gradle file:\ndependencies { compile \u0026#34;io.github.resilience4j:resilience4j-spring-boot2:1.7.0\u0026#34; compile(\u0026#39;org.springframework.boot:spring-boot-starter-actuator\u0026#39;) compile(\u0026#39;org.springframework.boot:spring-boot-starter-aop\u0026#39;) } Step 2: Configuring the Resilience4j Instance We can configure the Resilience4j instances we need in Spring Boot\u0026rsquo;s application.yml file.\nresilience4j: retry: instances: flightSearch: maxRetryAttempts: 3 waitDuration: 2s Let\u0026rsquo;s unpack the configuration to understand what it means.\nThe resilience4j.retry prefix indicates which module we want to use. For the other Resilience4j modules, we\u0026rsquo;d use resilience4j.ratelimiter, resilience4j.timelimiter, etc.\nflightSearch is the name of the retry instance we\u0026rsquo;re configuring. 
We will be referring to the instance by this name in the next step when we use it.\nmaxRetryAttempts and waitDuration are the actual module configurations. These correspond to the available configurations in the corresponding Config class, such as RetryConfig.\nAlternatively, we could configure these properties in the application.properties file.\nStep 3: Using the Resilience4j Instance Finally, we use the Resilience4j instance that we configured above. We do this by annotating the method we want to add retry functionality to:\n@Retry(name = \u0026#34;flightSearch\u0026#34;) public List\u0026lt;Flight\u0026gt; searchFlights(SearchRequest request) { return remoteSearchService.searchFlights(request); } For the other Resilience4j modules, we\u0026rsquo;d use the annotations @RateLimiter, @Bulkhead, @CircuitBreaker, etc.\nComparing with Plain Resilience4j Spring Boot Resilience4j lets us easily use the Resilience4j modules in a standard, idiomatic way.\nWe don\u0026rsquo;t have to create a Resilience4j configuration object (RetryConfig), a registry object (RetryRegistry), etc. as we did in the previous articles in this series. All that is handled by the framework based on the configurations we provide in the application.yml file.\nWe also don\u0026rsquo;t need to write code to invoke the operation as a lambda expression or a functional interface. We just need to annotate the method to which we want the resilience pattern to be applied.\nUsing the Spring Boot Resilience4j Retry Module Assume that we are building a website for an airline to allow its customers to search for and book flights. Our service talks to a remote service encapsulated by the class FlightSearchService.\nSimple Retry In a simple retry, the operation is retried if a RuntimeException is thrown during the remote call. 
We can configure the number of attempts, how long to wait between attempts etc.\nThe example we saw in the previous section was for a simple retry.\nHere\u0026rsquo;s sample output showing the first request failing and then succeeding on the second attempt:\nSearching for flights; current time = 15:46:42 399 Operation failed Searching for flights; current time = 15:46:44 413 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;07/31/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Retrying on Checked Exceptions Let\u0026rsquo;s say we\u0026rsquo;re calling FlightSearchService.searchFlightsThrowingException() which can throw a checked Exception.\nLet\u0026rsquo;s configure a retry instance called throwingException:\nresilience4j: retry: instances: throwingException: maxRetryAttempts: 3 waitDuration: 2s retryExceptions: - java.lang.Exception If there were other Exceptions we wanted to configure, we would add them to the list of retryExceptions. Similarly, we could also specify ignoreExceptions on the retry instance.\nNext, we annotate the method that calls the remote service:\n@Retry(name = \u0026#34;throwingException\u0026#34;) public List\u0026lt;Flight\u0026gt; searchFlightsThrowingException(SearchRequest request) throws Exception { return remoteSearchService.searchFlightsThrowingException(request); } Here\u0026rsquo;s sample output showing the first two requests failing and then succeeding on the third attempt:\nSearching for flights; current time = 11:41:12 908 Operation failed, exception occurred Searching for flights; current time = 11:41:14 924 Operation failed, exception occurred Searching for flights; current time = 11:41:16 926 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;07/31/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... 
}] Conditional Retry In real-world applications, we may not want to retry for all exceptions. We may want to check the HTTP response status code or look for a particular application error code in the response to decide if we should retry. Let\u0026rsquo;s see how to implement such conditional retries.\nLet\u0026rsquo;s say that the airline\u0026rsquo;s flight service initializes flight data in its database regularly. This internal operation takes a few seconds for a given day\u0026rsquo;s flight data. If we call the flight search for that day while this initialization is in progress, the service returns a particular error code FS-167. The flight search documentation says that this is a temporary error and that the operation can be retried after a few seconds.\nFirst, we define a Predicate that tests for this condition:\nclass ConditionalRetryPredicate implements Predicate\u0026lt;SearchResponse\u0026gt; { @Override public boolean test(SearchResponse searchResponse) { if (searchResponse.getErrorCode() != null) { return searchResponse.getErrorCode().equals(\u0026#34;FS-167\u0026#34;); } return false; } } The logic in this Predicate can be as complex as we want - it could be a check against a set of error codes, or it can be some custom logic to decide if the search should be retried.\nWe then specify this Predicate when configuring the retry instance:\nresilience4j: retry: instances: predicateExample: maxRetryAttempts: 3 waitDuration: 3s resultPredicate: io.reflectoring.resilience4j.springboot.predicates.ConditionalRetryPredicate The sample output below shows the first request failing and then succeeding on the next attempt:\nSearching for flights; current time = 12:15:11 212 Operation failed Flight data initialization in progress, cannot search at this time Search returned error code = FS-167 Searching for flights; current time = 12:15:14 224 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, 
flightDate=\u0026#39;01/25/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ...}] Backoff Strategies Our examples so far had a fixed wait time for the retries. Often we want to increase the wait time after each attempt - this is to give the remote service sufficient time to recover in case it is currently overloaded.\nRandomized Interval Here we specify a random wait time between attempts:\nresilience4j: retry: instances: intervalFunctionRandomExample: maxRetryAttempts: 3 waitDuration: 2s enableRandomizedWait: true randomizedWaitFactor: 0.5 The randomizedWaitFactor determines the range over which the random value will be spread with regard to the specified waitDuration. So for the value of 0.5 above, the wait times generated will be between 1000ms (2000 - 2000 * 0.5) and 3000ms (2000 + 2000 * 0.5).\nThe sample output shows this behavior:\nSearching for flights; current time = 14:32:48 804 Operation failed Searching for flights; current time = 14:32:50 450 Operation failed Searching for flights; current time = 14:32:53 238 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;07/31/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Exponential Interval For exponential backoff, we specify two values - an initial wait time and a multiplier. In this method, the wait time increases exponentially between attempts because of the multiplier. For example, if we specified an initial wait time of 1s and a multiplier of 2, the retries would be done after 1s, 2s, 4s, 8s, 16s, and so on. 
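The interval arithmetic described above can be sketched in plain Java. This is an illustration of the formulas, not Resilience4j's actual implementation:

```java
// Sketch of the backoff math: randomized waits spread around waitDuration
// by the factor, and exponential waits multiply the initial duration
// once per completed attempt.
public class BackoffIntervals {

    // bounds of the randomized wait: waitDuration * (1 - factor) .. waitDuration * (1 + factor)
    static long[] randomizedBounds(long waitDurationMs, double factor) {
        long delta = (long) (waitDurationMs * factor);
        return new long[] { waitDurationMs - delta, waitDurationMs + delta };
    }

    // wait before the given attempt (1-based) with exponential backoff
    static long exponentialWait(long initialMs, double multiplier, int attempt) {
        return (long) (initialMs * Math.pow(multiplier, attempt - 1));
    }

    public static void main(String[] args) {
        long[] bounds = randomizedBounds(2000, 0.5);
        System.out.println(bounds[0] + "ms.." + bounds[1] + "ms"); // 1000ms..3000ms
        for (int attempt = 1; attempt <= 5; attempt++) {
            System.out.print(exponentialWait(1000, 2, attempt) + "ms ");
        }
        // prints 1000ms 2000ms 4000ms 8000ms 16000ms
    }
}
```

These are the same numbers as in the prose: 1000ms to 3000ms for the randomized example, and 1s, 2s, 4s, 8s, 16s for the exponential one.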
This method is a recommended approach when the client is a background job or a daemon.\nLet\u0026rsquo;s configure the retry instance for exponential backoff:\nresilience4j: retry: instances: intervalFunctionExponentialExample: maxRetryAttempts: 6 waitDuration: 1s enableExponentialBackoff: true exponentialBackoffMultiplier: 2 The sample output below shows this behavior:\nSearching for flights; current time = 14:49:45 706 Operation failed Searching for flights; current time = 14:49:46 736 Operation failed Searching for flights; current time = 14:49:48 741 Operation failed Searching for flights; current time = 14:49:52 745 Operation failed Searching for flights; current time = 14:50:00 745 Operation failed Searching for flights; current time = 14:50:16 748 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;07/31/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Acting on Retry Events In all these examples, the decorator has been a black box - we don\u0026rsquo;t know when an attempt failed and the framework code is attempting a retry. Suppose for a given request, we wanted to log some details like the attempt count or the wait time until the next attempt.\nIf we were using the Resilience4j core modules directly, we could have done this easily using the Retry.EventPublisher. We would have listened to the events published by the Retry instance.\nSince we don\u0026rsquo;t have a reference to the Retry instance or the RetryRegistry when working with Spring Boot Resilience4j, this requires a little more work. 
The idea is still the same, but how we get a reference to the RetryRegistry and Retry instances is a bit different.\nFirst, we @Autowire a RetryRegistry into our retrying service which is the service that invokes the remote operations:\n@Service public class RetryingService { @Autowired private FlightSearchService remoteSearchService; @Autowired private RetryRegistry registry; // other lines omitted  } Then we add a @PostConstruct method which sets up the onRetry event handler:\n@PostConstruct public void postConstruct() { registry .retry(\u0026#34;loggedRetryExample\u0026#34;) .getEventPublisher() .onRetry(System.out::println); } We fetch the Retry instance by name from the RetryRegistry and then get the EventPublisher from the Retry instance.\nInstead of the @PostConstruct method, we could have also done the same in the constructor of RetryingService.\nNow, the sample output shows details of the retry event:\nSearching for flights; current time = 18:03:07 198 Operation failed 2021-07-20T18:03:07.203944: Retry \u0026#39;loggedRetryExample\u0026#39;, waiting PT2S until attempt \u0026#39;1\u0026#39;. Last attempt failed with exception \u0026#39;java.lang.RuntimeException: Operation failed\u0026#39;. Searching for flights; current time = 18:03:09 212 Operation failed 2021-07-20T18:03:09.212945: Retry \u0026#39;loggedRetryExample\u0026#39;, waiting PT2S until attempt \u0026#39;2\u0026#39;. Last attempt failed with exception \u0026#39;java.lang.RuntimeException: Operation failed\u0026#39;. Searching for flights; current time = 18:03:11 213 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;07/31/2021\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Fallback Method Sometimes we may want to take a default action when all the retry attempts to the remote operation fail. 
This could be returning a default value or returning some data from a local cache.\nWe can do this by specifying a fallbackMethod in the @Retry annotation:\n@Retry(name = \u0026#34;retryWithFallback\u0026#34;, fallbackMethod = \u0026#34;localCacheFlightSearch\u0026#34;) public List\u0026lt;Flight\u0026gt; fallbackExample(SearchRequest request) { return remoteSearchService.searchFlights(request); } The fallback method should be defined in the same class as the retrying class. It should have the same method signature as the retrying method with one additional parameter - the Exception that caused the retry to fail:\nprivate List\u0026lt;Flight\u0026gt; localCacheFlightSearch(SearchRequest request, RuntimeException re) { System.out.println(\u0026#34;Returning search results from cache\u0026#34;); // fetch results from the cache  return results; } Actuator Endpoints Spring Boot Resilience4j makes the retry metrics and the details about the last 100 retry events available through Actuator endpoints:\n /actuator/retries /actuator/retryevents /actuator/metrics/resilience4j.retry.calls  Let\u0026rsquo;s look at the data returned by doing a curl to these endpoints.\nEndpoint /actuator/retries This endpoint lists the names of all the retry instances available:\n$ curl http://localhost:8080/actuator/retries { \u0026#34;retries\u0026#34;: [ \u0026#34;basic\u0026#34;, \u0026#34;intervalFunctionExponentialExample\u0026#34;, \u0026#34;intervalFunctionRandomExample\u0026#34;, \u0026#34;loggedRetryExample\u0026#34;, \u0026#34;predicateExample\u0026#34;, \u0026#34;throwingException\u0026#34;, \u0026#34;retryWithFallback\u0026#34; ] } Endpoint /actuator/retryevents This endpoint provides details about the last 100 retry events in the application:\n$ curl http://localhost:8080/actuator/retryevents { \u0026#34;retryEvents\u0026#34;: [ { \u0026#34;retryName\u0026#34;: \u0026#34;basic\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;RETRY\u0026#34;, \u0026#34;creationTime\u0026#34;: 
\u0026#34;2021-07-21T11:04:07.728933\u0026#34;, \u0026#34;errorMessage\u0026#34;: \u0026#34;java.lang.RuntimeException: Operation failed\u0026#34;, \u0026#34;numberOfAttempts\u0026#34;: 1 }, { \u0026#34;retryName\u0026#34;: \u0026#34;basic\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;SUCCESS\u0026#34;, \u0026#34;creationTime\u0026#34;: \u0026#34;2021-07-21T11:04:09.741841\u0026#34;, \u0026#34;errorMessage\u0026#34;: \u0026#34;java.lang.RuntimeException: Operation failed\u0026#34;, \u0026#34;numberOfAttempts\u0026#34;: 1 }, { \u0026#34;retryName\u0026#34;: \u0026#34;throwingException\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;RETRY\u0026#34;, \u0026#34;creationTime\u0026#34;: \u0026#34;2021-07-21T11:04:09.753174\u0026#34;, \u0026#34;errorMessage\u0026#34;: \u0026#34;java.lang.Exception: Operation failed\u0026#34;, \u0026#34;numberOfAttempts\u0026#34;: 1 }, ... other lines omitted ... } Under the retryevents endpoint, there are two more endpoints available: /actuator/retryevents/{retryName} and /actuator/retryevents/{retryName}/{type}. 
These provide similar data as the above one, but we can filter further by the retryName and type (success/error/retry).\nEndpoint /actuator/metrics/resilience4j.retry.calls This endpoint exposes the retry-related metrics:\n$ curl http://localhost:8080/actuator/metrics/resilience4j.retry.calls { \u0026#34;name\u0026#34;: \u0026#34;resilience4j.retry.calls\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;The number of failed calls after a retry attempt\u0026#34;, \u0026#34;baseUnit\u0026#34;: null, \u0026#34;measurements\u0026#34;: [ { \u0026#34;statistic\u0026#34;: \u0026#34;COUNT\u0026#34;, \u0026#34;value\u0026#34;: 6 } ], \u0026#34;availableTags\u0026#34;: [ { \u0026#34;tag\u0026#34;: \u0026#34;kind\u0026#34;, \u0026#34;values\u0026#34;: [ \u0026#34;successful_without_retry\u0026#34;, \u0026#34;successful_with_retry\u0026#34;, \u0026#34;failed_with_retry\u0026#34;, \u0026#34;failed_without_retry\u0026#34; ] }, { \u0026#34;tag\u0026#34;: \u0026#34;name\u0026#34;, \u0026#34;values\u0026#34;: [ ... list of retry instances ... ] } ] } Conclusion In this article, we learned how we can use Resilience4j Retry\u0026rsquo;s built-in Spring Boot support to make our applications resilient to temporary errors. 
We looked at the different ways to configure retries and some examples for deciding between the various approaches.\nFor a deeper understanding of Resilience4j Retry concepts and some good practices to follow when implementing retries in general, check out the related, previous article in this series.\nYou can play around with a complete application illustrating these ideas using the code on GitHub.\n","date":"July 24, 2021","image":"https://reflectoring.io/images/stock/0106-fail-1200x628-branded_huae44341560e5484684cf8585dd2c7734_127907_650x0_resize_q90_box.jpg","permalink":"/retry-with-springboot-resilience4j/","title":"Retry with Spring Boot and Resilience4j"},{"categories":["Spring Boot"],"contents":"With the @SpringBootTest annotation, Spring Boot provides a convenient way to start up an application context to be used in a test. In this tutorial, we\u0026rsquo;ll discuss when to use @SpringBootTest and when to better use other tools for testing. We\u0026rsquo;ll also look into different ways to customize the application context and how to reduce test runtime.\n Example Code This article is accompanied by a working code example on GitHub. The \u0026ldquo;Testing with Spring Boot\u0026rdquo; Series This tutorial is part of a series:\n Unit Testing with Spring Boot Testing Spring MVC Web Controllers with Spring Boot and @WebMvcTest Testing JPA Queries with Spring Boot and @DataJpaTest Testing with Spring Boot and @SpringBootTest  If you like learning from videos, make sure to check out Philip\u0026rsquo;s Testing Spring Boot Applications Masterclass (if you buy through this link, I get a cut).\nIntegration Tests vs. 
Unit Tests Before we dive into integration tests with Spring Boot, let\u0026rsquo;s define what sets an integration test apart from a unit test.\nA unit test covers a single \u0026ldquo;unit\u0026rdquo;, where a unit commonly is a single class, but can also be a cluster of cohesive classes that is tested in combination.\nAn integration test can be any of the following:\n a test that covers multiple \u0026ldquo;units\u0026rdquo;. It tests the interaction between two or more clusters of cohesive classes. a test that covers multiple layers. This is actually a specialization of the first case and might cover the interaction between a business service and the persistence layer, for instance. a test that covers the whole path through the application. In these tests, we send a request to the application and check that it responds correctly and has changed the database state according to our expectations.  Spring Boot provides the @SpringBootTest annotation which we can use to create an application context containing all the objects we need for all of the above test types. Note, however, that overusing @SpringBootTest might lead to very long-running test suites.\nSo, for simple tests that cover multiple units we should rather create plain tests, very similar to unit tests, in which we manually create the object graph needed for the test and mock away the rest. This way, Spring doesn\u0026rsquo;t fire up a whole application context each time the test is started.\nTest Slices We can test our Spring Boot application as a whole, unit by unit, and also layer by layer. Using Spring Boot\u0026rsquo;s test slice annotations we can test each layer separately.\nBefore we look into the @SpringBootTest annotation in detail, let\u0026rsquo;s explore the test slice annotations to check if @SpringBootTest is really what you want.\nThe @SpringBootTest annotation loads the complete Spring application context. 
In contrast, a test slice annotation only loads beans required to test a particular layer. And because of this, we can avoid unnecessary mocking and side effects.\n@WebMvcTest Our web controllers bear many responsibilities, such as listening to the HTTP request, validating the input, calling the business logic, serializing the output, and translating the Exceptions to a proper response. We should write tests to verify all these functionalities.\nThe @WebMvcTest test slice annotation will set up our application context with just enough components and configurations required to test our web controller layer. For instance, it will set up our @Controller\u0026rsquo;s, @ControllerAdvice\u0026rsquo;s, a MockMvc bean, and some other auto configuration.\nTo read more on @WebMvcTest and to find out how we can verify each of those responsibilities, read my article on Testing MVC Web Controllers with Spring Boot and @WebMvcTest.\n@WebFluxTest @WebFluxTest is used when we want to test our WebFlux controllers. @WebFluxTest works similarly to the @WebMvcTest annotation; the difference is that instead of the Web MVC components and configurations, it spins up the WebFlux ones. One such bean is the WebTestClient, with which we can test our WebFlux endpoints.\n@DataJpaTest Just like @WebMvcTest allows us to test our web layer, @DataJpaTest is used to test the persistence layer.\nIt configures our entities and repositories and also sets up an embedded database. Now, this is all good, but what does testing our persistence layer mean? What exactly are we testing? If queries, then what kind of queries? To find out the answers to all these questions and more, read my article on Testing JPA Queries with Spring Boot and @DataJpaTest.\n@DataJdbcTest Spring Data JDBC is another member of the Spring Data family. If we are using this project and want to test the persistence layer, then we can make use of the @DataJdbcTest annotation. 
@DataJdbcTest automatically configures an embedded test database and JDBC repositories defined in our project for us.\nAnother similar project is Spring JDBC which gives us the JdbcTemplate object to perform direct queries. The @JdbcTest annotation autoconfigures the DataSource object that is required to test our JDBC queries.\nDependencies The code examples in this article only need the dependencies to Spring Boot\u0026rsquo;s test starter and to JUnit Jupiter:\ndependencies { testCompile(\u0026#39;org.springframework.boot:spring-boot-starter-test\u0026#39;) testCompile(\u0026#39;org.junit.jupiter:junit-jupiter:5.4.0\u0026#39;) } Creating an ApplicationContext with @SpringBootTest @SpringBootTest by default starts searching in the current package of the test class and then searches upwards through the package structure, looking for a class annotated with @SpringBootConfiguration from which it then reads the configuration to create an application context. This class is usually our main application class since the @SpringBootApplication annotation includes the @SpringBootConfiguration annotation. 
It then creates an application context very similar to the one that would be started in a production environment.\nWe can customize this application context in many different ways, as described in the next section.\nBecause we have a full application context, including web controllers, Spring Data repositories, and data sources, @SpringBootTest is very convenient for integration tests that go through all layers of the application:\n@ExtendWith(SpringExtension.class) @SpringBootTest @AutoConfigureMockMvc class RegisterUseCaseIntegrationTest { @Autowired private MockMvc mockMvc; @Autowired private ObjectMapper objectMapper; @Autowired private UserRepository userRepository; @Test void registrationWorksThroughAllLayers() throws Exception { UserResource user = new UserResource(\u0026#34;Zaphod\u0026#34;, \u0026#34;zaphod@galaxy.net\u0026#34;); mockMvc.perform(post(\u0026#34;/forums/{forumId}/register\u0026#34;, 42L) .contentType(\u0026#34;application/json\u0026#34;) .param(\u0026#34;sendWelcomeMail\u0026#34;, \u0026#34;true\u0026#34;) .content(objectMapper.writeValueAsString(user))) .andExpect(status().isOk()); UserEntity userEntity = userRepository.findByName(\u0026#34;Zaphod\u0026#34;); assertThat(userEntity.getEmail()).isEqualTo(\u0026#34;zaphod@galaxy.net\u0026#34;); } } @ExtendWith  The code examples in this tutorial use the @ExtendWith annotation to tell JUnit 5 to enable Spring support. As of Spring Boot 2.1, we no longer need to load the SpringExtension because it's included as a meta annotation in the Spring Boot test annotations like @DataJpaTest, @WebMvcTest, and @SpringBootTest.  
Here, we additionally use @AutoConfigureMockMvc to add a MockMvc instance to the application context.\nWe use this MockMvc object to perform a POST request to our application and to verify that it responds as expected.\nWe then use the UserRepository from the application context to verify that the request has led to an expected change in the state of the database.\nCustomizing the Application Context We can turn a lot of knobs to customize the application context created by @SpringBootTest. Let\u0026rsquo;s see which options we have.\nCaution when Customizing the Application Context  Each customization of the application context is one more thing that makes it different from the \"real\" application context that is started up in a production setting. So, in order to make our tests as close to production as we can, we should only customize what's really necessary to get the tests running!  Adding Auto-Configurations Above, we\u0026rsquo;ve already seen an auto-configuration in action:\n@SpringBootTest @AutoConfigureMockMvc class RegisterUseCaseIntegrationTest { ... } There are a lot of other auto-configurations available that each add other beans to the application context. Here are some other useful ones from the documentation:\n @AutoConfigureWebTestClient: Adds WebTestClient to the test application context. It allows us to test server endpoints. @AutoConfigureTestDatabase: This allows us to run the test against a real database instead of the embedded one. @RestClientTest: It comes in handy when we want to test our RestTemplates. It autoconfigures the required components plus a MockRestServiceServer object which helps us mock responses for the requests coming from the RestTemplate calls. @JsonTest: Autoconfigures JSON mappers and classes such as JacksonTester or GsonTester. Using these we can verify whether our JSON serialization/deserialization is working properly or not.  
Setting Custom Configuration Properties Often, in tests it\u0026rsquo;s necessary to set some configuration properties to a value that\u0026rsquo;s different from the value in a production setting:\n@SpringBootTest(properties = \u0026#34;foo=bar\u0026#34;) class SpringBootPropertiesTest { @Value(\u0026#34;${foo}\u0026#34;) String foo; @Test void test(){ assertThat(foo).isEqualTo(\u0026#34;bar\u0026#34;); } } If the property foo exists in the default setting, it will be overridden by the value bar for this test.\nExternalizing Properties with @ActiveProfiles If many of our tests need the same set of properties, we can create a configuration file application-\u0026lt;profile\u0026gt;.properties or application-\u0026lt;profile\u0026gt;.yml and load the properties from that file by activating a certain profile:\n# application-test.yml foo: bar @SpringBootTest @ActiveProfiles(\u0026#34;test\u0026#34;) class SpringBootProfileTest { @Value(\u0026#34;${foo}\u0026#34;) String foo; @Test void test(){ assertThat(foo).isEqualTo(\u0026#34;bar\u0026#34;); } } Setting Custom Properties with @TestPropertySource Another way to customize a whole set of properties is with the @TestPropertySource annotation:\n# src/test/resources/foo.properties foo=bar @SpringBootTest @TestPropertySource(locations = \u0026#34;/foo.properties\u0026#34;) class SpringBootPropertySourceTest { @Value(\u0026#34;${foo}\u0026#34;) String foo; @Test void test(){ assertThat(foo).isEqualTo(\u0026#34;bar\u0026#34;); } } All properties from the foo.properties file are loaded into the application context. 
The @TestPropertySource annotation also allows us to configure a lot more.\nInjecting Mocks with @MockBean If we only want to test a certain part of the application instead of the whole path from incoming request to database, we can replace certain beans in the application context by using @MockBean:\n@SpringBootTest class MockBeanTest { @MockBean private UserRepository userRepository; @Autowired private RegisterUseCase registerUseCase; @Test void testRegister(){ // given  User user = new User(\u0026#34;Zaphod\u0026#34;, \u0026#34;zaphod@galaxy.net\u0026#34;); boolean sendWelcomeMail = true; given(userRepository.save(any(UserEntity.class))).willReturn(userEntity(1L)); // when  Long userId = registerUseCase.registerUser(user, sendWelcomeMail); // then  assertThat(userId).isEqualTo(1L); } } In this case, we have replaced the UserRepository bean with a mock. Using Mockito\u0026rsquo;s given method, we have specified the expected behavior for this mock in order to test a class that uses this repository.\nYou can read more about the @MockBean annotation in my article about mocking.\nAdding Beans with @Import If certain beans are not included in the default application context, but we need them in a test, we can import them using the @Import annotation:\npackage other.namespace; @Component public class Foo { } @SpringBootTest @Import(other.namespace.Foo.class) class SpringBootImportTest { @Autowired Foo foo; @Test void test() { assertThat(foo).isNotNull(); } } By default, a Spring Boot application includes all components it finds within its package and sub-packages, so this will usually only be needed if we want to include beans from other packages.\nOverriding Beans with @TestConfiguration With @TestConfiguration we can not only include additional beans required for tests but also override the beans already defined in the application. 
Read more about it in our article on Testing with @TestConfiguration.\nCreating a Custom @SpringBootApplication We can even create a whole custom Spring Boot application to start up in tests. If this application class is in the same package as the real application class, but in the test sources rather than the production sources, @SpringBootTest will find it before the actual application class and load the application context from this application instead.\nAlternatively, we can tell Spring Boot which application class to use to create an application context:\n@SpringBootTest(classes = CustomApplication.class) class CustomApplicationTest { } When doing this, however, we\u0026rsquo;re testing an application context that may be completely different from the production environment, so this should be a last resort only when the production application cannot be started in a test environment. Usually, there are better ways, though, such as making the real application context configurable to exclude beans that won\u0026rsquo;t start in a test environment. Let\u0026rsquo;s look at this in an example.\nLet\u0026rsquo;s say we use the @EnableScheduling annotation on our application class. Each time the application context is started (even in tests), all @Scheduled jobs will be started and may conflict with our tests. We usually don\u0026rsquo;t want the jobs to run in tests, so we can create a second application class without the @EnableScheduling annotation and use this in the tests. However, the better solution would be to create a configuration class that can be toggled with a property:\n@Configuration @EnableScheduling @ConditionalOnProperty( name = \u0026#34;io.reflectoring.scheduling.enabled\u0026#34;, havingValue = \u0026#34;true\u0026#34;, matchIfMissing = true) public class SchedulingConfiguration { } We have moved the @EnableScheduling annotation from our application class to this special configuration class. 
Setting the property io.reflectoring.scheduling.enabled to false will cause this class not to be loaded as part of the application context:\n@SpringBootTest(properties = \u0026#34;io.reflectoring.scheduling.enabled=false\u0026#34;) class SchedulingTest { @Autowired(required = false) private SchedulingConfiguration schedulingConfiguration; @Test void test() { assertThat(schedulingConfiguration).isNull(); } } We have now successfully deactivated the scheduled jobs in the tests. The property io.reflectoring.scheduling.enabled can be specified in any of the ways described above.\nWhy are my Integration Tests so slow? A code base with a lot of @SpringBootTest-annotated tests may take quite some time to run. The Spring test support is smart enough to only create an application context once and re-use it in following tests, but if different tests need different application contexts, it will still create a separate context for each test, which adds startup time to each of those tests.\nAll of the customizing options described above will cause Spring to create a new application context. So, we might want to create one single configuration and use it for all tests so that the application context can be re-used.\nIf you\u0026rsquo;re interested in the time your tests spend for setup and Spring application contexts, you may want to have a look at JUnit Insights, which can be included in a Gradle or Maven build to produce a nice report about how your JUnit 5 tests spend their time.\nConclusion @SpringBootTest is a very convenient method to set up an application context for tests that is very close to the one we\u0026rsquo;ll have in production. There are a lot of options to customize this application context, but they should be used with care since we want our tests to run as close to production as possible.\n@SpringBootTest brings the most value if we want to test the whole way through the application. 
For testing only certain slices or layers of the application, we have other options available.\nThe example code used in this article is available on github.\nIf you like learning from videos, make sure to check out Philip\u0026rsquo;s Testing Spring Boot Applications Masterclass (if you buy through this link, I get a cut).\n","date":"July 22, 2021","image":"https://reflectoring.io/images/stock/0018-cogs-1200x628-branded_huddc0bdf9d6d0f4fdfef3c3a64a742934_149789_650x0_resize_q90_box.jpg","permalink":"/spring-boot-test/","title":"Testing with Spring Boot and @SpringBootTest"},{"categories":["Spring Boot","Java"],"contents":"Application logs are the most important resource when it comes to investigating issues and incidents. Imagine something goes wrong during your on-call rotation and you don\u0026rsquo;t have any logs!\nIf applied smartly, we can even harvest important business metrics from our logs.\nHaving no logs is equivalent to driving a car with your eyes closed. You don\u0026rsquo;t know where you\u0026rsquo;re going and you\u0026rsquo;re very likely to crash.\nTo make log data usable, we need to send it to the right place. When developing an app locally, we usually want to send the logs to the console or a local log file. When the app is running in a staging or production environment, we\u0026rsquo;ll want to send the logs to a log server that the whole team has access to.\nIn this tutorial, we\u0026rsquo;re going to configure a Java application to send logs to the console or to a cloud logging provider depending on the environment the application is running in.\nAs the cloud logging provider, we\u0026rsquo;re going to use logz.io, which provides a managed ELK stack solution with a nice frontend for querying logs. 
But even if you use a different logging provider, this tutorial will help you configure your Java application\u0026rsquo;s logging.\nWe\u0026rsquo;re going to look at:\n How to configure a plain Java application with Log4J How to configure a plain Java application with Logback, and How to configure a Spring Boot application with Logback.  In all cases, the application will be started with certain environment variables that control the logging behavior to send logs either to the console or the cloud.\nWhy Should I Send My Logs to a Log Server? Before we look at the logging configuration details, let\u0026rsquo;s answer the question of why we\u0026rsquo;re going through all the fuss to configure our logging at all. Isn\u0026rsquo;t it enough to just log everything to standard out or a log file?\nThat\u0026rsquo;s how it was done back in the days. There were sysadmins who guarded the log files. Every time I wanted to access the logs, I would write an email to the sysadmins. Once they read their mail (which was totally dependent on the time of day and their mood), they would run some scripts to collect the log files from all server instances, filter them for the time period I was interested in and put the resulting files on a shared network folder from where I would download them.\nThen I would use command-line tools like grep and sed to search the log files for anything I\u0026rsquo;m interested in. Most often, I would find that the logs I had access to were not enough and I would have to repeat the whole procedure with the sysadmins for logs from a different time period - that was no fun!\nAt some point, log servers like Logstash and Graylog came along. Instead of sending logs into files, we could now send the logs to a server. Instead of asking sysadmins to send us the logs we need, we could now search the logs through a web UI!\nThe whole team now had access to a web UI to search the logs. 
Everybody who needs log data can easily get it.\nA log server is a key enabler for a \u0026ldquo;you built it, you run it\u0026rdquo; culture! It also reduces the mean time to restore (MTTR) - i.e. the time a team needs to restore a service after an incident - because the log data is directly available for analysis. DevOps is unthinkable without a log server!\nTo make things even easier, today we don\u0026rsquo;t even have to set up our own log server, but we can send the logs to a fully managed log server provider in the cloud. In this article, we\u0026rsquo;ll be sending logs to logz.io and then query the logs via their web UI.\nSo, we\u0026rsquo;ll definitely want to send our logs to a log server. Either by logging to standard out and having some infrastructure in place that forwards them from there to the log server or by configuring our application to send the logs directly to the log server.\nIn this article, we\u0026rsquo;re going to look at configuring our application to send them directly to the log server. But, we only want to send the logs to the server in a staging or production environment. During local development, we don\u0026rsquo;t want to be dependent on an external log server.\nLet\u0026rsquo;s see what we can do to achieve this.\nSetting Up a Logz.io Account If you want to follow along with sending logs to the cloud, set up a free trial account with logz.io. When logged in, click on the gear icon in the upper right and select Settings -\u0026gt; General. Under \u0026ldquo;Account settings\u0026rdquo;, the page will show your \u0026ldquo;shipping token\u0026rdquo;. Copy this token - we\u0026rsquo;ll need it later to configure our application to send logs to the cloud.\nPer-Environment Logging for a Plain Java Application Let\u0026rsquo;s first discuss how we can configure the logging behavior of a plain Java application. 
We\u0026rsquo;ll have a look at both Log4J and Logback and how to configure them to do different things in different runtime environments.\nYou can clone or browse the full example applications on GitHub (Log4J app, Logback app).\nExample Application Our example application is very simple:\npublic class Main { public static void main(String[] args) { Logger logger = LoggerFactory.getLogger(Main.class); logger.debug(\u0026#34;This is a debug message\u0026#34;); logger.info(\u0026#34;This is an info message\u0026#34;); logger.warn(\u0026#34;This is a warn message\u0026#34;); logger.error(\u0026#34;This is an error message\u0026#34;); } } It\u0026rsquo;s just a small Java program with a main() method that logs a few lines using an SLF4J Logger instance. This program is a placeholder for any real Java application.\nSLF4J is a logging API that abstracts over the actual logging implementation, so we can use it for both Log4J and Logback (and other logging implementations, for that matter). This allows us to always implement against the same logging API, even if we decide to swap out the actual logging library underneath.\nPassing Environment Variables to the Application We want to make the logging behave differently depending on the environment the application is running in. If the application is running on the local machine, we want the above log events to be sent to the console. If it\u0026rsquo;s running in a staging or production environment, we want it to log to our cloud logging provider.\nBut how does the application decide which environment it\u0026rsquo;s running in? This is exactly what environment variables are there for.\nWe\u0026rsquo;ll pass an environment variable with the name LOG_TARGET to the application on startup. 
There are two possible values for this variable:\n CONSOLE: the app shall send the logs to the console LOGZIO: the app shall send the logs to logz.io cloud  This command will then start the app in \u0026ldquo;local\u0026rdquo; logging mode:\nLOG_TARGET=CONSOLE java -jar app.jar And this command will start the app in \u0026ldquo;staging\u0026rdquo;, or \u0026ldquo;production\u0026rdquo; logging mode:\nLOG_TARGET=LOGZIO java -jar app.jar Let\u0026rsquo;s now see how we can configure Log4J and Logback in our application to respect the LOG_TARGET environment variable.\nConfiguring Log4J with Environment Variables You can browse or clone the full example code of the Log4J application on GitHub.\nLog4J Dependencies To get Log4J working properly, we need to add the following dependencies to our application\u0026rsquo;s pom.xml:\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.logging.log4j\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;log4j-api\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.14.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.logging.log4j\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;log4j-core\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.14.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.logging.log4j\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;log4j-slf4j-impl\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.14.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.logz.log4j2\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;logzio-log4j2-appender\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0.12\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; The first two dependencies are the log4j API and 
the log4J implementation. We could implement logging with just these two dependencies, but we additionally add the log4j-slf4j-impl dependency to include SLF4J. This way, we can use the SLF4J API for our logging instead of relying directly on the Log4J API.\nThe last dependency is a log appender that sends the logs to logz.io so we can view them online.\nLog4J Configuration Next, we need to create a log4j2.xml file in the src/main/resources folder of the codebase. Log4J will automatically pick up this configuration file from the classpath when the application starts up:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;Configuration status=\u0026#34;WARN\u0026#34;\u0026gt; \u0026lt;Appenders\u0026gt; \u0026lt;Console name=\u0026#34;CONSOLE\u0026#34; target=\u0026#34;SYSTEM_OUT\u0026#34;\u0026gt; \u0026lt;PatternLayout pattern=\u0026#34;%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n\u0026#34;/\u0026gt; \u0026lt;/Console\u0026gt; \u0026lt;LogzioAppender name=\u0026#34;LOGZIO\u0026#34;\u0026gt; \u0026lt;logzioToken\u0026gt;${env:LOGZIO_TOKEN}\u0026lt;/logzioToken\u0026gt; \u0026lt;logzioUrl\u0026gt;https://listener.logz.io:8071\u0026lt;/logzioUrl\u0026gt; \u0026lt;logzioType\u0026gt;log4j-example-application\u0026lt;/logzioType\u0026gt; \u0026lt;/LogzioAppender\u0026gt; \u0026lt;/Appenders\u0026gt; \u0026lt;Loggers\u0026gt; \u0026lt;Root level=\u0026#34;INFO\u0026#34;\u0026gt; \u0026lt;AppenderRef ref=\u0026#34;${env:LOG_TARGET:-CONSOLE}\u0026#34;/\u0026gt; \u0026lt;/Root\u0026gt; \u0026lt;/Loggers\u0026gt; \u0026lt;/Configuration\u0026gt; In the log4j2.xml file above we have configured two appenders. An appender is a Log4J concept that takes log events, transforms them, and then sends them to a certain destination.\nThe appender with the name CONSOLE is a standard Log4J appender that sends the logs to standard out. 
We can define a pattern in which to format the log output.\nThe appender with the name LOGZIO is a special appender that sends the logs to logz.io. We can only use the \u0026lt;LogzioAppender\u0026gt; XML element because we have included the dependency to logzio-log4j2-appender in the pom.xml above. If you want to try sending logs, you have to put the \u0026ldquo;shipping token\u0026rdquo; from your logz.io account into the \u0026lt;logzioToken\u0026gt; element (or, even better, set the LOGZIO_TOKEN environment variable when starting the app).\nFinally, in the \u0026lt;Root\u0026gt; element, we configure which appender the root logger should use. We could just put one of the appender names into the ref attribute of the \u0026lt;AppenderRef\u0026gt; element, but this would hard-code the appender and it wouldn\u0026rsquo;t be configurable.\nSo, instead, we set it to ${env:LOG_TARGET:-CONSOLE}, which tells Log4J to use the value of the LOG_TARGET environment variable, and if this variable is not set, use the value CONSOLE as a default.\nYou can read all about the details of Log4J\u0026rsquo;s configuration in the Log4J docs.\nThat\u0026rsquo;s it. If we run the app without any environment variables, it will log to the console. If we set the environment variable LOG_TARGET to LOGZIO, it will log to logz.io.\nDon't Put Secrets Into Configuration Files!  In the configuration files of Log4J and Logback, you will see that we're using an environment variable called LOGZIO_TOKEN. This variable contains a secret token that you get when creating a logz.io account.  You could just as well hard-code the token into the configuration files, but that's a security risk. You will probably want to push the configuration file to a Git repository and a Git repository is no place for secrets, even if it's a private repository!  
Instead, use environment variables to store secrets and set their values when starting the application so you don't have to handle files with secret contents in a Git repo.  Configuring Logback with Environment Variables Let\u0026rsquo;s see how we can configure Logback to send logs to different places depending on an environment variable.\nThe full example application is available on GitHub.\nLogback Dependencies To include Logback in the application, we need to add these dependencies to our pom.xml:\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;ch.qos.logback\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;logback-classic\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.2.3\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.logz.logback\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;logzio-logback-appender\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0.24\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; Logback\u0026rsquo;s dependencies are a bit more convenient than Log4J\u0026rsquo;s. We only have to include the logback-classic dependency to enable Logback. It automatically pulls in the SLF4J dependencies so we can use the SLF4J logging abstraction without explicitly adding a dependency to it.\nThe second dependency is a Logback-specific appender that can send logs to logz.io.\nLogback Configuration The logback configuration looks very similar to the configuration we\u0026rsquo;ve done for Log4J above. 
We create a file named logback.xml in the src/main/resources folder so Logback finds it in the classpath:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;shutdownHook class=\u0026#34;ch.qos.logback.core.hook.DelayingShutdownHook\u0026#34;/\u0026gt; \u0026lt;appender name=\u0026#34;CONSOLE\u0026#34; class=\u0026#34;ch.qos.logback.core.ConsoleAppender\u0026#34;\u0026gt; \u0026lt;encoder\u0026gt; \u0026lt;pattern\u0026gt;%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n\u0026lt;/pattern\u0026gt; \u0026lt;/encoder\u0026gt; \u0026lt;/appender\u0026gt; \u0026lt;appender name=\u0026#34;LOGZIO\u0026#34; class=\u0026#34;io.logz.logback.LogzioLogbackAppender\u0026#34;\u0026gt; \u0026lt;token\u0026gt;${LOGZIO_TOKEN}\u0026lt;/token\u0026gt; \u0026lt;logzioUrl\u0026gt;https://listener.logz.io:8071\u0026lt;/logzioUrl\u0026gt; \u0026lt;logzioType\u0026gt;logback-example-application\u0026lt;/logzioType\u0026gt; \u0026lt;/appender\u0026gt; \u0026lt;root level=\u0026#34;debug\u0026#34;\u0026gt; \u0026lt;appender-ref ref=\u0026#34;${LOG_TARGET}\u0026#34;/\u0026gt; \u0026lt;/root\u0026gt; \u0026lt;/configuration\u0026gt; In the logback.xml file, we declare two appenders. The appender concept is the same as in Log4J - it takes log data, potentially transforms it, and then sends it to a destination.\nThe CONSOLE appender formats logs in a human-readable way and then sends the logs to standard out.\nThe LOGZIO appender transforms the logs into JSON and sends them to logz.io. We have to specify the \u0026ldquo;shipping token\u0026rdquo; from the logz.io account in the \u0026lt;token\u0026gt; element so that logz.io knows it\u0026rsquo;s us sending the logs.\nFinally, we configure the root logger to use the appender that we define with the environment variable LOG_TARGET. 
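The variable substitution at work here boils down to a plain environment lookup. As a rough plain-Java sketch of the selection logic (the class and method names are ours; the CONSOLE fallback mirrors Log4J\u0026rsquo;s ${env:LOG_TARGET:-CONSOLE} syntax, while the plain ${LOG_TARGET} reference in Logback has no such default):

```java
import java.util.Map;

public class AppenderSelector {

    // Picks the appender name the root logger should reference,
    // falling back to CONSOLE when LOG_TARGET is not set.
    public static String resolveAppender(Map<String, String> environment) {
        return environment.getOrDefault("LOG_TARGET", "CONSOLE");
    }
}
```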
If LOG_TARGET is set to CONSOLE, the application will log to standard out, and if it\u0026rsquo;s set to LOGZIO, the application will log to logz.io.\nYou might notice the \u0026lt;shutdownHook\u0026gt; element in the logging configuration. The shutdown hook takes care of sending all logs that are currently still in the buffer to the target location when the application shuts down. If we don\u0026rsquo;t add this hook, the logs from our sample application might never be sent to logz.io, because the application shuts down before they are sent. Using the hook we can be reasonably sure that the logs of a dying application still reach their destination.\nYou can read about more details of Logback configuration in the Logback docs.\nPer-Environment Logging with Spring Boot As we\u0026rsquo;ve seen above, configuring a plain Java application to log to different destinations requires managing environment variables. To add more environment-specific configuration, we would have to add more and more environment variables. This would quickly become cumbersome.\nWhen we\u0026rsquo;re building a Spring Boot application, we can make use of Spring Boot\u0026rsquo;s powerful configuration mechanism to make our logging configuration a bit more elegant.\nThe full example project is available on GitHub.\nUsing Spring Profiles Spring supports the notion of configuration \u0026ldquo;profiles\u0026rdquo;. Each profile is made up of a set of configuration properties with specific values.\nSince we need a different set of configuration properties for every environment that our application is running in (local machine, staging, production, \u0026hellip;), Spring profiles are very well suited for this task.\nIn this article, we\u0026rsquo;ll only look at the features of Spring profiles that we need to configure different logging behavior. 
If you want to learn more about profiles, have a look at our guide to Spring Boot profiles.\nExample Application To start, we create a new Spring Boot application using start.spring.io. This application is pre-configured with everything we need.\nWe add a class to the code so that we\u0026rsquo;ll see some log output once the app starts:\n@Component public class StartupLogger implements ApplicationListener\u0026lt;ApplicationReadyEvent\u0026gt; { private static final Logger logger = LoggerFactory.getLogger(StartupLogger.class); @Override public void onApplicationEvent(ApplicationReadyEvent applicationReadyEvent) { logger.debug(\u0026#34;This is a debug message\u0026#34;); logger.info(\u0026#34;This is an info message\u0026#34;); logger.warn(\u0026#34;This is a warn message\u0026#34;); logger.error(\u0026#34;This is an error message\u0026#34;); } } This just generates some test log events once Spring Boot sends the ApplicationReadyEvent.\nConfiguring Logback By default, Spring Boot uses Logback as the logging library. Spring Boot configures Logback with reasonable defaults, but if we want to log to different destinations depending on the environment, we need to override that default configuration.\nWe could just add a logback.xml file like we did in the plain Java application and use the LOG_TARGET environment variable to define where the application should send the logs. Spring Boot would then back off and use this configuration instead.\nHowever, Spring Boot makes configuring Logback even more convenient. Instead of creating a logback.xml file, we create a file named logback-spring.xml in the src/main/resources folder. 
This file is parsed by Spring Boot before it configures Logback and provides some extra XML elements that we can use for more dynamic logging configuration:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;springProperty name=\u0026#34;logzioToken\u0026#34; source=\u0026#34;logzio.token\u0026#34;/\u0026gt; \u0026lt;shutdownHook class=\u0026#34;ch.qos.logback.core.hook.DelayingShutdownHook\u0026#34;/\u0026gt; \u0026lt;appender name=\u0026#34;LOGZIO\u0026#34; class=\u0026#34;io.logz.logback.LogzioLogbackAppender\u0026#34;\u0026gt; \u0026lt;token\u0026gt;${logzioToken}\u0026lt;/token\u0026gt; \u0026lt;logzioUrl\u0026gt;https://listener.logz.io:8071\u0026lt;/logzioUrl\u0026gt; \u0026lt;logzioType\u0026gt;spring-boot-example-application\u0026lt;/logzioType\u0026gt; \u0026lt;/appender\u0026gt; \u0026lt;appender name=\u0026#34;CONSOLE\u0026#34; class=\u0026#34;ch.qos.logback.core.ConsoleAppender\u0026#34;\u0026gt; \u0026lt;layout class=\u0026#34;ch.qos.logback.classic.PatternLayout\u0026#34;\u0026gt; \u0026lt;Pattern\u0026gt; %cyan(%d{ISO8601}) %highlight(%-5level) [%blue(%-30t)] %yellow(%C{1.}): %msg%n%throwable \u0026lt;/Pattern\u0026gt; \u0026lt;/layout\u0026gt; \u0026lt;/appender\u0026gt; \u0026lt;springProfile name=\u0026#34;local\u0026#34;\u0026gt; \u0026lt;root level=\u0026#34;WARN\u0026#34;\u0026gt; \u0026lt;appender-ref ref=\u0026#34;CONSOLE\u0026#34;/\u0026gt; \u0026lt;/root\u0026gt; \u0026lt;logger name=\u0026#34;io.reflectoring\u0026#34; level=\u0026#34;DEBUG\u0026#34;/\u0026gt; \u0026lt;/springProfile\u0026gt; \u0026lt;springProfile name=\u0026#34;staging\u0026#34;\u0026gt; \u0026lt;root level=\u0026#34;WARN\u0026#34;\u0026gt; \u0026lt;appender-ref ref=\u0026#34;CONSOLE\u0026#34;/\u0026gt; \u0026lt;appender-ref ref=\u0026#34;LOGZIO\u0026#34;/\u0026gt; \u0026lt;/root\u0026gt; \u0026lt;logger name=\u0026#34;io.reflectoring\u0026#34; level=\u0026#34;DEBUG\u0026#34;/\u0026gt; 
\u0026lt;/springProfile\u0026gt; \u0026lt;springProfile name=\u0026#34;production\u0026#34;\u0026gt; \u0026lt;root level=\u0026#34;WARN\u0026#34;\u0026gt; \u0026lt;appender-ref ref=\u0026#34;LOGZIO\u0026#34;/\u0026gt; \u0026lt;/root\u0026gt; \u0026lt;logger name=\u0026#34;io.reflectoring\u0026#34; level=\u0026#34;WARN\u0026#34;/\u0026gt; \u0026lt;/springProfile\u0026gt; \u0026lt;/configuration\u0026gt; The logback-spring.xml file looks very similar to the static logback.xml file that we created for the plain Java application.\nThe main difference is that we\u0026rsquo;re now using the \u0026lt;springProfile\u0026gt; element to configure the logging for the local, staging, and production profiles. Whatever is in the \u0026lt;springProfile\u0026gt; element is only valid for a certain profile. This way, we\u0026rsquo;re sending logs to the CONSOLE appender in the local environment, to the CONSOLE and the LOGZIO appender in the staging environment, and only to the LOGZIO appender in the production profile.\nThis lets us configure each environment fully independent of the other environments, without managing an environment variable like LOG_TARGET, as we did with the plain logback.xml file above.\nAnother change is that we use the \u0026lt;springProperty\u0026gt; element to load the logzio.token from Spring Boot\u0026rsquo;s environment configuration and map it to the ${logzioToken} variable that we\u0026rsquo;re using to configure the LOGZIO appender. The property logzio.token comes from the application.yml file:\nlogzio: token: ${LOGZIO_TOKEN} Here, we\u0026rsquo;re declaring the logzio.token configuration property to be set to the value of the environment variable LOGZIO_TOKEN. 
We could have used the environment variable directly in the logback-spring.xml file, but it\u0026rsquo;s good practice to declare all configuration properties that a Spring Boot application needs in the application.yml file so that the properties are easier to find and modify.\nYou can find more details about the Spring Boot logging features in the Spring Boot docs.\nStarting the Application in a Specific Profile Now, all we need to do is to start the Spring Boot application in a certain profile and it will configure Logback accordingly.\nTo start the app locally, we can use the Maven Spring Boot plugin:\nLOGZIO_TOKEN=\u0026lt;YOUR_LOGZIO_TOKEN\u0026gt; ./mvnw spring-boot:run -Dspring-boot.run.profiles=staging This will start the application in the staging profile, which will send the logs to logz.io and the console. If you\u0026rsquo;re interested in other ways of activating Spring Boot profiles, check out the guide to Spring Boot profiles.\nQuerying Logs in the Logz.io GUI If you went along and created a logz.io account to play with the example applications, you can now query the logs via the \u0026ldquo;Kibana\u0026rdquo; view on logz.io.\nIf you configured your token correctly and then started one of the plain Java applications with the environment variable LOG_TARGET set to LOGZIO, or the Spring Boot application in the staging or production profile, you should see the logs in your dashboard.\nConclusion In any investigation of an incident, logs are an invaluable resource. 
No matter what other observability tools you use, you will always look at the logs.\nThis means you should put some thought into your logging configuration.\nThis tutorial has shown how you can configure a Java application to send logs to the places you want them to be.\nYou can check out the fully functional example applications for Log4J, Logback, and Spring Boot on GitHub.\n","date":"July 21, 2021","image":"https://reflectoring.io/images/stock/0031-matrix-1200x628-branded_hufb3c207f9151b804bbf7fe86cefe5814_184798_650x0_resize_q90_box.jpg","permalink":"/profile-specific-logging-spring-boot/","title":"Per-Environment Logging with Plain Java and Spring Boot"},{"categories":["Software Craft"],"contents":"“CORS” stands for Cross-Origin Resource Sharing. CORS is a protocol and security standard for browsers that helps to maintain the integrity of a website and secure it from unauthorized access.\nIt enables JavaScript running in browsers to connect to APIs and other web resources like fonts and stylesheets from multiple different providers.\nIn this article, we will understand the following aspects of CORS:\n What\u0026rsquo;s the CORS standard? What are the different types of CORS requests? What are different CORS headers and what do we need them for? What security vulnerabilities exist around cross-origin requests? What are the best practices for secure CORS implementations?   Example Code This article is accompanied by a working code example on GitHub. What is CORS? CORS is a security standard implemented by browsers that enables scripts running in browsers to access resources located outside of the browser\u0026rsquo;s domain.\nThe CORS policy is published under the Fetch standard defined by the WHATWG community, which also publishes many web standards like HTML5, DOM, and URL.\nAccording to the Fetch standard spec:\n The CORS protocol consists of a set of headers that indicates whether a response can be shared cross-origin. 
For requests that are more involved than what is possible with HTML’s form element, a CORS-preflight request is performed, to ensure the request’s current URL supports the CORS protocol.\n Some scenarios of browsers fetching resources where CORS comes into play are:\n Display a map of a user\u0026rsquo;s location in an HTML page or single page application hosted in a domain xyz.com by calling Google\u0026rsquo;s Maps API https://maps.googleapis.com/maps/api/js. Show tweets from a public Twitter handle in an HTML page hosted in a domain xyz.com by calling a Twitter API https://api.twitter.com/xxx/tweets/xxxxx. Using web fonts like Typekit and Google Fonts in an HTML page hosted in a domain xyz.com from their remote domains.  Let us understand in greater detail the role of a CORS policy for fetching resources from remote origins, followed by how the CORS policy is enforced by browsers, and how we implement CORS in our applications in the subsequent sections.\nRelaxation of the Same-Origin Policy The role of a CORS policy is to maintain the integrity of a website and secure it from unauthorized access.\nThe CORS protocol was defined to relax the default security policy called the Same-Origin Policy (SOP) used by the browsers to protect their resources.\nThe Same-Origin Policy permits the browser to load resources only from a server hosted in the same origin as the browser.\nThe SOP was defined in the early years of the web and turned out to be too restrictive for new-age applications where we often need to fetch different kinds of resources from multiple origins.\nThe CORS protocol is implemented by all modern browsers to allow controlled access to resources located outside of the browser\u0026rsquo;s origin.\nCORS Terminology Before going further, let us define some frequently used terms like browsers, servers, origins, and cross-origins. We will then use these terms consistently throughout this article.\nWhat is an Origin? 
An Origin in the context of CORS consists of three elements:\n URI scheme, for example http:// or https:// Hostname like www.xyz.com Port number like 8000 or 80 (default HTTP port)  We consider two URLs to be of the same origin only if all three elements match.\nA more elaborate explanation of the Web Origin Concept is available in RFC 6454.\nOrigin Server and Cross-Origin Server The terms origin server and cross-origin server are not CORS terms. But we will be using these terms for referring to the server that is hosting the source application and the server to which the browser will send the CORS request. This diagram shows the main participants of a CORS flow:\nThe following steps happen when a user types in a URL: http://www.example.com/index.html in the browser:\n The browser sends the request to a server in a domain named www.example.com. We will call this server \u0026ldquo;Origin server\u0026rdquo; which hosts the page named index.html. The origin server returns the page named index.html as a response to the browser. The origin server also hosts other resources like the movies.json API in this example. The browser can also fetch resources from a server in a different domain like www.xyz.com. We will call this server \u0026ldquo;Cross-Origin server\u0026rdquo;. The browser uses Ajax technology with the built-in XMLHttpRequest object, or since 2017 the new fetch function within JavaScript to load content on the screen without refreshing the page.  This sequence of steps is represented in this sequence diagram:\nWe will use the terms \u0026ldquo;origin server\u0026rdquo; and \u0026ldquo;cross-origin server\u0026rdquo; throughout this article.\nThe origin server is the server from which the web page is fetched and the cross-origin server is any server that is different from the origin server.\nSame-Origin vs. Cross-Origin As stated earlier, the Same-Origin Policy (SOP) is a default security policy implemented by browsers. 
The SOP permits the browser to load resources only from the origin server.\nIn the absence of the Same-Origin Policy, any scripts downloaded from cross-origin servers would be able to access the document object model (DOM) of our website, allowing them to access potentially sensitive data or perform malicious actions without requiring user consent.\nThe following figure shows an HTML page currentPage.html making same or cross-origin requests to targetPage.html:\nAs we can see in this diagram, same-origin requests are allowed and cross-origin requests are blocked by default by the browser.\nThe URLs of targetPage.html that the browser rendering currentPage.html considers to be of the same or cross-origin are listed in this table. The default port is 80 for HTTP and 443 for HTTPS for the URLs in which we have not specified any port:\n   URLs being Matched Same-Origin or Cross-Origin Reason     http://www.mydomain.com/targetPage.html Same-Origin same scheme, host, and port   http://www.mydomain.com/subpage/targetPage.html Same-Origin same scheme, host, and port   https://www.mydomain.com/targetPage.html Cross-Origin same host but different scheme and port   http://pg.mydomain.com/targetPage.html Cross-Origin different host   http://www.mydomain.com:8080/targetPage.html Cross-Origin different port   http://pg.mydomain.com/mypage1.html Cross-Origin different host    If the origins corresponding to the URLs are the same, we can run JavaScript in currentPage.html which can fetch contents from targetPage.html.\nIn contrast, for cross-origin URLs, JavaScript running in currentPage.html will be prevented from fetching contents from targetPage.html without a CORS policy configured correctly.\nHow Browsers Implement the CORS Policy The CORS protocol is enforced only by the browsers. The browser does this by sending a set of CORS headers to the cross-origin server which returns specific header values in the response. 
Based on the header values returned in the response from the cross-origin server, the browser provides access to the response or blocks the access by showing a CORS error in the browser console.\nUsing the Header-Based Protocol of CORS When a request for fetching a resource is made from a web page, the browser detects whether the request is to the origin server or the cross-origin server and applies the CORS policy if the request is for the cross-origin server.\nThe browser sends a header named Origin with the request to the cross-origin server. The cross-origin server processes this request and sends back a header named Access-Control-Allow-Origin in the response.\nThe browser checks the value of the Access-Control-Allow-Origin header in the response and renders the response only if the value of the Access-Control-Allow-Origin header is the same as the Origin header sent in the request.\nThe cross-origin server can also use the wildcard * as the value of the Access-Control-Allow-Origin header to allow requests from any origin (note that the CORS specification does not support partial wildcard matches against the Origin header).\nCORS Failures CORS failures cause errors but specifics about the error are not available to the calling JavaScript for security reasons because an attacker could take hints from the error message to tailor subsequent attacks to increase the chances of success.\nThe only way to know about the error is by looking at the browser\u0026rsquo;s console for details of the error which is usually in the following form:\nAccess to XMLHttpRequest at \u0026#39;http://localhost:8000/orders\u0026#39; from origin \u0026#39;http://localhost:9000\u0026#39; has been blocked by CORS policy: No \u0026#39;Access-Control-Allow-Origin\u0026#39; header is present on the requested resource. The error displayed in the browser console is accompanied by an error \u0026ldquo;reason\u0026rdquo; message. The reason message can differ across browsers depending on the implementation. 
To get an idea of some reasons behind CORS errors, we can check the error reason messages for the Firefox browser.\nType of CORS Requests Sent by a Browser The browser determines the type of request to be sent to the cross-origin server depending on the kind of operations we want to perform with the resource in the cross-origin server.\nThe browser can send three types of requests to the cross-origin server:\n simple preflight requests with credentials  Let us understand these request types and observe them in the browsers' network log by running an example in the subsequent sections.\nSimple CORS Requests (GET, POST, and HEAD) Simple requests are sent by the browser for performing operations it considers safe like a GET request for fetching data or a HEAD request to check status. The request sent by the browser is simple if all of the below conditions apply:\n The HTTP request method is GET, POST, or HEAD The HTTP request contains only CORS safe-listed headers: Accept, Accept-Language, Content-Language, Content-Type. When the HTTP request contains the Content-Type header, its value is one of: application/x-www-form-urlencoded, multipart/form-data, or text/plain No event listeners are registered on any XMLHttpRequestUpload object No ReadableStream object is used in the request  The browser sends the simple request as a normal request similar to the Same Origin request after adding the Origin header, and the Access-Control-Allow-Origin header is checked by the browser when the response is returned.\nThe browser is able to read and render the response only if the value of the Access-Control-Allow-Origin header matches the value of the Origin header sent in the request. 
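The browser-side check just described can be sketched as a small predicate. This is a simplified illustration, not how browsers are actually implemented (they follow the full Fetch standard algorithm); one detail worth noting is that the wildcard * is not honored for credentialed requests:

```javascript
// Simplified sketch of the check a browser applies to a CORS response.
// requestOrigin: value of the Origin header sent with the request.
// allowOrigin: value of the Access-Control-Allow-Origin response header.
function browserAllowsResponse(requestOrigin, allowOrigin, withCredentials) {
  if (allowOrigin === '*') {
    // The wildcard matches any origin, but not for requests with credentials.
    return !withCredentials;
  }
  // Otherwise the header must exactly match the request's Origin.
  return allowOrigin === requestOrigin;
}

console.log(browserAllowsResponse('http://localhost:9000', 'http://localhost:9000', false)); // true
console.log(browserAllowsResponse('http://localhost:9000', 'http://localhost:8000', false)); // false
```

In other words, anything other than a wildcard or an exact match with the Origin header causes the browser to block access to the response.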
The Origin header contains the source origin of the request.\nPreflight Requests In contrast to simple requests, the browser sends preflight requests for operations that intend to change anything in the cross-origin server like an HTTP PUT method to update a resource or HTTP DELETE for deleting a resource.\nThese requests are not considered safe, so the web browser makes sure that cross-origin communication is allowed by sending a preflight request before sending the actual request to the cross-origin server. Requests which do not satisfy the criteria for a simple request also fall under this category.\nThe preflight request is an HTTP OPTIONS method which is sent automatically by the browser to the cross-origin server, to check that the cross-origin server will permit the actual request. Along with the preflight request, the browser sends the following headers:\n Access-Control-Request-Method: This header contains the HTTP method which will be used when the actual request is made. Access-Control-Request-Headers: This is a list of headers that will be sent with the request including any custom headers. Origin: The origin header that contains the source origin of the request similar to the simple request.  The actual request to the cross-origin server will not be sent if the result of the OPTIONS method is that the request cannot be made.\nAfter the preflight request is complete, the actual PUT method with CORS headers is sent.\nCORS Requests with Credentials In most real-life situations, requests sent to the cross-origin server need to be loaded with some kind of access credentials which could be an Authorization header or cookies. 
The default behavior of CORS requests is for the requests to be passed without any of these credentials.\nWhen credentials are passed with the request to the cross-origin server, the browser will not allow access to the response unless the cross-origin server sends a CORS header Access-Control-Allow-Credentials with a value of true.\nImplementing CORS in a Web Application For observing the CORS requests, let us run two web applications written in Node.js which will communicate with each other with the CORS protocol:\n For the cross-origin server, we will use a web application named OrderProcessor that will contain a REST API with GET and PUT methods. For the origin server, we will use another web application containing an HTML page. We will run JavaScript in this HTML page to communicate with the REST APIs in the OrderProcessor application which is our cross-origin server.  We can run these applications in our local machine using npm and node. The origin server hosting the HTML page is running on http://localhost:9000. This makes Ajax calls with the XMLHttpRequest object to the OrderProcessor application running on the cross-origin server with URL: http://localhost:8000 as shown in this figure:\nThese are CORS requests since the HTML in the origin server and the OrderProcessor application in the cross-origin server are running in different origins (because of different port numbers: 8000 and 9000 although they use the same scheme: HTTP and host: localhost).\nCross-Origin Server Handling CORS Requests in Node.js Our cross-origin server is a simple Node.js application named OrderProcessor built with the Express framework. 
We have created two REST APIs in the OrderProcessor application with GET and PUT methods for fetching and updating orders.\nThis is a snippet of the GET method of our OrderProcessor application running on cross-origin server on URL: localhost:8000:\napp.get(\u0026#39;/orders\u0026#39;, (req, res) =\u0026gt; { console.log(\u0026#39;Returning orders\u0026#39;); res.send(orders); }); The GET method defined here is used to return a collection of orders.\nClient Sending CORS Requests from JavaScript For sending requests to the cross-origin server containing the OrderProcessor application, we will use an HTML page and package this inside another Node.js application running on localhost:9000. This will be our origin server.\nWe will call the GET and PUT methods from this HTML page using the XMLHttpRequest JavaScript object:\n\u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;script\u0026gt; function load(domainURL) { var xhttp = new XMLHttpRequest(); xhttp.onreadystatechange = function() { if (this.readyState == 4 \u0026amp;\u0026amp; this.status == 200) { document.getElementById(\u0026#34;demo\u0026#34;).innerHTML = this.responseText; } }; xhttp.open(\u0026#34;GET\u0026#34;, domainURL, true); xhttp.send(); } function loadFromCrossOrigin() { load(\u0026#34;http://localhost:8000/orders\u0026#34;) } \u0026lt;/script\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;div id=\u0026#34;demo\u0026#34;\u0026gt; \u0026lt;h2\u0026gt;Order Processing\u0026lt;/h2\u0026gt; \u0026lt;div\u0026gt; \u0026lt;button type=\u0026#34;button\u0026#34; onclick=\u0026#34;loadFromCrossOrigin()\u0026#34;\u0026gt; ... 
\u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; The HTML shown here contains a button which we need to click to trigger the CORS request from the JavaScript method loadFromCrossOrigin.\nCORS Error Due to Same-Origin Policy If we run these applications without any additional configurations (setting CORS headers) in the cross-origin server, we will get a CORS error in our browser console as shown below:\nThis is an error caused by the restriction of accessing cross-origins due to the Same-Origin Policy. The error reason is:\nAccess to `XMLHttpRequest` at `http://localhost:8000/orders` from origin `http://localhost:9000` has been blocked by CORS policy: No `Access-Control-Allow-Origin` header is present on the requested resource. Fixing the CORS Error For Simple Requests As suggested in the CORS error description, let us modify the code in the cross-origin server to return the CORS header Access-Control-Allow-Origin in the response:\napp.use(function(req, res, next) { res.header(\u0026#34;Access-Control-Allow-Origin\u0026#34;, \u0026#34;http://localhost:9000\u0026#34;); next(); }); app.get(\u0026#39;/orders\u0026#39;, (req, res) =\u0026gt; { console.log(\u0026#39;Returning orders\u0026#39;); res.send(orders); }); We are returning a CORS header Access-Control-Allow-Origin with a value of the source origin http://localhost:9000 to fix the CORS error.\nThe CORS relevant request headers and response headers from a simple CORS request are shown below:\nRequest URL: http://localhost:8000/orders Request Method: GET Status Code: 200 OK **Request Headers** Host: localhost:8000 Origin: http://localhost:9000 **Response Headers** Access-Control-Allow-Origin: http://localhost:9000 In this example, the HTML served from http://localhost:9000 sends a request to the cross-origin server containing a REST API with the URL http://localhost:8000/orders.\nThis is a simple CORS request since it is a GET request.\nIn the 
browser console log, we can see an Origin header sent in the request with a value of http://localhost:9000 which is the URL of the origin server.\nThe cross-origin server responds with a response header Access-Control-Allow-Origin. The browser is able to render the response since the response header Access-Control-Allow-Origin has the value http://localhost:9000 which exactly matches the value of the Origin header sent in the request. We could also allow any origin by setting the header value to the wildcard *, but note that the CORS specification does not support partial wildcard matches like http://*localhost:9000.\nCORS Handling for Preflight Request Now we will modify our code in the cross-origin server application to handle preflight requests for calls made to the PUT method:\napp.use(function(req, res, next) { res.header(\u0026#34;Access-Control-Allow-Origin\u0026#34;, \u0026#34;http://localhost:9000\u0026#34;); res.header( \u0026#34;Access-Control-Allow-Headers\u0026#34;, \u0026#34;Origin, X-Requested-With, Content-Type, Accept\u0026#34; ); res.header( \u0026#34;Access-Control-Allow-Methods\u0026#34;, \u0026#34;GET, POST, PUT, DELETE\u0026#34; ); next(); }); app.put(\u0026#39;/orders\u0026#39;, (req, res) =\u0026gt; { console.log(\u0026#39;updating orders\u0026#39;); res.send(orders); }); For handling the preflight request, we are returning two more headers: Access-Control-Allow-Headers containing the headers Origin, X-Requested-With, Content-Type, Accept that the server will accept, and Access-Control-Allow-Methods containing the HTTP methods GET, POST, PUT, DELETE that the browser is allowed to send to the server if the preflight request is successful.\nWhen we send the PUT request from our HTML page, we can see two requests in the browser network log:\nThe preflight request with the OPTIONS method is followed by the actual request with the PUT method.\nWe can observe the following request and response headers of the preflight request in the browser console:\nRequest URL: http://localhost:8000/orders Request Method: OPTIONS Status Code: 200 OK .. .. 
Request Headers: Accept: */* Accept-Encoding: gzip, deflate, br Accept-Language: en-US,en;q=0.9 Access-Control-Request-Headers: content-type Access-Control-Request-Method: PUT Connection: keep-alive Host: localhost:8000 Origin: http://localhost:9000 Response Headers Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept Access-Control-Allow-Methods: GET, POST, PUT, DELETE Access-Control-Allow-Origin: http://localhost:9000 Allow: GET,HEAD,PUT In this example, the browser served from http://localhost:9000 sends a PUT request to a REST API with URL: http://localhost:8000/orders. Since this is a PUT request which will change the state of an existing resource in the cross-origin server, the browser sends a preflight request using the HTTP OPTIONS method. In response, the cross-origin server informs the browser that GET, HEAD, and PUT methods are allowed.\nCORS Handling for Request with Credentials We will now send a credential in the form of an Authorization header in our CORS request:\nfunction sendAuthRequestToCrossOrigin() { var xhr = new XMLHttpRequest(); xhr.onreadystatechange = function() { if (this.readyState == 4 \u0026amp;\u0026amp; this.status == 200) { document.getElementById(\u0026#34;demo\u0026#34;).innerHTML = this.responseText; } }; xhr.open(\u0026#39;GET\u0026#39;, \u0026#34;http://localhost:8000/orders\u0026#34;, true); xhr.setRequestHeader(\u0026#39;Authorization\u0026#39;, \u0026#39;Bearer rtikkjhgffw456tfdd\u0026#39;); xhr.withCredentials = true; xhr.send(); } Here we are sending a bearer token as the value of our Authorization header. 
To allow the browser to read the response, the cross-origin server needs to send the Access-Control-Allow-Credentials header in the response:\napp.use(function(req, res, next) { res.header(\u0026#34;Access-Control-Allow-Origin\u0026#34;, \u0026#34;http://localhost:9000\u0026#34;); res.header( \u0026#34;Access-Control-Allow-Headers\u0026#34;, \u0026#34;Origin, X-Requested-With, Content-Type, Accept, Authorization\u0026#34; ); res.header( \u0026#34;Access-Control-Allow-Methods\u0026#34;, \u0026#34;GET, POST, PUT, DELETE\u0026#34; ); res.header(\u0026#34;Access-Control-Allow-Credentials\u0026#34;,true); next(); }); app.put(\u0026#39;/orders\u0026#39;, (req, res) =\u0026gt; { console.log(\u0026#39;updating orders\u0026#39;); res.send(orders); }); We have modified our code in the cross-origin server to send a value of true for the Access-Control-Allow-Credentials header so that the browser is able to read the response. We have also added the Authorization header in the list of allowed request headers in the header Access-Control-Allow-Headers.\nWe can see the request and response headers in the browser console:\nRequest URL: http://localhost:8000/orders Request Method: GET Status Code: 200 OK Response Headers: Access-Control-Allow-Credentials: true Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, Authorization Access-Control-Allow-Methods: GET, POST, PUT, DELETE Access-Control-Allow-Origin: http://localhost:9000 Request Headers: Accept: */* Accept-Encoding: gzip, deflate, br Accept-Language: en-US,en;q=0.9 Authorization: Bearer rtikkjhgffw456tfdd Origin: http://localhost:9000 In this log, we can see the security credential in the form of the Authorization header in the request which contains a bearer token. The Authorization header is also included in the header named Access-Control-Allow-Headers returned from the cross-origin server. 
The browser can access the response since the value of the Access-Control-Allow-Credentials header sent by the server is true.\nVulnerabilities Caused by CORS Misconfiguration Communication with the CORS protocol also has the potential to introduce security vulnerabilities caused by misconfiguration of the CORS protocol on the cross-origin server. Some misconfigurations can allow malicious domains to access the API endpoints, while others allow credentials like cookies to be sent from untrusted sources to the cross-origin server and access sensitive data.\nLet us look at two examples of CORS vulnerabilities caused by misconfiguration in the code:\nOrigin Reflection - Copying the Value of Origin Header in the Response As we have seen earlier, when the browser sends a request to a cross-origin server, it adds an Origin header containing the value of the domain the request originates from. The cross-origin server needs to return an Access-Control-Allow-Origin header with the value of the Origin header received in the request.\nThere could be a scenario where multiple domains need access to the resources of the cross-origin server. In that case, the cross-origin server might set the value of the Access-Control-Allow-Origin header dynamically to the value of the domain it receives in the Origin header. A Node.js code setting the header dynamically may look like this:\nconst express = require(\u0026#39;express\u0026#39;); const app = express(); ... ... 
app.get(\u0026#39;/orders\u0026#39;, (req, res) =\u0026gt; { console.log(\u0026#39;Returning orders\u0026#39;); // set to the value received in Origin header  res.header(\u0026#34;Access-Control-Allow-Origin\u0026#34;, req.header(\u0026#39;Origin\u0026#39;)); res.send(orders); }); Here we are reading the value of the Origin header received in the request and setting it as the value of the Access-Control-Allow-Origin header sent in the response.\nDoing this will allow any domain including malicious ones to send requests to the cross-origin server.\nLenient Regular Expression Similar to the earlier example, we can check the value of the Origin header in the cross-origin server code by applying a regular expression. If we want to allow all subdomains to send requests to the cross-origin server, the code will look like this:\nconst express = require(\u0026#39;express\u0026#39;); const app = express(); ... ... app.get(\u0026#39;/orders\u0026#39;, (req, res) =\u0026gt; { console.log(\u0026#39;Returning orders\u0026#39;); const origin = req.header(\u0026#39;Origin\u0026#39;); // allow requests from subdomains of mydomain.com  let re = new RegExp(\u0026#34;https:\\/\\/[a-z]+.mydomain.com\u0026#34;); if (re.test(origin)) { // set to the value received in Origin header  res.header(\u0026#34;Access-Control-Allow-Origin\u0026#34;, origin); } res.send(orders); }); Since the dot character in the regular expression is not escaped, requests from sites like https://xyzmydomain.com will also be served. An attacker can exploit this vulnerability by buying xyzmydomain.com and hosting the malicious code there.\nAvoiding Security Vulnerabilities Caused by CORS Misconfiguration Here are some of the best practices we can use to implement CORS securely:\n In the application in the cross-origin server, we can define a whitelist of specific domains that are allowed to access the cross-origin server. 
When the request arrives, we should validate the Origin header against the whitelist to allow or deny access by populating appropriate values in the CORS response headers. Similarly, for the Access-Control-Allow-Methods header, we should specify exactly what methods are valid for the whitelisted domains to use. We should validate all domains that need to access resources, and the methods other domains are allowed to use if their access request is granted. We should also use CORS scanners to detect security vulnerabilities caused by CORS misconfigurations. CORS checks should also be part of penetration testing of critical applications. OWASP guidance on testing CORS provides guidelines for identifying endpoints that implement CORS and ensuring the security of the CORS configuration.  Conclusion In this article, we learned about CORS and how to use CORS policy to communicate between websites from different origins.\nLet us recap the main points that we covered:\n CORS is a security protocol implemented by browsers that allows us to access resources from a different origin. CORS requests are of three types: Simple, Preflight, and Request with Credentials. Simple requests are used to perform safe operations like an HTTP GET method. Preflight requests are for performing operations with side-effects like PUT and DELETE methods. Towards the end, we looked at examples of security vulnerabilities caused by CORS misconfigurations and some best practices for secure CORS implementation.  
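As a concrete takeaway, the whitelist validation recommended in the best practices above can be sketched as follows. This is a sketch, not production code: the allowedOrigins entries and the helper name corsOriginFor are assumptions chosen for illustration:

```javascript
// Explicit whitelist of origins allowed to access the cross-origin server
// (example values; replace with your own trusted domains).
const allowedOrigins = ['http://localhost:9000', 'https://www.mydomain.com'];

// Returns the value to set for Access-Control-Allow-Origin, or null to deny.
// Exact string comparison avoids the lenient-regex pitfall shown earlier.
function corsOriginFor(requestOrigin) {
  return allowedOrigins.includes(requestOrigin) ? requestOrigin : null;
}

// In an Express app this could be wired up as middleware:
// app.use((req, res, next) => {
//   const allowed = corsOriginFor(req.header('Origin'));
//   if (allowed) {
//     res.header('Access-Control-Allow-Origin', allowed);
//     res.header('Access-Control-Allow-Methods', 'GET, PUT');
//   }
//   next();
// });
```

Because the comparison is an exact match against a fixed list, lookalike domains such as https://xyzmydomain.com are denied by construction.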
I hope this guide will help you to get started with implementing CORS securely and fixing CORS errors.\nYou can refer to all the source code used in the article on GitHub.\n","date":"July 18, 2021","image":"https://reflectoring.io/images/stock/0105-shield-1200x628-branded_hu6f780e0bde7934463ecfc5107ba3c2f7_174927_650x0_resize_q90_box.jpg","permalink":"/complete-guide-to-cors/","title":"Complete Guide to CORS"},{"categories":["Java"],"contents":"With feature flags, we can reduce the risk of rolling out software changes to a minimum. We deploy the software with the changes, but the changes are behind a deactivated feature flag. After successful deployment, we can choose when and for which users to activate the feature.\nBy reducing the deployment risk, feature flags are a main driver of DevOps metrics like lead time and deployment frequency - which are proven to have a positive impact on organizational performance (see my book notes on \u0026ldquo;Accelerate\u0026rdquo; for more about DevOps metrics).\nIn this article, we\u0026rsquo;re going to implement feature flags with Togglz and LaunchDarkly: Togglz is an extensible Java library, and LaunchDarkly is a cloud-based feature management platform. We\u0026rsquo;ll explore how we can implement some common feature flagging use cases with each of them and discuss the pros and cons of each tool.\nIf you\u0026rsquo;re only interested in one of the two solutions, jump ahead to the section covering it:\n How to implement feature flags with Togglz How to implement feature flags with LaunchDarkly   Code Example You can follow along with the code examples in this article by browsing or cloning the code of a fully functional example application on GitHub.\nFeature Flagging Use Cases Before we dive into the tools, let\u0026rsquo;s take a look at some common feature flagging use cases. 
We\u0026rsquo;ll try to implement each of these use cases with each of the feature flag tools so we get a feeling of what we can do with them.\nThere are more than the use cases discussed in this article, of course. The idea is to look at the most common use cases to compare what the different feature flagging tools can do.\nUse Case 1: Global Rollout This is the simplest feature flag possible. We want to enable or disable a certain feature for all users.\nWe deploy a new version of the application with a deactivated feature and after successful deployment, we activate (roll out) the feature for all users. We can later decide to deactivate it again - also for all users:\nUse Case 2: Percentage Rollout The global rollout use case is very simple and raises the question of why we would even need a feature flagging tool because we could just implement it ourselves with a simple if/else construct. So let\u0026rsquo;s look at a bit more complex use case.\nA percentage rollout is another very common rollout strategy in which we activate a feature for a small percentage of users first, to see if it\u0026rsquo;s working as expected, and then ramp up the percentage over days or weeks until the feature is active for all users:\nImportant in this use case is that a user stays in the same cohort over time. It\u0026rsquo;s not enough to just enable a feature for 20% of the requests, because a user could issue multiple requests and have the feature enabled for some requests and disabled for others - which makes for a rather awkward user experience. 
So, the evaluation of the feature flag has to take the user into account.\nAlso, if the percentage is increased from 20% to 30%, the new 30% cohort should include the previous 20% cohort so the feature is not suddenly deactivated for the early adopters.\nYou can see that we don\u0026rsquo;t really want to implement this ourselves but instead rely on a tool to do it for us.\nUse Case 3: Rollout Based on a User Attribute The last use case we\u0026rsquo;re going to look at is a targeted rollout based on a user attribute or behavior. A user attribute can be anything: the location of the user, demographic information, or attributes that are specific to our application like \u0026ldquo;the user has done a specific thing in our application\u0026rdquo;.\nIn our example, we\u0026rsquo;ll activate a certain feature after a user has clicked a certain button:\nOur application will set the user\u0026rsquo;s clicked attribute to true after clicking the button. The feature flagging tool should take this attribute into account when evaluating the feature flag.\nTogglz Togglz is a Java library that we can include as a dependency into our application. The concepts of the library rotate around the FeatureManager class:\nOnce configured, we can ask the FeatureManager if a certain feature is active for a given user. Before a feature can be active, it needs to be enabled. This is to ensure that we\u0026rsquo;re not accidentally activating features that are not ready to be served to our users, yet.\nThe FeatureManager has access to a UserProvider, which knows about the user who is currently using our application. This way, Togglz can distinguish between users and we can build features that are active for some users and inactive for others.\nThe FeatureProvider provides the Features that we want to control in our application. Different FeatureProvider implementations load the feature data from different locations. 
This feature data contains the names of the features, whether they are enabled by default, and their activation strategy. We can decide to load our features from a Java enum, a config file, or from environment variables, for example.\nEach Feature has an ActivationStrategy that defines under which circumstances the feature will be active for a given user.\nFinally, the FeatureManager has access to a StateRepository which stores feature state. Most importantly, this state includes whether the feature is enabled and which ActivationStrategy the feature is using. By default, Togglz is using an in-memory store for the feature states.\nLet\u0026rsquo;s set up Togglz in our Java application to see what it can do!\nInitial Setup We\u0026rsquo;re going to set Togglz up in a Spring Boot application. We need to declare the following dependency in our pom.xml:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.togglz\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;togglz-spring-boot-starter\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.6.1.Final\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; To get Togglz running, we need to declare our features somewhere. We\u0026rsquo;re choosing to do this in an enum:\npublic enum Features implements Feature { GLOBAL_BOOLEAN_FLAG, //... more features  public boolean isActive() { return FeatureContext.getFeatureManager().isActive(this); } } For each feature that we want to use, we add a new enum constant. We can influence the features with a handful of different annotations.\nWhat\u0026rsquo;s left to do is to tell Togglz that it should use this Features enum. 
We do this by setting the togglz.feature-enums property in Spring Boot\u0026rsquo;s application.yml configuration file:\ntogglz: feature-enums: io.reflectoring.featureflags.togglz.Features This configuration property points to the fully qualified class name of our Features enum and the Spring Boot Starter that we included in the dependencies will automatically configure Togglz with a FeatureProvider that uses this enum as the source of feature definitions.\nWe\u0026rsquo;re now ready to use Togglz, so let\u0026rsquo;s see how we can implement our feature flagging use cases.\nGlobal Boolean Rollout with Togglz We\u0026rsquo;ve already seen our global boolean feature in the enum, but here it is again:\npublic enum Features implements Feature { GLOBAL_BOOLEAN_FLAG; public boolean isActive() { return FeatureContext.getFeatureManager().isActive(this); } } We can check if the feature is active by asking the Feature Manager like in the isActive() convenience method in the code above.\nFeatures.GLOBAL_BOOLEAN_FLAG.isActive() would return false, currently, because features are disabled by default. Only if a feature is enabled will its ActivationStrategy decide whether the feature should be active for a given user.\nWe can enable the feature by setting a property in application.yml:\ntogglz: features: GLOBAL_BOOLEAN_FLAG: enabled: true Alternatively, we could start the application with the environment variable TOGGLZ_FEATURES_GLOBAL_BOOLEAN_FLAG_ENABLED set to true.\nIf we call Features.GLOBAL_BOOLEAN_FLAG.isActive() now, it will return true.\nBut why is the feature active as soon as we enabled it? Aren\u0026rsquo;t enabled and active different things as explained above? 
Yes, they are, but we haven\u0026rsquo;t declared an ActivationStrategy for our feature.\nWithout an ActivationStrategy all enabled features are automatically active.\nWe just implemented a global boolean flag that is controlled by a configuration property or environment variable.\nPercentage Rollout with Togglz Next, let\u0026rsquo;s build a percentage rollout. Togglz calls this a \u0026ldquo;gradual rollout\u0026rdquo;.\nA proper percentage rollout only works when Togglz knows which user is currently using the application. So, we have to implement the UserProvider interface:\n@Component public class TogglzUserProvider implements UserProvider { private final UserSession userSession; public TogglzUserProvider(UserSession userSession) { this.userSession = userSession; } @Override public FeatureUser getCurrentUser() { return new FeatureUser() { @Override public String getName() { return userSession.getUsername(); } @Override public boolean isFeatureAdmin() { return false; } @Override public Object getAttribute(String attributeName) { return null; } }; } } This implementation of UserProvider reads the current user from the session. UserSession is a session-scoped bean in the Spring application context (see the full code in the example application).\nWe annotate our implementation with the @Component annotation so that Spring creates an object of it during startup and puts it into the application context. The Spring Boot starter dependency we added previously will automatically pick up UserProvider implementations from the application context and configure Togglz' FeatureManager with it. 
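The UserSession bean that TogglzUserProvider reads from is not shown in the article (it refers to the full example application for that). As a rough, hypothetical sketch, it could be as simple as a session-scoped holder for the username. The field and method names here are assumptions, and the Spring annotations are shown only as comments so the sketch stays dependency-free:

```java
// Hypothetical sketch of the session-scoped UserSession bean that
// TogglzUserProvider reads from. In the real application this would be
// annotated with @Component and @SessionScope; the annotations are
// omitted here to keep the sketch free of Spring dependencies.
public class UserSession {

    private String username;

    // Set once the user is known, e.g. after login.
    public void setUsername(String username) {
        this.username = username;
    }

    // Read by TogglzUserProvider.getCurrentUser() to identify the user.
    public String getUsername() {
        return username;
    }
}
```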
Togglz will now know which user is currently browsing our application.\nNext, we define our feature in the Features enum like this:\npublic enum Features implements Feature { @EnabledByDefault @DefaultActivationStrategy(id = GradualActivationStrategy.ID, parameters = { @ActivationParameter(name = GradualActivationStrategy.PARAM_PERCENTAGE, value = \u0026#34;50\u0026#34;) }) USER_BASED_PERCENTAGE_ROLLOUT; // ... } This time, we\u0026rsquo;re using the @EnabledByDefault annotation. That means the feature is enabled and will let its activation strategy decide whether the feature is active or not for a given user. So we don\u0026rsquo;t need to add togglz.features.USER_BASED_PERCENTAGE_ROLLOUT.enabled: true to application.yml to enable it.\nWe\u0026rsquo;re also using the @DefaultActivationStrategy annotation to configure this new feature to use the GradualActivationStrategy, activating the feature for 50% of the users.\nThis activation strategy creates a hashcode of the user name and the feature name, normalizes it to a value between 0 and 100, and then checks if the hashcode is below the percentage value (in our case 50). Only then will it activate the feature. See the full code of this activation strategy here.\nFeatures.USER_BASED_PERCENTAGE_ROLLOUT.isActive() will now return true for approximately 50% of the users using our application. If we have very few users with hashcodes that are close together, it might be considerably more or less than 50%, however.\nRollout Based on a User Attribute with Togglz Now, let\u0026rsquo;s look at how to build a feature that activates only after a user has done a certain action in our application.\nFor this, we\u0026rsquo;re going to implement the getAttribute() method in our UserProvider implementation:\n@Component public class TogglzUserProvider implements UserProvider { // ...  
@Override public FeatureUser getCurrentUser() { return new FeatureUser() { @Override public String getName() { return userSession.getUsername(); } @Override public boolean isFeatureAdmin() { return false; } @Override public Object getAttribute(String attributeName) { if (attributeName.equals(\u0026#34;clicked\u0026#34;)) { return userSession.hasClicked(); } return null; } }; } } Similar to getName(), the getAttribute() method returns a value from the session. We\u0026rsquo;re assuming here that userSession.hasClicked() returns true only after a user has clicked a certain button in our application. In a real application, we should persist this value in the database so it will stay the same even between user sessions!\nOur Togglz user objects now have the attribute clicked set to true after they have clicked the button.\nNext, we implement a custom UserClickedActivationStrategy:\npublic class UserClickedActivationStrategy implements ActivationStrategy { @Override public String getId() { return \u0026#34;clicked\u0026#34;; } @Override public String getName() { return \u0026#34;Rollout based on user click\u0026#34;; } @Override public boolean isActive(FeatureState featureState, FeatureUser user) { return (Boolean) user.getAttribute(\u0026#34;clicked\u0026#34;); } @Override public Parameter[] getParameters() { return new Parameter[0]; } } Note that the isActive() method returns the value of the user\u0026rsquo;s clicked attribute, which we just implemented in our custom UserProvider implementation.\nNow we can finally declare the feature in the Features enum:\npublic enum Features implements Feature { @EnabledByDefault @DefaultActivationStrategy(id = \u0026#34;clicked\u0026#34;) USER_ACTION_TARGETED_FEATURE; // ... } Again, we enable it by default, so that we don\u0026rsquo;t have to do so manually. 
As the activation strategy, we\u0026rsquo;re using our custom UserClickedActivationStrategy by passing the ID of that strategy into the DefaultActivationStrategy annotation.\nFeatures.USER_ACTION_TARGETED_FEATURE.isActive() will now return true only after the user has clicked a certain button in our application.\nManaging Feature Flags with the Togglz Web Console Now that we have a few features, we want to toggle them on or off. For example, we want to do a \u0026ldquo;dark launch\u0026rdquo; for a feature. That means we don\u0026rsquo;t enable it by default, deploy the feature in its disabled state, and only then decide to activate it.\nWe could, of course, change the enabled state in the application.yml file and then re-deploy the application, but the point of feature flagging is that we separate deployments from enabling features, so we don\u0026rsquo;t want to do this.\nFor managing features, Togglz offers a web console that we can deploy next to our application. With the Spring Boot integration, we can set a few properties in application.yml to activate it:\ntogglz: console: enabled: true secured: false path: /togglz use-management-port: false The secured property should be set to true in a production environment (or you secure it yourself). If set to true, only users for which FeatureUser.isFeatureAdmin() returns true will have access to the web console. This can be controlled in the UserProvider implementation.\nSetting use-management-port to false will start the web console on the same port as our Spring Boot application.\nOnce the application is started with this configuration, we can access the web console on http://localhost:8080/togglz:\nThe web console allows us to enable and disable features and even to change their activation strategy on the fly. 
There seems to be a bug that causes the GLOBAL_BOOLEAN_FLAG to be listed twice, probably because the web console reads it once from the Features enum and once from the application.yml file.\nDeploying Togglz into Production In a production environment, we usually want to deploy multiple nodes of our application. So, as soon as we think about a production environment for our application, we need to answer the question of how to use Togglz across multiple application nodes.\nThis diagram outlines what a production deployment could look like:\nOur users are accessing the application over a load balancer that shares the traffic across multiple application nodes. Each of these nodes is using Togglz to decide whether certain features are active or not.\nSince all application nodes should have the same state for all features, we need to connect Togglz to a feature state database that is shared across all application nodes. We can do this by implementing Togglz' StateRepository interface (or use an existing implementation like the JdbcStateRepository) and pointing it to a database.\nTo manage features, we need at least one node that serves the Togglz web console. This can be one (or all) of the application nodes, or a separate node as shown in the diagram above. This web console also has to be connected to the shared feature state database and it has to be protected from unauthorized access.\nOther Togglz Features In addition to what we discussed above, Togglz offers:\n a handful of different activation strategies to control how to activate a feature, a handful of different state repository implementations to store feature state in different databases, some pre-canned user provider implementations that integrate with authentication providers like Spring Security, grouping features in the admin console, support for JUnit 4 and 5 to help control feature state in tests.  
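The bucketing idea behind Togglz\u0026rsquo; gradual rollout described earlier (hash the user name and feature name, normalize the hash to a value between 0 and 100, and compare it against the configured percentage) can be sketched in a few lines. This is an illustration of the principle, not Togglz\u0026rsquo; actual GradualActivationStrategy code:

```java
// Simplified illustration of a gradual (percentage) rollout decision.
// Not Togglz' actual implementation: the real GradualActivationStrategy
// uses its own hashing, but the principle is the same.
public class GradualRolloutSketch {

    // Normalize a hash of user + feature to a bucket in [0, 100).
    static int bucket(String username, String featureName) {
        return Math.abs((username + featureName).hashCode() % 100);
    }

    // The feature is active if the user's bucket is below the percentage.
    static boolean isActive(String username, String featureName, int percentage) {
        return bucket(username, featureName) < percentage;
    }

    public static void main(String[] args) {
        // The same user always lands in the same bucket, so the decision
        // is stable across requests - the cohort problem from the
        // percentage rollout use case.
        boolean first = isActive("alice", "PERCENTAGE_ROLLOUT", 50);
        boolean second = isActive("alice", "PERCENTAGE_ROLLOUT", 50);
        System.out.println(first == second); // always true

        // Raising the percentage only adds users: anyone active at 20%
        // stays active at 30%, so early adopters keep the feature.
        if (isActive("alice", "PERCENTAGE_ROLLOUT", 20)) {
            System.out.println(isActive("alice", "PERCENTAGE_ROLLOUT", 30));
        }
    }
}
```

Because the decision is a pure function of the user and feature names, it also evaluates identically on every application node, without any coordination.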
In conclusion, Togglz provides a great framework to build your own feature flagging solution, but there\u0026rsquo;s quite some manual work involved. Let\u0026rsquo;s see how we can delegate that work using a feature management service in the cloud.\nLaunchDarkly LaunchDarkly is a full-fledged feature management service that does most of the dirty feature flagging work for us. The name stems from the concept of a \u0026ldquo;dark launch\u0026rdquo;, which is deploying a feature in a deactivated state and only activating it when the time is right.\nLet\u0026rsquo;s take a look at the core LaunchDarkly concepts before diving into the technicalities of controlling feature flags in Java:\nBeing a cloud service, LaunchDarkly provides a web UI for us to create and configure feature flags. We could also create feature flags programmatically via the API or various integrations with other tools, but we\u0026rsquo;ll stick to the UI in this article.\nFor each feature flag, we can define one or more variations. A variation is a possible value the feature flag can have for a specific user. A boolean flag, for example, has exactly two variations: true and false. We\u0026rsquo;re not limited to boolean feature flags, though: we can create flags with arbitrary numbers, string values, or even JSON snippets.\nTo decide which variation a feature flag will show to a given user, we can define targeting rules for each feature flag. The simplest targeting rule is \u0026ldquo;show variation A for all users\u0026rdquo;. A more complex targeting rule is \u0026ldquo;show variation A for all users with attribute X, variation B for all users with attribute Y, and variation C for all other users\u0026rdquo;. We will define a different targeting rule for each of our feature flagging use cases shortly.\nBy default, targeting for a feature flag is deactivated. That means that the targeting rules will not be evaluated. 
In this state, a feature flag always serves its default variation (which would be the value false for a boolean flag, for example).\nTo make its decision about which variation to serve, a targeting rule needs to know about the user for whom it\u0026rsquo;s making the decision.\nIn our code, we\u0026rsquo;ll be asking a LaunchDarkly client to tell us the variation of a given feature flag for a given user. The client loads the targeting rules that we have defined in the web UI from the LaunchDarkly server and evaluates them locally.\nSo, even though we are defining the targeting rules in the LaunchDarkly web UI (i.e. on a LaunchDarkly server), the LaunchDarkly client doesn\u0026rsquo;t call out to a LaunchDarkly server to poll for the variation we should serve to a given user! Instead, the client connects to the server on startup, downloads the targeting rules, and then evaluates them on the client side. LaunchDarkly uses a streaming architecture instead of a polling architecture.\nThis architecture is interesting from a scalability perspective because our application doesn\u0026rsquo;t have to make a network call every time we need to evaluate a feature flag. It\u0026rsquo;s also interesting from a resilience perspective because feature flag evaluation will still work if the LaunchDarkly server has exploded and is not answering our calls anymore.\nWith these concepts in mind, let\u0026rsquo;s see how we can use LaunchDarkly in a Spring Boot application.\nInitial Setup To use the LaunchDarkly Java client, we first need to include it as a dependency in our application. 
We add the following to our pom.xml file:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.launchdarkly\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;launchdarkly-java-server-sdk\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.3.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Before the client can talk to the LaunchDarkly server, we also need to create a LaunchDarkly account. If you want to play along with the example, you can sign up for a free trial account here.\nAfter signup, you get an \u0026ldquo;SDK key\u0026rdquo; that the client uses to authenticate to the server.\nWe will put this key into Spring Boot\u0026rsquo;s application.yml configuration file:\nlaunchdarkly: sdkKey: ${LAUNCHDARKLY_SDK_KEY} This will set the configuration property launchdarkly.sdkKey to the value of the environment variable LAUNCHDARKLY_SDK_KEY on startup of the Spring Boot application.\nWe could have hard-coded the SDK key into the application.yml file, but it\u0026rsquo;s better practice to inject secrets like this via environment variables so they don\u0026rsquo;t accidentally end up in version control and who knows where from there.\nThe final piece of setup is to create an instance of the LaunchDarkly client and make it available to our application:\n@Configuration public class LaunchDarklyConfiguration { private LDClient launchdarklyClient; @Bean public LDClient launchdarklyClient(@Value(\u0026#34;${launchdarkly.sdkKey}\u0026#34;) String sdkKey) { this.launchdarklyClient = new LDClient(sdkKey); return this.launchdarklyClient; } @PreDestroy public void destroy() throws IOException { this.launchdarklyClient.close(); } } This configuration class will create an LDClient instance and add it to the Spring application context. On instantiation, the client will download the current targeting rules from a LaunchDarkly server. 
This means we should make sure that we don\u0026rsquo;t instantiate a new LDClient instance for each feature flag evaluation.\nTo create the LDClient instance, we inject the SDK key.\nWe also implement a @PreDestroy method that is called when the Spring application context is shutting down (i.e. when the application is shutting down). This method tells the client to close gracefully, sending any events that it might have queued up to the server. Such events include evaluation counters for feature flags and changes in a user\u0026rsquo;s attributes, for example.\nWith this setup, we\u0026rsquo;re ready to implement our first feature flag!\nGlobal Boolean Rollout with LaunchDarkly Let\u0026rsquo;s start with the simplest feature flag possible: a simple boolean toggle that activates a feature for all users or none.\nFirst, we create a feature flag with the key global-boolean-flag in the LaunchDarkly UI:\nNote that we created the feature flag as a boolean flag, which means that it has exactly two variations: true and false. We also have not created a specific targeting rule, so the default rule will always serve the false variation.\nIn the screenshot, you can see that the targeting is already set to \u0026ldquo;on\u0026rdquo;, which means that whatever targeting rules we define will be \u0026ldquo;live\u0026rdquo; and have an effect on our users.\nAs soon as the feature is saved, we can ask our LDClient to evaluate the feature for us:\nLDUser user = new LDUser.Builder(userSession.getUsername()) .build(); boolean booleanFlagActive = launchdarklyClient .boolVariation(\u0026#34;global-boolean-flag\u0026#34;, user, false); To evaluate a feature flag, the LaunchDarkly client needs to know which user the feature should be evaluated for. 
With our simple global boolean flag, we don\u0026rsquo;t really need a user, because we want to enable the feature for everyone or nobody, but most targeting rules will evaluate differently for different users, so we always need to pass a user to the client.\nIn the example, we\u0026rsquo;re just getting the (unique) username from our session and creating an LDUser object with it. Whatever we pass as a key into the LDUser needs to be a unique identifier for the user so that LaunchDarkly can recognize the user.\nA username is not the best key, by the way, because it\u0026rsquo;s personally identifiable information, so a more opaque user ID is probably the better choice in most contexts.\nIn our code, we need to know what kind of variations the feature flag provides to call the appropriate method. In our case, we know the feature flag is a boolean flag, so we use the method boolVariation(). The third parameter to this method (false) is the value the feature should evaluate to in case the client could not make a connection to the LaunchDarkly server.\nIf the feature flag is configured as shown in the screenshot above, the client will know that the targeting is \u0026ldquo;on\u0026rdquo; for the feature global-boolean-flag, and then evaluate the default rule, which evaluates to false. If we change the default rule to true, LaunchDarkly will inform our client and the next call to boolVariation() will evaluate to true.\nPercentage Rollout with LaunchDarkly To implement a percentage rollout with LaunchDarkly, we create a new feature named user-based-percentage-rollout in the LaunchDarkly UI and set the default targeting rule to a percentage rollout:\nIn our code, we can now evaluate this feature flag the same way as before:\nboolean percentageFlagActive = launchdarklyClient .boolVariation(\u0026#34;user-based-percentage-rollout\u0026#34;, user, false); For each variation of a percentage feature flag, LaunchDarkly creates a bucket. 
In the case of our example, we have two buckets, one for the variation true, and one for the variation false, and each bucket has the same size (50%).\nThe LaunchDarkly client knows about these buckets. To determine which bucket the current user falls into, the LaunchDarkly client creates a hashcode for the user and uses it to decide which bucket to put the user in. This allows multiple - potentially distributed - LaunchDarkly clients to evaluate to the same value for the same user, because they calculate the same hashcode.\nRollout Based on a User Attribute with LaunchDarkly We can implement more complex targeting strategies in the same fashion. We configure the targeting rules in the LaunchDarkly UI, and then ask the LaunchDarkly client for the variation for the given user.\nLet\u0026rsquo;s assume that we want to enable a certain feature for users only after they have clicked a certain button in our application. For this case, we can create a targeting rule that serves true only for users with the clicked attribute set to true:\nBut how does LaunchDarkly know about the clicked attribute of a user? We need to pass it into the client:\nLDUser user = new LDUser.Builder(userSession.getUsername()) .custom(\u0026#34;clicked\u0026#34;, userSession.hasClicked()) .build(); boolean clickedFlagActive = launchdarklyClient .boolVariation(\u0026#34;user-clicked-flag\u0026#34;, user, false); When we create the LDUser object, we now set the clicked custom attribute to a value that - in our example - we get from the user session. With the clicked attribute, the LaunchDarkly client can now properly evaluate the feature flag.\nAfter a feature has been evaluated for a user with a given attribute, LaunchDarkly will show the user\u0026rsquo;s attributes in its user dashboard:\nNote that LaunchDarkly only shows these user attributes as a convenience. The user attributes are evaluated by the LaunchDarkly client, not the LaunchDarkly server! 
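Conceptually, that client-side evaluation boils down to matching the user\u0026rsquo;s attributes against the downloaded rules. The following is a much-simplified model of the idea for a single boolean rule on one attribute, not the actual LaunchDarkly SDK code:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of client-side targeting-rule evaluation. The real
// LaunchDarkly SDK downloads the rules from the server and evaluates
// them locally; this sketch only shows the principle.
public class TargetingSketch {

    // Serve true only if the user's "clicked" attribute is true;
    // otherwise fall back to the default variation (false).
    static boolean evaluateClickedRule(Map<String, Object> userAttributes) {
        Object clicked = userAttributes.get("clicked");
        return Boolean.TRUE.equals(clicked);
    }

    public static void main(String[] args) {
        Map<String, Object> user = new HashMap<>();
        user.put("clicked", true);
        System.out.println(evaluateClickedRule(user)); // true

        // If the attribute is missing, the rule cannot match and the
        // default variation (false) is served.
        System.out.println(evaluateClickedRule(new HashMap<>())); // false
    }
}
```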
So, if our application doesn\u0026rsquo;t set the clicked attribute of the LDUser object, our example feature flag will evaluate to false, even if we have set the clicked attribute to true in a previous call!\nAdditional Features The targeting rules in our examples above are still rather simple, given the flexibility the LaunchDarkly UI offers to create targeting rules.\nAs mentioned, LaunchDarkly not only supports boolean feature flags, but any number of variations of different types like strings, numbers, or JSON. This opens the door to pretty much every feature flagging use case one can think of.\nIn addition to flexible targeting rules, LaunchDarkly offers a lot of features that are geared towards teams and even enterprises:\n analytics across our feature flags, designing feature workflows with scheduled feature releases and approval steps, auditing on feature flag changes, so we can reconstruct the variations of a feature flag at a given point in time, debugging feature flags in the LaunchDarkly UI to verify that features are evaluated to the expected variation, slicing our user base into segments to target each segment differently, running experiments by pairing a feature flag with a certain metric from our application to gauge how the feature impacts the metric, and a lot more.  Conclusion - What\u0026rsquo;s the Best Feature Flagging Solution for Me? The two solutions discussed in this article are very different. 
As is often the case when deciding on a tool that solves a specific problem, you can\u0026rsquo;t really say that one solution is \u0026ldquo;better\u0026rdquo; than another without taking your context into account.\nTogglz is a Java library that we can easily extend by implementing some interfaces, but it doesn\u0026rsquo;t scale well with a lot of features (because they will be hard to find in the web console) and we have to do some custom work to self-host the web console and to integrate it with a database, for example.\nLaunchDarkly, on the other hand, is a full-blown feature management platform that supports many programming languages, allows very flexible targeting rules and scales to an almost limitless number of feature flags without impacting performance too much. But it follows a subscription model and we\u0026rsquo;re sharing our feature data with them.\nFor small teams who are working on a few - exclusively Java - codebases with tens of features, Togglz is a great way to get started with feature flags.\nFor bigger teams or enterprises with multiple codebases - potentially across multiple programming languages - and hundreds or even thousands of feature flags, there is no way around a feature management platform like LaunchDarkly.\nHere\u0026rsquo;s an (incomplete) list of aspects to think about when deciding on a feature flagging solution for your context:\n| Aspect | Togglz | LaunchDarkly |\n|---|---|---|\n| Targeting strategies | By implementing the ActivationStrategy interface | By configuring a targeting rule in the UI, via API, or via integration |\n| Changing the targeting | Might need redeployment of a new ActivationStrategy | Any time by changing a rule in the UI |\n| Targeting by application environment (staging, prod, \u0026hellip;) | No concept of application environments | Feature flags can be configured to evaluate differently for different environments |\n| Programming Languages | Java | Many |\n| Feature variations | Only boolean | Booleans, strings, numbers, and JSON |\n| Feature management | Via self-hosted web console | Via web console in the cloud |\n| Feature state | By implementing a StateRepository interface | Managed by LaunchDarkly servers or a self-hosted Relay Proxy |\n| Feature analytics | Needs to be custom-built | Out-of-the-box |\n| Working in a team | Simple feature management in the web console | Audit logs, user dashboard, feature ownership, \u0026hellip; |\n| Enterprise | Simple feature management in the web console | Workflows, custom roles, SSO/SCIM/SAML login, code references, \u0026hellip; |\n| Cost | Cost of customizing | Per-seat fee |\n| Integrations | Spring Boot, Spring Security, EJB | No out-of-the-box integrations with Java frameworks |\n","date":"July 17, 2021","image":"https://reflectoring.io/images/stock/0104-on-off-1200x628-branded_hue5392027620fc7728badf521ca949f28_116615_650x0_resize_q90_box.jpg","permalink":"/java-feature-flags/","title":"Feature Flags in Java with Togglz and LaunchDarkly"},{"categories":["Spring Boot","AWS"],"contents":"Email is a convenient way to communicate different kinds of events from applications to interested parties.\nAmazon Simple Email Service (SES) is an email platform that provides an easy and cost-effective way to send and receive emails.\nSpring Cloud for Amazon Web Services (AWS) is a sub-project of Spring Cloud which makes it easy to integrate with AWS services using Spring idioms and APIs familiar to Spring developers.\nIn this article, we will look at using Spring Cloud AWS for interacting with AWS Simple Email Service (SES) to send emails with the help of some code examples.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n  Example Code This article is accompanied by a working code example on GitHub. How Does SES Send Email? 
When we ask SES to send an email, the request is processed in multiple stages:\n The email sender (either an application or email client) requests Amazon SES to send an email to one or more recipients. SES first validates the request and, if successful, creates an email message with the request parameters. This email message is compliant with the Internet Message Format specification (RFC 5322) and consists of header, body, and envelope. SES also scans the message for malicious content and then sends it over the Internet using Simple Mail Transfer Protocol (SMTP) to the recipient\u0026rsquo;s receiver ISP.  After this, the following outcomes are possible:\n Successful Delivery: The email is accepted by the Internet service provider (ISP) which delivers the email to the recipient. Hard Bounce: The email is rejected by the ISP because the recipient\u0026rsquo;s address is invalid. The ISP sends the hard bounce notification back to Amazon SES, which notifies the sender through email or by publishing it to an Amazon Simple Notification Service (Amazon SNS) topic set up to receive this notification. Soft Bounce: The ISP cannot deliver the email to the recipient for reasons such as the recipient\u0026rsquo;s mailbox being full, the domain not existing, or a temporary condition, such as the ISP being too busy to handle the request. The ISP sends a soft bounce notification to SES, which retries the email for a specified period of time. If SES cannot deliver the email within that time, it sends a hard bounce notification through email or by publishing the event to an SNS topic. Complaint: The recipient marks the email as spam in his or her email client. If Amazon SES has a feedback loop set up with the ISP, then a complaint notification is sent to Amazon SES, which forwards the complaint notification to the sender. 
Auto response: The receiver ISP sends an automatic response such as an out-of-office message to Amazon SES, which forwards the auto-response notification to the sender.  When delivery fails, Amazon SES will respond to the sender with an error and will drop the email.\nSending Mails With SES When we send an email with SES, we are using SES as our outbound email server. We can also use any other email server and configure it to send outgoing emails through SES. We can send emails with SES in multiple ways:\nSending Mails From the SES Console We can use the SES console to send emails with minimal setup. However, it is mainly used to monitor our sending activity. We can view the number of emails that we have sent along with the number of bounces and complaints as shown here:\nSending Mails Using SMTP Simple mail transfer protocol (SMTP) is the communication protocol for sending emails, receiving emails, and relaying outgoing mail between email senders and receivers. When we send an email, the SMTP server processes our email, decides which server to send the message to, and relays the message to that server.\nWe can access Amazon SES through SMTP in two ways:\n by sending emails to SES from SMTP-enabled software, or from an SMTP-compatible programming language like Java by using the Java Mail API  We can find the information for connecting to the SMTP endpoint from the SES console:\nSending Mails Using the SES API We can send emails by calling the SES Query API with any REST client or by using the AWS SDK. We can send both formatted emails and emails in plain text.\nWe\u0026rsquo;re going to look at this in the upcoming section.\nSending Mails with Amazon SES using Spring Cloud AWS Spring Cloud AWS includes a module for SES called spring-cloud-aws-ses which simplifies working with Amazon SES. This module for SES contains two classes: SimpleEmailServiceMailSender and SimpleEmailServiceJavaMailSender. 
The class hierarchy containing these classes is shown in this diagram:\nThis class diagram shows that the SimpleEmailServiceJavaMailSender class inherits from SimpleEmailServiceMailSender, which implements the MailSender interface. The MailSender interface is part of Spring\u0026rsquo;s mail abstraction that contains the send() method for sending emails.\nThe SimpleEmailServiceMailSender class sends emails with the Amazon Simple Email Service. This implementation has no dependencies on the Java Mail API. It can be used to send simple mail messages that do not have any attachments.\nThe SimpleEmailServiceJavaMailSender class allows sending emails with attachments and other MIME parts inside mail messages.\nSetting Up the SES Sandbox Environment Amazon SES provides a sandbox environment to test the capabilities of Amazon SES. By default, our account is in sandbox mode.\nWe can only send emails to verified identities when our account is in sandbox mode. A verified identity is a domain or email address that we use to send email. Before we can send an email using SES in sandbox mode, we must create and verify each identity that we want to use as a From, To, Source, Sender, or Return-Path address. Verifying an identity with Amazon SES confirms our ownership and helps to prevent its unauthorized use.\nThere are also limits to the volume of email we can send each day, and on the number of messages we can send per second.\nWe will need a few email addresses to test our examples. Let us verify these first by following the steps in the SES documentation.
The figure below outlines some of the steps we need to perform in the AWS SES console:\nAs we can see in this figure, we first add our email address in SES, which triggers a verification email; the owner of the address needs to verify it by visiting the link in that email.\nSending Emails in Spring Boot With our emails verified, let us now create a Spring Boot project with the help of the Spring Boot Initializr, and then open the project in our favorite IDE.\nAdding the Dependencies We will first add all the dependencies of Spring Cloud AWS and SES. For Spring Cloud AWS, we will add a separate Spring Cloud AWS BOM in our pom.xml file using this dependencyManagement block:\n\u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-aws-dependencies\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.3.0\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; For adding the support for SES, we need to include the module dependency which is available as a starter module spring-cloud-starter-aws-ses:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-starter-aws-ses\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; spring-cloud-starter-aws-ses includes the transitive dependencies for spring-cloud-starter-aws and spring-cloud-aws-ses.\nConfiguring the Mail Sender Beans Spring Cloud AWS provides SimpleEmailServiceMailSender which is an implementation of the MailSender interface from Spring\u0026rsquo;s mail abstraction. SimpleEmailServiceMailSender sends emails with Amazon SES using the AWS SDK for Java.
It can be used to send simple email messages in plain text without any attachments. A configuration with the necessary elements will look like this:\n@Configuration public class MailConfig { @Bean public AmazonSimpleEmailService amazonSimpleEmailService() { return AmazonSimpleEmailServiceClientBuilder.standard() .withCredentials(new ProfileCredentialsProvider(\u0026#34;pratikpoc\u0026#34;)) .withRegion(Regions.US_EAST_1) .build(); } @Bean public MailSender mailSender( AmazonSimpleEmailService amazonSimpleEmailService) { return new SimpleEmailServiceMailSender(amazonSimpleEmailService); } } Here we are setting up the AmazonSimpleEmailService bean with credentials for our AWS account using the ProfileCredentialsProvider. After that, we are using this AmazonSimpleEmailService bean for creating the SimpleEmailServiceMailSender bean.\nSending Simple Email We will now inject the SimpleEmailServiceMailSender bean in our service class from where we will send an email in text format without any attachments:\n@Service public class NotificationService { @Autowired private MailSender mailSender; @Autowired private JavaMailSender javaMailSender; public void sendMailMessage( final SimpleMailMessage simpleMailMessage) { this.mailSender.send(simpleMailMessage); } } Here we are calling the send method on the mailSender reference to send our email. 
The method takes a SimpleMailMessage as a parameter, which is a container for email attributes like the from address, to address, and email text that we will send from our test class below.\nWe test this setup by calling this method from a test class:\n@SpringBootTest class NotificationServiceTest { @Autowired private NotificationService notificationService; @Test void testSendMail() { SimpleMailMessage simpleMailMessage = new SimpleMailMessage(); simpleMailMessage.setFrom(\u0026#34;pratikd2000@gmail.com\u0026#34;); simpleMailMessage.setTo(\u0026#34;pratikd2027@gmail.com\u0026#34;); simpleMailMessage.setSubject(\u0026#34;test subject\u0026#34;); simpleMailMessage.setText(\u0026#34;test text\u0026#34;); notificationService.sendMailMessage(simpleMailMessage); } } Here we are using two test emails as our from and to email addresses, which we verified earlier from the SES console. We are setting these emails along with the subject and contents of the email in the SimpleMailMessage class. As explained before, we are using a sandbox environment that will only work with verified email addresses.\nSending Email with Attachments We will now send an email with an attachment, for which we will use the SimpleEmailServiceJavaMailSender class.
Let us update our configuration by setting up the bean for SimpleEmailServiceJavaMailSender:\n@Configuration public class MailConfig { @Bean public AmazonSimpleEmailService amazonSimpleEmailService() { return AmazonSimpleEmailServiceClientBuilder.standard() .withCredentials(new ProfileCredentialsProvider(\u0026#34;pratikpoc\u0026#34;)) .withRegion(Regions.US_EAST_1) .build(); } @Bean public JavaMailSender javaMailSender( AmazonSimpleEmailService amazonSimpleEmailService) { return new SimpleEmailServiceJavaMailSender(amazonSimpleEmailService); } } Here we follow similar steps as we did for configuring the SimpleEmailServiceMailSender earlier.\nWe will now inject the SimpleEmailServiceJavaMailSender through the JavaMailSender interface in our service class. The JavaMailSender interface is part of Spring\u0026rsquo;s mail abstraction which adds specialized JavaMail features like MIME message support. JavaMailSender also provides a callback interface for the preparation of JavaMail MIME messages, called MimeMessagePreparator.\n@Service public class NotificationService { @Autowired private MailSender mailSender; @Autowired private JavaMailSender javaMailSender; public void sendMailMessageWithAttachments() { this.javaMailSender.send(new MimeMessagePreparator() { @Override public void prepare(MimeMessage mimeMessage) throws Exception { MimeMessageHelper helper = new MimeMessageHelper(mimeMessage, true, \u0026#34;UTF-8\u0026#34;); helper.addTo(\u0026#34;foo@bar.com\u0026#34;); helper.setFrom(\u0026#34;bar@baz.com\u0026#34;); InputStreamSource data = new ByteArrayResource(\u0026#34;\u0026#34;.getBytes()); helper.addAttachment(\u0026#34;test.txt\u0026#34;, data ); helper.setSubject(\u0026#34;test subject with attachment\u0026#34;); helper.setText(\u0026#34;mime body\u0026#34;, false); } }); } } Here we are using the callback interface MimeMessagePreparator to construct the email message by setting the to and from email addresses along with the subject and text of the 
email.\nEnabling Production Access We finally need to move our account out of the sandbox so that we can send emails to any recipient, irrespective of whether the recipient\u0026rsquo;s address or domain is verified. But we still have to verify all identities that we use, such as From, Source, Sender, or Return-Path addresses. We need to submit a request for production access as shown below:\nHere we are submitting the production access request from the AWS Management Console.\nWe can also submit the production access request from the AWS CLI. Submitting the request with the AWS CLI is useful when we want to request production access for a large number of identities (domains or email addresses), or when we want to automate the process of setting up Amazon SES.\nConclusion In this article, we looked at the important concepts of Amazon Simple Email Service (SES) and the libraries provided by Spring Cloud AWS to interact with it. We also developed a Spring Boot application with a REST API that can send emails using the SES module of Spring Cloud AWS.\nI hope this post has given you a good introduction to Amazon Simple Email Service (SES) and how we can use this service to send emails.\nYou can refer to all the source code used in the article on GitHub.\nCheck Out the Book!  
This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"June 27, 2021","image":"https://reflectoring.io/images/stock/0075-envelopes-1200x628-branded_hu2f9dd448936f3159981d5b962b2c979c_136735_650x0_resize_q90_box.jpg","permalink":"/spring-cloud-aws-ses/","title":"Sending Emails with Amazon SES and Spring Cloud AWS"},{"categories":["Spring Boot","AWS"],"contents":"ElastiCache is a fully managed caching service available in AWS Cloud.\nSpring Cloud AWS helps us to simplify the communication of Spring Boot application with AWS services. From taking care of security to auto-configuring the beans required for the communication, it takes care of a lot of essential steps.\nIn this article, we will look at how we can use it to connect our application to AWS ElastiCache for Redis.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n  Example Code This article is accompanied by a working code example on GitHub. Why Caching? Caching is a common technique of temporarily storing a copy of data or result of a computation in memory for quick and frequent access. We use caching primarily to:\n Improve the throughput of the application. Prevent overwhelming the application or services the application is calling with redundant requests.  
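Before bringing in a managed cache store, the idea behind caching can be illustrated with a naive, Map-based in-memory cache with a time-to-live. This is a plain-Java sketch (the class name and TTL handling are hypothetical); it lacks the eviction policies, persistence, and cross-instance sharing that Redis and ElastiCache provide:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Naive in-memory cache: each entry remembers when it expires.
public class TtlCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long expiresAtMillis;
        Entry(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Returns the cached value; on a miss or an expired entry it calls the
    // loader (e.g. a database query we want to avoid repeating) and caches it.
    public V get(K key, Supplier<V> loader) {
        Entry<V> entry = store.get(key);
        if (entry == null || System.currentTimeMillis() >= entry.expiresAtMillis) {
            V value = loader.get();
            store.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
            return value;
        }
        return entry.value;
    }
}
```

A second get with the same key within the TTL returns the stored value without invoking the loader again, which is exactly the redundant-work saving described above.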
We can either implement caching in our application by using an in-memory Map-based data structure, or we can use a full-blown caching solution such as Redis.\nWhat is ElastiCache? ElastiCache is a fully managed in-memory caching service in AWS Cloud. It currently supports two caching engines: Memcached and Redis.\nElastiCache for Redis Redis is a popular in-memory data structure store. It is open-source and widely used in the industry for caching. It stores the data as key-value pairs and supports many varieties of data structures like string, hash, list, set, sorted set with range queries, bitmap, hyperloglog, geospatial index, and streams.\nIn AWS, one of the ways of using Redis for caching is by using the ElastiCache service.\nElastiCache hosts the Redis caching engine and provides high availability, scalability, and resiliency to it. It also takes care of all the networking and security requirements under the shared responsibility model.\nThe basic building block of ElastiCache is the cluster. A cluster can have one or more nodes. Each node runs an instance of the Redis cache engine software. Please refer to the AWS ElastiCache User Guide for more details.\nSpring Cloud AWS For Caching Spring supports a unified caching abstraction by providing the Cache and CacheManager interfaces to unify different caching technologies.\nIt also supports JCache (JSR-107) annotations to allow us to leverage a variety of caching technologies.\nSpring Cloud AWS integrates the Amazon ElastiCache service into the Spring unified caching abstraction by providing an implementation of CacheManager based on the Memcached and Redis protocols. The caching support for Spring Cloud AWS provides an implementation of Memcached for ElastiCache and uses Spring Data Redis for Redis caches.\nConfiguring Dependencies for Spring Cloud AWS To use Spring Cloud AWS, we first need to add the Spring Cloud AWS BOM (Bill of Materials).
BOM will help us to manage our dependency versions:\ndependencyManagement { imports { mavenBom \u0026#39;io.awspring.cloud:spring-cloud-aws-dependencies:2.3.1\u0026#39; } } Next, we need to add the following dependencies:\nimplementation \u0026#39;org.springframework.boot:spring-boot-starter-data-redis\u0026#39; implementation \u0026#39;io.awspring.cloud:spring-cloud-starter-aws\u0026#39; implementation \u0026#39;com.amazonaws:aws-java-sdk-elasticache\u0026#39; Let\u0026rsquo;s talk a bit about these dependencies:\n spring-cloud-starter-aws provides core AWS Cloud dependencies such as spring-cloud-aws-context and spring-cloud-aws-autoconfiguration. Out of the box spring-cloud-aws-context provides support for Memcached but for Redis, it needs the Spring Data Redis dependency. Spring Data Redis gives us access to Spring Cache abstraction, and also Lettuce which is a popular Redis client.  spring-cloud-aws-autoconfiguration glues everything together and configures a CacheManager which is required by the Spring Cache abstraction to provide caching services to the application.\nSpring Cloud AWS does all the heavy lifting of configuring the caches for us. All we need to do is to provide the name of the cache. Let\u0026rsquo;s look at how we can do that.\nCaching with Spring Boot The easiest way to implement caching in a Spring Boot application is by using Spring Boot\u0026rsquo;s Cache Abstraction. Please read our article on Implementing Cache in a Spring Application to dive deeper into the topic.\nIn this section, we will only understand the configuration required for the integration of Spring Cloud AWS with ElastiCache.\nThe first thing we need to do is to enable caching in our application using @EnableCaching annotation:\n@Configuration @EnableCaching public class EnableCache { //... } Here we have used a separate configuration class to enable caching.\nNext, we need to identify the methods that we need to cache. 
In our example application we have decided to cache methods of two services ProductService and UserService:\n@Service @AllArgsConstructor @CacheConfig(cacheNames = \u0026#34;product-cache\u0026#34;) public class ProductService { private final ProductRepository repository; @Cacheable public Product getProduct(String id) { return repository.findById(id).orElseThrow(()-\u0026gt; new RuntimeException(\u0026#34;No such product found with id\u0026#34;)); } //.... } @Service @AllArgsConstructor @CacheConfig(cacheNames = \u0026#34;user-cache\u0026#34;) public class UserService { private final UserRepository repository; @Cacheable public User getUser(String id){ return repository.findById(id).orElseThrow(()-\u0026gt; new RuntimeException(\u0026#34;No such user found with id\u0026#34;)); } } Here we have decorated the getProduct() and getUser() methods with @Cacheable annotation to cache their responses. Both the methods will retrieve entities from the database when called for the first time. Subsequent calls to these methods with the same value of parameter id will return the response from the cache instead of the database.\nOne important requirement of the @Cacheable annotation is that the cache name is provided via the @CacheConfig annotation. @CacheConfig is used when we have used multiple Spring Cache annotations in the class and all of them share a common configuration. In our case, the common configuration is the cache name.\nNow, Spring Cloud AWS provides us two ways to connect to ElastiCache:\n Cluster Name Approach Stack Name Approach  Cluster Name Approach Spring Cloud AWS requires clusters of the same name as the cache name to exist in the ElastiCache:\nTechnically, Spring Cloud AWS looks for nodes with the same name but since these are Single Node clusters the name of the node is the same as the cluster name.\nWe also need to define cluster names in the application.yml. 
Spring Cloud AWS will use this to scan the ElastiCache to find the clusters:\ncloud: aws: elasticache: clusters: - name: product-cache expiration: 100 - name: user-cache expiration: 6000 Here, we can provide a list of clusters. Since we have used two caches in our application we have to specify both product-cache and user-cache. We have also provided different Time-To-Live (expiration) in seconds for both caches. In case we want a common expiration time for all the caches we can do so using cloud.aws.elasticache.default-expiration property.\nStack Name Approach If we are using CloudFormation to deploy our application stack in the AWS then one more approach exists for us.\nInstead of giving cluster names, we only need to provide the stack name. Say the stack name is example-stack:\ncloud: aws: stack: name: example-stack Spring Cloud AWS retrieves all the cache clusters from our stack and builds CacheManager with the names of the resources as cache names instead of the actual cluster names. The correct terminology here is the Logical Name which is the name of the resource in the Cloudformation script and Physical Name which is the name of the cache cluster.\nWe need to specify the Logical Name of the cache cluster as cache names in our configuration:\n@CacheConfig(cacheNames = \u0026#34;ProductCache\u0026#34;) public class ProductService { //... } @CacheConfig(cacheNames = \u0026#34;UserCache\u0026#34;) public class UserService { //... } We also need to make sure to add the following dependency when using the stack name approach:\nimplementation \u0026#39;com.amazonaws:aws-java-sdk-cloudformation\u0026#39; Spring Cloud AWS uses this dependency to retrieve the Cloudformation stack details at the time of application startup.\nHow Does Spring Cloud AWS Configure the CacheManager? 
In this section, we will dive a bit deeper into the inner workings of Spring Cloud AWS and see how it autoconfigures the cache for us.\nAs we know that for caching to work in a Spring application we need a CacheManager bean. The job of Spring Cloud AWS is to essentially create that bean for us.\nLet\u0026rsquo;s look at the steps it performs along with the classes involved in building CacheManager:\n When our application starts in the AWS environment, ElastiCacheAutoConfiguration reads cluster names from the application.yml or stack name if cluster configuration is not provided. ElastiCacheAutoConfiguration then passes the Cache Cluster names to ElastiCacheCacheConfigurer object. In the case of Stack configuration it first retrieves all the ElastiCache Cluster details from the Cloudformation stack. Then ElastiCacheCacheConfigurer creates the CacheManager with the help of ElastiCacheFactoryBean class. ElastiCacheFactoryBean scans the ElastiCache in the same availability zone and retrieves the host and port names of the nodes. To allow our service to scan ElastiCache we need to provide AmazonElastiCacheReadOnlyAccess permission to our service and also AWSCloudFormationReadOnlyAccess if we are using the stack name approach. ElastiCacheFactoryBean passes this host and port to RedisCacheFactory which then uses Redis clients such as Lettuce to create the connection object which then actually establishes a connection with the nodes and performs the required operations.  Conclusion While ElastiCache is already making our life easier by managing our Redis Clusters, Spring Cloud AWS further simplifies our lives by simplifying the configurations required for communicating with it.\nIn this article, we saw those configurations and also how to apply them. Hope this was helpful!\nThank you for reading! You can find the working code at GitHub.\nCheck Out the Book!  
This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"June 27, 2021","image":"https://reflectoring.io/images/stock/0071-disk-1200x628-branded_hu2106704273edaf8554081f1ec02d7286_111877_650x0_resize_q90_box.jpg","permalink":"/spring-cloud-aws-redis/","title":"Caching with ElastiCache for Redis and Spring Cloud AWS"},{"categories":["Simplify"],"contents":"As software developers, we\u0026rsquo;re painfully aware of technical debt.\nUsually, we curse our predecessors for taking shortcuts, making wrong decisions, and for just not working professionally in general.\nThat\u0026rsquo;s unfair, of course. Whoever wrote a piece of code that is technical debt in our eyes had to work with the knowledge and in the constraints of the time when they wrote that code. There probably were time constraints and technical constraints (among other constraints) that we have collectively forgotten about today.\nBut still, when we get the chance to start a new codebase or a new module within a codebase, we say to ourselves that we\u0026rsquo;re going to do it better. We\u0026rsquo;re going to build something that our successors will praise us for. We even proudly put our name in the header of the source files!\nBut then the constraints hit us. People start asking us when it\u0026rsquo;s done. The API we\u0026rsquo;re using turns out to suck and we need to build workarounds.\nThat\u0026rsquo;s life. We can\u0026rsquo;t control everything. But that\u0026rsquo;s not a reason to give up on the things that we can control.\nWhen we start something new, we have a responsibility to make things as good as we can, within the constraints that we can\u0026rsquo;t control. 
If we skimp on the things that we can control, our successors are right to accuse us of being unprofessional.\nHere are some things that we can usually control when starting a new project.\nDocument Decisions With every decision, think about whether this decision is an important information for the developer you\u0026rsquo;re going to hand over to at some point. Does it provide context? Does it explain something that would otherwise surprise a new developer? If in doubt, document the decision.\nExplain the high-level architecture There are few things as frustrating as sitting in front of an unfamiliar codebase and not having a clue what the code is about. This is very easily fixed by providing a high-level architecture overview in a README file. A simple boxes and arrows diagram does wonders for understanding! And it doesn\u0026rsquo;t have to be very detailed, so it\u0026rsquo;s easy to keep up-to-date!\nStructure the code into modules When you\u0026rsquo;re starting a fresh codebase, even if it\u0026rsquo;s just a few sourcecode files, yet, split the code into modules or packages. This will make your life easier to understand the code and it will make it so much easier for the next person to understand it. Having clear modules from the start will make it easier to draw a high-level architecture diagram, making it even easier to understand for newcomers.\nLeave a README in the code repository As mentioned above, a README can contain a high-level architecture diagram. But it can also contain instructions how to set up the local developer environment and any tips and tricks on how to deal with the codebase. Open source projects usually do a good job of providing a README because many people without much context are working on the code. Why should a README be exclusive to open source? 
It will help just as much (or even more) in internal codebases where people are working on the code every day.\nPolish the code Once you\u0026rsquo;re done with the initial design of a new codebase, make a polishing pass over it. Now is the time that you still have the context of project and you can make improvements. There will never be a better time to do this. If you don\u0026rsquo;t do this, you will open up the codebase to instant pollution because other developers will add code in the same style as the code that is there already. Every line of code we\u0026rsquo;re creating is legacy code for the next developer. Take the time and review your own code after a day or two and ask yourself what you would want the code to look like if you had to work on it a year from now.\nGood Conscience Doing these things, chances are that your successors will still curse you for the things that you didn\u0026rsquo;t have control over, but at least you can sleep at night, knowing you did everything within your control to make things as good as they can be.\n","date":"June 21, 2021","image":"https://reflectoring.io/images/stock/0103-blank-1200x628-branded_hu43e3e0ef78f4016268b5949be25855f4_108974_650x0_resize_q90_box.jpg","permalink":"/start-clean/","title":"Start Clean!"},{"categories":["Spring Boot"],"contents":"Apache Camel is an integration framework with a programming model for integrating a wide variety of applications.\nIt is also a good fit for microservice architectures where we need to communicate between different microservices and other upstream and downstream systems like databases and messaging systems.\nIn this article, we will look at using Apache Camel for building integration logic in microservice applications built with Spring Boot with the help of code examples.\n Example Code This article is accompanied by a working code example on GitHub. What is Apache Camel As explained at the start, Apache Camel is an integration framework. 
Camel can do:\n Routing: Take a data payload, also called a \u0026ldquo;message\u0026rdquo;, from a source system to a destination system Mediation: Message processing like filtering the message based on one or more message attributes, modifying certain fields of the message, enrichment by making API calls, etc.  Some of the important concepts of Apache Camel used during integration are shown in this diagram:\nLet us get a basic understanding of these concepts before proceeding further.\nCamel Context The Camel context is the runtime container of all the Camel constructs and executes the routing rules. The Camel context activates the routing rules at startup by loading all the resources required for their execution.\nThe Camel context is described by the CamelContext interface and is autoconfigured by default if running in a Spring container.\nRoutes and Endpoints A Route is the most basic construct which we use to define the path a message should take while moving from source to destination. We define routes using a Domain Specific Language (DSL).\nRoutes are loaded in the Camel context and are used to execute the routing logic when the route is triggered. Each route is identified by a unique identifier in the Camel context.\nEndpoints represent the source and destination of a message. They are usually referred to in the Domain Specific Language (DSL) via their URIs. Examples of an endpoint can be a URL of a web application or the source or destination of a messaging system.\nDomain Specific Language (DSL) We define routes in Apache Camel with a variety of Domain Specific Languages (DSLs). The Java DSL and the Spring XML DSL are the two main types of DSLs used in Spring applications.\nHere is an example of a route defined in Java DSL using the RouteBuilder class:\nRouteBuilder builder = new RouteBuilder() { @Override public void configure() throws Exception { // Route definition in Java DSL for  // moving file from jms queue to file system.  
from(\u0026#34;jms:queue:myQueue\u0026#34;).to(\u0026#34;file://mysrc\u0026#34;); } }; Here we have defined a route with a JMS queue as a source and a file endpoint as a destination by using the RouteBuilder class. The RouteBuilder class creates routing rules using the DSL. Instances of RouteBuilder class are added to the Camel context.\nThe same route defined using Spring XML DSL looks like this :\n\u0026lt;beans xmlns=\u0026#34;http://www.springframework.org/schema/beans\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34; http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\u0026#34; \u0026gt; \u0026lt;camelContext id=\u0026#34;sendtoqueue\u0026#34; xmlns=\u0026#34;http://camel.apache.org/schema/spring\u0026#34;\u0026gt; \u0026lt;route\u0026gt; \u0026lt;from uri=\u0026#34;jms:queue:myQueue\u0026#34;/\u0026gt; \u0026lt;to uri=\u0026#34;file://mysrc\u0026#34;/\u0026gt; \u0026lt;/route\u0026gt; \u0026lt;/camelContext\u0026gt; \u0026lt;/beans\u0026gt; Components The transport of a message from the source to the destination goes through multiple steps. Processing in each step might require connecting to different types of resources in the message flow like an invocation of a bean method or calling an API. We use components to perform the function of connecting to these resources.\nFor example, the route defined with the RouteBuilder class in Java DSL uses the file component to bridge to the file system and the jms component to bridge to the JMS provider.\nRouteBuilder builder = new RouteBuilder() { @Override public void configure() throws Exception { // Route definition in Java DSL for  // moving file from jms queue to file system.  
from(\u0026#34;jms:queue:myQueue\u0026#34;).to(\u0026#34;file://mysrc\u0026#34;); } }; Camel has several pre-built components and many others built by communities. Here is a snippet of the components available in Camel which gives us an idea of the wide range of systems we can integrate using the framework:\n ActiveMQ AMQP Async HTTP Client Atom Avro RPC AWS2 DynamoDB AWS2 Lambda AWS2 SQS AWS2 SNS Azure CosmosDB Azure Storage Blob Azure Storage Queue Bean Cassandra CQL Consul CouchDB Cron Direct Docker Elasticsearch Facebook FTP Google Cloud Storage Google Cloud Function GraphQL Google Pubsub gRPC HTTP  These functions are grouped in separate Jar files. Depending on the component we are using, we need to include the corresponding Jar dependency.\nFor our example, we need to include the camel-jms dependency and use the component by referring to the documentation of Camel JMS component.\nWe can also build our own components by implementing the Component interface.\nUsing Apache Camel in Spring Boot Camel support for Spring Boot includes an opinionated auto-configuration of the Camel context and starters for many Camel components. The auto-configuration of the Camel context detects Camel routes available in the Spring context and registers the key Camel utilities (like producer template, consumer template, and the type converter) as Spring beans.\nLet us understand this with the help of an example. 
We will set up a simple route for calling a bean method and invoke that route from a REST endpoint.\nLet us first create a Spring Boot project with the help of the Spring Boot Initializr, and then open the project in our favorite IDE.\nAdding the Dependencies Apache Camel ships a Spring Boot Starter module camel-spring-boot-starter that allows us to use Camel in Spring Boot applications.\nLet us now add the Camel Spring Boot BOM to our Maven pom.xml:\n\u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;!-- Camel BOM --\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel.springboot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-spring-boot-bom\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${project.version}\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;!-- ... other BOMs or dependencies ... 
--\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; The camel-spring-boot-bom contains all the Camel Spring Boot starter JAR files.\nNext, let us add the Camel Spring Boot starter:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-spring-boot-starter\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Adding the camel-spring-boot-starter sets up the Camel Context.\nWe need to further add the starters for the components required by our Spring Boot application :\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel.springboot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-servlet-starter\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel.springboot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-jackson-starter\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel.springboot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-swagger-java-starter\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Here we have added three dependencies with the starters for using the components for servlet, jackson, and swagger which will perform the following functions:\n The servlet component will provide HTTP based endpoints for consuming HTTP requests arriving at an HTTP endpoint bound to a published Servlet. The jackson component will be used for marshaling and unmarshalling between JavaScript Object Notation (JSON) and object representations. The swagger component will expose the REST services and their APIs using Swagger/Open API specification.  Defining a Route with Java DSL\u0026rsquo;s RouteBuilder Let us now create a route for fetching products by using a Spring bean method. 
We create Camel routes by extending the RouteBuilder class and overriding its configure method to define our routing rules in Java Domain Specific Language (DSL).\nEach of the router classes is instantiated once and is registered with the CamelContext object.\nOur class containing the routing rule defined using Java Domain Specific Language (DSL) looks like this:\n@Component public class FetchProductsRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\u0026#34;direct:fetchProducts\u0026#34;) .routeId(\u0026#34;direct-fetchProducts\u0026#34;) .tracing() .log(\u0026#34;\u0026gt;\u0026gt;\u0026gt; ${body}\u0026#34;) .bean(ProductService.class, \u0026#34;fetchProductsByCategory\u0026#34;) .end(); } } Here we are creating the route by defining the Java DSL in a class FetchProductsRoute by extending RouteBuilder class. We defined the endpoint as direct:fetchProducts and provided a route identifier direct-fetchProducts. The prefix direct: in the name of the endpoint makes it possible to call the route from another Camel route using the direct Camel component.\nTriggering a Route with Templates We can invoke the routes with ProducerTemplate and ConsumerTemplate. 
The ProducerTemplate is used as an easy way of sending messages to a Camel endpoint.\nBoth of these templates are similar to the template utility classes in the Spring Framework like JmsTemplate or JdbcTemplate that simplify access to the JMS and JDBC APIs.\nLet us invoke the route we created earlier from a resource class in our application :\n@RestController public class ProductResource { @Autowired private ProducerTemplate producerTemplate; @GetMapping(\u0026#34;/products/{category}\u0026#34;) @ResponseBody public List\u0026lt;Product\u0026gt; getProductsByCategory( @PathVariable(\u0026#34;category\u0026#34;) final String category){ producerTemplate.start(); List\u0026lt;Product\u0026gt; products = producerTemplate .requestBody(\u0026#34;direct:fetchProducts\u0026#34;, category, List.class); producerTemplate.stop(); return products; } } @Configuration public class AppConfig { @Autowired private CamelContext camelContext; ... ... @Bean ProducerTemplate producerTemplate() { return camelContext.createProducerTemplate(); } @Bean ConsumerTemplate consumerTemplate() { return camelContext.createConsumerTemplate(); } } Here we have defined a REST endpoint in our resource class with a GET method for fetching products by category. We are invoking our Camel route inside the method by using the producerTemplate which we configured in our Spring configuration.\nIn our Spring configuration we have defined the producerTemplate and consumerTemplate by calling corresponding methods on the CamelContext which is available in the ApplicationContext.\nDefining a Route with Splitter-Aggregator Enterprise Integration Pattern Let us now look at a route where we will use an Enterprise Integration Pattern.\nCamel provides implementations for many of the Enterprise Integration Patterns from the book by Gregor Hohpe and Bobby Woolf. 
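As an aside, the core idea behind the split-and-combine style of integration patterns can be reduced to a few lines of plain Java. The sketch below is only an analogy — it uses no Camel API, and all names are illustrative: an incoming "message" is split into parts, each part is processed independently, and the results are folded back into a single value.

```java
import java.util.List;

// Plain-Java analogy (not Camel API) of the split-then-combine idea:
// split a message into parts, process each part, then fold the results
// back into a single aggregate. Names here are illustrative only.
public class SplitAggregateSketch {

    // "Split" the incoming message (a list of line prices) into parts,
    // "process" each part (apply a hypothetical 10% discount), then
    // "aggregate" the processed parts by summing them.
    static double processOrder(List<Double> linePrices) {
        return linePrices.stream()
                .map(price -> price * 0.9)   // per-part processing step
                .reduce(0.0, Double::sum);   // aggregation step
    }

    public static void main(String[] args) {
        System.out.println(processOrder(List.of(100.0, 200.0, 50.0))); // 315.0
    }
}
```

In Camel, the framework takes care of the splitting, the parallelism, and the aggregation for us; the routing DSL expresses the same shape declaratively.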
We will use the Splitter and Aggregator integration patterns in our example.\nWe can split a single message into multiple fragments with the Splitter and process them individually. After that, we can use the Aggregator to combine those individual fragments into a single message.\nSelecting the Enterprise Integration Pattern (EIP) Before trying to build our integration logic, we should look for the integration pattern most appropriate for fulfilling our use case.\nLet us see an example of defining a route with the Splitter and Aggregator integration patterns. Here we will consider a hypothetical scenario of building a REST API for an E-Commerce application for processing an order placed by a customer. We will expect our order processing API to perform the following steps:\n Fetch the list of items from the shopping cart Fetch the price of each order line item in the cart Calculate the sum of prices of all order line items to generate the order invoice.  After finishing step 1, we want to fetch the price of each order line item in step 2. We want to fetch the prices in parallel since they are not dependent on each other. There are multiple ways of doing this kind of processing.\nHowever, since design patterns are accepted solutions to recurring problems within a given context, we will search for a pattern closely resembling our problem from our list of Enterprise Integration Patterns. After looking through the list, we find that the Splitter and Aggregator patterns are best suited to do this processing.\nApplying the Enterprise Integration Pattern (EIP) Next, we will refer to Apache Camel\u0026rsquo;s documentation to learn about the usage of the Splitter and Aggregator integration patterns to build our routes.\nLet us apply these patterns by performing the below steps:\n Fetch the order lines from the shopping cart and then split them into individual order line items with the Splitter EIP. For each order line item, fetch the price, apply discounts, etc. 
These steps are running in parallel. Aggregate price from each line item in PriceAggregationStrategy class which implements AggregationStrategy interface.  Our route for using this Enterprise Integration Pattern (EIP) looks like this:\n@Component public class OrderProcessingRoute extends RouteBuilder { @Autowired private PriceAggregationStrategy priceAggregationStrategy; @Override public void configure() throws Exception { from(\u0026#34;direct:fetchProcess\u0026#34;) .split(body(), priceAggregationStrategy).parallelProcessing() .to(\u0026#34;bean:pricingService?method=calculatePrice\u0026#34;) .end(); } } @Component public class PriceAggregationStrategy implements AggregationStrategy{ @Override public Exchange aggregate(Exchange oldExchange, Exchange newExchange) { OrderLine newBody = newExchange.getIn().getBody(OrderLine.class); if (oldExchange == null) { Order order = new Order(); order.setOrderNo(UUID.randomUUID().toString()); order.setOrderDate(Instant.now().toString()); order.setOrderPrice(newBody.getPrice()); order.addOrderLine(newBody); newExchange.getIn().setBody(order, Order.class); return newExchange; } OrderLine newOrderLine = newExchange.getIn() .getBody(OrderLine.class); Order order = oldExchange.getIn().getBody(Order.class); order.setOrderPrice(order.getOrderPrice() + newOrderLine.getPrice()); order.addOrderLine(newOrderLine); oldExchange.getIn().setBody(order); return oldExchange; } } @Service public class PricingService { public OrderLine calculatePrice(final OrderLine orderLine ) { String category = orderLine.getProduct().getProductCategory(); if(\u0026#34;Electronics\u0026#34;.equalsIgnoreCase(category)) orderLine.setPrice(300.0); ... ... return orderLine; } } Here we have defined a route in Java DSL which splits the incoming message (collection of order lines) into individual order line items. 
Each order line item is sent to the calculatePrice method of the PricingService class to compute the price of the items.\nNext, we have tied up an aggregator after the split step. The aggregator implements the AggregationStrategy interface and our aggregation logic is inside the overridden aggregate() method. In the aggregate() method, we take each of the order line items and consolidate them into a single order object.\nConsuming the Route with Splitter Aggregator Pattern from REST Styled DSL Let us next use the REST styled DSL in Apache Camel to define REST APIs with the HTTP verbs like GET, POST, PUT, and, DELETE. The actual REST transport is leveraged by using Camel REST components such as Netty HTTP, Servlet, and others that have native REST integration.\nTo use the Rest DSL in Java, we need to extend the RouteBuilder class and define the routes in the configure method similar to how we created regular Camel routes earlier.\nLet us define a hypothetical REST service for processing orders by using the rest construct in the Java DSL to define the API. 
We will also generate a specification for the API based on the OpenAPI Specification (OAS):\n@Component public class RestApiRoute extends RouteBuilder { @Autowired private Environment env; @Override public void configure() throws Exception { restConfiguration() .contextPath(\u0026#34;/ecommapp\u0026#34;) .apiContextPath(\u0026#34;/api-doc\u0026#34;) .apiProperty(\u0026#34;api.title\u0026#34;, \u0026#34;REST API for processing Order\u0026#34;) .apiProperty(\u0026#34;api.version\u0026#34;, \u0026#34;1.0\u0026#34;) .apiProperty(\u0026#34;cors\u0026#34;, \u0026#34;true\u0026#34;) .apiContextRouteId(\u0026#34;doc-api\u0026#34;) .port(env.getProperty(\u0026#34;server.port\u0026#34;, \u0026#34;8080\u0026#34;)) .bindingMode(RestBindingMode.json); rest(\u0026#34;/order/\u0026#34;) .get(\u0026#34;/process\u0026#34;).description(\u0026#34;Process order\u0026#34;) .route().routeId(\u0026#34;orders-api\u0026#34;) .bean(OrderService.class, \u0026#34;generateOrder\u0026#34;) .to(\u0026#34;direct:fetchProcess\u0026#34;) .endRest(); } This defines a REST service of type GET with URL mappings /order/process.\nWe then route directly to the Camel endpoint of our route named direct:fetchProcess using the Splitter and Aggregator Enterprise Integration pattern that we created earlier using the to construct in the DSL.\nWhen to Use and Not to Use Apache Camel As we saw in our examples, we can easily accomplish the above tasks with custom coding instead of using Apache Camel. Let us understand some of the situations when we should consider using Apache Camel for our integration requirements:\n Apache Camel with a rich set of components will be useful in applications requiring integration with systems over different protocols (like files, APIs, or JMS Queues). Apache Camel\u0026rsquo;s implementation of Enterprise Integration Patterns is useful to fulfill complex integration requirements with tried and tested solutions for recurring integration scenarios. 
Orchestration and choreography in microservices can be defined with a Domain Specific Language in Apache Camel routes. Routes help to keep the core business logic decoupled from the communication logic and satisfy one of the key microservice principles, the single responsibility principle (SRP). Apache Camel works very well with Java and Spring applications. Working with Java Objects (POJOs): Apache Camel is a Java framework, so it is especially good at working with Java objects. So if we are working with a file format like XML or JSON that can be deserialized into a Java object, then it will be handled easily by Camel.  On the contrary, we should avoid using Apache Camel in the following scenarios:\n If we have a simple integration involving calling only a few APIs Camel is not known to perform well for heavy data processing Camel will also not be a good fit for teams lacking Java skills  Generally, the best use cases for Camel are where we have a source of data that we want to consume from (like incoming messages on a queue or data fetched from an API) and a target where we want to send the data to.\nConclusion In this article, we looked at the important concepts of Apache Camel and used it to build integration logic in a Spring Boot application. Here is a summary of the things we covered:\n Apache Camel is an integration framework providing a programming model along with implementations of many Enterprise Integration Patterns. We use different types of Domain Specific Languages (DSL) to define the routing rules of the message. A Route is the most basic construct which we specify with a DSL to define the path a message should take while moving from source to destination. Camel context is the runtime container for executing Camel routes. 
We built a route with the Splitter and Aggregator Enterprise Integration Patterns and invoked it from a REST DSL to demonstrate solving integration problems by applying Enterprise Integration Patterns because patterns are accepted solutions to recurring problems within a given context. Finally we looked at some scenarios where using Apache Camel will benefit us.  I hope this post has given you a good introduction to Apache Camel and we can use Camel with Spring Boot applications. This should help you to get started with building applications using Spring with Apache Camel.\nYou can refer to all the source code used in the article on Github.\n","date":"June 21, 2021","image":"https://reflectoring.io/images/stock/0046-rack-1200x628-branded_hu38983fac43ab7b5246a0712a5f744c11_252723_650x0_resize_q90_box.jpg","permalink":"/spring-camel/","title":"Getting Started with Apache Camel and Spring Boot"},{"categories":["Software Craft"],"contents":"Protecting a web application against various security threats and attacks is vital for the health and security of a website. Cross Site Request Forgery (CSRF) is a type of such attack on websites.\nWith a successful CSRF attack, an attacker can mislead an authenticated user in a website to perform actions with inputs set by the attacker.\nThis can have serious consequences like the loss of user confidence in the website and even fraud or theft of financial resources if the website under attack belongs to any financial realm.\nIn this article, we will understand:\n What constitutes a Cross Site Request Forgery (CSRF) attack How attackers craft a CSRF attack What makes websites vulnerable to a CSRF attack What are some methods to secure websites from CSRF attack   Example Code This article is accompanied by a working code example on GitHub. What is CSRF? New-age websites often need to fetch data from other websites for various purposes. 
For example, the website might call a Google Map API to display a map of the user\u0026rsquo;s current location or render a video from YouTube. These are examples of cross-site requests and can also be a potential target of CSRF attacks.\nCSRF attacks target websites that trust some form of authentication by users before they perform any actions. For example, a user logs into an e-commerce site and makes a payment after purchasing goods. The trust is established when the user is authenticated during login and the payment function in the website uses this trust to identify the user.\nAttackers exploit this trust and send forged requests on behalf of the authenticated user. This illustration shows the making of a CSRF attack:\nAs represented in this diagram, a Cross Site Request Forgery attack is roughly composed of two parts:\n  Cross-Site: The user is logged into a website and is tricked into clicking a link in a different website that belongs to the attacker. The link is crafted by the attacker in a way that it will submit a request to the website the user is logged in to. This represents the \u0026ldquo;cross-site\u0026rdquo; part of CSRF.\n  Request Forgery: The request sent to the user\u0026rsquo;s website is forged with values crafted by the attacker. When the victim user opens the link in the same browser, a forged request is sent to the website with values set by the attacker along with all the cookies that the victim has associated with that website.\n  CSRF is a common form of attack and has figured several times in the OWASP Top ten Web Application Security Risks. Open Web Application Security Project (OWASP) Top Ten represents a broad consensus about the most critical security risks to web applications.\nThe OWASP website defines CSRF as:\n Cross-Site Request Forgery (CSRF) is an attack that forces an end user to execute unwanted actions on a web application in which they’re currently authenticated. 
With a little help from social engineering (such as sending a link via email or chat), an attacker may trick the users of a web application into executing actions of the attacker’s choosing.\n Example of CSRF Attack Let us now understand the anatomy of a CSRF attack with the help of an example:\n Suppose a user logs in to a website www.myfriendlybank.com from a login page. The website is vulnerable to CSRF attacks. The web application for the website authenticates the user and sends back a cookie in the response. The web application populates the cookie with the information that the user is authenticated. As part of a web browser\u0026rsquo;s behavior concerning cookie handling, it will send this cookie to the server for all subsequent interactions. The user next visits a malicious website without logging out of myfriendlybank.com. This malicious site contains a banner that looks like this:  The HTML used to create the banner has the below contents:\n\u0026lt;h1\u0026gt;Congratulations. You just won a bonus of 1 million dollars!!!\u0026lt;/h1\u0026gt; \u0026lt;form action=\u0026#34;http://myfriendlybank.com/account/transfer\u0026#34; method=\u0026#34;post\u0026#34;\u0026gt; \u0026lt;input type=\u0026#34;hidden\u0026#34; name=\u0026#34;TransferAccount\u0026#34; value=\u0026#34;9876865434\u0026#34; /\u0026gt; \u0026lt;input type=\u0026#34;hidden\u0026#34; name=\u0026#34;Amount\u0026#34; value=\u0026#34;1000\u0026#34; /\u0026gt; \u0026lt;input type=\u0026#34;submit\u0026#34; value=\u0026#34;Click here to claim your bonus\u0026#34;/\u0026gt; \u0026lt;/form\u0026gt; We can notice in this HTML that the form action posts to the vulnerable website myfriendlybank.com instead of the malicious website. 
In this example, the attacker sets the request parameters: TransferAccount and Amount to values that are unknown to the actual user.\n The user is enticed to claim the bonus by visiting the malicious website and clicking the submit button.\n  On form submit after the user clicks the submit button, the browser sends the user\u0026rsquo;s authentication cookie to the web application that was received after login to the website in step 2.\n  Since the website is vulnerable to CSRF attacks, the forged request with the user\u0026rsquo;s authentication cookie is processed. Forged requests can be sent for all actions that an authenticated user is allowed to do on the website. In this example, the forged request transfers the amount to the attacker\u0026rsquo;s account.\n  Although this example requires the user to click the submit button, the malicious website could have run JavaScript to submit the form without the user knowing anything about it.\nThis example although very trivial can be extended to scenarios where an attacker can perform additional damaging actions like changing the user\u0026rsquo;s password and registered email address which will block their access completely depending on the user\u0026rsquo;s permissions in the website.\nHow does CSRF work? As explained earlier, a CSRF attack leverages the implicit trust placed in user session cookies by many web applications.\nIn these applications, once the user authenticates, a session cookie is created and all subsequent transactions for that session are authenticated using that cookie including potential actions initiated by an attacker by \u0026ldquo;riding\u0026rdquo; the existing session cookie. Due to this reason, CSRF is also called \u0026ldquo;Session Riding\u0026rdquo;.\nRiding the Session Cookie A CSRF attack exploits the behavior of a type of cookies called session cookies shared between a browser and server. 
HTTP requests are stateless due to which the server cannot distinguish between two requests sent by a browser.\nBut there are many scenarios where we want the server to be able to relate one HTTP request with another. For example, a login request followed by a request to check account balance or transfer funds. The server will only allow these requests if the login request was successful. We call this group of requests as belonging to a session.\nCookies are used to hold this session information. The server packages the session information for a particular client in a cookie and sends it to the client\u0026rsquo;s browser. For each new request, the browser re-identifies itself by sending the cookie (with the session key) back to the server.\nThe attacker hijacks(or rides) this cookie to trick the user into sending requests crafted by the attacker to the server.\nConstructing a CSRF Attack The broad sequence of steps followed by the attacker to construct a CSRF attack include the following:\n Identifying and exploring the vulnerable website for functions of interest that can be exploited Building an Exploit URL Creating an Inducement for the Victim to open the Exploit URL  Let us understand each step in greater detail.\nIdentifying and Exploring the Vulnerable Website Before planning a CSRF attack, the attacker needs to identify pieces of functionality that are of interest for example fund transfers. The attacker also needs to know a valid URL in the website, along with the corresponding patterns of valid requests accepted by the URL.\nThis URL should cause a state-changing action in the target application. Some examples of state-changing actions are:\n Update account balance Create a customer record Transfer money  In contrast to state-changing actions, an inquiry does not change any state in the server. 
For example, view user profile, view account balance, etc which do not update anything in the server.\nThe attacker also needs to find the right values for the URL parameters. Otherwise, the target application might reject the forged request.\nSome common techniques used to explore the vulnerable website are:\n View HTML Source: Check the HTML source of web pages to identify links or buttons that contain functions of interest. Web Application Debugging Tools: Analyze the information exchanged between the client and the server using web application debugging tools such as WebScarab, and Tamper Dev. Network Sniffing Tools: Analyze the information exchanged between the client and the server with a network sniffing tool such as Wireshark.  For example, let us assume that the attacker has identified a website at https://myfriendlybank.com to try a CSRF attack. The attacker explored this website using the above techniques and found a URL https://myfriendlybank.com/account/transfer with CSRF vulnerabilities which is used to transfer funds.\nBuilding an Exploit URL The attacker will next try to build an exploit URL for sharing with the victim. Let us assume that the transfer function in the application is built using a GET method to submit a transfer request. Accordingly, a legitimate request to transfer 100 USD to another account with account number 1234567 will look like this:\nGET https://myfriendlybank.com/account/transfer?amount=100\u0026amp;accountNumber=1234567\nThe attacker will create an exploit URL to transfer 15,000 USD to another dubious account with account number 4567876 probably belonging to the attacker:\nhttps://myfriendlybank.com/account/transfer?amount=15000\u0026amp;accountNumber=4567876\nIf the victim clicks this exploit URL, 15,000 USD will get transferred to the attacker\u0026rsquo;s account.\nCreating an Inducement for the Victim to Click the Exploit URL After creating the exploit URL, the attacker must also trick the victim user into clicking it. 
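The mechanics of assembling such an exploit URL can be made concrete with a small hedged Java sketch. The endpoint and parameter names are taken from the hypothetical myfriendlybank.com example above; nothing here is a real API:

```java
// Hedged sketch: how an attacker might assemble the forged GET URL from the
// example above. The endpoint and parameter names come from the article's
// hypothetical bank; nothing here is a real API.
public class ExploitUrlSketch {

    static String buildExploitUrl(String base, long amount, String accountNumber) {
        // A state-changing GET is trivially forgeable: the whole request
        // fits in a URL that can be hidden in a link or an <img> src.
        return base + "/account/transfer?amount=" + amount
                + "&accountNumber=" + accountNumber;
    }

    public static void main(String[] args) {
        System.out.println(
            buildExploitUrl("https://myfriendlybank.com", 15000, "4567876"));
    }
}
```

Because the transfer is exposed as a state-changing GET, the entire forged request fits in a single URL, which is exactly what makes it easy to hide behind a link or an image. Note that using POST alone does not prevent CSRF, but state-changing GETs make the attacker's job trivially easy.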
For this, the attacker creates an inducement and uses any social engineering attack methods to trick the victim user into clicking the malicious URL. Some examples of these methods are:\n including the exploit URL in HTML image elements placing the exploit URL on pages that are often accessed by the victim user while being logged into the application sending the exploit URL through email.  The following is an example of an image with an exploit URL:\n\u0026lt;img src=“http://myfriendlybank.com/account/transfer?amount=5000\u0026amp;accountNumber=425654” width=“0” height=“0”\u0026gt;\nThis scenario includes an image tag with zero dimensions embedded in an attacker-crafted email sent to the victim user. Upon receiving and opening the email, the victim user\u0026rsquo;s browser will load the HTML containing the HTML image.\nThe IMG tag of the image will make a GET request to the link in its src attribute. Since browsers send the cookies by default with requests, the request is authenticated, even though it is sent from a different origin than the bank’s website.\nAs a result, without the victim user\u0026rsquo;s permission, a forged request crafted by the attacker is sent to the web application at myfriendlybank.com.\nIf the victim user has an active session opened with myfriendlybank.com, the application would treat this as an authorized account transfer request coming from the victim user. It would then transfer an amount of 5000 to the account 425654 specified by an attacker.\nPreventing CSRF attacks To prevent CSRF attacks, web applications need to build mechanisms to distinguish a legitimate request from a trusted user of a website from a forged request crafted by an attacker but sent by the trusted user.\nAll the solutions to build defenses against CSRF attacks are built around this principle of sending something in the request that the forged request is unable to provide. 
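At its core, this principle boils down to: issue an unguessable secret to the legitimate client and verify that each state-changing request carries it back. Here is a minimal, hedged Java sketch of that check; the class and method names are illustrative, not from any particular framework:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Hedged sketch of the core principle: the server issues an unguessable
// secret and later checks that the request carries it back. Class and
// method names are illustrative, not from any specific framework.
public class CsrfTokenSketch {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Issue a random token (which could be stored in a session or a cookie).
    static String issueToken() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Validate the token echoed back in the request against the issued one,
    // using a constant-time comparison to avoid timing side channels.
    static boolean isValid(String issued, String received) {
        if (issued == null || received == null) {
            return false;
        }
        return MessageDigest.isEqual(
                issued.getBytes(StandardCharsets.UTF_8),
                received.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String token = issueToken();
        System.out.println(isValid(token, token));    // true: legitimate request
        System.out.println(isValid(token, "forged")); // false: forged request
    }
}
```

Real implementations differ in where the issued token is stored, how it travels back, and how its lifetime is bounded, which is what distinguishes the concrete defense patterns from one another.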
Let us look at a few of those.\nIdentifying Legitimate Requests with Anti-CSRF Token An anti-CSRF token is a type of server-side CSRF protection. It is a random string shared between the user’s browser and the web application. The anti-CSRF token is usually stored in a session variable or data store. On an HTML page, it is typically sent in a hidden field or HTTP request header that is sent with the request.\nAn attacker creating a forged request will not have any knowledge about the anti-CSRF token. So the web application will reject the requests which do not have a matching value of the anti-CSRF token which it had shared with the browser.\nThere are two common implementation techniques of Anti-CSRF Tokens known as :\n Synchronizer Token Pattern where the web application is stateful and stores the token Double Submit Cookie where the web application is stateless  Synchronizer Token Pattern A random token is generated by the web application and sent to the browser. The token can be generated once per user session or for each request. Per-request tokens are more secure than per-session tokens as the time range for an attacker to exploit the stolen tokens is minimal.\nAs we can see in this sequence diagram, when the input form is requested, it is initialized with a random token generated by the web application. The web application stores the generated token either in a data store or in-memory in an HTTP session.\nWhen the input form is submitted, the token is sent as a request parameter. On receiving the request, the web application matches the token received as a request parameter with the token stored in the token store. The request is processed only if the two values match.\nDouble Submit Cookie Pattern When using the Double Submit Cookie Pattern the token is not stored by the web application. Instead, the web application sets the token in a cookie. 
The browser should be able to read the token from the cookie and send it as a request parameter in subsequent requests.\nIn this sequence diagram, when the input form is requested, the web application generates a random token and sets it in a cookie. The browser reads the token from the cookie and sends it as a request parameter when submitting the form.\nOn receiving the request, the web application verifies if the cookie value and the value sent as a request parameter match. If both the values match, the web application accepts it as a legitimate request and processes the request.\nThis cookie must be stored separately from the cookie used as a session identifier.\nUsing the SameSite Flag in Cookies The SameSite flag in cookies is a relatively new method of preventing CSRF attacks and improving web application security. In an earlier example, we saw that the website controlled by the attacker could send a request to https://myfriendlybank.com/ together with a session cookie. This session cookie is unique for every user, so the web application uses it to distinguish between users and determine if they are logged in.\nIf the session cookie is marked as a SameSite cookie, it is only sent along with requests that originate from the same domain. Therefore, when http://myfriendlybank.com wants to make a POST request to http://myfriendlybank.com/transfer it is allowed.\nHowever, the website controlled by the attacker with a domain like http://malicious.com/ cannot send HTTP requests to http://myfriendlybank.com/transfer. This is because the request originates from a different domain, so the browser does not send the SameSite session cookie with it.\nDefenses against CSRF As users, we can defend ourselves from falling victim to a CSRF attack by cultivating two simple web browsing habits:\n We should log off from a website after using it. This will invalidate the session cookies that the attacker needs to execute the forged request in the exploit URL. 
We should use different browsers, for example, one browser for accessing sensitive sites and another browser for random surfing. This will prevent the session cookies set in sensitive sites from being used for CSRF attacks launched from a page opened from a different browser.  As developers, we can use the following best practices other than the anti-CSRF token described earlier:\n Configure a lower session timeout value and invalidate the session after a period of inactivity. Log off the user after a period of inactivity and invalidate the session cookie. Seek confirmation from the user before processing any state-changing action with a confirmation dialog or a captcha. Make it difficult for an attacker to know the structure of the URLs to attack  Example of CSRF Protection in Node.js Application This is an example of implementing CSRF protection in a web application written in Node.js using the express framework. We have used an npm library csurf which provides the middleware for CSRF token creation and validation:\nconst express = require(\u0026#39;express\u0026#39;); const csrf = require(\u0026#39;csurf\u0026#39;); const cookieParser = require(\u0026#39;cookie-parser\u0026#39;); // Implement the double submit cookie pattern // and store the token secret in a cookie var csrfProtection = csrf({ cookie: true }); var parseForm = express.urlencoded({ extended: false }); var app = express(); app.set(\u0026#39;view engine\u0026#39;,\u0026#39;ejs\u0026#39;) app.use(cookieParser()); // render the input form app.get(\u0026#39;/transfer\u0026#39;, csrfProtection, function (req, res) { // pass the csrfToken to the view res.render(\u0026#39;transfer\u0026#39;, { csrfToken: req.csrfToken() }); }); // post the form to this URL app.post(\u0026#39;/process\u0026#39;, parseForm, csrfProtection, function (req, res) { res.send(\u0026#39;Transfer Successful!!\u0026#39;); }); app.listen(3000, (err) =\u0026gt; { if (err) console.log(err); console.log(\u0026#39;Server listening on
3000\u0026#39;); } ); In this code block, we initialize the csrf library by setting the value of cookie to true. This means that the random token for the user will be stored in a cookie instead of the HTTP session. Storing the random token in a cookie implements the double submit cookie pattern explained earlier.\nThe below HTML page is rendered with the GET request. The random token is generated in this step:\n\u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;CSRF Token Demo\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;form action=\u0026#34;process\u0026#34; method=\u0026#34;POST\u0026#34;\u0026gt; \u0026lt;input type=\u0026#34;hidden\u0026#34; name=\u0026#34;_csrf\u0026#34; value=\u0026#34;\u0026lt;%= csrfToken %\u0026gt;\u0026#34;\u0026gt; \u0026lt;div\u0026gt; \u0026lt;label\u0026gt;Amount:\u0026lt;/label\u0026gt;\u0026lt;input type=\u0026#34;text\u0026#34; name=\u0026#34;amount\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;br/\u0026gt; \u0026lt;div\u0026gt; \u0026lt;label\u0026gt;Transfer To:\u0026lt;/label\u0026gt;\u0026lt;input type=\u0026#34;text\u0026#34; name=\u0026#34;account\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;br/\u0026gt; \u0026lt;div\u0026gt; \u0026lt;input type=\u0026#34;submit\u0026#34; value=\u0026#34;Transfer\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/form\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; We can see in this HTML snippet, that the random token is set in a hidden field named _csrf.\nAfter we set up and run the application, we can test a valid request by loading the HTML form with URL http://localhost:3000/transfer :\nThe form is loaded with the csrf token set in a hidden field. When we submit the form after providing the values of the amount and account the request is sent with the csrf token and is processed successfully.\nNext, we can try to send a request from postman tool to simulate a forged request in a CSRF attack. 
The results are shown in this screenshot:\nSince our code is protected with a CSRF token, the request is denied by the web application with an error: ForbiddenError: invalid csrf token.\nIf we are using Ajax with JSON requests, then it is not possible to submit the CSRF token within an HTTP request parameter. In this situation, we include the token within an HTTP request header.\nCSRF protection libraries similar to csurf are available in other languages. We should prefer to use a vetted library or framework instead of building our own for CSRF prevention. Some other examples are CSRFGuard and Spring Security.\nConclusion CSRF attacks comprise a significant share of web-based attacks. It is crucial to be aware of the vulnerabilities that could make our website a potential target for CSRF attacks and prevent these attacks by building proper CSRF defenses in our application.\nHere is a list of important points from the article for quick reference:\n A CSRF attack leverages the implicit trust placed in user session cookies by many web applications. To prevent CSRF attacks, web applications need to build mechanisms to distinguish a legitimate request made by a trusted user of a website from a forged request crafted by an attacker but sent by that same user. An anti-CSRF token is a random string shared between the user’s browser and the web application and is a common type of server-side CSRF protection.
There are two common implementation techniques of anti-CSRF tokens:   Synchronizer Token Pattern Double Submit Cookie  You can refer to all the source code used in the article on GitHub.\n","date":"June 14, 2021","image":"https://reflectoring.io/images/stock/0074-stack-1200x628-branded_hu068f2b0d815bda96ddb686d2b65ba146_143922_650x0_resize_q90_box.jpg","permalink":"/complete-guide-to-csrf/","title":"Complete Guide to CSRF"},{"categories":["Spring Boot","AWS"],"contents":"AWS DynamoDB is a NoSQL database service available in AWS Cloud.\nDynamoDB provides many benefits such as a flexible pricing model, stateless connections, and consistent response times irrespective of the database size.\nFor this reason, DynamoDB is widely used as a database with serverless compute services like AWS Lambda and in microservice architectures.\nIn this tutorial, we will look at using the DynamoDB database in microservice applications built with Spring Boot along with code examples.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n  Example Code This article is accompanied by a working code example on GitHub. AWS DynamoDB Concepts Amazon DynamoDB is a key-value database. A key-value database stores data as a collection of key-value pairs. Both the keys and the values can be simple or complex objects.\nThere is plenty to know about DynamoDB to build a good understanding, for which we should refer to the official documentation.\nHere we will only skim through the main concepts that are essential for designing our applications.\nTables, Items and Attributes Like in many databases, a table is the fundamental concept in DynamoDB where we store our data.
DynamoDB tables are schemaless. Other than the primary key, we do not need to define any additional attributes when creating a table.\nThis diagram shows the organization of order records placed by a customer in an Order table. Each order is uniquely identified by a combination of customerID and orderID.\nA table contains one or more items. An item is composed of attributes, which are different elements of data for a particular item. They are similar to columns in a relational database.\nEach item has its own attributes. Most attributes are scalar, like strings and numbers, while some are of nested types like lists, maps, or sets. In our example, each order item has OrderValue and OrderDate as scalar attributes and a products list as a nested type attribute.\nUniquely Identifying Items in a Table with Primary Key The primary key is used to uniquely identify each item in an Amazon DynamoDB table. A primary key can be of two types:\n  Simple Primary Key: This is composed of one attribute called the Partition Key. If we wanted to store a customer record, then we could use customerID or email as a partition key to uniquely identify the customer in the DynamoDB table.\n  Composite Primary Key: This is composed of two attributes - a partition key and a sort key. In our example above, each order is uniquely identified by a composite primary key with customerID as the partition key and orderID as the sort key.\n  Data Distribution Across Partitions A partition is a unit of storage for a table where the data is stored by DynamoDB.\nWhen we write an item to the table, DynamoDB uses the value of the partition key as input to an internal hash function. The output of the hash function determines the partition in which the item will be stored.\nWhen we read an item from the table, we must specify the partition key value for the item.
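This placement-by-hash idea can be sketched with a toy model. This is purely illustrative — DynamoDB's real hash function and partition management are internal and not exposed; the class and method names below are invented for the sketch:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of partition-key based placement, NOT DynamoDB's real algorithm.
class ToyPartitionedStore {
    private final List<Map<String, String>> partitions = new ArrayList<>();

    ToyPartitionedStore(int numPartitions) {
        for (int i = 0; i < numPartitions; i++) {
            partitions.add(new HashMap<>());
        }
    }

    // The same hash is applied on both writes and reads, so a given
    // partition key always maps to the same partition.
    private int partitionFor(String partitionKey) {
        return Math.floorMod(partitionKey.hashCode(), partitions.size());
    }

    void put(String partitionKey, String item) {
        partitions.get(partitionFor(partitionKey)).put(partitionKey, item);
    }

    String get(String partitionKey) {
        return partitions.get(partitionFor(partitionKey)).get(partitionKey);
    }
}
```

Because the hash is deterministic, a read with the same partition key lands on exactly the partition that stored the item, without scanning the other partitions.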
DynamoDB uses this value as input to its hash function, to locate the partition in which the item can be found.\nQuerying with Secondary Indexes We can use a secondary index to query the data in the table using an alternate key, in addition to queries against the primary key. Secondary Indexes are of two types:\n Global Secondary Index (GSI): An index with a partition key and sort key that are different from the partition key and sort key of the table. Local Secondary Index (LSI): An index that has the same partition key as the table, but a different sort key.  Writing Applications with DynamoDB DynamoDB is a web service, and interactions with it are stateless. So we can interact with DynamoDB via REST API calls over HTTP(S). Unlike connection protocols like JDBC, applications do not need to maintain persistent network connections.\nWe usually do not work with the DynamoDB APIs directly. AWS provides an SDK in different programming languages, which we integrate with our applications for performing database operations.\nWe will describe two ways for accessing DynamoDB from Spring applications:\n Using the DynamoDB module of Spring Data Using the Enhanced Client for DynamoDB, which is part of AWS SDK 2.0.  Both these methods roughly follow the same steps as any Object Relational Mapping (ORM) framework:\n  We define a data class for our domain objects like customer, product, order, etc. and then define the mapping of this data class with the table residing in the database.
The mapping is defined by putting annotations on the fields of the data class to specify the keys and attributes.\n  We define a repository class to define the CRUD methods using the mapping object created in the previous step.\n  Let us see some examples of creating applications using these two methods in the following sections.\nAccessing DynamoDB with Spring Data The primary goal of the Spring® Data project is to make it easier to build Spring-powered applications by providing a consistent framework to use different data access technologies. Spring Data is an umbrella project composed of many different sub-projects, each corresponding to a specific database technology.\nThe Spring Data module for DynamoDB is a community module for accessing AWS DynamoDB with familiar Spring Data constructs of data objects and repository interfaces.\nInitial Setup Let us first create a Spring Boot project with the help of the Spring Boot Initializr, and then open the project in our favorite IDE.\nFor configuring Spring Data, let us add a separate Spring Data release train BOM in our pom.xml file using this dependencyManagement block:\n\u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.data\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-data-releasetrain\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;Lovelace-SR1\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; To add support for Spring Data, we need to include the module dependency for Spring Data DynamoDB into our Maven configuration.
We do this by adding the module spring-data-dynamodb in our pom.xml:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.github.derjust\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-data-dynamodb\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.1.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Creating the Configuration Next, let us establish connectivity with AWS by initializing a bean with our AWS credentials in our Spring configuration:\n@Configuration @EnableDynamoDBRepositories(basePackages = \u0026#34;io.pratik.dynamodbspring.repositories\u0026#34;) public class AppConfig { @Bean public AmazonDynamoDB amazonDynamoDB() { AWSCredentialsProvider credentials = new ProfileCredentialsProvider(\u0026#34;pratikpoc\u0026#34;); AmazonDynamoDB amazonDynamoDB = AmazonDynamoDBClientBuilder .standard() .withCredentials(credentials) .build(); return amazonDynamoDB; } } Here we are creating a bean amazonDynamoDB and initializing it with the credentials from a named profile.\nCreating the Mapping with DynamoDB Table in a Data Class Let us now create a DynamoDB table which we will use to store customer records from our application:\nWe are using the AWS console to create a table named Customer with CustomerID as the partition key.\nWe will next create a class to represent the Customer DynamoDB table which will contain the mapping with the keys and attributes of an item stored in the table:\n@DynamoDBTable(tableName = \u0026#34;Customer\u0026#34;) public class Customer { private String customerID; private String name; private String email; // Partition key  @DynamoDBHashKey(attributeName = \u0026#34;CustomerID\u0026#34;) public String getCustomerID() { return customerID; } public void setCustomerID(String customerID) { this.customerID = customerID; } @DynamoDBAttribute(attributeName = \u0026#34;Name\u0026#34;) public String getName() { return name; } public void setName(String name) { this.name = name; } @DynamoDBAttribute(attributeName
= \u0026#34;Email\u0026#34;) public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } } We have defined the mappings with the table by decorating the class with the @DynamoDBTable annotation and passing in the table name. We have used the @DynamoDBHashKey annotation on the getter method of the customerID field.\nFor mapping the remaining attributes, we have decorated the getter methods of the remaining fields with the @DynamoDBAttribute annotation, passing in the name of the attribute.\nDefining the Repository Interface We will next define a repository interface by extending CrudRepository typed to the domain or data class and an ID type for the type of primary key. By extending the CrudRepository interface, we inherit ready-to-use query methods like findAll(), findById(), save(), etc.\n@EnableScan public interface CustomerRepository extends CrudRepository\u0026lt;Customer, String\u0026gt; { } @Service public class CustomerService { @Autowired private CustomerRepository customerRepository; public void createCustomer(final Customer customer) { customerRepository.save(customer); } } Here we have created a repository interface CustomerRepository, injected it into a service class CustomerService, and defined a method createCustomer() for creating a customer record in the DynamoDB table.\nWe will invoke this method from a JUnit test:\n@SpringBootTest class CustomerServiceTest { @Autowired private CustomerService customerService; ... ...
@Test void testCreateCustomer() { Customer customer = new Customer(); customer.setCustomerID(\u0026#34;CUST-001\u0026#34;); customer.setName(\u0026#34;John Lennon\u0026#34;); customer.setEmail(\u0026#34;john.lennon@lenno.com\u0026#34;); customerService.createCustomer(customer); } } In our test, we are calling the createCustomer() method in our service class to create a customer record in the table.\nUsing the DynamoDB Enhanced Client If we do not want to use Spring Data in our application, we can choose to access DynamoDB with the Enhanced DynamoDB Client module of the AWS SDK for Java 2.0.\nThe Enhanced DynamoDB Client module provides a higher-level API to execute database operations directly with the data classes in our application.\nWe will follow similar steps as our previous example using Spring Data.\nInitial Setup Let us create one more Spring Boot project with the help of the Spring Boot Initializr. We will access DynamoDB using the Enhanced DynamoDB Client in this application.\nFirst, let us include the DynamoDB Enhanced Client module in our application:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;software.amazon.awssdk\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;dynamodb-enhanced\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.16.74\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Here we are adding the dynamodb-enhanced module as a Maven dependency in our pom.xml.\nCreating the Configuration We will next initialize the dynamodbEnhancedClient in our Spring configuration:\n@Configuration public class AppConfig { @Bean public DynamoDbClient getDynamoDbClient() { AwsCredentialsProvider credentialsProvider = DefaultCredentialsProvider.builder() .profileName(\u0026#34;pratikpoc\u0026#34;) .build(); return DynamoDbClient.builder() .region(Region.US_EAST_1) .credentialsProvider(credentialsProvider).build(); } @Bean public DynamoDbEnhancedClient getDynamoDbEnhancedClient() { return DynamoDbEnhancedClient.builder()
.dynamoDbClient(getDynamoDbClient()) .build(); } } Here we are creating a bean dynamodbClient with our AWS credentials and using this to create a bean for DynamoDbEnhancedClient.\nCreating the Mapping Class Let us now create one more DynamoDB table to store the orders placed by a customer. This time we will define a composite primary key for the Order table:\nAs we can see here, we are using the AWS console to create a table named Order with a composite primary key composed of CustomerID as the partition key and OrderID as the sort key.\nWe will next create an Order class to represent the items in the Order table:\n@DynamoDbBean public class Order { private String customerID; private String orderID; private double orderValue; private Instant createdDate; @DynamoDbPartitionKey @DynamoDbAttribute(\u0026#34;CustomerID\u0026#34;) public String getCustomerID() { return customerID; } public void setCustomerID(String customerID) { this.customerID = customerID; } @DynamoDbSortKey @DynamoDbAttribute(\u0026#34;OrderID\u0026#34;) public String getOrderID() { return orderID; } public void setOrderID(String orderID) { this.orderID = orderID; } ... ... } Here we are decorating the Order data class with the @DynamoDbBean annotation to designate the class as a DynamoDB bean.
We have also added the @DynamoDbPartitionKey annotation on the getter for the partition key and the @DynamoDbSortKey annotation on the getter for the sort key of the record.\nCreating the Repository Class In the last step, we will inject this DynamoDbEnhancedClient into a repository class and use the data class created earlier for performing different database operations:\n@Repository public class OrderRepository { @Autowired private DynamoDbEnhancedClient dynamoDbenhancedClient; // Store the order item in the database  public void save(final Order order) { DynamoDbTable\u0026lt;Order\u0026gt; orderTable = getTable(); orderTable.putItem(order); } // Retrieve a single order item from the database  public Order getOrder(final String customerID, final String orderID) { DynamoDbTable\u0026lt;Order\u0026gt; orderTable = getTable(); // Construct the key with partition and sort key  Key key = Key.builder().partitionValue(customerID) .sortValue(orderID) .build(); Order order = orderTable.getItem(key); return order; } private DynamoDbTable\u0026lt;Order\u0026gt; getTable() { // Create a TableSchema to scan our bean class Order  DynamoDbTable\u0026lt;Order\u0026gt; orderTable = dynamoDbenhancedClient.table(\u0026#34;Order\u0026#34;, TableSchema.fromBean(Order.class)); return orderTable; } } Here we are constructing a TableSchema by calling TableSchema.fromBean(Order.class) to scan our bean class Order.
This will use the annotations in the Order class defined earlier to determine which attributes are the partition and sort keys.\nWe are then associating this TableSchema with our actual table name Order to create an instance of DynamoDbTable, which represents the mapped table resource.\nWe are using this mapped resource to save the order item in the save() method by calling the putItem() method, and to fetch the item by calling the getItem() method.\nWe can similarly perform all other table-level operations on this mapped resource as shown here:\n@Repository public class OrderRepository { @Autowired private DynamoDbEnhancedClient dynamoDbenhancedClient; ... ... public void deleteOrder(final String customerID, final String orderID) { DynamoDbTable\u0026lt;Order\u0026gt; orderTable = getTable(); Key key = Key.builder() .partitionValue(customerID) .sortValue(orderID) .build(); DeleteItemEnhancedRequest deleteRequest = DeleteItemEnhancedRequest .builder() .key(key) .build(); orderTable.deleteItem(deleteRequest); } public PageIterable\u0026lt;Order\u0026gt; scanOrders(final String customerID, final String orderID) { DynamoDbTable\u0026lt;Order\u0026gt; orderTable = getTable(); return orderTable.scan(); } public PageIterable\u0026lt;Order\u0026gt; findOrdersByValue(final String customerID, final double orderValue) { DynamoDbTable\u0026lt;Order\u0026gt; orderTable = getTable(); AttributeValue attributeValue = AttributeValue.builder() .n(String.valueOf(orderValue)) .build(); Map\u0026lt;String, AttributeValue\u0026gt; expressionValues = new HashMap\u0026lt;\u0026gt;(); expressionValues.put(\u0026#34;:value\u0026#34;, attributeValue); Expression expression = Expression.builder() .expression(\u0026#34;orderValue \u0026gt; :value\u0026#34;) .expressionValues(expressionValues) .build(); // Create a QueryConditional object that is used in  // the query operation  QueryConditional queryConditional = QueryConditional .keyEqualTo(Key.builder().partitionValue(customerID)
.build()); // Query items in the Order table matching the filter expression  PageIterable\u0026lt;Order\u0026gt; results = orderTable .query(r -\u0026gt; r.queryConditional(queryConditional) .filterExpression(expression)); return results; } } In this snippet, we are calling the delete, scan, and query methods on the mapped object orderTable.\nHandling Nested Types We can handle nested types by adding the @DynamoDbBean annotation to the class being nested as shown in this example:\n@DynamoDbBean public class Order { private String customerID; private String orderID; private double orderValue; private Instant createdDate; private List\u0026lt;Product\u0026gt; products; .. .. } @DynamoDbBean public class Product { private String name; private String brand; private double price; ... ... } Here we have added a nested collection of Product objects to the Order class and annotated the Product class with the @DynamoDbBean annotation.\nA Quick Note on Source Code Organization The source code of the example project is organized as a multi-module Maven project into two separate Maven projects under a common parent project.
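In such a layout, the common parent project's pom.xml aggregates the two applications with a modules section along these lines (the module names here are illustrative, not taken from the example repository):

```xml
<!-- Parent pom.xml: aggregates the two example applications. -->
<!-- Module names below are assumptions for illustration. -->
<modules>
  <module>spring-data-dynamodb-app</module>
  <module>dynamodb-enhanced-client-app</module>
</modules>
```

Running a Maven build on the parent then builds both applications in one go.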
We have used the Spring Boot Initializr to generate these projects, which are generated with this parent tag in pom.xml:\n\u0026lt;parent\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-parent\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.4.5\u0026lt;/version\u0026gt; \u0026lt;relativePath /\u0026gt; \u0026lt;!-- lookup parent from repository --\u0026gt; \u0026lt;/parent\u0026gt; We have changed this to point to the common parent project:\n\u0026lt;parent\u0026gt; \u0026lt;groupId\u0026gt;io.pratik\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;dynamodbapp\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.0.1-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;/parent\u0026gt; The Spring Boot dependency is added under the dependencyManagement:\n\u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-dependencies\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.4.0\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; Conclusion In this article, we looked at the important concepts of AWS DynamoDB and performed database operations from two applications written in Spring Boot, first with Spring Data and then using the Enhanced DynamoDB Client. Here is a summary of the things we covered:\n AWS DynamoDB is a NoSQL key-value data store and helps us to store flexible data models. We store our data in a table in AWS DynamoDB. A table is composed of items, and each item has a primary key and a set of attributes. A DynamoDB table must have a primary key, which can be composed of a partition key and optionally a sort key.
We create a secondary index to search DynamoDB on fields other than the primary key. We accessed DynamoDB with the Spring Data module and then with the Enhanced DynamoDB Client module of the AWS Java SDK.  I hope this will help you to get started with building applications using Spring with AWS DynamoDB as the database.\nYou can refer to all the source code used in the article on GitHub.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"June 13, 2021","image":"https://reflectoring.io/images/stock/0102-dynamo-1200x628-branded_hud04009b440b9fc428d0b8c51a22e3714_119399_650x0_resize_q90_box.jpg","permalink":"/spring-dynamodb/","title":"Working with AWS DynamoDB and Spring"},{"categories":["Spring Boot"],"contents":"A unit test is used to verify the smallest part of an application (a \u0026ldquo;unit\u0026rdquo;) independent of other parts. This makes the verification process easy and fast since the scope of the testing is narrowed down to a class or method.\nThe @TestConfiguration annotation is a useful aid for writing unit tests of components in a Spring Boot application. It allows us to define additional beans or override existing beans in the Spring application context to add specialized configurations for testing.\nIn this article, we will see the use of the @TestConfiguration annotation for writing unit tests for a Spring Boot application.\n Example Code This article is accompanied by a working code example on GitHub. Introducing the @TestConfiguration Annotation We use @TestConfiguration to modify Spring\u0026rsquo;s application context during test runtime.
We can use it to override certain bean definitions, for example to replace real beans with fake beans or to change the configuration of a bean to make it more testable.\nWe can best understand the @TestConfiguration annotation by first looking at the @Configuration annotation, which is the parent annotation it inherits from.\nBefore that, let us create a Spring Boot project with the help of the Spring Boot Initializr, and then open the project in our favorite IDE.\nWe have added a dependency on Spring WebFlux in this project since we will work on configuring a WebClient bean in different ways in the test environment for accessing REST APIs. WebClient is a non-blocking, reactive client to perform HTTP requests.\nWe will use this project to create our service class and bean configurations and then write tests using the @TestConfiguration annotation.\nConfiguring a Test with @Configuration Let us look at the structure of a unit test in Spring Boot where we define the beans in a configuration class annotated with the @Configuration annotation:\n@Configuration public class WebClientConfiguration { ... @Bean public WebClient getWebClient(final WebClient.Builder builder, @Value(\u0026#34;${data.service.endpoint}\u0026#34;) String url) { WebClient webClient = builder.baseUrl(url) .defaultHeader( HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE) // more configurations and customizations  ... .build(); LOGGER.info(\u0026#34;WebClient Bean Instance: {}\u0026#34;, webClient); return webClient; } } In this code snippet, we configure the WebClient bean to run requests against an external URL. We will next define a service class where we will inject this WebClient bean to call a REST API:\n@Service public class DataService { ...
private final WebClient webClient; public DataService(final WebClient webClient) { this.webClient = webClient; LOGGER.info(\u0026#34;WebClient instance {}\u0026#34;, this.webClient); } } In this code snippet, the WebClient bean is injected into the DataService class. During testing, a WebClient instance configured to use a different URL will be injected rather than the actual WebClient bean.\nWe will now create our test class and annotate it with @SpringBootTest. This results in bootstrapping the full application context containing the beans selected by component scanning. Due to this, we can inject any bean from the application context by autowiring the bean into our test class:\n@SpringBootTest @TestPropertySource(locations=\u0026#34;classpath:test.properties\u0026#34;) class TestConfigurationExampleAppTests { @Autowired private DataService dataService; ... } In this code snippet, the DataService bean injected in the test class uses the WebClient bean configured with an external URL, which is defined in the property data.service.endpoint located in the properties file test.properties. This makes our unit test dependent on an external system, because the WebClient is accessing a remote URL. This might fail if we run our test as part of an automated test suite or in any other environment with restricted connectivity.\nConfiguring a Test with @TestConfiguration To make our unit tests run without any dependency on an external system, we may want to use a modified test configuration that will connect to a locally running mock service instead of bootstrapping the actual application context.\nWe do this by using the @TestConfiguration annotation over our configuration class being used for the test. This test configuration class can be an inner class within a test class or a separate class as shown here:\n@TestConfiguration public class WebClientTestConfiguration { ...
@Bean public WebClient getWebClient(final WebClient.Builder builder) { //customized for running unit tests  WebClient webClient = builder .baseUrl(\u0026#34;http://localhost\u0026#34;) // \u0026lt;-- local URL  .build(); ... ... return webClient; } } @SpringBootTest @Import(WebClientTestConfiguration.class) @TestPropertySource(locations=\u0026#34;classpath:test.properties\u0026#34;) class TestConfigurationExampleAppTests { @Autowired private DataService dataService; ... } Here the DataService bean is injected in the test class and uses the WebClient bean configured in the test configuration class annotated with @TestConfiguration, which points to a local URL. This way we can execute our unit test without any dependency on an external system.\nWe are also overriding the behavior of the WebClient bean to point to localhost so that we can use a local instance of the REST API only for unit testing.\nThe @TestConfiguration annotation provides the capability for defining additional beans or for modifying the behavior of existing beans in the Spring Application Context for applying customizations primarily required for running a unit test.\nEnabling the Bean Overriding Behavior Every bean in the Spring application context will have one or more unique identifiers. Bean overriding means registering or defining another bean with the same identifier, as a result of which the previous bean definition is overridden with a new bean implementation.\nThe bean overriding feature is disabled by default since Spring Boot 2.1. A BeanDefinitionOverrideException is thrown if we attempt to override one or more beans.\nWe should not enable this feature during application runtime.
However, we need to enable this feature during testing if we want to override one or more bean definitions.\nWe enable this feature by enabling the application property spring.main.allow-bean-definition-overriding in a resource file as shown here:\nspring.main.allow-bean-definition-overriding=true Here we are setting the application property spring.main.allow-bean-definition-overriding to true in our test resource file test.properties to enable the bean overriding feature during testing.\nComponent Scanning Behavior Though the @TestConfiguration annotation inherits from the @Configuration annotation, the main difference is that @TestConfiguration is excluded during Spring Boot\u0026rsquo;s component scanning.\nConfiguration classes annotated with @TestConfiguration are excluded from component scanning, so we need to import them explicitly in every test where we want to autowire them.\nThe @TestConfiguration annotation is also annotated with the @TestComponent annotation in its definition to indicate that this annotation should only be used for testing.\nUsing @TestConfiguration in Unit Tests As explained earlier, we can use the @TestConfiguration annotation in two ways during testing:\n Importing the test configuration using the @Import annotation Declaring @TestConfiguration as a static inner class  Using @TestConfiguration with the @Import Annotation The @Import annotation is a class-level annotation that allows us to import the bean definitions from multiple classes annotated with the @Configuration annotation or @TestConfiguration annotation into the application context or Spring test context:\n@TestConfiguration public class WebClientTestConfiguration { ... @Bean public WebClient getWebClient(final WebClient.Builder builder) { //customized for running unit tests  WebClient webClient = builder .baseUrl(\u0026#34;http://localhost\u0026#34;) .build(); ... ...
return webClient; } } @SpringBootTest @Import(WebClientTestConfiguration.class) class TestConfigurationExampleAppTests { // Test case implementations } In this code snippet, our test configuration is defined in a separate class WebClientTestConfiguration which is annotated with the @TestConfiguration annotation. We then use the @Import annotation in our test class TestConfigurationExampleAppTests to import this test configuration.\nWe use autowired injection to access the bean definitions declared in imported @TestConfiguration classes.\nUsing @TestConfiguration with a Static Inner Class In this approach, the class annotated with @TestConfiguration is implemented as a static inner class in the test class itself. The Spring Boot test context will automatically discover and load the test configuration if it is declared as a static inner class:\n@SpringBootTest public class UsingStaticInnerTestConfiguration { @TestConfiguration public static class WebClientConfiguration { @Bean public WebClient getWebClient(final WebClient.Builder builder) { return builder.baseUrl(\u0026#34;http://localhost\u0026#34;).build(); } } @Autowired private DataService dataService; // Test methods of dataService  } The test configuration is defined as a static inner class in this test. Here we do not need to import the test configuration explicitly.\nConclusion In this post, we looked at how we can use the @TestConfiguration annotation for creating a custom bean or for overriding an existing bean for unit testing of Spring applications.\nAlthough we have talked about unit tests here, we can also use @TestConfiguration in integration tests to add specialized bean configurations required for component interactions in specific test environments.\nHere is a summary of the things we covered:\n The @TestConfiguration annotation allows us to define additional beans or override existing beans in the Spring application context to add specialized configuration for testing. 
We can use the @TestConfiguration annotation in two ways during testing:   Declare the configuration in a separate class and then import the configuration in the test class Declare the configuration in a static inner class inside the test class  The bean overriding feature is disabled by default. We enable this feature by switching on the application property spring.main.allow-bean-definition-overriding in our test.  ","date":"May 28, 2021","image":"https://reflectoring.io/images/stock/0102-traffic-light-1200x628-branded_hue287682224420726c39f735d1f707ea7_93887_650x0_resize_q90_box.jpg","permalink":"/spring-boot-testconfiguration/","title":"Testing with Spring Boot's @TestConfiguration Annotation"},{"categories":["Spring Boot","AWS"],"contents":"Amazon Relational Database Service (AWS RDS) is a relational database service available in AWS Cloud. The Spring Framework has always had good support for database access technologies built on top of JDBC. Spring Cloud AWS uses the same principles to provide integration with the AWS RDS service through the Spring Cloud AWS JDBC module.\nIn this tutorial, we will look at using the Spring Cloud AWS JDBC module to integrate with the AWS RDS service, covering some basic concepts of AWS RDS along with code examples.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n  Example Code This article is accompanied by a working code example on GitHub. AWS RDS Concepts Amazon Relational Database Service (AWS RDS) is a managed service for a set of supported relational databases. 
As of today, the supported databases are Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server.\nApart from providing reliable infrastructure and scalable capacity, AWS takes care of all the database administration tasks like taking backups and applying database patches while leaving us free to focus on building our applications.\nDB Instance An RDS DB instance is the basic building block of Amazon RDS. It is an isolated database environment in the cloud and is accessed using the same database-specific client tools used to access on-premise databases.\nEach DB instance has a DB instance identifier used to uniquely identify the DB instance when interacting with the Amazon RDS service using the API or AWS CLI commands.\nDB Instance Class The DB instance class is used to specify the compute and memory capacity of the AWS RDS DB instance. RDS supports three types of instance classes:\nStandard: These are general-purpose instance classes that deliver balanced compute, memory, and networking for a broad range of general-purpose workloads.\nMemory Optimized: This class of instances is optimized for memory-intensive applications, offering both high compute capacity and a high memory footprint.\nBurstable Performance: These instances provide a baseline performance level, with the ability to burst to full CPU usage.\nStorage Types DB instances for AWS RDS use AWS Elastic Block Store (Amazon EBS) volumes for database and log storage. 
AWS RDS provides three types of storage: General Purpose SSD (also known as gp2), Provisioned IOPS SSD (also known as io1), and magnetic (also known as standard), which differ in performance characteristics and price:\nGeneral Purpose SSD volumes offer cost-effective storage that is ideal for a broad range of workloads.\nProvisioned IOPS storage is designed to meet the needs of I/O-intensive workloads, particularly database workloads, that require low I/O latency and consistent I/O throughput.\nThe magnetic storage type is still supported for backward compatibility and is not recommended for any new storage needs.\nFeatures of Spring Cloud AWS JDBC The Spring Cloud AWS JDBC module enables our Java applications to access databases created in AWS RDS with the standard JDBC protocol using declarative configuration. Some of the main features provided by this module are:\n Data source configuration by creating an Amazon RDS-backed data source that can be injected into other beans as a javax.sql.DataSource. Detection of a read-replica instance and routing of read-only transactions to the read-replica to increase overall throughput. Retry support to send failed database requests to a secondary instance in a different Availability Zone.  
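Because the first of these features exposes the RDS instance as a standard javax.sql.DataSource, any plain JDBC code can consume it unchanged. As a small illustration (this helper class is not part of the article's example project; in a Spring application the DataSource would be autowired):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

// Illustrative helper, not from the article's example project: since
// Spring Cloud AWS exposes the RDS instance as a plain javax.sql.DataSource,
// standard JDBC code like this works against it without modification.
class DatabaseHealthCheck {

    private final DataSource dataSource;

    // In a Spring application this constructor argument would be autowired.
    DatabaseHealthCheck(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Runs a trivial liveness query; any failure (no connection, bad
    // credentials, unreachable instance) is reported as "down".
    boolean isDatabaseUp() {
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT 1")) {
            return resultSet.next();
        } catch (Exception e) {
            return false;
        }
    }
}
```

This also shows why the features above are purely additive: application code depends only on the JDBC abstractions, while Spring Cloud AWS supplies the RDS-backed implementation behind them.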
Setting Up the Environment After a basic understanding of AWS RDS and Spring Cloud AWS JDBC, we will now get down to using these concepts in an example.\nLet us first create a Spring Boot project with the help of the Spring Boot Initializr with the required dependencies (Spring Web and Lombok), and then open the project in our favorite IDE.\nFor configuring Spring Cloud AWS, let us add a separate Spring Cloud AWS BOM in our pom.xml file using this dependencyManagement block:\n\u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-aws-dependencies\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.3.0\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; Creating the AWS RDS Instance Let us create a DB instance using the AWS Management Console:\nHere we have chosen to create the DB instance using the Easy Create option, which sets default values for most of the properties. We have chosen MySQL as our database engine and specified the database identifier, user name, and password.\nWe also need to enable public access and allow access from our host if we wish to access this instance from the public network over the internet. Read this article to learn how to deploy an RDS instance into a private subnet with CloudFormation, so that it is not publicly accessible.\nConnecting to the RDS Instance After the DB instance is available, we have to connect to it from our development environment to run our database operations. 
For this, let us retrieve its endpoint from the DB instance connectivity description in the AWS Management Console:\nWe can see the endpoint of our DB instance that we created in the previous step as testinstance.cfkcguht5mdw.us-east-1.rds.amazonaws.com. We can also retrieve the endpoint with the DescribeDBInstances API or by running the describe-db-instances command in the AWS CLI.\nWe use this endpoint to construct the connection string required to connect to our DB instance from our favorite database tool or programming language.\nSince we chose MySQL as our database engine when creating our DB instance, we will use a MySQL client to connect to it. MySQL Shell is a command-line shell for the MySQL database where we can run SQL statements and scripts written in JavaScript and Python.\nLet us download the MySQL Shell installer for our operating system and install it in our environment. We will then be able to run MySQL commands in the shell.\nBut before that, let us connect to the DB instance in AWS RDS that we created earlier, using its endpoint in the below command:\nmysqlsh -h testinstance.cfkcguht5mdw.us-east-1.rds.amazonaws.com -P 3306 -u pocadmin We have specified the port and user, apart from specifying the endpoint of our DB instance in the connection string.\nWe also need to ensure that the AWS RDS instance is reachable from the network where MySQL Shell is running. 
If we are accessing AWS RDS from a public network over the internet, we need to enable the public access property of our DB instance and associate a security group to accept connections from our host IP.\nWith our connection established, we can run MySQL commands in the shell as shown below:\nMySQL testinstance.cfkcguht5mdw.us-east-1.rds SQL \u0026gt; SHOW DATABASES; +--------------------+ | Database | +--------------------+ | information_schema | | mysql | | performance_schema | +--------------------+ 3 rows in set (0.1955 sec) MySQL testinstance.cfkcguht5mdw.us-east-1.rds SQL \u0026gt; USE mysql; Default schema set to `mysql`. Fetching table and column names from `mysql` for auto-completion... Press ^C to stop. MySQL testinstance.cfkcguht5mdw.us-east-1 mysql SQL \u0026gt; SELECT CURRENT_DATE FROM DUAL; +--------------+ | CURRENT_DATE | +--------------+ | 2021-05-11 | +--------------+ 1 row in set (0.1967 sec) Here we list the default set of databases in MySQL and then select a database named mysql before running a simple SQL command to fetch the current date.\nWe will use the same database in our application. We have to specify this database name in the configuration of our data source in our Spring Boot application, which we will cover in the next section.\nConfiguring the Data Source A datasource is a factory for obtaining connections to a physical data source. Let\u0026rsquo;s include the module dependency for Spring Cloud AWS JDBC in our Maven configuration. If we were to use the plain JDBC module of Spring, we would have added a module dependency on spring-boot-starter-jdbc for configuring our datasource:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-jdbc\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; We will not need this now, since we are using AWS RDS with Spring Cloud. 
We will instead add a dependency on the spring-cloud-starter-aws-jdbc module for configuring a data source for AWS RDS:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-starter-aws-jdbc\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; At runtime, Spring Cloud AWS will pull all the required metadata from the AWS RDS DB instance and create a Tomcat JDBC pool with default properties. We will further configure this data source with two sets of properties in our resource file named application.properties:\ncloud.aws.credentials.profile-name=pratikpoc cloud.aws.region.auto=false cloud.aws.region.static=us-east-1 cloud.aws.rds.instances[0].db-instance-identifier=testinstance cloud.aws.rds.instances[0].username=pocadmin cloud.aws.rds.instances[0].password=pocadmin cloud.aws.rds.instances[0].databaseName=mysql The first set of three properties is used to specify the security credentials for connecting to AWS and the region as us-east-1. The next set of four properties is used to specify the AWS RDS instance name, user name, password, and database name.\nWe had specified the AWS RDS instance name when we created our DB instance in RDS along with the user name and password. 
RDS instances are referred to by instances[0] for the first instance, instances[1] for the second instance, and so on.\nThe database name is the name of the database we selected in the MySQL Shell in the previous section - in our case mysql.\nConfiguring the Data Source Pool With the configuration done so far, Spring Cloud AWS creates the Tomcat JDBC pool with the default properties. We can configure the pool further inside our configuration class using the RdsInstanceConfigurer class to instantiate a DataSourceFactory with custom pool attributes as shown here:\n@Configuration public class ApplicationConfiguration { @Bean public RdsInstanceConfigurer instanceConfigurer() { return ()-\u0026gt; { TomcatJdbcDataSourceFactory dataSourceFactory = new TomcatJdbcDataSourceFactory(); dataSourceFactory.setInitialSize(10); dataSourceFactory.setValidationQuery(\u0026#34;SELECT 1 FROM DUAL\u0026#34;); return dataSourceFactory; }; } } Here we are overriding the validation query and the initial size during instantiation of the dataSourceFactory.\nInjecting the Data Source This data source can now be injected into any Spring bean, like our repository class in this example:\n@Service public class SystemRepository { private final JdbcTemplate jdbcTemplate; @Autowired public SystemRepository(DataSource dataSource) { this.jdbcTemplate = new JdbcTemplate(dataSource); } public String getCurrentDate() { String result = jdbcTemplate.queryForObject( \u0026#34;SELECT CURRENT_DATE FROM DUAL\u0026#34;, new RowMapper\u0026lt;String\u0026gt;(){ @Override public String mapRow(ResultSet rs, int rowNum) throws SQLException { return rs.getString(1); } }); return result; } } As we can see here, it is completely decoupled from the database configuration. 
We can easily change the database configuration or the database itself (to MySQL, PostgreSQL, or Oracle) in RDS without any change to the code.\nIf we work with multiple data source configurations inside one application context, we need to qualify the data source injection point with a @Qualifier annotation.\nRunning the Example With our data source set up and injected into a repository class, let us now run this example with a JUnit test:\n@SpringBootTest class SpringcloudrdsApplicationTests { @Autowired private SystemRepository systemRepository; @Test void testCurrentDate() { String currentDate = systemRepository.getCurrentDate(); System.out.println(\u0026#34;currentDate \u0026#34;+currentDate); } } Once again, there is nothing specific to Spring Cloud here. All the magic happens in the configuration.\nIn this JUnit test, we are invoking our repository class method to print the current date. The output log after running the test is shown below:\n:: Spring Boot :: (v2.4.5) ... : Starting SpringcloudrdsApplicationTests using Java 14.0.1 ... ... Loading class `com.mysql.jdbc.Driver\u0026#39;. This is deprecated. \\ The new driver class is `com.mysql.cj.jdbc.Driver\u0026#39;... currentDate 2021-05-12 ... : Shutting down ExecutorService \u0026#39;applicationTaskExecutor\u0026#39; We can see a warning in the log for using a deprecated driver class, which is safe to ignore. We have not specified any driver class here. The driver class com.mysql.jdbc.Driver is registered based on the metadata read from the database connection to AWS RDS.\nConfiguring the Read-Replica for Increasing Throughput Replication is a process by which data from one database server (also known as the source database) is copied to one or more database servers (known as replicas). 
It is a feature of the MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL database engines that can be configured with AWS RDS.\nAmazon RDS uses this built-in replication feature of these databases to create a special type of DB instance called a read replica from a source DB instance.\nThe source DB instance plays the role of the primary DB instance, and updates made to the primary DB instance are asynchronously copied to the read replica.\nThis way we can increase the overall throughput of the database, reducing the load on our primary DB instance by routing read queries from our applications to the read replica.\nLet us create a read-replica of the DB instance from the RDS console:\nHere we are creating a replica of the DB instance we created earlier.\nSpring Cloud AWS supports the use of read-replicas with the help of Spring Framework\u0026rsquo;s declarative transaction support with read-only transactions. We do this by enabling read-replica support in our data source configuration.\nWhen read-replica support is enabled, any read-only transaction will be routed to a read-replica instance and the primary database will be used only for write operations.\nWe enable read-replica support by setting the property readReplicaSupport. 
Our application.properties with this property set looks like this:\ncloud.aws.credentials.profile-name=pratikpoc cloud.aws.region.auto=false cloud.aws.region.static=us-east-1 cloud.aws.rds.instances[0].db-instance-identifier=testinstance cloud.aws.rds.instances[0].username=pocadmin cloud.aws.rds.instances[0].password=pocadmin cloud.aws.rds.instances[0].databaseName=mysql cloud.aws.rds.instances[0].readReplicaSupport=true Here we have set readReplicaSupport to true to enable read-replica support.\nOur service class with a read-only method looks like this:\n@Service public class SystemRepository { private final JdbcTemplate jdbcTemplate; @Autowired public SystemRepository(DataSource dataSource) { this.jdbcTemplate = new JdbcTemplate(dataSource); } @Transactional(readOnly = true) public List\u0026lt;String\u0026gt; getUsers(){ List\u0026lt;String\u0026gt; result = jdbcTemplate.query(\u0026#34;SELECT USER() FROM DUAL\u0026#34;, new RowMapper\u0026lt;String\u0026gt;(){ @Override public String mapRow(ResultSet rs, int rowNum) throws SQLException { return rs.getString(1); } }); return result; } } Here we have decorated the method getUsers() with @Transactional(readOnly = true). At runtime, all the invocations of this method will be sent to the read-replica.\nWe can also see that we have not created any separate data source for the read-replica of our DB instance. With read-replica support, Spring Cloud AWS JDBC searches for any read-replica that is created for the master DB instance and routes the read-only transactions to one of the available read-replicas.\nConfiguring Fail-Over for High Availability A high availability environment in AWS RDS is provided by creating the DB instance in multiple Availability Zones. 
This type of deployment, also called Multi-AZ deployment, provides failover support for the DB instances if one Availability Zone is not available due to an outage of the primary instance.\nThis replication is synchronous as compared to the read-replica described in the previous section.\nThe Spring Cloud AWS JDBC module supports Multi-AZ failover with a retry interceptor, which can be associated with a method to retry any failed transactions during a Multi-AZ failover. The configuration of our retry interceptor is shown below:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;beans xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; ...\u0026gt; \u0026lt;jdbc:retry-interceptor db-instance-identifier=\u0026#34;testinstance\u0026#34; id=\u0026#34;interceptor\u0026#34; max-number-of-retries=\u0026#34;3\u0026#34; amazon-rds=\u0026#34;customRdsClient\u0026#34;/\u0026gt; \u0026lt;bean id=\u0026#34;customRdsClient\u0026#34; class=\u0026#34;io.pratik.springcloudrds.SystemRepository\u0026#34; \u0026gt; \u0026lt;constructor-arg value=\u0026#34;com.amazonaws.services.rds.AmazonRDS\u0026#34;/\u0026gt; \u0026lt;/bean\u0026gt; \u0026lt;/beans\u0026gt; The retry-interceptor tag in the XML configuration creates an AOP interceptor that can be used to retry any database operations that failed due to a temporary error, like connectivity loss during failover to a DB instance in a secondary Availability Zone.\nHowever, it is better to provide direct feedback to a user in online transactions instead of frequent retries. So the fail-over support is mainly useful for batch applications where the responsiveness of a service call is not important.\nConclusion We saw how to use the Spring Cloud AWS JDBC module for accessing the database of our application with the AWS RDS service. 
Here is a summary of the things we covered:\n A DB instance is the foundational block that needs to be created when working with AWS Relational Database Service (RDS). It is the container for multiple databases. A DB instance is configured with a storage type and DB instance class based on our storage and processing requirements. These need to be specified when creating a DB instance in AWS Relational Database Service. The data source backed by a DB instance in AWS RDS is created in the application at runtime. The read-replica feature of RDS is used to increase throughput and can be enabled in Spring Cloud AWS JDBC by setting a property and decorating a method with the @Transactional(readOnly = true) annotation. Failover support is provided with the help of retry interceptors.  I hope this will help you to get started with building applications using Spring Cloud AWS, with AWS RDS as the data source.\nYou can also read an article published earlier on using Spring Cloud AWS Messaging for accessing Amazon Simple Queue Service (SQS) since a majority of real-life applications need to use a mix of database persistence and message queuing for performing a wide variety of business functions.\nYou can refer to all the source code used in the article on GitHub.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"May 25, 2021","image":"https://reflectoring.io/images/stock/0046-rack-1200x628-branded_hu38983fac43ab7b5246a0712a5f744c11_252723_650x0_resize_q90_box.jpg","permalink":"/spring-cloud-aws-rds/","title":"Getting Started With AWS RDS and Spring Cloud"},{"categories":["Spring Boot"],"contents":"In Spring 5, Spring gained a reactive web framework: Spring WebFlux. 
This is designed to co-exist alongside the existing Spring Web MVC APIs, but to add support for non-blocking designs. Using WebFlux, you can build asynchronous web applications, using reactive streams and functional APIs to better support concurrency and scaling.\nAs part of this, Spring 5 introduced the new WebClient API, replacing the existing RestTemplate client. Using WebClient you can make synchronous or asynchronous HTTP requests with a functional fluent API that can integrate directly into your existing Spring configuration and the WebFlux reactive framework.\nIn this article we\u0026rsquo;ll look first at how you can start sending simple GET and POST requests to an API with WebClient right now, and then discuss how to take WebClient further for advanced use in substantial production applications.\nHow to Make a GET Request with WebClient Let\u0026rsquo;s start simple, with a plain GET request to read some content from a server or API.\nTo get started, you\u0026rsquo;ll first need to add some dependencies to your project, if you don\u0026rsquo;t have them already. If you\u0026rsquo;re using Spring Boot you can use spring-boot-starter-webflux, or alternatively you can install spring-webflux and reactor-netty directly.\nThe Spring WebClient API must be used on top of an existing asynchronous HTTP client library. 
In most cases that will be Reactor Netty, but you can also use Jetty Reactive HttpClient or Apache HttpComponents, or integrate others by building a custom connector.\nOnce these are installed, you can send your first GET request in WebClient:\nWebClient client = WebClient.create(); WebClient.ResponseSpec responseSpec = client.get() .uri(\u0026#34;http://example.com\u0026#34;) .retrieve(); There are a few things happening here:\n We create a WebClient instance We define a request using the WebClient instance, specifying the request method (GET) and URI We finish configuring the request, and obtain a ResponseSpec  This is everything required to send a request, but it\u0026rsquo;s important to note that no request has actually been sent at this point! As a reactive API, the request is not actually sent until something attempts to read or wait for the response.\nHow do we do that?\nHow to Handle an HTTP Response with WebClient Once we\u0026rsquo;ve made a request, we usually want to read the contents of the response.\nIn the above example, we called .retrieve() to get a ResponseSpec for a request. This is an asynchronous operation, which doesn\u0026rsquo;t block or wait for the request itself, which means that on the following line the request is still pending, and so we can\u0026rsquo;t yet access any of the response details.\nBefore we can get a value out of this asynchronous operation, we need to understand the Flux and Mono types from Reactor.\nFlux A Flux represents a stream of elements. It\u0026rsquo;s a sequence that will asynchronously emit any number of items (0 or more) in the future, before completing (either successfully or with an error).\nIn reactive programming, this is our bread-and-butter. 
A Flux is a stream that we can transform (giving us a new stream of transformed events), buffer into a List, reduce down to a single value, concatenate and merge with other Fluxes, or block on to wait for a value.\nMono A Mono is a specific but very common type of Flux: a Flux that will asynchronously emit either 0 or 1 results before it completes.\nIn practice, it\u0026rsquo;s similar to Java\u0026rsquo;s own CompletableFuture: it represents a single future value.\nIf you\u0026rsquo;d like more background on these, take a look at Spring\u0026rsquo;s own docs which explain the Reactive types and their relationship to traditional Java types in more detail.\nReading the Body To read the response body, we need to get a Mono (i.e: an async future value) for the contents of the response. We then need to unwrap that somehow, to trigger the request and get the response body content itself, once it\u0026rsquo;s available.\nThere are a few different ways to unwrap an asynchronous value. To start with, we\u0026rsquo;ll use the simplest traditional option, by blocking to wait for the data to arrive:\nString responseBody = responseSpec.bodyToMono(String.class).block(); This gives us a string containing the raw body of the response. It\u0026rsquo;s possible to pass different classes here to parse content automatically into an appropriate format, or to use a Flux here instead to receive a stream of response parts (for example from an event-based API), but we\u0026rsquo;ll come back to that in just a minute.\nNote that we\u0026rsquo;re not checking the status here ourselves. When we use .retrieve(), the client automatically checks the status code for us, providing a sensible default by throwing an error for any 4xx or 5xx responses. 
We\u0026rsquo;ll talk about custom status checks \u0026amp; error handling later on too.\nHow to Send a Complex POST Request with WebClient We\u0026rsquo;ve seen how to send a very basic GET request, but what happens if we want to send something more advanced?\nLet\u0026rsquo;s look at a more complex example:\nMultiValueMap\u0026lt;String, String\u0026gt; bodyValues = new LinkedMultiValueMap\u0026lt;\u0026gt;(); bodyValues.add(\u0026#34;key\u0026#34;, \u0026#34;value\u0026#34;); bodyValues.add(\u0026#34;another-key\u0026#34;, \u0026#34;another-value\u0026#34;); String response = client.post() .uri(new URI(\u0026#34;https://httpbin.org/post\u0026#34;)) .header(\u0026#34;Authorization\u0026#34;, \u0026#34;Bearer MY_SECRET_TOKEN\u0026#34;) .contentType(MediaType.APPLICATION_FORM_URLENCODED) .accept(MediaType.APPLICATION_JSON) .body(BodyInserters.fromFormData(bodyValues)) .retrieve() .bodyToMono(String.class) .block(); As we can see here, WebClient allows us to configure headers by either using dedicated methods for common cases (.contentType(type)) or generic keys and values (.header(key, value)).\nIn general, using dedicated methods is preferable, as their stricter typings will help us provide the right values, and they include runtime validation to catch various invalid configurations too.\nThis example also shows how to add a body. There are a few options here:\n We can call .body() with a BodyInserter, which will build body content for us from form values, multipart values, data buffers, or other encodeable types. We can call .body() with a Flux (including a Mono), which can stream content asynchronously to build the request body. We can call .bodyValue(value) to provide a string or other encodeable value directly.  Each of these has different use cases. 
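For intuition about the form-data case above: BodyInserters.fromFormData ultimately produces an ordinary application/x-www-form-urlencoded body string. A rough pure-JDK sketch of that encoding (illustrative only, not Spring's actual implementation; multi-value keys are omitted for brevity):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

class FormBodySketch {

    // Roughly what BodyInserters.fromFormData produces: each key=value pair
    // URL-encoded and joined with '&'. Illustrative, not Spring's code.
    static String encode(Map<String, String> values) {
        StringBuilder body = new StringBuilder();
        for (Map.Entry<String, String> entry : values.entrySet()) {
            if (body.length() > 0) {
                body.append('&');
            }
            body.append(URLEncoder.encode(entry.getKey(), StandardCharsets.UTF_8))
                .append('=')
                .append(URLEncoder.encode(entry.getValue(), StandardCharsets.UTF_8));
        }
        return body.toString();
    }

    public static void main(String[] args) {
        Map<String, String> values = new LinkedHashMap<>();
        values.put("key", "value");
        values.put("another-key", "another-value");
        // Prints: key=value&another-key=another-value
        System.out.println(encode(values));
    }
}
```

Seeing the encoding spelled out makes it clearer why the Content-Type header and the body inserter have to agree: the inserter only produces bytes, and the header tells the server how to decode them.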
Most developers who aren\u0026rsquo;t familiar with reactive streams will find the Flux API unhelpful initially, but as you invest more in the reactive ecosystem, asynchronous chains of streamed data like this will begin to feel more natural.\nHow to Take Spring WebClient into Production The above should be enough to let you create and send basic requests and read responses, but there are a few more topics we need to cover if you want to build substantial applications on top of this.\nReading Response Headers Until now, we\u0026rsquo;ve focused on reading the response body, and ignored the headers. A lot of the time that\u0026rsquo;s fine, and the important headers will be handled for us, but you will find that many APIs include valuable metadata in their response headers, not just the body.\nThis data is easily available within the WebClient API too, using the .toEntity() API, which gives us a ResponseEntity, wrapped in a Mono.\nThis allows us to examine response headers:\nResponseEntity\u0026lt;String\u0026gt; response = client.get() // ...  .retrieve() .toEntity(String.class) .block(); HttpHeaders responseHeaders = response.getHeaders(); List\u0026lt;String\u0026gt; headerValue = responseHeaders.get(\u0026#34;header-name\u0026#34;); Parsing Response Bodies In the examples above, we\u0026rsquo;ve handled responses as simple strings, but Spring can also automatically parse these into many higher-level types for you, by providing a more specific type when reading the response, like so:\nMono\u0026lt;Person\u0026gt; response = client.post() // ...  .retrieve() .bodyToMono(Person.class) Which classes can be converted depends on the HttpMessageReaders that are available. 
By default, the supported formats include:\n Conversion of any response to String, byte[], ByteBuffer, DataBuffer or Resource Conversion of application/x-www-form-urlencoded responses into MultiValueMap\u0026lt;String,String\u0026gt; Conversion of multipart/form-data responses into MultiValueMap\u0026lt;String, Part\u0026gt; Deserialization of JSON data using Jackson, if available Deserialization of XML data using Jackson\u0026rsquo;s XML extension or JAXB, if available  This can also use the standard HttpMessageConverter configuration registered in your Spring application, so message converters can be shared between your WebMVC or WebFlux server code and your WebClient instances. If you\u0026rsquo;re using Spring Boot, you can use the pre-configured WebClient.Builder instance to get this set up automatically.\nFor more details, take a look at the Spring WebFlux codecs documentation.\nManually Handling Response Status By default .retrieve() will check for error statuses for you. That\u0026rsquo;s fine for simple cases, but you\u0026rsquo;re likely to find many REST APIs that encode more detailed success information in their status codes (for example returning 201 or 202 values), or APIs where you want to add custom handling for some error status.\nIt\u0026rsquo;s possible to read the status from the ResponseEntity like we did for the headers, but that\u0026rsquo;s only useful for accepted statuses, since an error status will throw the error before we receive the entity in that case.\nTo handle those types of status codes ourselves, we need to add an onStatus handler. This handler can match certain status codes, and return a Mono\u0026lt;Throwable\u0026gt; (to control the specific error thrown) or Mono.empty() to stop that status code from being treated as an error.\nIt works like so:\nResponseEntity response = client.get() // ...  
.retrieve() // Don\u0026#39;t treat 404 responses as errors:  .onStatus( status -\u0026gt; status == HttpStatus.NOT_FOUND, clientResponse -\u0026gt; Mono.empty() ) .toEntity(String.class) .block(); // Manually check and handle the relevant status codes: if (response.getStatusCode() == HttpStatus.NOT_FOUND) { // ... } else { // ... } Making Fully Asynchronous Requests Up until this point we\u0026rsquo;ve called .block() on every response, blocking the thread completely to wait for the response to arrive.\nWithin a traditional heavily threaded architecture that might fit quite naturally, but in a non-blocking design we need to avoid these kinds of blocking operations wherever possible.\nAs an alternative, we can handle requests by weaving transforms around our Mono or Flux values to handle and combine values as they\u0026rsquo;re returned, and then pass these Flux-wrapped values into other non-blocking APIs, all fully asynchronously.\nThere isn\u0026rsquo;t space here to fully explain this paradigm or WebFlux from scratch, but an example of doing so with WebClient might look like this:\n@GetMapping(\u0026#34;/user/{id}\u0026#34;) private Mono\u0026lt;User\u0026gt; getUserById(@PathVariable String id) { // Load some user data asynchronously, e.g. from a DB:  Mono\u0026lt;BaseUserInfo\u0026gt; userInfo = getBaseUserInfo(id); // Load user data with WebClient from a separate API:  Mono\u0026lt;UserSubscription\u0026gt; userSubscription = client.get() .uri(\u0026#34;http://subscription-service/api/user/\u0026#34; + id) .retrieve() .bodyToMono(UserSubscription.class); // Combine the monos: when they are both done, take the  // data from each and combine it into a User object.  
Mono\u0026lt;User\u0026gt; user = userInfo .zipWith(userSubscription) .map((tuple) -\u0026gt; new User(tuple.getT1(), tuple.getT2())); // The resulting mono of combined data can be returned immediately,  // without waiting or blocking, and WebFlux will handle sending  // the response later, once all the data is ready:  return user; } Testing with Spring WebTestClient In addition to WebClient, Spring 5 includes WebTestClient which provides an interface extremely similar to WebClient but designed for convenient testing of server endpoints.\nWe can set this up either by creating a WebTestClient that\u0026rsquo;s bound to a server and sending real requests over HTTP, or one that\u0026rsquo;s bound to a single Controller, RouterFunction or WebHandler to run integration tests using mock request \u0026amp; response objects.\nThat looks like this:\n// Connect to a real server over HTTP: WebTestClient client = WebTestClient .bindToServer() .baseUrl(\u0026#34;http://localhost:8000\u0026#34;) .build(); // Or connect to a single WebHandler using mock objects: WebTestClient client = WebTestClient .bindToWebHandler(handler) .build(); Once we\u0026rsquo;ve created a WebTestClient, we can define requests just like any other WebClient.\nTo send the request and check the result, we call .exchange() and then use the assertion methods available there:\nclient.get() .uri(\u0026#34;/api/user/123\u0026#34;) .exchange() .expectStatus().isNotFound(); // Assert that this is a 404 response There\u0026rsquo;s a wide variety of assertion methods to check the response status, headers and body - see the JavaDoc for the full list.\nInspecting and Mocking WebClient HTTP Traffic with HTTP Toolkit After you\u0026rsquo;ve deployed your WebClient code, you need to be able to debug it. HTTP requests are often the linchpin within complex interactions, and they can fail in many interesting ways. 
It\u0026rsquo;s useful to be able to see the requests and responses your client is working with to understand what your system is doing, and injecting your own data or errors can be a powerful technique for manual testing.\nTo do this, you can use HTTP Toolkit, a cross-platform open-source tool that can capture traffic from a wide variety of Java HTTP clients, and which includes a specific integration to automatically intercept Spring WebClient.\nOnce you have HTTP Toolkit installed, the next step is to intercept your Java HTTP traffic. To do so you can either:\n Click the \u0026lsquo;Fresh Terminal\u0026rsquo; button in HTTP Toolkit to open a terminal, and launch your application from there; or Start your application as normal, then click the \u0026lsquo;Attach to JVM\u0026rsquo; button in HTTP Toolkit to attach to the already running JVM  Once you\u0026rsquo;ve intercepted your traffic, you can inspect every request and response sent by your application from the \u0026lsquo;View\u0026rsquo; page inside HTTP Toolkit:\n![HTTP Toolkit inspecting HTTP requests]({{ base }}/assets/images/posts/http_toolkit.png)\nYou can also add rules from the \u0026lsquo;Mock\u0026rsquo; page, to interactively mock HTTP responses, breakpoint requests, or inject errors like connection failures and timeouts.\nConclusion In this article we\u0026rsquo;ve looked at everything you need to get started using Spring WebClient. WebFlux and WebClient are mature powerful APIs with a lot to offer on top of the classic Spring feature set. 
Give them a try in your application today!\n","date":"May 25, 2021","image":"https://reflectoring.io/images/stock/0001-network-1200x628-branded_hu72d229b68bf9f2a167eb763930d4c7d5_172647_650x0_resize_q90_box.jpg","permalink":"/spring-webclient/","title":"Sending HTTP requests with Spring WebClient"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you want to be structured in goal-making and tracking you want to learn about what motivates people and what doesn\u0026rsquo;t you are interested in hearing stories about the goals of big and successful companies  Book Facts  Title: Measure What Matters Author: John Doerr Word Count: ~ 62,000 (ca. 4 hours at 250 words / minute) Reading Ease: easy to medium Writing Style: rather short chapters, much of the text contributed by company leaders telling their story about OKRs Year Published: 2017  Overview {% include book-link.html book=\u0026ldquo;measure-what-matters\u0026rdquo; %} tells success stories of companies that used OKRs to reach their objectives. These stories are about topics like crafting OKRs, aligning them with each other and between teams in the company, and changing culture within a company.\nThe book is written in easy language and entertaining to read because most chapters are written by industry leaders, telling the story of how OKRs helped them to reach their companies' objectives.\nI\u0026rsquo;m sold on the value of OKRs, for companies as well as for personal growth. Well-crafted OKRs can make the difference. But so can badly-crafted OKRs\u0026hellip;\nNotes Here are my notes from the book, distilled into distinct ideas.\nOKRs OKR stands for \u0026ldquo;objectives and key results\u0026rdquo;. An objective is the goal that you want to reach. Key results are the steps that get you there. A company key result can be an objective for a team in the company, which can be broken down into more key results, and so on. 
Ideally, all OKRs of the teams in a company are aligned to the company-wide objectives.\n \u0026ldquo;An objective has a set of concrete steps that you\u0026rsquo;re intentionally engaged in and actually trying to go for\u0026rdquo; - Bill Gates\n Measuring OKRs Measuring OKR progress regularly is a source of motivation and purpose. OKRs can be measured in different ways, but scoring them on a scale between 0 and 1 is the most popular way. 0 means nothing of the objective or key result has been reached, while 1 means that the objective or key result has been reached 100%. Scoring is not always completely objective, but we can influence it a bit according to context.\n \u0026ldquo;People need a benchmark to know how they\u0026rsquo;re performing against it.\u0026rdquo; - Susan Wojcicki, CEO of YouTube\n  \u0026ldquo;Making measured headway can be more incentivizing than public recognition.\u0026rdquo; - John Doerr\n  \u0026ldquo;The simple act of writing down a goal increases your chance of reaching it.\u0026rdquo; - John Doerr\n Too Many Goals Blur Focus If you have too many goals, you cannot execute on them. Many different goals will blur the focus on any of them. Instead, focus on goals with the most potential. For each team / organization, there shouldn’t be more than a handful of OKRs each with no more than a handful of key results.\n “Successful organizations focus on the handful of initiatives that can make a real difference.” - John Doerr\n OKRs Are Not Set In Stone Regularly checking in with OKRs gives us the opportunity to re-evaluate them. If the objective is not the best objective for the team / company any more, the objective should be changed. Otherwise, we will try to achieve goals that are vain and do not provide real value when reached.\n \u0026ldquo;Our goals are servants to our purpose, not the other way around.\u0026rdquo; - John Doerr\n Aligned Goals Increase Purpose OKRs are powerful when aligned and can be harmful if not aligned. 
If product management has product KRs and development has infrastructure KRs, they will run into conflict.\nOKRs should be aligned by mixing top-down objectives (to give direction) with bottom-up KRs (to give the teams purpose and involvement).\nLetting teams come up with OKRs bottom-up will allow them to build networks within the company that will make hairy goals attainable.\n \u0026ldquo;People are more likely to feel fulfilled when they have clear and aligned targets.\u0026rdquo; - Doug Dennerline, CEO of LinkedIn\n Goals Without Leadership Commitment Are Hollow If a leader does not commit to OKRs, why should the rest of the company? On the other hand, if a leader has clear and transparent OKRs, and talks about them regularly in front of the company, people will support the leadership OKRs and ladder their own OKRs up against them.\nGoals Without Monetary Incentives Increase Cooperation OKRs should be kept separate from bonus incentives or performance reviews. As soon as OKRs are tied to a bonus, people start playing politics to achieve their goals over other teams' goals. If OKRs are transparent and not tied to monetary incentives, the probability of teams helping each other grows, because changing goals becomes possible and teams can align their goals with each other and with the company.\nTransparent goals that are not incentivized with bonuses take politics out of play.\nClear Goals Reduce Stress If a person\u0026rsquo;s goals are well-aligned with their team\u0026rsquo;s goals, which in turn are well-aligned with the company goals, it gives a person purpose and a clear picture of what is expected of them. It also makes decisions easy, as the goals give a framework in which to make decisions.\nIf different teams' goals are aligned well with the company goals, it also takes politics out of the picture. 
Teams won\u0026rsquo;t be trying to reach their own goals over other teams' goals, but will instead help each other.\nClear goals also improve the onboarding experience of new hires. If they know their goals, their team\u0026rsquo;s goals, and their company\u0026rsquo;s goals from the start, they can start making decisions and challenging others\u0026rsquo; decisions right away.\nThus, in general, clear goals reduce uncertainty and stress.\n \u0026ldquo;When you know your company objectives like you know your last name, it\u0026rsquo;s very calming.\u0026rdquo; - Julia Collins, Co-CEO of Zume Pizza\n Transparent Goals Increase Motivation Sharing goals with others drastically increases the motivation to achieve those goals. It also opens up possibilities for other parts of the organization to help reach those goals. After all, if no one knows about your goals, no one will be able to help you.\nTransparent goals are also a key factor for aligning teams across the organization to the goals of the company because teams are empowered to align their goals with transparent top-level goals.\n \u0026ldquo;When people help choose a course of action, they are more likely to see it through.\u0026rdquo; - John Doerr\n  \u0026ldquo;People who choose their destination will own a deeper awareness of what it takes to get there.\u0026rdquo; - John Doerr\n  \u0026ldquo;Transparency seeds collaboration.\u0026rdquo; - John Doerr\n Bottom-Up Goals Increase Engagement People need to be able to set their own goals, or at least be involved in the process. This gives a team purpose and motivation to actually achieve those goals.\nOKRs should not (only) be dictated top-down. Instead, pass down an objective and let the team decide on the key results.\n You can tell people to clean up a mess, but should you be telling them which broom to use? 
- John Doerr\n Ambitious Goals Increase Engagement A BHAG (big hairy audacious goal) unifies teams to stretch themselves and be more engaged in reaching that goal. If you don\u0026rsquo;t quite reach a stretch goal, you will still have done something great. To find a BHAG, ask the question \u0026ldquo;what would amazing look like?\u0026rdquo;.\nCompanies like Google and YouTube set hairy goals for themselves and fail to reach them more often than not.\n \u0026ldquo;In pursuing high-effort, high-risk goals, employee commitment is essential\u0026rdquo; - John Doerr\n  \u0026ldquo;If you set a crazy, ambitious goal and miss it, you\u0026rsquo;ll still achieve something remarkable.\u0026rdquo; - Larry Page\n  \u0026ldquo;Stretch goals can sharpen an entrepreneurial culture.\u0026rdquo; - John Doerr\n  \u0026ldquo;The hairier the mission, the more important your OKRs\u0026rdquo; - Jini Kim\n OKRs Scale Culture A big fear in growing companies is that the culture will change when too many new people are hired into the company to scale up. If the company\u0026rsquo;s OKRs embody the values and principles of the company, and are transparent and accessible to every new hire, they can be the means to scale the culture while the company is growing.\n \u0026ldquo;Without cultural alignment the world\u0026rsquo;s best operational strategy will fail.\u0026rdquo; - Andrew Cole, Chief HR Officer at Lumeris\n CFRs People cannot be measured solely by numbers, so rating their performance by OKRs alone is not a healthy thing to do. CFRs (conversations, feedback, recognition) provide a framework to provide feedback to employees that is divorced from OKRs. OKRs should at most be a small part of a performance review, together with context-sensitive manager and peer feedback. 
Employee satisfaction, purpose, and thus retention are much higher when feedback is given continuously instead of just once a year.\n \u0026quot;If we can rate our Uber drivers (and vice versa) \u0026hellip; why can\u0026rsquo;t a workplace support two-way feedback between managers and employees?\u0026quot; - John Doerr\n  \u0026ldquo;Every cheer is a step towards operating excellence.\u0026rdquo; - John Doerr\n  \u0026ldquo;Corrective feedback is naturally difficult for people. But when done well, it\u0026rsquo;s also the greatest gift you can give to someone.\u0026rdquo; - Donna Morris, Adobe\n IT Needs To Be Aligned With the Company Goals IT is at the core of every modern company. When a company wants to innovate, their IT needs to perform. Well-crafted OKRs help align the IT with the rest of the company and reduce stress at the same time, because when teams are aligned, they will help each other instead of working against each other, trying to achieve their own goals.\n \u0026ldquo;In the brunt of any disruption, IT will bear the brunt of all in-house frustrations.\u0026rdquo; - Atticus Tysen\n  \u0026ldquo;When you\u0026rsquo;re putting out fires every day, it\u0026rsquo;s hard to build a next-generation billing technology.\u0026rdquo; - Atticus Tysen\n Execution \u0026gt; Knowledge Ideas are cheap. Everybody can have ideas. But executing on an idea is what makes the difference and is what achieves goals. Execution is what matters.\n “It almost doesn’t matter what you know. 
It’s what you can […] actually accomplish.” - John Doerr\n  \u0026ldquo;Ideas are easy, execution is everything.\u0026rdquo; - John Doerr\n ","date":"May 11, 2021","image":"https://reflectoring.io/images/covers/measure-what-matters-teaser_hu980b7ffea70bcc27df34b6538be1ff9c_95660_650x0_resize_q90_box.jpg","permalink":"/book-review-measure-what-matters/","title":"Book Notes: Measure What Matters"},{"categories":["Spring Boot","AWS"],"contents":"Spring Cloud is a suite of projects containing many of the services required to make an application cloud-native by conforming to the 12-Factor principles.\nSpring Cloud for Amazon Web Services (AWS) is a sub-project of Spring Cloud which makes it easy to integrate with AWS services using Spring idioms and APIs familiar to Spring developers.\nIn this tutorial, we will look at using Spring Cloud AWS for interacting with Simple Queue Service (SQS) with the help of some basic concepts of queueing and messaging along with code examples.\nCheck Out the Book!  This article gives only a first impression of what you can do with SQS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n  Example Code This article is accompanied by a working code example on GitHub. What is SQS? SQS is a distributed messaging system for point-to-point communication and is offered as a fully managed service in the AWS Cloud.\nIt follows the familiar messaging semantics of a producer sending a message to a queue and a consumer reading this message from the queue once the message is available as shown here:\nThe producer will continue to function normally even if the consumer application is temporarily not available. 
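This decoupling is the key property: the producer can keep publishing even while no consumer is reading. As a rough stdlib analogy (a plain in-memory BlockingQueue, not the SQS API itself), the point-to-point semantics look like this:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class PointToPointSketch {
    public static void main(String[] args) {
        // Stand-in for an SQS queue: messages wait here until a consumer reads them.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        // Producer side: enqueueing succeeds even though no consumer is reading yet.
        queue.offer("order-created:42");

        // Consumer side, possibly much later: read and remove the message.
        String message = queue.poll();
        System.out.println("consumed: " + message);

        // Point-to-point: once consumed, the message is gone from the queue.
        System.out.println("remaining: " + queue.size());
    }
}
```

SQS layers durability, distributed storage, and visibility timeouts on top of this basic hand-off model.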
SQS decouples the producer system from the consumer by facilitating asynchronous modes of communication.\nThe SQS queue used for storing messages is highly scalable and reliable with its storage distributed across multiple servers. The SQS queue can be of two types:\n Standard: Standard queues have maximum throughput, best-effort ordering, and at-least-once delivery. First In First Out (FIFO): When a high volume of transactions is received, messages might get delivered more than once, which might require complex handling of message sequence. For this scenario, we use FIFO queues where the messages are delivered in a \u0026ldquo;First in first out\u0026rdquo; manner. The message is delivered only once and is made available only until the consumer processes it. After the message is processed by the consumer, it is deleted - thereby preventing chances of duplicate processing.  Spring Cloud AWS Messaging Spring Cloud AWS is built as a collection of modules, with each module being responsible for providing integration with an AWS Service.\nSpring Cloud AWS Messaging is the module that does the integration with AWS SQS to simplify the publication and consumption of messages over SQS.\nAmazon SQS allows only payloads of type string, so any object sent to SQS must be transformed into a string representation before being put in the SQS queue. Spring Cloud AWS enables transferring Java objects to SQS by converting them to a string in JSON format.\nIntroducing the Spring Cloud AWS Messaging API The important classes which play different roles for interaction with AWS SQS are shown in this class diagram:\nAn SQS message is represented by the Message interface.\nQueueMessageChannel and QueueMessagingTemplate are two of the main classes used to send and receive messages. 
For receiving, we have a more convenient option: adding polling behavior to a method with the @SqsListener annotation.\nWe can override the default configuration used by all integrations with ClientConfiguration. The client configuration options control how a client connects to Amazon SQS with attributes like proxy settings, retry counts, etc.\nSetting Up the Environment With this basic understanding of SQS and the involved classes, let us work with a few examples by first setting up our environment.\nLet us first create a Spring Boot project with the help of the Spring Boot Initializr, and then open the project in our favorite IDE.\nFor configuring Spring Cloud AWS, let us add a separate Spring Cloud AWS BOM in our pom.xml file using this dependencyManagement block:\n\u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-aws-dependencies\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.3.0\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; For adding the support for messaging, we need to include the module dependency for Spring Cloud AWS Messaging into our Maven configuration. We do this by adding the starter module spring-cloud-starter-aws-messaging:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.awspring.cloud\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-cloud-starter-aws-messaging\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; spring-cloud-starter-aws-messaging includes the transitive dependencies for spring-cloud-starter-aws and spring-cloud-aws-messaging.\nCreating a Message Messages are created using the MessageBuilder helper class. 
The MessageBuilder provides two factory methods for creating messages from either an existing message or with a payload Object:\n@Service public class MessageSenderWithTemplate { ... ... public void send(final String messagePayload) { Message\u0026lt;String\u0026gt; msg = MessageBuilder.withPayload(messagePayload) .setHeader(\u0026#34;sender\u0026#34;, \u0026#34;app1\u0026#34;) .setHeaderIfAbsent(\u0026#34;country\u0026#34;, \u0026#34;AE\u0026#34;) .build(); ... } } Here we are using the MessageBuilder class to construct the message with a string payload and two headers inside the send method.\nQueue Identifiers A queue is identified with a URL or physical name. It can also be identified with a logical identifier.\nWe create a queue with a queue name that is unique for the AWS account and region. Amazon SQS assigns each queue an identifier in the form of a queue URL that includes the queue name and other Amazon SQS components.\nWe provide the queue URL whenever we want to perform any action on a queue.\nLet us create an SQS queue named \u0026ldquo;testQueue\u0026rdquo; using the AWS Console as shown here:\nWe can see the URL of the queue as https://sqs.us-east-1.amazonaws.com/\u0026lt;aws account ID\u0026gt;/testQueue. We will be using either the queue name or queue URL as identifiers of our queue in our examples.\nSending a Message We can send messages to an SQS queue using the QueueMessageChannel or QueueMessagingTemplate.\nSending with QueueMessageChannel With the QueueMessageChannel, we first create an instance of this class to represent the SQS queue and then call the send() method for sending the message to the queue:\n@Service public class MessageSender { private static final Logger logger = LoggerFactory.getLogger(MessageSender.class); // Replace XXXXX with AWS account ID.  
private static final String QUEUE_NAME = \u0026#34;https://sqs.us-east-1.amazonaws.com/XXXXXXX/testQueue\u0026#34;; private final AmazonSQSAsync amazonSqs; @Autowired public MessageSender(final AmazonSQSAsync amazonSQSAsync) { this.amazonSqs = amazonSQSAsync; } public boolean send(final String messagePayload) { MessageChannel messageChannel = new QueueMessageChannel(amazonSqs, QUEUE_NAME); Message\u0026lt;String\u0026gt; msg = MessageBuilder.withPayload(messagePayload) .setHeader(\u0026#34;sender\u0026#34;, \u0026#34;app1\u0026#34;) .setHeaderIfAbsent(\u0026#34;country\u0026#34;, \u0026#34;AE\u0026#34;) .build(); long waitTimeoutMillis = 5000; boolean sentStatus = messageChannel.send(msg, waitTimeoutMillis); logger.info(\u0026#34;message sent\u0026#34;); return sentStatus; } } In this code snippet, we first create the QueueMessageChannel with the queue URL. Then we construct the message to be sent with the MessageBuilder class.\nFinally, we invoke the send() method on the MessageChannel by specifying a timeout interval. The send() method is a blocking call, so it is always advisable to set a timeout when calling this method.\nSending with QueueMessagingTemplate The QueueMessagingTemplate contains many convenient methods to send a message. 
The destination can be specified as a QueueMessageChannel object created with a queue URL as in the previous example or the queue name supplied as a primitive string.\nWe create the QueueMessagingTemplate bean in our configuration with an AmazonSQSAsync client, which is available by default in the application context when using the Spring Cloud AWS Messaging Spring Boot starter:\n@Bean public QueueMessagingTemplate queueMessagingTemplate( AmazonSQSAsync amazonSQSAsync) { return new QueueMessagingTemplate(amazonSQSAsync); } Then, we can send the messages using the convertAndSend() method:\n@Slf4j @Service public class MessageSenderWithTemplate { private static final String TEST_QUEUE = \u0026#34;testQueue\u0026#34;; @Autowired private QueueMessagingTemplate messagingTemplate; public void send(final String queueName, final String messagePayload) { Message\u0026lt;String\u0026gt; msg = MessageBuilder.withPayload(messagePayload) .setHeader(\u0026#34;sender\u0026#34;, \u0026#34;app1\u0026#34;) .setHeaderIfAbsent(\u0026#34;country\u0026#34;, \u0026#34;AE\u0026#34;) .build(); messagingTemplate.convertAndSend(TEST_QUEUE, msg); } } In this example, we first create a message with the MessageBuilder class, similar to our previous example, and use the convertAndSend() method to send the message to the queue.\nSending a Message to a FIFO Queue For sending a message to a FIFO Queue, we need to add two fields: messageGroupId and messageDeduplicationId in the header like in the example below:\n@Slf4j @Service public class MessageSenderWithTemplate { private static final String TEST_QUEUE = \u0026#34;testQueue\u0026#34;; @Autowired private QueueMessagingTemplate messagingTemplate; public void sendToFifoQueue( final String messagePayload, final String messageGroupID, final String messageDedupID) { Message\u0026lt;String\u0026gt; msg = MessageBuilder.withPayload(messagePayload) .setHeader(\u0026#34;message-group-id\u0026#34;, messageGroupID) 
.setHeader(\u0026#34;message-deduplication-id\u0026#34;, messageDedupID) .build(); messagingTemplate.convertAndSend(TEST_QUEUE, msg); log.info(\u0026#34;message sent\u0026#34;); } } Here we are using the MessageBuilder class to add the two header fields required for creating a message for sending to a FIFO queue.\nReceiving a Message Let us now look at how we can receive messages from an SQS queue. To receive a message, the client has to call the SQS API to check for new messages (i.e. the messages are not pushed from the server to the client). There are two ways to poll for new messages from SQS:\n Short Polling: Short polling returns immediately, even if the message queue being polled is empty. For short polling, we call the receive() method of QueueMessagingTemplate in an infinite loop that regularly polls the queue. The receive() method returns empty if there are no messages in the queue. Long Polling: Long polling does not return a response until a message arrives in the message queue, or the long poll times out. We do this with the @SqsListener annotation.  In most cases, Amazon SQS long polling is preferable to short polling since long polling requests let the queue consumers receive messages as soon as they arrive in the queue while reducing the number of empty responses returned (and thus the costs of SQS, since they are calculated by API calls).\nWe annotate a method with the @SqsListener annotation for subscribing to a queue. 
The @SqsListener annotation adds polling behavior to the method and also provides support for serializing and converting the received message to a Java object as shown here:\n@Slf4j @Service public class MessageReceiver { @SqsListener(value = \u0026#34;testQueue\u0026#34;, deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS) public void receiveMessage(String message, @Header(\u0026#34;SenderId\u0026#34;) String senderId) { log.info(\u0026#34;message received {} {}\u0026#34;, senderId, message); } } In this example, the SQS message payload is extracted and passed to our receiveMessage() method. We have also defined the deletion policy ON_SUCCESS for acknowledging (deleting) the message when no exception is thrown. A deletion policy is used to define in which cases a message must be deleted after the listener method is called. For an overview of the available deletion policies, refer to the Java documentation of SqsMessageDeletionPolicy.\nWorking With Object Messages So far we have used payloads of type String. We can also send object payloads by serializing them to a JSON string. We do this by using the MessageConverter interface which defines a simple contract for conversion between Java objects and SQS messages. 
The default implementation is SimpleMessageConverter which unwraps the message payload if it matches the target type.\nLet us define another SQS queue named testObjectQueue and define a model to represent a signup event:\n@Data public class SignupEvent { private String signupTime; private String userName; private String email; } Now let us change our receiveMessage() method to receive the SignupEvent :\n@Slf4j @Service public class MessageReceiver { @SqsListener(value = \u0026#34;testObjectQueue\u0026#34;, deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS) public void receiveMessage(final SignupEvent message, @Header(\u0026#34;SenderId\u0026#34;) String senderId) { log.info(\u0026#34;message received {} {}\u0026#34;,senderId,message); } } Next, we will send a JSON message matching the structure of our objects from the SQS console:\nIf we run our Spring Boot application, we will get an exception of the following form in the log:\n.. i.a.c.m.listener.QueueMessageHandler : An exception occurred while invoking the handler method org.springframework.messaging.converter.MessageConversionException: / Cannot convert from [java.lang.String] to [io.pratik.springcloudsqs.models.SignupEvent] / for GenericMessage / [payload={\u0026#34;signupTime\u0026#34;:\u0026#34;20/04/2021 11:40 AM\u0026#34;, \u0026#34;userName\u0026#34;:\u0026#34;jackie\u0026#34;,/ \u0026#34;email\u0026#34;:\u0026#34;jackie.chan@gmail.com\u0026#34;}, headers={ ... ... We can see a MessageConversionException here since the default converter SimpleMessageConverter can only convert between String and SQS messages. 
For complex objects like SignupEvent in our example, a custom converter needs to be configured like this:\n@Configuration public class CustomSqsConfiguration { @Bean public QueueMessagingTemplate queueMessagingTemplate( AmazonSQSAsync amazonSQSAsync) { return new QueueMessagingTemplate(amazonSQSAsync); } @Bean public QueueMessageHandlerFactory queueMessageHandlerFactory( final ObjectMapper mapper, final AmazonSQSAsync amazonSQSAsync) { final QueueMessageHandlerFactory queueHandlerFactory = new QueueMessageHandlerFactory(); queueHandlerFactory.setAmazonSqs(amazonSQSAsync); queueHandlerFactory.setArgumentResolvers(Collections.singletonList( new PayloadMethodArgumentResolver(jackson2MessageConverter(mapper)) )); return queueHandlerFactory; } private MessageConverter jackson2MessageConverter(final ObjectMapper mapper) { final MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter(); converter.setObjectMapper(mapper); return converter; } } Here, we have defined a new message converter using our application's default object mapper and then passed it to an instance of QueueMessageHandlerFactory. 
The QueueMessageHandlerFactory allows Spring to use our custom message converter for deserializing the messages it receives in its listener method.\nLet us send the same JSON message again using the AWS SQS console.\nWhen we run our application after making this change, we get the following output:\nio.pratik.springcloudsqs.MessageReceiver : message received {\u0026#34;signupTime\u0026#34;:\u0026#34;20/04/2021 11:40 AM\u0026#34;, \u0026#34;userName\u0026#34;:\u0026#34;jackie\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;jackie.chan@gmail.com\u0026#34;} SignupEvent(signupTime=20/04/2021 11:40 AM, userName=jackie, email=jackie.chan@gmail.com) From the logs, we can see the JSON message deserialized into a SignupEvent object in our receiveMessage() method with the help of the configured custom converter.\nConsuming AWS Event Messages SQS message listeners can also receive events generated by other AWS services or microservices. Messages originating from AWS events do not contain the mime-type header, which is expected by our message converter by default.\nTo make the message conversion more robust in this case, the Jackson message converter needs to be configured with the strictContentTypeMatch property set to false as shown below:\n@Configuration public class CustomSqsConfiguration { ... ... private MessageConverter jackson2MessageConverter( final ObjectMapper mapper) { final MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter(); // set strict content type match to false  // to enable the listener to handle AWS events  converter.setStrictContentTypeMatch(false); converter.setObjectMapper(mapper); return converter; } } Here we have modified our earlier configuration by setting the strictContentTypeMatch property in the MappingJackson2MessageConverter object to false.\nLet us add a listener class for receiving the notification messages sent by an AWS S3 bucket when certain configured events occur in the bucket. 
We can enable certain AWS S3 bucket events to send a notification message to a destination like the SQS queue when the events occur. Before running this example, we will create an SQS queue and S3 bucket and attach a notification event as shown below:\nHere we can see a notification event that will get triggered when an object is uploaded to the S3 bucket. This notification event is configured to send a message to our SQS queue testS3Queue.\nOur class S3EventListener containing the listener method which will receive this event from S3 looks like this:\n@Slf4j @Service public class S3EventListener { @SqsListener(value = \u0026#34;testS3Queue\u0026#34;, deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS) public void receive(S3EventNotification s3EventNotificationRecord) { S3EventNotification.S3Entity s3Entity = s3EventNotificationRecord.getRecords().get(0).getS3(); String objectKey = s3Entity.getObject().getKey(); log.info(\u0026#34;objectKey:: {}\u0026#34;,objectKey); } } When we upload an object to our S3 bucket, the listener method receives this event payload in the S3EventNotification object for further processing.\nConclusion We saw how to use Spring Cloud AWS for the integration of our applications with the AWS SQS service. A summary of the things we covered:\n Message, QueueMessageTemplate, QueueMessageChannel, MessageBuilder are some of the important classes used. SQS messages are built using the MessageBuilder class, where we specify the message payload along with message headers and other message attributes. QueueMessageTemplate and QueueMessageChannel are used to send messages. Applying the @SqsListener annotation to a method enables it to receive SQS messages from a specific SQS queue, sent by other applications. Methods annotated with @SqsListener can take both strings and complex objects. For receiving complex objects, we need to configure a custom converter.  
I hope this will help you to get started with building applications using AWS SQS.\nYou can refer to all the source code used in the article on GitHub.\nCheck Out the Book!  This article gives only a first impression of what you can do with SQS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"May 10, 2021","image":"https://reflectoring.io/images/stock/0035-switchboard-1200x628-branded_hu8b558f13f0313494c9155ce4fc356d65_235224_650x0_resize_q90_box.jpg","permalink":"/spring-cloud-aws-sqs/","title":"Getting Started With AWS SQS and Spring Cloud"},{"categories":["Software Craft","AWS"],"contents":"Amazon Web Services provides many possibilities to secure data in the cloud. In this article, we will have a closer look at how to encrypt different types of data at rest on AWS.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n Why Encryption? When we work with any AWS service, we create data that is stored in some storage of AWS. Why should this data be encrypted?\nCloud providers like AWS have several customers all around the world. When we have our data in the cloud, we want to protect this data from the cloud provider and other customers of this cloud provider. 
We want to be sure that only we can read the data that we store in the cloud.\nWe also want to prevent our data from being read in case it gets into the hands of unauthorized people or applications, whether through intentional or accidental means.\nDue to these reasons, many customers have a concern about unauthorized access to the data stored in plain text in the cloud. Encryption solves this problem of securing data stored in the cloud.\n The primary reason for encrypting data is confidentiality.\n Encryption Basics for Storage We need keys to encrypt data. Keys that we need for encryption are of two types:\n Symmetric keys Asymmetric keys  Symmetric keys are used to encrypt and decrypt data with the same key. This means that whoever encrypts the data has to share the encryption key with anyone who needs to decrypt it.\nAn asymmetric key is actually a key pair that consists of a private key and a public key. Data that is encrypted with the public key can be decrypted only with the private key. This means that if a sender wants to send encrypted data to a receiver, they use the receiver's public key to encrypt the data. The receiver then uses their private key to decrypt the data.\nThe receiver also stores the private key securely, in storage like a protected file system or specialized software or hardware.\nEncryption and decryption with a symmetric key is much faster than with an asymmetric key.\nIt is, however, less secure, since the entities that want to correspond via symmetric encryption must share the encryption key.\nIf the channel used to share the key is compromised, the entire system for sharing secure messages gets broken. 
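The symmetric round trip described above can be sketched with the JDK's built-in crypto APIs. This is a minimal, self-contained illustration; the class name SymmetricDemo is ours, and nothing here talks to AWS:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

public class SymmetricDemo {

    // Encrypts and then decrypts a message with one shared AES key,
    // illustrating that symmetric encryption uses the same key on both sides.
    // The JDK's default AES transformation is used for brevity; real code
    // should prefer an authenticated mode such as AES/GCM/NoPadding.
    public static String roundTrip(String plainText) {
        try {
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey key = keyGen.generateKey();

            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] cipherText = cipher.doFinal(plainText.getBytes(StandardCharsets.UTF_8));

            // The receiver must hold the very same key to decrypt.
            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] decrypted = cipher.doFinal(cipherText);
            return new String(decrypted, StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("secret message")); // prints "secret message"
    }
}
```

The asymmetric case would use KeyPairGenerator instead, with the public key for encryption and the private key for decryption.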
Anyone who obtains the key can exploit this, since they can then encrypt or decrypt all communication between the entities.\nOverview of Data Encryption on AWS As mentioned above, we can use encryption to secure data at rest and in transit.\nLet us have a deeper look at the encryption of data at rest.\nSince we encrypt and decrypt the data in one place rather than in different places, we can use a symmetric key for that. If we encrypt data on AWS storage, we have two approaches:\n Client-side encryption Server-side encryption  In client-side encryption, the data is encrypted outside of the AWS Cloud and then sent to storage. It is stored on AWS storage in encrypted form, but AWS has nothing to do with the encryption.\nWhen the client wants to read data, it has to decrypt the data on the client side after extracting the encrypted data from AWS. We use this approach if we want to use cloud storage, but we don\u0026rsquo;t trust the security service of the cloud provider and want to secure the data on our own.\nIn server-side encryption, AWS takes care of the encryption of the data in its storage. The encryption process is transparent to the client, who writes or reads this data.\nAWS provides several possibilities for server-side encryption on storage.\nIn general, we need to perform three steps to protect our data:\n Get a key for encryption Encrypt data Store the key securely  AWS provides two services for managing encryption keys:\n AWS Key Management Service AWS CloudHSM  Amazon Key Management Service (KMS) Let\u0026rsquo;s look at the AWS KMS service which we can use to manage our encryption keys.\nMost storage services in AWS support encryption and have good integration with AWS KMS for managing the encryption keys to encrypt their data.\nAdvantages of KMS   KMS provides a centralized system for managing our encryption keys.\n  KMS uses hardware security modules (HSM) to protect the confidentiality and integrity of your encryption keys. 
No one, including AWS employees, can retrieve our plaintext keys from the service.\n  Each KMS request can be audited by enabling AWS CloudTrail. The audit logs contain details of the user, time, date, API action, and the key used.\n  We also get the advantages of scalability, durability, and high availability compared to an on-premise key management solution.\n  KMS functions are available as APIs and bundled into SDKs, which makes it possible to integrate with any custom application for key management.\n  Working of KMS It is important to understand CMKs and data keys to understand the working of KMS.\nCustomer Master Key (CMK) KMS maintains a logical representation of the key it manages in the form of a customer master key (CMK). The CMK also contains the key ID, creation date, description, and state of the key. A CMK can be in any one of the states: Enabled, Disabled, PendingImport, PendingDeletion, or Unavailable.\nAWS KMS has three types of CMKs:\n Customer-managed CMK: The customer creates and manages these CMKs and has full control over them. AWS managed CMK: These CMKs are created, managed, and used on our behalf by an AWS service that is integrated with AWS KMS. AWS owned CMK: These are owned and managed by the AWS services for use in multiple AWS accounts. We cannot view or use these CMKs and are not charged any fee for their usage.  Some AWS services support only an AWS-managed CMK. Others use an AWS-owned CMK or offer a choice of CMKs.\nData Key Data keys are the keys that we use to encrypt data and other data encryption keys.\nWe use AWS KMS customer master keys (CMKs) to generate, encrypt, and decrypt data keys.\nThe data key is used outside of KMS, such as when using OpenSSL for client-side encryption with a unique symmetric data key.\nData Key Pair A data key pair is an asymmetric data key consisting of a public key and a private key. 
They are used for client-side encryption and decryption, or for signing and verification of messages outside of AWS KMS.\nThe private key in each data key pair is protected under a symmetric CMK in AWS KMS. Both RSA and elliptic-curve key pair algorithms are supported.\nFor signing, the private key of the data key pair is used to generate a cryptographic signature for a message. Anyone with the corresponding public key can use it to verify the integrity of the message.\nSource of Key Material If we create a CMK, we have two possibilities to get the key:\n KMS generates the key material: We define what kind of key we want to have and KMS creates the key material for us. We get a reference to the key and use it for encryption operations. Bring your own key (BYOK): We create a CMK without key material and then import the key material from outside into the CMK.  Key Rotation Reusing the key for many cryptographic operations is not a good idea. Should the key be stolen, all encrypted data can be decrypted. That\u0026rsquo;s why it is important to rotate the CMK. We can do it manually by creating new CMKs at specific intervals and updating our applications to use the new CMK.\nIn AWS KMS, we can enable automatic CMK rotation. With automatic CMK rotation enabled, a new key is created with every rotation, and all new data keys are encrypted with the new CMK.\nThe old CMK is not deleted and is still used for the decryption of old data keys that were created before the rotation.\nKMS Storage The AWS KMS key store is used as the default storage for keys managed by KMS, but this storage is shared by many customers.\nThis may suffice for most use cases. But some customers or applications may have higher security requirements. For instance, we might have to ensure our keys are isolated on dedicated infrastructure to meet regulatory compliance requirements.\nA custom key store can be configured to address these scenarios. 
The custom key store is associated with an AWS CloudHSM cluster, which is a managed Hardware Security Module (HSM) service set up in our AWS account.\nAn HSM is specialized hardware for cryptographic operations and for storing sensitive material.\nAWS CloudHSM AWS CloudHSM is a managed service providing a hardware security module (HSM) to generate and use our encryption keys on the AWS Cloud.\nBenefits of CloudHSM  CloudHSM protects our keys with exclusive, single-tenant access to tamper-resistant HSM instances in our Virtual Private Cloud (VPC). We can configure AWS KMS to use our AWS CloudHSM cluster as a custom key store instead of the default KMS key store as explained earlier. The HSM provided by AWS CloudHSM is based on open industry standards. This makes it easy to integrate custom applications via standard APIs like PKCS#11, JCE, and CNG libraries, and also to migrate keys to and from other commercial HSM solutions. AWS CloudHSM provides access to HSMs over a secure channel to create users and set HSM policies so that the encryption keys which are generated and used with CloudHSM are accessible only by those HSM users. The AWS customer has more functionality for key management than in KMS. The customer can generate symmetric and asymmetric keys of different lengths, perform encryption with many algorithms, import and export keys, make keys non-exportable, and so on. If we want to encrypt data on storage, it seems to be a very good solution for key management, especially if we have very high security requirements for our encryption.  But CloudHSM does not have good integration with other AWS services like KMS. 
Since AWS has no access to the keys in CloudHSM, it is harder to integrate than KMS, for example to use this solution for the encryption of S3 objects, EFS volumes, or EBS volumes.\nWorking of CloudHSM AWS CloudHSM runs in our VPC, enabling easy integration of HSMs with applications running on our EC2 instances.\nCloudHSM Cluster For using the AWS CloudHSM service, we first create a CloudHSM Cluster that can have multiple HSMs spread across two or more Availability Zones in an AWS region.\nHSMs in a cluster are automatically synchronized and load-balanced. Each HSM appears as a network resource in our VPC. After creating and initializing a CloudHSM Cluster, we can configure a client on an EC2 instance that allows our applications to use the cluster over a secure, authenticated network connection.\nMonitoring CloudHSM monitors the health and network availability of our HSMs. Amazon has no access to the keys. The AWS customer has full control over the key management.\nSecure Access The client software maintains a secure channel to all of the HSMs in the cluster and sends requests on this channel; the HSM performs the operations and returns the results over the secure channel. The client then returns the result to the application through the cryptographic API.\nIn a CloudHSM cluster, the AWS customer has full control over the key management. Amazon has no access to the keys. But Amazon manages and monitors the HSM. AWS takes care of backups, firmware updates, synchronization, etc.\nTools and SDKs We can use command line tools like the CloudHSM Management Utility for user management and the Key Management Utility for key management.\nAWS provides a Client SDK for integrating custom applications with CloudHSM.\nCost Comparison Between KMS and CloudHSM AWS KMS is much cheaper than CloudHSM.\nEvery CMK in AWS KMS currently costs 1.00 USD per month. Also, we get 20,000 cryptographic requests in a month for free. 
If we make more than 20,000 requests in a month, it costs between 0.03 and 12.00 USD per 10,000 requests, depending on the key type.\nCloudHSM costs between 1.40 and 2.00 USD per hour per device, depending on the region. If we have two HSMs in the cluster at a price of 1.50 USD per hour each, we pay 72 USD per day.\nIf we want to use a custom key store in KMS, we have to pay for both.\nEncryption of Storages Now that we know how to manage our encryption keys in AWS, let\u0026rsquo;s go over AWS storage types and look at how we can encrypt the data on these storages.\nAmazon Elastic File System AWS EFS is a serverless file storage service for use with AWS compute services and on-premise servers.\nWhen we create a new file system from the AWS Console, encryption at rest is enabled by default. With encryption enabled, every time we want to write or read data, KMS will perform encrypt or decrypt operations on that data.\nEFS uses customer master keys (CMKs) to encrypt our file system. It uses the AWS managed CMK for Amazon EFS, stored under aws/elasticfilesystem, to encrypt and decrypt the file system metadata. We choose the CMK to encrypt and decrypt file data (actual file contents). This CMK can be one of two types:\n AWS-managed CMK: This is the default CMK aws/elasticfilesystem. We do not pay a monthly fee for it; we pay only for its usage. Customer-managed CMK: With this CMK type, we can configure the key policies and grants for multiple users or services. If we use a customer-managed CMK as our master key for file data encryption and decryption, we can enable key rotation.  It is important to know that we have to decide on encryption while creating an EFS file system. We can set it only at the time of creating the file system. It is not possible to disable or enable the encryption after creation.\nAmazon FSx Amazon FSx is used as Windows storage for Windows servers. Amazon FSx file systems are encrypted at rest with keys managed using AWS KMS. 
Data is encrypted before being written to the file system and decrypted when it is read.\nAmazon FSx uses CMKs to encrypt our file system. We choose the CMK to encrypt and decrypt file systems (both data and metadata).\nSimilar to EFS storage, we can use the default CMK of KMS, which is called aws/fsx, or use a customer-managed CMK.\nAmazon Elastic Block Store Amazon Elastic Block Store (Amazon EBS) is the block-level storage solution of AWS, which is attached as volumes to EC2 instances.\nAmazon EBS encryption uses AWS KMS keys when creating encrypted volumes and snapshots. The whole procedure for encryption is very similar to that of the other storage services. We have a choice between the default AWS-managed key or a customer-managed key.\nUsing EBS, we can create snapshots of the volume. If we encrypt the volume, the snapshots that we create are automatically encrypted.\nIf we create a volume from a snapshot and this snapshot is encrypted, then our new volume will be automatically encrypted as well. If we create a volume from a snapshot and this snapshot is not encrypted, we again have the choice of which key we can use for encryption.\nAmazon S3 Amazon S3 is a highly scalable object storage, which we can use to store and retrieve any amount of data. It is a key-based object store. Objects are organized in buckets, which are resources similar to folders.\nProtecting the Data at Rest We have the following options for protecting data at rest in Amazon S3:\n  Server-side encryption: We request S3 to encrypt our object before saving it on disks in its data centers and then decrypt it when we fetch the objects.\n  Client-side encryption: The complete encryption process is managed on the client side. 
We encrypt the data before uploading the encrypted data to S3.\n  Encryption Options on the Server-Side We have three options for server-side encryption:\n Using S3 managed keys (SSE-S3): Each object is encrypted with a unique key and the encryption key is further encrypted with a master key that is regularly rotated by S3. Using Customer Master Keys (CMKs) stored in AWS KMS (SSE-KMS): It is similar to SSE-S3, but with some additional benefits provided by the AWS KMS service. We get the additional protection of the stored objects from unauthorized access as well as an audit trail of the usage of the CMK for object retrieval. Using Customer-Provided Keys (SSE-C): We manage the encryption keys and S3 manages the encryption when it writes to disks, and the decryption when we fetch our objects. We send the key with the upload request when writing objects to S3.  S3 Bucket Key Among the server-side encryption options, SSE-KMS is the most expensive, particularly for buckets with a large number of objects or objects with a high read frequency, since we pay for every cryptographic request to KMS.\nAn S3 Bucket Key allows us to reduce the costs of cryptographic operations with S3 objects.\nA bucket key is a symmetric key that is created at the bucket level. It is encrypted once by a CMK in KMS and returned to the S3 service. Now, the S3 service can generate data keys for every object and encrypt them with the bucket key outside KMS. This way, the volume of traffic to AWS KMS from the S3 storage service gets reduced, thereby reducing the number of cryptographic operations with KMS.\nModifying the Encryption Options Unlike the other storage services, we can change the encryption option for every object after encryption, for example from SSE-S3 to SSE-KMS.\nWe can also encrypt every S3 object differently during upload using the REST API or an AWS SDK. For example, we can have three files. 
The first file could be encrypted using SSE-S3, the second file using SSE-KMS, and the third with SSE-C.\nConclusion AWS provides many solutions for the protection of the data in the cloud using server-side encryption. The AWS Key Management Service has a very simple interface and good integration with the storage services of AWS.\nAWS CloudHSM provides a solution for more stringent security requirements. It is possible to combine both these services for secure key management.\nStorage services like EFS, FSx, EBS, and S3 can be easily and securely protected with the help of the AWS KMS and CloudHSM services.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"May 7, 2021","image":"https://reflectoring.io/images/stock/0101-keylock-1200x628-branded_hu54aa4efa315910c5671932665107f87d_212538_650x0_resize_q90_box.jpg","permalink":"/securing-data-on-aws/","title":"Securing Data in AWS"},{"categories":["Java"],"contents":"A thread is a basic path of execution in a program. Most of the applications we build today execute in a multi-threaded environment. They might become unresponsive if the thread executing at that point in time is stuck for some reason. In these situations, thread dumps help to narrow down the problem.\nIn this post, we create thread dumps and understand the information they contain to diagnose various runtime errors in applications.\n Example Code This article is accompanied by a working code example on GitHub. What is a Thread Dump? A thread dump provides a snapshot of all the threads in a program executing at a specific instant. 
Some of the threads belong to our Java application being run, while the remaining ones are JVM-internal threads.\nThe state of each thread is followed by a stack trace containing the information about the application’s thread activity that can help us diagnose problems and optimize application and JVM performance.\nFor this reason, a thread dump is a vital tool for analyzing performance degradation (slowness), finding the root cause of an application becoming unresponsive, or diagnosing deadlock situations.\nLifecycle of a Thread For understanding a thread dump, it is essential to know all the states a thread passes through during its lifecycle.\nA thread can assume one of these states:\n  NEW: Initial state of a thread when we create an instance of Thread or Runnable. It remains in this state until the program starts the thread.\n  RUNNABLE: A thread becomes runnable after it is started. A thread in this state is considered to be executing its task.\n  BLOCKED: A thread is in the blocked state when it tries to access an object that is currently used (locked) by some other thread. When the locked object is unlocked and hence available for the thread, the thread moves back to the runnable state.\n  WAITING: A thread transitions to the waiting state while waiting for another thread to perform a task and transitions back to the runnable state only when another thread signals the waiting thread to resume execution.\n  TIMED_WAITING: A thread in the timed waiting state is waiting for a specified interval of time and transitions back to the runnable state when that time interval expires. 
The thread is waiting for another thread to do some work for up to a specified waiting time.\n  TERMINATED (Dead): A runnable thread enters the terminated state after it finishes its task.\n  Generating a Thread Dump We will now generate some thread dumps by running a simple Java program.\nRunning an Example Program We will capture the thread dump of an application that simulates a web server. The main method of our application looks like this:\npublic class App { private static final Logger logger = Logger.getLogger(App.class.getName()); public static void main(String[] args) throws Exception { ServerSocket ssock = new ServerSocket(8080); logger.info(\u0026#34;Server Started. Listening on port 8080\u0026#34;); while (true) { new RequestProcessor(ssock).handleClientRequest(); } } } Here we instantiate a ServerSocket class that listens on port 8080 for incoming client requests and does some processing on the same thread the main() method is working on.\nLet us build this program with Maven and then run this program as a Java executable with the command:\njava -jar target/ServerApp-1.0-SNAPSHOT.jar The Java application now listens for requests on port 8080 and responds with a JSON string on receiving HTTP GET requests on the URL http://localhost:8080/.\nGenerating the Thread Dump We will now use a utility named jcmd to generate a thread dump of the application we started in the previous step. The jcmd utility sends diagnostic command requests to the Java Virtual Machine (JVM).\nFor this, we will first find the process identifier (PID) of the application by running the jps command:\njps -l Running the jps command gives the following output:\n753 target/ServerApp-1.0-SNAPSHOT.jar 754 jdk.jcmd/sun.tools.jps.Jps Each line of the output contains the PID and the name of our class containing the main method. 
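As a side note, from Java 9 onwards a process can also look up its own PID programmatically via the ProcessHandle API. The following snippet is just an illustration (the class name is ours), not part of the original example:

```java
public class PidDemo {
    public static void main(String[] args) {
        // ProcessHandle.current() describes the JVM process we are running in.
        long pid = ProcessHandle.current().pid();
        System.out.println("Running with PID " + pid);
    }
}
```

This prints the same PID that jps would report for this JVM.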
Alternatively, we can find the PID by running ps -a in Unix or Linux systems.\nWe will now generate the thread dump by running the jcmd command:\njcmd 753 Thread.print \u0026gt; threadDump.txt The generated thread dump output is written to the threadDump.txt file. A snippet from the thread dump file is shown here:\n2021-04-18 15:54:38 Full thread dump OpenJDK 64-Bit Server VM (14.0.1+7 mixed mode, sharing): ... \u0026#34;main\u0026#34; #1 prio=5 os_prio=31 cpu=111.41ms elapsed=67.87s tid=0x00007f96fb009000 nid=0x2003 runnable [0x00007000008f0000] java.lang.Thread.State: RUNNABLE at sun.nio.ch.Net.accept(java.base@14.0.1/Native Method) at sun.nio.ch.NioSocketImpl.accept(java.base@14.0.1/NioSocketImpl.java:755) at java.net.ServerSocket.implAccept(java.base@14.0.1/ServerSocket.java:684) at java.net.ServerSocket.platformImplAccept(java.base@14.0.1/ServerSocket.java:650) at java.net.ServerSocket.implAccept(java.base@14.0.1/ServerSocket.java:626) at java.net.ServerSocket.implAccept(java.base@14.0.1/ServerSocket.java:583) at java.net.ServerSocket.accept(java.base@14.0.1/ServerSocket.java:540) at io.pratik.RequestProcessor.handleClientRequest(RequestProcessor.java:32) at io.pratik.App.main(App.java:18) \u0026#34;Reference Handler\u0026#34; #2 daemon prio=10 os_prio=31 cpu=0.10ms elapsed=67.86s tid=0x00007f96fd001000 nid=0x3203 waiting on condition [0x0000700001005000] java.lang.Thread.State: RUNNABLE ... \u0026#34;Finalizer\u0026#34; #3 daemon prio=8 os_prio=31 cpu=0.17ms elapsed=67.86s tid=0x00007f96fd002800 nid=0x3403 in Object.wait() [0x0000700001108000] java.lang.Thread.State: WAITING (on object monitor) ... \u0026#34;Signal Dispatcher\u0026#34; #4 daemon prio=9 os_prio=31 cpu=0.24ms elapsed=67.85s tid=0x00007f96fb0d6800 nid=0xa703 runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE ... ... 
\u0026#34;Common-Cleaner\u0026#34; #12 daemon prio=8 os_prio=31 cpu=0.21ms elapsed=67.84s tid=0x00007f96fd06d800 nid=0x9e03 in Object.wait() [0x0000700001920000] java.lang.Thread.State: TIMED_WAITING (on object monitor) ... \u0026#34;Attach Listener\u0026#34; #14 daemon prio=9 os_prio=31 cpu=1.61ms elapsed=14.58s tid=0x00007f96fc85d800 nid=0x6207 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE ... ... \u0026#34;G1 Young RemSet Sampling\u0026#34; os_prio=31 cpu=11.18ms elapsed=67.87s tid=0x00007f96fb0ab800 nid=0x2f03 runnable \u0026#34;VM Periodic Task Thread\u0026#34; os_prio=31 cpu=56.37ms elapsed=67.84s tid=0x00007f96fc848800 nid=0x6003 waiting on condition ... We can see the main thread is in the RUNNABLE state with a thread id (tid), cpu time, and priority. Each thread information is accompanied by its stack trace. The stack trace of the main thread shows the handleClientRequest() method of the RequestProcessor getting invoked from the main method in the last two lines. 
Apart from the main thread in the RUNNABLE state, we can see some threads in the states WAITING and TIMED_WAITING.\nAnatomy of a Thread Dump Entry Let us now understand the fields present in each thread dump line by looking at an entry from a thread dump of a Kafka broker:\n\u0026#34;main-EventThread\u0026#34; #20 daemon prio=5 os_prio=31 cpu=10.36ms elapsed=90.79s tid=0x00007fa0e021a800 nid=0x6503 waiting on condition [0x0000700003098000] java.lang.Thread.State: WAITING (parking) at jdk.internal.misc.Unsafe.park(java.base@14.0.1/Native Method) - parking to wait for \u0026lt;0x00000007c8103d70\u0026gt; (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) at java.util.concurrent.locks.LockSupport.park(java.base@14.0.1/LockSupport.java:341) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(java.base@14.0.1/AbstractQueuedSynchronizer.java:505) at java.util.concurrent.ForkJoinPool.managedBlock(java.base@14.0.1/ForkJoinPool.java:3137) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@14.0.1/AbstractQueuedSynchronizer.java:1614) at java.util.concurrent.LinkedBlockingQueue.take(java.base@14.0.1/LinkedBlockingQueue.java:435) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) The thread dump entry shown here starts with the name of the thread main-EventThread, which is the 20th thread (indicated by #20) created by the JVM after it started.\nThe daemon keyword after the thread number indicates that this is a daemon thread, which means that it will not prevent the JVM from shutting down if it is the last running thread.\nThen there are less important pieces of metadata about the thread, like the priority, OS priority, thread identifier, and native identifier.\nThe last pieces of information are the most important: the state of the thread and its address in the JVM. 
The thread can be in one of the four states as explained earlier.\nDifferent Ways of Taking a Thread Dump There are various methods of taking a thread dump. We used JDK\u0026rsquo;s jcmd utility in the previous section for taking the thread dumps. Let us look at some of the other methods.\nTaking Thread Dump with Tools Some of the commonly used tools for taking thread dumps are:\n jstack: jstack has been part of the JDK since Java 5 and is widely used for taking thread dumps. We take a thread dump with jstack using the below command:  sudo -u \u0026lt;java-user\u0026gt; java-service jstack -l \u0026lt;pid\u0026gt; In this command, we should replace \u0026lt;java-user\u0026gt; with the id of the user that the Java process is running as.\nUsing the -l option, we can additionally include ownable synchronizers in the heap and locks in the output. However, with the release of JDK 8, Oracle suggests using jcmd for taking thread dumps instead of jstack for enhanced diagnostics and reduced performance overhead.\n  VisualVM: VisualVM is a graphical user interface (GUI) tool that provides detailed runtime information about a Java application. We use this runtime information to monitor, troubleshoot, and profile those applications. It has the additional capability to capture thread dumps from Java processes running on a remote host. From Java 9 onwards, VisualVM is distributed separately from the JDK and can be downloaded from the project\u0026rsquo;s website.\n  JMC: Java Mission Control (JMC) is also a GUI tool to collect and analyze data from Java applications. Like VisualVM, it can also connect to remote Java processes to capture thread dumps.\n  OS utilities: We can use the command kill -3 \u0026lt;pid\u0026gt; on Unix and Ctrl+Break on Windows to generate a thread dump in the console where our Java program is running. The Java process prints the thread dump on the standard output on receiving the signal.\n  Application Performance Monitoring (APM) Tools: A few APM tools provide options to generate thread dumps. 
For example, AppDynamics provides this capability as part of its diagnostic actions, by directing its Java agent to take a thread dump for a specified number of samples with each sample lasting for a specified number of milliseconds. The thread dump is executed on the node monitored by the agent.\n  Taking a Thread Dump Programmatically with JMX ThreadMXBean is the management interface for the thread system in the Java Virtual Machine. A sample program to generate a thread dump is given here:\npublic class ThreadMXBeanSample { private static final Logger logger = Logger.getLogger(ThreadMXBeanSample.class.getName()); public static void main(String[] args) { startThreads(); ThreadMXBean threadMxBean = ManagementFactory.getThreadMXBean(); for (ThreadInfo ti : threadMxBean.dumpAllThreads(true, true)) { logger.info(ti.toString()); } ... logger.info(\u0026#34;Total number of threads created and started : \u0026#34; + threadMxBean.getTotalStartedThreadCount()); } /** * Starts two threads Thread1 and Thread2 and calls their * synchronized methods in the run method resulting in a deadlock. */ private static void startThreads() { final ThreadSample thread1 = new ThreadSample(); final ThreadSample thread2 = new ThreadSample(); Thread t1 = new Thread(\u0026#34;Thread1\u0026#34;) { public void run() { thread1.executeMethod1(thread2); } }; Thread t2 = new Thread(\u0026#34;Thread2\u0026#34;) { @Override public void run() { thread2.executeMethod2(thread1); } }; t1.start(); t2.start(); } } In this snippet, the thread dump is generated by calling the dumpAllThreads() method. Before that, we start two threads, each invoking a synchronized method on the ThreadSample class to provoke a BLOCKED thread state. 
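The ThreadMXBean used above can also detect deadlocks for us: findDeadlockedThreads() returns the IDs of threads that are blocked waiting on each other, or null when there is no deadlock. A minimal, self-contained sketch (the DeadlockDetector class name is our own, not part of the article's example code):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockDetector {

    // Returns information about deadlocked threads,
    // or an empty array if the JVM has no deadlock.
    public static ThreadInfo[] findDeadlocks() {
        ThreadMXBean threadMxBean = ManagementFactory.getThreadMXBean();
        long[] deadlockedThreadIds = threadMxBean.findDeadlockedThreads();
        if (deadlockedThreadIds == null) {
            return new ThreadInfo[0];
        }
        // true, true: also report locked monitors and locked synchronizers
        return threadMxBean.getThreadInfo(deadlockedThreadIds, true, true);
    }

    public static void main(String[] args) {
        for (ThreadInfo threadInfo : findDeadlocks()) {
            System.out.println("Deadlocked: " + threadInfo.getThreadName());
        }
    }
}
```

In a healthy JVM, findDeadlocks() returns an empty array; invoked after the two threads above have deadlocked, it should report Thread1 and Thread2.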
A part of the thread dump is given here:\nApr 20, 2021 8:09:11 AM io.pratik.threadops.ThreadMXBeanSample main INFO: \u0026#34;Thread1\u0026#34; prio=5 Id=14 BLOCKED on io.pratik.threadops.ThreadSample@5b6f7412 owned by \u0026#34;Thread2\u0026#34; Id=15 at app//io.pratik.threadops.ThreadSample.executeMethod2(ThreadSample.java:22) - blocked on io.pratik.threadops.ThreadSample@5b6f7412 at app//io.pratik.threadops.ThreadSample.executeMethod1(ThreadSample.java:17) - locked io.pratik.threadops.ThreadSample@34c45dca at app//io.pratik.threadops.ThreadMXBeanSample$1.run(ThreadMXBeanSample.java:43) Apr 20, 2021 8:09:11 AM io.pratik.threadops.ThreadMXBeanSample main INFO: \u0026#34;Thread2\u0026#34; prio=5 Id=15 BLOCKED on io.pratik.threadops.ThreadSample@34c45dca owned by \u0026#34;Thread1\u0026#34; Id=14 at app//io.pratik.threadops.ThreadSample.executeMethod1(ThreadSample.java:16) - blocked on io.pratik.threadops.ThreadSample@34c45dca at app//io.pratik.threadops.ThreadSample.executeMethod2(ThreadSample.java:23) - locked io.pratik.threadops.ThreadSample@5b6f7412 at app//io.pratik.threadops.ThreadMXBeanSample$2.run(ThreadMXBeanSample.java:50) We can see the two threads Thread1 and Thread2 in the BLOCKED state. If we follow the stack trace of Thread1, we can see that a ThreadSample object is locked at method executeMethod1 and blocked at executeMethod2.\nAnalyzing Thread Dumps FastThread is one of the available tools for analyzing thread dumps.\nLet us upload our thread dump file generated from a Kafka broker to the FastThread tool.\nFastThread generates a report from the thread dump which is much easier to understand compared to the raw file. Let us look at some of the useful sections of the report:\n Threads with identical stack trace: This section of the report shows information when several threads in a thread dump are working on one single method. This is indicative of contention on external resources like databases or APIs, or of infinite loops. 
That particular method needs to be analyzed to find the root cause.   Most used methods: By taking multiple consecutive thread dumps in a sequence, we can get an overview of the parts of our Java application that are used the most.   CPU consuming threads: The report lists all threads which need to be analyzed for high CPU consumption.   Blocking threads: Blocking threads that are responsible for making an application unresponsive are listed under this section. Deadlocks: This section contains threads that are causing a deadlock. The deadlock section of the previous example is shown here:   Exceptions: Thread dumps contain exceptions and errors in the thread\u0026rsquo;s stack trace. These should be investigated to look for the root cause of a problem. Flame Graph: A flame graph condenses all the information from the thread dump into one single compact graph. It helps to identify hot code paths for effective debugging/troubleshooting. The flame graph of our previous program for causing a deadlock is shown here:  We can see that the flame graph was searched for classes in the threadops package, with the search results shown in pink. The number of threads of that class is displayed on hovering over the cell. Another flame graph of a Kafka broker is given here:\nIBM TMDA, Samurai, and Spotify\u0026rsquo;s thread dump analyzer are some of the other tools for analyzing thread dumps.\nManual analysis of raw thread dump files is always an option but is often tedious and time-consuming due to its verbose nature. Irrespective of the method used to analyze thread dumps, the results of the analysis can be used to diagnose a wide range of problems common in live systems.\nConclusion In this post, we looked at the different lifecycle states of a Java thread and described thread dumps as a snapshot of thread states at a particular instant. 
We then ran a simple Java application to simulate a web server and took its thread dump with the jcmd tool.\nAfter that, we introduced tools to analyze thread dumps and ended with some use cases and best practices of using thread dumps. A thread dump is often used in combination with heap dumps and GC logs to diagnose Java applications.\nI hope this will enable you to use thread dumps for the use cases described here and also find other areas where they can be put to use, like automation with CI/CD.\nYou can refer to all the source code used in the article on GitHub.\n","date":"May 5, 2021","image":"https://reflectoring.io/images/stock/0101-threads-1200x628-branded_hu7e7cee9d1f4733c6749e9db8df0b8596_375667_650x0_resize_q90_box.jpg","permalink":"/analyzing-thread-dumps/","title":"Creating and Analyzing Thread Dumps"},{"categories":["AWS","Java"],"contents":"In the article \u0026ldquo;Getting Started with AWS CDK\u0026rdquo;, we have already deployed a Spring Boot application to AWS with the CDK. We used a pre-configured \u0026ldquo;black box\u0026rdquo; construct named SpringBootApplicationStack, passed in a few parameters, and wrapped it in a CDK app to deploy it with the CDK CLI.\nIn this article, we want to go a level deeper and answer the following questions:\n How can we create reusable CDK constructs? How do we integrate such reusable constructs in our CDK apps? How can we design an easy-to-maintain CDK project?  On the way, we\u0026rsquo;ll discuss some best practices that helped us manage the complexities of CDK.\nLet\u0026rsquo;s dive in!\nCheck Out the Book!  
This article is a self-sufficient sample chapter from the book Stratospheric - From Zero to Production with Spring Boot and AWS.\nIf you want to learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check it out!\n The Big Picture The basic goal for this chapter is still the same as in the article \u0026ldquo;Getting Started with AWS CDK\u0026rdquo;: we want to deploy a simple \u0026ldquo;Hello World\u0026rdquo; Spring Boot application (in a Docker image) into a public subnet in our own virtual private cloud (VPC). This time, however, we want to do it with reusable CDK constructs and we\u0026rsquo;re adding some more requirements:\nThe image above shows what we want to achieve. Each box is a CloudFormation resource (or a set of CloudFormation resources) that we want to deploy. This is a high-level view. So, there are actually more resources involved but let\u0026rsquo;s not worry about that, yet. Each color corresponds to a different CloudFormation stack. Let\u0026rsquo;s go through each of the stacks one by one.\nThe Docker Repository stack creates - you guessed it - a Docker repository for our application\u0026rsquo;s Docker images. The underlying AWS service we\u0026rsquo;re using here is ECR - Elastic Container Registry. We can later use this Docker repository to publish new versions of our application.\nThe Network stack deploys a VPC (Virtual Private Cloud) with a public subnet and an isolated (private) subnet. The public subnet contains an Application Load Balancer (ALB) that forwards incoming traffic to an ECS (Elastic Container Service) Cluster - the runtime of our application. The isolated subnet is not accessible from the outside and is designed to secure internal resources such as our database.\nThe Service stack contains an ECS service and an ECS task. 
Remember that an ECS task is basically a Docker image with a few additional configurations, and an ECS service wraps one or more of such tasks. In our case, we\u0026rsquo;ll have exactly one task because we only have one application. In an environment with multiple applications, like in a microservice environment, we might want to deploy many ECS tasks into the same ECS service - one for each application. ECS (in its Fargate flavor) takes care of spinning up EC2 compute instances for hosting the configured Docker image(s). It even handles automatic scaling if we want it to.\nECS will pull the Docker image that we want to deploy as a task directly from our Docker repository.\nNote that we\u0026rsquo;ll deploy the Network stack and the Service stack twice: once for a staging environment and once for a production environment. This is where we take advantage of infrastructure-as-code: we will re-use the same CloudFormation stacks to create multiple environments. We\u0026rsquo;ll use the staging environment for tests before we deploy changes to the production environment.\nOn the other hand, we\u0026rsquo;ll deploy the Docker repository stack only once. It will serve Docker images to both the staging and production environments. Once we\u0026rsquo;ve tested a Docker image of our application in staging we want to deploy exactly the same Docker image to production, so we don\u0026rsquo;t need a separate Docker repository for each environment. If we had more than one application, though, we would probably want to create a Docker repository for each application to keep the Docker images cleanly separated. In that case, we would re-use our Docker repository stack and deploy it once for each application.\nThat\u0026rsquo;s the high-level view of what we\u0026rsquo;re going to do with CDK in this article. 
Let\u0026rsquo;s take a look at how we can build each of those three stacks with CDK in a manageable and maintainable way.\nWe\u0026rsquo;ll walk through each of the stacks and discuss how we implemented them with reusable CDK constructs.\nEach stack lives in its own CDK app. While discussing each stack, we\u0026rsquo;ll point out concepts that we applied when developing the CDK constructs and apps. These concepts helped us manage the complexity of CDK, and hopefully they will help you with your endeavors, too.\nHaving said that, please don\u0026rsquo;t take those concepts as a silver bullet - different circumstances will require different concepts. We\u0026rsquo;ll discuss each of these concepts in its own section so they don\u0026rsquo;t get lost in a wall of text.\nWorking with CDK Before we get our hands dirty with CDK, though, some words about working with CDK.\nBuilding hand-rolled stacks with CDK requires a lot of time, especially when you\u0026rsquo;re not yet familiar with the CloudFormation resources that you want to use. Tweaking the configuration parameters of those resources and then testing them is a lot of effort, because you have to deploy the stack each time to test it.\nAlso, CDK and CloudFormation will spout error messages at you every chance they get. Especially with the Java version, you will run into strange errors every once in a while. These errors are hard to debug because the Java code uses a JavaScript engine (JSii) for generating the CloudFormation files. Its stack traces often come from somewhere deep in that JavaScript engine, with little to no information about what went wrong.\nAnother common source of confusion is the distinction between \u0026ldquo;synthesis time\u0026rdquo; errors (errors that happen during the creation of the CloudFormation files) and \u0026ldquo;deploy time\u0026rdquo; errors (errors that happen while CDK is calling the CloudFormation API to deploy a stack). 
If one resource in a stack references an attribute of another resource, this attribute will be just a placeholder during synthesis time and will be evaluated to the real value during deployment time. Sometimes, it can be surprising that a value is not available at synthesis time.\nCDK was originally written in TypeScript and then ported to other languages (e.g. C#, Python, and of course Java). This means that the Java CDK does not yet feel like a first-class citizen within the CDK ecosystem. There are not as many construct libraries around and it has some teething problems that the original TypeScript variant doesn\u0026rsquo;t have.\nHaving listed all those seemingly off-putting properties of the Java CDK, not all is bad. The community on GitHub is very active and there has been a solution or workaround for any problem we\u0026rsquo;ve encountered so far. The investment of time will surely pay off once you have built constructs that many teams in your company can use to quickly deploy their applications to AWS.\nNow, finally, let\u0026rsquo;s get our hands dirty on building CDK apps!\nThe Docker Repository CDK App We\u0026rsquo;ll start with the simplest stack - the Docker Repository stack. This stack will only deploy a single CloudFormation resource, namely an ECR repository.\nYou can find the code for the DockerRepositoryApp on GitHub. 
Here it is in its entirety:\npublic class DockerRepositoryApp { public static void main(final String[] args) { App app = new App(); String accountId = (String) app .getNode() .tryGetContext(\u0026#34;accountId\u0026#34;); requireNonEmpty(accountId, \u0026#34;accountId\u0026#34;); String region = (String) app .getNode() .tryGetContext(\u0026#34;region\u0026#34;); requireNonEmpty(region, \u0026#34;region\u0026#34;); String applicationName = (String) app .getNode() .tryGetContext(\u0026#34;applicationName\u0026#34;); requireNonEmpty(applicationName, \u0026#34;applicationName\u0026#34;); Environment awsEnvironment = makeEnv(accountId, region); Stack dockerRepositoryStack = new Stack( app, \u0026#34;DockerRepositoryStack\u0026#34;, StackProps.builder() .stackName(applicationName + \u0026#34;-DockerRepository\u0026#34;) .env(awsEnvironment) .build()); DockerRepository dockerRepository = new DockerRepository( dockerRepositoryStack, \u0026#34;DockerRepository\u0026#34;, awsEnvironment, new DockerRepositoryInputParameters(applicationName, accountId)); app.synth(); } static Environment makeEnv(String accountId, String region) { return Environment.builder() .account(accountId) .region(region) .build(); } } We\u0026rsquo;ll pick it apart step by step in the upcoming sections. It might be a good idea to open the code in your browser to have it handy while reading on.\nParameterizing Account ID and Region The first concept we\u0026rsquo;re applying is to always pass in an account ID and region.\nWe can pass parameters into a CDK app with the -c command-line parameter. 
In the app, we read the parameters accountId and region like this:\nString accountId = (String) app .getNode() .tryGetContext(\u0026#34;accountId\u0026#34;); String region = (String) app .getNode() .tryGetContext(\u0026#34;region\u0026#34;); We\u0026rsquo;re using these parameters to create an Environment object:\nstatic Environment makeEnv(String accountId, String region) { return Environment.builder() .account(accountId) .region(region) .build(); } Then, we pass this Environment object into the stack we create via the env() method on the builder.\nIt\u0026rsquo;s not mandatory to explicitly define the environment of our CDK stack. If we don\u0026rsquo;t define an environment, the stack will be deployed to the account and region configured in our local AWS CLI via aws configure. Whatever we typed in there as the account and region would then be used.\nUsing the default account and region depending on our local configuration state is not desirable. We want to be able to deploy a stack from any machine (including CI servers) into any account and any region, so we always parameterize them.\nSanity Checking Input Parameters It should come as no surprise that we strongly recommend validating all input parameters. There are few things more frustrating than deploying a stack only to have CloudFormation complain 5 minutes into the deployment that something is missing.\nIn our code, we add a simple requireNonEmpty() check to all parameters:\nString accountId = (String) app.getNode().tryGetContext(\u0026#34;accountId\u0026#34;); requireNonEmpty(accountId, \u0026#34;accountId\u0026#34;); The method requireNonEmpty() throws an exception with a helpful message if the parameter is null or an empty string.\nThat\u0026rsquo;s enough to catch a whole class of errors early on. For most parameters this simple validation will be enough. 
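The article doesn't show the body of requireNonEmpty(). A minimal sketch of such a helper, assuming a plain IllegalArgumentException with a descriptive message (class name and wording are our own), could look like this:

```java
public class Validations {

    // Fail-fast check in the spirit of requireNonEmpty() described above:
    // complain immediately instead of letting CloudFormation fail mid-deployment.
    public static void requireNonEmpty(String value, String parameterName) {
        if (value == null || value.trim().isEmpty()) {
            throw new IllegalArgumentException(
                "context variable '" + parameterName + "' must not be null or empty");
        }
    }

    public static void main(String[] args) {
        requireNonEmpty("123456789012", "accountId"); // passes silently
        try {
            requireNonEmpty("", "region");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Including the parameter name in the message tells us immediately which -c argument we forgot to pass.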
We don\u0026rsquo;t want to do heavy validations like checking if an account or a region really exists, because CloudFormation is eager to do it for us.\nOne Stack per App Another concept we\u0026rsquo;re advocating is that of a single stack per CDK app.\nTechnically, CDK allows us to add as many stacks as we want to a CDK app. When interacting with the CDK app we could then choose which stacks to deploy or destroy by providing a matching filter:\ncdk deploy Stack1 cdk deploy Stack2 cdk deploy Stack* cdk deploy * Assuming the CDK app contains many stacks, the first two commands would deploy exactly one stack. The third command would deploy all stacks with the prefix \u0026ldquo;Stack\u0026rdquo;, and the last command would deploy all stacks.\nThere is a big drawback with this approach, however. CDK will create the CloudFormation files for all stacks, even if we want to deploy a single stack only. This means that we have to provide the input parameters for all stacks, even if we only want to interact with a single stack.\nDifferent stacks will most probably require different input parameters, so we\u0026rsquo;d have to provide parameters for a stack that we don\u0026rsquo;t care about at the moment!\nIt might make sense to group certain strongly coupled stacks into the same CDK app, but in general, we want our stacks to be loosely coupled (if at all). So, we recommend wrapping each stack into its own CDK app in order to decouple them.\nIn the case of our DockerRepositoryApp, we\u0026rsquo;re creating exactly one stack:\nStack dockerRepositoryStack = new Stack( app, \u0026#34;DockerRepositoryStack\u0026#34;, StackProps.builder() .stackName(applicationName + \u0026#34;-DockerRepository\u0026#34;) .env(awsEnvironment) .build()); One input parameter to the app is the applicationName, i.e. the name of the application for which we want to create a Docker repository. 
We\u0026rsquo;re using the applicationName to prefix the name of the stack, so we can identify the stack quickly in CloudFormation.\nThe DockerRepository Construct Let\u0026rsquo;s have a look at the DockerRepository construct, now. This construct is the heart of the DockerRepositoryApp:\nDockerRepository dockerRepository = new DockerRepository( dockerRepositoryStack, \u0026#34;DockerRepository\u0026#34;, awsEnvironment, new DockerRepositoryInputParameters(applicationName, accountId)); DockerRepository is another of the constructs from our constructs library.\nWe\u0026rsquo;re passing in the previously created dockerRepositoryStack as the scope argument, so that the construct will be added to that stack.\nThe DockerRepository construct expects an object of type DockerRepositoryInputParameters as a parameter, which bundles all input parameters the construct needs into a single object. We use this approach for all constructs in our library because we don\u0026rsquo;t want to handle long argument lists and make it very explicit what parameters need to go into a specific construct.\nLet\u0026rsquo;s take a look at the code of the construct itself:\npublic class DockerRepository extends Construct { private final IRepository ecrRepository; public DockerRepository( final Construct scope, final String id, final Environment awsEnvironment, final DockerRepositoryInputParameters dockerRepositoryInputParameters) { super(scope, id); this.ecrRepository = Repository.Builder.create(this, \u0026#34;ecrRepository\u0026#34;) .repositoryName(dockerRepositoryInputParameters.dockerRepositoryName) .lifecycleRules(singletonList(LifecycleRule.builder() .rulePriority(1) .maxImageCount(dockerRepositoryInputParameters.maxImageCount) .build())) .build(); // grant pull and push to all users of the account  ecrRepository.grantPullPush( new AccountPrincipal(dockerRepositoryInputParameters.accountId)); } public IRepository getEcrRepository() { return ecrRepository; } } DockerRepository extends 
Construct, which makes it a custom construct. The main responsibility of this construct is to create an ECR repository with Repository.Builder.create() and pass in some of the parameters that we previously collected in the DockerRepositoryInputParameters.\nRepository is a level 2 construct, meaning that it doesn\u0026rsquo;t directly expose the underlying CloudFormation attributes, but instead offers an abstraction over them for convenience. One such convenience is the method grantPullPush(), which we use to grant all users of our AWS account access to pushing and pulling Docker images to and from the repository, respectively.\nIn essence, our custom DockerRepository construct is just a glorified wrapper around the CDK\u0026rsquo;s Repository construct with the added responsibility of taking care of permissions. It\u0026rsquo;s a bit over-engineered for the purpose, but it\u0026rsquo;s a good candidate for introducing the structure of the constructs in our cdk-constructs library.\nWrapping CDK Commands with NPM With the above CDK app we can now deploy a Docker repository with this command using the CDK CLI:\ncdk deploy \\ -c accountId=... \\ -c region=... \\ -c applicationName=... That will work as long as we have a single CDK app, but as you might suspect by now, we\u0026rsquo;re going to build multiple CDK apps - one for each stack. As soon as there is more than one app on the classpath, CDK will complain because it doesn\u0026rsquo;t know which of those apps to start.\nTo work around this problem, we use the --app parameter:\ncdk deploy \\ --app \u0026#34;./mvnw -e -q compile exec:java \\ -Dexec.mainClass=dev.stratospheric.todoapp.cdk.DockerRepositoryApp\u0026#34; \\ -c accountId=... \\ -c region=... \\ -c applicationName=... With the --app parameter, we can define the executable that CDK should call to execute the CDK app. 
By default, CDK calls mvn -e -q compile exec:java to run an app (this default is configured in cdk.json, as discussed in \u0026ldquo;Getting Started with AWS CDK\u0026rdquo;).\nSince we have more than one CDK app on the classpath, we need to tell Maven which app to execute, so we add the exec.mainClass system property and point it to our DockerRepositoryApp.\nNow we\u0026rsquo;ve solved the problem of having more than one CDK app but we don\u0026rsquo;t want to type all that into the command line every time we want to test a deployment, do we?\nTo make it a bit more convenient to execute a command with many arguments, most of which are static, we can make use of NPM. We create a package.json file that contains a script for each command we want to run:\n{ \u0026#34;name\u0026#34;: \u0026#34;stratospheric-cdk\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;0.1.0\u0026#34;, \u0026#34;private\u0026#34;: true, \u0026#34;scripts\u0026#34;: { \u0026#34;repository:deploy\u0026#34;: \u0026#34;cdk deploy --app ...\u0026#34;, \u0026#34;repository:destroy\u0026#34;: \u0026#34;cdk destroy --app ...\u0026#34; }, \u0026#34;devDependencies\u0026#34;: { \u0026#34;aws-cdk\u0026#34;: \u0026#34;1.79.0\u0026#34; } } Once we\u0026rsquo;ve run npm install to install the CDK dependency (and its transitive dependencies, for that matter), we can deploy our Docker repository stack with a simple npm run repository:deploy. We can hardcode most of the parameters for each command as part of the package.json file. Should the need arise, we can override a parameter in the command line with:\nnpm run repository:deploy -- -c applicationName=... Arguments after the -- will override any arguments defined in the package.json script.\nWith this package.json file, we now have a central location where we can look up the commands we have at our disposal for deploying or destroying CloudFormation stacks. Moreover, we don\u0026rsquo;t have to type a lot to execute one of the commands. 
We\u0026rsquo;ll later add more commands to this file. You can have a peek at the complete file with all three stacks on GitHub.\nThe Network CDK App The next stack we\u0026rsquo;re going to look at is the Network stack. The CDK app containing that stack is the NetworkApp. You can find its code on GitHub:\npublic class NetworkApp { public static void main(final String[] args) { App app = new App(); String environmentName = (String) app .getNode() .tryGetContext(\u0026#34;environmentName\u0026#34;); requireNonEmpty(environmentName, \u0026#34;environmentName\u0026#34;); String accountId = (String) app .getNode() .tryGetContext(\u0026#34;accountId\u0026#34;); requireNonEmpty(accountId, \u0026#34;accountId\u0026#34;); String region = (String) app .getNode() .tryGetContext(\u0026#34;region\u0026#34;); requireNonEmpty(region, \u0026#34;region\u0026#34;); String sslCertificateArn = (String) app .getNode() .tryGetContext(\u0026#34;sslCertificateArn\u0026#34;); requireNonEmpty(sslCertificateArn, \u0026#34;sslCertificateArn\u0026#34;); Environment awsEnvironment = makeEnv(accountId, region); Stack networkStack = new Stack( app, \u0026#34;NetworkStack\u0026#34;, StackProps.builder() .stackName(environmentName + \u0026#34;-Network\u0026#34;) .env(awsEnvironment) .build()); Network network = new Network( networkStack, \u0026#34;Network\u0026#34;, awsEnvironment, environmentName, new Network.NetworkInputParameters(sslCertificateArn)); app.synth(); } static Environment makeEnv(String account, String region) { return Environment.builder() .account(account) .region(region) .build(); } } It\u0026rsquo;s built in the same pattern as the DockerRepositoryApp. 
First, we have some input parameters, then we create a stack, and finally, we add a Network construct to that stack.\nLet\u0026rsquo;s explore this app in a bit more detail.\nManaging Different Environments The first difference from the DockerRepositoryApp is that we now expect an environmentName as an input parameter.\nRemember that one of our requirements is the ability to deploy our application into different environments like staging or production. We introduced the environmentName parameter for precisely that purpose.\nThe environment name can be an arbitrary string. We use it in the stackName() method to prefix the name of the stack. Later, we\u0026rsquo;ll see that we use it within the Network construct as well to prefix the names of some other resources. This separates the stack and the other resources from those deployed in another environment.\nOnce we\u0026rsquo;ve deployed the app with, say, the environment name staging, we can deploy it again with the environment name prod and a new stack will be deployed. If we use the same environment name CDK will recognize that a stack with the same name has already been deployed and update it instead of trying to create a new one.\nWith this simple parameter, we now have the power to deploy multiple networks that are completely isolated from each other.\nThe Network Construct Let\u0026rsquo;s take a look into the Network construct. This is another construct from our construct library, and you can find the full code on GitHub. 
Here\u0026rsquo;s an excerpt:\npublic class Network extends Construct { // fields omitted  public Network( final Construct scope, final String id, final Environment environment, final String environmentName, final NetworkInputParameters networkInputParameters) { super(scope, id); this.environmentName = environmentName; this.vpc = createVpc(environmentName); this.ecsCluster = Cluster.Builder.create(this, \u0026#34;cluster\u0026#34;) .vpc(this.vpc) .clusterName(prefixWithEnvironmentName(\u0026#34;ecsCluster\u0026#34;)) .build(); createLoadBalancer(vpc, networkInputParameters.getSslCertificateArn()); createOutputParameters(); } // other methods omitted  } It creates a VPC and an ECS cluster to later host our application with. Additionally, we\u0026rsquo;re now creating a load balancer and connecting it to the ECS cluster. This load balancer will distribute requests between multiple nodes of our application.\nThere are about 100 lines of code hidden in the createVpc() and createLoadBalancer() methods that create level 2 constructs and connect them with each other. That\u0026rsquo;s way better than a couple of hundred lines of YAML code, don\u0026rsquo;t you think?\nWe won\u0026rsquo;t go into the details of this code, however, because it\u0026rsquo;s best looked up in the CDK and CloudFormation docs to understand which resources to use and how to use them. If you\u0026rsquo;re interested, feel free to browse the code of the Network construct on GitHub and open up the CDK docs in a second browser window to read up on each of the resources. If the CDK docs don\u0026rsquo;t go deep enough you can always search for the respective resource in the CloudFormation docs.\nSharing Output Parameters via SSM We are, however, going to investigate the method createOutputParameters() called in the last line of the constructor: What\u0026rsquo;s that method doing?\nOur NetworkApp creates a network in which we can later place our application. 
Other stacks - such as the Service stack, which we\u0026rsquo;re going to look at next - will need to know some parameters from that network, so they can connect to it. The Service stack will need to know into which VPC to put its resources, to which load balancer to connect, and into which ECS cluster to deploy the Docker container, for example.\nThe question is: how does the Service stack get these parameters? We could, of course, look up these parameters by hand after deploying the Network stack, and then pass them manually as input parameters when we deploy the Service stack. That would require manual intervention, though, which we\u0026rsquo;re trying to avoid.\nWe could automate it by using the AWS CLI to get those parameters after the Network stack is deployed, but that would require lengthy and brittle shell scripts.\nWe opted for a more elegant solution that is easier to maintain and more flexible: When deploying the Network stack, we store any parameters that other stacks need in the SSM parameter store.\nAnd that\u0026rsquo;s what the method createOutputParameters() is doing. For each parameter that we want to expose, it creates a StringParameter construct with the parameter value:\nprivate void createOutputParameters(){ StringParameter vpcId=StringParameter.Builder.create(this,\u0026#34;vpcId\u0026#34;) .parameterName(createParameterName(environmentName,PARAMETER_VPC_ID)) .stringValue(this.vpc.getVpcId()) .build(); // more parameters } An important detail is that the method createParameterName() prefixes the parameter name with the environment name to make it unique, even when the stack is deployed into multiple environments at the same time:\nprivate static String createParameterName( String environmentName, String parameterName) { return environmentName + \u0026#34;-Network-\u0026#34; + parameterName; } A sample parameter name would be staging-Network-vpcId. 
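To see the isolation this prefix buys us, here\u0026rsquo;s a minimal, self-contained sketch of the naming scheme. A plain in-memory Map stands in for the real SSM parameter store (that substitution, and the sample VPC IDs, are assumptions for illustration; the real code uses the StringParameter construct):

```java
import java.util.HashMap;
import java.util.Map;

public class NetworkParameterNames {

    // Same scheme as createParameterName() above:
    // <environmentName>-Network-<parameterName>
    static String createParameterName(String environmentName, String parameterName) {
        return environmentName + "-Network-" + parameterName;
    }

    public static void main(String[] args) {
        // A plain map stands in for the SSM parameter store.
        Map<String, String> parameterStore = new HashMap<>();

        // Deploying the Network stack into two environments writes
        // two distinct parameters that cannot collide.
        parameterStore.put(createParameterName("staging", "vpcId"), "vpc-staging-123");
        parameterStore.put(createParameterName("prod", "vpcId"), "vpc-prod-456");

        System.out.println(parameterStore.get("staging-Network-vpcId")); // vpc-staging-123
        System.out.println(parameterStore.get("prod-Network-vpcId"));    // vpc-prod-456
    }
}
```

Because the environment name is baked into every key, a staging deployment can never read or overwrite a prod parameter.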
The name makes it clear that this parameter contains the ID of the VPC that we deployed with the Network stack in staging.\nWith this naming pattern, we can read the parameters we need when building other stacks on top of the Network stack.\nTo make it convenient to retrieve the parameters again, we added static methods to the Network construct that retrieve a single parameter from the parameter store:\nprivate static String getVpcIdFromParameterStore( Construct scope, String environmentName) { return StringParameter.fromStringParameterName( scope, PARAMETER_VPC_ID, createParameterName(environmentName, PARAMETER_VPC_ID)) .getStringValue(); } This method uses the same StringParameter construct to read the parameter from the parameter store again. To make sure we\u0026rsquo;re getting the parameter for the right environment, we\u0026rsquo;re passing the environment name into the method.\nFinally, we provide the public method getOutputParametersFromParameterStore() that collects all output parameters of the Network construct and combines them into an object of type NetworkOutputParameters:\npublic static NetworkOutputParameters getOutputParametersFromParameterStore( Construct scope, String environmentName) { return new NetworkOutputParameters( getVpcIdFromParameterStore(scope, environmentName), // ... other parameters  ); } We can then invoke this method from other CDK apps to get all parameters with a single line of code.\nWe pass the stack or construct from which we\u0026rsquo;re calling the method as the scope parameter. 
The other CDK app only has to provide the environmentName parameter and will get all the parameters it needs from the Network construct for this environment.\nThe parameters never leave our CDK apps, which means we don\u0026rsquo;t have to pass them around in scripts or command line parameters!\nIf you have read \u0026ldquo;Getting Started with AWS CloudFormation\u0026rdquo; you might remember the Outputs section in the CloudFormation template and wonder why we\u0026rsquo;re not using the feature of CloudFormation output parameters. With the CfnOutput level 1 construct, CDK actually supports CloudFormation outputs.\nThese outputs, however, are tightly coupled with the stack that creates them, while we want to create output parameters for constructs that can later be composed into a stack. Also, the SSM store serves as a welcome overview of all the parameters that exist across different environments, which makes debugging configuration errors a lot easier.\nAnother reason for using SSM parameters is that we have more control over them. We can name them whatever we want and we can easily access them using the pattern described above. That allows for a convenient programming model.\nThat said, SSM parameters have the downside of incurring additional AWS costs with each API call to the SSM parameter store. In our example application this is negligible but in a big infrastructure it may add up to a sizeable amount.\nIn conclusion, we could have used CloudFormation outputs instead of SSM parameters - as always, it\u0026rsquo;s a game of trade-offs.\nThe Service CDK App Let\u0026rsquo;s look at the final CDK app for now, the ServiceApp. Here\u0026rsquo;s most of the code. 
Again, you can find the complete code on GitHub:\npublic class ServiceApp { public static void main(final String[] args) { App app = new App(); String environmentName = (String) app .getNode() .tryGetContext(\u0026#34;environmentName\u0026#34;); requireNonEmpty(environmentName, \u0026#34;environmentName\u0026#34;); String applicationName = (String) app .getNode() .tryGetContext(\u0026#34;applicationName\u0026#34;); requireNonEmpty(applicationName, \u0026#34;applicationName\u0026#34;); String accountId = (String) app .getNode() .tryGetContext(\u0026#34;accountId\u0026#34;); requireNonEmpty(accountId, \u0026#34;accountId\u0026#34;); String springProfile = (String) app .getNode() .tryGetContext(\u0026#34;springProfile\u0026#34;); requireNonEmpty(springProfile, \u0026#34;springProfile\u0026#34;); String dockerImageUrl = (String) app .getNode() .tryGetContext(\u0026#34;dockerImageUrl\u0026#34;); requireNonEmpty(dockerImageUrl, \u0026#34;dockerImageUrl\u0026#34;); String region = (String) app .getNode() .tryGetContext(\u0026#34;region\u0026#34;); requireNonEmpty(region, \u0026#34;region\u0026#34;); Environment awsEnvironment = makeEnv(accountId, region); ApplicationEnvironment applicationEnvironment = new ApplicationEnvironment( applicationName, environmentName ); Stack serviceStack = new Stack( app, \u0026#34;ServiceStack\u0026#34;, StackProps.builder() .stackName(applicationEnvironment.prefix(\u0026#34;Service\u0026#34;)) .env(awsEnvironment) .build()); DockerImageSource dockerImageSource = new DockerImageSource(dockerImageUrl); NetworkOutputParameters networkOutputParameters = Network.getOutputParametersFromParameterStore( serviceStack, applicationEnvironment.getEnvironmentName()); ServiceInputParameters serviceInputParameters = new ServiceInputParameters( dockerImageSource, environmentVariables(springProfile)) .withHealthCheckIntervalSeconds(30); Service service = new Service( serviceStack, \u0026#34;Service\u0026#34;, awsEnvironment, applicationEnvironment, 
serviceInputParameters, networkOutputParameters); app.synth(); } } Again, its structure is very similar to that of the CDK apps we\u0026rsquo;ve discussed before. We extract a bunch of input parameters, create a stack, and then add a construct from our construct library to the stack - this time the Service construct.\nThere are some new things happening here, though. Let\u0026rsquo;s explore them.\nManaging Different Environments In the Network stack, we already used an environmentName parameter to be able to create multiple stacks for different environments from the same CDK app.\nIn the ServiceApp, we go a step further and introduce the applicationName parameter.\nFrom these two parameters, we create an object of type ApplicationEnvironment:\nApplicationEnvironment applicationEnvironment = new ApplicationEnvironment( applicationName, environmentName ); We use this ApplicationEnvironment object to prefix the name of the stack we\u0026rsquo;re creating. The Service construct also uses it internally to prefix the names of the resources it creates.\nWhile for the network stack it was sufficient to prefix stacks and resources with the environmentName, we now need the prefix to contain the applicationName, as well. 
After all, we might want to deploy multiple applications into the same network.\nSo, given the environmentName \u0026ldquo;staging\u0026rdquo; and the applicationName \u0026ldquo;todoapp\u0026rdquo;, all resources will be prefixed with staging-todoapp- to account for the deployment of multiple Service stacks, each with a different application.\nAccessing Output Parameters from SSM We\u0026rsquo;re also using the applicationEnvironment for accessing the output parameters of a previously deployed Network construct:\nNetworkOutputParameters networkOutputParameters = Network.getOutputParametersFromParameterStore( serviceStack, applicationEnvironment.getEnvironmentName()); The static method Network.getOutputParametersFromParameterStore() we discussed earlier loads all the parameters of the Network construct that was deployed with the given environmentName. If no parameters with the respective prefix are found, CloudFormation will complain during deployment and stop deploying the Service stack.\nWe then pass these parameters into the Service construct so that it can use them to bind the resources it deploys to the existing network infrastructure.\nLater in the book, we\u0026rsquo;ll make more use of this mechanism when we create more stacks that expose parameters that the application needs, like a database URL or password parameters.\nPulling a Docker Image The Service construct exposes the class DockerImageSource, which allows us to specify the source of the Docker image that we want to deploy:\nDockerImageSource dockerImageSource = new DockerImageSource(dockerImageUrl); The ServiceApp shouldn\u0026rsquo;t be responsible for defining where to get a Docker image from, so we\u0026rsquo;re delegating that responsibility to the caller by expecting an input parameter dockerImageUrl. 
We\u0026rsquo;re then passing the URL into the DockerImageSource and later pass the DockerImageSource to the Service construct.\nThe DockerImageSource also has a constructor that expects a dockerRepositoryName and a dockerImageTag. The dockerRepositoryName is the name of an ECR repository. This allows us to easily point to the Docker repository we have deployed earlier using our DockerRepository stack. We\u0026rsquo;re going to make use of that constructor when we\u0026rsquo;re building a continuous deployment pipeline later.\nManaging Environment Variables A Spring Boot application (or any application, for that matter), is usually parameterized for the environment it is deployed into. The parameters may differ between the environments. Spring Boot supports this through configuration profiles. Depending on the value of the environment variable SPRING_PROFILES_ACTIVE, Spring Boot will load configuration properties from different YAML or properties files.\nIf the SPRING_PROFILES_ACTIVE environment variable has the value staging, for example, Spring Boot will first load all configuration parameters from the common application.yml file and then add all configuration parameters from the file application-staging.yml, overriding any parameters that might have been loaded from the common file already.\nThe Service construct allows us to pass in a map with environment variables. In our case, we\u0026rsquo;re adding the SPRING_PROFILES_ACTIVE variable with the value of the springProfile variable, which is an input parameter to the ServiceApp:\nstatic Map\u0026lt;String, String\u0026gt; environmentVariables(String springProfile) { Map\u0026lt;String, String\u0026gt; vars = new HashMap\u0026lt;\u0026gt;(); vars.put(\u0026#34;SPRING_PROFILES_ACTIVE\u0026#34;, springProfile); return vars; } We\u0026rsquo;ll add more environment variables in later chapters as our infrastructure grows.\nThe Service Construct Finally, let\u0026rsquo;s have a quick look at the Service construct. 
The code of that construct is a couple of hundred lines strong, which makes it too long to discuss in detail here. Let\u0026rsquo;s discuss some of its highlights, though.\nThe scope of the Service construct is to create an ECS service within the ECS cluster that is provided by the Network construct. For that, it creates a lot of resources in its constructor (see the full code on GitHub):\npublic Service( final Construct scope, final String id, final Environment awsEnvironment, final ApplicationEnvironment applicationEnvironment, final ServiceInputParameters serviceInputParameters, final Network.NetworkOutputParameters networkOutputParameters){ super(scope,id); CfnTargetGroup targetGroup=... CfnListenerRule httpListenerRule=... LogGroup logGroup=... ... } It accomplishes quite a bit:\n It creates a CfnTaskDefinition to define an ECS task that hosts the given Docker image. It adds a CfnService to the ECS cluster previously deployed in the Network construct and adds the tasks to it. It creates a CfnTargetGroup for the load balancer deployed in the Network construct and binds it to the ECS service. It creates a CfnSecurityGroup for the ECS containers and configures it so the load balancer may route traffic to the Docker containers. It creates a LogGroup so the application can send logs to CloudWatch.  You might notice that we\u0026rsquo;re mainly using level 1 constructs here, i.e. constructs with the prefix Cfn. These constructs are direct equivalents to the CloudFormation resources and provide no abstraction over them. Why didn\u0026rsquo;t we use higher-level constructs that would have saved us some code?\nThe reason is that the existing higher-level constructs did things we didn\u0026rsquo;t want them to. They added resources we didn\u0026rsquo;t need and didn\u0026rsquo;t want to pay for. 
Hence, we decided to create our own higher-level Service construct out of exactly those low-level CloudFormation resources we need.\nThis highlights a potential downside of high-level constructs: different software projects need different infrastructure, and high-level constructs are not always flexible enough to serve those different needs. The construct library we created for this book, for example, will probably not serve all of the needs of your next AWS project.\nWe could, of course, create a construct library that is highly parameterized and flexible for many different requirements. This might make the constructs complex and error-prone, though. Another option is to expend the effort to create your own construct library tailored for your project (or organization).\nIt\u0026rsquo;s trade-offs all the way down.\nPlaying with the CDK Apps If you want to play around with the CDK apps we\u0026rsquo;ve discussed above, feel free to clone the GitHub repo and navigate to the folder chapters/chapter-6. Then:\n run npm install to install the dependencies look into package.json and change the parameters of the different scripts (most importantly, set the account ID to your AWS account ID) run npm run repository:deploy to deploy a Docker repository run npm run network:deploy to deploy a network run npm run service:deploy to deploy the \u0026ldquo;Hello World\u0026rdquo; Todo App  Then, have a look around in the AWS Console to see the resources those commands created.\nDon\u0026rsquo;t forget to delete the stacks afterwards, either by deleting them in the CloudFormation console, or by calling the npm run *:destroy scripts, as otherwise you\u0026rsquo;ll incur additional costs.\nCheck Out the Book!  
This article is a self-sufficient sample chapter from the book Stratospheric - From Zero to Production with Spring Boot and AWS.\nIf you want to learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check it out!\n ","date":"May 3, 2021","image":"https://reflectoring.io/images/stock/0074-stack-1200x628-branded_hu068f2b0d815bda96ddb686d2b65ba146_143922_650x0_resize_q90_box.jpg","permalink":"/designing-a-aws-cdk-project/","title":"Designing an AWS CDK Project with Java"},{"categories":["Spring Boot"],"contents":"Spring Boot Actuator helps us monitor and manage our applications in production. It exposes endpoints that provide health, metrics, and other information about the running application. We can also use it to change the logging level of the application, take a thread dump, and so on - in short, capabilities that make it easier to operate in production.\nWhile its primary use is in production, it can also help us during development and maintenance. We can use it to explore and analyze a new Spring Boot application.\nIn this article, we\u0026rsquo;ll see how to use some of its endpoints to explore a new application that we are not familiar with. We will work on the command line and use curl and jq, a nifty and powerful command-line JSON processor.\n Example Code This article is accompanied by a working code example on GitHub. Why Use Actuator to Analyze and Explore an Application? Let\u0026rsquo;s imagine we are working on a new Spring Boot-based codebase for the first time. We would probably explore the folder structure, look at the names of the folders, check out the package names and class names to try and build a model of the application in our mind. We could generate some UML diagrams to help identify dependencies between modules, packages, classes, etc.\nWhile these are essential steps, they only give us a static picture of the application. 
We can\u0026rsquo;t get a complete picture without understanding what happens at runtime. E.g., what are all the Spring Beans that are created? Which API endpoints are available? What are all the filters that a request goes through?\nConstructing this mental model of the runtime shape of the application is very helpful. We can then dive deeper to read and understand code in the important areas more effectively.\nHigh-level Overview of Spring Actuator Let\u0026rsquo;s start with a short primer on Spring Boot Actuator.\nOn a high level, when we work with Actuator, we do the following steps:\n Add Actuator as a dependency to our project Enable and expose the endpoints Secure and configure the endpoints  Let\u0026rsquo;s look at each of these steps briefly.\nStep 1: Add Actuator Adding Actuator to our project is like adding any other library dependency. Here\u0026rsquo;s the snippet for Maven\u0026rsquo;s pom.xml:\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-actuator\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; If we were using Gradle, we\u0026rsquo;d add the below snippet to build.gradle file:\ndependencies { implementation \u0026#39;org.springframework.boot:spring-boot-starter-actuator\u0026#39; } Just adding the above dependency to a Spring Boot application provides some endpoints like /actuator/health out-of-the-box which can be used for a shallow health check by a load balancer, for example.\n$ curl http://localhost:8080/actuator/health {\u0026#34;status\u0026#34;:\u0026#34;UP\u0026#34;} We can hit the /actuator endpoint to view the other endpoints available by default. 
/actuator exposes a \u0026ldquo;discovery page\u0026rdquo; with all available endpoints:\n$ curl http://localhost:8080/actuator {\u0026#34;_links\u0026#34;:{\u0026#34;self\u0026#34;:{\u0026#34;href\u0026#34;:\u0026#34;http://localhost:8080/actuator\u0026#34;,\u0026#34;templated\u0026#34;:false},\u0026#34;health\u0026#34;:{\u0026#34;href\u0026#34;:\u0026#34;http://localhost:8080/actuator/health\u0026#34;,\u0026#34;templated\u0026#34;:false},\u0026#34;health-path\u0026#34;:{\u0026#34;href\u0026#34;:\u0026#34;http://localhost:8080/actuator/health/{*path}\u0026#34;,\u0026#34;templated\u0026#34;:true},\u0026#34;info\u0026#34;:{\u0026#34;href\u0026#34;:\u0026#34;http://localhost:8080/actuator/info\u0026#34;,\u0026#34;templated\u0026#34;:false}}} Step 2: Enable and Expose Endpoints Endpoints are identified by IDs like health, info, metrics and so on. Enabling and exposing an endpoint makes it available for use under the /actuator path of the application URL, like http://your-service.com/actuator/health, http://your-service.com/actuator/metrics etc.\nMost endpoints except shutdown are enabled by default. We can disable an endpoint by setting the management.endpoint.\u0026lt;id\u0026gt;.enabled property to false in the application.properties file. For example, here\u0026rsquo;s how we would disable the metrics endpoint:\nmanagement.endpoint.metrics.enabled=false Accessing a disabled endpoint returns an HTTP 404 error:\n$ curl http://localhost:8080/actuator/metrics {\u0026#34;timestamp\u0026#34;:\u0026#34;2021-04-24T12:55:40.688+00:00\u0026#34;,\u0026#34;status\u0026#34;:404,\u0026#34;error\u0026#34;:\u0026#34;Not Found\u0026#34;,\u0026#34;message\u0026#34;:\u0026#34;\u0026#34;,\u0026#34;path\u0026#34;:\u0026#34;/actuator/metrics\u0026#34;} We can choose to expose the endpoints over HTTP and/or JMX. 
While HTTP is generally used, JMX might be preferable for some applications.\nWe can expose endpoints by setting the management.endpoints.[web|jmx].exposure.include to the list of endpoint IDs we want to expose. Here\u0026rsquo;s how we would expose the metrics endpoint, for example:\nmanagement.endpoints.web.exposure.include=metrics An endpoint has to be both enabled and exposed to be available.\nStep 3: Secure and Configure the Endpoints Since many of these endpoints contain sensitive information, it\u0026rsquo;s important to secure them. The endpoints should be accessible only to authorized users managing and operating our application in production and not to our normal application users. Imagine the disastrous consequences of a normal application user having access to heapdump or shutdown endpoints!\nWe will not look at securing endpoints in any detail in this article since we are mainly interested in using Spring Actuator to explore the application in our local, development environment. You can find details in the documentation here.\nA Quick Introduction to jq jq is a command-line JSON processor. It works like a filter by taking an input and producing an output. Many built-in filters, operators and functions are available. 
We can combine filters, pipe the output of one filter as input to another etc.\nSuppose we had the following JSON in a file sample.json:\n{ \u0026#34;students\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;John\u0026#34;, \u0026#34;age\u0026#34;: 10, \u0026#34;grade\u0026#34;: 3, \u0026#34;subjects\u0026#34;: [\u0026#34;math\u0026#34;, \u0026#34;english\u0026#34;] }, { \u0026#34;name\u0026#34;: \u0026#34;Jack\u0026#34;, \u0026#34;age\u0026#34;: 10, \u0026#34;grade\u0026#34;: 3, \u0026#34;subjects\u0026#34;: [\u0026#34;math\u0026#34;, \u0026#34;social science\u0026#34;, \u0026#34;painting\u0026#34;] }, { \u0026#34;name\u0026#34;: \u0026#34;James\u0026#34;, \u0026#34;age\u0026#34;: 11, \u0026#34;grade\u0026#34;: 5, \u0026#34;subjects\u0026#34;: [\u0026#34;math\u0026#34;, \u0026#34;environmental science\u0026#34;, \u0026#34;english\u0026#34;] }, .... other student objects omitted ... ] } It\u0026rsquo;s an object containing an array of \u0026ldquo;student\u0026rdquo; objects with some details for each student.\nLet\u0026rsquo;s look at a few examples of processing and transforming this JSON with jq.\n$ cat sample.json | jq \u0026#39;.students[] | .name\u0026#39; \u0026#34;John\u0026#34; \u0026#34;Jack\u0026#34; \u0026#34;James\u0026#34; Let\u0026rsquo;s unpack the jq command to understand what\u0026rsquo;s happening:\n   Expression Effect     .students[] iterate over the students array   | output each student to the next filter   .name extract name from the student object    Now, let\u0026rsquo;s get the list of students who have subjects like \u0026ldquo;environmental science\u0026rdquo;, \u0026ldquo;social science\u0026rdquo; etc.:\n$ cat sample.json | jq \u0026#39;.students[] | select(.subjects[] | contains(\u0026#34;science\u0026#34;))\u0026#39; { \u0026#34;name\u0026#34;: \u0026#34;Jack\u0026#34;, \u0026#34;age\u0026#34;: 10, \u0026#34;grade\u0026#34;: 3, \u0026#34;subjects\u0026#34;: [ \u0026#34;math\u0026#34;, \u0026#34;social science\u0026#34;, 
\u0026#34;painting\u0026#34; ] } { \u0026#34;name\u0026#34;: \u0026#34;James\u0026#34;, \u0026#34;age\u0026#34;: 11, \u0026#34;grade\u0026#34;: 5, \u0026#34;subjects\u0026#34;: [ \u0026#34;math\u0026#34;, \u0026#34;environmental science\u0026#34;, \u0026#34;english\u0026#34; ] } Let\u0026rsquo;s unpack the command again:\n   Expression Effect     .students[] iterate over the students array   | output each student to the next filter   select(.subjects[] | contains(\u0026ldquo;science\u0026rdquo;)) keep only the students for which any subject contains \u0026ldquo;science\u0026rdquo;    With one small change, we can collect these items into an array again:\n$ cat sample.json | jq \u0026#39;[.students[] | select(.subjects[] | contains(\u0026#34;science\u0026#34;))]\u0026#39; [ { \u0026#34;name\u0026#34;: \u0026#34;Jack\u0026#34;, \u0026#34;age\u0026#34;: 10, \u0026#34;grade\u0026#34;: 3, \u0026#34;subjects\u0026#34;: [ \u0026#34;math\u0026#34;, \u0026#34;social science\u0026#34;, \u0026#34;painting\u0026#34; ] }, { \u0026#34;name\u0026#34;: \u0026#34;James\u0026#34;, \u0026#34;age\u0026#34;: 11, \u0026#34;grade\u0026#34;: 5, \u0026#34;subjects\u0026#34;: [ \u0026#34;math\u0026#34;, \u0026#34;environmental science\u0026#34;, \u0026#34;english\u0026#34; ] } ] All we needed to do was put the entire expression within brackets.\nWe can use jq to both filter and reshape the JSON:\n$ cat sample.json | jq \u0026#39;[.students[] | {\u0026#34;studentName\u0026#34;: .name, \u0026#34;favoriteSubject\u0026#34;: .subjects[0]}]\u0026#39; [ { \u0026#34;studentName\u0026#34;: \u0026#34;John\u0026#34;, \u0026#34;favoriteSubject\u0026#34;: \u0026#34;math\u0026#34; }, { \u0026#34;studentName\u0026#34;: \u0026#34;Jack\u0026#34;, \u0026#34;favoriteSubject\u0026#34;: \u0026#34;math\u0026#34; }, { \u0026#34;studentName\u0026#34;: \u0026#34;James\u0026#34;, \u0026#34;favoriteSubject\u0026#34;: \u0026#34;math\u0026#34; } ] We\u0026rsquo;ve iterated over the students array, created a new object containing properties studentName and favoriteSubject with values set to 
the name property and the first subject from the original student object. We finally collected all the new items into an array.\nWe can get a lot done with a few keystrokes in jq. Since most APIs that we usually work with use JSON, it\u0026rsquo;s a great tool to have in our tool belt.\nCheck out the tutorial and manual from the official documentation. jqplay is a great resource for playing around and constructing our jq expressions.\nExploring a Spring Boot Application In the remainder of this article, we\u0026rsquo;ll use Actuator to explore a running Spring Boot application. The application itself is a very simplified example of an eCommerce order processing application. It only has skeleton code needed to illustrate ideas.\nWhile there are many Actuator endpoints available, we will focus only on those which help us understand the runtime shape of the application.\nAll the endpoints we will see are enabled by default. Let\u0026rsquo;s expose them:\nmanagement.endpoints.web.exposure.include=mappings,beans,startup,env,scheduledtasks,caches,metrics Using the mappings Endpoint Checking out the available APIs is usually a good place to start exploring a service. 
The mappings endpoint provides all the routes and handlers, along with additional details.\nLet\u0026rsquo;s hit the endpoint with a curl command and pipe the response into jq to pretty-print it:\n$ curl http://localhost:8080/actuator/mappings | jq Here\u0026rsquo;s the response:\n{ \u0026#34;contexts\u0026#34;: { \u0026#34;application\u0026#34;: { \u0026#34;mappings\u0026#34;: { \u0026#34;dispatcherServlets\u0026#34;: { \u0026#34;dispatcherServlet\u0026#34;: [ { \u0026#34;handler\u0026#34;: \u0026#34;Actuator web endpoint \u0026#39;metrics\u0026#39;\u0026#34;, \u0026#34;predicate\u0026#34;: \u0026#34;{GET [/actuator/metrics], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}\u0026#34;, \u0026#34;details\u0026#34;: { \u0026#34;handlerMethod\u0026#34;: { \u0026#34;className\u0026#34;: \u0026#34;org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping.OperationHandler\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;handle\u0026#34;, \u0026#34;descriptor\u0026#34;: \u0026#34;(Ljavax/servlet/http/HttpServletRequest;Ljava/util/Map;)Ljava/lang/Object;\u0026#34; }, \u0026#34;requestMappingConditions\u0026#34;: { ... properties omitted ... ], \u0026#34;params\u0026#34;: [], \u0026#34;patterns\u0026#34;: [ \u0026#34;/actuator/metrics\u0026#34; ], \u0026#34;produces\u0026#34;: [ ... properties omitted ... ] } } }, ... 20+ more handlers omitted ... ] }, \u0026#34;servletFilters\u0026#34;: [ { \u0026#34;servletNameMappings\u0026#34;: [], \u0026#34;urlPatternMappings\u0026#34;: [ \u0026#34;/*\u0026#34; ], \u0026#34;name\u0026#34;: \u0026#34;webMvcMetricsFilter\u0026#34;, \u0026#34;className\u0026#34;: \u0026#34;org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter\u0026#34; }, ... other filters omitted ... 
], \u0026#34;servlets\u0026#34;: [ { \u0026#34;mappings\u0026#34;: [ \u0026#34;/\u0026#34; ], \u0026#34;name\u0026#34;: \u0026#34;dispatcherServlet\u0026#34;, \u0026#34;className\u0026#34;: \u0026#34;org.springframework.web.servlet.DispatcherServlet\u0026#34; } ] }, \u0026#34;parentId\u0026#34;: null } } } It can still be a bit overwhelming to go through this response JSON - it has a lot of details about all the request handlers, servlets and servlet filters.\nLet\u0026rsquo;s use jq to filter this information further. Since we know the package names from our service, we will have jq select only those handlers which contains our package name io.reflectoring.springboot.actuator:\n$ curl http://localhost:8080/actuator/mappings | jq \u0026#39;.contexts.application.mappings.dispatcherServlets.dispatcherServlet[] | select(.handler | contains(\u0026#34;io.reflectoring.springboot.actuator\u0026#34;))\u0026#39; { \u0026#34;handler\u0026#34;: \u0026#34;io.reflectoring.springboot.actuator.controllers.PaymentController#processPayments(String, PaymentRequest)\u0026#34;, \u0026#34;predicate\u0026#34;: \u0026#34;{POST [/{orderId}/payment]}\u0026#34;, \u0026#34;details\u0026#34;: { \u0026#34;handlerMethod\u0026#34;: { \u0026#34;className\u0026#34;: \u0026#34;io.reflectoring.springboot.actuator.controllers.PaymentController\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;processPayments\u0026#34;, \u0026#34;descriptor\u0026#34;: \u0026#34;(Ljava/lang/String;Lio/reflectoring/springboot/actuator/model/PaymentRequest;)Lio/reflectoring/springboot/actuator/model/PaymentResponse;\u0026#34; }, \u0026#34;requestMappingConditions\u0026#34;: { \u0026#34;consumes\u0026#34;: [], \u0026#34;headers\u0026#34;: [], \u0026#34;methods\u0026#34;: [ \u0026#34;POST\u0026#34; ], \u0026#34;params\u0026#34;: [], \u0026#34;patterns\u0026#34;: [ \u0026#34;/{orderId}/payment\u0026#34; ], \u0026#34;produces\u0026#34;: [] } } } { \u0026#34;handler\u0026#34;: 
\u0026#34;io.reflectoring.springboot.actuator.controllers.OrderController#getOrders(String)\u0026#34;, \u0026#34;predicate\u0026#34;: \u0026#34;{GET [/{customerId}/orders]}\u0026#34;, \u0026#34;details\u0026#34;: { \u0026#34;handlerMethod\u0026#34;: { \u0026#34;className\u0026#34;: \u0026#34;io.reflectoring.springboot.actuator.controllers.OrderController\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;getOrders\u0026#34;, \u0026#34;descriptor\u0026#34;: \u0026#34;(Ljava/lang/String;)Ljava/util/List;\u0026#34; }, \u0026#34;requestMappingConditions\u0026#34;: { \u0026#34;consumes\u0026#34;: [], \u0026#34;headers\u0026#34;: [], \u0026#34;methods\u0026#34;: [ \u0026#34;GET\u0026#34; ], \u0026#34;params\u0026#34;: [], \u0026#34;patterns\u0026#34;: [ \u0026#34;/{customerId}/orders\u0026#34; ], \u0026#34;produces\u0026#34;: [] } } } { \u0026#34;handler\u0026#34;: \u0026#34;io.reflectoring.springboot.actuator.controllers.OrderController#placeOrder(String, Order)\u0026#34;, \u0026#34;predicate\u0026#34;: \u0026#34;{POST [/{customerId}/orders]}\u0026#34;, \u0026#34;details\u0026#34;: { \u0026#34;handlerMethod\u0026#34;: { \u0026#34;className\u0026#34;: \u0026#34;io.reflectoring.springboot.actuator.controllers.OrderController\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;placeOrder\u0026#34;, \u0026#34;descriptor\u0026#34;: \u0026#34;(Ljava/lang/String;Lio/reflectoring/springboot/actuator/model/Order;)Lio/reflectoring/springboot/actuator/model/OrderCreatedResponse;\u0026#34; }, \u0026#34;requestMappingConditions\u0026#34;: { \u0026#34;consumes\u0026#34;: [], \u0026#34;headers\u0026#34;: [], \u0026#34;methods\u0026#34;: [ \u0026#34;POST\u0026#34; ], \u0026#34;params\u0026#34;: [], \u0026#34;patterns\u0026#34;: [ \u0026#34;/{customerId}/orders\u0026#34; ], \u0026#34;produces\u0026#34;: [] } } } We can see the APIs available and details about the HTTP method, the request path etc. 
In a complex, real-world application, this would give a consolidated view of all the APIs and their details irrespective of how the packages were organized in a multi-module codebase. This is a useful technique to start exploring the application, especially when working on a multi-module legacy codebase where even Swagger documentation may not be available.\nSimilarly, we can check which filters our requests pass through before reaching the controllers:\n$ curl http://localhost:8080/actuator/mappings | jq \u0026#39;.contexts.application.mappings.servletFilters\u0026#39; [ { \u0026#34;servletNameMappings\u0026#34;: [], \u0026#34;urlPatternMappings\u0026#34;: [ \u0026#34;/*\u0026#34; ], \u0026#34;name\u0026#34;: \u0026#34;webMvcMetricsFilter\u0026#34;, \u0026#34;className\u0026#34;: \u0026#34;org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter\u0026#34; }, ... other filters omitted ... ] Using the beans Endpoint Now, let\u0026rsquo;s see the list of beans that are created:\n$ curl http://localhost:8080/actuator/beans | jq { \u0026#34;contexts\u0026#34;: { \u0026#34;application\u0026#34;: { \u0026#34;beans\u0026#34;: { \u0026#34;endpointCachingOperationInvokerAdvisor\u0026#34;: { \u0026#34;aliases\u0026#34;: [], \u0026#34;scope\u0026#34;: \u0026#34;singleton\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;org.springframework.boot.actuate.endpoint.invoker.cache.CachingOperationInvokerAdvisor\u0026#34;, \u0026#34;resource\u0026#34;: \u0026#34;class path resource [org/springframework/boot/actuate/autoconfigure/endpoint/EndpointAutoConfiguration.class]\u0026#34;, \u0026#34;dependencies\u0026#34;: [ \u0026#34;org.springframework.boot.actuate.autoconfigure.endpoint.EndpointAutoConfiguration\u0026#34;, \u0026#34;environment\u0026#34; ] }, .... other beans omitted ... } } } This gives a consolidated view of all the beans in the ApplicationContext. 
Going through this gives us some idea of the shape of the application at runtime - the Spring internal beans, the application beans, their scopes, the dependencies of each bean, etc.\nAgain, we can use jq to filter the response and focus on the parts that we are interested in:\n$ curl http://localhost:8080/actuator/beans | jq \u0026#39;.contexts.application.beans | with_entries(select(.value.type | contains(\u0026#34;io.reflectoring.springboot.actuator\u0026#34;)))\u0026#39; { \u0026#34;orderController\u0026#34;: { \u0026#34;aliases\u0026#34;: [], \u0026#34;scope\u0026#34;: \u0026#34;singleton\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;io.reflectoring.springboot.actuator.controllers.OrderController\u0026#34;, \u0026#34;resource\u0026#34;: \u0026#34;file [/code-examples/spring-boot/spring-boot-actuator/target/classes/io/reflectoring/springboot/actuator/controllers/OrderController.class]\u0026#34;, \u0026#34;dependencies\u0026#34;: [ \u0026#34;orderService\u0026#34;, \u0026#34;simpleMeterRegistry\u0026#34; ] }, \u0026#34;orderService\u0026#34;: { \u0026#34;aliases\u0026#34;: [], \u0026#34;scope\u0026#34;: \u0026#34;singleton\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;io.reflectoring.springboot.actuator.services.OrderService\u0026#34;, \u0026#34;resource\u0026#34;: \u0026#34;file [/code-examples/spring-boot/spring-boot-actuator/target/classes/io/reflectoring/springboot/actuator/services/OrderService.class]\u0026#34;, \u0026#34;dependencies\u0026#34;: [ \u0026#34;orderRepository\u0026#34; ] }, ... other beans omitted ... 
\u0026#34;cleanUpAbandonedBaskets\u0026#34;: { \u0026#34;aliases\u0026#34;: [], \u0026#34;scope\u0026#34;: \u0026#34;singleton\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;io.reflectoring.springboot.actuator.services.tasks.CleanUpAbandonedBaskets\u0026#34;, \u0026#34;resource\u0026#34;: \u0026#34;file [/code-examples/spring-boot/spring-boot-actuator/target/classes/io/reflectoring/springboot/actuator/services/tasks/CleanUpAbandonedBaskets.class]\u0026#34;, \u0026#34;dependencies\u0026#34;: [] } } This gives a bird\u0026rsquo;s-eye view of all the application beans and their dependencies.\nHow is this useful? We can derive additional information from this type of view: for example, if we see some dependency repeated in multiple beans, it likely has important functionality encapsulated that impacts multiple flows. We could mark that class as an important one that we would want to understand when we dive deeper into the code. Or perhaps, that bean is a God object that needs some refactoring once we understand the codebase.\nUsing the startup Endpoint Unlike the other endpoints we have seen, configuring the startup endpoint requires some additional steps. We have to provide an implementation of ApplicationStartup to our application:\nSpringApplication app = new SpringApplication(DemoApplication.class); app.setApplicationStartup(new BufferingApplicationStartup(2048)); app.run(args); Here, we have set our application\u0026rsquo;s ApplicationStartup to a BufferingApplicationStartup which is an in-memory implementation that captures the events in Spring\u0026rsquo;s complex startup process. The internal buffer will have the capacity we specified - 2048.\nNow, let\u0026rsquo;s hit the startup endpoint. 
Unlike the other endpoints, the startup endpoint supports the POST method:\n$ curl -XPOST \u0026#39;http://localhost:8080/actuator/startup\u0026#39; | jq { \u0026#34;springBootVersion\u0026#34;: \u0026#34;2.4.4\u0026#34;, \u0026#34;timeline\u0026#34;: { \u0026#34;startTime\u0026#34;: \u0026#34;2021-04-24T12:58:06.947320Z\u0026#34;, \u0026#34;events\u0026#34;: [ { \u0026#34;startupStep\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;spring.boot.application.starting\u0026#34;, \u0026#34;id\u0026#34;: 1, \u0026#34;parentId\u0026#34;: 0, \u0026#34;tags\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;mainApplicationClass\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;io.reflectoring.springboot.actuator.DemoApplication\u0026#34; } ] }, \u0026#34;startTime\u0026#34;: \u0026#34;2021-04-24T12:58:06.956665337Z\u0026#34;, \u0026#34;endTime\u0026#34;: \u0026#34;2021-04-24T12:58:06.998894390Z\u0026#34;, \u0026#34;duration\u0026#34;: \u0026#34;PT0.042229053S\u0026#34; }, { \u0026#34;startupStep\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;spring.boot.application.environment-prepared\u0026#34;, \u0026#34;id\u0026#34;: 2, \u0026#34;parentId\u0026#34;: 0, \u0026#34;tags\u0026#34;: [] }, \u0026#34;startTime\u0026#34;: \u0026#34;2021-04-24T12:58:07.114646769Z\u0026#34;, \u0026#34;endTime\u0026#34;: \u0026#34;2021-04-24T12:58:07.324207009Z\u0026#34;, \u0026#34;duration\u0026#34;: \u0026#34;PT0.20956024S\u0026#34; }, .... other steps omitted .... 
{ \u0026#34;startupStep\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;spring.boot.application.started\u0026#34;, \u0026#34;id\u0026#34;: 277, \u0026#34;parentId\u0026#34;: 0, \u0026#34;tags\u0026#34;: [] }, \u0026#34;startTime\u0026#34;: \u0026#34;2021-04-24T12:58:11.169267550Z\u0026#34;, \u0026#34;endTime\u0026#34;: \u0026#34;2021-04-24T12:58:11.212604248Z\u0026#34;, \u0026#34;duration\u0026#34;: \u0026#34;PT0.043336698S\u0026#34; }, { \u0026#34;startupStep\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;spring.boot.application.running\u0026#34;, \u0026#34;id\u0026#34;: 278, \u0026#34;parentId\u0026#34;: 0, \u0026#34;tags\u0026#34;: [] }, \u0026#34;startTime\u0026#34;: \u0026#34;2021-04-24T12:58:11.213585420Z\u0026#34;, \u0026#34;endTime\u0026#34;: \u0026#34;2021-04-24T12:58:11.214002336Z\u0026#34;, \u0026#34;duration\u0026#34;: \u0026#34;PT0.000416916S\u0026#34; } ] } } The response is an array of events with details about the event\u0026rsquo;s name, startTime, endTime and duration.\nHow can this information help us in our exploration of the application? If we know which steps are taking more time during startup, we can check that area of the codebase to understand why. It could be that a cache warmer is pre-fetching data from a database or pre-computing some data, for example.\nSince the above response contains a lot of details, let\u0026rsquo;s narrow it down by filtering on the spring.beans.instantiate step and also sorting the events by duration in descending order:\n$ curl -XPOST \u0026#39;http://localhost:8080/actuator/startup\u0026#39; | jq \u0026#39;.timeline.events | sort_by(.duration) | reverse[] | select(.startupStep.name | contains(\u0026#34;instantiate\u0026#34;))\u0026#39; $ What happened here? Why did we not get any response? Invoking the startup endpoint also clears the internal buffer. 
Let\u0026rsquo;s retry after restarting the application:\n$ curl -XPOST \u0026#39;http://localhost:8080/actuator/startup\u0026#39; | jq \u0026#39;[.timeline.events | sort_by(.duration) | reverse[] | select(.startupStep.name | contains(\u0026#34;instantiate\u0026#34;)) | {beanName: .startupStep.tags[0].value, duration: .duration}]\u0026#39; [ { \u0026#34;beanName\u0026#34;: \u0026#34;orderController\u0026#34;, \u0026#34;duration\u0026#34;: \u0026#34;PT1.010878035S\u0026#34; }, { \u0026#34;beanName\u0026#34;: \u0026#34;orderService\u0026#34;, \u0026#34;duration\u0026#34;: \u0026#34;PT1.005529559S\u0026#34; }, { \u0026#34;beanName\u0026#34;: \u0026#34;requestMappingHandlerAdapter\u0026#34;, \u0026#34;duration\u0026#34;: \u0026#34;PT0.11549366S\u0026#34; }, { \u0026#34;beanName\u0026#34;: \u0026#34;tomcatServletWebServerFactory\u0026#34;, \u0026#34;duration\u0026#34;: \u0026#34;PT0.108340094S\u0026#34; }, ... other beans omitted ... ] So it takes more than a second to create the orderController and orderService beans! That\u0026rsquo;s interesting - we now have a specific area of the application we can focus on to understand more.\nThe jq command here was a bit complex compared to the earlier ones. 
Let\u0026rsquo;s break it down to understand what\u0026rsquo;s happening:\njq \u0026#39;[.timeline.events \\ | sort_by(.duration) \\ | reverse[] \\ | select(.startupStep.name \\ | contains(\u0026#34;instantiate\u0026#34;)) \\ | {beanName: .startupStep.tags[0].value, duration: .duration}]\u0026#39;\n.timeline.events - select the events array\nsort_by(.duration) - sort the events by duration\nreverse[] - reverse the sort order and iterate over the resulting array\nselect(.startupStep.name | contains(\u0026#34;instantiate\u0026#34;)) - keep only the bean instantiation steps\n{beanName: .startupStep.tags[0].value, duration: .duration} - construct a new JSON object with properties beanName and duration\nThe brackets over the entire expression indicate we want to collect all the constructed JSON objects into an array.\nUsing the env Endpoint The env endpoint gives a consolidated view of all the configuration properties of the application. This includes configurations from the application.properties file, the JVM\u0026rsquo;s system properties, environment variables etc.\nWe can use it to see if the application has some configurations set via environment variables, what jar files are on its classpath etc.:\n$ curl http://localhost:8080/actuator/env | jq { \u0026#34;activeProfiles\u0026#34;: [], \u0026#34;propertySources\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;server.ports\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;local.server.port\u0026#34;: { \u0026#34;value\u0026#34;: 8080 } } }, { \u0026#34;name\u0026#34;: \u0026#34;servletContextInitParams\u0026#34;, \u0026#34;properties\u0026#34;: {} }, { \u0026#34;name\u0026#34;: \u0026#34;systemProperties\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;gopherProxySet\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;false\u0026#34; }, \u0026#34;java.class.path\u0026#34;: { \u0026#34;value\u0026#34;: 
\u0026#34;/target/test-classes:/target/classes:/Users/reflectoring/.m2/repository/org/springframework/boot/spring-boot-starter-actuator/2.4.4/spring-boot-starter-actuator-2.4.4.jar:/Users/reflectoring/.m2/repository/org/springframework/boot/spring-boot-starter/2.4.4/spring-boot-starter-2.4.4.jar: ... other jars omitted ... \u0026#34; }, ... other properties omitted ... } }, { \u0026#34;name\u0026#34;: \u0026#34;systemEnvironment\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;USER\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;reflectoring\u0026#34;, \u0026#34;origin\u0026#34;: \u0026#34;System Environment Property \\\u0026#34;USER\\\u0026#34;\u0026#34; }, \u0026#34;HOME\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;/Users/reflectoring\u0026#34;, \u0026#34;origin\u0026#34;: \u0026#34;System Environment Property \\\u0026#34;HOME\\\u0026#34;\u0026#34; } ... other environment variables omitted ... } }, { \u0026#34;name\u0026#34;: \u0026#34;Config resource \u0026#39;class path resource [application.properties]\u0026#39; via location \u0026#39;optional:classpath:/\u0026#39;\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;management.endpoint.logfile.enabled\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;origin\u0026#34;: \u0026#34;class path resource [application.properties] - 2:37\u0026#34; }, \u0026#34;management.endpoints.web.exposure.include\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;metrics,beans,mappings,startup,env, info,loggers\u0026#34;, \u0026#34;origin\u0026#34;: \u0026#34;class path resource [application.properties] - 5:43\u0026#34; } } } ] } Using the scheduledtasks Endpoint This endpoint lets us check if the application is running any task periodically using Spring\u0026rsquo;s @Scheduled annotation:\n$ curl http://localhost:8080/actuator/scheduledtasks | jq { \u0026#34;cron\u0026#34;: [ { \u0026#34;runnable\u0026#34;: { \u0026#34;target\u0026#34;: 
\u0026#34;io.reflectoring.springboot.actuator.services.tasks.ReportGenerator.generateReports\u0026#34; }, \u0026#34;expression\u0026#34;: \u0026#34;0 0 12 * * *\u0026#34; } ], \u0026#34;fixedDelay\u0026#34;: [ { \u0026#34;runnable\u0026#34;: { \u0026#34;target\u0026#34;: \u0026#34;io.reflectoring.springboot.actuator.services.tasks.CleanUpAbandonedBaskets.process\u0026#34; }, \u0026#34;initialDelay\u0026#34;: 0, \u0026#34;interval\u0026#34;: 900000 } ], \u0026#34;fixedRate\u0026#34;: [], \u0026#34;custom\u0026#34;: [] } From the response we can see that the application generates some reports every day at 12 pm and that there is a background process that does some clean up every 15 minutes. We could then read those specific classes' code if we wanted to know what those reports are, what steps are involved in cleaning up an abandoned basket etc.\nUsing the caches Endpoint This endpoint lists all the application caches:\n$ curl http://localhost:8080/actuator/caches | jq { \u0026#34;cacheManagers\u0026#34;: { \u0026#34;cacheManager\u0026#34;: { \u0026#34;caches\u0026#34;: { \u0026#34;states\u0026#34;: { \u0026#34;target\u0026#34;: \u0026#34;java.util.concurrent.ConcurrentHashMap\u0026#34; }, \u0026#34;shippingPrice\u0026#34;: { \u0026#34;target\u0026#34;: \u0026#34;java.util.concurrent.ConcurrentHashMap\u0026#34; } } } } } We can tell that the application is caching some states and shippingPrice data. This gives us another area of the application to explore and learn more about: how are the caches built, when are cache entries evicted etc.\nUsing the health Endpoint The health endpoint shows the application\u0026rsquo;s health information:\n$ curl http://localhost:8080/actuator/health {\u0026#34;status\u0026#34;:\u0026#34;UP\u0026#34;} This is usually a shallow healthcheck. 
While this is useful in a production environment for a load balancer to check against frequently, it does not help us in our goal of understanding the application.\nMany applications also implement deep healthchecks, which can help us quickly find out the external dependencies of the application, which databases and message brokers it connects to, etc.\nCheck out this Reflectoring article to learn more about implementing healthchecks using Actuator.\nUsing the metrics Endpoint This endpoint lists all the metrics generated by the application:\n$ curl http://localhost:8080/actuator/metrics | jq { \u0026#34;names\u0026#34;: [ \u0026#34;http.server.requests\u0026#34;, \u0026#34;jvm.buffer.count\u0026#34;, \u0026#34;jvm.buffer.memory.used\u0026#34;, \u0026#34;jvm.buffer.total.capacity\u0026#34;, \u0026#34;jvm.threads.states\u0026#34;, \u0026#34;logback.events\u0026#34;, \u0026#34;orders.placed.counter\u0026#34;, \u0026#34;process.cpu.usage\u0026#34;, ... other metrics omitted ... ] } We can then fetch the individual metrics data:\n$ curl http://localhost:8080/actuator/metrics/jvm.memory.used | jq { \u0026#34;name\u0026#34;: \u0026#34;jvm.memory.used\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;The amount of used memory\u0026#34;, \u0026#34;baseUnit\u0026#34;: \u0026#34;bytes\u0026#34;, \u0026#34;measurements\u0026#34;: [ { \u0026#34;statistic\u0026#34;: \u0026#34;VALUE\u0026#34;, \u0026#34;value\u0026#34;: 148044128 } ], \u0026#34;availableTags\u0026#34;: [ { \u0026#34;tag\u0026#34;: \u0026#34;area\u0026#34;, \u0026#34;values\u0026#34;: [ \u0026#34;heap\u0026#34;, \u0026#34;nonheap\u0026#34; ] }, { \u0026#34;tag\u0026#34;: \u0026#34;id\u0026#34;, \u0026#34;values\u0026#34;: [ \u0026#34;CodeHeap \u0026#39;profiled nmethods\u0026#39;\u0026#34;, \u0026#34;G1 Old Gen\u0026#34;, ... other tags omitted ... ] } ] } Checking out the available custom API metrics is especially useful. 
It can give us some insight into what is important about this application from a business\u0026rsquo;s point of view. For example, we can see from the metrics list that there is an orders.placed.counter that probably tells us how many orders have been placed in a period of time.\nConclusion In this article, we learned how we can use Spring Actuator in our local, development environment to explore a new application. We looked at a few actuator endpoints that can help us identify important areas of the codebase that may need a deeper study. Along the way, we also learned how to process JSON on the command line using the lightweight and extremely powerful jq tool.\nYou can play around with a complete application illustrating these ideas using the code on GitHub.\n","date":"April 30, 2021","image":"https://reflectoring.io/images/stock/0100-motor-1200x628-branded_hu27daf92d9dece49b58b30c88717afe92_170013_650x0_resize_q90_box.jpg","permalink":"/exploring-a-spring-boot-app-with-actuator-and-jq/","title":"Exploring a Spring Boot App with Actuator and jq"},{"categories":["Java"],"contents":"In this article we will learn how to mock objects with Mockito. We\u0026rsquo;ll first talk about what test doubles are and then how we can use them to create meaningful and tailored unit tests. We will also have a look at the most important Dos and Don\u0026rsquo;ts while writing clean unit tests with Mockito.\n Example Code This article is accompanied by a working code example on GitHub. Introduction to Mocks The basic concept of mocking is replacing real objects with doubles. We can control how these doubles behave. These doubles we call test doubles. We\u0026rsquo;ll cover the different kinds of test doubles later in this article.\nLet\u0026rsquo;s imagine we have a service that processes orders from a database. It\u0026rsquo;s very cumbersome to set up a whole database just to test that service. 
To avoid setting up a database for the test, we create a mock that pretends to be the database, but in the eyes of the service it looks like a real database. We can advise the mock exactly how it shall behave. Having this tool, we can test the service but don\u0026rsquo;t actually need a database.\nHere Mockito comes into play. Mockito is a very popular library that allows us to create such mock objects.\nConsider reading Why Mock? for additional information about mocking.\nDifferent Types of Test Doubles In the world of code, there are many different words for test doubles and definitions for their duty. I recommend defining a common language within the team.\nHere is a little summary of the different types for test doubles and how we use them in this article:\n   Type Description     Stub A stub is an object that always returns the same value, regardless of which parameters you provide on a stub\u0026rsquo;s methods.   Mock A mock is an object whose behavior - in the form of parameters and return values - is declared before the test is run. (This is exactly what Mockito is made for!)   Spy A spy is an object that logs each method call that is performed on it (including parameter values). It can be queried to create assertions to verify the behavior of the system under test. (Spies are supported by Mockito!)    Mockito in Use Consider the following example:\nThe green arrow with the dotted line and filled triangle stands for implements. CityServiceImpl is the implementation of CityService and therefore an instance of CityService.\nThe white arrow with the diamond says that CityRepository is part of CityService. It is also known as composition.\nThe remaining white arrow with the dotted line stands for the fact that CityServiceImpl owns a reference to CityRepository.\nWe don\u0026rsquo;t want to consider the CityRepository implementation when unit testing CityServiceImpl. 
If we used a real CityRepository implementation in the test, we would have to connect it to a database, which makes the test setup more complicated and would increase the number of reasons why our test could fail, since we have added complexity in our test fixture with potentially failing components.\nHere Mockito comes to the rescue! Mockito allows us to create a suitable test double for the CityRepository interface and lets us define the behavior we expect from it. Applying this possibility we can create meaningful unit tests to ensure the correct behavior of the service.\nIn summary, what we want is a simple, fast, and reliable unit test instead of a potentially complex, slow, and flaky test!\nLet\u0026rsquo;s look at an example:\nclass CityServiceImplTest { // System under Test (SuT)  private CityService cityService; // Mock  private CityRepository cityRepository; @BeforeEach void setUp() { cityRepository = Mockito.mock(CityRepository.class); cityService = new CityServiceImpl(cityRepository); } // Test cases omitted for brevity.  } The test case consists of the system under test CityService and its dependencies. In this case, the only dependency is an instance of CityRepository. We need those references to test the expected behavior and reset the test double to not interfere with other test cases (more about that later).\nWithin the setup section, we create a test double with Mockito.mock(\u0026lt;T\u0026gt; classToMock). Then, we inject this test double into the CityServiceImpl constructor so that its dependencies are satisfied. 
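The constructor injection that makes this swap possible can be sketched in a few lines. The shapes of City, CityRepository and CityServiceImpl below are simplified assumptions for illustration, not the classes from the example project:

```java
import java.util.Optional;

// Simplified stand-ins for the article's types (assumed shapes, for illustration only).
interface CityRepository {
    Optional<City> find(long id);
}

record City(long id, String name) {}

class CityServiceImpl {

    private final CityRepository cityRepository;

    // The dependency arrives through the constructor. Because the service only
    // knows the CityRepository *interface*, a test can hand in a Mockito mock
    // instead of a database-backed implementation.
    CityServiceImpl(CityRepository cityRepository) {
        this.cityRepository = cityRepository;
    }

    City find(long id) {
        return cityRepository.find(id)
                .orElseThrow(() -> new IllegalArgumentException("no city with id " + id));
    }
}
```

Since CityRepository here has a single abstract method, even a lambda can act as a quick hand-rolled stub - Mockito.mock(CityRepository.class) produces a more capable double of the same shape.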
Now we are ready to create the test cases:\nclass CityServiceImplTest { // System under Test (SuT)  private CityService cityService; // Mock  private CityRepository cityRepository; @BeforeEach void setUp() { cityRepository = Mockito.mock(CityRepository.class); cityService = new CityServiceImpl(cityRepository); } @Test void find() throws Exception { City expected = createCity(); Mockito.when(cityRepository.find(expected.getId())) .thenReturn(Optional.of(expected)); City actual = cityService.find(expected.getId()); ReflectionAssert.assertReflectionEquals(expected, actual); } @Test void delete() throws Exception { City expected = createCity(); cityService.delete(expected); Mockito.verify(cityRepository).delete(expected); } } Here we have two example test cases.\nThe first one (find()) is about finding a city via the CityService. We create an instance of City as the object which we expect to be returned from the CityService. Now we have to advise the repository to return that value if - and only if - the declared ID has been provided.\nSince cityRepository is a Mockito mock, we can declare its behavior with Mockito.when(). Now we can call the find() method on the service, which will return an instance of City.\nHaving the expected and the actually returned City objects, we can assert that they have the same field values.\nIn case a method has no return value (like cityService.delete() in the code example), we can\u0026rsquo;t create an assertion on the return value. Here Mockito\u0026rsquo;s spy features come into play.\nWe can query the test double and ask if a method was called with the expected parameter. This is what Mockito.verify() does.\nThese two features - mocking return values and verifying method calls on test doubles - give us great power to create various simple test cases. Also, the shown examples can be used for test-driven development and regression tests. 
Mockito fits both needs!\nHow to Create Mocks with Mockito Until now, we have seen how to create fast and simple test cases. Now let\u0026rsquo;s look at the different ways of creating mocks for our needs. Before we continue, we must understand what kind of test double Mockito creates.\nMockito creates test doubles of the type mock, but they have some features of a spy. These extra features allow us to verify if a certain method was called after we executed our test case. More about that later.\nCreating Mocks with Plain Mockito Let\u0026rsquo;s continue with the first variant to create a mock with Mockito. This variant doesn\u0026rsquo;t require any framework or annotations. It is applicable in every project where we have included Mockito.\nCityRepository cityRepository = Mockito.mock(CityRepository.class); CityService cityService = new CityServiceImpl(cityRepository); We can simply declare a variable with the type of the component we want to mock. Taking the example from above, we want CityRepository to be a mock so that we don\u0026rsquo;t have to rely on its dependencies (like a database). The mock is then passed to the service, which is the system under test.\nThat\u0026rsquo;s all we need to set up our first mock with Mockito!\nInitializing Mocks with Mockito Annotations In case we have multiple dependencies that must be mocked, it gets cumbersome to create each and every mock manually with the variant shown above. So, we can also create mocks by using the @Mock annotation:\nclass CityServiceImplTestMockitoAnnotationStyle { // System under Test (SuT)  private CityService cityService; // Mock  @Mock private CityRepository cityRepository; @BeforeEach void setUp() { MockitoAnnotations.openMocks(this); cityService = new CityServiceImpl(cityRepository); } } We can annotate each field to be a mock with the @Mock annotation. Annotating them doesn\u0026rsquo;t initialize them yet. 
To do so, we call MockitoAnnotations.openMocks(this) in the @BeforeEach section of our test. The annotated fields of the provided object - which in our case is the class instance itself (this) - are then initialized and ready to use. We don\u0026rsquo;t have to deal with boilerplate code anymore and can keep our unit tests neat and concise.\nUsing JUnit Jupiter\u0026rsquo;s MockitoExtension As an alternative to the Mockito annotation style we can make use of JUnit Jupiter\u0026rsquo;s @ExtendWith and extend JUnit Jupiter\u0026rsquo;s context with MockitoExtension.class:\n@ExtendWith(MockitoExtension.class) class CityServiceImplTestMockitoJUnitExtensionStyle { // System under Test (SuT)  private CityService cityService; // Mock  @Mock private CityRepository cityRepository; @BeforeEach void setUp() { cityService = new CityServiceImpl(cityRepository); } } The extension takes over the initialization of annotated fields, so we don\u0026rsquo;t have to do it ourselves. This makes our setup even more neat and concise!\nInjecting Mocks with Spring If we have a more complex test fixture and want to inject the mock into Spring\u0026rsquo;s ApplicationContext, we can make use of @MockBean:\n@ExtendWith(SpringExtension.class) class CityServiceImplTestMockitoSpringStyle { // System under Test (SuT)  private CityService cityService; // Mock  @MockBean private CityRepository cityRepository; @BeforeEach void setUp() { cityService = new CityServiceImpl(cityRepository); } } Note that @MockBean is not an annotation from Mockito but from Spring Boot! In the startup process, Spring places the mock in the context so that we don\u0026rsquo;t need to do it ourselves. Wherever a bean requests to have its dependency satisfied, Spring injects the mock instead of the real object. 
This comes in handy if we want to have the same mock in different places.\nSee Mocking with Mockito and Spring Boot for a deep dive on how to mock Beans in Spring Boot.\nDefining the Behavior of Mocks In this section, we have a look at how to define the behavior of the mocks in our test. What we have seen until now is what mocks are used for and how to create them. We are ready to use them in our test cases.\nHow to Return an Expected Object Probably the most common case when using Mockito is to return expected objects. If we call findByName(name) on CityService we would expect that the argument for name is forwarded to the repository which returns an Optional of a City. The service unpacks the Optional if present or otherwise throws an exception.\n@Test void findByName() throws ElementNotFoundException { City expected = createCity(); Mockito.when(cityRepository.findByName(expected.getName())) .thenReturn(Optional.of(expected)); City actual=cityService.findByName(expected.getName()); ReflectionAssert.assertReflectionEquals(expected,actual); } We first create the expected object for City. Having that expected instance for a City, we can define the behavior of the mock which is to return the Optional of the expected instance. We do so by calling Mockito.when() with the call we want to make. As a last step, we must declare the return value of that call at the end of the method chain.\nIf we try to find the expected city by its name, the service will return the previously declared object without throwing an exception. We can assert that the expected City equals the actual City from the service.\nHow to Throw an Exception Mockito also gives us developers the possibility to throw exceptions instead of returning a value. 
This is mostly used to test error handling blocks in our code.\n@Test void findByNameThrowsExceptionIfCityNameContainsIllegalCharacter() { String cityName=\u0026#34;C!tyN@me\u0026#34;; Mockito.when(cityRepository.findByName(cityName)) .thenThrow(IllegalArgumentException.class); Assertions.assertThrows(IllegalArgumentException.class, () -\u0026gt; cityService.findByName(cityName)); } Declaring the behavior only differs by the last call in the method chain. With thenThrow(), we advise Mockito to throw an IllegalArgumentException in this case.\nIn our case, we just assert that our CityService implementation re-throws the exception.\nHow to Verify a Method Call We can\u0026rsquo;t advise Mockito to return a value on void methods. In this case, it is better to assert that an underlying component was called. This can be achieved by using Mockito.verify():\n@Test void delete() throws ElementNotFoundException { City expected = createCity(); cityService.delete(expected); Mockito.verify(cityRepository).delete(expected); } In this example, it isn\u0026rsquo;t necessary to declare the behavior of the mock beforehand. Instead, we just query the mock if it has been called during the test case. If not, the test case fails.\nHow To Verify the Number of Method Calls Mockito.verify(cityRepository, Mockito.times(1)).delete(expected); We can verify how many times a mock was called by simply using the built-in verify() method. If the condition is not met, our test case will fail. This is extremely handy for algorithms or similar processes. There are other predefined verification modes such as atLeastOnce() or never() already present and ready to use!\nMockito Best Practices Knowing how to create the mocks, let\u0026rsquo;s have a look at some best practices to keep our tests clean and maintainable. 
It will save us a lot of debugging time and won\u0026rsquo;t leave our team members guessing what the intent of a test case is.\nDon\u0026rsquo;t Share Mock Behavior Between Tests We might be tempted to put all behavior declarations using Mockito.when() into a setup method that runs before each test (i.e. annotated with @BeforeEach) to have them in a common place. Even though this reduces the test cases to a minimum, the readability suffers a lot:\n@BeforeEach void setUp() { expected = createCity(); cityRepository = Mockito.mock(CityRepository.class); cityService = new CityServiceImpl(cityRepository); // Avoid such complex declarations  Mockito.when(cityRepository.save(expected)) .thenReturn(Optional.of(expected)); Mockito.when(cityRepository.find(expected.getId())) .thenReturn(Optional.of(expected)); Mockito.when(cityRepository.findByName(expected.getName())) .thenReturn(Optional.of(expected)); Mockito.when(cityRepository.findAllByCanton(expected.getCanton())) .thenReturn(Collections.singleton(expected)); Mockito.when(cityRepository.findAllByCountry(expected.getCanton().getCountry())) .thenReturn(Collections.singleton(expected)); } This will get us simple test cases like this because we don\u0026rsquo;t have to define the behavior in each test case:\n@Test void save() throws ElementNotFoundException { ReflectionAssert.assertReflectionEquals(expected, cityService.save(expected)); } @Test void find() throws ElementNotFoundException { ReflectionAssert.assertReflectionEquals(expected, cityService.find(expected.getId())); } @Test void delete() throws ElementNotFoundException { cityService.delete(expected); Mockito.verify(cityRepository).delete(expected); } But, because all mocking behavior is in a central place, we must pay attention to not break any test cases when modifying this central code. Also, we don\u0026rsquo;t know which test case requires which behavior when reading the test case. 
We have to guess or investigate the actual code to find out.\nWe better declare the behavior for each test case in isolation, so that the test cases are independent of each other. The code from above should be refactored to something like the following:\n@BeforeEach void setUp() { cityRepository = Mockito.mock(CityRepository.class); cityService = new CityServiceImpl(cityRepository); } @Test void save() throws ElementNotFoundException { City expected = createCity(); Mockito.when(cityRepository.save(expected)) .thenReturn(Optional.of(expected)); City actual=cityService.save(expected); ReflectionAssert.assertReflectionEquals(expected,actual); } @Test void find() throws ElementNotFoundException { City expected = createCity(); Mockito.when(cityRepository.find(expected.getId())) .thenReturn(Optional.of(expected)); City actual=cityService.find(expected.getId()); ReflectionAssert.assertReflectionEquals(expected,actual); } @Test void delete() throws ElementNotFoundException { City expected = createCity(); cityService.delete(expected); Mockito.verify(cityRepository).delete(expected); } If we explicitly want to re-use a certain mock behavior in multiple test cases, we can move it into special methods like this:\nvoid givenCityExists(City city) throws ElementNotFoundException { Mockito.when(cityRepository.find(city.getId())) .thenReturn(Optional.of(city)); } @Test void find() throws ElementNotFoundException { City expected = createCity(); givenCityExists(expected); City actual=cityService.find(expected.getId()); ReflectionAssert.assertReflectionEquals(expected,actual); } We can then use these methods in the test cases like above. It\u0026rsquo;s important to make methods with shared mock behavior very specific and name them properly to keep the test cases readable.\nWrite Self-Contained Test Cases The unit tests we write should be runnable on any machine with the same result. They shouldn\u0026rsquo;t affect other test cases in any way. 
So we must write every unit test self-contained and independent of test execution order.\nIt\u0026rsquo;s likely that the errors in non-self-contained test cases are caused by setup blocks that declare behavior shared between test methods. If we need to add a new behavior at the end of the block, each previous declaration must be executed before we can call ours. Or vice versa: if a new declaration is inserted at the beginning, it shifts all other declarations towards the end. At this point, our alarm bells should ring, and it\u0026rsquo;s time to reconsider our test case!\nAvoid Mockito.reset() for Better Unit Tests Mockito recommends in its documentation to prefer recreation of mocks over resetting them:\n Smart Mockito users hardly use this feature because they know it could be a sign of poor tests. Normally, you don\u0026rsquo;t need to reset your mocks, just create new mocks for each test method.\n We better create simple and small test cases than lengthy and over-specified tests. The cause of such tests might be testing too much in a single unit test.
But let\u0026rsquo;s look at an example for this situation:\n@Test void findAndDelete() throws ElementNotFoundException { City expected = createCity(); Mockito.when(cityRepository.find(expected.getId())) .thenReturn(Optional.of(expected)); City actual = cityService.find(expected.getId()); ReflectionAssert.assertReflectionEquals(expected,actual); cityService.delete(expected); Mockito.verify(cityRepository).delete(expected); Mockito.reset(cityRepository); Mockito.when(cityRepository.find(expected.getId())) .thenReturn(Optional.empty()); Assertions.assertThrows(ElementNotFoundException.class, () -\u0026gt; cityService.find(expected.getId())); } What is this test case doing?\n Tries to find a city and asserts that it\u0026rsquo;s equal to the expected city Deletes a city and verifies that the delete method on the repository has been called Tries to find the previously created city again but expects an exception.  We must call Mockito.reset(cityRepository) to let Mockito forget what was declared before that line. This is necessary because we declared two different behaviors of cityRepository.find(expected.getId()) in the same test. This test case\u0026rsquo;s design is unfortunate.
It tests too much for one single test and could be split into simpler and smaller units:\n@BeforeEach void setUp() { cityRepository = Mockito.mock(CityRepository.class); cityService = new CityServiceImpl(cityRepository); } @Test void find() throws ElementNotFoundException { City expected = createCity(); Mockito.when(cityRepository.find(expected.getId())).thenReturn(Optional.of(expected)); City actual = cityService.find(expected.getId()); ReflectionAssert.assertReflectionEquals(expected,actual); } @Test void delete() throws ElementNotFoundException { City expected = createCity(); cityService.delete(expected); Mockito.verify(cityRepository).delete(expected); } @Test void findThrows() { City expected = createCity(); Mockito.when(cityRepository.find(expected.getId())).thenReturn(Optional.empty()); Assertions.assertThrows(ElementNotFoundException.class,()-\u0026gt;cityService.find(expected.getId())); } Now each test is simple and easily understandable. We don\u0026rsquo;t have to reset the mocks anymore, since this is achieved in the setUp() method. The code under test is the same, but the tests are a lot more meaningful than before.\nDon\u0026rsquo;t Mock Value Objects or Collections Mockito is a framework to mock objects with behavior that can be declared at the beginning of our test. It is common to have Data Transfer Objects (or DTOs). The intent of such a DTO is, as its name says, to transport data from a source to a destination. To retrieve this data from the object, we could declare the behavior of each getter. Although this is possible, we should rather use real values and set them on the DTO.
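To make the point concrete with plain Java (no Mockito involved), here is a minimal sketch using a hypothetical City class: we simply construct the object with real values instead of stubbing its getters.

```java
// A minimal, Mockito-free sketch. The City class here is a hypothetical
// stand-in for the DTO used in this article -- we give it real values
// instead of declaring the behavior of each getter with a mock.
public class CityDtoExample {

    public static class City {
        private final String name;

        public City(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }
    }

    public static void main(String[] args) {
        // No Mockito.when(...) declarations needed: just set real values.
        City city = new City("Zurich");
        System.out.println(city.getName()); // prints "Zurich"
    }
}
```

The real object behaves exactly as production code would, so the test cannot drift away from the actual getter logic.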
The same rule applies to collections too, since they are a container for values as well.\nAs explained, it is possible to mock a City, which is a wrapper for the city name and other properties.\n@Test void mockCity() { String cityName = \u0026#34;MockTown\u0026#34;; City mockTown = Mockito.mock(City.class); Mockito.when(mockTown.getName()).thenReturn(cityName); Assertions.assertEquals(cityName, mockTown.getName()); } It\u0026rsquo;s not worth the effort to declare the behavior for numerous getters of an object. We better create a real object containing the values instead of re-declaring the implicitly clear behavior of objects. Now let\u0026rsquo;s see a mocked List:\n@Test void mockList() { List\u0026lt;City\u0026gt; cities = Mockito.mock(List.class); City city = createCity(); City anotherCity = createCity(); Mockito.when(cities.get(0)).thenReturn(city); Mockito.when(cities.get(1)).thenReturn(anotherCity); assertEquals(city, cities.get(0)); assertEquals(anotherCity, cities.get(1)); } There is no value added in mocking the list. It\u0026rsquo;s even harder to understand what we expected from our list. In comparison with a real List (e.g. ArrayList), things get clearer right away:\n@Test void mockListResolution() { List\u0026lt;City\u0026gt; cities = new ArrayList\u0026lt;\u0026gt;(); City city = createCity(); City anotherCity = createCity(); cities.add(city); cities.add(anotherCity); assertEquals(city, cities.get(0)); assertEquals(anotherCity, cities.get(1)); } By using mocks for collections, we might hide the natural behavior of a List. In the worst case, our application fails in production because we assumed a List to behave differently from how it actually does!\nMockito is a framework to mock the behavior of components based on values, not to mock values.
This means that we better create tests for components that process DTOs rather than for the DTOs themselves.\nTesting Error Handling with Mockito Mockito.when(cityRepository.find(expected.getId())) .thenThrow(RuntimeException.class); We often only test the happy flow of our application. But how do we test the correct behavior in our try-catch blocks? Mockito has the answer: instead of declaring a return value, we can declare an exception to be thrown. This allows us to write unit tests that ensure our try-catch blocks work as expected!\nImportant to know: the compiler doesn\u0026rsquo;t let us throw checked exceptions that are not declared on the mocked method!\nMockito FAQ In this section, we want to point out important things which are nice to know.\n What types can I mock? Mockito allows us to mock not only interfaces but also concrete classes. What is returned if I don\u0026rsquo;t declare a mock\u0026rsquo;s behavior? Mockito by default returns null for complex objects, and the default values for primitive data types (for example 0 for int and false for boolean). How many times does Mockito return a previously declared value? If we have declared a return value once, Mockito always returns the same value, regardless of how many times a method is called. If we have multiple calls to Mockito.when() with different return values, the first method call will return the first declared value, the second method call the second value, and so on. Can I mock final classes? No, final classes can\u0026rsquo;t be mocked and neither can final methods. This has to do with the internal mechanism of how Mockito creates the mock and the Java Language Specification. If we want to do so, we can use PowerMock. Can I mock a constructor? Mockito can\u0026rsquo;t mock constructors, static methods, equals() or hashCode() out of the box. To achieve that, PowerMock must be used.  Pros and Cons Mockito helps us to create simple mocks fast.
The Mockito API is easy to read since it allows us to write tests in fluent style. Mockito can be used in plain Java projects or together with frameworks such as Spring Boot. It is well documented and provides lots of examples. In case of problems, there is a huge community behind it and questions are answered frequently on StackOverflow. It provides great flexibility to its users, who can contribute their ideas since it is an open-source project. Therefore, the development is ongoing, and the project is maintained.\nMockito can\u0026rsquo;t mock everything out of the box. In case we want to mock final or static methods, equals() or the construction of an object, we need PowerMock.\nConclusion In this post, we learned how to create mocks for unit tests in various variants. Mockito gives us a lot of flexibility, and the freedom to choose between numerous tools to achieve our goals. When working in teams, we should define a common language and a Mockito code style guideline on how we want to use this powerful tool for testing. This will improve our productivity and make it easier to discuss and communicate about our tests.\nAlthough Mockito comes with a lot of features, be aware of its restrictions. Instead of spending time making the impossible possible, we should reconsider our approach to testing a scenario.\nYou will find all examples on GitHub.\n","date":"April 25, 2021","image":"https://reflectoring.io/images/stock/0052-mock-1200x628-branded_hu6cd8324df61b792144dc37534f748771_62678_650x0_resize_q90_box.jpg","permalink":"/clean-unit-tests-with-mockito/","title":"Clean Unit Tests with Mockito"},{"categories":["AWS"],"contents":"Provisioning infrastructure resources has always been a time-consuming manual process.
Infrastructure has now moved away from physical hardware in data centers to software-defined infrastructure using virtualization technology and cloud computing.\nAll major cloud providers offer services for the creation and modification of infrastructure resources through code, like AWS CloudFormation and Azure Resource Manager. Terraform provides a common language for creating infrastructure for multiple cloud providers, thereby becoming a key enabler for multi-cloud computing.\nIn this post, we will look at the capabilities of Terraform with examples of creating resources in the AWS cloud.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n  Example Code This article is accompanied by a working code example on GitHub. Infrastructure as Code with Terraform Infrastructure as Code (IaC) is the managing and provisioning of infrastructure through code instead of a manual process. From the website of Terraform:\n \u0026ldquo;Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services.\u0026rdquo;\n For defining resources with Terraform, we specify the provider in a configuration file and add configurations for the resources in one or more files.\nTerraform is logically split into two main parts:\n Terraform Core Terraform plugins  Terraform Core is a binary written in Go and provides the Terraform command-line interface (CLI).\nA Terraform plugin is an executable binary also written in Go and exposes an implementation for a specific service, like AWS or Azure, or a provisioner, like bash.\nAll providers and provisioners used in Terraform configurations are plugins.
Terraform Core communicates with plugins using remote procedure calls (RPC), manages resource state, and constructs the resource graph.\nThe Terraform AWS provider is a plugin for Terraform that allows for the full lifecycle management of AWS resources.\nTerraform Setup For running our examples, let us download a binary distribution for our specific operating system for local installation. We will use this to install the Terraform command-line interface (CLI) where we will execute different Terraform commands. We can check for successful installation by running the below command:\nterraform -v This gives the below output on my macOS showing the version of Terraform that is installed:\nTerraform v0.15.0 on darwin_amd64 We can view the list of all Terraform commands by running the terraform command without any arguments:\nterraform ... ... Main commands: init Prepare your working directory for other commands validate Check whether the configuration is valid plan Show changes required by the current configuration apply Create or update infrastructure destroy Destroy previously-created infrastructure All other commands: console Try Terraform expressions at an interactive command prompt fmt ... ... We will use the main commands init, plan, and apply throughout this post.\nSince we will be creating resources in AWS, we will also set up the AWS CLI by running the below command:\naws configure When prompted, we will provide the AWS access key ID and secret access key and choose a default region and output format:\nAWS Access Key ID [****************2345]: .... AWS Secret Access Key [****************2345]: ... Default region name [us-east-1]: Default output format [json]: We are using us-east-1 as the region and JSON as the output format.\nFor more details about the AWS CLI, have a look at our CloudFormation article.\nTerraform Concepts with a Simple Workflow For a basic workflow in Terraform, we first design the infrastructure resources in a configuration file.
We call this activity defining our \u0026ldquo;desired state\u0026rdquo;. We then use this configuration to create the actual infrastructure.\nThe configuration is defined in Terraform language using a JSON-like syntax called Hashicorp Configuration Language (HCL) that tells Terraform how to manage a collection of infrastructure resources. A configuration can consist of one or more files and directories.\nThe Terraform Development Loop We start with our \u0026ldquo;desired state\u0026rdquo; which is the collection of infrastructure resources we wish to create. When we run the plan command, Terraform pulls the actual resource information from the provider and compares it with the \u0026ldquo;desired state\u0026rdquo;. It then outputs a report containing the changes which will happen when the configuration is applied (during the apply stage).\nThe main steps for any basic task with Terraform are:\n Configure the \u0026ldquo;desired state\u0026rdquo; in Terraform files (*.tf). Initialize the workspace using the command terraform init. Create the plan using terraform plan. Apply the plan using terraform apply. Destroy the provisioned resources with terraform destroy, when we want to dispose of the infrastructure.  
Let us go through each of these steps.\nConfiguring the Desired State Let us define our Terraform configuration in the Terraform language in a file main.tf:\nterraform { required_providers { aws = { source = \u0026#34;hashicorp/aws\u0026#34; version = \u0026#34;~\u0026gt; 3.27\u0026#34; } } } provider \u0026#34;aws\u0026#34; { profile = \u0026#34;default\u0026#34; region = \u0026#34;us-west-2\u0026#34; } resource \u0026#34;aws_instance\u0026#34; \u0026#34;vm-web\u0026#34; { ami = \u0026#34;ami-830c94e3\u0026#34; instance_type = \u0026#34;t2.micro\u0026#34; tags = { Name = \u0026#34;server for web\u0026#34; Env = \u0026#34;dev\u0026#34; } } Here we are creating an AWS EC2 instance named \u0026ldquo;vm-web\u0026rdquo; of type t2.micro using an AMI (Amazon Machine Image) ami-830c94e3. We also associate two tags with the names Name and Env with the EC2 instance.\nWe can also see the three main parts of the configuration:\n  Resource: We define our infrastructure in terms of resources. Each resource block in the configuration file describes one or more infrastructure objects. S3 bucket, Lambda function, or their equivalents from other Cloud platforms are some examples of different resource types.\n  Provider: Terraform uses providers to connect to remote systems. Each resource type is implemented by a provider. Most providers configure a specific infrastructure platform (either cloud or self-hosted). Providers can also offer local utilities for tasks like generating random numbers for unique resource names.\n  Terraform Settings: We configure some behaviors of Terraform like the minimum Terraform version in the terraform block. Here we also specify all of the providers, each with a source address and a version constraint required by the current module, using the required_providers block.\n  Initializing the Working Directory We run Terraform commands from a working directory that contains one or more configuration files.
Terraform reads configuration content from this directory, and also uses this directory to store settings, caches for plugins and modules, and sometimes state data.\nThis working directory must be initialized before Terraform can perform any operations like provisioning infrastructure or modifying state.\nLet us now create a working directory and save under it the configuration file that we created in the previous step. We will now initialize our working directory by running the terraform init command.\nAfter running this command, we get this output:\nInitializing the backend... Initializing provider plugins... - Checking for available provider plugins... - Downloading plugin for provider \u0026#34;aws\u0026#34; (hashicorp/aws) 3.36.0... Terraform has been successfully initialized! ... From the output, we can see initialization messages for the backend and provider plugins.\nThe backend is used to store state information. Here we are using the default local backend, which requires no configuration.\nIn real-life situations, a remote backend should be used where state information can be persisted. This is required in projects where multiple individuals work with the same infrastructure.\nThe first run of this command will download the plugins required for the configured provider.\nOur working directory contents after running the terraform init command look like this:\n├── .terraform │ └── plugins │ └── darwin_amd64 │ ├── lock.json │ └── terraform-provider-aws_v3.36.0_x5 └── main.tf The plugin for the configured provider AWS is downloaded and stored as terraform-provider-aws_v3.36.0_x5.\nCreating the Plan We can generate an execution plan by running the terraform plan command. 
Terraform first performs a refresh and then determines the actions required to achieve the desired state specified in the configuration files.\nThis command is a convenient way to check whether the execution plan for a set of changes matches our expectations without making any changes to real resources.\nLet us run the terraform plan command to generate an execution plan:\nterraform plan -out aws-app-stack-plan We specify the optional -out argument to save the generated plan to a file aws-app-stack-plan for later execution with terraform apply, which can be useful when running Terraform in automation environments.\nRunning the terraform plan command gives the following output:\nRefreshing Terraform state in-memory prior to plan... The refreshed state will be used to calculate this plan, but will not be persisted to local or remote state storage. ------------------------------------------------------------------------ An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # aws_instance.vm-web will be created + resource \u0026#34;aws_instance\u0026#34; \u0026#34;vm-web\u0026#34; { + ami = \u0026#34;ami-830c94e3\u0026#34; + arn = (known after apply) ... ... Plan: 1 to add, 0 to change, 0 to destroy. ------------------------------------------------------------------------ This plan was saved to file `aws-app-stack-plan` To perform exactly these actions, run the following command to apply: terraform apply \u0026#34;aws-app-stack-plan\u0026#34; From the output, we can see that one resource will be added (the EC2 instance), zero changed and zero destroyed. No actual changes to the infrastructure have taken place yet. The plan is saved in the file specified in the output.\nApplying the Plan We use the terraform apply command to apply our changes and create or modify our infrastructure.
By default, apply scans the current directory for the configuration and applies the changes appropriately. However, we can give the path to a file that was previously created by running terraform plan.\nLet us now run the terraform apply command to create or update the resources using the plan file we created in the previous step:\nterraform apply \u0026#34;aws-app-stack-plan\u0026#34; After running this command, we can see the resources getting created in the output log:\naws_instance.vm-web: Creating... aws_instance.vm-web: Still creating... [10s elapsed] aws_instance.vm-web: Still creating... [20s elapsed] aws_instance.vm-web: Still creating... [30s elapsed] aws_instance.vm-web: Creation complete after 35s [id=i-0f07186f0c1481df4] Apply complete! Resources: 1 added, 0 changed, 0 destroyed. The state of your infrastructure has been saved to the path below. This state is required to modify and destroy your infrastructure, so keep it safe. To inspect the complete state use the `terraform show` command. State path: terraform.tfstate Here we come across the important concept of terraform state. After applying our changes to the infrastructure, the state of the infrastructure is stored locally in a file terraform.tfstate.\nIf we do not give a plan file on the command line, running terraform apply creates a new plan automatically and then prompts for approval to apply it. If the created plan does not include any changes to resources or root module output values then running terraform apply exits immediately, without prompting.\nDestroy At last, we destroy our infrastructure by running the terraform destroy command.\nRunning the destroy command first asks for a confirmation and proceeds to delete the infrastructure on receiving a yes answer:\nPlan: 0 to add, 0 to change, 1 to destroy. Do you really want to destroy all resources? Terraform will destroy all your managed infrastructure, as shown above. There is no undo. 
Only \u0026#39;yes\u0026#39; will be accepted to confirm. Enter a value: yes aws_instance.vm-web: Destroying... [id=i-0f07186f0c1481df4] ... aws_instance.vm-web: Destruction complete after 48s Destroy complete! Resources: 1 destroyed. The output log states the number of resources destroyed: one EC2 instance in this case.\nParameterizing the Configuration with Input Variables In our last example, instead of putting the values of ami, tag, and instance type directly in the configuration file, we can use variables to allow these aspects of our configuration to be modified without changing the source code. We can receive their values when applying the configuration.\nLet us modify the configuration file (main.tf) created earlier with variables for instance type:\nresource \u0026#34;aws_instance\u0026#34; \u0026#34;vm-web\u0026#34; { ami = \u0026#34;ami-830c94e3\u0026#34; instance_type = var.ec2_instance_type tags = { Name = \u0026#34;server for web\u0026#34; Env = \u0026#34;dev\u0026#34; } } As we can see here, we have introduced a variable by the name ec2_instance_type in our resource configuration. We have declared our variable in a file variables.tf in a variable block as shown here:\nvariable \u0026#34;ec2_instance_type\u0026#34; { description = \u0026#34;AWS EC2 instance type.\u0026#34; type = string } This is a variable of type string with an appropriate description. We can similarly declare variables of types number and bool and complex types like list, map, set and tuple. Some additional arguments we can specify for a variable are default, validation, and sensitive.\nWhen we run the plan, it prompts for the value of the variable:\nterraform plan var.ec2_instance_type AWS EC2 instance type. Enter a value: t2.micro We supply a value t2.micro to allow Terraform to create our desired EC2 instance.
Apart from this method of setting variable values, we can define the values in a variable definition file ending in .tfvars and specify the file on the command line.\nOrganizing and Reusing Configurations with Modules In our previous example, we represented our architecture by directly creating an EC2 instance. In real-life situations, our application stack will have many more resources with dependencies between them.\nWe might also like to reuse certain constructs for the consistency and compactness of our configuration code. Functions fulfill this need in programming languages. Terraform has a similar concept called modules. Similar to functions, a module has an input, output, and a body.\nModules are the main way to package and reuse resource configurations with Terraform. A module is most often a grouping of one or more resources that are used to represent a logical component in the architecture. For example, we might create our infrastructure with two logical constructs (modules): a module for the application composed of EC2 instances and ELB and another module for storage composed of S3 and RDS.\nEvery Terraform configuration has at least one module called the root module that has the resources defined in the .tf files in the main working directory. A module can call other modules.\nLet us create two modules for our application stack, one for creating an EC2 instance and another for creating an S3 bucket. Our directory structure now looks like this:\n├── main.tf └── modules ├── application │ ├── main.tf │ ├── outputs.tf │ └── variables.tf └── storage ├── main.tf ├── outputs.tf └── variables.tf Here we have defined two child modules named application and storage under the modules folder, which will be invoked from the root module. Each of these modules has a configuration file main.tf (it can also be any other name), input variables in variables.tf, and output variables in outputs.tf.
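As an illustration of what such a child module contains, here is a minimal sketch of the storage module. The aws_s3_bucket arguments shown are an assumption for this example; the bucket_name and env input variables match the module invocation used in this article:

```hcl
# modules/storage/variables.tf -- input variables of the storage module
variable "bucket_name" {
  description = "Name of the S3 bucket."
  type        = string
}

variable "env" {
  description = "Deployment environment."
  type        = string
}

# modules/storage/main.tf -- a minimal S3 bucket resource (assumed shape)
resource "aws_s3_bucket" "storage" {
  bucket = var.bucket_name

  tags = {
    Env = var.env
  }
}
```

The root module supplies values for these variables when it invokes the module, keeping the bucket definition reusable across stacks.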
The main.tf of the application module defines the EC2 resource:\nresource \u0026#34;aws_instance\u0026#34; \u0026#34;vm-web\u0026#34; { ami = var.ami instance_type = var.ec2_instance_type tags = var.tags } Here we define the resource with variables declared in variables.tf:\nvariable \u0026#34;ec2_instance_type\u0026#34; { description = \u0026#34;Instance type\u0026#34; type = string } variable \u0026#34;ami\u0026#34; { description = \u0026#34;ami id\u0026#34; type = string } variable \u0026#34;tags\u0026#34; { description = \u0026#34;Tags to set on the instance.\u0026#34; type = map(string) default = {Name = \u0026#34;server for web\u0026#34; Env = \u0026#34;dev\u0026#34;} } We are declaring three variables: ec2_instance_type and ami are of type string and the variable tags is of type map with a default value. Our main configuration now invokes these modules instead of directly declaring the resources:\nterraform { required_providers { aws = { source = \u0026#34;hashicorp/aws\u0026#34; version = \u0026#34;~\u0026gt; 3.27\u0026#34; } } } provider \u0026#34;aws\u0026#34; { profile = \u0026#34;default\u0026#34; region = \u0026#34;us-west-2\u0026#34; } module \u0026#34;app_server\u0026#34; { source = \u0026#34;./modules/application\u0026#34; ec2_instance_type = \u0026#34;t2.micro\u0026#34; ami = \u0026#34;ami-830c94e3\u0026#34; tags = { Name = \u0026#34;server for web\u0026#34; Env = \u0026#34;dev\u0026#34; } } module \u0026#34;app_storage\u0026#34; { source = \u0026#34;./modules/storage\u0026#34; bucket_name = \u0026#34;io.pratik.tf-example-bucket\u0026#34; env = \u0026#34;dev\u0026#34; } During invocation of the child modules, we are using the module construct with a source argument containing the path of the child modules application and storage. Here we are using the local directory to store our modules.\nOther than the local path, we can also use different source types like a Terraform registry, GitHub, S3, etc. to reuse modules published by other individuals or teams.
When using remote sources, Terraform will download these modules when we run terraform init and store them in the local directory.\nTerraform Cloud and Terraform Enterprise We ran Terraform using the Terraform CLI, which performed operations on the workstation where it was invoked and stored state in a local working directory. This is called the \u0026ldquo;local workflow\u0026rdquo;.\nHowever, we will need a remote workflow when using Terraform in a team, which requires the state to be shared and Terraform to run in a remote environment.\nTerraform has two more variants, Terraform Cloud and Terraform Enterprise, for using Terraform in a team environment:\n  Terraform Cloud is a hosted service at https://app.terraform.io where Terraform runs on disposable virtual machines in its cloud infrastructure.\n  Terraform Enterprise is available for hosting in a private data center which might be an option preferred by large enterprises.\n  Let us run remote plans in Terraform Cloud from our local command line, also called the \u0026ldquo;CLI workflow\u0026rdquo;. First, we need to log in to https://app.terraform.io after creating an account with our email address.
Similar to our working directory in the CLI, we will create a workspace with a \u0026ldquo;CLI-driven workflow\u0026rdquo;.\nWe will modify our configuration to add a backend block to configure our remote backend as shown here:\nterraform { backend \u0026#34;remote\u0026#34; { hostname = \u0026#34;app.terraform.io\u0026#34; organization = \u0026#34;pratikorg\u0026#34; token = \u0026#34;pj7p5*************************************************czt62p1bs\u0026#34; workspaces { name = \u0026#34;my-tf-workspace\u0026#34; } } required_providers { aws = { source = \u0026#34;hashicorp/aws\u0026#34; version = \u0026#34;~\u0026gt; 3.36\u0026#34; } } } We configure AWS credentials by adding two environment variables for the AWS access key ID and secret access key.\nRunning the terraform plan command will start a remote run in the configured Terraform Cloud workspace and output the following log:\nRunning plan in the remote backend. Output will stream here. Pressing Ctrl-C will stop streaming the logs, but will not stop the plan running remotely. Preparing the remote plan... To view this run in a browser, visit: https://app.terraform.io/app/pratikorg/my-tf-workspace/runs/run-Q2PMW9pCRtqXiKqh Waiting for the plan to start... Terraform v0.15.0 on linux_amd64 Configuring remote state backend... Initializing Terraform configuration... Terraform Configuration with Version Control Systems for Continuous Integration Apart from the CLI workflow, Terraform Cloud/Enterprise has two more types of workflow targeted for continuous integration.\nHere the Terraform workspace is connected to a repository on one of the supported version control systems which provides Terraform configurations for that workspace. Terraform Cloud monitors new commits and pull requests to the repository using webhooks.
After any commit to a branch, a Terraform Cloud workspace based on that branch will run Terraform.\nWe can find detailed documentation for configuring Terraform for specific VCS providers by following their respective links.\nConclusion In this post, we introduced the following concepts of Terraform with examples of creating resources in AWS Cloud:\n A resource is the basic building block of creating infrastructure with Terraform. Plugins are executable Go binaries that implement support for a specific service, like AWS or Azure. Terraform resources are defined in a configuration file ending with .tf and written in the Terraform language using HCL syntax. Modules are used for organizing and grouping resources to create logical abstractions. The basic workflow is composed of the init-plan-apply cycle. The Terraform backend is configured as local or remote and determines where state information is stored. Terraform Cloud and Terraform Enterprise use remote backends and are suitable for use in team environments.  These concepts should help you to get started with Terraform and inspire you to explore more advanced features like automation, extensibility, and integration capabilities.\nYou can refer to all the source code used in the article on GitHub.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"April 22, 2021","image":"https://reflectoring.io/images/stock/0099-desert-1200x628-branded_hu9ee5a464f6a70cfb3903984fe15aaa43_72854_650x0_resize_q90_box.jpg","permalink":"/terraform-aws/","title":"Using Terraform to Deploy AWS Resources"},{"categories":["Java"],"contents":"Are you just beginning your programming career?
Or have you dabbled a bit in programming but want to get into Java?\nThen this article is for you. We\u0026rsquo;ll go from zero to building a robot arena in Java.\nIf you get stuck anywhere in this tutorial, know that this is totally fine. In this case, you might want to learn Java on CodeGym. They take you through detailed and story-based Java tutorials with in-browser coding exercises that are ideal for Java beginners.\nHave fun building robots with Java!\n Example Code This article is accompanied by a working code example on GitHub. Getting Ready to Code Before we can start writing code, we have to set up our development environment. Don\u0026rsquo;t worry, this is not going to be complicated. The only thing we need for now is to install an IDE or \u0026ldquo;Integrated Development Environment\u0026rdquo;. An IDE is a program that we\u0026rsquo;ll use for programming.\nWhen I\u0026rsquo;m working with Java, IntelliJ is my IDE of choice. You can use whatever IDE you\u0026rsquo;re comfortable with, but for this tutorial, I\u0026rsquo;ll stick with instructions on how to work with IntelliJ.\nSo, if you haven\u0026rsquo;t already, download and install the free community edition of IntelliJ for your operating system here. I\u0026rsquo;ll wait while you\u0026rsquo;re downloading it.\nIntelliJ is installed and ready? Let\u0026rsquo;s get started, then!\nBefore we get our hands dirty on code, we create a new Java project in IntelliJ.
When you start IntelliJ for the first time, you should see a dialog that looks something like this:\nClick on \u0026ldquo;New project\u0026rdquo; to open this dialog:\nIf you have a different IntelliJ project open already, you can reach the \u0026ldquo;New project\u0026rdquo; dialog through the option \u0026ldquo;File -\u0026gt; New -\u0026gt; Project\u0026rdquo;.\nIf the \u0026ldquo;Project SDK\u0026rdquo; drop-down box shows \u0026ldquo;No JDK\u0026rdquo;, select the option \u0026ldquo;Download JDK\u0026rdquo; in the dropdown box to install a JDK (Java Development Kit) before you continue.\nThen, click \u0026ldquo;Next\u0026rdquo;, click \u0026ldquo;Next\u0026rdquo; again, enter \u0026ldquo;robot-arena\u0026rdquo; as the name of the project, and finally click \u0026ldquo;Finish\u0026rdquo;.\nCongratulations, you have just created a Java project! Now, it\u0026rsquo;s time to create some code!\nLevel 1 - Hello World Let\u0026rsquo;s start with the simplest possible program, the infamous \u0026ldquo;Hello World\u0026rdquo; (actually, in Java there are already quite a few concepts required to build a \u0026ldquo;Hello World\u0026rdquo; program \u0026hellip; it\u0026rsquo;s definitely simpler in other programming languages).\nThe goal is to create a program that simply prints \u0026ldquo;Hello World\u0026rdquo; to a console.\nIn your fresh Java project, you should see the following folder structure on the left:\nThere are folders named .idea and out, in which IntelliJ stores some configuration and compiled Java classes \u0026hellip; we don\u0026rsquo;t bother with them for now.\nThe folder we\u0026rsquo;re interested in is the src folder, which is short for \u0026ldquo;source\u0026rdquo;, or rather \u0026ldquo;source code\u0026rdquo; or \u0026ldquo;source files\u0026rdquo;. This is where we put our Java files.\nIn this folder, create a new package by right-clicking on it and selecting \u0026ldquo;New -\u0026gt; Package\u0026rdquo;.
Call the package \u0026ldquo;level1\u0026rdquo;.\nPackages  In Java, source code files are organized into so-called \"packages\". A package is just a folder in your file system and can contain files and other packages, just like a normal file system folder.  In this tutorial, we'll create a separate package for each chapter (or \"level\") with all the source files we need for that chapter.  In the package level1, go ahead and create a new Java file by right-clicking on it and selecting \u0026ldquo;New -\u0026gt; Java Class\u0026rdquo;. Call this new class \u0026ldquo;Application\u0026rdquo;.\nCopy the following code block into your new file (replacing what is already there):\npackage level1; public class Application { public static void main(String[] arguments){ System.out.println(\u0026#34;Hello World\u0026#34;); } } Java programs are organized into \u0026ldquo;classes\u0026rdquo;, where each class is usually in its own separate Java file with the same name as the class (more about classes later). You will see that IntelliJ has created a file with the name Application.java and the class within is also called Application. Each class is in a certain package, which is declared with package level1; in our case above.\nOur Application class contains a method called main(). A class can declare many methods like that with names that we choose - we\u0026rsquo;ll see how later in this tutorial. A method is a unit of code in a class that we can execute. It can have input in the form of arguments and output in the form of a return value. Our main() method takes an array of Strings as input and returns a void output, which means it returns no output (check out the vocabulary at the end of this article if you want to recap what a certain term means).\nA method named main() with the public and static modifiers is a special method because it\u0026rsquo;s considered the entry point into our program.
When we tell Java to run our program, it will execute this main() method.\nLet\u0026rsquo;s do this now. Run the program by right-clicking the Application class in the project explorer on the left side and select \u0026ldquo;Run \u0026lsquo;Application.main()'\u0026rdquo; from the context menu.\nIntelliJ should now open up a console and run the program for us. You should see the output \u0026ldquo;Hello World\u0026rdquo; in the console.\nCongratulations! You have just run your first Java program! We executed the main() method which printed out some text. Feel free to play around a bit, change the text, and run the application again to see what happens.\nLet\u0026rsquo;s now explore some more concepts of the Java language in the next level.\nLevel 2 - Personalized Greeting Let\u0026rsquo;s modify our example somewhat to get to know about some more Java concepts.\nThe goal in this level is to make the program more flexible, so it can greet the person executing the program.\nFirst, create a new package level2, and create a new class named Application in it. Paste the following code into that class:\npackage level2; public class Application { public static void main(String[] arguments){ String name = arguments[0]; System.out.println(\u0026#34;Hello, \u0026#34; + name); } } Let\u0026rsquo;s inspect this code before we execute it. We added the line String name = arguments[0];, but what does it mean?\nWith String name, we declare a variable of type String. A variable is a placeholder that can hold a certain value, just like in a mathematical equation. In this case, this value is of the type String, which is a string of characters (you can think of it as \u0026ldquo;text\u0026rdquo;).\nWith String name = \u0026quot;Bob\u0026quot;, we would declare a String variable that holds the value \u0026ldquo;Bob\u0026rdquo;. 
You can read the equals sign as \u0026ldquo;is assigned the value of\u0026rdquo;.\nWith String name = arguments[0], finally, we declare a String variable that holds the value of the first entry in the arguments variable. The arguments variable is passed into the main() method as an input parameter. It is of type String[], which means it\u0026rsquo;s an array of String variables, so it can contain more than one string. With arguments[0], we\u0026rsquo;re telling Java that we want to take the first String variable from the array.\nThen, with System.out.println(\u0026quot;Hello, \u0026quot; + name);, we print out the string \u0026ldquo;Hello, \u0026quot; and add the value of the name variable to it with the \u0026ldquo;+\u0026rdquo; operator.\nWhat do you think will happen when you execute this code? Try it out and see if you\u0026rsquo;re right.\nMost probably, you will get an error message like this:\nException in thread \u0026#34;main\u0026#34; java.lang.ArrayIndexOutOfBoundsException: Index 0 out of bounds for length 0 at level2.Application.main(Application.java:5) The reason for this error is that in line 5, we\u0026rsquo;re trying to get the first value from the arguments array, but the arguments array is empty. There is no first value to get. Java doesn\u0026rsquo;t like that and tells us by throwing this exception at us.\nTo solve this, we need to pass at least one argument to our program, so that the arguments array will contain at least one value.\nTo add an argument to the program call, right-click on the Application class again, and select \u0026ldquo;Modify Run Configuration\u0026rdquo;. In the field \u0026ldquo;Program arguments\u0026rdquo;, enter your name. Then, execute the program again. 
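As an aside, here is a way to avoid the exception altogether: check the length of the arguments array before accessing it and fall back to a default name. This is my own sketch, not part of the original tutorial, and the class name SafeApplication is made up for this example:

```java
public class SafeApplication {

    // Builds the greeting, falling back to "World" when no argument was passed.
    static String greet(String[] arguments) {
        String name = arguments.length > 0 ? arguments[0] : "World";
        return "Hello, " + name;
    }

    public static void main(String[] arguments) {
        System.out.println(greet(arguments));
    }
}
```

Run it with and without a program argument: with no argument, it prints "Hello, World" instead of throwing an ArrayIndexOutOfBoundsException.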
The program should now greet you with your name!\nChange the program argument to a different name and run the application again to see what happens.\nLevel 3 - Play Rock, Paper, Scissors with a Robot Let\u0026rsquo;s add some fun by programming a robot!\nIn this level, we\u0026rsquo;re going to create a virtual robot that can play Rock, Paper, Scissors.\nFirst, create a new package level3. In this package, create a Java class named Robot and copy the following content into it:\npackage level3; import java.util.Random; class Robot { String name; Random random = new Random(); Robot(String name) { this.name = name; } String rockPaperScissors() { int randomNumber = this.random.nextInt(3); if (randomNumber == 0) { return \u0026#34;rock\u0026#34;; } else if (randomNumber == 1) { return \u0026#34;paper\u0026#34;; } else { return \u0026#34;scissors\u0026#34;; } } } Let\u0026rsquo;s go through this code to understand it:\nWith class Robot, we declare a new class with the name \u0026ldquo;Robot\u0026rdquo;. As mentioned before, a class is a unit to organize our code. But it\u0026rsquo;s more than that. We can use a class as a \u0026ldquo;template\u0026rdquo;. In our case, the Robot class is a template for creating robots. We can use the class to create one or more robots that can play Rock, Paper, Scissors.\nLearning Object-Oriented Programming  If you haven't been in contact with object-oriented programming before, the concepts of classes and objects can be a lot to take in. Don't worry if you don't understand all the concepts from reading this article alone ... it'll come with practice.  If you want to go through a more thorough, hands-on introduction to object-oriented programming with Java, you might want to take a look at CodeGym.  A class can have attributes and methods. Let\u0026rsquo;s look at the attributes and methods of our Robot class.\nA robot shall have a name, so with String name; we declare an attribute with the name \u0026ldquo;name\u0026rdquo; and the type String.
An attribute is just a variable that is bound to a class.\nWe\u0026rsquo;ll look at the other attribute with the name random later.\nThe Robot class then declares two methods:\n The Robot() method is another special method. It\u0026rsquo;s a so-called \u0026ldquo;constructor\u0026rdquo; method. The Robot() method is used to construct a new object of the class (or type) Robot. Since a robot must have a name, the constructor method expects a name as an input parameter. With this.name = name we set the name attribute of the class to the value that was passed into the constructor method. We\u0026rsquo;ll later see how that works. The rockPaperScissors() method is the method that allows a robot to play Rock, Paper, Scissors. It does not require any input, but it returns a String object. The returned String will be one of \u0026ldquo;rock\u0026rdquo;, \u0026ldquo;paper\u0026rdquo;, or \u0026ldquo;scissors\u0026rdquo;, depending on a random number. With this.random.nextInt(3) we use the random number generator that we have initialized in the random attribute to create a random number between 0 and 2. Then, with an if/else construct, we return one of the strings depending on the random number.  So, now we have a robot class, but what do we do with it?\nCreate a new class called Application in the level3 package, and copy this code into it:\npackage level3; class Application { public static void main(String[] args) { Robot c3po = new Robot(\u0026#34;C3PO\u0026#34;); System.out.println(c3po.rockPaperScissors()); } } This class has a main() method, just like in the previous levels. In this method, with Robot c3po = new Robot(\u0026quot;C3PO\u0026quot;); we create an object of type Robot and store it in a variable with the name c3po. With the new keyword, we tell Java that we want to call a constructor method. In the end, this line of code calls the Robot() constructor method we have declared earlier in the Robot class. 
Since it requires a robot name as an input parameter, we pass the name \u0026ldquo;C3PO\u0026rdquo;.\nWe now have an object of type Robot and can let it play Rock, Paper, Scissors by calling the rockPaperScissors() method, which we do in the next line. We pass the result of that method into the System.out.println() method to print it out on the console.\nBefore you run the program, think about what will happen. Then, run it, and see if you were right!\nThe program should print out either \u0026ldquo;rock\u0026rdquo;, \u0026ldquo;paper\u0026rdquo;, or \u0026ldquo;scissors\u0026rdquo;. Run it a couple of times to see what happens!\nLevel 4 - A Robot Arena Now we can create robot objects that play Rock, Paper, Scissors. It would be fun to let two robots fight a duel, wouldn\u0026rsquo;t it?\nLet\u0026rsquo;s build an arena in which we can pit two robots against each other!\nFirst, create a new package level4 and copy the Robot class from the previous level into this package. Then, create a new class in this package with the name Arena and copy the following code into it:\npackage level4; class Arena { Robot robot1; Robot robot2; Arena(Robot robot1, Robot robot2) { this.robot1 = robot1; this.robot2 = robot2; } Robot startDuel() { String shape1 = robot1.rockPaperScissors(); String shape2 = robot2.rockPaperScissors(); System.out.println(robot1.name + \u0026#34;: \u0026#34; + shape1); System.out.println(robot2.name + \u0026#34;: \u0026#34; + shape2); if (shape1.equals(\u0026#34;rock\u0026#34;) \u0026amp;\u0026amp; shape2.equals(\u0026#34;scissors\u0026#34;)) { return robot1; } else if (shape1.equals(\u0026#34;paper\u0026#34;) \u0026amp;\u0026amp; shape2.equals(\u0026#34;rock\u0026#34;)) { return robot1; } else if (shape1.equals(\u0026#34;scissors\u0026#34;) \u0026amp;\u0026amp; shape2.equals(\u0026#34;paper\u0026#34;)) { return robot1; } else if (shape2.equals(\u0026#34;rock\u0026#34;) \u0026amp;\u0026amp; shape1.equals(\u0026#34;scissors\u0026#34;)) { return robot2; } 
else if (shape2.equals(\u0026#34;paper\u0026#34;) \u0026amp;\u0026amp; shape1.equals(\u0026#34;rock\u0026#34;)) { return robot2; } else if (shape2.equals(\u0026#34;scissors\u0026#34;) \u0026amp;\u0026amp; shape1.equals(\u0026#34;paper\u0026#34;)) { return robot2; } else { // both robots chose the same shape: no winner  return null; } } } Let\u0026rsquo;s investigate the Arena class.\nAn arena has two attributes of type Robot: robot1 and robot2. Since an arena makes no sense without any robots, the constructor Arena() expects two robot objects as input parameters. In the constructor, we initialize the attributes with the robots passed into the constructor.\nThe fun part happens in the startDuel() method. This method pits the two robots against each other in battle. It expects no input parameters, but it returns an object of type Robot. We want the method to return the robot that won the duel.\nIn the first two lines, we call each of the robots\u0026rsquo; rockPaperScissors() methods to find out which shape each of the robots chose and store them in two String variables shape1 and shape2.\nIn the next two lines, we just print the shapes out to the console so that we can later see which robot chose which shape.\nThen comes a long if/else construct that compares the shapes both robots selected. If robot 1 chose \u0026ldquo;rock\u0026rdquo; and robot 2 chose \u0026ldquo;scissors\u0026rdquo;, we return robot 1 as the winner, because rock beats scissors. This goes on for all 6 different cases. Finally, we have an unconditional else block which is only reached if both robots have chosen the same shape. In this case, there is no winner, so we return null. Null is a special value that means \u0026ldquo;no value\u0026rdquo;.\nNow we have an Arena in which we can let two robots battle each other.
How do we start a duel?\nLet\u0026rsquo;s create a new Application class in the level4 package and copy this code into it:\npackage level4; class Application { public static void main(String[] args) { Robot c3po = new Robot(\u0026#34;C3PO\u0026#34;); Robot r2d2 = new Robot(\u0026#34;R2D2\u0026#34;); Arena arena = new Arena(c3po, r2d2); Robot winner = arena.startDuel(); if (winner == null) { System.out.println(\u0026#34;Draw!\u0026#34;); } else { System.out.println(winner.name + \u0026#34; wins!\u0026#34;); } } } What\u0026rsquo;s happening in this code?\nIn the first two lines, we create two Robot objects.\nIn the next line, we create an Arena object, using the previously discussed constructor Arena() that expects two robots as input. We pass in the two robot objects we created earlier.\nThen, we call the startDuel() method on the arena object. Since the startDuel() method returns the winner of the duel, we store the return value of the method into the winner variable of type Robot.\nIf the winner variable has no value (i.e. it has the value null), we don\u0026rsquo;t have a winner, so we print out \u0026ldquo;Draw!\u0026rdquo;.\nIf the winner variable does have a value, we print out the name of the winner.\nGo through the code again and trace in your mind what happens in each line of code. Then run the application and see what happens!\nEvery time we run the program, it should now print out the Rock, Paper, or Scissor shapes that each of the robots has chosen and then print out the name of the winner or \u0026ldquo;Draw!\u0026rdquo; if there was no winner.\nWe have built a robot arena!\nLevel 5 - Cleaning Up the Arena The robot arena we\u0026rsquo;ve built is pretty cool already. But the code is a bit unwieldy in some places.\nLet\u0026rsquo;s clean up the code to professional-grade quality! 
We\u0026rsquo;ll introduce some more Java concepts on the way.\nWe\u0026rsquo;re going to fix three main issues with the code:\n The rockPaperScissors() method in the Robot class returns a String. We could accidentally introduce an error here by returning an invalid string like \u0026ldquo;Duck\u0026rdquo;. The big if/else construct in the Arena class is repetitive and error-prone: we could easily introduce an error through copy \u0026amp; paste here. The startDuel() method in the Arena class returns null if there was no winner. We might expect the method to always return a winner and forget to handle the case when it returns null.  Before we start, create a new package level5, and copy all the classes from level4 into it.\nTo make the code a bit safer, we\u0026rsquo;ll first introduce a new class Shape. Create this class and copy the following code into it:\npackage level5; enum Shape { ROCK(\u0026#34;rock\u0026#34;, \u0026#34;scissors\u0026#34;), PAPER(\u0026#34;paper\u0026#34;, \u0026#34;rock\u0026#34;), SCISSORS(\u0026#34;scissors\u0026#34;, \u0026#34;paper\u0026#34;); String name; String beats; Shape(String name, String beats) { this.name = name; this.beats = beats; } boolean beats(Shape otherShape) { return otherShape.name.equals(this.beats); } } The Shape class is a special type of class: an \u0026ldquo;enum\u0026rdquo;. This means it\u0026rsquo;s an enumeration of possible values. In our case, an enumeration of valid shapes in the Rock, Paper, Scissors game.\nThe class declares three valid shapes: ROCK, PAPER, and SCISSORS. Each of the declarations passes two parameters into the constructor:\n the name of the shape, and the name of the shape it beats.  The constructor Shape() takes these parameters and stores them in class attributes as we have seen in the other classes earlier.\nWe additionally create a method beats() that is supposed to decide whether the shape beats another shape. 
It expects another shape as an input parameter and returns true if that shape is the shape that this shape beats.\nWith the Shape enum in place, we can now change the method rockPaperScissors() in the Robot class to return a Shape instead of a string:\nclass Robot { ... Shape rockPaperScissors() { int randomNumber = random.nextInt(3); return Shape.values()[randomNumber]; } } The method now returns a Shape object. We have also removed the if/else construct and replaced it with Shape.values()[randomNumber] to the same effect. Shape.values() returns an array containing all three shapes. From this array we just pick the element with the random index.\nWith this new Robot class, we can go ahead and clean up the Arena class:\nclass Arena { ... Optional\u0026lt;Robot\u0026gt; startDuel() { Shape shape1 = robot1.rockPaperScissors(); Shape shape2 = robot2.rockPaperScissors(); System.out.println(robot1.name + \u0026#34;: \u0026#34; + shape1.name); System.out.println(robot2.name + \u0026#34;: \u0026#34; + shape2.name); if (shape1.beats(shape2)) { return Optional.of(robot1); } else if (shape2.beats(shape1)) { return Optional.of(robot2); } else { return Optional.empty(); } } } We changed the type of the shape variables from String to Shape, since the robots now return Shapes.\nThen, we have simplified the if/else construct considerably by taking advantage of the beats() method we have introduced in the Shape enum. If the shape of robot 1 beats the shape of robot 2, we return robot 1 as the winner. If the shape of robot 2 beats the shape of robot 1, we return robot 2 as the winner. If no shape won, we have a draw, so we return no winner.\nYou might notice that the startDuel() method now returns an object of type Optional\u0026lt;Robot\u0026gt;. This signifies that the return value can be a robot or it can be empty.
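If Optional is new to you, here is a tiny standalone sketch (separate from the arena code, with a made-up class name OptionalDemo) of the behavior we rely on:

```java
import java.util.Optional;

public class OptionalDemo {

    public static void main(String[] args) {
        Optional<String> winner = Optional.of("C3PO"); // wraps a present value
        Optional<String> draw = Optional.empty();      // represents "no value"

        System.out.println(winner.isEmpty()); // false - there is a winner
        System.out.println(draw.isEmpty());   // true - nobody won

        // get() is only safe to call when the Optional is not empty
        if (!winner.isEmpty()) {
            System.out.println(winner.get()); // prints the wrapped value
        }
    }
}
```

Note that Optional.isEmpty() is available from Java 11 on; on older JDKs you can use !isPresent() instead.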
Returning an Optional is preferable to returning a null object as we did before because it makes it clear to the caller of the method that the return value may be empty.\nTo accommodate the new type of the return value, we have changed the return statements to return either a robot with Optional.of(robot) or an empty value with Optional.empty().\nFinally, we have to adapt our Application class to the new Optional return value:\nclass Application { public static void main(String[] args) { Robot c3po = new Robot(\u0026#34;C3PO\u0026#34;); Robot r2d2 = new Robot(\u0026#34;R2D2\u0026#34;); Arena arena = new Arena(c3po, r2d2); Optional\u0026lt;Robot\u0026gt; winner = arena.startDuel(); if (winner.isEmpty()) { System.out.println(\u0026#34;Draw!\u0026#34;); } else { System.out.println(winner.get().name + \u0026#34; wins!\u0026#34;); } } } We change the type of the winner variable to Optional\u0026lt;Robot\u0026gt;. The Optional class provides the isEmpty() method, which we use to determine if we have a winner or not.\nIf we don\u0026rsquo;t have a winner, we still print out \u0026ldquo;Draw!\u0026rdquo;. If we do have a winner, we call the get() method on the Optional to get the winning robot and then print out its name.\nLook at all the classes you created in this level and recap what would happen if you call the program.\nThen, run this program and see what happens.\nIt should do the same as before, but we have taken advantage of some more advanced Java features to make the code more clear and less prone to accidental errors.\nDon\u0026rsquo;t worry if you didn\u0026rsquo;t understand all the features we have used in detail. If you want to go through a more detailed tutorial of everything Java, you\u0026rsquo;ll want to check out the CodeGym Java tutorials.\nJava Vocabulary Phew, there were a lot of terms in the tutorial above. 
The following table sums them up for your convenience:\n Array: A variable type that contains multiple elements. An array can be declared by appending brackets ([]) to the type of a variable: String[] myArray;. The elements in an array can be accessed by adding brackets with the index of the wanted element to the variable name, starting with 0 for the first element: myArray[0].\n Attribute: A class can have zero or more attributes. An attribute is a variable of a certain type that belongs to that class. Attributes can be used like normal variables within the methods of the class.\n Boolean: A variable type that contains either the value true or the value false.\n Class: A class is a unit to organize code and can be used as a template to create many objects with the same set of attributes and methods.\n Constructor: A special method that is called when we use the new keyword to create a new object from a class. It can have input parameters like any other method and implicitly returns an object of the type of the class it\u0026rsquo;s in.\n Enum: A special class that declares an enumeration of one or more valid values.\n Input parameter: A variable of a specific type that can be passed into a method.\n Method: A method is a function that takes some input parameters, does something with them, and then returns a return value.\n Null: A special value that signals \u0026ldquo;no value\u0026rdquo;.\n Object: An object is an instance of a class. A class describes the \u0026ldquo;type\u0026rdquo; of an object. Many objects can have the same type.\n Operator: Operators are used to compare, concatenate or modify variables.\n Optional: A class provided by Java that signifies that a variable can have an optional value, but the value can also be empty.\n Package: High-level unit to organize code. It\u0026rsquo;s just a folder in the file system.\n Return value: A method may return an object of a specified type.
When you call the method, you can assign the return value to a variable.\n String: A variable type that contains a string of characters (i.e. a \u0026ldquo;text\u0026rdquo;, if you will).\n this: A special keyword that means \u0026ldquo;this object\u0026rdquo;. Can be used to access attributes of a class in the class\u0026rsquo;s methods.\n Variable: A variable can contain a value of a certain type/class. Variables can be passed into methods, combined with operators, and returned from methods.\n Where to Go From Here? If this article made you want to learn more about Java, head over to CodeGym. They provide a very entertaining and motivating learning experience for Java. Exercises are embedded in stories and you can create and run code right in the browser!\nAnd, of course, you can play around with the code examples from this article on GitHub.\n","date":"April 15, 2021","image":"https://reflectoring.io/images/special/robot-arena_hubfbcf92b2119d9487b07981a428838c0_215650_650x0_resize_q90_box.jpg","permalink":"/first-java-program-robot-arena/","title":"Getting Started With Java: Build a Robot Arena"},{"categories":["Java"],"contents":"As Java developers, we may maintain many applications using Maven for their dependency management. These applications need upgrades from time to time to be up to date and to add new features or security updates.\nThis easy task - updating dependencies' versions - can easily turn into a nightmare because of conflicts between certain dependencies. The resolution of these dependency conflicts can take a lot of time.\nTo make dependency management easier, we can use the Bill of Materials (BOM), a feature that offers easier and safer dependency management.\nIn this article, we are going to look at dependency management in Maven and at the BOM with some examples.\nDirect vs.
Transitive Dependencies Let\u0026rsquo;s imagine we write some business code that requires logging the output, using some String utilities, or securing the application. This logic can be implemented in our project, or we can use a library instead. It often makes sense to use existing libraries to minimize the amount of code we need to write ourselves.\nThe use of libraries encourages reuse since we will rely on other libraries that solve problems similar to ours: these libraries are our dependencies.\nThere are two types of dependencies in Maven:\n  direct dependencies: dependencies that are explicitly included in our Project Object Model (pom.xml) file in the \u0026lt;dependencies\u0026gt; section. They can be added using the \u0026lt;dependency\u0026gt; tag. Here is an example of a logging library added to a pom.xml file:\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;log4j\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;log4j\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.2.17\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt;   transitive dependencies: a project that we include as a dependency in our project, like the logging library above, can declare its own dependencies in a pom.xml file. These dependencies are then considered transitive dependencies to our project. 
When Maven pulls a direct dependency, it also pulls its transitive dependencies.\n  Transitive Dependencies with Maven Now that we have an overview of the different dependency types in Maven, let\u0026rsquo;s see in detail how Maven deals with transitive dependencies in a project.\nAs an example, we\u0026rsquo;ll look at two dependencies from the Spring Framework: spring-context and spring-security-web.\nIn the pom.xml file, we add them as direct dependencies, deliberately selecting two different version numbers:\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-context\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.3.5\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.security\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-security-web\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.4.5\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; Visualize Version Conflicts with a Dependency Tree Someone who is not aware of transitive dependencies might think that with this dependency declaration, only two JAR files will be pulled.
Fortunately, Maven provides a command that will show us what was pulled exactly concerning these two dependencies.\nWe can list all the dependencies including the transitive ones using this command:\nmvn dependency:tree -Dverbose=true We use the verbose mode of this command so that Maven tells us the reason for selecting one version of a dependency over another.\nThe result is this:\n+- org.springframework:spring-context:jar:5.3.5:compile | +- org.springframework:spring-aop:jar:5.3.5:compile | | +- (org.springframework:spring-beans:jar:5.3.5:compile - omitted for duplicate) | | \\- (org.springframework:spring-core:jar:5.3.5:compile - omitted for duplicate) | +- org.springframework:spring-beans:jar:5.3.5:compile | | \\- (org.springframework:spring-core:jar:5.3.5:compile - omitted for duplicate) ... +- (org.springframework:spring-expression:jar:5.2.13.RELEASE:compile - omitted for conflict with 5.3.5) \\- org.springframework:spring-web:jar:5.2.13.RELEASE:compile +- (org.springframework:spring-beans:jar:5.2.13.RELEASE:compile - omitted for conflict with 5.3.5) \\- (org.springframework:spring-core:jar:5.2.13.RELEASE:compile - omitted for conflict with 5.3.5) We started from two dependencies, and in this output, we find out that Maven pulled additional dependencies. These additional dependencies are simply transitive.\nWe can see that there are different versions of the same dependency in the tree. For example, there are two versions of the spring-beans dependency:5.2.13.RELEASE and 5.3.5.\nMaven has resolved this version conflict, but how? What does omitted for duplicate and omitted for conflict mean?\nHow Does Maven Resolve Version Conflicts? The first thing to know is that Maven can\u0026rsquo;t sort versions: The versions are arbitrary strings and may not follow a strict semantic sequence. For example, if we have two versions 1.2 and 1.11, we know that 1.11 comes after 1.2 but the String comparison gives 1.11 before 1.2. 
Other version values can be 1.1-rc1 or 1.1-FINAL, which is why sorting versions is not an option for Maven.\nThat means that Maven doesn\u0026rsquo;t know which version is newer or older and cannot choose to always take the newest version.\nSecond, Maven selects the dependency that is nearest to the root of the dependency tree; if two candidates sit at the same depth, the one that comes first in the resolution order wins. To understand this, let\u0026rsquo;s look at an example:\nWe start with a POM file having some dependencies with transitive dependencies (to make it short, all the dependencies will be represented by the letter D):\n D1(v1) -\u0026gt; D11(v11) -\u0026gt; D12(v12) -\u0026gt; DT(v1.3) D2(v2) -\u0026gt; DT(v1.2) D3(v3) -\u0026gt; D31(v31) -\u0026gt; DT(v1.0) D4(v4) -\u0026gt; DT(v1.5)  Note that each of the direct dependencies pulls in a different version of the DT dependency.\nMaven will create a dependency tree and, following the criteria mentioned above, select a version for DT:\nWe note that the resolution order played a major role in choosing the DT dependency since v1.2 and v1.5 had the same depth, but v1.2 came first in the resolution order. So even though v1.2 is not the latest version of DT, Maven chose to work with it.\nIf we wanted to use version v1.5 in this case, we could simply add the dependency D4 before D2 in our POM file. In that case, v1.5 would be first in terms of resolution order and Maven would select it.\nSo, to help us understand the dependency tree result from above, Maven indicates for each transitive dependency why it was omitted:\n \u0026ldquo;omitted for duplicate\u0026rdquo; means that Maven preferred another dependency with the same name and version over this one (i.e. the other dependency had a higher priority according to the resolution order and depth) \u0026ldquo;omitted for conflict\u0026rdquo; means that Maven preferred another dependency with the same name but a different version over this one (i.e. 
the other dependency with the different version had a higher priority according to the resolution order and depth)  Now it is clear how Maven resolves transitive dependencies. Sometimes, though, we may want to pick a specific version of a dependency ourselves and bypass Maven\u0026rsquo;s selection process.\nOverriding Transitive Dependency Versions If we want to resolve a dependency conflict ourselves, we have to tell Maven which version to choose. There are two ways of doing this.\nOverride a Transitive Dependency Version Using a Direct Dependency Adding the desired transitive dependency version as a direct dependency in the POM file will result in making it the nearest in depth. This way Maven will select this version. In our previous example, if we wanted version v1.3 to be selected, then adding the dependency DT(v1.3) in the POM file will ensure its selection.\nOverride a Transitive Dependency Version Using the dependencyManagement Section For projects with sub-modules, to ensure compatibility and coherence between all the modules, we need a way to provide the same version of a dependency across all sub-modules. For this, we can use the dependencyManagement section: it provides a lookup table for Maven to help determine the selected version of a transitive dependency and to centralize dependency information.\nA dependencyManagement section contains dependency elements. Each dependency is a lookup reference for Maven to determine the version to select for transitive (and direct) dependencies. The version of the dependency is mandatory in this section. 
However, outside of the dependencyManagement section, we can now omit the version of our dependencies, and Maven will select the correct version of the transitive dependencies from the list of dependencies provided in dependencyManagement.\nWe should note that defining a dependency in the dependencyManagement section doesn\u0026rsquo;t add it to the dependency tree of the project, it is used just for lookup reference.\nA better way to understand the use of dependencyManagement is through an example. Let\u0026rsquo;s go back to our previous example with the Spring dependencies. Now we are going to play with the spring-beans dependency. When we executed the command mvn dependency:tree, the version resolved for spring-beans was 5.3.5.\nUsing dependencyManagement we can override this version and select the version that we want. All that we have to do is to add the following to our POM file:\n\u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-beans\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.2.13.RELEASE\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; Now we want Maven to resolve version 5.2.13.RELEASE instead of 5.3.5.\nLet\u0026rsquo;s execute the command mvn dependency:tree one more time. 
The result is:\n+- org.springframework:spring-context:jar:5.3.5:compile | +- org.springframework:spring-aop:jar:5.3.5:compile | +- org.springframework:spring-beans:jar:5.2.13.RELEASE:compile | +- org.springframework:spring-core:jar:5.3.5:compile | | \\- org.springframework:spring-jcl:jar:5.3.5:compile | \\- org.springframework:spring-expression:jar:5.3.5:compile \\- org.springframework.security:spring-security-web:jar:5.4.5:compile +- org.springframework.security:spring-security-core:jar:5.4.5:compile \\- org.springframework:spring-web:jar:5.2.13.RELEASE:compile In the dependency tree, we find the 5.2.13.RELEASE version for spring-beans. This is the version that we wanted Maven to resolve for each spring-beans transitive dependency.\nIf spring-beans were a direct dependency, then to take advantage of the dependencyManagement section, we would no longer have to set the version when adding the dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-beans\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; This way, Maven will resolve the version using the information provided in the dependencyManagement section.\nIntroducing Maven\u0026rsquo;s Bill of Material (BOM) The Bill of Material is a special POM file that groups dependency versions that are known to be valid and tested to work together. This reduces the developers' pain of having to test the compatibility of different versions and reduces the chances of version mismatches.\nThe BOM file has:\n a pom packaging type: \u0026lt;packaging\u0026gt;pom\u0026lt;/packaging\u0026gt;. a dependencyManagement section that lists the dependencies of a project.  
As seen above, in the dependencyManagement section we will group all the dependencies required by our project with the recommended versions.\nLet\u0026rsquo;s create a BOM file as an example:\n\u0026lt;project ...\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;groupId\u0026gt;reflectoring\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;reflectoring-bom\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0\u0026lt;/version\u0026gt; \u0026lt;packaging\u0026gt;pom\u0026lt;/packaging\u0026gt; \u0026lt;name\u0026gt;Reflectoring Bill Of Material\u0026lt;/name\u0026gt; \u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.reflectoring\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;logging\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.reflectoring\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;test\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; \u0026lt;/project\u0026gt; This file can be used in our projects in two different ways:\n as a parent POM, or as a dependency.  Third-party projects can provide their BOM files to make dependency management easier. Here are some examples:\n spring-data-bom: The Spring data team provides a BOM for their Spring Data project. jackson-bom: The Jackson project provides a BOM for Jackson dependencies.  Using a BOM as a Parent POM The BOM file that we created previously can be used as a parent POM of a new project. 
This newly created project will inherit the dependencyManagement section and Maven will use it to resolve the dependencies required for it.\n\u0026lt;project ...\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;parent\u0026gt; \u0026lt;groupId\u0026gt;reflectoring\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;reflectoring-bom\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0\u0026lt;/version\u0026gt; \u0026lt;/parent\u0026gt; \u0026lt;groupId\u0026gt;reflectoring\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;new-project\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0.0-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;packaging\u0026gt;jar\u0026lt;/packaging\u0026gt; \u0026lt;name\u0026gt;New Project\u0026lt;/name\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.reflectoring\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;logging\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/project\u0026gt; In this example, we note that the logging dependency in our project doesn\u0026rsquo;t need a version number. Maven will resolve it from the list of dependencies in the BOM file.\nIf a version is added to the dependency, this will override the version defined in the BOM, and Maven will apply the \u0026ldquo;nearest depth version\u0026rdquo; rule.\nFor a real-life example, Spring Boot projects created from the start.spring.io platform inherit from a parent POM spring-boot-starter-parent which inherits also from spring-boot-dependencies. This POM file has a dependencyManagement section containing a list of dependencies required by Spring Boot projects. This file is a BOM file provided by the Spring Boot team to manage all the dependencies.\nWith a new version of Spring Boot, a new BOM file will be provided that handles version upgrades and makes sure that all the given dependencies work well together. 
Developers only need to care about upgrading the Spring Boot version; the compatibility of the underlying dependencies has been tested by the Spring Boot team.\nWe should note that if we use a BOM as a parent for our project, we will no longer be able to declare another parent for our project. This can be a blocking issue if the concerned project is a child module. To work around this, we can instead add the BOM as a dependency.\nAdding a BOM as a Dependency A BOM can be added to an existing POM file by adding it to the dependencyManagement section as a dependency with a pom type:\n\u0026lt;project ...\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;groupId\u0026gt;reflectoring\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;new-project\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0.0-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;packaging\u0026gt;jar\u0026lt;/packaging\u0026gt; \u0026lt;name\u0026gt;New Project\u0026lt;/name\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.reflectoring\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;logging\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependencyManagement\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;reflectoring\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;reflectoring-bom\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0\u0026lt;/version\u0026gt; \u0026lt;type\u0026gt;pom\u0026lt;/type\u0026gt; \u0026lt;scope\u0026gt;import\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/dependencyManagement\u0026gt; \u0026lt;/project\u0026gt; In terms of dependency resolution, Maven will behave exactly as in the example with the parent BOM file. 
The only thing that differs is how the BOM file is imported.\nThe import scope set in the dependency section indicates that this dependency should be replaced with all effective dependencies declared in its POM. In other words, the list of dependencies of our BOM file will take the place of the BOM import in the POM file.\nConclusion Understanding dependency management in Maven is crucial to avoid getting version conflicts and wasting time resolving them.\nUsing a BOM is a good way to ensure consistency between dependency versions and a safer approach to multi-module project management.\n","date":"April 10, 2021","image":"https://reflectoring.io/images/stock/0045-checklist-1200x628-branded_hu9e774932f96798e633ac569f63dda92c_116442_650x0_resize_q90_box.jpg","permalink":"/maven-bom/","title":"Using Maven's Bill of Materials (BOM)"},{"categories":["Java"],"contents":"In this article, we are going to talk about Java\u0026rsquo;s Service Provider Interface (SPI). We will have a short overview of what the SPI is and describe some cases where we can use it. Then we will give an implementation of an SPI for a practical use case.\n Example Code This article is accompanied by a working code example on GitHub. Overview The Service Provider Interface was introduced to make applications more extensible.\nIt gives us a way to enhance specific parts of a product without modifying the core application. All we need to do is provide a new implementation of the service that follows certain rules and plug it into the application. Using the SPI mechanism, the application will load the new implementation and work with it.\nTerms and Definitions To work with extensible applications, we need to understand the following terms:\n Service Provider Interface: A set of interfaces or abstract classes that a service defines. It represents the classes and methods available to your application. Service Provider: Also called Provider, it is a specific implementation of a service. 
It is identified by placing the provider configuration file in the resources directory META-INF/services. It must be available in the application\u0026rsquo;s classpath. ServiceLoader: The main class used to discover and load a service implementation lazily. The ServiceLoader maintains a cache of services already loaded. Each time we invoke the service loader to load services, it first lists the cache\u0026rsquo;s elements in instantiation order, then discovers and instantiates the remaining providers.  How Does ServiceLoader Work? We can describe the SPI as a discovery mechanism since it automatically loads the different providers defined in the classpath.\nThe ServiceLoader is the main tool used to do that, providing some methods to allow this discovery:\n  iterator(): Creates an iterator to lazily load and instantiate the available providers. At this point, the providers are not instantiated yet; that\u0026rsquo;s why we call it lazy loading. The instantiation happens when calling the methods next() or hasNext() of the iterator. The iterator maintains a cache of these providers for performance reasons so that they don\u0026rsquo;t get loaded with each call. A simple way to get the providers instantiated is through a loop:\nIterator\u0026lt;ServiceInterface\u0026gt; providers = loader.iterator(); while (providers.hasNext()) { ServiceInterface provider = providers.next(); //actions... }   stream(): Creates a stream to lazily load and instantiate the available providers. The stream elements are of type Provider. The providers are loaded and instantiated when invoking the get() method of the Provider class.\nIn the following example we can see how to use the stream() method to get the providers:\nStream\u0026lt;ServiceInterface\u0026gt; providers = ServiceLoader.load(ServiceInterface.class) .stream() .map(Provider::get);   reload(): Clears the loader\u0026rsquo;s provider cache and reloads the providers. 
This method is used in situations in which new service providers are installed into a running JVM.\n  Apart from the service providers implemented and the service provider interface created, we need to register these providers so that the ServiceLoader can identify and load them. The configuration files need to be created in the folder META-INF/services.\nWe should name these files with the fully qualified class name of the service provider interface. Each file will contain the fully qualified class name of one or many providers, one provider per line.\nFor example, if we have a service provider interface called InterfaceName, to register the service provider ServiceProviderImplementation, we create a text file named package.name.InterfaceName. This file contains one line:\npackage.name.ServiceProviderImplementation We should note that there may be many configuration files with the same name on the classpath. For this reason, the ServiceLoader uses the ClassLoader.getResources() method to get an enumeration of all the configuration files to identify each provider.\nExploring the Driver Service in Java By default, Java includes many different service providers. One of them is the Driver used to load database drivers.\nLet\u0026rsquo;s go further with the Driver and try to understand how the database drivers are loaded in our applications.\nIf we examine the PostgreSQL JAR file, we will find a folder called META-INF/services containing a file named java.sql.Driver. 
This configuration file holds the name of the implementation class provided by PostgreSQL for the Driver interface, in this case: org.postgresql.Driver.\nWe note the same thing with the MySQL driver: The file with the name java.sql.Driver located in META-INF/services contains com.mysql.cj.jdbc.Driver which is the MySQL implementation of the Driver interface.\nIf the two drivers are loaded in the classpath, the ServiceLoader will read the implementation class names from each file, then calls Class.forName() with the class names and then newInstance() to create an instance of the implementation classes.\nNow that we have two implementations loaded, how will the connection to the database work?\nIn the getConnection() method of the DriverManager class from the java.sql package, we can see how the connection to the database is established when different drivers are available.\nHere is the code of the getConnection() method:\nfor (DriverInfo aDriver : registeredDrivers) { if (isDriverAllowed(aDriver.driver, callerCL)) { try { println(\u0026#34;trying \u0026#34; + aDriver.driver.getClass().getName()); Connection con = aDriver.driver.connect(url, info); if (con != null) { // Success!  println(\u0026#34;getConnection returning \u0026#34; + aDriver.driver.getClass().getName()); return (con); } } catch (SQLException ex) { if (reason == null) { reason = ex; } } } else { println(\u0026#34;skipping: \u0026#34; + aDriver.getClass().getName()); } } As we can see, the algorithm goes through the registeredDrivers and tries to connect to the database using the database URL. 
If the connection to the database is established, the connection object is returned; otherwise, the remaining drivers are tried until all of them are covered.\nImplementing a Custom Service Provider Now that we have an understanding of the SPI concepts, let\u0026rsquo;s create an example of an SPI and load providers using the ServiceLoader class.\nLet\u0026rsquo;s say that we have a librarian who needs an application to check whether a book is available in the library or not when requested by customers. We can do this by defining a service represented by a class named LibraryService and a service provider interface called Library.\nThe LibraryService provides a singleton LibraryService object. This object retrieves the book from Library providers.\nThe library service client, which in our case is the application that we are building, gets an instance of this service, and the service will search, instantiate and use Library service providers.\nThe application developers may initially use a standard list of books that is available in all libraries. Other users who deal with computer science books may require a different list of books for their library (another library provider). In this case, it would be better if the user could add the new library with the desired books to the existing application without modifying its core functionality. The new library will just be plugged into the application.\nOverview of Maven Modules We start by creating a Maven root project that will contain all our sub-modules. We will call it service-provider-interface. The sub-modules will be:\n library-service-provider: Contains the Service Provider Interface Library and the service class to load the providers. classics-library: The provider for a library of classic books chosen by the developers. computer-science-library: The provider for a library of computer science books required by users. library-client: An application to put it all together and create a working example.  
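The root project mentioned above is a simple aggregator for the sub-modules. As a sketch of what its POM could look like (the coordinates are assumptions; only the pom packaging and the module list matter here):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.library</groupId>
  <artifactId>service-provider-interface</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- pom packaging marks this as an aggregator, not a JAR -->
  <packaging>pom</packaging>
  <modules>
    <module>library-service-provider</module>
    <module>classics-library</module>
    <module>computer-science-library</module>
    <module>library-client</module>
  </modules>
</project>
```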
The following diagram shows the dependencies between each module:\nBoth the classics-library and the computer-science-library implement the library-service-provider. The library-client module then uses the library-service-provider module to find books. The library-client doesn\u0026rsquo;t have a compile-time dependency on the library implementations!\nThe library-service-provider Module First, let\u0026rsquo;s create a model class that represents a book:\npublic class Book { String name; String author; String description; } Then, we define the service provider interface for our service:\npackage org.library.spi; public interface Library { String getCategory(); Book getBook(String name); } Finally, we create the LibraryService class that the client will use to get the books from the library:\npublic class LibraryService { private static LibraryService libraryService; private final ServiceLoader\u0026lt;Library\u0026gt; loader; public static synchronized LibraryService getInstance() { if (libraryService == null) { libraryService = new LibraryService(); } return libraryService; } private LibraryService() { loader = ServiceLoader.load(Library.class); } public Optional\u0026lt;Book\u0026gt; getBook(String name) { Book book = null; Iterator\u0026lt;Library\u0026gt; libraries = loader.iterator(); while (book == null \u0026amp;\u0026amp; libraries.hasNext()) { Library library = libraries.next(); book = library.getBook(name); } return Optional.ofNullable(book); } public Optional\u0026lt;Book\u0026gt; getBook(String name, String category) { return loader.stream() .map(ServiceLoader.Provider::get) .filter(library -\u0026gt; library.getCategory().equals(category)) .map(library -\u0026gt; library.getBook(name)) .filter(Objects::nonNull) .findFirst(); } } Using the getInstance() method, the clients will get a singleton LibraryService object to retrieve the books they need.\nIn the constructor, LibraryService invokes the static factory method load() to get an instance of 
ServiceLoader that can retrieve Library implementations.\nIn getBook(String name), we iterate through all available Library implementations using the iterator() method and call their getBook() methods to find the book we are looking for.\nIn getBook(String name, String category) we are looking for a book from a specific library category. This method uses a different approach to fetch the book by invoking the stream() method to load the providers and then calling getBook() to find the book.\nThe classics-library Module First, we include the dependency on the library-service-provider module in the pom.xml file of this submodule:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.library\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;library-service-provider\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Then we create a class that implements the Library SPI:\npackage org.library; public class ClassicsLibrary implements Library { public static final String CLASSICS_LIBRARY = \u0026#34;CLASSICS\u0026#34;; private final Map\u0026lt;String, Book\u0026gt; books; public ClassicsLibrary() { books = new TreeMap\u0026lt;\u0026gt;(); Book nineteenEightyFour = new Book(\u0026#34;Nineteen Eighty-Four\u0026#34;, \u0026#34;George Orwell\u0026#34;, \u0026#34;Description\u0026#34;); Book theLordOfTheRings = new Book(\u0026#34;The Lord of the Rings\u0026#34;, \u0026#34;J. R. R. Tolkien\u0026#34;, \u0026#34;Description\u0026#34;); books.put(\u0026#34;Nineteen Eighty-Four\u0026#34;, nineteenEightyFour); books.put(\u0026#34;The Lord of the Rings\u0026#34;, theLordOfTheRings); } @Override public String getCategory() { return CLASSICS_LIBRARY; } @Override public Book getBook(String name) { return books.get(name); } } This implementation provides access to two books through the getBook() method. 
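Before wiring the registration into the module, it can help to see the META-INF/services mechanism in isolation. The following self-contained sketch (the Greeting interface and EnglishGreeting provider are made up for the demo and are not part of the article's modules) simulates a provider JAR by writing a provider configuration file into a temp directory, puts that directory on a class loader's classpath, and lets ServiceLoader discover and instantiate the provider:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class ServiceLoaderDemo {

    // A minimal stand-in for the Library SPI.
    public interface Greeting {
        String greet();
    }

    public static class EnglishGreeting implements Greeting {
        @Override
        public String greet() {
            return "Hello";
        }
    }

    public static List<String> loadGreetings() {
        try {
            // Simulate a provider JAR: write the provider configuration file
            // META-INF/services/<service interface name> into a temp directory.
            Path root = Files.createTempDirectory("spi-demo");
            Path services = Files.createDirectories(root.resolve("META-INF/services"));
            Files.write(services.resolve(Greeting.class.getName()),
                    EnglishGreeting.class.getName().getBytes());

            // Put that directory on a class loader's classpath and let
            // ServiceLoader discover and instantiate the registered provider.
            List<String> greetings = new ArrayList<>();
            try (URLClassLoader loader = new URLClassLoader(
                    new URL[]{root.toUri().toURL()},
                    ServiceLoaderDemo.class.getClassLoader())) {
                for (Greeting greeting : ServiceLoader.load(Greeting.class, loader)) {
                    greetings.add(greeting.greet());
                }
            }
            return greetings;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(loadGreetings()); // prints "[Hello]"
    }
}
```

The same pattern is what happens at build time in the modules below: the configuration file ends up on the classpath inside the provider's JAR instead of a temp directory.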
Finally, we should create a folder called META-INF/services in the resources directory with a file named org.library.spi.Library. This file will contain the full class name of the implementation that will be used by the ServiceLoader to instantiate it. In our case, it will be org.library.ClassicsLibrary.\nThe computer-science-library Module The computer-science-library submodule has the same structure and requirements as the classics-library submodule. However, the implementation of the Library SPI, the file name, and the class name that will be created in the META-INF/services folder will change.\nThe code of the computer-science-library submodule is available on GitHub.\nThe library-client Module In this submodule, we will call the LibraryService to get information about some books. In the beginning, we will use only the classics-library as a library for our demo, then we will see how we can add more capabilities to our demo project by adding the computer-science-library jar file to the classpath. 
The ServiceLoader will then load and instantiate our provider.\nTo start, let\u0026rsquo;s add the classics-library submodule to the library-client pom.xml file:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.library\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;classics-library\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Then, we try to get information about two books:\npublic class LibraryClient { public static void main(String[] args) { LibraryService libraryService = LibraryService.getInstance(); requestBook(\u0026#34;Clean Code\u0026#34;, libraryService); requestBook(\u0026#34;The Lord of the Rings\u0026#34;, libraryService); requestBook(\u0026#34;The Lord of the Rings\u0026#34;, \u0026#34;COMPUTER_SCIENCE\u0026#34;, libraryService); } private static void requestBook(String bookName, LibraryService library) { library.getBook(bookName) .ifPresentOrElse( book -\u0026gt; System.out.println(\u0026#34;The book \u0026#39;\u0026#34; + bookName + \u0026#34;\u0026#39; was found, here are the details:\u0026#34; + book), () -\u0026gt; System.out.println(\u0026#34;The library doesn\u0026#39;t have the book \u0026#39;\u0026#34; + bookName + \u0026#34;\u0026#39; that you need.\u0026#34;)); } private static void requestBook( String bookName, String category, LibraryService library) { library.getBook(bookName, category) .ifPresentOrElse( book -\u0026gt; System.out.println(\u0026#34;The book \u0026#39;\u0026#34; + bookName + \u0026#34;\u0026#39; was found in \u0026#34; + category + \u0026#34;, here are the details:\u0026#34; + book), () -\u0026gt; System.out.println(\u0026#34;The library \u0026#34; + category + \u0026#34; doesn\u0026#39;t have the book \u0026#39;\u0026#34; + bookName + \u0026#34;\u0026#39; that you need.\u0026#34;)); } } The output for this program will be:\nThe library doesn\u0026#39;t have the book \u0026#39;Clean Code\u0026#39; that you need. 
The book \u0026#39;The Lord of the Rings\u0026#39; was found, here are the details:Book{name=\u0026#39;The Lord of the Rings\u0026#39;,...} The library COMPUTER_SCIENCE doesn\u0026#39;t have the book \u0026#39;The Lord of the Rings\u0026#39; that you need. As seen above, the book \u0026ldquo;The Lord of the Rings\u0026rdquo; is available in the classics library, but not in the computer science library, which is the expected behavior.\nThe \u0026ldquo;Clean Code\u0026rdquo; book is not available in the classics library. In order to get it, we can add our computer-science-library, which contains the required book. All that we have to do is to add the dependency to the library-client pom.xml file:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.library\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;computer-science-library\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; When we run the demo application we get this output:\nThe book \u0026#39;Clean Code\u0026#39; was found, here are the details:Book{name=\u0026#39;Clean Code...} The book \u0026#39;The Lord of the Rings\u0026#39; was found, here are the details: Book{name=\u0026#39;The Lord of ...} The library COMPUTER_SCIENCE doesn\u0026#39;t have the book \u0026#39;The Lord of the Rings\u0026#39; that you need. Finally, we get the requested books. 
We only had to plug in a provider to add extra behavior to our program.\nAs expected, the book \u0026ldquo;The Lord of the Rings\u0026rdquo; is still not found when we query the wrong \u0026lsquo;COMPUTER_SCIENCE\u0026rsquo; category.\nConclusion In this article, we described the capabilities of the Service Provider Interface and how it works.\nWe gave examples of some SPIs in the Java ecosystem, like the Driver provider used to connect to a database.\nWe also implemented a library application where we learned how to:\n define a service provider interface, implement the providers and the configuration file that should be created in the META-INF/services folder for the ServiceLoader. use the ServiceLoader to find the different providers and instantiate them.  Find the complete code of the example application on GitHub.\n","date":"March 26, 2021","image":"https://reflectoring.io/images/stock/0088-jigsaw-1200x628-branded_hu5d0fbb80fd5a577c9426d368c189788e_197833_650x0_resize_q90_box.jpg","permalink":"/service-provider-interface/","title":"Implementing Plugins with Java's Service Provider Interface"},{"categories":["Spring Boot"],"contents":"With profiles, Spring (Boot) provides a very powerful feature to configure our applications. Spring also offers the @Profile annotation to add beans to the application context only when a certain profile is active. This article is about this @Profile annotation, why it\u0026rsquo;s a bad idea to use it, and what to do instead.\nWhat Are Spring Profiles? 
For an in-depth discussion of profiles in Spring Boot, have a look at my \u0026ldquo;One-Stop Guide to Profiles with Spring Boot\u0026rdquo;.\nThe one-sentence explanation of profiles is this: when we start a Spring (Boot) application with a certain profile (or number of profiles) activated, the application can react to the activated profiles in some way.\nThe main use case for profiles in Spring Boot is to group configuration parameters for different environments into different application-\u0026lt;profile\u0026gt;.yml configuration files. Spring Boot will automatically pick up the right configuration file depending on the activated profile and load the configuration properties from that file.\nWe might have an application-local.yml file to configure the application for local development, an application-staging.yml file to configure it for the staging environment, and an application-prod.yml file to configure it for production.\nThat\u0026rsquo;s a powerful feature and we should make use of it!\nWhat\u0026rsquo;s the @Profile Annotation? The @Profile annotation is one of the ways to react to an activated profile in a Spring (Boot) application. 
The other way is to call Environment.getActiveProfiles(), which you can read about here.\nOne pattern of using the @Profile annotation that I have observed in various projects is replacing \u0026ldquo;real\u0026rdquo; beans with mock beans depending on a profile, something like this:\n@Configuration class MyConfiguration { @Bean @Profile(\u0026#34;test\u0026#34;) Service mockService() { return new MockService(); } @Bean @Profile(\u0026#34;!test\u0026#34;) Service realService(){ return new RealService(); } } This configuration adds a bean of type MockService to the application context if the test profile is active, and a bean of type RealService otherwise.\nAnother case I often see is this one:\n@Configuration class MyConfiguration { @Bean @Profile(\u0026#34;staging\u0026#34;) Client stagingClient() { return new Client(\u0026#34;https://staging.url\u0026#34;); } @Bean @Profile(\u0026#34;prod\u0026#34;) Client prodClient(){ return new Client(\u0026#34;https://prod.url\u0026#34;); } } We create a Client bean that connects against a different URL depending on the active profile.\nI have also seen the @Profile annotation used like this:\n@Configuration class MyConfiguration { @Bean @Profile(\u0026#34;postgresql\u0026#34;) DatabaseService postgresqlService() { return new PostgresqlService(); } @Bean @Profile(\u0026#34;h2\u0026#34;) DatabaseService h2Service(){ return new H2Service(); } } If the postgresql profile is active, we connect to a \u0026ldquo;real\u0026rdquo; PostgreSQL database (assuming the PostgresqlService class does that for us). If the h2 profile is active, we connect to an in-memory H2 database, instead.\nAll of the above patterns are bad. Don\u0026rsquo;t do it at home (or rather, at work)!\nActually, don\u0026rsquo;t use the @Profile annotation at all, if you can avoid it. And I will tell you how to avoid it later.\nWhat\u0026rsquo;s Wrong with the @Profile Annotation? 
The main issue I see with the @Profile annotation is that it spreads dependencies to the profiles across the codebase.\nThere probably won\u0026rsquo;t be a single configuration class where we use @Profile(\u0026quot;test\u0026quot;), @Profile(\u0026quot;!test\u0026quot;), @Profile(\u0026quot;postgresql\u0026quot;), or @Profile(\u0026quot;h2\u0026quot;). There will be many places, spread across multiple components of our codebase.\nWith the @Profile annotations spread across the codebase, we can\u0026rsquo;t see at a glance what effect a particular profile has on our application. What\u0026rsquo;s more, we don\u0026rsquo;t know what happens if we combine certain profiles.\nWhat happens if we activate the h2 profile? What happens if we activate the h2 profile and we do not activate the test profile? What happens if we activate the postgresql profile together with the test profile? Will the application still work?\nTo find out, we have to do a full-text search for @Profile annotations in our codebase and try to make sense of the configuration. Which no one will do, because it\u0026rsquo;s tedious. Which means that no one will understand the application configuration. In turn, this means that we\u0026rsquo;ll trial-and-error our way through any issues we encounter\u0026hellip;\nUsing negations like @Profile(\u0026quot;!test\u0026quot;) makes it even worse. We can\u0026rsquo;t even use a full-text search to look for beans that are activated with a certain profile, because the profile is not visible in the code. Instead, we have to know to search for !test.\nYou get the gist. And we\u0026rsquo;ve only been talking about a couple of different profiles here. Imagine the combinatorial mess when there are more!\nHow to Avoid the @Profile Annotation? First of all, say goodbye to profiles like postgresql, h2, or enableFoo. Profiles should be used for exactly one reason: to create a configuration profile for a runtime environment. 
You can read more about when not to use profiles here.\nFor each environment the application is going to run in, we create a separate profile. Usually these are variations of the following:\n local to configure the application for local development, staging to configure the application to run in a staging environment, prod to configure the application to run in a prod environment, and perhaps test to configure the application to run in tests.  There may be more environments, of course, depending on the application and the ecosystem it lives in.\nBut the idea is that we have an application-\u0026lt;profile\u0026gt;.yml configuration file for each profile which contains ALL configuration parameters that are different from the default.\nThen, we can fix the examples from above.\nInstead of using @Profile(\u0026quot;test\u0026quot;) and @Profile(\u0026quot;!test\u0026quot;) to load a MockService or a RealService instance, we add a property to our application.yml:\nservice.mock: false In application-test.yml, we override this property to true, to load the mock during testing.\nIn the code, we do the following:\n@Configuration class MyConfiguration { @Bean @ConditionalOnProperty(name=\u0026#34;service.mock\u0026#34;, havingValue=\u0026#34;true\u0026#34;) Service mockService() { return new MockService(); } @Bean @ConditionalOnProperty(name=\u0026#34;service.mock\u0026#34;, havingValue=\u0026#34;false\u0026#34;) Service realService(){ return new RealService(); } } The code doesn\u0026rsquo;t look much different from the original, but what we\u0026rsquo;ve achieved is that we no longer reference the profile in the code. Instead, we reference a configuration property. This property we can influence from any application-\u0026lt;profile\u0026gt;.yml configuration file. We\u0026rsquo;re no longer bound to the test profile, but we have fine-grained control over the configuration property that influences mocking of the service.\nWhat To Do in a Plain Spring Application?  
The @ConditionalOnProperty annotation is only available in Spring Boot, not in plain Spring. Also, we don't have Spring Boot's powerful configuration features with a different application-\u0026lt;profile\u0026gt;.yml configuration file for each profile.  In a plain Spring application, make sure that you're using profiles only for environment profiles like \"local\", \"staging\", and \"prod\", and not to control features (i.e. no \"h2\", \"postgresql\", or \"enableFoo\" profiles). Then, create a @Configuration class for each profile that's annotated with @Profile(\"profileName\") and contains all beans that are loaded conditionally in that profile.  This means you have to write a bit more code because you have to duplicate some bean definitions across profiles, but you have also confined the dependency on profiles to a few classes instead of spreading it across the codebase. Also, you can just search for a profile name and you will find the beans it controls (as long as you don't use negations like @Profile(\"!test\")).  We do a very similar thing in the second example. Instead of hard-coding the staging and production URL of the external resource into the code, we create the property client.resourceUrl in application-staging.yml and application-prod.yml and set its value to the URL we need in the respective environment. Then, we access that configuration property from the code like this:\n@Configuration class MyConfiguration { @Bean Client client(@Value(\u0026#34;${client.resourceUrl}\u0026#34;) String resourceUrl) { return new Client(resourceUrl); } } We have even shaved off a couple of lines of code this way, because now we only have one @Bean-annotated method instead of two.\nWe can solve the third example in a similar manner: we create a property database.mode and set it to h2 in application-local.yml and to postgresql in application-staging.yml and application-prod.yml. 
Then, in the code, we reference this new property:\n@Configuration class MyConfiguration { @Bean DatabaseService databaseService(@Value(\u0026#34;${database.mode}\u0026#34;) String databaseMode) { if (\u0026#34;postgresql\u0026#34;.equals(databaseMode)) { return new PostgresqlService(); } else if (\u0026#34;h2\u0026#34;.equals(databaseMode)) { return new H2Service(); } throw new ConfigurationException(\u0026#34;invalid value for \u0026#39;database.mode\u0026#39;: \u0026#34; + databaseMode); } } The code looks a bit more complicated because we have introduced an if/else block, but again we have removed the dependency on a specific profile from the code and instead pushed it into the application-\u0026lt;profile\u0026gt;.yml configuration files where it belongs.\nThe pattern is this: every time you want to use @Profile, create a configuration property instead. Then, set the value of that configuration property for each environment in the respective application-\u0026lt;profile\u0026gt;.yml file.\nThis way, we have a single source of truth for the configuration of each environment and no longer need to search the codebase for all the @Profile annotations and then guess which combinations are valid and which are not.\nConclusion Don\u0026rsquo;t use @Profile, because it spreads dependencies to profiles all across the codebase. 
Every time you need a profile-specific configuration, introduce a specific configuration property and control that property for each profile in the respective application-\u0026lt;profile\u0026gt;.yml file.\nIt will make your team\u0026rsquo;s life easier because you now have a single source of truth for all your configuration properties instead of having to search the codebase every time you want to know how the application is configured.\n","date":"March 21, 2021","image":"https://reflectoring.io/images/stock/0098-profile-1200x628-branded_huc871bff62bbbf27ac0fe6e66c8b066d4_38247_650x0_resize_q90_box.jpg","permalink":"/dont-use-spring-profile-annotation/","title":"Don't Use the @Profile Annotation in a Spring Boot App!"},{"categories":["Java","AWS"],"contents":"In the article \u0026ldquo;Getting Started with AWS CloudFormation\u0026rdquo;, we have already played around a bit with AWS CloudFormation. We have deployed a network stack that provides the network infrastructure we need, and a service stack that deploys a Docker image with our Spring Boot application into that network.\nIn this article, we\u0026rsquo;ll do the same with the Cloud Development Kit (CDK) instead of CloudFormation. Instead of describing our stacks in YAML, however, we\u0026rsquo;ll be using Java. Furthermore, we\u0026rsquo;ll replace the AWS CLI with the CDK CLI which allows us to deploy and destroy our stacks with ease.\nUnder the hood, CDK will \u0026ldquo;synthesize\u0026rdquo; a CloudFormation file from our Java code and pass that file to the CloudFormation API to deploy our infrastructure. This means that with CDK, we describe the same resources as we would in a CloudFormation YAML file. But, having the power of a real programming language at our hands (in our case, Java), we can build abstractions on top of the low-level CloudFormation resources (and, most importantly, we don\u0026rsquo;t have to worry about indentation). 
These abstractions are called \u0026ldquo;constructs\u0026rdquo; in CDK lingo.\nLet\u0026rsquo;s create our first CDK app! Follow along with the steps in this chapter to create a CDK app that deploys our \u0026ldquo;Hello World\u0026rdquo; application to the cloud.\nCheck Out the Book!  This article is a self-sufficient sample chapter from the book Stratospheric - From Zero to Production with Spring Boot and AWS.\nIf you want to learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check it out!\n Creating Our First CDK App The unit of work in CDK is called an \u0026ldquo;app\u0026rdquo;. Think of an app as a project that we import into our IDE. In Java terms, this is a Maven project by default.\nIn that app, we can define one or more stacks. And each stack defines a set of resources that should be deployed as part of that stack. Note that a CDK stack is the same concept as a CloudFormation stack.\nOnce we have an app in place, the CDK CLI allows us to deploy or destroy (undeploy) all stacks at the same time, or we can choose to interact with a specific stack only.\nBefore we can start, we have to get some prerequisites out of the way.\nInstalling Node Even though we\u0026rsquo;re using the Java CDK, the CDK CLI is built with Node.js. So, we need to install it on our machine.\nIf you don\u0026rsquo;t have Node.js installed yet, you can download it from the Node.js website or use the package manager of your choice to install it. We have tested all the steps in this book with Node.js 14, which is the latest version at the time of writing, but it will probably work with other versions as well.\nYou can check your Node.js version by calling node -v.\nInstalling the CDK CLI Next, we want to install the CDK CLI.\nHaving Node.js installed, this is as easy as calling npm install -g aws-cdk. 
This will make the CDK CLI command cdk available globally on your system.\nAs with Node.js you can check the version of your CDK CLI installation by calling cdk --version.\nCreating the CDK App Now we\u0026rsquo;re ready to create our first CDK app!\nLike many modern development CLIs, the CDK CLI provides the functionality to bootstrap a new project from scratch.\nLet\u0026rsquo;s create a new folder for our app, change into it, and run this command:\ncdk init app --language=java After CDK has created our app we\u0026rsquo;re greeted with this message:\n# Welcome to your CDK Java project! This is a blank project for Java development with CDK. The `cdk.json` file tells the CDK Toolkit how to execute your app. It is a [Maven](https://maven.apache.org/) based project, so you can open this project with any Maven compatible Java IDE to build and run tests. ## Useful commands  * `mvn package` compile and run tests * `cdk ls` list all stacks in the app * `cdk synth` emits the synthesized CloudFormation template * `cdk deploy` deploy this stack to your default AWS account/region * `cdk diff` compare deployed stack with current state * `cdk docs` open CDK documentation Enjoy! Aside from some useful commands, there is some important information in this message:\n the project relies on Maven to compile and package the code, and there\u0026rsquo;s a file called cdk.json that tells the CDK how to run our app.  We\u0026rsquo;ll make use of that information in the next section.\nMaking the CDK App Portable with the Maven Wrapper Before we inspect the generated app in more detail, let\u0026rsquo;s fix an issue with the auto-generated Maven setup.\nThe message above says that we need to run mvn package to compile and run the tests. That means Maven needs to be installed on our machine. 
Thinking a bit further, this also means that Maven needs to be installed on the build server once we decide to set up a continuous deployment pipeline.\nWhile it\u0026rsquo;s not an unsolvable problem to install Maven on a local or remote machine, we\u0026rsquo;ll have a more self-contained solution if the build takes care of \u0026ldquo;installing\u0026rdquo; Maven itself.\nThe solution to this is the Maven Wrapper. It\u0026rsquo;s a script that downloads Maven if necessary. To install it we copy the folder .mvn and the files mvnw and mvnw.cmd from the example project into the main folder of our newly created CDK app.\nInstead of calling mvn package, we can now call ./mvnw package for the same effect, even if Maven is not installed on our machine.\nBut we\u0026rsquo;re not completely done yet. Remember the message saying that the file cdk.json tells the CDK how to execute our app? Let\u0026rsquo;s look into that file:\n{ \u0026#34;app\u0026#34;: \u0026#34;mvn -e -q compile exec:java\u0026#34;, \u0026#34;context\u0026#34;: { \u0026#34;@aws-cdk/core:enableStackNameDuplicates\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;aws-cdk:enableDiffNoFail\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;@aws-cdk/core:stackRelativeExports\u0026#34;: \u0026#34;true\u0026#34; } } In the first line of this JSON structure, it\u0026rsquo;s telling the CDK how to compile and then execute our CDK app. It\u0026rsquo;s set up to call mvn by default. So, let\u0026rsquo;s replace that with ./mvnw and we\u0026rsquo;re done.\nNow, any time we call a command like cdk deploy, the CDK will call the Maven Wrapper instead of Maven directly to execute our CDK app.\nInspecting the Generated Source Code With everything set up, let\u0026rsquo;s have a look at the code that the CDK created for us. 
In the folder src/main/java/com/myorg we\u0026rsquo;ll find the files CdkApp and CdkStack:\npublic class CdkApp { public static void main(final String[] args) { App app = new App(); new CdkStack(app, \u0026#34;CdkStack\u0026#34;); app.synth(); } } public class CdkStack extends Stack { public CdkStack(final Construct scope, final String id) { this(scope, id, null); } public CdkStack(final Construct scope, final String id, final StackProps props) { super(scope, id, props); // The code that defines your stack goes here  } } That\u0026rsquo;s all the code we need for a working CDK app!\nCdkApp is the main class of the app. It\u0026rsquo;s a standard Java class with a standard main() method to make it executable. The main() method creates an App instance and a CdkStack instance and finally calls app.synth() to tell the CDK app to create CloudFormation files with all the CloudFormation resources it contains. These CloudFormation files will be written to the folder named cdk.out.\nWhen we run CDK commands like cdk deploy, CDK will execute the main method of CdkApp to generate the CloudFormation files. The deploy command knows where to look for these files and then sends them to the CloudFormation API to deploy.\nThe CdkStack class represents a CloudFormation stack. As mentioned before, a CDK app contains one or more stacks. This stack is where we would add the resources we want to deploy. We\u0026rsquo;ll add our own resources later in this chapter. For now, we\u0026rsquo;ll leave it empty.\nDeploying the Generated CDK App Let\u0026rsquo;s try to deploy the generated CDK app.\nThis is as easy as executing the cdk deploy command in the folder of the app. It will take a couple of seconds and we\u0026rsquo;ll be rewarded with a success message like this one:\nTestStack: deploying... TestStack: creating CloudFormation changeset... [========================================================] (2/2) TestStack Stack ARN: arn:aws:cloudformation:ap-southeast-2:... 
This means that CDK has successfully deployed the (empty) stack. If we log in to the AWS web console and navigate to the CloudFormation service, we should see a stack called \u0026ldquo;TestStack\u0026rdquo; deployed there:\nThe stack contains a single resource called CDKMetadata, which the CDK needs to work with that stack.\nBefore moving on, let\u0026rsquo;s destroy the stack again with cdk destroy.\nDeploying a Spring Boot App with a CDK Construct Now that we know the basic workings of CDK, let\u0026rsquo;s deploy a real application! The goal is to deploy an ECS Cluster that runs a Docker image with our Spring Boot app. To keep things simple for now, we\u0026rsquo;ll deploy the \u0026ldquo;Hello World\u0026rdquo; app from the Stratospheric book.\nAs mentioned, the resources that we include in a CDK stack are called constructs. To show the power of CDK - and to keep it easy for now - we have prepared a construct with the name SpringBootApplicationStack that includes all the resources we need. All we need to do is to include this construct into our CDK stack.\nAdding the Stratospheric Construct Library To get access to the SpringBootApplicationStack construct, we need to include the cdk-constructs library in our project. We created this library to provide constructs that we\u0026rsquo;re going to use throughout the book.\nLet\u0026rsquo;s add the following snippet to the pom.xml file in the CDK project:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;dev.stratospheric\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;cdk-constructs\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.0.7\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; You can check for a more recent version of the cdk-constructs library and browse the source files on GitHub.\nUsing the SpringBootApplicationStack As you might expect from the name of the construct, SpringBootApplicationStack is a stack. It extends the Stack class of the CDK API. 
That means we can use it to replace the generated CdkStack class.\nSo, we modify the generated CdkApp class to include a SpringBootApplicationStack instead of an empty CdkStack:\npublic class CdkApp { public static void main(final String[] args) { App app = new App(); String accountId = (String) app.getNode().tryGetContext(\u0026#34;accountId\u0026#34;); Objects.requireNonNull(accountId, \u0026#34;context variable \u0026#39;accountId\u0026#39; must not be null\u0026#34;); String region = (String) app.getNode().tryGetContext(\u0026#34;region\u0026#34;); Objects.requireNonNull(region, \u0026#34;context variable \u0026#39;region\u0026#39; must not be null\u0026#34;); new SpringBootApplicationStack( app, \u0026#34;SpringBootApplication\u0026#34;, makeEnv(accountId, region), \u0026#34;docker.io/stratospheric/todo-app-v1:latest\u0026#34;); app.synth(); } static Environment makeEnv(String account, String region) { return Environment.builder() .account(account) .region(region) .build(); } } The first apparent change is that we\u0026rsquo;re now accepting two parameters. With app.getNode().tryGetContext(), we\u0026rsquo;re reading so-called \u0026ldquo;context variables\u0026rdquo; from the command line.\nWe can pass such parameters to the cdk command line with the -c parameter, for example like this:\ncdk deploy -c accountId=123456789 -c region=ap-southeast-2 Why are we passing the account ID and the AWS region into the app? The reason is to be more flexible. If not provided, the CDK CLI will always take the account and region that we have pre-configured with the AWS CLI. We\u0026rsquo;d have no way of deploying resources into other accounts and regions. We don\u0026rsquo;t really need this flexibility yet but SpringBootApplicationStack uses more sophisticated constructs under the hood which need these parameters as input.\nNext, we create a SpringBootApplicationStack instance. 
We pass in the app instance to let CDK know that this SpringBootApplicationStack is part of the app and should be included in the synthesized CloudFormation files.\nThe second parameter is an arbitrary (but unique) identifier for the construct within the app.\nThe third parameter combines the accountId and region parameters to create an Environment object. Environment is a CDK class that we\u0026rsquo;re reusing here.\nThe final parameter is the URL to the Docker image that we want to deploy. We\u0026rsquo;ll use the same image we have used before. We could also decide to make the URL a context variable to be passed from the outside to make the CDK app more flexible.\nYou might wonder why we\u0026rsquo;re not doing anything with the SpringBootApplicationStack instance. When creating a construct, we always pass a parent construct or the parent app into the constructor. The construct will then register with the app so that the app knows which constructs to include in the synthesized CloudFormation stack when calling app.synth().\nDeploying the CDK App Let\u0026rsquo;s try out our shiny new CDK app! Let\u0026rsquo;s run this command:\ncdk deploy -c accountId=\u0026lt;ACCOUNT_ID\u0026gt; -c region=\u0026lt;REGION\u0026gt; Replace ACCOUNT_ID and REGION with your AWS account number and region, respectively.\nThe CDK will show a list of \u0026ldquo;IAM Statement Changes\u0026rdquo; and \u0026ldquo;Security Group Changes\u0026rdquo; for you to confirm. This is a security measure to avoid unintended changes in security configuration. After confirming, the console should show the deployment progress like this:\nDo you wish to deploy these changes (y/n)? y SpringBootApplication: deploying... SpringBootApplication: creating CloudFormation changeset... [========·················································] (7/46) 7:29:22 am | CREATE_IN_PROGRESS | AWS::CloudFormation::Stack | SpringBootAppli... 
7:29:28 am | CREATE_IN_PROGRESS | AWS::EC2::InternetGateway | network/vpc/IGW 7:29:28 am | CREATE_IN_PROGRESS | AWS::EC2::VPC | network/vpc 7:29:29 am | CREATE_IN_PROGRESS | AWS::IAM::Role | Service/ecsTaskRole 7:29:29 am | CREATE_IN_PROGRESS | AWS::IAM::Role | Service/ecsTaskE... Since the SpringBootApplicationStack contains a lot of resources under the hood, it will take a minute or two for the deployment to finish.\nWhen it\u0026rsquo;s done, we should see an output like this in the console:\nOutputs: SpringBootApplication.loadbalancerDnsName = prod-loadbalancer-810384126.ap-southeast-2.elb.amazonaws.com Stack ARN: arn:aws:cloudformation:ap-southeast-2:494365134671:stack/SpringBootApplication/0b6b4410-3be9-11eb-b5d5-0a689720a8fe This means the SpringBootApplication stack has been successfully deployed. CloudFormation stacks support the concept of \u0026ldquo;output parameters\u0026rdquo; and CDK prints any such output parameters after a successful deployment. The SpringBootApplication is built to expose the DNS name of its load balancer as an output parameter, which is why we see that DNS name in the console.\nIf we copy this URL into our browser, we should see our hello world application.\nInspecting the CloudFormation web console again, we should see a stack with a bunch of resources.\nWhen done inspecting the stack, don\u0026rsquo;t forget to destroy it to avoid unnecessary costs:\ncdk destroy -c accountId=\u0026lt;ACCOUNT_ID\u0026gt; -c region=\u0026lt;REGION\u0026gt; Why Not Stop Here? We have successfully deployed a Spring Boot application with about 20 lines of Java code with the help of AWS CDK. Doing the same with plain CloudFormation templates would take us a couple hundred lines of YAML configuration. That\u0026rsquo;s quite an achievement!\nSo, why not stop here? Why is there another in-depth chapter about CDK coming up? 
Our SpringBootApplicationStack gives us everything we need to deploy a Spring Boot application, doesn\u0026rsquo;t it?\nThe main reason is that our SpringBootApplicationStack construct is not very flexible. The only thing we have control over is the URL of the Docker image. Like any abstraction, the SpringBootApplicationStack hides a lot of details from us.\nWhat if we need to connect our Spring Boot application to a database or SQS queues? What if the path to our application\u0026rsquo;s health check is different from the default? What if our application needs more CPU power than the default 256 units? What if we prefer to use HTTPS rather than HTTP?\nAlso, imagine an environment with more than one application. We\u0026rsquo;d have one network for staging and another for production. We\u0026rsquo;d want to deploy multiple applications into each network. This doesn\u0026rsquo;t work currently, because each SpringBootApplicationStack would try to create its own VPC (which would fail for the second application because it would try to use the same resource names).\nThis means our CDK project needs to be flexible enough to let us deploy additional resources as needed and give us a lot of knobs and dials to configure the infrastructure and our application. We want to have fine-grained control.\nTo get this control, we have to build our own stacks and our own constructs. And this is what we\u0026rsquo;re going to do in the next chapter.\nCheck Out the Book!  
This article is a self-sufficient sample chapter from the book Stratospheric - From Zero to Production with Spring Boot and AWS.\nIf you want to learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check it out!\n ","date":"March 6, 2021","image":"https://reflectoring.io/images/stock/0061-cloud-1200x628-branded_hu34d6aa247e0bb2675461b5a0146d87a8_82985_650x0_resize_q90_box.jpg","permalink":"/deploy-spring-boot-app-with-aws-cdk/","title":"Deploying a Spring Boot App with the AWS CDK"},{"categories":["Java"],"contents":"As Java developers, we are familiar with our applications throwing OutOfMemoryErrors or our server monitoring tools throwing alerts and complaining about high JVM memory utilization.\nTo investigate memory problems, the JVM Heap Memory is often the first place to look at.\nTo see this in action, we will first trigger an OutOfMemoryError and then capture a heap dump. We will next analyze this heap dump to identify the potential objects which could be the cause of the memory leak.\n Example Code This article is accompanied by a working code example on GitHub. What is a Heap Dump? Whenever we create a Java object by creating an instance of a class, it is always placed in an area known as the heap. Classes of the Java runtime are also created in this heap.\nThe heap gets created when the JVM starts up. It expands or shrinks during runtime to accommodate the objects created or destroyed in our application.\nWhen the heap becomes full, the garbage collection process is run to collect the objects that are not referenced anymore (i.e. they are not used anymore). More information on memory management can be found in the Oracle docs.\nHeap dumps contain a snapshot of all the live objects that are being used by a running Java application on the Java heap. 
We can obtain detailed information for each object instance, such as the address, type, class name, or size, and whether the instance has references to other objects.\nHeap dumps have two formats:\n the classic format, and the Portable Heap Dump (PHD) format.  PHD is the default format on IBM/OpenJ9 JVMs, while HotSpot JVMs write dumps in the binary HPROF format (the .hprof files we will see below). The classic format is human-readable since it is in ASCII text, but the PHD format is binary and should be processed by appropriate tools for analysis.\nSample Program to Generate an OutOfMemoryError To explain the analysis of a heap dump, we will use a simple Java program to generate an OutOfMemoryError:\npublic class OOMGenerator { /** * @param args * @throws Exception */ public static void main(String[] args) throws Exception { System.out.println(\u0026#34;Max JVM memory: \u0026#34; + Runtime.getRuntime().maxMemory()); try { ProductManager productManager = new ProductManager(); productManager.populateProducts(); } catch (OutOfMemoryError outofMemory) { System.out.println(\u0026#34;Catching out of memory error\u0026#34;); throw outofMemory; } } } public class ProductManager { private static ProductGroup regularItems = new ProductGroup(); private static ProductGroup discountedItems = new ProductGroup(); public void populateProducts() { int dummyArraySize = 1; for (int loop = 0; loop \u0026lt; Integer.MAX_VALUE; loop++) { if(loop%2 == 0) { createObjects(regularItems, dummyArraySize); }else { createObjects(discountedItems, dummyArraySize); } System.out.println(\u0026#34;Memory Consumed till now: \u0026#34; + loop + \u0026#34;::\u0026#34;+ regularItems + \u0026#34; \u0026#34;+discountedItems ); dummyArraySize *= dummyArraySize * 2; } } private void createObjects(ProductGroup productGroup, int dummyArraySize) { for (int i = 0; i \u0026lt; dummyArraySize; i++) { productGroup.add(createProduct()); } } private AbstractProduct createProduct() { int randomIndex = (int) Math.round(Math.random() * 10); switch (randomIndex) { case 0: return new ElectronicGood(); case 1: return new 
BrandedProduct(); case 2: return new GroceryProduct(); case 3: return new LuxuryGood(); default: return new BrandedProduct(); } } } We keep allocating memory in a loop until the JVM no longer has enough memory to allocate, at which point an OutOfMemoryError is thrown.\nFinding the Root Cause of an OutOfMemoryError We will now find the cause of this error by doing a heap dump analysis. This is done in two steps:\n Capture the heap dump Analyze the heap dump file to locate the suspected reason.  We can capture a heap dump in multiple ways. Let us capture the heap dump for our example first with jmap and then by passing a VM argument on the command line.\nGenerating a Heap Dump on Demand with jmap jmap is packaged with the JDK and extracts a heap dump to a specified file location.\nTo generate a heap dump with jmap, we first find the process ID of our running Java program with the jps tool to list all the running Java processes on our machine:\n...:~ fab$ jps 10514 24007 41927 OOMGenerator 41949 Jps After running the jps command, we can see the processes are listed in the format “\u0026lt;process id\u0026gt; \u0026lt;main class\u0026gt;”.\nNext, we run the jmap command to generate the heap dump file:\njmap -dump:live,file=mydump.hprof 41927 After running this command, a heap dump file with the extension .hprof is created.\nThe option live is used to collect only the live objects that still have a reference in the running code. With the live option, a full GC is triggered to sweep away unreachable objects and then dump only the live objects.\nAutomatically Generating a Heap Dump on OutOfMemoryErrors This option is used to capture a heap dump at the point in time when an OutOfMemoryError occurs. 
This helps to diagnose the problem because we can see what objects were sitting in memory and what percentage of memory they were occupying right at the time of the OutOfMemoryError.\nWe will use this option for our example since it will give us more insight into the cause of the crash.\nLet us run the program with the VM option HeapDumpOnOutOfMemoryError from the command line or our favorite IDE to generate the heap dump file (note that the -XX options must come before the -jar argument, otherwise they are passed to the application as program arguments instead of to the JVM):\njava -XX:+HeapDumpOnOutOfMemoryError \\ -XX:HeapDumpPath=\u0026lt;File path\u0026gt;/hdump.hprof \\ -jar target/oomegen-0.0.1-SNAPSHOT.jar After running our Java program with these VM arguments, we get this output:\nMax JVM memory: 2147483648 Memory Consumed till now: 960 Memory Consumed till now: 29760 Memory Consumed till now: 25949760 java.lang.OutOfMemoryError: Java heap space Dumping heap to \u0026lt;File path\u0026gt;/hdump.hprof ... Heap dump file created [17734610 bytes in 0.031 secs] Catching out of memory error Exception in thread \u0026#34;main\u0026#34; java.lang.OutOfMemoryError: Java heap space at io.pratik.OOMGenerator.main(OOMGenerator.java:25) As we can see from the output, the heap dump file hdump.hprof is created when the OutOfMemoryError occurs.\nOther Methods of Generating Heap Dumps Some of the other methods of generating a heap dump are:\n  jcmd: jcmd is used to send diagnostic command requests to the JVM. It is packaged as part of the JDK. It can be found in the \\bin folder of a Java installation.\n  JVisualVM: Usually, analyzing a heap dump takes more memory than the actual heap dump size. This could be problematic if we are trying to analyze a heap dump from a large server on a development machine. 
JVisualVM provides a live sampling of the heap memory so it does not eat up the whole memory.\n  Analyzing the Heap Dump What we are looking for in a heap dump is:\n Objects with high memory usage The object graph, to identify objects that are not releasing memory Reachable and unreachable objects  Eclipse Memory Analyzer (MAT) is one of the best tools to analyze Java heap dumps. Let us understand the basic concepts of Java heap dump analysis with MAT by analyzing the heap dump file we generated earlier.\nWe will first start the Memory Analyzer Tool and open the heap dump file. In Eclipse MAT, two types of object sizes are reported:\n Shallow heap size: The shallow heap of an object is its size in memory Retained heap size: The retained heap is the amount of memory that will be freed when an object is garbage collected.  Overview Section in MAT After opening the heap dump, we will see an overview of the application\u0026rsquo;s memory usage. The pie chart shows the biggest objects by retained size in the overview tab as shown here:\nFor our application, this information in the overview means that if we could dispose of a particular instance of java.lang.Thread we would save 1.7 GB, which is almost all of the memory used in this application.\nHistogram View While that might look promising, java.lang.Thread is unlikely to be the real problem here. To get better insight into what objects currently exist, we will use the Histogram view:\nWe have filtered the histogram with the regular expression \u0026ldquo;io.pratik.*\u0026rdquo; to show only the classes that match the pattern. With this view, we can see the number of live objects: for example, 243 BrandedProduct objects and 309 Price objects are alive in the system. We can also see the amount of memory each object is using.\nThere are two calculations, Shallow Heap and Retained Heap. The shallow heap is the amount of memory consumed by one object. An object requires 32 or 64 bits (depending on the architecture) for each reference. 
Primitives such as integers and longs require 4 or 8 bytes, etc… While this can be interesting, the more useful metric is the Retained Heap.\nRetained Heap Size The retained heap size is computed by adding the size of all the objects in the retained set. A retained set of X is the set of objects which would be removed by the Garbage Collector when X is collected.\nThe retained heap can be calculated in two different ways, using the quick approximation or the precise retained size:\nBy calculating the Retained Heap we can now see that io.pratik.ProductGroup is holding the majority of the memory, even though it is only 32 bytes (shallow heap size) by itself. By finding a way to free up this object, we can certainly get our memory problem under control.\nDominator Tree The dominator tree is used to identify the retained heap. It is produced by the complex object graph generated at runtime and helps to identify the largest memory graphs. An Object X is said to dominate an Object Y if every path from the Root to Y must pass through X.\nLooking at the dominator tree for our example, we can see which objects are retained in the memory.\nWe can see that the ProductGroup object holds the memory instead of the Thread object. We can probably fix the memory problem by releasing objects contained in this object.\nLeak Suspects Report We can also generate a \u0026ldquo;Leak Suspects Report\u0026rdquo; to find a suspected big object or set of objects. 
This report presents the findings on an HTML page and is also saved in a zip file next to the heap dump file.\nDue to its smaller size, it is preferable to share the \u0026ldquo;Leak Suspects Report\u0026rdquo; with teams specialized in performing analysis tasks instead of the raw heap dump file.\nThe report has a pie chart, which gives the size of the suspected objects:\nFor our example, we have one suspect labeled \u0026ldquo;Problem Suspect 1\u0026rdquo;, which is further explained with a short description:\nApart from the summary, this report also contains detailed information about the suspects, which can be accessed by following the “details” link at the bottom of the report:\nThe detailed information consists of:\n  Shortest paths from GC root to the accumulation point: Here we can see all the classes and fields through which the reference chain is going, which gives a good understanding of how the objects are held. In this report, we can see the reference chain going from the Thread to the ProductGroup object.\n  Accumulated Objects in Dominator Tree: This gives some information about the accumulated content, which in our case is a collection of GroceryProduct objects.\n  Conclusion In this post, we introduced the heap dump, which is a snapshot of a Java application\u0026rsquo;s object memory graph at runtime. To illustrate, we captured the heap dump from a program that threw an OutOfMemoryError at runtime.\nWe then looked at some of the basic concepts of heap dump analysis with Eclipse Memory Analyzer: large objects, GC roots, shallow vs. 
retained heap, and dominator tree, all of which together will help us to identify the root cause of specific memory issues.\n","date":"March 1, 2021","image":"https://reflectoring.io/images/stock/0019-magnifying-glass-1200x628-branded_hudd3c41ec99aefbb7f273ca91d0ef6792_109335_650x0_resize_q90_box.jpg","permalink":"/create-analyze-heapdump/","title":"Creating and Analyzing Java Heap Dumps"},{"categories":["Software Craft"],"contents":"Writing meaningful commit messages can save a lot of time answering many \u0026ldquo;why?\u0026rdquo; and \u0026ldquo;how?\u0026rdquo; questions, and thus gives us more time in the day to do productive work.\nWhy Is a Good Commit Message Important? Commit messages are a way of communication between team members. Let\u0026rsquo;s say there\u0026rsquo;s a bug in the application which was not there before. To find out what caused the problem, reading the commit messages could be handy. The proper commit message can save a great deal of time finding the recent changes related to a bug.\nBeing a new member of a team and working on projects we haven\u0026rsquo;t seen before has its challenges. If we have a task to add some logic to some part of the code, previous good commit messages can help us find out where and how to add the code.\nIf we fix a bug or add a feature we will probably completely forget about it a month or two later. It\u0026rsquo;s not a good idea to think that if it\u0026rsquo;s not clear for others, they can ask us about it. Instead, we should provide proper commit messages for people to use as a resource in their daily work.\nWhat Is a Good Commit Message? Good commit messages can be written in many different styles. The trick is to pick the best style that suits the team and the project and then stick to it. 
Like in so many other things, being consistent in our commit message produces compound results over time.\nThe perfect commit message should have certain qualities:\n It should be understandable even by seeing only the header of the message (we\u0026rsquo;ll talk about the header soon). It should be just enough, and not too detailed. It should be unambiguous.  Let\u0026rsquo;s explore some things we should keep in mind when creating commit messages.\nAtomic Commits Although using a proper style is a good practice, it\u0026rsquo;s not enough. Discipline is crucial. Our commits should be reasonably small and atomic.\nIf the commit consists of multiple changes that make the message too long or inefficient, it\u0026rsquo;s good practice to separate it into several commits. In other words: we don\u0026rsquo;t want to commit a change that changes too much.\nIf we commit two changes together, for example, a bug fix and a minor refactoring, it might not cause a very long commit message, but it can cause some other problems.\nLet\u0026rsquo;s say the bug fix created some other bugs. In that case, we need to roll back the production code to the previous. This will result in the loss of the refactoring as well. It\u0026rsquo;s not efficient, and it\u0026rsquo;s not atomic.\nAlso, if someone searches the commit history for the changes made for the refactoring, they have to figure out which files were touched for the refactoring and which for the bugfix. This will cost more time than necessary.\nShort and Unambiguous The commit message should describe what changes our commit makes to the behavior of the code, not what changed in the code. We can see what changed in the diff with the previous commit, so we don\u0026rsquo;t need to repeat it in the commit message. But to understand what behavior changed, a commit message can be helpful.\nIt should answer the question: \u0026ldquo;What happens if the changes are applied?\u0026quot;. 
If the answer can\u0026rsquo;t be short, it might be because the commit is not atomic, and it\u0026rsquo;s too much change in one commit.\nActive Voice Use the imperative, present tense. It is easier to read and scan quickly:\nRight: Add feature to alert admin for new user registration Wrong: Added feature ... (past tense) We use an imperative verb because it\u0026rsquo;s going to complete the sentence \u0026ldquo;If applied, this commit will \u0026hellip;\u0026rdquo; (e.g. \u0026ldquo;If applied, this commit will add a feature to alert admin for new user registration\u0026rdquo;).\nUsing present tense and not past tense in commit messages has made a big thread of discussions between developers over the question \u0026ldquo;Why should it be present tense?\u0026rdquo;.\nThe reason behind using present tense is that the commit message is answering the question \u0026ldquo;What will happen after the commit is applied?\u0026rdquo;. If we think of a commit as an independent patch, it doesn\u0026rsquo;t matter if it applied in the past. What matters is that this patch is always supposed to make that particular change when it\u0026rsquo;s applied.\nDetailed Enough Super-detailed commit messages are frustrating as well. We can find that level of detail in the code. For example, if our version control is Git, we can see all the changed files in Git, so we don\u0026rsquo;t have to list them.\nSo, instead of answering \u0026ldquo;what are the changes?\u0026rdquo;, it\u0026rsquo;s better to answer \u0026ldquo;What are the changes for?\u0026quot;.\nFormatting Let\u0026rsquo;s start with Git conventions. Other conventions usually have the Git conventions in their core.\nGit suggests a commit message should have three parts including a subject, a description, and a ticket number. Let\u0026rsquo;s see the exact template mentioned on Git\u0026rsquo;s website:\nSubject line (try to keep under 50 characters) Multi-line description of commit, feel free to be detailed. 
(Up to 72) [Ticket: X] The subject should be kept under 50 characters to get clean output when executing the command git log --oneline. The description should wrap at 72 characters.\nPreslav Rachev in his article explains the reason for the 50/72 rule. The ideal size of a git commit summary is around 50 characters in length. Analyzing the average length of commit messages in the Linux kernel suggests this number. The 72-character rule centers the description on an 80-column terminal: git log indents the commit message by four spaces on the left, so wrapping at 72 characters leaves room for a matching margin of four characters on the right.\nConventional Commit Messages Let\u0026rsquo;s now have a look at Conventional Commits, a specification that gives opinionated guardrails to format commit messages.\nThe Conventional Commits format goes hand in hand with semantic versioning, so let\u0026rsquo;s talk about that first.\nSemantic Versioning As described on the Semantic Versioning website, a semantic version consists of three numbers: MAJOR, MINOR, and PATCH. Each number is incremented in different circumstances:\n the MAJOR version when we make incompatible API changes, the MINOR version when we add functionality in a backward-compatible manner, and the PATCH version when we make backward-compatible bug fixes.  
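To make the link between commit types and version bumps concrete, here is a minimal Java sketch of how a release tool might derive the next semantic version from a conventional commit message. The class and method names are purely illustrative (no real tool's API is implied), and the parsing is deliberately simplistic:

```java
// Illustrative sketch: deriving the next semantic version from a
// conventional commit message. Names and parsing rules are simplified.
public class SemverBump {

    // Bumps "MAJOR.MINOR.PATCH" based on the commit's type prefix.
    static String nextVersion(String current, String commitMessage) {
        String[] parts = current.split("\\.");
        int major = Integer.parseInt(parts[0]);
        int minor = Integer.parseInt(parts[1]);
        int patch = Integer.parseInt(parts[2]);

        boolean breaking = commitMessage.contains("BREAKING CHANGE")
                || commitMessage.matches("^\\w+(\\(.+\\))?!:.*");

        if (breaking) {
            return (major + 1) + ".0.0";                    // incompatible API change
        } else if (commitMessage.startsWith("feat")) {
            return major + "." + (minor + 1) + ".0";        // backward-compatible feature
        } else if (commitMessage.startsWith("fix")) {
            return major + "." + minor + "." + (patch + 1); // backward-compatible bug fix
        }
        return current; // docs, chore, etc. do not trigger a release
    }

    public static void main(String[] args) {
        System.out.println(nextVersion("1.4.2", "fix: handle null mobile number"));   // 1.4.3
        System.out.println(nextVersion("1.4.2", "feat(lang): add french language"));  // 1.5.0
        System.out.println(nextVersion("1.4.2", "refactor!: add terminal field"));    // 2.0.0
    }
}
```

This is the automation hinted at above: because the commit type encodes the nature of the change, a tool can compute MAJOR, MINOR, or PATCH bumps without any human decision per release.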
As we\u0026rsquo;ll see, if we follow semantic versioning consistently, generating the version number can be automated based on the commit messages.\nConventional Commits Structure The general structure of a conventional commit message is this:\n[type] [optional scope]: [description] [optional body] [optional footer(s)] Each commit has a type that directly matches semantic versioning practice:\n fix: patches a bug in our codebase (correlates with PATCH in semantic versioning) feat: introduces a new feature to the codebase (correlates with MINOR in semantic versioning) refactor!: introduces a breaking API change by refactoring because of the \u0026ldquo;!\u0026rdquo; symbol (correlating with MAJOR in semantic versioning)  The symbol \u0026rdquo;!\u0026quot; can be used with any type. It signifies a breaking change that correlates with MAJOR in semantic versioning.\nUsing BREAKING CHANGE in the footer introduces a breaking API change as well (correlating with MAJOR in semantic versioning).\nThe Angular commit message format is another conventional format. It suggests that a commit message should consist of a header, a body, and a footer with a blank line between each section because tools like rebase in Git get confused if we run them together without space.\n[type] [optional scope]: [short summary] [body] - at least 20 characters up to 72, optional only for docs [optional footer] The header consists of a type and a summary part. Some add an optional \u0026ldquo;scope\u0026rdquo; in between.\nType The type of commit message says that the change was made for a particular problem. For example, if we\u0026rsquo;ve fixed a bug or added a feature, or maybe changed something related to the docs, the type would be \u0026ldquo;fix\u0026rdquo;, \u0026ldquo;feat\u0026rdquo;, or \u0026ldquo;docs\u0026rdquo;.\nThis format allows multiple types other than \u0026ldquo;fix:\u0026rdquo; and \u0026ldquo;feat:\u0026rdquo; mentioned in the previous part about conventional messages. 
Some other Angular\u0026rsquo;s type suggestions are: \u0026ldquo;build:\u0026rdquo;, \u0026ldquo;chore:\u0026rdquo;, \u0026ldquo;ci:\u0026rdquo;, \u0026ldquo;docs:\u0026rdquo;, \u0026ldquo;style:\u0026rdquo;, \u0026ldquo;refactor:\u0026rdquo;, \u0026ldquo;perf:\u0026rdquo;, \u0026ldquo;test:\u0026rdquo;, and others.\nScope The scope is the package or module that is affected by the change. As mentioned before, it\u0026rsquo;s optional.\nSummary As Angular suggests: \u0026ldquo;It should be present tense. Not capitalized. No period in the end.\u0026rdquo;, and imperative like the type.\nAs Chris Beams mentions in his article about commit messages, the summary should always be able to complete the following sentence:\nIf applied, this commit will\u0026hellip; add authorization for document access\nLet\u0026rsquo;s look at some summary examples:\nRight: fix: add authorization for document access Wrong: fix: Add authorization for document access (capitalized) Wrong: fix: added authorization for document access (not present tense) Wrong: fix: add authorization for document access. (period in the end) In this example, \u0026ldquo;fix\u0026rdquo; is the type, and the sentence after that is the summary.\nBody The format of the body should be just like the summary, but the content goal is different. It should explain the motivation for the change.\nIn other words, it should be an imperative sentence explaining why we\u0026rsquo;re changing the code, compared to what it was before.\nFooter In the footer, we can mention the related task URL or the number of the issue that we worked on:\nConsistency in the Format All the rules above are beneficial only if we keep doing it in all our commits. If the structure changes in each commit, the Git log would be unstructured and unreadable over time, which misses the whole point of making these rules.\nExamples Let\u0026rsquo;s have a look at some examples. 
In each Example, we describe a scenario and then show the shape of the commit message based on formats discussed previously in the article.\nExample One We added a feature to the codebase. It gets the mobile number from the user and adds it to the user table. All positive and negative tests are ready except one. It should check that a user is not allowed to enter characters as the mobile number. We add this test scenario and then commit it with this message:\ntest: add negative test for entering mobile number add test scenario to check if entering character as mobile number is forbidden TST-145 Example Two We realized that getting a parameter from the API output is going to clean up our code. So we did the refactoring and now the new input is mandatory. This means the client should send this specific input or the API does not respond. This refactoring made a MAJOR change that is not backward-compatible. We commit our change with this commit message:\nrefactor!: add terminal field in the payment API BREAKING CHANGE: add the terminal field as a mandatory field to be able to buy products by different terminal numbers the terminal field is mandatory and the client needs to send it or else the API does not work PAYM-130 Example Three We add another language support to our codebase. We can use a scope in our commit message like this:\nfeat(lang): add french language The available scopes must be defined for a codebase beforehand. Ideally, they match a component within the architecture of our code.\nConclusion A great format for writing commit messages can be different in each team. 
The most important aspect is to keep it simple, readable, and consistent.\nUseful Links  https://github.com/joelparkerhenderson/git_commit_message https://medium.com/better-programming/you-need-meaningful-commit-messages-d869e44e98d4 https://medium.com/@auscunningham/enforcing-git-commit-message-style-b86a45380b0f https://www.conventionalcommits.org/en/v1.0.0/  ","date":"February 22, 2021","image":"https://reflectoring.io/images/stock/0016-pen-1200x628-branded_hu01476d2ce863620c75f8f9d54074a6bf_114085_650x0_resize_q90_box.jpg","permalink":"/meaningful-commit-messages/","title":"Writing Meaningful Commit Messages"},{"categories":["Java"],"contents":"Are you working on a project with other developers where reading code is not as fun as you would want because of inconsistent coding styles? In this article, we\u0026rsquo;ll have a look at how to achieve painless code formatting with EditorConfig.\nThe Challenges of Code Formatting I joined a new team almost a year ago and after my onboarding with other engineers across several of their codebases, it was time to start making code contributions. I was using IntelliJ IDEA since most of the codebases I\u0026rsquo;m working on revolve around Java.\nMy initial pull request had a few bugs which I fixed but the ones that seemed distracting were comments around spacing and tabs. I was always having PR comments on spacing/indentation and it became a headache when in one of my PRs, I had several such comments.\nThe issue here was that some files in the repository I worked with use space indentation while the newer files use tab indentation. 
But after checking out the code with my IDE, whose default setting is tabs, all of my changes in any file used that same tab indentation, and that was where the problem started.\nIt wasn\u0026rsquo;t too much of an issue for me while working locally in my IDE, but since I work in a team where other engineers would need to review my code, having consistent coding styles across files became very important.\nAs a short-term fix, I would always confirm the indentation style being used by each file I made changes to and then tweak my IDE indentation style to be the same. This amounts to unnecessary extra work, and when you add up the time spent doing this for every PR that has indentation issues, you realize it\u0026rsquo;s a lot of time that could have been spent on more productive tasks.\nThis problem is exactly what EditorConfig solves.\nEditorConfig allows us to define commonly used coding styles in a file that can easily be used across several IDEs to enforce consistent coding styles among several developers working on the same codebase, thus leading to less friction in your team.\nUsing EditorConfig With EditorConfig, we can define whether we want our indentation style to be tabs or spaces, what the indentation size should be (the most common size I have seen is 4), and other properties that we will discuss in the next section.\nSince there are a lot of IDEs and we cannot touch all of them, I have selected two IDEs:\n IntelliJ IDEA, which comes with native support for EditorConfig, and Eclipse, which requires a plugin to be downloaded for it to work properly.  For a complete list of supported IDEs (those requiring plugins and those with native support), please check the official website.\nUsing EditorConfig with IntelliJ IntelliJ comes with native support for EditorConfig, which means that we do not have to install a plugin to make it work. 
To get started, we need to create a file named .editorconfig in the root folder of our project and define the coding styles we need.\nSince I would like my Java code to use tab indentation with a tab size of 4, the UTF-8 character set, and trim any trailing whitespaces in my code, I will define the following properties in the .editorconfig file to achieve this:\n# Topmost editor config file root = true # Custom Coding Styles for Java files [*.java] # The other allowed value you can use is space indent_style = tab # You can play with this value and set it to how # many characters you want your indentation to be indent_size = 4 # Character set to be used in java files. charset = utf-8 trim_trailing_whitespace = true In the snippet above, we have defined two major sections: the root section and the Java style section.\nWe have specified the root value to be true which means that when a file is opened, editorConfig will start searching for .editorconfig files starting from the current directory going upwards in the directory structure. The search will only stop when it has reached the root directory of the project or when it sees a .editorconfig file with root value set to true.\nEditorConfig applies styles in a top down fashion, so if we have several .editorconfig files in our project with some duplicated properties, the closest .editorconfig file takes precedence.\nFor the Java section, we have defined a pattern [*.java] to apply the config to all java files. If your requirement is to match some other type of files with a different extension, a complete list of wildcard patterns is available on the official website.\nTo apply the EditorConfig styles to all Java classes in our IntelliJ project, as shown in the screenshots below, we click on the Code tab and select Reformat Code from the list of options. 
A dialog box should appear, and we can click on the Run button to apply our style changes.\nStep 1: ![IntelliJ Reformat Window]({{ base }}/assets/img/posts/painless-code-formatting-with-editor-config/intellij-reformat.png)\nStep 2: ![IntelliJ Reformat Window]({{ base }}/assets/img/posts/painless-code-formatting-with-editor-config/intellij-run.png)\nOnce done, we should see all our Java source files neatly formatted according to the styles we have defined in .editorconfig file.\nA complete list of universally supported properties across IDEs can be found in the official reference.\nUsing EditorConfig with Eclipse Since Eclipse does not support EditorConfig out of the box, we have to install a plugin to make this work. Fortunately, it\u0026rsquo;s not too much of a hassle.\nTo install the EditorConfig plugin in Eclipse, follow the official installation guide. Once it is installed in our workspace, we can go ahead to create a .editorconfig file in the root folder of our java project and apply the same coding styles as discussed in the IntelliJ section above.\nTo apply the editorconfig format to all java classes in our project, as shown in the screenshot below, we would right click on the project from the Package Explorer tab on the top-left corner of Eclipse and select Source, then click on Format. 
This will format all our java files using the coding styles in the .editorconfig file.\n![Eclipse Reformat Window]({{ base }}/assets/img/posts/painless-code-formatting-with-editor-config/eclipse.png)\n","date":"February 15, 2021","image":"https://reflectoring.io/images/stock/0096-tools-1200x628-branded_hue8579b2f8c415ef5a524c005489e833a_326215_650x0_resize_q90_box.jpg","permalink":"/painless-code-formatting-with-editor-config/","title":"Painless Code Formatting with EditorConfig"},{"categories":["Spring Boot","AWS"],"contents":"In this article, we are going to explore AWS' Simple Storage Service (S3) together with Spring Boot to build a custom file-sharing application (just like in the good old days before Google Drive, Dropbox \u0026amp; co).\nAs we will learn, S3 is an extremely versatile and easy to use solution for a variety of use cases.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n  Example Code This article is accompanied by a working code example on GitHub. What is S3? S3 stands for \u0026ldquo;simple storage service\u0026rdquo; and is an object store service hosted on Amazon Web Services (AWS) - but what does this exactly mean?\nYou are probably familiar with databases (of any kind). Let\u0026rsquo;s take Postgres for example. Postgres is a relational database, very well suited for storing structured data that has a schema that won\u0026rsquo;t change too much over its lifetime (e.g. financial transaction records). But what if we want to store more than just plain data? 
What if we want to store a picture, a PDF, a document, or a video?\nIt is technically possible to store those binary files in Postgres but object stores like S3 might be better suited for storing unstructured data.\nObject Store vs. File Store So we might ask ourselves, how is an object store different from a file store? Without going into the gory details, an object store is a repository that stores objects in a flat structure, similar to a key-value store.\nAs opposed to file-based storage where we have a hierarchy of files inside folders, inside folders,\u0026hellip; the only thing we need to get an item out of an object store is the key of the object we want to retrieve. Additionally, we can provide metadata (data about data) that we attach to the object to further enrich it.\nUnderstanding Basic S3 Concepts S3 was one of the first services offered by AWS in 2006. Since then, a lot of features have been added but the core concepts of S3 are still Buckets and Objects.\nBuckets Buckets are containers of objects we want to store. An important thing to note here is that S3 requires the name of the bucket to be globally unique.\nObjects Objects are the actual things we are storing in S3. They are identified by a key which is a sequence of Unicode characters whose UTF-8 encoding is at most 1,024 bytes long.\nKey Delimiter By default, the \u0026ldquo;/\u0026rdquo; character gets special treatment if used in an object key. As written above, an object store does not use directories or folders but just keys. However, if we use a \u0026ldquo;/\u0026rdquo; in our object key, the AWS S3 console will render the object as if it was in a folder.\nSo, if our object has the key \u0026ldquo;foo/bar/test.json\u0026rdquo; the console will show a \u0026ldquo;folder\u0026rdquo; foo that contains a \u0026ldquo;folder\u0026rdquo; bar which contains the actual object. 
This key delimiter helps us to group our data into logical hierarchies.\nBuilding an S3 Sample Application Going forward we are going to explore the basic operations of S3. We do so by building our own file-sharing application (code on GitHub) that lets us share files with other people securely and, if we want, temporarily limited.\n The sample application does include a lot of code that is not directly related to S3. The io.jgoerner.s3.adapter.out.s3 package is solely focused on the S3 specific bits.\n The application\u0026rsquo;s README has all instructions needed to launch it. You don\u0026rsquo;t have to use the application to follow this article. It is merely meant as supportive means to explain certain S3 concepts.\nSetting up AWS \u0026amp; AWS SDK The first step is to set up an AWS account (if we haven\u0026rsquo;t already) and to configure our AWS credentials. Here is another article that explains this set up in great detail (only the initial configuration paragraphs are needed here, so feel free to come back after we are all set).\nSpring Boot \u0026amp; S3 Our sample application is going to use the Spring Cloud for Amazon Web Services project. The main advantage over the official AWS SDK for Java is the convenience and head start we get by using the Spring project. 
A lot of common operations are wrapped into higher-level APIs that reduce the amount of boilerplate code.\nSpring Cloud AWS gives us the org.springframework.cloud:spring-cloud-starter-aws dependency, which bundles all the dependencies we need to communicate with S3.\nConfiguring Spring Boot Just as with any other Spring Boot application, we can make use of an application.properties/application.yaml file to store our configuration:\n## application.yaml cloud: aws: region: static: eu-central-1 stack: auto: false credentials: profile-name: dev The snippet above does a few things:\n region.static: we statically set our AWS region to be eu-central-1 (because that is the region that is closest to me). stack.auto: this option would enable automatic stack name detection for the application. As we don\u0026rsquo;t rely on the AWS CloudFormation service, we want to disable that setting (but here is a great article about automatic deployment with CloudFormation in case we want to learn more about it). credentials.profile-name: we tell the application to use the credentials of the profile named dev (that\u0026rsquo;s how I named my AWS profile locally).  If we configured our credentials properly we should be able to start the application. However, due to a known issue we might want to add the following snippet to the configuration file to prevent noise in the application logs:\nlogging: level: com: amazonaws: util: EC2MetadataUtils: error The above configuration simply sets the log level for the class com.amazonaws.util.EC2MetadataUtils to error so we don\u0026rsquo;t see the warning logs anymore.\nAmazon S3 Client The core class to handle the communication with S3 is the com.amazonaws.services.s3.AmazonS3Client.
Thanks to Spring Boot\u0026rsquo;s dependency injection we can simply use the constructor to get a reference to the client:\npublic class S3Repository { private final AmazonS3Client s3Client; public S3Repository(AmazonS3Client s3Client) { this.s3Client = s3Client; } // other repository methods  } Creating a Bucket Before we can upload any file, we have to have a bucket. Creating a bucket is quite easy:\ns3Client.createBucket(\u0026#34;my-awesome-bucket\u0026#34;); We simply use the createBucket() method and specify the name of the bucket. This sends the request to S3 to create a new bucket for us. As this request is going to be handled asynchronously, the client gives us a way to block our application until that bucket exists:\n// optionally block to wait until creation is finished s3Client .waiters() .bucketExists() .run( new WaiterParameters\u0026lt;\u0026gt;( new HeadBucketRequest(\u0026#34;my-awesome-bucket\u0026#34;) ) ); We simply use the client\u0026rsquo;s waiters() method and run a HeadBucketRequest (similar to the HTTP head method).\nAs mentioned before, the name of the S3 bucket has to be globally unique, so often I end up with rather long or non-human-readable bucket names. Unfortunately, we can\u0026rsquo;t attach any metadata to the bucket (as opposed to objects). Therefore, the sample application uses a little lookup table to map human- and UI-friendly names to globally unique ones. This is not required when working with S3, just something to improve usability.\nCreating a Bucket in the Sample Application  Navigate to the Spaces section Click on New Space Enter the name and click Submit A message should pop up to indicate success   Uploading a File Now that our bucket is created we are all set to upload a file of our choice. The client provides us with the overloaded putObject() method.
Besides the fine-grained PutObjectRequest we can use the function in three ways:\n// String-based String content = ...; s3Client.putObject(\u0026#34;my-bucket\u0026#34;, \u0026#34;my-key\u0026#34;, content); // File-based File file = ...; s3Client.putObject(\u0026#34;my-bucket\u0026#34;, \u0026#34;my-key\u0026#34;, file); // InputStream-based InputStream input = ...; Map\u0026lt;String, String\u0026gt; metadata = ...; s3Client.putObject(\u0026#34;my-bucket\u0026#34;, \u0026#34;my-key\u0026#34;, input, metadata); In the simplest case, we can directly write the content of a String into an object. We can also put a File into a bucket. Or we can use an InputStream.\nOnly the last option gives us the possibility to directly attach metadata in the form of a Map\u0026lt;String, String\u0026gt; to the uploaded object.\nIn our sample application, we attach a human-readable name to the object while making the key random to avoid collisions within the bucket - so we don\u0026rsquo;t need any additional lookup tables.\nObject metadata can be quite useful, but we should note that S3 does not give us the possibility to directly search an object by metadata. If we are looking for a specific metadata key (e.g. department being set to Engineering) we have to touch all objects in our bucket and filter based on that property.\nThere are some upper boundaries worth mentioning when it comes to the size of the uploaded object. At the time of writing this article, we can upload an item of max 5GB within a single operation as we did with putObject(). 
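Coming back to metadata for a moment: since S3 offers no metadata query, such a scan boils down to listing every key and checking each object's metadata client-side. A minimal sketch of that filtering step — purely illustrative, not SDK code; the nested map stands in for what `s3Client.getObjectMetadata(bucket, key).getUserMetadata()` would return per object:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch of a client-side metadata scan: S3 cannot query objects
// by metadata, so every object's metadata has to be inspected individually.
class MetadataFilter {

    // objects: key -> user metadata; in real code the metadata would come from
    // s3Client.getObjectMetadata(bucket, key).getUserMetadata() per key.
    static List<String> keysWithMetadata(Map<String, Map<String, String>> objects,
                                         String metaKey, String metaValue) {
        return objects.entrySet().stream()
                .filter(e -> metaValue.equals(e.getValue().get(metaKey)))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```

For large buckets this means one metadata request per object, which is why metadata works well for enriching objects but poorly as a search index.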
If we use the client\u0026rsquo;s initiateMultipartUpload() method, it is possible to upload an object of max 5TB through a Multipart upload.\nUploading a File in the Sample Application  Navigate to the Spaces section Select Details on the target Space/Bucket Click on Upload File Pick the file, provide a name and click Submit A message should pop up to indicate success   Listing Files Once we have uploaded our files, we want to be able to retrieve them and list the content of a bucket. The simplest way to do so is the client\u0026rsquo;s listObjectsV2() method:\ns3Client .listObjectsV2(\u0026#34;my-awesome-bucket\u0026#34;) .getObjectSummaries(); Similar to concepts of the JSON API, the object keys are not directly returned but wrapped in a payload that also contains other useful information about the request (e.g. pagination information). We get the object details by using the getObjectSummaries() method.\nWhat does V2 mean?  AWS released version 2 of their AWS SDK for Java in late 2018. Some of the client's methods offer both versions of the function, hence the V2 suffix of the listObjectsV2() method.  As our sample application doesn\u0026rsquo;t use the S3ObjectSummary model that the client provides us, we map those results into our domain model:\ns3Client.listObjectsV2(bucket).getObjectSummaries() .stream() .map(S3ObjectSummary::getKey) .map(key -\u0026gt; mapS3ToObject(bucket, key)) // custom mapping function  .collect(Collectors.toList()); Thanks to Java\u0026rsquo;s stream() we can simply append the transformation to the request.\nAnother noteworthy aspect is the handling of buckets that contain more than 1000 objects. By default, a single listObjectsV2() call returns at most 1,000 objects, so we have to paginate through the results.
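The manual pagination loop can be sketched like this; `Page` and `fetchPage` are hypothetical stand-ins for the SDK's `ListObjectsV2Result` (with its continuation token) and a call to `listObjectsV2()`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch of continuation-token pagination for buckets with many objects.
class Pagination {

    // Stand-in for the SDK's ListObjectsV2Result: one page of keys plus the
    // continuation token pointing at the next page (null on the last page).
    static final class Page {
        final List<String> keys;
        final String nextToken;
        Page(List<String> keys, String nextToken) {
            this.keys = keys;
            this.nextToken = nextToken;
        }
    }

    // Repeatedly fetch pages until no continuation token is returned.
    static List<String> listAll(Function<String, Page> fetchPage) {
        List<String> allKeys = new ArrayList<>();
        String token = null;
        do {
            Page page = fetchPage.apply(token);
            allKeys.addAll(page.keys);
            token = page.nextToken;
        } while (token != null);
        return allKeys;
    }
}
```

The shape is always the same: pass the token from the previous response into the next request, and stop once no token comes back.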
However, the newer V2 SDK provides higher-level methods that follow an autopagination approach.\nListing all Objects in the Sample Application  Navigate to the Spaces section Select Details on the target Space/Bucket You see a list of all objects stored in the bucket   Making a File Public Every object in S3 has a URL that can be used to access that object. The URL follows a specific pattern of bucket name, region, and object key. Instead of manually creating this URL, we can use the getUrl() method, providing a bucket name and an object key:\ns3Client .getUrl(\u0026#34;my-awesome-bucket\u0026#34;, \u0026#34;some-key\u0026#34;); Depending on the region we are in, this yields a URL like the following (given that we are in the eu-central-1 region):\nhttps://my-awesome-bucket.s3.eu-central-1.amazonaws.com/some-key Getting an Object's URL in the Sample Application  Navigate to the Spaces section Select Details on the target Space/Bucket Select Download on the target object The object's URL shall be opened in a new tab   When accessing this URL directly after uploading an object we should get an Access Denied error, since all objects are private by default:\n\u0026lt;Error\u0026gt; \u0026lt;Code\u0026gt;AccessDenied\u0026lt;/Code\u0026gt; \u0026lt;Message\u0026gt;Access Denied\u0026lt;/Message\u0026gt; \u0026lt;RequestId\u0026gt;...\u0026lt;/RequestId\u0026gt; \u0026lt;HostId\u0026gt;...\u0026lt;/HostId\u0026gt; \u0026lt;/Error\u0026gt; As our application is all about sharing things, we do want those objects to be publicly available though.\nTherefore, we are going to alter the object\u0026rsquo;s Access Control List (ACL).\nAn ACL is a list of access rules. Each of those rules contains the information of a grantee (who) and a permission (what).
By default, only the bucket owner (grantee) has full control (permission) but we can easily change that.\nWe can make objects public by altering their ACL like the following:\ns3Client .setObjectAcl( \u0026#34;my-awesome-bucket\u0026#34;, \u0026#34;some-key\u0026#34;, CannedAccessControlList.PublicRead ); We are using the client\u0026rsquo;s setObjectAcl() method in combination with the high-level CannedAccessControlList.PublicRead. PublicRead is a predefined rule that allows anyone (grantee) to have read access (permission) on the object.\nMaking an Object Public in the Sample Application  Navigate to the Spaces section Select Details on the target Space/Bucket Select Make Public on the target object A message should pop up to indicate success   If we reload the page that gave us an Access Denied error again, we will now be prompted to download the file.\nMaking a File Private Once the recipient downloaded the file, we might want to revoke the public access. This can be done following the same logic and methods, with slightly different parameters:\ns3Client .setObjectAcl( \u0026#34;my-awesome-bucket\u0026#34;, \u0026#34;some-key\u0026#34;, CannedAccessControlList.BucketOwnerFullControl ); The above snippet sets the object\u0026rsquo;s ACL so that only the bucket owner (grantee) has full control (permission), which is the default setting.\nMaking an Object Private in the Sample Application  Navigate to the Spaces section Select Details on the target Space/Bucket Select Make Private on the target object A message should pop up to indicate success   Deleting Files \u0026amp; Buckets We might not even want to make the file private again: once it has been downloaded, there is no need to keep it.\nThe client also gives us the option to easily delete an object from a bucket:\ns3Client .deleteObject(\u0026#34;my-awesome-bucket\u0026#34;, \u0026#34;some-key\u0026#34;); The deleteObject() method simply takes the name of the bucket and the key of the object.\nDeleting an Object in the
Sample Application  Navigate to the Spaces section Select Details on the target Space/Bucket Select Delete on the target object The list of objects should reload without the deleted one   One noteworthy aspect around deletion is that we can\u0026rsquo;t delete non-empty buckets. So if we want to get rid of a complete bucket, we first have to make sure that we delete all the items first.\nDeleting a Bucket in the Sample Application  Navigate to the Spaces section Select Delete on the target Space/Bucket The list of buckets should reload without the deleted one   Using Pre-Signed URLs Reflecting on our approach, we did achieve what we wanted to: making files easily shareable temporarily. However, there are some features that S3 offers which greatly improve the way we share those files.\nOur current approach to making a file shareable contains quite a lot of steps:\n Update ACL to make the file public Wait until the file was downloaded Update ACL to make the file private again  What if we forget to make the file private again?\nS3 offers a concept called \u0026ldquo;pre-signed URLs\u0026rdquo;. A pre-signed URL is the link to our object containing an access token, that allows for a temporary download (or upload). We can easily create such a pre-signed URL by specifying the bucket, the object, and the expiration date:\n// duration measured in seconds var date = new Date(new Date().getTime() + duration * 1000); s3Client .generatePresignedUrl(bucket, key, date); The client gives us the generatePresignedUrl() method, which accepts a java.util.Date as the expiration parameter. 
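Converting a duration into such an absolute expiration Date is a one-liner; here is a minimal, deterministic sketch (the class and method names are illustrative, and the current time is passed in explicitly to keep it testable):

```java
import java.util.Date;

// Sketch: converting a duration in seconds into the absolute java.util.Date
// that generatePresignedUrl() expects as its expiration parameter.
class PresignedUrlExpiry {

    static Date expiresIn(long durationSeconds, long nowMillis) {
        // the duration is given in seconds, Date works with milliseconds
        return new Date(nowMillis + durationSeconds * 1000);
    }
}
```

In application code we would call it with `System.currentTimeMillis()` as the second argument, e.g. `expiresIn(900, System.currentTimeMillis())` for a URL valid for 15 minutes.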
So if we think of a certain duration as opposed to a concrete expiration date, we have to convert that duration into a Date.\nIn the above snippet, we do so by simply multiplying the duration (in seconds) by 1000 (to convert it to milliseconds) and adding that to the current time (in UNIX milliseconds).\nThe official documentation has some more information around the limitations of pre-signed URLs.\nGenerating a Pre-Signed URL in the Sample Application  Navigate to the Spaces section Select Details on the target Space/Bucket Select Magic Link on the target object A message should pop up, containing a pre-signed URL for that object (which is valid for 15 minutes)   Using Bucket Lifecycle Policies Another improvement we can implement is the deletion of the files. Even though the AWS free tier gives us 5GB of S3 storage space before we have to pay, we might want to get rid of old files we have shared already. Similar to the visibility of objects, we can manually delete objects, but wouldn\u0026rsquo;t it be more convenient if they get automatically cleaned up?\nAWS gives us multiple ways to automatically delete objects from a bucket; however, we\u0026rsquo;ll use S3\u0026rsquo;s concept of Object Life Cycle rules. An object life cycle rule basically contains the information about when to do what with the object:\n// delete files a week after upload s3Client .setBucketLifecycleConfiguration( \u0026#34;my-awesome-bucket\u0026#34;, new BucketLifecycleConfiguration() .withRules( new BucketLifecycleConfiguration.Rule() .withId(\u0026#34;custom-expiration-id\u0026#34;) .withFilter(new LifecycleFilter()) .withStatus(BucketLifecycleConfiguration.ENABLED) .withExpirationInDays(7) ) ); We use the client\u0026rsquo;s setBucketLifecycleConfiguration() method, given the bucket\u0026rsquo;s name and the desired configuration.
The configuration above consists of a single rule, having:\n an id to make the rule uniquely identifiable a default LifecycleFilter, so this rule applies to all objects in the bucket a status of being ENABLED, so as soon as this rule is created, it is effective an expiration of seven days, so after a week the object gets deleted  It shall be noted that the snippet above overrides the old lifecycle configuration. That is ok for our use case but we might want to fetch the existing rules first and upload the combination of old and new rules.\nSetting a Bucket's Expiration in the Sample Application  Navigate to the Spaces section Select Make Temporary on the target Space/Bucket A message should pop up to indicate success   Lifecycle rules are very versatile, as we can use the filter to only apply the rule to objects with a certain key prefix or carry out other actions like archiving of objects.\nConclusion In this article, we\u0026rsquo;ve learned the basics of AWS' Simple Storage Service (S3) and how to use Spring Boot and the Spring Cloud project to get started with it.\nWe used S3 to build a custom file-sharing application (code on GitHub), that lets us upload \u0026amp; share our files in different ways. But it shall be said, that S3 is way more versatile, often also quoted to be the backbone of the internet.\nAs this is a getting started article, we did not touch other topics like storage tiers, object versioning, or static content hosting. So I can only recommend you get your hands dirty, and play around with S3!\nCheck Out the Book!  
This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"February 9, 2021","image":"https://reflectoring.io/images/stock/0095-bucket-1200x628-branded_hu1b7ec4fe4c986592b1bc09ccba225864_253756_650x0_resize_q90_box.jpg","permalink":"/spring-boot-s3/","title":"Getting Started with AWS S3 and Spring Boot"},{"categories":["Spring Boot"],"contents":"This article is about cookies and different ways we can implement them in Spring Boot. We are going to have a short overview of what cookies are, how they work, and how we can handle them using the Servlet API and Spring Boot.\nIf you are building a web application then you probably have reached the point where there\u0026rsquo;s the need to implement cookies. If you haven\u0026rsquo;t, you will!\n Example Code This article is accompanied by a working code example on GitHub. What are Cookies? Simply put, cookies are nothing but a piece of information that is stored on the client-side (i.e. in the browser). The client sends them to the server with each request and servers can tell the client which cookies to store.\nThey are commonly used to track the activity of a website, to customize user sessions, and for servers to recognize users between requests. Another scenario is to store a JWT token or the user id in a cookie so that the server can recognize if the user is authenticated with every request.\nHow Do Cookies Work? Cookies are sent to the client by the server in an HTTP response and are stored in the client (user\u0026rsquo;s browser).\nThe server sets the cookie in the HTTP response header named Set-Cookie. 
A cookie is made of a key/value pair, plus other optional attributes, which we\u0026rsquo;ll look at later.\nLet\u0026rsquo;s imagine a scenario where a user logs in. The client sends a request to the server with the user\u0026rsquo;s credentials. The server authenticates the user, creates a cookie with a user id encoded, and sets it in the response header. The header Set-Cookie in the HTTP response would look like this:\nSet-Cookie: user-id=c2FtLnNtaXRoQGV4YW1wbGUuY29t Once the browser gets the cookie, it can send the cookie back to the server. To do this, the browser adds the cookie to an HTTP request by setting the header named Cookie:\nCookie: user-id=c2FtLnNtaXRoQGV4YW1wbGUuY29t The server reads the cookie from the request and verifies whether the user has been authenticated, based on whether the user-id is valid.\nAs mentioned, a cookie can have other optional attributes, so let\u0026rsquo;s explore them.\nCookie Max-Age and Expiration Date The attributes Max-Age and/or Expires are used to make a cookie persistent. By default, the browser removes the cookie when the session is closed unless Max-Age and/or Expires are set. These attributes are set like so:\nSet-Cookie: user-id=c2FtLnNtaXRoQGV4YW1wbGUuY29t; Max-Age=86400; Expires=Thu, 21-Jan-2021 20:06:48 GMT This cookie will expire 86400 seconds after being created or when the date and time specified in Expires has passed.\nWhen both attributes are present in the cookie, Max-Age has precedence over Expires.\nCookie Domain Domain is another important attribute of the Cookie. We use it when we want to specify a domain for our cookie:\nSet-Cookie: user-id=c2FtLnNtaXRoQGV4YW1wbGUuY29t; Domain=example.com; Max-Age=86400; Expires=Thu, 21-Jan-2021 20:06:48 GMT By doing this we are telling the client to which domain it should send the cookie.
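The browser's domain-matching rule can be sketched in a few lines (an illustrative simplification, not the full RFC 6265 algorithm; the class and method names are made up):

```java
// Sketch of cookie domain matching: a cookie whose Domain attribute is
// example.com is sent to example.com itself and to any of its subdomains.
class CookieDomainMatch {

    static boolean matches(String cookieDomain, String requestHost) {
        return requestHost.equals(cookieDomain)
                || requestHost.endsWith("." + cookieDomain);
    }
}
```

Note the leading dot in the suffix check: it is what keeps an unrelated host like `badexample.com` from matching a cookie set for `example.com`.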
A browser will only send a cookie to servers from that domain.\nSetting the domain to \u0026ldquo;example.com\u0026rdquo; will send the cookie not only to the \u0026ldquo;example.com\u0026rdquo; domain but also to its subdomains \u0026ldquo;foo.example.com\u0026rdquo; and \u0026ldquo;bar.example.com\u0026rdquo;.\nIf we don\u0026rsquo;t set the domain explicitly, it will be set only to the domain that created the cookie, but not to its subdomains.\nCookie Path The Path attribute specifies where a cookie will be delivered inside that domain. The client will add the cookie to all requests to URLs that match the given path. This way we narrow down the URLs where the cookie is valid inside the domain.\nLet\u0026rsquo;s consider that the backend sets a cookie for its client when a request to http://example.com/login is executed:\nSet-Cookie: user-id=c2FtLnNtaXRoQGV4YW1wbGUuY29t; Domain=example.com; Path=/user/; Max-Age=86400; Expires=Thu, 21-Jan-2021 20:06:48 GMT Notice that the Path attribute is set to /user/.
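The path check the browser performs can be sketched similarly (again a simplification of the RFC 6265 path-match rules, with made-up names):

```java
// Sketch of cookie path matching: a cookie with Path=/user/ is attached to
// requests for /user/ and everything below it, but not to other paths.
class CookiePathMatch {

    static boolean matches(String cookiePath, String requestPath) {
        return requestPath.startsWith(cookiePath);
    }
}
```

With `cookiePath = "/user/"`, a request to `/user/profile` matches while `/contacts/` does not — exactly the behavior walked through next.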
Now let\u0026rsquo;s visit two different URLs and see what we have in the request cookies.\nWhen we execute a request to http://example.com/user/, the browser will add the following header in the request:\nCookie: user-id=c2FtLnNtaXRoQGV4YW1wbGUuY29t As expected, the browser sends the cookie back to the server.\nWhen we try to do another request to http://example.com/contacts/ the browser will not include the Cookie header, because it doesn\u0026rsquo;t match the Path attribute.\nWhen the path is not set during cookie creation, it defaults to /.\nBy setting the Path explicitly, the cookie will be delivered to the specified URL and all of its subdirectories.\nSecure Cookie When we store sensitive information inside the cookie and want it to be sent only over secure (HTTPS) connections, the Secure attribute comes to our rescue:\nSet-Cookie: user-id=c2FtLnNtaXRoQGV4YW1wbGUuY29t; Domain=example.com; Max-Age=86400; Expires=Thu, 21-Jan-2021 20:06:48 GMT; Secure By setting Secure, we make sure our cookie is only transmitted over HTTPS, and it will not be sent over unencrypted connections.\nHttpOnly Cookie HttpOnly is another important attribute of a cookie. It ensures that the cookie is not accessible to client-side scripts. It is another way of protecting a cookie from being changed by malicious code or XSS attacks.\nSet-Cookie: user-id=c2FtLnNtaXRoQGV4YW1wbGUuY29t; Domain=example.com; Max-Age=86400; Expires=Thu, 21-Jan-2021 20:06:48 GMT; Secure; HttpOnly Not all browsers support the HttpOnly flag. The good news is most of them do, but if a browser doesn\u0026rsquo;t, it will simply ignore the HttpOnly flag even though it was set during cookie creation.
Cookies should always be HttpOnly unless the browser doesn\u0026rsquo;t support it or there is a requirement to expose them to client scripts.\nNow that we know what cookies are and how they work, let\u0026rsquo;s check how we can handle them in Spring Boot.\nHandling Cookies with the Servlet API Now, let\u0026rsquo;s take a look at how to set cookies on the server-side with the Servlet API.\nCreating a Cookie For creating a cookie with the Servlet API we use the Cookie class which is defined inside the javax.servlet.http package.\nThe following snippet of code creates a cookie with name user-id and value c2FtLnNtaXRoQGV4YW1wbGUuY29t and sets all the attributes we discussed:\nCookie jwtTokenCookie = new Cookie(\u0026#34;user-id\u0026#34;, \u0026#34;c2FtLnNtaXRoQGV4YW1wbGUuY29t\u0026#34;); jwtTokenCookie.setMaxAge(86400); jwtTokenCookie.setSecure(true); jwtTokenCookie.setHttpOnly(true); jwtTokenCookie.setPath(\u0026#34;/user/\u0026#34;); jwtTokenCookie.setDomain(\u0026#34;example.com\u0026#34;); Now that we created the cookie, we will need to send it to the client. To do so, we add the cookie to the response (HttpServletResponse) and we are done. Yes, it is as simple as that:\nresponse.addCookie(jwtTokenCookie); Reading a Cookie After adding the cookie to the response header, the server will need to read the cookies sent by the client in every request.\nThe method HttpServletRequest#getCookies() returns an array of cookies that are sent with the request.
We can identify our cookie by the cookie name.\nIn the following snippet of code, we are iterating through the array, searching by cookie name, and returning the value of the matched cookie:\npublic Optional\u0026lt;String\u0026gt; readServletCookie(HttpServletRequest request, String name){ return Arrays.stream(request.getCookies()) .filter(cookie-\u0026gt;name.equals(cookie.getName())) .map(Cookie::getValue) .findAny(); } Deleting a Cookie To delete a cookie we will need to create another instance of the Cookie with the same name and maxAge 0 and add it again to the response as below:\nCookie deleteServletCookie = new Cookie(\u0026#34;user-id\u0026#34;, null); deleteServletCookie.setMaxAge(0); response.addCookie(deleteServletCookie); Going back to our use case where we save the JWT token inside the cookie, we would need to delete the cookie when the user logs out. Keeping the cookie alive after the user logs out can seriously compromise the security.\nHandling Cookies with Spring Now that we know how to handle a cookie using the Servlet API, let\u0026rsquo;s check how we can do the same using the Spring Framework.\nCreating a Cookie In this section, we will create a cookie with the same properties that we did using the Servlet API.\nWe will use the class ResponseCookie for the cookie and ResponseEntity for setting the cookie in the response. 
They are both defined inside org.springframework.http package.\nResponseCookie has a static method from(final String name, final String value) which returns a ResponseCookieBuilder initialized with the name and value of the cookie.\nWe can add all the properties that we need and use the method build() of the builder to create the ResponseCookie:\nResponseCookie springCookie = ResponseCookie.from(\u0026#34;user-id\u0026#34;, \u0026#34;c2FtLnNtaXRoQGV4YW1wbGUuY29t\u0026#34;) .httpOnly(true) .secure(true) .path(\u0026#34;/\u0026#34;) .maxAge(60) .domain(\u0026#34;example.com\u0026#34;) .build(); After creating the cookie, we add it to the header of the response like this:\nResponseEntity .ok() .header(HttpHeaders.SET_COOKIE, springCookie.toString()) .build(); Reading a Cookie with @CookieValue Spring Framework provides the @CookieValue annotation to read any cookie by specifying the name without needing to iterate over all the cookies fetched from the request.\n@CookieValue is used in a controller method and maps the value of a cookie to a method parameter:\n@GetMapping(\u0026#34;/read-spring-cookie\u0026#34;) public String readCookie( @CookieValue(name = \u0026#34;user-id\u0026#34;, defaultValue = \u0026#34;default-user-id\u0026#34;) String userId) { return userId; } In cases where the cookie with the name \u0026ldquo;user-id\u0026rdquo; does not exist, the controller will return the default value defined with defaultValue = \u0026quot;default-user-id\u0026quot;. 
If we do not set the default value and Spring fails to find the cookie in the request then it will throw a java.lang.IllegalStateException.\nDeleting a Cookie To delete a cookie, we will need to create the cookie with the same name and maxAge set to 0 and set it to the response header:\nResponseCookie deleteSpringCookie = ResponseCookie .from(\u0026#34;user-id\u0026#34;, null) .maxAge(0) .build(); ResponseEntity .ok() .header(HttpHeaders.SET_COOKIE, deleteSpringCookie.toString()) .build(); Conclusion In this article, we looked at what cookies are and how they work.\nAll in all, cookies are simple text strings that carry some information and are identified with a name.\nWe checked some of the optional attributes that we can add to cookies to make them behave a certain way. We saw that we can make them persistent with Max-Age and Expires, narrow down their scope with Domain and Path, have them transmitted only over HTTPS with Secure, and hide them from client scripts with HttpOnly.\nFinally, we looked into two ways of handling cookies using the Servlet API and Spring. Both of these APIs offer the required methods for creating (with attributes), reading, and deleting cookies.\nThey are easy to implement and developers can choose either of them to implement cookies.\nYou can play around with the example code of this article on GitHub.\n","date":"February 1, 2021","image":"https://reflectoring.io/images/stock/0093-cookie-1200x628-branded_huffd6cf74dde02454411b7c567db2a8ab_206222_650x0_resize_q90_box.jpg","permalink":"/spring-boot-cookies/","title":"Handling Cookies with Spring Boot and the Servlet API"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you want to start investing money, but don\u0026rsquo;t know how you want to confirm your investment mindset you enjoy reading a financial adviser\u0026rsquo;s view on investing money  Book Facts  Title: The Psychology of Money Authors: Morgan Housel Word Count: ~ 80,000 (ca.
5 hours at 250 words / minute) Reading Ease: easy to medium Writing Style: easy to medium language, short chapters, a few financial terms Year Published: 2020  Overview {% include book-link.html book=\u0026ldquo;psychology-of-money\u0026rdquo; %} is about how we think and feel about money, and how that affects how we act about our money.\nIt\u0026rsquo;s an easy read with chapters that can be finished in a lunch break (just the way I like it). Each chapter tells a story about how people act and why they do so, with some anecdotes by the author himself.\nThe author was a financial adviser, so he\u0026rsquo;s learned a bit in his time.\nThe book doesn\u0026rsquo;t give concrete investment advice, but communicates a clear mindset that is beneficial for investing.\nNotes Here are my notes, as usual with some comments in italics.\n1 - No One\u0026rsquo;s Crazy  we\u0026rsquo;re all biased by our lives - everyone thinks and acts differently and we all have good reason to \u0026ldquo;We all think we know how the world works, but we\u0026rsquo;ve all experienced only a tiny fraction of it.\u0026rdquo; we can\u0026rsquo;t really take advantage of history, because reading about history is not the same as experiencing it also, the history of saving money is very short - not much to learn from (this is what I find frustrating about the software industry, as well, by the way)  2 - Luck \u0026amp; Risk  \u0026ldquo;Nothing is as good or bad as it seems\u0026rdquo;  success or failure always has to do with luck or risk but we don\u0026rsquo;t talk about it because it\u0026rsquo;s rude to imply that someone else\u0026rsquo;s success has been luck   \u0026ldquo;Not all success is due to hard work, and not all poverty is due to laziness.\u0026rdquo; we should focus more on broad patterns than on individual stories about success or failure  3 - Never Enough  some people that own unimaginable amounts of money risk everything to get even more money \u0026ldquo;There is no reason to
risk what you have and need for what you don\u0026rsquo;t have and don\u0026rsquo;t need.\u0026rdquo; (This strikes me as a very healthy mindset when working with your money) \u0026ldquo;Life isn\u0026rsquo;t any fun without a sense of enough\u0026rdquo; - you can never enjoy the status quo if you always want more social comparison is a game you can\u0026rsquo;t win unless you\u0026rsquo;re the richest person in the world (which you probably aren\u0026rsquo;t)  4 - Confounding Compounding  \u0026ldquo;You don\u0026rsquo;t need tremendous force to create tremendous results.\u0026rdquo;  the ice ages started because every year, a bit more snow was left over from last winter which eventually compounded to a thick layer of ice   Warren Buffett got a return of about 22% on his investments every year on average - he\u0026rsquo;s only as wealthy as he is today because he\u0026rsquo;s been doing it forever \u0026ldquo;The most powerful and important book [about investing] should be called \u0026lsquo;Shut Up and Wait\u0026rsquo;\u0026rdquo;  5 - Getting Wealthy vs.
Staying Wealthy  getting money is a very different skill from keeping money  getting money requires taking risks keeping money requires humility and paranoia about not losing what you have   the key to successful investment is survival - stick around long enough to let the compounding work for you be financially unbreakable instead of going after the big returns allow for error in your financial planning to become financially unbreakable be short-term paranoid to keep your wealth and long-term optimistic to grow it  6 - Tails, You Win  \u0026ldquo;An investor can be wrong half the time and still make a fortune.\u0026rdquo; we overreact when things fail, even though it won\u0026rsquo;t make a dent in the long run (I invested a (for me) serious amount of money into index funds a couple weeks before the stock market crashed due to COVID-19 in early 2020 \u0026hellip; I almost overreacted by selling with a loss, but it\u0026rsquo;s good I didn\u0026rsquo;t) anything successful is the result of a \u0026ldquo;tail event\u0026rdquo; - an event in the \u0026ldquo;long tail\u0026rdquo; of events in a distribution curve that are rare, but have an immense impact do something a lot to be rewarded with such a tail event (this speaks to my heart and has inspired me to write about it in this newsletter) 4 in 10 publicly traded companies experience a catastrophic loss from which they don\u0026rsquo;t recover very few companies have stellar growth, most because of a single or a few products that outperform all others by orders of magnitude - but that\u0026rsquo;s enough to win with if you have a diversified portfolio \u0026ldquo;Your success as an investor will be determined by how you respond to punctuated moments of terror, not years spent on cruise control.\u0026rdquo; (i.e.
don\u0026rsquo;t sell every time you experience a hiccup) \u0026ldquo;Tails drive everything.\u0026rdquo; - tail events have the strongest impact on your investment strategy  7 - Freedom  happiness means being able to do what you want, when you want, with whom you want - i.e. controlling your own life; we\u0026rsquo;re working more with our heads than with our hands today, so we tend to take work with us everywhere - this means less control over our lives \u0026ldquo;Controlling your time is the highest dividend money pays.\u0026rdquo; (while my blog started out as a platform for me learning things, I\u0026rsquo;m now growing it to bring me that dividend)  8 - Man in the Car Paradox  when we see a driver in an expensive car, we\u0026rsquo;re usually impressed by the car and not by the driver - we imagine ourselves driving that car instead of admiring the driver\u0026rsquo;s achievements \u0026ldquo;No one is impressed with your possessions as much as you are.\u0026rdquo;  9 - Wealth is What You Don\u0026rsquo;t See  wealth is the expensive car that you didn\u0026rsquo;t buy; being a millionaire is the opposite of spending a million dollars \u0026ldquo;Wealth is an option not yet taken to buy something later.\u0026rdquo; what we see is richness, not wealth - money spent on cars, houses, and other visible things  10 - Save Money  getting money is largely out of our control - saving money is not; reducing lifestyle bloat is often easier and has more potential than increasing income; savings are the gap between your income and your ego - more humility will raise your savings rate; saving money gives you flexibility to take opportunities you would otherwise have to decline - a lower paying, but more rewarding job, learning something new, retiring early, \u0026hellip;  11 - Reasonable \u0026gt; Rational  being rational about investments means being passionless  it means you might sell a stock when it\u0026rsquo;s going down instead of keeping it and making more with it in
the long run   being reasonable instead of being rational (i.e. allowing a bit of passion) helps you sleep at night; being strictly rational will get you in uncomfortable situations  12 - Surprise!  \u0026ldquo;Things that have never happened before happen all the time.\u0026rdquo; it\u0026rsquo;s dangerous to use history as a guide for the future; history tells us how people behaved under greed and stress, but not about trends - these will always be surprises  13 - Room for Error  \u0026ldquo;You have to plan on your plan not going to plan.\u0026rdquo; (that\u0026rsquo;s what I preach in every software project where I\u0026rsquo;m asked for estimates) \u0026ldquo;Room for error lets you endure a range of outcomes\u0026rdquo; - instead of only one; don\u0026rsquo;t invest all your cash - having some in the bank gives options during surprises; take risks with one portion of the money and be terrified about losing the other portion; avoid a single point of failure (also something that we do when building software); you don\u0026rsquo;t need to save money for a specific reason - how can you know today what you\u0026rsquo;ll need the money for in the future?  
14 - You\u0026rsquo;ll Change  as a child we want to drive a tractor, but later we don\u0026rsquo;t; the \u0026ldquo;End of History\u0026rdquo; illusion is believing that we won\u0026rsquo;t change as much in the future as we did in the past - the history lies behind us and we\u0026rsquo;ve learned all there is  15 - Nothing\u0026rsquo;s Free  we pay for high dividends with high volatility; trying to get high dividends without high volatility is the equivalent of grand theft auto - you\u0026rsquo;re not paying the price for it and if you get caught, you lose a lot; if you try to make high gains without paying the price of uncertainty, it will bite you later  example: Every quarter, General Electric wanted to have their revenue look just a bit better than the forecast, so they wouldn\u0026rsquo;t pay the price of uncertainty in front of their investors - they did that by pulling some of next quarter\u0026rsquo;s revenues into this quarter - this only went well for so long   viewing volatility as a fee rather than a fine makes it easier to live with it  16 - You \u0026amp; Me  short-term trading and long-term investing are very different games - don\u0026rsquo;t take advice from a short-term trader if you are investing long-term; bubbles happen when long-term investors start taking cues from short-term traders  17 - The Seduction of Pessimism  optimism is the best attitude for most people because things are getting better for most people most of the time; pessimism is taken more seriously than optimism - when someone says the stocks will rise, they are ignored, but when they say the stocks will fall, we sell; financial bad news is especially impactful because money is a topic that touches everyone \u0026ldquo;Progress happens too slowly to notice, but setbacks happen too quickly to ignore.\u0026rdquo; we pay attention to failures more than to successes  18 - When You\u0026rsquo;ll Believe Anything  stories are the most powerful force in finance - more powerful than tangible facts
you\u0026rsquo;ll believe just about anything when the stakes are high; we tell ourselves stories to explain things we don\u0026rsquo;t understand  19 - All Together Now  \u0026ldquo;Less ego, more wealth.\u0026rdquo; - save more money; manage your money so that you can sleep well; the longest lever to increase your wealth is to increase the time period in which you save; use money to gain control over your time  20 - Confessions  what works for the author (but may not work for others):  buying a house without a mortgage because the feeling of owning the house is worth more than the lost revenue from higher-return investments (looking at the Sydney property prices, I don\u0026rsquo;t believe I will ever be in the position to pay for a house in cash \u0026hellip;) keeping 20% of all assets in cash for unexpected expenses instead of investing it; investing only in a handful of index funds because they have the highest odds for long-term success   \u0026ldquo;There is little correlation between investment effort and investment success.\u0026rdquo;  Conclusion While the book didn\u0026rsquo;t contain any mind-blowing revelations about money (at least for me), it was an entertaining and interesting read that confirmed my thinking about money.\nI will continue to invest in index funds and maybe invest a little play money into single stocks for short-term trading.\nThe main takeaway for me was the fact that rare \u0026ldquo;tail events\u0026rdquo; are mainly responsible for investment success. That\u0026rsquo;s something I only have control over when I keep investing, so I\u0026rsquo;ll do just that.\n","date":"January 27, 2021","image":"https://reflectoring.io/images/covers/psychology-of-money-teaser_hu65d5e771a14d278ecbcbec1ca3d556e5_82904_650x0_resize_q90_box.jpg","permalink":"/book-review-psychology-of-money/","title":"Book Notes: The Psychology of Money"},{"categories":["Software Craft"],"contents":"Robert C.
Martin, maybe better known to you as „Uncle Bob“, has defined a set of principles for software engineering and software architecture.\nTogether, they are known as the SOLID Principles. One of them is the Open-Closed Principle, which we’ll explain in this post.\nA SOLID Background The Open-Closed Principle is the \u0026ldquo;O\u0026rdquo; in SOLID. It was, however, originally stated by Bertrand Meyer as early as 1988.\nAccording to Robert Martin, it says that:\n A software artifact - such as a class or a component - should be open for extension but closed for modification.\n In this article, I\u0026rsquo;d like to explain the implications of the Open-Closed Principle, why it is beneficial to good design, and how we may apply it in practice.\nSaving The Value Of Software Software in the context of the SOLID principles has more than one value.\nFirst, there is the functionality: Software is used to store, process and display data, compute results, and so on.\nSecond, software makes a promise that it can be flexible in case of new or changed requirements. It claims to be easy to change (that\u0026rsquo;s why it\u0026rsquo;s called _soft_ware).\nTo keep that promise, the software’s design should follow a set of principles, one of them being the Open-Closed Principle.\nThe promise here is that, if the design is in accordance with said principle, its behavior and functionality can be easily changed by just extending what is already present - instead of modifying the present code.\nInheritance Meyer\u0026rsquo;s original approach was to use inheritance as a core mechanism to achieve this feat.\nAt first glance, this is easy to understand: If a behavior coded in a class needs to be changed, a way of doing that is to create a subclass and override methods as necessary.\nNo change to the superclass is necessary, just new code in the subclass.\nLet\u0026rsquo;s look at an example.
The following class greets the world:\npublic class Greeter { public void greet() { System.out.println(\u0026#34;Hello, World!\u0026#34;); } } It is used by the following application:\npublic class GreeterApp { public static void main(String[] args) { Greeter greeter = new Greeter(); greeter.greet(); } } While pondering how to greet all the people in the world, we notice that not everyone speaks the same language.\nTherefore, we decide that we need to extend our Greeter for additional languages.\nFollowing what we\u0026rsquo;ve already learned about the Open-Closed Principle, we create a new subclass to do so:\npublic class FrenchGreeter extends Greeter { @Override public void greet() { System.out.println(\u0026#34;Bonjour!\u0026#34;); } } But how do we integrate the new behaviour into our present application?\nWe would need to introduce some kind of \u0026ldquo;switch\u0026rdquo;, wouldn\u0026rsquo;t we? How could we do that without modifying the present code?\nThis situation already shows the limitations of inheritance - it only takes us so far.\nAbstraction and Composition Furthermore, inheritance introduces tight coupling between the affected classes - if the superclass changes, subclasses may need to be modified, too.\nLet\u0026rsquo;s say we want to generalize our example a bit, so that the output can be redirected towards a given PrintStream.\nimport java.io.PrintStream; public class Greeter { private PrintStream target; public Greeter(PrintStream target) { this.target = target; } public void greet() { target.println(\u0026#34;Hello, World!\u0026#34;); } } This breaks our subclass FrenchGreeter, which needs to be adapted to call the constructor of the superclass.\nHow could we avoid this?\nWe can use abstraction instead of inheritance.\nTo do so, we first introduce an abstract interface:\npublic interface GreeterService { void greet(); } The default greeter as well as the localised one should now implement this interface instead of inheriting from each
other:\npublic class Greeter implements GreeterService { private PrintStream target; public Greeter(PrintStream target) { this.target = target; } public void greet() { target.println(\u0026#34;Hello, World!\u0026#34;); } } public class FrenchGreeter implements GreeterService { @Override public void greet() { System.out.println(\u0026#34;Bonjour!\u0026#34;); } } This breaks up the tight coupling between the two classes, allowing us to develop them independently.\nThe Whole Truth What happens if we want to extend the behaviour even further? Can we do that with our new class hierarchy?\nIn our example, let\u0026rsquo;s say that we want to greet the user by name.\nAs a first step, we\u0026rsquo;d need to modify the GreeterService interface and introduce a name parameter:\npublic interface GreeterService { void greet(String name); } Alas, this is already a modification of the present code!\nWe see another limitation of the Open-Closed Principle - we need to anticipate, in the original design, which extensions we might want to make in the future.\nSummary and Conclusion The Open-Closed Principle is one of the five SOLID principles. It requires that a software artifact should be open for extension, but closed for modification.\nTo fulfil this requirement, we can apply inheritance or, better yet, introduce a layer of abstraction with different implementations in our design to avoid tight coupling between particular classes.\nWe also learned that the Open-Closed Principle has two limitations:\n we still need some kind of toggle mechanism to switch between the original and extended behaviour, which could require modification of the present code, and the design needs to support the particular extension that we want to make - we cannot design our code in a way that ANY modification is possible without touching it.  
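The first limitation mentioned above - the need for some kind of toggle mechanism - can at least be confined to a single spot. Here is a minimal, self-contained sketch of the idea; the greeterFor() selection method is my illustration and not part of the original example:

```java
import java.io.PrintStream;

interface GreeterService {
    void greet();
}

class Greeter implements GreeterService {
    private final PrintStream target;
    Greeter(PrintStream target) { this.target = target; }
    @Override
    public void greet() { target.println("Hello, World!"); }
}

class FrenchGreeter implements GreeterService {
    @Override
    public void greet() { System.out.println("Bonjour!"); }
}

public class GreeterApp {
    // The single place that knows about concrete implementations.
    // Adding a language means adding a class and one line here - the
    // rest of the application depends only on GreeterService.
    static GreeterService greeterFor(String language) {
        return "fr".equals(language)
                ? new FrenchGreeter()
                : new Greeter(System.out);
    }

    public static void main(String[] args) {
        GreeterService greeter = greeterFor(args.length > 0 ? args[0] : "en");
        greeter.greet();
    }
}
```

The selection logic still has to be touched when a language is added, but it is the only such spot - every other consumer of GreeterService stays closed for modification.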
Nevertheless, it is worthwhile to follow the Open-Closed Principle as far as possible, as it encourages us to develop cohesive, loosely coupled components.\nFurther Reading  The Open-Closed Principle  ","date":"January 25, 2021","image":"https://reflectoring.io/images/stock/0093-open-closed-1200x628-branded_hud46cf09109d15665b3c0c539841a8b13_144865_650x0_resize_q90_box.jpg","permalink":"/open-closed-principle-explained/","title":"The Open-Closed Principle Explained"},{"categories":["Spring Boot"],"contents":"GraphQL was developed by Facebook in 2012 for their mobile apps. It was open-sourced in 2015 and is now used by many development teams, including some prominent ones like GitHub, Twitter, and Airbnb. Here we will see what GraphQL is and explain its usage with some simple examples.\n Example Code This article is accompanied by a working code example on GitHub. What is GraphQL? GraphQL is a specification of a query language for APIs. The client or API consumer sends the request in a query language containing the fields it requires and the server returns only the requested fields instead of the complete payload.\nInstead of having many different endpoints, as we would have with REST, we have a single endpoint to which the consumer sends different queries depending on the data of interest. 
A sample GraphQL query and its response might look like this:\nGraphQL query:\n{ Product { title description category } } Response:\n{ \u0026#34;data\u0026#34;: { \u0026#34;Product\u0026#34;: { \u0026#34;title\u0026#34;: \u0026#34;Television\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;My 25 inch Television\u0026#34;, \u0026#34;category\u0026#34;: \u0026#34;Electronic Goods\u0026#34; } } } In this sample, we send a request for fetching a product with attributes title, description, and category, and the server returns the response containing only those fields (title, description, and category).\nGraphQL shifts some responsibility to the client for constructing the query containing only the fields of its interest. The server is responsible for processing the query and then fetching the data from an underlying system like a database or a web service.\nSo, instead of the server providing multiple APIs for different needs of the consumer, the onus is thrown to the consumer to fetch only the data it\u0026rsquo;s interested in.\nGraphQL Schema GraphQL is language-agnostic so it defines its own query language and a schema definition language (SDL).\nSo, to define what data we can get from a GraphQL endpoint, we need to define a schema.\nA Type is the most basic component of a GraphQL schema and represents a kind of object we can fetch from our service.\nScalar and Object Types We create a GraphQL schema by defining types and then providing functions for each type. Similar to the types in many programming languages, a type can be a scalar like int, string, decimal, etc, or an object type formed with a combination of multiple scalar and complex types.\nAn example of types for a GraphQL service that fetches a list of recent purchases looks like this:\ntype Product { id: ID! title: String! description: String! category: String madeBy: Manufacturer! } type Manufacturer { id: ID! name: String! 
address: String } Here we have defined the object types Product and Manufacturer.\nManufacturer is composed of scalar types with the names id, name, and address. Similarly, the Product type is composed of four scalar types with the names id, title, description, category, and an object type Manufacturer.\nSpecial Types: Query, Mutation, and Subscription We need to add root types to the GraphQL schema for adding functionality to the API. The GraphQL schema has three root-level types: Query, Mutation, and Subscription. These are special types and signify the entry point of a GraphQL service. Of these three, only the Query type is mandatory for every GraphQL service.\nThe root types determine the shape of the queries and mutations that will be accepted by the server.\nAn example Query root type for a GraphQL service that fetches a list of recent purchases looks like this:\ntype Query { myRecentPurchases(count: Int, customerID: String): [Product]! } This query fetches the specified number of recent purchases for a customer.\nA Mutation represents changes that we can make on our objects. Our schema with a Mutation will look like this:\ntype Mutation { addPurchases(count: Int, customerID: String): [Product]! } This mutation is used to add purchases of a customer.\nSubscription is another special type for real-time push-style updates. Subscriptions depend on the use of a publishing mechanism to generate the event that notifies a subscription that is subscribed to that event. Our schema with a Subscription will look like this:\ntype Subscription { newProduct: Product! } This is a subscription for adding a new Product.\nServer-Side Implementation GraphQL has several server-side implementations available in multiple languages. These implementations roughly follow a pipeline pattern with the following stages:\n We expose an endpoint that accepts GraphQL queries. We define a schema with types, queries, and mutations. 
We associate a function called \u0026ldquo;resolver\u0026rdquo; for each type to fetch data from underlying systems.  A GraphQL endpoint can live alongside REST APIs. Similar to REST, the GraphQL endpoint will also depend on a business logic layer for fetching data from underlying systems.\nSupport for GraphQL constructs varies across implementations. While the basic types Query and Mutation are supported across all implementations, support for the Subscription type is not available in a few.\nClient-Side Implementations The consumers of the GraphQL API use the query language defined by the server\u0026rsquo;s schema to request the specific data of their interest.\nOn the client-side, at the most basic level, we can send the query as a JSON payload in a POST request to a graphql endpoint:\ncurl --request POST \u0026#39;localhost:8080/graphql\u0026#39; \\  --header \u0026#39;Content-Type: application/json\u0026#39; \\  --data-raw \\  \u0026#39;{\u0026#34;query\u0026#34;:\u0026#34;query {myRecentPurchases(count:10){title,description}}\u0026#34;}\u0026#39; Here we send a request for fetching 10 recent purchases with the fields title, and description in each record.\nTo avoid making the low-level HTTP calls, we should use a GraphQL client library as an abstraction layer. Among other things, the GraphQL client library will take care of\n sending the request and handling the response, integrating with the view layer and optimistic UI updates, and caching query results.  There are several client frameworks available with popular ones being the Apollo Client, Relay (from Facebook), and urql.\nBuilding a GraphQL Server with Spring Boot We will use a Spring Boot application to build a GraphQL server implementation. 
For this, let us first create a Spring Boot application with the Spring Initializr.\nYou can find the code of the complete example application on GitHub.\nAdding GraphQL Dependencies For the GraphQL server, we will add the following Maven dependencies:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.graphql-java\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;graphql-spring-boot-starter\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.0.2\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.graphql-java\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;graphql-java-tools\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.2.4\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Here we have added graphql-spring-boot-starter as a GraphQL starter and a Java tools module graphql-java-tools.\nDefining the GraphQL Schema We can either take a top-down approach by defining the schema and then creating the POJOs for each type or a bottom-up approach by creating the POJOs first and then create a schema from those POJOs.\nWe opt for the first approach and create our schema first. The GraphQL schema needs to be defined in a file with the extension graphqls and needs to live in the resources folder.\nLet\u0026rsquo;s define our schema in a file src/main/resources/product.graphqls:\ntype Product { id: ID! title: String! description: String! category: String madeBy: Manufacturer! } type Manufacturer { id: ID! name: String! address: String } # The Root Query for the application type Query { myRecentPurchases(count: Int, customerID: String): [Product]! lastVisitedProducts(count: Int, customerID: String): [Product]! productsByCategory(category: String): [Product]! } # The Root Mutation for the application type Mutation { addRecentProduct(title: String!, description: String!, category: String) : Product! 
} Here we have added three operations to our Query and a Mutation for adding recent products.\nNext, we define the POJO classes for the Object types Product and Manufacturer:\npublic class Product { private String id; private String title; private String description; private String category; private Manufacturer madeBy; } public class Manufacturer { private String id; private String name; private String address; } The Product POJO maps to the Product type and Manufacturer maps to the Manufacturer type defined in our GraphQL schema.\nAssociate GraphQL Types with Resolvers Multiple resolver components convert the GraphQL request received from the API consumers and invoke operations to fetch data from applicable data sources. For each type, we define a resolver.\nWe will now add resolvers for all the types defined in the schema. The resolver classes need to implement GraphQLQueryResolver for the Query object and GraphQLMutationResolver for the Mutation object. As explained earlier, Query and Mutation are the root GraphQL objects.\nWhen a GraphQL request is received, the fields in the root types are resolved to the output of the executed methods in these resolver classes.\nLet\u0026rsquo;s first add a resolver class named QueryResolver containing the methods corresponding to the fields in our GraphQL Query object:\n@Service public class QueryResolver implements GraphQLQueryResolver { private ProductRepository productRepository; @Autowired public QueryResolver(final ProductRepository productRepository) { super(); this.productRepository = productRepository; } public List\u0026lt;Product\u0026gt; getMyRecentPurchases( final Integer count, String customerID) { List\u0026lt;Product\u0026gt; products = productRepository .getRecentPurchases(count); return products; } public List\u0026lt;Product\u0026gt; getLastVisitedProducts( final Integer count, final String customerID) { List\u0026lt;Product\u0026gt; products = productRepository .getLastVisitedPurchases(count); return
products; } public List\u0026lt;Product\u0026gt; getProductsByCategory( final String category) { List\u0026lt;Product\u0026gt; products = productRepository .getProductsByCategory(category); return products; } } We have defined the QueryResolver class as a Service class to resolve the root Query type in our GraphQL schema. In our example app, this service class is injected with a ProductRepository object to fetch product data from an H2 database.\nWe next add a resolver that resolves the madeBy field of the Product type to a Manufacturer object:\n@Service public class ProductResolver implements GraphQLResolver\u0026lt;Product\u0026gt; { private ManufacturerRepository manufacturerRepository; @Autowired public ProductResolver(ManufacturerRepository manufacturerRepository) { super(); this.manufacturerRepository = manufacturerRepository; } public Manufacturer getMadeBy(final Product product) { return manufacturerRepository .getManufacturerById(product.getManufacturerID()); } } The GraphQL library will automatically call this resolver for each Product to resolve its madeBy field with a Manufacturer object. This happens only if the consumer has requested the madeBy field, of course.\nSimilar to the resolver for Query object types, let us add a resolver for the Mutation root object type:\n@Service public class Mutation implements GraphQLMutationResolver { public Product addRecentProduct( final String title, final String description, final String category) { return Product.builder() .title(\u0026#34;television\u0026#34;) .category(\u0026#34;electronic\u0026#34;) .build(); } } Here the Mutation class implements GraphQLMutationResolver and contains a method addRecentProduct, which maps to the field in the Mutation root object type.\nConnecting to Datasources and Applying Middleware Logic Next, we will enable our resolvers to fetch data from underlying data sources like a database or web service. For this example, we have configured an in-memory H2 database as the data store for products and manufacturers.
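As a rough sketch of what such a repository does behind the resolvers, here is a simplified, in-memory stand-in (the real example uses Spring JDBC against H2; the getRecentPurchases() method name mirrors the resolver code above, while the nested Product class and the storage details are purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, in-memory stand-in for the JDBC-backed repository.
public class ProductRepository {

    // Minimal product holder; the real example uses a full POJO.
    public static class Product {
        final String title;
        final String description;
        Product(String title, String description) {
            this.title = title;
            this.description = description;
        }
    }

    private final List<Product> purchases = new ArrayList<>();

    public void save(Product product) {
        purchases.add(product);
    }

    // Returns up to `count` of the most recently saved purchases,
    // newest first - roughly what an "ORDER BY ... DESC LIMIT ?"
    // query against the purchases table would return.
    public List<Product> getRecentPurchases(int count) {
        List<Product> result = new ArrayList<>();
        for (int i = purchases.size() - 1; i >= 0 && result.size() < count; i--) {
            result.add(purchases.get(i));
        }
        return result;
    }
}
```

The resolver above stays oblivious to whether the data comes from this in-memory list, an H2 table, or a downstream web service - that is exactly the separation the repository layer provides.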
We use Spring JDBC to retrieve data from the database and put this logic in separate repository classes.\nApart from fetching data, we can also build different categories of middleware logic in this business service layer. A few examples of middleware logic are:\n authorization of incoming requests, applying filters on data fetched from the backend, transformation into backend data models, and caching rarely changing data.  Running the Application After compiling and running the application, we can send GraphQL queries to the endpoint http://localhost:8080/graphql. A sample GraphQL query and response might look like this:\nGraphQL query:\nquery { myRecentPurchases(count: 2) { title description } } Response:\n{ \u0026#34;data\u0026#34;: { \u0026#34;myRecentPurchases\u0026#34;: [ { \u0026#34;title\u0026#34;: \u0026#34;Samsung TV\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Samsung Television\u0026#34; }, { \u0026#34;title\u0026#34;: \u0026#34;Macbook Pro 13\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Macbook pro 13 inch laptop\u0026#34; } ] } } GraphQL vs. REST REST has been the de-facto standard style for building APIs. Good API designs are usually driven by consumer needs which vary depending on the consumer. Let\u0026rsquo;s look at some differences between REST and GraphQL.\nOver Fetching and Under Fetching With REST, we might require multiple APIs to retrieve different \u0026ldquo;shapes\u0026rdquo; of the same product data. Alternatively, we might fetch the entire product data with all its relations every time even though we only need a part of the data.\nGraphQL tries to solve the problems of over fetching and under fetching data. With GraphQL, we will have a single endpoint on which the consumer can send different queries depending on the data of interest.\nShape of the API REST APIs are based on resources that are identified by URLs and an HTTP method (GET, POST, PUT, DELETE) indicating one of the CRUD operations.
GraphQL, in contrast, is based on a data graph that is returned in response to a request sent as a query to a fixed endpoint.\nHTTP Status Codes REST APIs are mostly designed to return 2xx status codes for success and 4xx and 5xx for failures. GraphQL APIs return 200 as status code irrespective of whether it is a success or failure.\nHealth Check With REST APIs, we check for a 2xx status code on a specific endpoint to check if the API is healthy and capable of serving the requests. In GraphQL, health checking is relatively complex since the monitoring function needs to parse the response body to check the server status.\nCaching With REST APIs, the GET endpoints are cached in the application layer or by using a CDN. With GraphQL, we need to cache on the client-side, which is supported by some GraphQL client implementations. Apollo Client and URQL, for example, make use of GraphQL\u0026rsquo;s schema and type system using introspection to maintain a client-side cache.\nGraphQL is however known to break server-side caching because of the varying nature of requests. Server-side caching is at present not standardized across libraries. More information about server-side caching is found in the GraphQL Portal.\nConclusion In this article, we looked at the main capabilities of GraphQL and how it helps to solve some common problems associated with consuming APIs.\nWe also looked at GraphQL\u0026rsquo;s Schema Definition Language (SDL) along with the root types: Query, Mutation, and Subscription followed by how it is implemented on the server-side with the help of resolver functions.\nWe finally set up a GraphQL server implementation with the help of two Spring modules and defined a schema with a Query and Mutation. We then defined resolver functions to connect the query with the underlying data source in the form of an H2 database.\nGraphQL is a powerful mechanism for building APIs but we should use it to complement REST APIs instead of using it as a complete replacement. 
For example, REST may be a better fit for APIs with very few entities and relationships across entities, while GraphQL may be appropriate for applications with many different domain objects.\nFind the complete code of the example application on GitHub.\n","date":"January 20, 2021","image":"https://reflectoring.io/images/stock/0001-network-1200x628-branded_hu72d229b68bf9f2a167eb763930d4c7d5_172647_650x0_resize_q90_box.jpg","permalink":"/getting-started-with-graphql/","title":"Getting Started with GraphQL"},{"categories":["Software Craft"],"contents":"In this article, we are going to discuss some important Git commands and how they make developers\u0026rsquo; lives easier - working individually or in a team. We will compare git rebase with git merge and explore some ways of using them in our Git workflow.\nIf you are a beginner to Git and are looking to understand the basic fork \u0026amp; pull workflow with Git, then you should give this article a read.\nIntroduction to Git Git is an open-source distributed version control system. We can break that down into the following pieces:\n Control System: Git can be used to store content – it is usually used to store code, but other content can also be stored. Version Control System: Git helps in maintaining a history of changes and supports working on the same files in parallel by providing features like branching and merging. Distributed Version Control System: The code is present in two types of repositories – the local repository, and the remote repository.  What is Git Merge? Let\u0026rsquo;s first have a look at git merge. A merge is a way to put a forked history back together. 
The git merge command lets us take independent branches of development and combine them into a single branch.\nIt\u0026rsquo;s important to note that while using git merge, the current branch will be updated to reflect the merge, but the target branch remains untouched.\ngit merge is often used in combination with git checkout for the selection of the current branch, and git branch -d for deleting the obsolete source branch.\nWe use git merge for combining multiple sequences of commits into one unified history. In the most common cases, we use git merge to combine two branches.\nLet\u0026rsquo;s take an example in which we will mainly focus on branch merging patterns. In the scenario which we have taken, git merge takes two commit pointers and tries to find the common base commit between them.\nOnce Git has found a common base commit, it will create a new \u0026ldquo;merge commit\u0026rdquo;, that will combine the changes of each queued merge commit sequence.\nAfter a merge, we have a single new commit on the branch we merge into. This commit contains all the changes from the source branch.\nWhat is Git Rebase? Let\u0026rsquo;s have a look at the concept of git rebase. A rebase is the way of migrating or combining a sequence of commits to a new base commit. If we consider it in the context of a feature branching workflow, we can visualize it as follows:\nLet\u0026rsquo;s understand the working of git rebase by looking at history with a topic branch off another topic branch.\nLet\u0026rsquo;s say we have branched a feature1 branch from the mainline, and added some functionality to our project, and then made a commit. Now, we branch off the feature2 branch to make some additional changes. 
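The merge workflow described above can be reproduced in a scratch repository (the branch and file names are illustrative):

```shell
# Reproduce the merge workflow in a throwaway repository
git init merge-demo && cd merge-demo
git config user.email "demo@example.com" && git config user.name "Demo"
git commit --allow-empty -m "initial commit"

git checkout -b feature            # fork off a feature branch
echo "feature work" > feature.txt
git add feature.txt && git commit -m "work on feature"

git checkout -                     # back to the base branch
echo "base work" > base.txt
git add base.txt && git commit -m "work on base"

git merge --no-edit feature        # creates a merge commit with two parents
git branch -d feature              # delete the now-obsolete source branch
git log --oneline --graph          # forked history, rejoined by the merge
```

The --no-edit flag simply accepts the default merge commit message; the final git log --graph output shows the two diverged lines of development joined back together by a single merge commit with two parent commits.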
Finally, we go back to the feature1 branch and commit a few more changes:\nNow suppose that we have decided to merge the feature2 changes to the mainline for the release, but we also want to hold back the feature1 changes until they are tested further.\nWith git rebase, we can take the changes in the feature2 branch that are not in the feature1 branch (i.e. C8 and C9) and \u0026ldquo;replay\u0026rdquo; them on the main branch by using the --onto option of git rebase. We have to specify all three branch names in this case because we are holding back the changes from the feature1 branch while replaying the changes from the feature2 branch onto the main branch:\ngit rebase --onto main feature1 feature2 It gives us a somewhat complex but pretty cool result:\nThe commits from the feature2 branch have been replayed onto the main branch, and the feature2 branch now contains all the commits from the main branch plus the new commits from the feature2 branch.\nNow it\u0026rsquo;s time to fast-forward our main branch so it will contain the new commits.\nA fast-forward is a special case of git merge in which we simply move the tip of a branch to a later commit. In our case, we want to move the tip of the main branch forward so it points to the latest commit of our feature2 branch.\nWe will use the following commands to do this:\ngit checkout main git merge feature2 In simple words, fast-forwarding main to the feature2 branch means that previously the HEAD pointer of the main branch was at \u0026lsquo;C6\u0026rsquo;, but after the above commands Git fast-forwards the main branch\u0026rsquo;s HEAD pointer to the tip of the feature2 branch:\nGit Rebase vs Git Merge Now let\u0026rsquo;s go through the differences between git rebase and git merge.\nLet\u0026rsquo;s have a look at git merge first:\nIf we look at the diagram above, the golden commit is the latest commit on the base branch before the merge and the red commit is the merge commit.
The merge commit has both - the latest commit in the base branch and the latest commit in the feature branch - as ancestors.\ngit merge preserves the ancestry of commits.\ngit rebase, on the other hand, re-writes the changes of one branch onto another branch without creating a merge commit:\nA new commit will be created on top of the branch we rebase onto for every commit that is in the source branch but not in the target branch. It will look as if all commits had been written on top of the main branch all along.\nArguments for Using git merge  It\u0026rsquo;s a very simple Git workflow to use and understand. It helps in maintaining the original context of the source branch. If we need to keep the history graph semantically correct, git merge preserves the commit history. The source branch commits are separated from the other branch commits, which makes it easy to identify a feature\u0026rsquo;s commits and merge them into another branch later.  Arguments for Using git rebase When a lot of developers are working on the same branch in parallel, the history can become densely populated with merge commits. This makes the visual history charts very messy, which makes it harder to extract useful information:\ngit rebase will help keep the history clean.\nChoosing the Right Method When the team chooses to go for a feature-based workflow, then git merge is the right choice because of the following reasons:\n It helps in preserving the commit history, and we need not worry about rewritten history and commits. It avoids unnecessary git reverts or resets. Changes from a complete feature branch can easily be reconciled with the help of a merge.  Contrary to this, if we want a more linear history, then git rebase is the best option.
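As a minimal sketch of that linear history (throwaway repo, invented names, Git 2.28+ assumed), rebasing a feature branch onto main leaves a straight line of commits with no merge commit anywhere:

```shell
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q -b main
git config user.email demo@example.com
git config user.name Demo
echo base > a.txt && git add . && git commit -qm "base"

git checkout -q -b feature
echo f > f.txt && git add . && git commit -qm "feature work"

git checkout -q main                 # main moves on in parallel
echo m >> a.txt && git commit -qam "main work"

git checkout -q feature
git rebase -q main                   # replay the feature commit on top of main
git log --oneline --graph            # a straight line: no merge commit
```

After the rebase, the feature commit sits directly on top of the latest main commit, as if it had been written there all along.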
It helps to avoid unnecessary commits by keeping the changes linear and more centralized.\nWe need to be very careful while applying a rebase because if it is done incorrectly, it can cause some serious issues.\nDangers of Rebasing When it comes to rebasing and merging, most people hesitate to use git rebase as compared to git merge.\nThe basic purpose of git rebase and git merge is the same, i.e. they help us to bring changes from one branch into another. The difference is that git rebase re-writes the commit history:\nSo, if someone else checks out your branch before you rebase it, it will be really hard for them to figure out what the history of each branch is.\nA problem that normally occurs when more than one developer is working on the same branch is explained in the following example:\nSuppose you are working with another developer on the same feature branch called login_branch. The problem with both developers using rebase directly on login_branch is that they would be merging changes repeatedly and getting conflicts because they are working on the same branch.\nTo avoid this problem, both developers should rebase off a common branch, and once the common branch becomes stable, one of the developers can rebase it onto the main branch.\nTo summarize:\n rebase replays your commits on top of the new base. rebase rewrites history by creating new commits. rebase keeps the Git history clean.  Some of the key points to keep in mind are:\n rebase only your own local branches. Don\u0026rsquo;t rebase public branches. Undo a rebase with git reflog.  Conclusion Let\u0026rsquo;s summarize what we have discussed so far.\nFor repositories where multiple people work on the same branches, git rebase is not the most suitable option because the feature branch keeps on changing.\nFor individuals, on the other hand, rebasing provides a lot of ease.
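The reflog escape hatch mentioned above can be sketched as follows (throwaway repo, invented names, Git 2.28+ assumed); ORIG_HEAD, which Git sets just before a rebase, would work as well:

```shell
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q -b main
git config user.email demo@example.com
git config user.name Demo
echo base > a.txt && git add . && git commit -qm "base"

git checkout -q -b topic
echo t > t.txt && git add . && git commit -qm "topic work"

git checkout -q main
echo m >> a.txt && git commit -qam "main work"

git checkout -q topic
before=$(git rev-parse HEAD)      # where topic pointed before the rebase
git rebase -q main                # rewrites topic with new commits
git reflog -n 3                   # the pre-rebase commit is still listed here
git reset --hard "$before"        # jump back: the rebase is undone
```

Because the reflog keeps the old commits reachable for a while, a botched rebase on a local branch is rarely fatal.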
If one wants to maintain the history track, then one must go for the merging option because merging preserves the history while rebasing overwrites it.\nHowever, if we have a complex history and we want to streamline it, then rebasing can be very useful. It can help us to remove undesirable commits and squash two or more commits into one, and it also provides the option to edit commit messages (during an \u0026ldquo;interactive\u0026rdquo; rebase).\nRebase focuses on presenting one commit at a time, whereas merging focuses on presenting all changes at once (in a merge commit). But we should keep in mind that reverting a rebase is much more difficult than reverting a merge if there are many conflicts.\nFurther Reading  Merging vs. Rebasing Git Rebasing  ","date":"January 14, 2021","image":"https://reflectoring.io/images/stock/0050-git-1200x628-branded_hue893d837883783866d1e88c8e713ed74_236340_650x0_resize_q90_box.jpg","permalink":"/git-rebase-merge/","title":"Git Rebase vs. Git Merge Explained"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you\u0026rsquo;re looking for some guiding principles you want to trigger some thoughts about what is worth doing and what isn\u0026rsquo;t you enjoy reading about a maker\u0026rsquo;s decisions  Book Facts  Title: Hell Yeah or No Authors: Derek Sivers Word Count: ~ 25.000 (2 hours at 250 words / minute) Reading Ease: easy Writing Style: conversational, very short chapters, inspiring Year Published: 2020  Overview {% include book-link.html book=\u0026ldquo;hell-yeah\u0026rdquo; %} is a very succinctly written book about some lessons the author learned in his life and thought worth sharing.\nIt\u0026rsquo;s a collection of very short chapters of 1-5 pages. Each chapter was once a blog post that contains one lesson about life.
It\u0026rsquo;s basically a window into the philosophy of Derek Sivers, a maker and entrepreneur.\nIt\u0026rsquo;s about things like finding your identity, making things happen, saying yes and no, and finding out what\u0026rsquo;s worth doing.\nNotes Here are my notes, as usual with some comments in italics.\nUpdating Identity  what\u0026rsquo;s left if you stopped doing everything you did for money and attention? (i.e. don\u0026rsquo;t let your work life define you) your actions reveal your values  your small actions make up who you are and thus may change who you are \u0026ldquo;How you do anything is how you do everything.\u0026rdquo; your character predicts your future   if you think you want something but haven\u0026rsquo;t started yet, either  stop lying to yourself that you want it, or start doing it to see if you really want it   \u0026ldquo;Success comes from doing, not declaring.\u0026rdquo; if you don\u0026rsquo;t keep doing the same job, you shouldn\u0026rsquo;t keep the same job title - we shouldn\u0026rsquo;t pretend to be something we aren\u0026rsquo;t anymore make sure to know what you value most so you can align your decisions with your goals and values  always be able to answer why you\u0026rsquo;re doing things but: \u0026ldquo;Old opinions shouldn\u0026rsquo;t define who you are in the future.\u0026rdquo;   no matter what your preferences are, someone will always say you\u0026rsquo;re wrong - knowing your preferences will help you handle that ideas need not be 100% original - you can copy others and still provide value to the world your public persona doesn\u0026rsquo;t need to be yourself - that\u0026rsquo;s inviting stress we often don\u0026rsquo;t understand other people because we\u0026rsquo;re biased in our own little bubble of the world - we should try seeing things from their perspective  This is also emphasized in the first chapter of \u0026ldquo;The Psychology of Money\u0026rdquo;: everybody has a different background, a different 
history, has grown up with different values and different opportunities, so we shouldn\u0026rsquo;t expect everyone to share our opinions. a rarely recognized axis of difference between people is being future-focused vs. being present-focused - try to use this to understand other people    Saying No  if you don\u0026rsquo;t feel like \u0026ldquo;Hell yeah!\u0026rdquo; about an opportunity, don\u0026rsquo;t do it - it will free you up for the next \u0026ldquo;Hell yeah!\u0026rdquo; thing create an environment that makes it easy to say \u0026ldquo;no\u0026rdquo; to distractions (hide your phone while working on something, close the door, close your browser tabs, \u0026hellip; see my notes on \u0026ldquo;Make Time\u0026rdquo; for some more) we focus so much on being useful that we have forgotten to do things for ourselves it\u0026rsquo;s ok to be a \u0026ldquo;slow thinker\u0026rdquo; - manage other people\u0026rsquo;s expectations to not expect immediate answers to their questions (I definitely fall into the category of slow thinkers - there\u0026rsquo;s little I hate more than people expecting immediate answers from me, I always need to think things through first) motivation is delicate and can be influenced by subtle tweaks - find the tweaks that improve your motivation, even if it\u0026rsquo;s a bit inconvenient to other people if you get too comfortable, it may be time to let go of something you love doing to gain freedom for change before you start something new, think about the ways it might end - maybe it\u0026rsquo;s better to say no \u0026ldquo;Empty time has the potential to be filled with great things. 
Time filled with little things has little potential.\u0026rdquo; (This is something I\u0026rsquo;ve been trying to teach my kids - being bored for a while is a great opportunity, not a great loss) when you\u0026rsquo;re feeling down, raise your bar and say no to everything that doesn\u0026rsquo;t help you feel better  Making Things Happen  there\u0026rsquo;s no speed limit on learning or creating something - the only limiting factor is yourself instead of going full-on 100%, dial back to 50% and compare your results to earlier - they may be almost the same as if you went 100% and you just gained time for other stuff! disconnecting from the internet and from people gives room for doing one\u0026rsquo;s best work when feeling unmotivated do some of those boring but necessary chores to get back into doing something compare to the next thing below your situation, not the next thing above your situation - you\u0026rsquo;ll feel gratitude for what you have instead of envy for what you don\u0026rsquo;t have \u0026ldquo;Great insight comes only from opening your mind to many options\u0026rdquo; - there are usually more than two options - make a list \u0026ldquo;Asking advice should be like echolocation\u0026rdquo; - bounce ideas off people to get the whole picture  don\u0026rsquo;t trust a single source of advice   first, try many different things, then, when you\u0026rsquo;ve found something rewarding, focus on that, and it will probably pay off \u0026ldquo;Most people overestimate what they can do in one year and underestimate what they can do in ten years\u0026rdquo; - think long term, you have a lot of time (hopefully)  Changing Perspective  assume you\u0026rsquo;re below average and you\u0026rsquo;ll be free to learn and ask questions (I think this is a good way to get into a Growth Mindset) don\u0026rsquo;t think that everything is someone else\u0026rsquo;s fault - instead, assume that it\u0026rsquo;s your fault and you have the power to change it!
(this is a core tenet in \u0026ldquo;The 7 Habits of Highly Effective People\u0026rdquo; - change yourself instead of trying to change others) \u0026ldquo;Amazingly rare things happen to people every day\u0026rdquo; - thinking about this will change your perspective  What\u0026rsquo;s Worth Doing?  \u0026ldquo;Everybody\u0026rsquo;s ideas seem obvious to them\u0026rdquo; - but they\u0026rsquo;re often amazing to others if it makes you happy and it\u0026rsquo;s smart for your long-term future and it\u0026rsquo;s useful to others, it\u0026rsquo;s worth doing - drop one of the three attributes and it might not be worth doing \u0026ldquo;If you have too much stability, you get bored. If you don\u0026rsquo;t have enough stability, you panic.\u0026rdquo; \u0026ldquo;Do something for love and something for money\u0026rdquo; - don\u0026rsquo;t mix them - the two halves will balance each other out ask \u0026ldquo;what do I hate not doing?\u0026rdquo; instead of \u0026ldquo;what do I love doing?\u0026rdquo; to get a new perspective \u0026ldquo;Learning without doing is wasted\u0026rdquo; (yes! Also see my notes on \u0026ldquo;Pragmatic Thinking \u0026amp; Learning\u0026rdquo;) make decisions as late as possible because you\u0026rsquo;ll have the most information don\u0026rsquo;t start a business until people are asking you to - prove a real demand first  Fixing Faulty Thinking  \u0026ldquo;We don\u0026rsquo;t get wise just by adding and adding. We also need to subtract.\u0026rdquo; - unlearn things that don\u0026rsquo;t work (anymore) (that\u0026rsquo;s why I unlearned JavaScript, but it came back!)
read books and apply the general lessons to your life - don\u0026rsquo;t get hung up on examples in the books that most likely have nothing to do with your life \u0026ldquo;To make a change, you have to be extreme.\u0026rdquo; - to change habits, do an extreme to have a better chance at changing it  Saying Yes  some people are born with talent, but you can also become talented with long years of practice \u0026ldquo;Judge a goal by how well it changes your actions in the present moment.\u0026rdquo; - a great goal makes you take action immediately \u0026ldquo;Inspiration is not receiving information. Inspiration is applying what you\u0026rsquo;ve received.\u0026rdquo; \u0026ldquo;You grow by doing what excites you and scares you.\u0026rdquo; whatever scares you, go do it - you won\u0026rsquo;t be scared by it for long  Conclusion The book is full of very small chapters that encouraged me to think about my decisions and how I\u0026rsquo;m spending my time. Most chapters tell a story from the author\u0026rsquo;s life, so they\u0026rsquo;re not directly applicable to our own lives, but they\u0026rsquo;re still very inspirational.\nThe book has a high density of inspirational quotes - I\u0026rsquo;m still thinking about how to make the most of them. And it\u0026rsquo;s a quick read. In summary, I strongly recommend reading this book.\n","date":"January 9, 2021","image":"https://reflectoring.io/images/covers/hell-yeah-teaser_hud5b659458ccbcdd0eb9bdd085ca0dd4b_82367_650x0_resize_q90_box.jpg","permalink":"/book-review-hell-yeah-or-no/","title":"Book Notes: Hell Yeah or No"},{"categories":["AWS"],"contents":"Continuous deployment is an important part in today\u0026rsquo;s software development loop. We want to ship the latest version of our software in no time to provide our users with the newest features or bugfixes. 
This is a major pillar of the DevOps movement.\nThis means deployments have to be automated.\nAWS CloudFormation is Amazon\u0026rsquo;s solution to deploying software and infrastructure into the cloud. In this article, we\u0026rsquo;ll deploy a Docker image to the AWS cloud with CloudFormation. We\u0026rsquo;ll start at zero so no previous AWS knowledge is required.\nAt the end of this article you will\n know what CloudFormation is and what it can do, know the basic vocabulary to talk about AWS cloud infrastructure, and have all the tools necessary to deploy a Docker image with a couple of CLI commands.  Check Out the Book!  This article gives only a first impression of what you can do with CloudFormation.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n Getting Ready If you\u0026rsquo;ve never deployed an app to the cloud before, you\u0026rsquo;re in for a treat. We\u0026rsquo;re going to deploy a \u0026ldquo;Hello World\u0026rdquo; version of a Todo app to AWS with only a couple of CLI commands (it requires some preparation to get these CLI commands working, though).\nWe\u0026rsquo;re going to use Docker to make our app runnable in a container, AWS CloudFormation to describe the infrastructure components we need, and the AWS CLI to deploy that infrastructure and our app.\nThe goal of this chapter is not to become an expert in all things AWS, but instead to learn a bit about the AWS CLI and CloudFormation to have a solid foundation to build more AWS knowledge.\nWe\u0026rsquo;ll start at zero and set up our AWS account first.\nSetting up an AWS Account To do anything with AWS, you need an account with them.
If you don\u0026rsquo;t have an account yet, go ahead and create one now.\nIf you already have an account running serious applications, you might want to create an extra account just to make sure you\u0026rsquo;re not messing around with your serious business while playing around with this article.\nInstalling the AWS CLI To do magic with AWS from our command line, we need to install the AWS CLI.\nThe AWS CLI is a beast of a command-line interface that provides commands for many, many different AWS services (224 at the time of this writing). In this chapter, we\u0026rsquo;re going to use it to deploy the application and then to get some information about the deployed application.\nInstalling the AWS CLI differs across operating systems, so please follow the official instructions for your operating system to install version 2 of the AWS CLI on your machine.\nOnce it\u0026rsquo;s installed, run aws configure. You will be asked to provide 4 parameters:\n~ aws configure AWS Access Key ID [****************OGBE]: AWS Secret Access Key [****************CmqH]: Default region name [ap-southeast-2]: Default output format [yaml]: You can get the \u0026ldquo;AWS Access Key ID\u0026rdquo; and \u0026ldquo;AWS Secret Access Key\u0026rdquo; after you have logged in to your AWS account when you click on your account name and then \u0026ldquo;My Security Credentials\u0026rdquo;. There, you open the tab \u0026ldquo;Access keys\u0026rdquo; and click on \u0026ldquo;Create New Access Key\u0026rdquo;. Copy the values into the prompt of the AWS CLI.\nThe AWS CLI is now authorized to make calls to the AWS APIs in your name.\nNext, the aws configure command will ask you for a \u0026ldquo;Default region name\u0026rdquo;.\nThe AWS services are distributed across \u0026ldquo;regions\u0026rdquo; and \u0026ldquo;availability zones\u0026rdquo;. Each geographical region is fairly isolated from the other regions for reasons of data residency and low latency.
Each region has 2 or more availability zones to make the services resilient against outages.\nEach time we interact with an AWS service, it will be with the service\u0026rsquo;s instance in a specific region. So, choose the region nearest to your location from the list of service endpoints provided by AWS and enter the region code into the aws configure prompt (for example \u0026ldquo;us-east-1\u0026rdquo;).\nFinally, the aws configure command will prompt you for the \u0026ldquo;Default output format\u0026rdquo;. This setting defines the way the AWS CLI will format any output it presents to you.\nYou can choose between two evils: \u0026ldquo;json\u0026rdquo; and \u0026ldquo;yaml\u0026rdquo;. I\u0026rsquo;m not going to judge you on your choice.\nWe\u0026rsquo;re done configuring the AWS CLI now. Run the following command to test it:\n aws ec2 describe-regions This command lists all the AWS regions in which we can make use of EC2 instances (i.e. \u0026ldquo;Elastic Compute Cloud\u0026rdquo; machines that we can use to deploy our own applications into). If you get a list of regions, you\u0026rsquo;re good to go.\nInspecting the \u0026ldquo;Hello World\u0026rdquo; App Let\u0026rsquo;s take a quick peek at the Todo app we\u0026rsquo;re going to deploy to AWS.\nYou\u0026rsquo;ll find the source code for the app in the folder chapters/chapter-1/application of the GitHub repository. Feel free to clone it or to inspect it on GitHub.\nAt this point, the app is no more than a stateless \u0026ldquo;Hello World\u0026rdquo; Spring Boot app.\nIt has a single controller IndexController that shows nothing more than the message \u0026ldquo;Welcome to the Todo Application!\u0026rdquo;.
Feel free to start the application via this command:\n./gradlew bootRun Then, navigate to http://localhost:8080 to see the message.\nTo deploy the app to AWS, we need to publish it as a Docker image next.\nPublishing the \u0026ldquo;Hello World\u0026rdquo; App to Docker Hub If you know how to package a Spring Boot app in a Docker image, you can safely skip this section. We have published the app on Docker Hub already, so you can use that Docker image in the upcoming steps.\nIf you\u0026rsquo;re interested in the steps to create and publish a basic Docker image, stay tuned.\nFirst, we need a Dockerfile. The repository already contains a Dockerfile with this content:\nFROM openjdk:11.0.9.1-jre ARG JAR_FILE=build/libs/*.jar COPY ${JAR_FILE} app.jar ENTRYPOINT [\u0026#34;java\u0026#34;, \u0026#34;-jar\u0026#34;, \u0026#34;/app.jar\u0026#34;] This file instructs Docker to create an image based on a basic openjdk image, which bundles OpenJDK 11 with a Linux distribution. Starting with version 2.3.0, Spring Boot supports more sophisticated ways of creating Docker images, including cloud-native Buildpacks. We\u0026rsquo;re not going to dive into that, but if you\u0026rsquo;re interested, this blog post gives an introduction to what you can do.\nWe declare the argument JAR_FILE and tell Docker to copy the file specified by that argument into the file app.jar within the container.\nThen, Docker will start the app by calling java -jar /app.jar.\nBefore we can build a Docker image, we need to build the app with\n./gradlew build This will create the file /build/libs/todo-application-0.0.1-SNAPSHOT.jar, which will be caught by the JAR_FILE argument in the Docker file.\nTo create a Docker image we can now call this command:\ndocker build -t stratospheric/todo-app-v1:latest . Docker will now build an image in the namespace stratospheric with the name todo-app-v1 and tag it with the tag latest.
If you do this yourself, make sure to use your Docker Hub username as the namespace because you won\u0026rsquo;t be able to publish a Docker image into the stratospheric namespace.\nA call to docker image ls should list the Docker image now:\n~ docker image ls REPOSITORY TAG IMAGE ID CREATED SIZE stratospheric/todo-app-v1 latest 5d3ef7cda994 3 days ago 647MB To deploy this Docker image to AWS, we need to make it available to AWS somehow. One way to do that is to publish it to Docker Hub, which is the official registry for Docker images (in the book, we\u0026rsquo;ll also learn how to use Amazon\u0026rsquo;s ECR service to deploy Docker images). To do this, we call docker login and docker push:\ndocker login docker push stratospheric/todo-app-v1:latest The login command will ask for your credentials, so you need to have an account at hub.docker.com. The push command will upload the image to the Docker Hub, so that anyone can pull it from there with this command:\ndocker pull stratospheric/todo-app-v1:latest Great! The app is packaged in a Docker image and the image is published. Time to talk about deploying it to AWS.\nGetting Started with AWS Resources As mentioned above, we\u0026rsquo;ll be using AWS CloudFormation to deploy some infrastructure and finally our Docker image to the cloud.\nIn a nutshell, CloudFormation takes a YAML or JSON file as input and provisions all the resources listed in that file to the cloud. This way, we can spin up a whole network with load balancers, application clusters, queues, databases, and whatever else we might need.\nPretty much every AWS service provides some resources we can provision with CloudFormation. Almost everything that you can do via the AWS web interface (called the AWS Console), you can also do with CloudFormation.
The docs provide a list of the CloudFormation resources.\nThe advantage of this is clear: with CloudFormation, we can automate what we would otherwise have to do manually.\nLet\u0026rsquo;s have a look at what we\u0026rsquo;re going to deploy in this article:\nFor deploying our Todo app, we\u0026rsquo;re starting with just a few resources so we don\u0026rsquo;t get overwhelmed. We\u0026rsquo;re deploying the following resources:\nA Virtual Private Cloud (VPC) is the basis for many other resources we deploy. It spins up a virtual network that is accessible only to us and our resources.\nA VPC contains public and private subnets. A public subnet is reachable from the internet, a private subnet is not. In our case, we deploy a single public subnet only. For production deployments, we\u0026rsquo;d usually deploy at least two subnets, each in a different availability zone (AZ) for higher availability.\nTo make a subnet public, we need an internet gateway. An internet gateway allows outbound traffic from the resources in a public subnet to the internet and it does network address translation (NAT) to route inbound traffic from the internet to the resources in a public subnet.\nA subnet that is not attached to an internet gateway is a private subnet.\nInto our public subnet, we deploy an ECS cluster. ECS (Elastic Container Service) is an AWS service that automates much of the work to deploy Docker images.\nWithin an ECS cluster we can define one or more different services that we want to run. For each service, we can define a so-called task. A task is backed by a Docker image. We can decide how many instances of each task we want to run and ECS takes care of keeping that many instances alive at all times.\nIf the healthcheck of one of our application instances (i.e. task instances) fails, ECS will automatically kill that instance and start a new one.
If we want to deploy a new version of the Docker image, we give ECS the URL to the new Docker image and it will automatically do a rolling deployment, keeping at least one instance alive at all times until all old instances have been replaced with new ones.\nLet\u0026rsquo;s get our hands dirty and have a look at the files that describe this infrastructure!\nInspecting the CloudFormation Templates You can find the CloudFormation templates in the cloudformation folder on GitHub.\nIn that folder, we have two YAML files - network.yml and service.yml - as well as two shell scripts - create.sh and delete.sh.\nThe YAML files are the CloudFormation templates that describe the resources we want to deploy. The shell scripts wrap some calls to the AWS CLI to create (i.e. deploy) and delete (i.e. destroy) the resources described in those files. network.yml describes the basic network infrastructure we need, and service.yml describes the application we want to run in that network.\nBefore we look at the CloudFormation files, we need to discuss the concept of \u0026ldquo;stacks\u0026rdquo;.\nA stack is CloudFormation\u0026rsquo;s unit of work. We cannot create single resources with CloudFormation unless they are wrapped in a stack.\nA YAML file (or JSON file, if you enjoy chasing closing brackets more than chasing indentation problems) always describes the resources of a stack. Using the AWS CLI, we can interact with this stack by creating it, deleting it, or modifying it.\nCloudFormation will automatically resolve dependencies between the resources defined in a stack. If we define a subnet and a VPC, for example, CloudFormation will create the VPC before the subnet, because a subnet always refers to a specific VPC.
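To make that dependency concrete, here is a minimal sketch of a subnet referencing a VPC - the property values are illustrative and not taken from the templates in the repository:

```yaml
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: '10.0.0.0/16'
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC            # this reference is why CloudFormation
      CidrBlock: '10.0.0.0/24'   # creates the VPC before the subnet
```

The !Ref from the subnet to the VPC is all CloudFormation needs to order the creation (and deletion) of the two resources correctly.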
When deleting a stack, it will automatically delete the subnet before deleting the VPC.\nThe Network Stack With the CloudFormation basics in mind, let\u0026rsquo;s have a look at the first couple of lines of the network stack defined in network.yml:\nAWSTemplateFormatVersion: \u0026#39;2010-09-09\u0026#39; Description: A basic network stack that creates a VPC with a single public subnet  and some ECS resources that we need to start a Docker container  within this subnet. Resources: ... A stack file always refers to a version of the CloudFormation template syntax. The latest version is from 2010. I couldn\u0026rsquo;t believe that at first, but the syntax is rather simple, as we\u0026rsquo;ll see shortly, so I guess it makes sense that it\u0026rsquo;s stable.\nNext is a description of the stack and then a big section with the key Resources that describes the resources we want to deploy in this stack.\nIn the network stack, we want to deploy the basic resources we need to deploy our Todo application onto. That means we want to deploy a VPC with a public subnet, an internet gateway to make that subnet accessible from the internet, and an ECS cluster that we can later put our Docker image into.\nThe first resource we define within the Resources block is the VPC:\nVPC: Type: AWS::EC2::VPC Properties: CidrBlock: \u0026#39;10.0.0.0/16\u0026#39; The key VPC is a name we can choose as we see fit. We can reference the resource by this name later in the template.\nA resource always has a Type. There are a host of different resource types available, since almost every AWS service allows us to create resources via CloudFormation. In our case, we want to deploy a VPC - a virtual private cloud in which we put all the other resources.\nNext, a resource may require some Properties to work. Most resources do require properties. To find out which properties are available, have a look at the reference documentation of the resource you want to work with.
The easiest way to get there is by googling \u0026ldquo;cloudformation \u0026lt;resource name\u0026gt;\u0026rdquo;. The documentation is not always clear about which properties are required and which are optional, so it may require some trial and error when working with a new resource.\nIn the case of our VPC, we only define the property CidrBlock that defines the range of IP addresses available to any resources within the VPC that need an IP address. The value 10.0.0.0/16 means that we\u0026rsquo;re creating a network with an IP address range from 10.0.0.0 through 10.0.255.255 (the 16 leading bits 10.0 are fixed, the rest is free to use).\nWe could deploy the CloudFormation stack with only this single resource, but we need some more infrastructure for deploying our application. Here\u0026rsquo;s a list of all the resources we deploy with a short description for each. You can look them up in the network.yml file to see their configuration:\n PublicSubnet: A public subnet in one of the availability zones of the region we\u0026rsquo;re deploying into. We make this subnet public by setting MapPublicIpOnLaunch to true and attaching it to an internet gateway. InternetGateway: An internet gateway to allow inbound traffic from the internet to resources in our public subnet and outbound traffic from the subnet to the internet. GatewayAttachment: This resource of type VpcGatewayAttachment attaches our subnet to the internet gateway, making it effectively public. PublicRouteTable: A RouteTable to define routes between the internet gateway and the public subnet. PublicSubnetRouteTableAssociation: Some boilerplate to link the route table with our public subnet. PublicRoute: The actual route telling AWS that we want to allow traffic from our internet gateway to any IP address within our public subnet. ECSCluster: A container for running ECS tasks. We\u0026rsquo;ll deploy an ECS task with our Docker image later in the service stack (service.yml).
ECSSecurityGroup: A security group that we can later use to allow traffic to the ECS tasks (i.e. to our Docker container). We\u0026rsquo;ll refer to this security group later in the service stack (service.yml). ECSSecurityGroupIngressFromAnywhere: A security group rule that allows traffic from anywhere to any resources attached to our ECSSecurityGroup. ECSRole: A role that attaches some permissions to the ecs-service principal. We\u0026rsquo;re giving the ECS service some permissions to modify networking stuff for us. ECSTaskExecutionRole: A role that attaches some permissions to the ecs-tasks principal. This role will give our ECS tasks permissions to write log events, for example.  That\u0026rsquo;s quite a few resources we need to know about and configure. Creating CloudFormation templates quickly becomes a trial-and-error marathon until you get it configured just right for your use case. In the book, we\u0026rsquo;ll also have a look at the Cloud Development Kit (CDK), which takes some of that work from our shoulders.\nIn case you wondered about the special syntax used in some places of the YAML file, let\u0026rsquo;s quickly run through it:\n Fn::Select / !Select: Allows us to select one element from a list of elements. We use it to select the first availability zone of the region we\u0026rsquo;re working in. Fn::GetAZs / !GetAZs: Gives us a list of all availability zones in a region. Ref / !Ref: Allows us to reference another resource by the name we\u0026rsquo;ve given to it. Fn::Join / !Join: Joins a list of strings to a single string, with a given delimiter between each. Fn::GetAtt / !GetAtt: Resolves an attribute of a resource we\u0026rsquo;ve defined.  Almost all of these functions have a long form (Fn::...) and a short form (!...) which behave the same, but look a bit different in YAML. 
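To get a feel for what these intrinsic functions do, here is a rough Python model of their semantics. This is only an illustration: the availability zone names and the pseudo-parameter map below are made-up sample data, not anything CloudFormation actually provides.

```python
# Rough model of the CloudFormation intrinsic functions used in network.yml.

def fn_get_azs(region_azs):
    # Fn::GetAZs: returns the list of availability zones of a region.
    return list(region_azs)

def fn_select(index, items):
    # Fn::Select: picks one element from a list.
    return items[index]

def fn_join(delimiter, parts):
    # Fn::Join: concatenates strings with a delimiter between each.
    return delimiter.join(parts)

def fn_ref(names, logical_name):
    # Ref: resolves a logical name (here modeled as a plain dict lookup).
    return names[logical_name]

# Made-up sample data standing in for a region and a deployed stack.
azs = fn_get_azs(["ap-southeast-2a", "ap-southeast-2b", "ap-southeast-2c"])
first_az = fn_select(0, azs)  # what !Select [0, !GetAZs ''] evaluates to
names = {"AWS::StackName": "stratospheric-basic-network"}
export_name = fn_join(":", [fn_ref(names, "AWS::StackName"), "ClusterName"])
print(first_az)     # ap-southeast-2a
print(export_name)  # stratospheric-basic-network:ClusterName
```

The last line mirrors how the Outputs section composes the export name from the stack name and the output key.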
In a nutshell, we can use the short form for single-line expressions and the long form for longer expressions that we might want to split over several lines.\nFinally, at the bottom of network.yml, we see an Outputs section:\nOutputs: ClusterName: Description: The name of the ECS cluster Value: !Ref \u0026#39;ECSCluster\u0026#39; Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;ClusterName\u0026#39; ] ] ... (more outputs) Each output describes a parameter that we want to export from the stack to be used in other stacks.\nFor example, we export the name of the ECS Cluster under the name \u0026lt;NETWORK_STACK_NAME\u0026gt;:ClusterName. In other stacks, like our service stack, we now only need to know the name of the network stack to access all of its output parameters.\nLet\u0026rsquo;s have a look at the service stack now to see how we deploy our application.\nThe Service Stack The service stack is defined in service.yml. We call it \u0026ldquo;service stack\u0026rdquo; because it describes an ECS task and an ECS service that spins up Docker containers and does some magic to make them available via the internet.\nUnlike the network stack, the service stack starts with a Parameters section:\nAWSTemplateFormatVersion: \u0026#39;2010-09-09\u0026#39; Description: Deploys a Docker container within a previously created VPC.  Requires a running network stack. Parameters: NetworkStackName: Type: String Description: The name of the networking stack that these resources are put into. ServiceName: Type: String Description: A human-readable name for the service. ImageUrl: Type: String Description: The url of a docker image that will handle incoming traffic. ContainerPort: Type: Number Default: 80 Description: The port number the application inside the docker container is binding to. ContainerCpu: Type: Number Default: 256 Description: How much CPU to give the container. 1024 is 1 CPU. 
ContainerMemory: Type: Number Default: 512 Description: How much memory in megabytes to give the container. DesiredCount: Type: Number Default: 1 Description: How many copies of the service task to run. ... Within the Parameters section, we can define input parameters to a stack. We\u0026rsquo;re passing the name of an existing network stack, for example, so that we can refer to its output parameters. Also, we pass in a URL pointing to the Docker image we want to deploy and some other information that we might want to change from one deployment to another.\nThe service stack deploys merely three resources:\n LogGroup: A container for the logs of our application. TaskDefinition: The definition for an ECS task. The task will pull one or more Docker images from URLs and run them. Service: An ECS service that provides some logic around a task definition, like how many instances should run in parallel and whether they should be assigned public IP addresses.  In several instances, you\u0026rsquo;ll see references to the network stack\u0026rsquo;s outputs like this one:\nFn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;NetworkStackName\u0026#39;, \u0026#39;ClusterName\u0026#39;]] Fn::ImportValue imports an output value exported by another stack. Since we have included the network stack name in the names of its outputs, we need to join the network stack name with the output parameter name to get the right value.\nSo, we\u0026rsquo;ve looked at over 200 lines of YAML configuration describing the infrastructure we want to deploy. In the book, we\u0026rsquo;ll also have a look at AWS CDK (Cloud Development Kit) to see how to do this in Java instead of YAML, making it more reusable and easier to handle in general.\nInspecting the Deployment Scripts Let\u0026rsquo;s deploy our app to the cloud! We\u0026rsquo;ll need the scripts create.sh and delete.sh from the cloudformation folder in the GitHub repo.\nGo ahead and run the create.sh script now, if you want. 
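If you prefer Python over shell, the same create-and-wait flow that create.sh implements can be sketched with the AWS SDK (boto3). This is a sketch rather than part of the official scripts: it assumes boto3 is installed and AWS credentials are configured, and the stack and template names are the ones used in this article.

```python
def cfn_parameters(params):
    # Translate a plain dict into CloudFormation's ParameterKey/ParameterValue list.
    return [{"ParameterKey": k, "ParameterValue": v} for k, v in params.items()]

def create_stack_and_wait(stack_name, template_file, params=None):
    # Mirrors "aws cloudformation create-stack" followed by
    # "aws cloudformation wait stack-create-complete".
    import boto3  # requires boto3 and configured AWS credentials
    cfn = boto3.client("cloudformation")
    with open(template_file) as f:
        cfn.create_stack(
            StackName=stack_name,
            TemplateBody=f.read(),
            Capabilities=["CAPABILITY_IAM"],
            Parameters=cfn_parameters(params or {}),
        )
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

# Usage (not run here): first the network stack, then the service stack.
# create_stack_and_wait("stratospheric-basic-network", "network.yml")
# create_stack_and_wait("stratospheric-basic-service", "service.yml",
#                       {"NetworkStackName": "stratospheric-basic-network"})
```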
While you\u0026rsquo;re waiting for the script to finish (it can take a couple of minutes), we\u0026rsquo;ll have a look at the script itself.\nThe script starts with calling aws cloudformation create-stack to create the network stack:\naws cloudformation create-stack \\ --stack-name stratospheric-basic-network \\ --template-body file://network.yml \\ --capabilities CAPABILITY_IAM aws cloudformation wait stack-create-complete \\ --stack-name stratospheric-basic-network We\u0026rsquo;re passing the name for the stack, the path to our network.yml stack template and the capability CAPABILITY_IAM to allow the stack to make changes to IAM (Identity and Access Management) roles.\nSince the create-stack command executes asynchronously, we call aws cloudformation wait stack-create-complete afterwards to wait until the stack is up and running.\nNext, we\u0026rsquo;re doing the same for the service stack:\naws cloudformation create-stack \\ --stack-name stratospheric-basic-service \\ --template-body file://service.yml \\ --parameters \\ ParameterKey=NetworkStackName,ParameterValue=stratospheric-basic-network \\ ParameterKey=ServiceName,ParameterValue=todo-app-v1 \\ ParameterKey=ImageUrl,ParameterValue=docker.io/stratospheric/todo-app-v1:latest \\ ParameterKey=ContainerPort,ParameterValue=8080 aws cloudformation wait stack-create-complete \\ --stack-name stratospheric-basic-service With --parameters, we\u0026rsquo;re passing in all the parameters that we want different from the defaults. 
Specifically, we\u0026rsquo;re passing docker.io/stratospheric/todo-app-v1:latest into the ImageUrl parameter to tell AWS to download our Docker image and run it.\nAfter both stacks are up and running, we\u0026rsquo;re using some AWS command-line magic to extract the public IP address of the running application:\nCLUSTER_NAME=$( aws cloudformation describe-stacks \\ --stack-name stratospheric-basic-network \\ --output text \\ --query \u0026#39;Stacks[0].Outputs[?OutputKey==`ClusterName`].OutputValue | [0]\u0026#39; ) echo \u0026#34;ECS Cluster: \u0026#34; $CLUSTER_NAME TASK_ARN=$( aws ecs list-tasks \\ --cluster $CLUSTER_NAME \\ --output text --query \u0026#39;taskArns[0]\u0026#39; ) echo \u0026#34;ECS Task: \u0026#34; $TASK_ARN ENI_ID=$( aws ecs describe-tasks \\ --cluster $CLUSTER_NAME \\ --tasks $TASK_ARN \\ --output text \\ --query \u0026#39;tasks[0].attachments[0].details[?name==`networkInterfaceId`].value\u0026#39; ) echo \u0026#34;Network Interface: \u0026#34; $ENI_ID PUBLIC_IP=$( aws ec2 describe-network-interfaces \\ --network-interface-ids $ENI_ID \\ --output text \\ --query \u0026#39;NetworkInterfaces[0].Association.PublicIp\u0026#39; ) echo \u0026#34;Public IP: \u0026#34; $PUBLIC_IP echo \u0026#34;You can access your service at http://$PUBLIC_IP:8080\u0026#34; We\u0026rsquo;re using different AWS commands to get to the information we want. First, we output the network stack and extract the name of the ECS cluster. With the cluster name, we get the ARN (Amazon Resource Name) of the ECS task. With the task ARN, we get the ID of the network interface of that task. And with the network interface ID we finally get the public IP address of the application so we know where to go.\nAll commands use the AWS CLI to output the results as text and we extract certain information from that text with the --query parameter.\nThe output of the script should look something like that:\nStackId: arn:aws:cloudformation:.../stratospheric-basic-network/... 
StackId: arn:aws:cloudformation:.../stratospheric-basic-service/... ECS Cluster: stratospheric-basic-network-ECSCluster-qqX6Swdw54PP ECS Task: arn:aws:ecs:.../stratospheric-basic-network-... Network Interface: eni-02c096ce1faa5ecb9 Public IP: 13.55.30.162 You can access your service at http://13.55.30.162:8080 Go ahead and copy the URL at the end into your browser and you should see the text \u0026ldquo;Welcome to the Todo application\u0026rdquo; on your screen.\nHooray! We\u0026rsquo;ve just deployed an app and all the infrastructure it needs to the cloud with a single CLI command! We\u0026rsquo;re going to leverage that later to create a fully automated continuous deployment pipeline.\nBut first, let\u0026rsquo;s inspect the infrastructure and application we\u0026rsquo;ve deployed.\nInspecting the AWS Console The AWS console is the cockpit for all things AWS. We can view the status of all the resources we\u0026rsquo;re using, interact with them, and provision new resources.\nWe could have done everything we\u0026rsquo;ve encoded into the CloudFormation templates above by hand using the AWS console. But setting up infrastructure manually is error prone and not repeatable, so we\u0026rsquo;re not going to look at how to do that.\nHowever, the AWS console is a good place to view the resources we\u0026rsquo;ve deployed, to check their status, and to kick off debugging if we need it.\nGo ahead and log in to the AWS console and let\u0026rsquo;s take a quick tour!\nAfter logging in, type \u0026ldquo;CloudFormation\u0026rdquo; into the \u0026ldquo;Find Services\u0026rdquo; box and select the CloudFormation service.\nYou should see a list of your CloudFormation stacks with a status for each. The list should contain at least the stacks stratospheric-basic-service and stratospheric-basic-network in status CREATE_COMPLETE. Click on the network stack.\nIn the detail view of a stack, we get a host of information about the stack. 
Click on the \u0026ldquo;Events\u0026rdquo; tab first.\nHere, we see a list of events for this stack. Each event is a status change of one of the stack\u0026rsquo;s resources. We can see the history of events: in the beginning, a bunch of resources were in status CREATE_IN_PROGRESS and transitioned into status CREATE_COMPLETE a couple of seconds later. Then, once the resources they depended on were ready, other resources started their life in the same way. And so on. CloudFormation takes care of the dependencies between resources and creates and deletes them in the correct sequence.\nThe \u0026ldquo;Events\u0026rdquo; tab is the place to go when the creation of a stack fails for some reason. It will show which resource failed and will (usually) show an error message that helps us to debug the problem.\nLet\u0026rsquo;s move on to the \u0026ldquo;Resources\u0026rdquo; tab. It shows us a list of the network stack\u0026rsquo;s resources. The list shows all the resources we\u0026rsquo;ve included in the network.yml CloudFormation template:\nFor some resources, we get a link to the resource in the \u0026ldquo;Physical ID\u0026rdquo; column. Let\u0026rsquo;s click on the ID of the ECSCluster resource to take a look at our application.\nThe link brings us to the console of the ECS service. We can also get here by opening the \u0026ldquo;Services\u0026rdquo; dropdown at the top of the page and typing \u0026ldquo;ECS\u0026rdquo; into the search box.\nThe detail view of our ECS cluster shows that we have 1 service and 1 task running in this cluster. If we click on the \u0026ldquo;Tasks\u0026rdquo; tab, we see a list of running tasks, which should contain one entry only. Let\u0026rsquo;s click on the link in the \u0026ldquo;Task\u0026rdquo; column to get a detail view of the task.\nThe detail view shows a lot of information we\u0026rsquo;re not interested in, but it also shows the Public IP address of the task. 
This is the IP address that we extracted via AWS CLI commands earlier. You can copy it into your browser, append the port 8080, and you should see the hello message again.\nBelow the general information is a section called \u0026ldquo;Containers\u0026rdquo;, which shows the container we\u0026rsquo;ve deployed with this task. Click on the little arrow on the left to expand it. In the \u0026ldquo;Log Configuration\u0026rdquo; section, click on the link \u0026ldquo;View logs in CloudWatch\u0026rdquo;.\nCloudWatch is Amazon\u0026rsquo;s service for monitoring applications. In our service stack, we added a \u0026ldquo;LogGroup\u0026rdquo; resource and used the name of that log group in the logging configuration of the container definition. This is the reason why we can now see the logs of that app in CloudWatch.\nAfter the \u0026ldquo;Events\u0026rdquo; tab in the CloudFormation UI, the logs are the second place to look at when (not if) something goes wrong.\nThis concludes our first experiment with AWS. Feel free to explore the AWS console a bit more to get a feel for how everything works. In the book, we\u0026rsquo;ll go into more detail of different AWS services.\nWhen you\u0026rsquo;re done, don\u0026rsquo;t forget to run delete.sh to delete the stacks again, otherwise they will incur costs at some point. You can also delete the stacks via the CloudFormation UI.\nCheck Out the Book!  
This article gives only a first impression of what you can do with CloudFormation.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"January 3, 2021","image":"https://reflectoring.io/images/stock/0061-cloud-1200x628-branded_hu34d6aa247e0bb2675461b5a0146d87a8_82985_650x0_resize_q90_box.jpg","permalink":"/getting-started-with-aws-cloudformation/","title":"Getting Started with AWS CloudFormation"},{"categories":["Meta"],"contents":"It\u0026rsquo;s the time of the year again to look back at what I have achieved in the last year and to look forward to what I want to achieve in the upcoming year.\nThis account is more for myself than anyone else, but if you\u0026rsquo;re interested in some numbers around the blog and growing passive income, there may be some interesting nuggets for you in this post.\nLet\u0026rsquo;s start\u0026hellip;\nThe Blog - reflectoring.io The blog you\u0026rsquo;re just now reading is the center of my online activities. I started to write publicly in 2018, and every year I took it a bit more seriously.\nTurns out that being serious about something pays out.\nIn 2019, I had about 600,000 unique users on my website (as Google Analytics counts them). In 2020, I almost doubled it to 1.1 million unique users:\nThat traffic generated almost $200 a month at peak with the subtle ad in the sidebar. And it also brings potential readers for my book(s).\nBut, I didn\u0026rsquo;t achieve that traffic growth all by myself. I had a little help from Google whose algorithm change caused a considerable bump upwards in May. But Google decided to withdraw their help again in December with an equally sized bump downwards.\nBut more importantly, I\u0026rsquo;ve had help from a bunch of great authors who published articles on reflectoring. 
In 2020, the step to take my blog more seriously was to open up the blog for contributing authors, with me helping them to polish their articles to get them over the finish line. The serious part in this is that I pay the authors to write on the blog, so I\u0026rsquo;m betting real money that this will pay off in the long run.\nWith the help of my authors, I doubled the number of blog posts from about 30 in 2019 to 60 in 2020! Of these 60, I only wrote 21 myself. The other 39 were written by contributing authors. I paid $3,700 in total to these authors. 2020 was the first year that I invested serious money into this blog.\nAt this point, I want to thank all the authors who have published articles on reflectoring. Thank you for working with me! I\u0026rsquo;m looking forward to continuing to work with you in 2021! I\u0026rsquo;m convinced that writing (or editing, in my case) is the best way to learn and I really hope that you gain valuable experience from your writing gigs on reflectoring! I certainly do.\nIf you\u0026rsquo;re interested in writing on reflectoring, check out the \u0026ldquo;Write With Me\u0026rdquo; page.\nAnother thing I\u0026rsquo;ve been more serious about is my mailing list. I\u0026rsquo;ve been consistently sending a weekly update with a link to the newest article to my mailing list. The list has grown from about 1,300 to 2,300 subscribers during 2020.\nI have a feeling that the subscription rate to the mailing list could be a lot higher considering the traffic that my blog is generating. That\u0026rsquo;s probably because the hurdle of paying $5 for the welcome gift (my book) is quite high. 
But I\u0026rsquo;m rewarded with a very engaged audience with email open rates of about 40% (which is double the industry average), so I\u0026rsquo;m completely happy with that.\nThanks to all my subscribers - you\u0026rsquo;re awesome!\nBook #1 - Get Your Hands Dirty on Clean Architecture I wouldn\u0026rsquo;t have invested a couple of thousand dollars into paying authors if I didn\u0026rsquo;t earn that money somehow. My book \u0026ldquo;Get Your Hands Dirty on Clean Architecture\u0026rdquo; is what made this possible.\nI finished the book in late 2019, and it\u0026rsquo;s been generating some steady income since then, which I decided to re-invest into growing the blog by paying authors to help me write more articles.\nIn 2020, I made about $13,000 with this book, which is way more than I would ever have thought possible without having a giant audience beforehand.\nBut the book was well received and has gotten some great reviews on Goodreads and Amazon. It has 2,500 readers on Leanpub and probably around 1,000 more with the Packt version.\nI put a lot of effort into it, so I\u0026rsquo;m very happy about how this turned out.\nThe SaaS - blogtrack.io One of my (informal) goals for 2020 was to create a SaaS application for bloggers. I\u0026rsquo;ve been frustrated with Google Analytics for a while and wanted to build an analytics solution for bloggers that gives me concrete and actionable insights into what I can improve (without having to read multiple tutorials and do a course on how to use Google Analytics).\nI decided to start building a blog analytics app with myself as the first user, but building it in a way that I can sell it to other bloggers later. 
This became blogtrack.io, but it\u0026rsquo;s not quite ready for the public yet.\nI wanted to learn new things while building it, so I built it in a way that I would build a real application, with a modular architecture and a robust deployment on AWS.\nSo, I dove into AWS in depth to learn how to best deploy the app, combining the knowledge I already had from my work at Atlassian with new knowledge gained on the way.\nAt some point, I started a conversation with Philip Riecks and Björn Wilmsmann about blogging in general and we decided to write a book about AWS - \u0026ldquo;Stratospheric\u0026rdquo;.\nBlogtrack is not generating any income, yet, and the AWS bill is around $100 a month, but a new book doesn\u0026rsquo;t write itself, so I had to stop my work on blogtrack for now. But I\u0026rsquo;m happy with the result so far nevertheless. I\u0026rsquo;m using it for myself and my authors are using it to see the stats of their articles. And I plan to continue work on it in earnest in 2021.\nBook #2 - Stratospheric As mentioned above, my current focus is writing a book called \u0026ldquo;Stratospheric\u0026rdquo; about deploying Spring Boot apps on AWS. It\u0026rsquo;s for Spring developers who are not familiar with AWS, yet. We\u0026rsquo;re about 30% done and I\u0026rsquo;m guessing we\u0026rsquo;ll have a final version ready sometime around April 2021 or so.\nThe book is already available on Leanpub and we\u0026rsquo;re adding new chapters as we finish them. We already have 200 paying readers, which is very encouraging. I\u0026rsquo;m happy with how the book is turning out and looking forward to getting it over the finish line in early 2021, so I can focus on blogtrack once more.\nReview of My Goals for 2020 2020 has certainly been disrupting. I\u0026rsquo;ve been spared a tragedy caused by the pandemic, but I\u0026rsquo;m aware that many others were not so lucky.\nThe main impact for me was that working from home became the new normal for knowledge workers. 
This has actually made it possible for me to invest more time into some of my goals for 2020.\nLet\u0026rsquo;s see which of my goals for 2020 I could scratch off my list:\n  I wanted to read (at least) 15 nonfiction books from cover to cover: I\u0026rsquo;ve read something north of 20 books last year. I\u0026rsquo;m reading 20-30 minutes every lunch break and I want to keep it this way. Books are such a great source of inspiration!  I wanted to prepare a fun talk connecting psychology and habits with software development: No new talk last year. With everything going on in 2020 (in the world and personally), I couldn\u0026rsquo;t muster the creativity to come up with something.  I wanted to start (and perhaps even finish?), another writing project: I\u0026rsquo;ve started writing \u0026ldquo;Stratospheric\u0026rdquo; and it\u0026rsquo;s even already available on Leanpub as an early bird preview version. Working on it with two co-authors definitely helps to deliver on promises!  I wanted to speak at 3 or more conferences or meetups: Not much conferencing this year. I had a talk accepted to the Spring I/O conference in Barcelona but the pandemic had other plans. I also had submitted some talks to other conferences, but my creativity was elsewhere last year, so they weren\u0026rsquo;t accepted. I spoke at a Java User Group meetup about Clean Architecture, though.  I wanted to double the visitors to my blog by working together with other authors and editing their work: Only with the help of my authors could I increase the visitors to my blog. Thanks again!  I want to build a habit of working out to get my neglected body into shape: My body is still not what I would call \u0026ldquo;in shape\u0026rdquo;, but I have built a steady habit of walking and/or running every day. I\u0026rsquo;m consciously shaping those habits so I have a good feeling about the next year.  
I definitely wanted to take surfing lessons while I’m in Sydney: I didn\u0026rsquo;t take the time for surfing lessons. I need to get out more!  Goals for 2021 Here\u0026rsquo;s what I have planned for 2021:\n I want to get my back in order. Despite my new fitness habits, my back and ribs are acting up, probably because I\u0026rsquo;m sitting more (and wrong) in my home office. I\u0026rsquo;ll have to get professional help. I want to go up another level of seriousness and invest a full day each week in my online activities of blogging, writing, and building a SaaS. First step is to find a source for that day (I can\u0026rsquo;t just take it from my day job since my visa currently doesn\u0026rsquo;t allow me to work part-time). With that investment of time, I want to reach a passive income of $3,000 a month in 2021 (with the higher-order goal of generating more options about how I can spend my time) I want to read even more books than in 2020! 20 books at least! More if possible! I already have a stack of books lined up!  Conclusion For my online activities, 2020 was a good year. I enjoy writing, blogging, and building a SaaS, but doing all that means that I can\u0026rsquo;t do other things I enjoy any more (like preparing and giving talks). For now, I will just accept that and come back to those other things once my passive income gives me more control over my time.\n","date":"January 2, 2021","image":"https://reflectoring.io/images/stock/0092-2020-1200x628-branded_hue840d58bf4cec83df5c6d1d6ebf622e8_178401_650x0_resize_q90_box.jpg","permalink":"/blog-review-2020/","title":"Reflectoring Review 2020 - Being Serious About Passive Income"},{"categories":["Spring Boot"],"contents":"Handling exceptions is an important part of building a robust application. 
Spring Boot offers more than one way of doing it.\nThis article will explore these ways and will also provide some pointers on when a given way might be preferable to another.\n Example Code This article is accompanied by a working code example on GitHub. Introduction Spring Boot provides us with tools to handle exceptions beyond simple \u0026lsquo;try-catch\u0026rsquo; blocks. To use these tools, we apply a couple of annotations that allow us to treat exception handling as a cross-cutting concern:\n @ResponseStatus @ExceptionHandler @ControllerAdvice  Before jumping into these annotations, we will first look at how Spring handles exceptions thrown by our web controllers - our last line of defense for catching an exception.\nWe will also look at some configurations provided by Spring Boot to modify the default behavior.\nWe\u0026rsquo;ll identify the challenges we face while doing that, and then we will try to overcome those using these annotations.\nSpring Boot\u0026rsquo;s Default Exception Handling Mechanism Let\u0026rsquo;s say we have a controller named ProductController whose getProduct(...) method is throwing a NoSuchElementFoundException runtime exception when a Product with a given id is not found:\n@RestController @RequestMapping(\u0026#34;/product\u0026#34;) public class ProductController { private final ProductService productService; //constructor omitted for brevity...  
@GetMapping(\u0026#34;/{id}\u0026#34;) public Response getProduct(@PathVariable String id){ // this method throws a \u0026#34;NoSuchElementFoundException\u0026#34; exception  return productService.getProduct(id); } } If we call the /product API with an invalid id the service will throw a NoSuchElementFoundException runtime exception and we\u0026rsquo;ll get the following response:\n{ \u0026#34;timestamp\u0026#34;: \u0026#34;2020-11-28T13:24:02.239+00:00\u0026#34;, \u0026#34;status\u0026#34;: 500, \u0026#34;error\u0026#34;: \u0026#34;Internal Server Error\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;path\u0026#34;: \u0026#34;/product/1\u0026#34; } We can see that besides a well-formed error response, the payload is not giving us any useful information. Even the message field is empty, which we might want to contain something like \u0026ldquo;Item with id 1 not found\u0026rdquo;.\nLet\u0026rsquo;s start by fixing the error message issue.\nSpring Boot provides some properties with which we can add the exception message, exception class, or even a stack trace as part of the response payload:\nserver: error: include-message: always include-binding-errors: always include-stacktrace: on_trace_param include-exception: false Using these Spring Boot server properties in our application.yml we can alter the error response to some extent.\nNow if we call the /product API again with an invalid id we\u0026rsquo;ll get the following response:\n{ \u0026#34;timestamp\u0026#34;: \u0026#34;2020-11-29T09:42:12.287+00:00\u0026#34;, \u0026#34;status\u0026#34;: 500, \u0026#34;error\u0026#34;: \u0026#34;Internal Server Error\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;Item with id 1 not found\u0026#34;, \u0026#34;path\u0026#34;: \u0026#34;/product/1\u0026#34; } Note that we\u0026rsquo;ve set the property include-stacktrace to on_trace_param which means that only if we include the trace param in the URL (?trace=true), we\u0026rsquo;ll get a stack trace in 
the response payload:\n{ \u0026#34;timestamp\u0026#34;: \u0026#34;2020-11-29T09:42:12.287+00:00\u0026#34;, \u0026#34;status\u0026#34;: 500, \u0026#34;error\u0026#34;: \u0026#34;Internal Server Error\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;Item with id 1 not found\u0026#34;, \u0026#34;trace\u0026#34;: \u0026#34;io.reflectoring.exception.exception.NoSuchElementFoundException: Item with id 1 not found...\u0026#34;, \u0026#34;path\u0026#34;: \u0026#34;/product/1\u0026#34; } We might want to keep the value of the include-stacktrace flag set to never, at least in production, as it might reveal the internal workings of our application.\nMoving on! The status code and error message - 500 - indicate that something is wrong with our server code, but actually it\u0026rsquo;s a client error because the client provided an invalid id.\nOur current status code doesn\u0026rsquo;t correctly reflect that. Unfortunately, this is as far as we can go with the server.error configuration properties, so we\u0026rsquo;ll have to look at the annotations that Spring Boot offers.\n@ResponseStatus As the name suggests, @ResponseStatus allows us to modify the HTTP status of our response. It can be applied in the following places:\n On the exception class itself Along with the @ExceptionHandler annotation on methods Along with the @ControllerAdvice annotation on classes  In this section, we\u0026rsquo;ll be looking at the first case only.\nLet\u0026rsquo;s come back to the problem at hand, which is that our error responses are always giving us the HTTP status 500 instead of a more descriptive status code.\nTo address this, we can annotate our Exception class with @ResponseStatus and pass in the desired HTTP response status in its value property:\n@ResponseStatus(value = HttpStatus.NOT_FOUND) public class NoSuchElementFoundException extends RuntimeException { ... 
} This change will result in a much better response if we call our controller with an invalid ID:\n{ \u0026#34;timestamp\u0026#34;: \u0026#34;2020-11-29T09:42:12.287+00:00\u0026#34;, \u0026#34;status\u0026#34;: 404, \u0026#34;error\u0026#34;: \u0026#34;Not Found\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;Item with id 1 not found\u0026#34;, \u0026#34;path\u0026#34;: \u0026#34;/product/1\u0026#34; } Another way to achieve the same result is to extend the ResponseStatusException class:\npublic class NoSuchElementFoundException extends ResponseStatusException { public NoSuchElementFoundException(String message){ super(HttpStatus.NOT_FOUND, message); } @Override public HttpHeaders getResponseHeaders() { // return response headers  } } This approach comes in handy when we want to manipulate the response headers, too, because we can override the getResponseHeaders() method.\n@ResponseStatus, in combination with the server.error configuration properties, allows us to manipulate almost all the fields in our Spring-defined error response payload.\nBut what if we want to manipulate the structure of the response payload as well?\nLet\u0026rsquo;s see how we can achieve that in the next section.\n@ExceptionHandler The @ExceptionHandler annotation gives us a lot of flexibility in terms of handling exceptions. For starters, to use it, we simply need to create a method either in the controller itself or in a @ControllerAdvice class and annotate it with @ExceptionHandler:\n@RestController @RequestMapping(\u0026#34;/product\u0026#34;) public class ProductController { private final ProductService productService; //constructor omitted for brevity...  
@GetMapping(\u0026#34;/{id}\u0026#34;) public Response getProduct(@PathVariable String id) { return productService.getProduct(id); } @ExceptionHandler(NoSuchElementFoundException.class) @ResponseStatus(HttpStatus.NOT_FOUND) public ResponseEntity\u0026lt;String\u0026gt; handleNoSuchElementFoundException( NoSuchElementFoundException exception ) { return ResponseEntity .status(HttpStatus.NOT_FOUND) .body(exception.getMessage()); } } The exception handler method takes as an argument the exception (or list of exceptions) that we want to handle in the defined method. We annotate the method with @ExceptionHandler and @ResponseStatus to define the exception we want to handle and the status code we want to return.\nIf we don\u0026rsquo;t wish to use these annotations, then simply defining the exception as a parameter of the method will also do:\n@ExceptionHandler public ResponseEntity\u0026lt;String\u0026gt; handleNoSuchElementFoundException( NoSuchElementFoundException exception) Still, it\u0026rsquo;s a good idea to mention the exception class in the annotation even though we have already mentioned it in the method signature, as it gives better readability.\nAlso, the annotation @ResponseStatus(HttpStatus.NOT_FOUND) on the handler method is not required as the HTTP status passed into the ResponseEntity will take precedence, but we have kept it anyway for the same readability reasons.\nApart from the exception parameter, we can also have HttpServletRequest, WebRequest, or HttpSession types as parameters.\nSimilarly, the handler methods support a variety of return types such as ResponseEntity, String, or even void.\nFind more input and return types in the @ExceptionHandler Java documentation.\nWith many different options available to us in the form of both input parameters and return types in our exception handling function, we are in complete control of the error response.\nNow, let\u0026rsquo;s finalize an error response payload for our APIs. 
In case of any error, clients usually expect two things:\n An error code that tells the client what kind of error it is. Clients can use error codes to drive some business logic. Usually, error codes are standard HTTP status codes, but I have also seen APIs returning custom error codes like E001. An additional human-readable message which gives more information on the error and even some hints on how to fix it, or a link to the API docs.  We will also add an optional stackTrace field which will help us with debugging in the development environment.\nLastly, we also want to handle validation errors in the response. You can find out more about bean validations in this article on Handling Validations with Spring Boot.\nKeeping these points in mind we will go with the following payload for the error response:\n@Getter @Setter @RequiredArgsConstructor @JsonInclude(JsonInclude.Include.NON_NULL) public class ErrorResponse { private final int status; private final String message; private String stackTrace; private List\u0026lt;ValidationError\u0026gt; errors; @Getter @Setter @RequiredArgsConstructor private static class ValidationError { private final String field; private final String message; } public void addValidationError(String field, String message){ if(Objects.isNull(errors)){ errors = new ArrayList\u0026lt;\u0026gt;(); } errors.add(new ValidationError(field, message)); } } Now, let\u0026rsquo;s apply all these to our NoSuchElementFoundException handler method.\n@RestController @RequestMapping(\u0026#34;/product\u0026#34;) @AllArgsConstructor public class ProductController { public static final String TRACE = \u0026#34;trace\u0026#34;; @Value(\u0026#34;${reflectoring.trace:false}\u0026#34;) private boolean printStackTrace; private final ProductService productService; @GetMapping(\u0026#34;/{id}\u0026#34;) public Product getProduct(@PathVariable String id){ return productService.getProduct(id); } @PostMapping public Product 
addProduct(@RequestBody @Valid ProductInput input){ return productService.addProduct(input); } @ExceptionHandler(NoSuchElementFoundException.class) @ResponseStatus(HttpStatus.NOT_FOUND) public ResponseEntity\u0026lt;ErrorResponse\u0026gt; handleItemNotFoundException( NoSuchElementFoundException exception, WebRequest request ){ log.error(\u0026#34;Failed to find the requested element\u0026#34;, exception); return buildErrorResponse(exception, HttpStatus.NOT_FOUND, request); } @ExceptionHandler(MethodArgumentNotValidException.class) @ResponseStatus(HttpStatus.UNPROCESSABLE_ENTITY) public ResponseEntity\u0026lt;ErrorResponse\u0026gt; handleMethodArgumentNotValid( MethodArgumentNotValidException ex, WebRequest request ) { ErrorResponse errorResponse = new ErrorResponse( HttpStatus.UNPROCESSABLE_ENTITY.value(), \u0026#34;Validation error. Check \u0026#39;errors\u0026#39; field for details.\u0026#34; ); for (FieldError fieldError : ex.getBindingResult().getFieldErrors()) { errorResponse.addValidationError(fieldError.getField(), fieldError.getDefaultMessage()); } return ResponseEntity.unprocessableEntity().body(errorResponse); } @ExceptionHandler(Exception.class) @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR) public ResponseEntity\u0026lt;ErrorResponse\u0026gt; handleAllUncaughtException( Exception exception, WebRequest request){ log.error(\u0026#34;Unknown error occurred\u0026#34;, exception); return buildErrorResponse( exception, \u0026#34;Unknown error occurred\u0026#34;, HttpStatus.INTERNAL_SERVER_ERROR, request ); } private ResponseEntity\u0026lt;ErrorResponse\u0026gt; buildErrorResponse( Exception exception, HttpStatus httpStatus, WebRequest request ) { return buildErrorResponse( exception, exception.getMessage(), httpStatus, request); } private ResponseEntity\u0026lt;ErrorResponse\u0026gt; buildErrorResponse( Exception exception, String message, HttpStatus httpStatus, WebRequest request ) { ErrorResponse errorResponse = new ErrorResponse( httpStatus.value(), 
message ); if(printStackTrace \u0026amp;\u0026amp; isTraceOn(request)){ errorResponse.setStackTrace(ExceptionUtils.getStackTrace(exception)); } return ResponseEntity.status(httpStatus).body(errorResponse); } private boolean isTraceOn(WebRequest request) { String [] value = request.getParameterValues(TRACE); return Objects.nonNull(value) \u0026amp;\u0026amp; value.length \u0026gt; 0 \u0026amp;\u0026amp; value[0].contentEquals(\u0026#34;true\u0026#34;); } } A couple of things to note here:\nProviding a Stack Trace Providing a stack trace in the error response can save our developers and QA engineers the trouble of crawling through the log files.\nAs we saw in Spring Boot\u0026rsquo;s Default Exception Handling Mechanism, Spring already provides us with this functionality. But now, as we are handling error responses ourselves, this also needs to be handled by us.\nTo achieve this, we have first introduced a server-side configuration property named reflectoring.trace which, if set to true, will enable the stackTrace field in the response. To actually get a stackTrace in an API response, our clients must additionally pass the trace parameter with the value true:\ncurl --location --request GET \u0026#39;http://localhost:8080/product/1?trace=true\u0026#39; Now, as the behavior of stackTrace is controlled by our feature flag in our properties file, we can remove it or set it to false when we deploy in production environments.\nCatch-All Exception Handler Gotta catch em all:\ntry{ performSomeOperation(); } catch(OperationSpecificException ex){ //... } catch(Exception catchAllException){ //... } As a cautionary measure, we often surround our top-level method\u0026rsquo;s body with a catch-all try-catch exception handler block, to avoid any unwanted side effects or behavior. 
The handleAllUncaughtException() method in our controller behaves similarly. It will catch all the exceptions for which we don\u0026rsquo;t have a specific handler.\nOne thing I would like to note here is that even if we don\u0026rsquo;t have this catch-all exception handler, Spring will handle it anyway. But we want the response to be in our format rather than Spring\u0026rsquo;s, so we have to handle the exception ourselves.\nA catch-all handler method is also a good place to log exceptions as they might give insight into a possible bug. We can skip logging on field validation exceptions such as MethodArgumentNotValidException as they are raised because of syntactically invalid input, but we should always log unknown exceptions in the catch-all handler.\nOrder of Exception Handlers The order in which you mention the handler methods doesn\u0026rsquo;t matter. Spring will first look for the most specific exception handler method.\nIf it fails to find one, it will look for a handler of the parent exception, which in our case is RuntimeException, and if none is found, the handleAllUncaughtException() method will finally handle the exception.\nThis should help us handle the exceptions in this particular controller, but what if these same exceptions are being thrown by other controllers too? How do we handle those? Do we create the same handlers in all controllers or create a base class with common handlers and extend it in all controllers?\nLuckily, we don\u0026rsquo;t have to do any of that. Spring provides a very elegant solution to this problem in the form of \u0026ldquo;controller advice\u0026rdquo;.\nLet\u0026rsquo;s study them.\n@ControllerAdvice Why is it called \"Controller Advice\"?  The term 'Advice' comes from Aspect-Oriented Programming (AOP) which allows us to inject cross-cutting code (called \"advice\") around existing methods. 
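Before moving on to controller advice, the handler-resolution order described above can be illustrated with a small, self-contained model. This is not Spring's actual resolution code; it is a sketch that mimics the rule "most specific handler wins, then walk up the exception hierarchy until the catch-all matches":

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative model only -- not Spring's implementation. Spring picks the
// handler registered for the most specific exception type; this sketch mimics
// that by walking up the thrown exception's class hierarchy until a
// registered handler matches.
public class HandlerResolutionSketch {

    static String resolve(Map<Class<?>, String> handlers, Class<?> thrownType) {
        for (Class<?> c = thrownType; c != null; c = c.getSuperclass()) {
            if (handlers.containsKey(c)) {
                return handlers.get(c);
            }
        }
        return "unhandled";
    }

    public static void main(String[] args) {
        Map<Class<?>, String> handlers = new LinkedHashMap<>();
        handlers.put(IllegalStateException.class, "handleIllegalState");
        handlers.put(RuntimeException.class, "handleRuntime");
        handlers.put(Exception.class, "handleAllUncaughtException");

        // Exact match wins:
        System.out.println(resolve(handlers, IllegalStateException.class));
        // No exact handler: falls back to the closest ancestor, RuntimeException:
        System.out.println(resolve(handlers, NumberFormatException.class));
        // No closer handler at all: lands in the catch-all:
        System.out.println(resolve(handlers, java.io.IOException.class));
    }
}
```

Note that registration order plays no role in this lookup, which matches the article's point that the order of handler methods in the class doesn't matter.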
A controller advice allows us to intercept and modify the return values of controller methods, in our case to handle exceptions.\n Controller advice classes allow us to apply exception handlers to more than one or all controllers in our application:\n@ControllerAdvice public class GlobalExceptionHandler extends ResponseEntityExceptionHandler { public static final String TRACE = \u0026#34;trace\u0026#34;; @Value(\u0026#34;${reflectoring.trace:false}\u0026#34;) private boolean printStackTrace; @Override @ResponseStatus(HttpStatus.UNPROCESSABLE_ENTITY) protected ResponseEntity\u0026lt;Object\u0026gt; handleMethodArgumentNotValid( MethodArgumentNotValidException ex, HttpHeaders headers, HttpStatus status, WebRequest request ) { //Body omitted as it\u0026#39;s similar to the method of same name  // in ProductController example...  //.....  } @ExceptionHandler(ItemNotFoundException.class) @ResponseStatus(HttpStatus.NOT_FOUND) public ResponseEntity\u0026lt;Object\u0026gt; handleItemNotFoundException( ItemNotFoundException itemNotFoundException, WebRequest request ){ //Body omitted as it\u0026#39;s similar to the method of same name  // in ProductController example...  //.....  } @ExceptionHandler(RuntimeException.class) @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR) public ResponseEntity\u0026lt;Object\u0026gt; handleAllUncaughtException( RuntimeException exception, WebRequest request ){ //Body omitted as it\u0026#39;s similar to the method of same name  // in ProductController example...  //.....  } //....  @Override public ResponseEntity\u0026lt;Object\u0026gt; handleExceptionInternal( Exception ex, Object body, HttpHeaders headers, HttpStatus status, WebRequest request) { return buildErrorResponse(ex,status,request); } } The bodies of the handler functions and the other support code are omitted as they\u0026rsquo;re almost identical to the code we saw in the @ExceptionHandler section. 
Please find the full code in the Github Repo\u0026rsquo;s GlobalExceptionHandler class.\nA couple of things are new which we will talk about in a while. One major difference here is that these handlers will handle exceptions thrown by all the controllers in the application and not just ProductController.\nIf we want to selectively apply or limit the scope of the controller advice to a particular controller, or a package, we can use the properties provided by the annotation:\n @ControllerAdvice(\u0026quot;com.reflectoring.controller\u0026quot;): we can pass a package name or list of package names in the annotation\u0026rsquo;s value or basePackages parameter. With this, the controller advice will only handle exceptions of this package\u0026rsquo;s controllers. @ControllerAdvice(annotations = Advised.class): only controllers marked with the @Advised annotation will be handled by the controller advice.  Find other parameters in the @ControllerAdvice annotation docs.\nResponseEntityExceptionHandler ResponseEntityExceptionHandler is a convenient base class for controller advice classes. It provides exception handlers for internal Spring exceptions. If we don\u0026rsquo;t extend it, then all the exceptions will be redirected to DefaultHandlerExceptionResolver which returns a ModelAndView object. Since we are on the mission to shape our own error response, we don\u0026rsquo;t want that.\nAs you can see we have overridden two of the ResponseEntityExceptionHandler methods:\n handleMethodArgumentNotValid(): in the @ExceptionHandler section we have implemented a handler for it ourselves. In here we have only overridden its behavior. handleExceptionInternal(): all the handlers in the ResponseEntityExceptionHandler use this function to build the ResponseEntity similar to our buildErrorResponse(). 
If we don\u0026rsquo;t override this then the clients will receive only the HTTP status in the response header but since we want to include the HTTP status in our response bodies as well, we have overridden the method.  Handling NoHandlerFoundException Requires a Few Extra Steps  This exception occurs when you try to call an API that doesn't exist in the system. Despite us implementing its handler via the ResponseEntityExceptionHandler class, the exception is redirected to DefaultHandlerExceptionResolver.  To redirect the exception to our advice we need to set a couple of properties in the properties file: spring.mvc.throw-exception-if-no-handler-found=true and spring.web.resources.add-mappings=false Credit: Stackoverflow user mengchengfeng.\n Some Points to Keep in Mind when Using @ControllerAdvice  To keep things simple, always have only one controller advice class in the project. It\u0026rsquo;s good to have a single repository of all the exceptions in the application. If you do create multiple controller advice classes, utilize the basePackages or annotations properties to make it clear which controllers each one is going to advise. Spring can process controller advice classes in any order unless we have annotated them with the @Order annotation. So, be mindful when you write a catch-all handler if you have more than one controller advice, especially when you have not specified basePackages or annotations in the annotation.  How Does Spring Process The Exceptions? 
Now that we have introduced the mechanisms available to us for handling exceptions in Spring, let\u0026rsquo;s understand in brief how Spring handles them and when one mechanism gets prioritized over the other.\nHave a look through the following flow chart that traces the process of exception handling by Spring if we have not built our own exception handler:\nConclusion When an exception crosses the boundary of the controller, it\u0026rsquo;s destined to reach the client, either in the form of a JSON response or an HTML web page.\nIn this article, we saw how Spring Boot translates those exceptions into a user-friendly output for our clients and also configurations and annotations that allow us to further mold them into the shape we desire.\nThank you for reading! You can find the working code at GitHub.\n","date":"December 31, 2020","image":"https://reflectoring.io/images/stock/0090-404-1200x628-branded_hu09a369bec6cd81282cda28392f89d387_72453_650x0_resize_q90_box.jpg","permalink":"/spring-boot-exception-handling/","title":"Complete Guide to Exception Handling in Spring Boot"},{"categories":["Java"],"contents":"In this series so far, we have learned about Resilience4j and its Retry, RateLimiter, TimeLimiter, and Bulkhead modules. In this article, we will explore the CircuitBreaker module. We will find out when and how to use it, and also look at a few examples.\n Example Code This article is accompanied by a working code example on GitHub. What is Resilience4j? Please refer to the description in the previous article for a quick intro into how Resilience4j works in general.\nWhat is a Circuit Breaker? The idea of circuit breakers is to prevent calls to a remote service if we know that the call is likely to fail or time out. We do this so that we don\u0026rsquo;t unnecessarily waste critical resources both in our service and in the remote service. Backing off like this also gives the remote service some time to recover.\nHow do we know that a call is likely to fail? 
By keeping track of the results of the previous requests made to the remote service. If, say, 8 out of the previous 10 calls resulted in a failure or a timeout, the next call will likely also fail.\nA circuit breaker keeps track of the responses by wrapping the call to the remote service. During normal operation, when the remote service is responding successfully, we say that the circuit breaker is in a \u0026ldquo;closed\u0026rdquo; state. When in the closed state, a circuit breaker passes the request through to the remote service normally.\nWhen a remote service returns an error or times out, the circuit breaker increments an internal counter. If the count of errors exceeds a configured threshold, the circuit breaker switches to an \u0026ldquo;open\u0026rdquo; state. When in the open state, a circuit breaker immediately returns an error to the caller without even attempting the remote call.\nAfter some configured time, the circuit breaker switches from open to a \u0026ldquo;half-open\u0026rdquo; state. In this state, it lets a few requests pass through to the remote service to check if it\u0026rsquo;s still unavailable or slow. If the error rate or slow call rate is above the configured threshold, it switches back to the open state. If the error rate or slow call rate is below the configured threshold, however, it switches to the closed state to resume normal operation.\nTypes of Circuit Breakers A circuit breaker can be count-based or time-based. A count-based circuit breaker switches state from closed to open if the last N number of calls failed or were slow. A time-based circuit breaker switches to an open state if the responses in the last N seconds failed or were slow. In both circuit breakers, we can also specify the threshold for failure or slow calls.\nFor example, we can configure a count-based circuit breaker to \u0026ldquo;open the circuit\u0026rdquo; if 70% of the last 25 calls failed or took more than 2s to complete. 
Similarly, we could tell a time-based circuit breaker to open the circuit if 80% of the calls in the last 30s failed or took more than 5s.\nResilience4j CircuitBreaker Concepts resilience4j-circuitbreaker works similarly to the other Resilience4j modules. We provide it the code we want to execute as a functional construct - a lambda expression that makes a remote call or a Supplier of some value which is retrieved from a remote service, etc. - and the circuit breaker decorates it with the code that keeps track of responses and switches states if required.\nResilience4j supports both count-based and time-based circuit breakers.\nWe specify the type of circuit breaker using the slidingWindowType() configuration. This configuration can take one of two values - SlidingWindowType.COUNT_BASED or SlidingWindowType.TIME_BASED.\nfailureRateThreshold() and slowCallRateThreshold() configure the failure rate threshold and the slow call rate in percentage.\nslowCallDurationThreshold() configures the time in seconds beyond which a call is considered slow.\nWe can specify a minimumNumberOfCalls() that are required before the circuit breaker can calculate the error rate or slow call rate.\nAs mentioned earlier, the circuit breaker switches from the open state to the half-open state after a certain time to check how the remote service is doing. waitDurationInOpenState() specifies the time that the circuit breaker should wait before switching to a half-open state.\npermittedNumberOfCallsInHalfOpenState() configures the number of calls that will be allowed in the half-open state and maxWaitDurationInHalfOpenState() determines the amount of time a circuit breaker can stay in the half-open state before switching back to the open state.\nThe default value of 0 for this configuration means that the circuit breaker will wait infinitely until all the permittedNumberOfCallsInHalfOpenState() calls are complete.\nBy default, the circuit breaker considers any Exception as a failure. 
But we can tweak this to specify a list of Exceptions that should be treated as a failure using the recordExceptions() configuration and a list of Exceptions to be ignored using the ignoreExceptions() configuration.\nIf we want even finer control when determining if an Exception should be treated as a failure or ignored, we can provide a Predicate\u0026lt;Throwable\u0026gt; as a recordException() or ignoreException() configuration.\nThe circuit breaker throws a CallNotPermittedException when it is rejecting calls in the open state. We can control the amount of information in the stack trace of a CallNotPermittedException using the writableStackTraceEnabled() configuration.\nUsing the Resilience4j CircuitBreaker Module Let\u0026rsquo;s see how to use the various features available in the resilience4j-circuitbreaker module.\nWe will use the same example as the previous articles in this series. Assume that we are building a website for an airline to allow its customers to search for and book flights. Our service talks to a remote service encapsulated by the class FlightSearchService.\nWhen using the Resilience4j circuit breaker, CircuitBreakerRegistry, CircuitBreakerConfig, and CircuitBreaker are the main abstractions we work with.\nCircuitBreakerRegistry is a factory for creating and managing CircuitBreaker objects.\nCircuitBreakerConfig encapsulates all the configurations from the previous section. 
Each CircuitBreaker object is associated with a CircuitBreakerConfig.\nThe first step is to create a CircuitBreakerConfig:\nCircuitBreakerConfig config = CircuitBreakerConfig.ofDefaults(); This creates a CircuitBreakerConfig with these default values:\n   Configuration Default value     slidingWindowType COUNT_BASED   failureRateThreshold 50%   slowCallRateThreshold 100%   slowCallDurationThreshold 60s   minimumNumberOfCalls 100   permittedNumberOfCallsInHalfOpenState 10   maxWaitDurationInHalfOpenState 0s    Count-based Circuitbreaker Let\u0026rsquo;s say we want the circuitbreaker to open if 70% of the last 10 calls failed:\nCircuitBreakerConfig config = CircuitBreakerConfig .custom() .slidingWindowType(SlidingWindowType.COUNT_BASED) .slidingWindowSize(10) .failureRateThreshold(70.0f) .build(); We then create a CircuitBreaker with this config:\nCircuitBreakerRegistry registry = CircuitBreakerRegistry.of(config); CircuitBreaker circuitBreaker = registry.circuitBreaker(\u0026#34;flightSearchService\u0026#34;); Let\u0026rsquo;s now express our code to run a flight search as a Supplier and decorate it using the circuitbreaker:\nSupplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; flightsSupplier = () -\u0026gt; service.searchFlights(request); Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; decoratedFlightsSupplier = circuitBreaker.decorateSupplier(flightsSupplier); Finally, let\u0026rsquo;s call the decorated operation a few times to understand how the circuit breaker works. We can use CompletableFuture to simulate concurrent flight search requests from users:\nfor (int i=0; i\u0026lt;20; i++) { try { System.out.println(decoratedFlightsSupplier.get()); } catch (...) { // Exception handling  } } The output shows the first few flight searches succeeding followed by 7 flight search failures. 
At that point, the circuit breaker opens and throws CallNotPermittedException for subsequent calls:\nSearching for flights; current time = 12:01:12 884 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... ] Searching for flights; current time = 12:01:12 954 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... ] Searching for flights; current time = 12:01:12 957 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... ] Searching for flights; current time = 12:01:12 958 io.reflectoring.resilience4j.circuitbreaker.exceptions.FlightServiceException: Error occurred during flight search ... stack trace omitted ... io.github.resilience4j.circuitbreaker.CallNotPermittedException: CircuitBreaker \u0026#39;flightSearchService\u0026#39; is OPEN and does not permit further calls ... other lines omitted ... io.reflectoring.resilience4j.circuitbreaker.Examples.countBasedSlidingWindow_FailedCalls(Examples.java:56) at io.reflectoring.resilience4j.circuitbreaker.Examples.main(Examples.java:229) Now, let\u0026rsquo;s say we wanted the circuitbreaker to open if 70% of the last 10 calls took 2s or more to complete:\nCircuitBreakerConfig config = CircuitBreakerConfig .custom() .slidingWindowType(SlidingWindowType.COUNT_BASED) .slidingWindowSize(10) .slowCallRateThreshold(70.0f) .slowCallDurationThreshold(Duration.ofSeconds(2)) .build(); The timestamps in the sample output show requests consistently taking 2s to complete. 
After 7 slow responses, the circuitbreaker opens and does not permit further calls:\nSearching for flights; current time = 12:26:27 901 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... ] Searching for flights; current time = 12:26:29 953 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... ] Searching for flights; current time = 12:26:31 957 Flight search successful ... other lines omitted ... Searching for flights; current time = 12:26:43 966 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... ] io.github.resilience4j.circuitbreaker.CallNotPermittedException: CircuitBreaker \u0026#39;flightSearchService\u0026#39; is OPEN and does not permit further calls ... stack trace omitted ... at io.reflectoring.resilience4j.circuitbreaker.Examples.main(Examples.java:231) io.github.resilience4j.circuitbreaker.CallNotPermittedException: CircuitBreaker \u0026#39;flightSearchService\u0026#39; is OPEN and does not permit further calls ... stack trace omitted ... 
at io.reflectoring.resilience4j.circuitbreaker.Examples.main(Examples.java:231) Usually we would configure a single circuit breaker with both failure rate and slow call rate thresholds:\nCircuitBreakerConfig config = CircuitBreakerConfig .custom() .slidingWindowType(SlidingWindowType.COUNT_BASED) .slidingWindowSize(10) .failureRateThreshold(70.0f) .slowCallRateThreshold(70.0f) .slowCallDurationThreshold(Duration.ofSeconds(2)) .build(); Time-based Circuitbreaker Let\u0026rsquo;s say we want the circuit breaker to open if 70% of the requests in the last 10s failed:\nCircuitBreakerConfig config = CircuitBreakerConfig .custom() .slidingWindowType(SlidingWindowType.TIME_BASED) .minimumNumberOfCalls(3) .slidingWindowSize(10) .failureRateThreshold(70.0f) .build(); We create the CircuitBreaker, express the flight search call as a Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; and decorate it using the CircuitBreaker just as we did in the previous section.\nHere\u0026rsquo;s sample output after calling the decorated operation a few times:\nStart time: 18:51:01 552 Searching for flights; current time = 18:51:01 582 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, ... }] ... other lines omitted ... Searching for flights; current time = 18:51:01 631 io.reflectoring.resilience4j.circuitbreaker.exceptions.FlightServiceException: Error occurred during flight search ... stack trace omitted ... Searching for flights; current time = 18:51:01 632 io.reflectoring.resilience4j.circuitbreaker.exceptions.FlightServiceException: Error occurred during flight search ... stack trace omitted ... Searching for flights; current time = 18:51:01 633 ... other lines omitted ... io.github.resilience4j.circuitbreaker.CallNotPermittedException: CircuitBreaker \u0026#39;flightSearchService\u0026#39; is OPEN and does not permit further calls ... other lines omitted ... The first 3 requests were successful and the next 7 requests failed. 
At this point the circuitbreaker opened and the subsequent requests failed by throwing CallNotPermittedException.\nNow, let\u0026rsquo;s say we wanted the circuitbreaker to open if 70% of the calls in the last 10s took 1s or more to complete:\nCircuitBreakerConfig config = CircuitBreakerConfig .custom() .slidingWindowType(SlidingWindowType.TIME_BASED) .minimumNumberOfCalls(10) .slidingWindowSize(10) .slowCallRateThreshold(70.0f) .slowCallDurationThreshold(Duration.ofSeconds(1)) .build(); The timestamps in the sample output show requests consistently taking 1s to complete. After 10 requests (minimumNumberOfCalls), when the circuit breaker determines that 70% of the previous requests took 1s or more, it opens the circuit:\nStart time: 19:06:37 957 Searching for flights; current time = 19:06:37 979 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 19:06:39 066 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 19:06:40 070 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 19:06:41 070 ... other lines omitted ... io.github.resilience4j.circuitbreaker.CallNotPermittedException: CircuitBreaker \u0026#39;flightSearchService\u0026#39; is OPEN and does not permit further calls ... stack trace omitted ... 
Usually we would configure a single time-based circuit breaker with both failure rate and slow call rate thresholds:\nCircuitBreakerConfig config = CircuitBreakerConfig .custom() .slidingWindowType(SlidingWindowType.TIME_BASED) .slidingWindowSize(10) .minimumNumberOfCalls(10) .failureRateThreshold(70.0f) .slowCallRateThreshold(70.0f) .slowCallDurationThreshold(Duration.ofSeconds(2)) .build(); Specifying Wait Duration in Open State Let\u0026rsquo;s say we want the circuit breaker to wait 10s when it is in open state, then transition to half-open state and let a few requests pass through to the remote service:\nCircuitBreakerConfig config = CircuitBreakerConfig .custom() .slidingWindowType(SlidingWindowType.COUNT_BASED) .slidingWindowSize(10) .failureRateThreshold(25.0f) .waitDurationInOpenState(Duration.ofSeconds(10)) .permittedNumberOfCallsInHalfOpenState(4) .build(); The timestamps in the sample output show the circuit breaker transitioning to the open state initially, blocking a few calls for the next 10s, and then changing to a half-open state. Later, consistent successful responses when in the half-open state cause it to switch to the closed state again:\nSearching for flights; current time = 20:55:58 735 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 20:55:59 812 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 20:56:00 816 ... other lines omitted ... io.reflectoring.resilience4j.circuitbreaker.exceptions.FlightServiceException: Flight search failed at ... 
stack trace omitted ...\t2020-12-13T20:56:03.850115+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; changed state from CLOSED to OPEN 2020-12-13T20:56:04.851700+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; recorded a call which was not permitted. 2020-12-13T20:56:05.852220+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; recorded a call which was not permitted. 2020-12-13T20:56:06.855338+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; recorded a call which was not permitted. ... other similar lines omitted ... 2020-12-13T20:56:12.862362+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; recorded a call which was not permitted. 2020-12-13T20:56:13.865436+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; changed state from OPEN to HALF_OPEN Searching for flights; current time = 20:56:13 865 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] ... other similar lines omitted ... 2020-12-13T20:56:16.877230+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; changed state from HALF_OPEN to CLOSED [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 20:56:17 879 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] ... other similar lines omitted ... Specifying a Fallback Method A common pattern when using circuit breakers is to specify a fallback method to be called when the circuit is open. The fallback method can provide some default value or behavior for the remote call that was not permitted.\nWe can use the Decorators utility class for setting this up. 
Decorators is a builder from the resilience4j-all module with methods like withCircuitBreaker(), withRetry(), withRateLimiter() to help apply multiple Resilience4j decorators to a Supplier, Function, etc.\nWe will use its withFallback() method to return flight search results from a local cache when the circuit breaker is open and throws CallNotPermittedException:\nSupplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; flightsSupplier = () -\u0026gt; service.searchFlights(request); Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; decorated = Decorators .ofSupplier(flightsSupplier) .withCircuitBreaker(circuitBreaker) .withFallback(Arrays.asList(CallNotPermittedException.class), e -\u0026gt; this.getFlightSearchResultsFromCache(request)) .decorate(); Here\u0026rsquo;s sample output showing search results being returned from cache after the circuit breaker opens:\nSearching for flights; current time = 22:08:29 735 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 22:08:29 854 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 22:08:29 855 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Searching for flights; current time = 22:08:29 855 2020-12-13T22:08:29.856277+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; recorded an error: \u0026#39;io.reflectoring.resilience4j.circuitbreaker.exceptions.FlightServiceException: Error occurred during flight search\u0026#39;. Elapsed time: 0 ms Searching for flights; current time = 22:08:29 912 ... other lines omitted ... 
2020-12-13T22:08:29.926691+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; changed state from CLOSED to OPEN Returning flight search results from cache [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] Returning flight search results from cache [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... }] ... other lines omitted ... Reducing Information in the Stacktrace Whenever a circuit breaker is open, it throws a CallNotPermittedException:\nio.github.resilience4j.circuitbreaker.CallNotPermittedException: CircuitBreaker \u0026#39;flightSearchService\u0026#39; is OPEN and does not permit further calls at io.github.resilience4j.circuitbreaker.CallNotPermittedException.createCallNotPermittedException(CallNotPermittedException.java:48) ... other lines in stack trace omitted ... at io.reflectoring.resilience4j.circuitbreaker.Examples.timeBasedSlidingWindow_SlowCalls(Examples.java:169) at io.reflectoring.resilience4j.circuitbreaker.Examples.main(Examples.java:263) Apart from the first line, the other lines in the stack trace are not adding much value. 
If the CallNotPermittedException occurs multiple times, these stack trace lines would repeat in our log files.\nWe can reduce the amount of information that is generated in the stack trace by setting the writableStackTraceEnabled() configuration to false:\nCircuitBreakerConfig config = CircuitBreakerConfig .custom() .slidingWindowType(SlidingWindowType.COUNT_BASED) .slidingWindowSize(10) .failureRateThreshold(70.0f) .writableStackTraceEnabled(false) .build(); Now, when a CallNotPermittedException occurs, only a single line is present in the stack trace:\nSearching for flights; current time = 20:29:24 476 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... ] Searching for flights; current time = 20:29:24 540 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;12/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ... ] ... other lines omitted ... io.github.resilience4j.circuitbreaker.CallNotPermittedException: CircuitBreaker \u0026#39;flightSearchService\u0026#39; is OPEN and does not permit further calls io.github.resilience4j.circuitbreaker.CallNotPermittedException: CircuitBreaker \u0026#39;flightSearchService\u0026#39; is OPEN and does not permit further calls ... 
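The open, half-open, and closed lifecycle we configured earlier with waitDurationInOpenState and permittedNumberOfCallsInHalfOpenState can be sketched in plain Java. This is a simplified model for intuition only; Resilience4j's real state machine is richer, and the class and method names here are made up for illustration:

```java
import java.time.Duration;
import java.time.Instant;

// Simplified model of the CLOSED -> OPEN -> HALF_OPEN -> CLOSED lifecycle.
// Illustration only - not Resilience4j's implementation.
public class BreakerStateSketch {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private Instant openedAt;
    private int trialCalls;
    private int trialFailures;
    private final Duration waitDurationInOpenState;
    private final int permittedCallsInHalfOpenState;

    BreakerStateSketch(Duration waitDurationInOpenState, int permittedCallsInHalfOpenState) {
        this.waitDurationInOpenState = waitDurationInOpenState;
        this.permittedCallsInHalfOpenState = permittedCallsInHalfOpenState;
    }

    // Called when the failure or slow-call rate threshold is breached
    void open(Instant now) {
        state = State.OPEN;
        openedAt = now;
        trialCalls = 0;
        trialFailures = 0;
    }

    // Returns true if a call may proceed at the given instant
    boolean tryAcquire(Instant now) {
        if (state == State.OPEN) {
            if (Duration.between(openedAt, now).compareTo(waitDurationInOpenState) < 0) {
                return false; // still open: calls fail fast
            }
            state = State.HALF_OPEN; // wait elapsed: permit a few trial calls
        }
        return true;
    }

    // Record the outcome of a trial call made in the half-open state
    void recordResult(boolean success, Instant now) {
        if (state != State.HALF_OPEN) {
            return;
        }
        trialCalls++;
        if (!success) {
            trialFailures++;
        }
        if (trialCalls == permittedCallsInHalfOpenState) {
            if (trialFailures == 0) {
                state = State.CLOSED; // consistent successes: close again
            } else {
                open(now); // failures during trial: back to open
            }
        }
    }

    State state() {
        return state;
    }

    public static void main(String[] args) {
        BreakerStateSketch breaker = new BreakerStateSketch(Duration.ofSeconds(10), 4);
        Instant t0 = Instant.now();
        breaker.open(t0);
        System.out.println(breaker.tryAcquire(t0.plusSeconds(5)));  // false: still open
        System.out.println(breaker.tryAcquire(t0.plusSeconds(10))); // true: now half-open
    }
}
```

This mirrors the behavior seen in the wait-duration sample output: calls are rejected while the breaker is open, a few trial calls are let through after the wait elapses, and consistent successes close the circuit again.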
Other Useful Methods Similar to the Retry module, CircuitBreaker also has methods like ignoreExceptions(), recordExceptions() etc. which let us specify which exceptions the CircuitBreaker should ignore and which it should consider when tracking results of calls.\nFor example, we might want to ignore a SeatsUnavailableException from the remote flight service - we don\u0026rsquo;t really want to open the circuit in this case.\nSimilar to the other Resilience4j modules we have seen, CircuitBreaker also provides additional methods like decorateCheckedSupplier(), decorateCompletionStage(), decorateRunnable(), decorateConsumer() etc. so we can provide our code in constructs other than a Supplier.\nCircuitBreaker Events CircuitBreaker has an EventPublisher which generates events of the types\n CircuitBreakerOnSuccessEvent, CircuitBreakerOnErrorEvent, CircuitBreakerOnStateTransitionEvent, CircuitBreakerOnResetEvent, CircuitBreakerOnIgnoredErrorEvent, CircuitBreakerOnCallNotPermittedEvent, CircuitBreakerOnFailureRateExceededEvent and CircuitBreakerOnSlowCallRateExceededEvent.  We can listen for these events and log them, for example:\ncircuitBreaker.getEventPublisher() .onCallNotPermitted(e -\u0026gt; System.out.println(e.toString())); circuitBreaker.getEventPublisher() .onError(e -\u0026gt; System.out.println(e.toString())); circuitBreaker.getEventPublisher() .onFailureRateExceeded(e -\u0026gt; System.out.println(e.toString())); circuitBreaker.getEventPublisher().onStateTransition(e -\u0026gt; System.out.println(e.toString())); The sample output shows what\u0026rsquo;s logged:\n2020-12-13T22:25:52.972943+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; recorded an error: \u0026#39;io.reflectoring.resilience4j.circuitbreaker.exceptions.FlightServiceException: Error occurred during flight search\u0026#39;. Elapsed time: 0 ms Searching for flights; current time = 22:25:52 973 ... other lines omitted ... 
2020-12-13T22:25:52.974448+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; exceeded failure rate threshold. Current failure rate: 70.0 2020-12-13T22:25:52.984300+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; changed state from CLOSED to OPEN 2020-12-13T22:25:52.985057+05:30: CircuitBreaker \u0026#39;flightSearchService\u0026#39; recorded a call which was not permitted. ... other lines omitted ... CircuitBreaker Metrics CircuitBreaker exposes many metrics; these are some important ones:\n Total number of successful, failed, or ignored calls (resilience4j.circuitbreaker.calls) State of the circuit breaker (resilience4j.circuitbreaker.state) Failure rate of the circuit breaker (resilience4j.circuitbreaker.failure.rate) Total number of calls that have not been permitted (resilience4j.circuitbreaker.not.permitted.calls) Slow call rate of the circuit breaker (resilience4j.circuitbreaker.slow.call.rate)  First, we create CircuitBreakerConfig, CircuitBreakerRegistry, and CircuitBreaker as usual. Then, we create a MeterRegistry and bind the CircuitBreakerRegistry to it:\nMeterRegistry meterRegistry = new SimpleMeterRegistry(); TaggedCircuitBreakerMetrics.ofCircuitBreakerRegistry(registry) .bindTo(meterRegistry); After running the circuit breaker-decorated operation a few times, we display the captured metrics. 
Here\u0026rsquo;s some sample output:\nThe number of slow failed calls which were slower than a certain threshold - resilience4j.circuitbreaker.slow.calls: 0.0 The states of the circuit breaker - resilience4j.circuitbreaker.state: 0.0, state: metrics_only Total number of not permitted calls - resilience4j.circuitbreaker.not.permitted.calls: 0.0 The slow call rate of the circuit breaker - resilience4j.circuitbreaker.slow.call.rate: -1.0 The states of the circuit breaker - resilience4j.circuitbreaker.state: 0.0, state: half_open Total number of successful calls - resilience4j.circuitbreaker.calls: 0.0, kind: successful The failure rate of the circuit breaker - resilience4j.circuitbreaker.failure.rate: -1.0 In a real application, we would export the data to a monitoring system periodically and analyze it on a dashboard.\nConclusion In this article, we learned how we can use Resilience4j\u0026rsquo;s CircuitBreaker module to pause making requests to a remote service when it returns errors. We learned why this is important and also saw some practical examples on how to configure it.\nYou can play around with a complete application illustrating these ideas using the code on GitHub.\n","date":"December 21, 2020","image":"https://reflectoring.io/images/stock/0089-circuitbreaker-1200x628-branded_hu8e2f8ade7889d0816340dc03cfcfcf4b_227084_650x0_resize_q90_box.jpg","permalink":"/circuitbreaker-with-resilience4j/","title":"Implementing a Circuit Breaker with Resilience4j"},{"categories":["Spring Boot"],"contents":"Elasticsearch is built on top of Apache Lucene and was first released by Elasticsearch N.V. (now Elastic) in 2010. According to the website of Elastic, it is a distributed open-source search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured.\nThe operations of Elasticsearch are available as REST APIs. 
The primary functions are:\n storing documents in an index, searching the index with powerful queries to fetch those documents, and running analytic functions on the data.  Spring Data Elasticsearch provides a simple interface to perform these operations on Elasticsearch as an alternative to using the REST APIs directly.\nHere we will use Spring Data Elasticsearch to demonstrate the indexing and search capabilities of Elasticsearch, and towards the end, build a simple search application for searching products in a product inventory.\n Example Code This article is accompanied by a working code example on GitHub. Elasticsearch Concepts The easiest way to get introduced to Elasticsearch concepts is by drawing an analogy with a database as illustrated in this table:\n   Elasticsearch Database     Index Table   Document Row   Field Column    Any data we want to search or analyze is stored as a document in an index. In Spring Data, we represent a document in the form of a POJO and decorate it with annotations to define the mapping into an Elasticsearch document.\nUnlike a database, the text stored in Elasticsearch is first processed by various analyzers. The default analyzer splits the text by common word separators like space and punctuation and also removes common English words.\nIf we store the text \u0026ldquo;The sky is blue\u0026rdquo;, the analyzer will store this as a document with the \u0026lsquo;terms\u0026rsquo; \u0026ldquo;sky\u0026rdquo; and \u0026ldquo;blue\u0026rdquo;. 
We will be able to search this document with text in the form of \u0026ldquo;blue sky\u0026rdquo;, \u0026ldquo;sky\u0026rdquo;, or \u0026ldquo;blue\u0026rdquo; with a degree of the match given as a score.\nApart from text, Elasticsearch can store other types of data known as Field Type as explained under the section on mapping-types in the documentation.\nStarting an Elasticsearch Instance Before going any further, let\u0026rsquo;s start an Elasticsearch instance, which we will use for running our examples. There are numerous ways of running an Elasticsearch instance:\n Using a hosted service Using a managed service from a cloud provider like AWS or Azure DIY by installing Elasticsearch in a cluster of VMs. Running a Docker image  We will use the Docker image from Dockerhub, which is good enough for our demo application. Let\u0026rsquo;s start our Elasticsearch instance by running the Docker run command:\ndocker run -p 9200:9200 \\  -e \u0026#34;discovery.type=single-node\u0026#34; \\  docker.elastic.co/elasticsearch/elasticsearch:7.10.0 Executing this command will start an Elasticsearch instance listening on port 9200. We can verify the instance state by hitting the URL http://localhost:9200 and checking the resulting output in our browser:\n{ \u0026#34;name\u0026#34; : \u0026#34;8c06d897d156\u0026#34;, \u0026#34;cluster_name\u0026#34; : \u0026#34;docker-cluster\u0026#34;, \u0026#34;cluster_uuid\u0026#34; : \u0026#34;Jkx..VyQ\u0026#34;, \u0026#34;version\u0026#34; : { \u0026#34;number\u0026#34; : \u0026#34;7.10.0\u0026#34;, ... }, \u0026#34;tagline\u0026#34; : \u0026#34;You Know, for Search\u0026#34; } We should get the above output if our Elasticsearch instance is started successfully.\nIndexing and Searching with the REST API Elasticsearch operations are accessed via REST APIs. There are two ways of adding documents to an index:\n adding one document at a time, or adding documents in bulk.  
The API for adding individual documents accepts a document as a parameter.\nA simple PUT request to an Elasticsearch instance for storing a document looks like this:\nPUT /messages/_doc/1 { \u0026#34;message\u0026#34;: \u0026#34;The Sky is blue today\u0026#34; } This will store the message - \u0026ldquo;The Sky is blue today\u0026rdquo; as a document in an index named \u0026ldquo;messages\u0026rdquo;.\nWe can fetch this document with a search query sent to the search REST API:\nGET /messages/_search { \u0026#34;query\u0026#34;: { \u0026#34;match\u0026#34;: {\u0026#34;message\u0026#34;: \u0026#34;blue sky\u0026#34;} } } Here we are sending a query of type match for fetching documents matching the string \u0026ldquo;blue sky\u0026rdquo;. We can specify queries for searching documents in multiple ways. Elasticsearch provides a JSON-based Query DSL (Domain Specific Language) to define queries.\nFor bulk addition, we need to supply a JSON document containing entries similar to the following snippet:\nPOST /_bulk {\u0026#34;index\u0026#34;:{\u0026#34;_index\u0026#34;:\u0026#34;productindex\u0026#34;}} {\u0026#34;_class\u0026#34;:\u0026#34;..Product\u0026#34;,\u0026#34;name\u0026#34;:\u0026#34;Corgi Toys .. Car\u0026#34;,...\u0026#34;manufacturer\u0026#34;:\u0026#34;Hornby\u0026#34;} {\u0026#34;index\u0026#34;:{\u0026#34;_index\u0026#34;:\u0026#34;productindex\u0026#34;}} {\u0026#34;_class\u0026#34;:\u0026#34;..Product\u0026#34;,\u0026#34;name\u0026#34;:\u0026#34;CLASSIC TOY .. 
BATTERY\u0026#34;...,\u0026#34;manufacturer\u0026#34;:\u0026#34;ccf\u0026#34;} Elasticsearch Operations with Spring Data We have two ways of accessing Elasticsearch with Spring Data as shown here:\n  Repositories: We define methods in an interface, and Elasticsearch queries are generated from method names at runtime.\n  ElasticsearchRestTemplate: We create queries with method chaining and native queries to have more control over creating Elasticsearch queries in relatively complex scenarios.\n  We will look at these two ways in much more detail in the following sections.\nCreating the Application and Adding Dependencies Let\u0026rsquo;s first create our application with the Spring Initializr by including the dependencies for web, thymeleaf, and lombok. We are adding thymeleaf dependencies to add a user interface to the application.\nWe will now add the spring-data-elasticsearch dependency in our Maven pom.xml:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.data\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-data-elasticsearch\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Connecting to the Elasticsearch Instance Spring Data Elasticsearch uses Java High Level REST Client (JHLC) to connect to the Elasticsearch server. JHLC is the default client of Elasticsearch. 
We will create a Spring Bean configuration to set this up:\n@Configuration @EnableElasticsearchRepositories(basePackages = \u0026#34;io.pratik.elasticsearch.repositories\u0026#34;) @ComponentScan(basePackages = { \u0026#34;io.pratik.elasticsearch\u0026#34; }) public class ElasticsearchClientConfig extends AbstractElasticsearchConfiguration { @Override @Bean public RestHighLevelClient elasticsearchClient() { final ClientConfiguration clientConfiguration = ClientConfiguration .builder() .connectedTo(\u0026#34;localhost:9200\u0026#34;) .build(); return RestClients.create(clientConfiguration).rest(); } } Here we are connecting to our Elasticsearch instance, which we started earlier. We can further customize the connection by adding more properties like enabling ssl, setting timeouts, etc.\nFor debugging and diagnostics, we will turn on request / response logging on the transport level in our logging configuration in logback-spring.xml:\n\u0026lt;logger name=\u0026#34;org.springframework.data.elasticsearch.client.WIRE\u0026#34; level=\u0026#34;trace\u0026#34;/\u0026gt; Representing the Document In our example, we will search for products by their name, brand, price, or description. 
So for storing the product as a document in Elasticsearch, we will represent the product as a POJO, and decorate it with Field annotations to configure the mapping with Elasticsearch as shown here:\n@Document(indexName = \u0026#34;productindex\u0026#34;) public class Product { @Id private String id; @Field(type = FieldType.Text, name = \u0026#34;name\u0026#34;) private String name; @Field(type = FieldType.Double, name = \u0026#34;price\u0026#34;) private Double price; @Field(type = FieldType.Integer, name = \u0026#34;quantity\u0026#34;) private Integer quantity; @Field(type = FieldType.Keyword, name = \u0026#34;category\u0026#34;) private String category; @Field(type = FieldType.Text, name = \u0026#34;desc\u0026#34;) private String description; @Field(type = FieldType.Keyword, name = \u0026#34;manufacturer\u0026#34;) private String manufacturer; ... } The @Document annotation specifies the index name.\nThe @Id annotation makes the annotated field the _id of our document, being the unique identifier in this index. The id field has a constraint of 512 characters.\nThe @Field annotation configures the type of a field. We can also set the name to a different field name.\nThe index by the name of productindex is created in Elasticsearch based on these annotations.\nIndexing and Searching with a Spring Data Repository Repositories provide the most convenient way to access data in Spring Data using finder methods. The Elasticsearch queries get created from method names. 
However, we have to be careful about not ending up with inefficient queries and putting a high load on the cluster.\nLet\u0026rsquo;s create a Spring Data repository interface by extending the ElasticsearchRepository interface:\npublic interface ProductRepository extends ElasticsearchRepository\u0026lt;Product, String\u0026gt; { } Here the ProductRepository interface inherits methods like save(), saveAll(), findById(), and findAll() from the ElasticsearchRepository interface.\nIndexing We will now store some products in the index by invoking the save() method for storing one product and the saveAll() method for bulk indexing. Before that, we will put the repository interface inside a service class:\n@Service public class ProductSearchServiceWithRepo { private ProductRepository productRepository; public void createProductIndexBulk(final List\u0026lt;Product\u0026gt; products) { productRepository.saveAll(products); } public void createProductIndex(final Product product) { productRepository.save(product); } } When we call these methods from JUnit, we can see the REST API calls for indexing and bulk indexing in the trace log.\nSearching For fulfilling our search requirements, we will add finder methods to our repository interface:\npublic interface ProductRepository extends ElasticsearchRepository\u0026lt;Product, String\u0026gt; { List\u0026lt;Product\u0026gt; findByName(String name); List\u0026lt;Product\u0026gt; findByNameContaining(String name); List\u0026lt;Product\u0026gt; findByManufacturerAndCategory (String manufacturer, String category); } On running the method findByName() with JUnit, we can see Elasticsearch queries generated in the trace logs before being sent to the server:\nTRACE Sending request POST /productindex/_search? 
..: Request body: {..\u0026#34;query\u0026#34;:{\u0026#34;bool\u0026#34;:{\u0026#34;must\u0026#34;:[{\u0026#34;query_string\u0026#34;:{\u0026#34;query\u0026#34;:\u0026#34;apple\u0026#34;,\u0026#34;fields\u0026#34;:[\u0026#34;name^1.0\u0026#34;],..} Similarly, by running the method findByManufacturerAndCategory(), we can see the query generated with two query_string parameters corresponding to the two fields - \u0026ldquo;manufacturer\u0026rdquo; and \u0026ldquo;category\u0026rdquo;:\nTRACE .. Sending request POST /productindex/_search..: Request body: {..\u0026#34;query\u0026#34;:{\u0026#34;bool\u0026#34;:{\u0026#34;must\u0026#34;:[{\u0026#34;query_string\u0026#34;:{\u0026#34;query\u0026#34;:\u0026#34;samsung\u0026#34;,\u0026#34;fields\u0026#34;:[\u0026#34;manufacturer^1.0\u0026#34;],..}},{\u0026#34;query_string\u0026#34;:{\u0026#34;query\u0026#34;:\u0026#34;laptop\u0026#34;,\u0026#34;fields\u0026#34;:[\u0026#34;category^1.0\u0026#34;],..}}],..}},\u0026#34;version\u0026#34;:true} There are numerous combinations of method naming patterns that generate a wide range of Elasticsearch queries.\nIndexing and Searching with ElasticsearchRestTemplate The Spring Data repository may not be suitable when we need more control over how we design our queries or when the team already has expertise with Elasticsearch syntax.\nIn this situation, we use ElasticsearchRestTemplate. It is the new client of Elasticsearch based on HTTP, replacing the TransportClient of earlier versions, which used a node-to-node binary protocol.\nElasticsearchRestTemplate implements the interface ElasticsearchOperations, which does the heavy lifting for low-level search and cluster actions.\nIndexing This interface has the methods index() for adding a single document and bulkIndex() for adding multiple documents to the index. 
The code snippet here shows the use of bulkIndex() for adding multiple products to the index \u0026ldquo;productindex\u0026rdquo;:\n@Service @Slf4j public class ProductSearchService { private static final String PRODUCT_INDEX = \u0026#34;productindex\u0026#34;; private ElasticsearchOperations elasticsearchOperations; public List\u0026lt;String\u0026gt; createProductIndexBulk (final List\u0026lt;Product\u0026gt; products) { List\u0026lt;IndexQuery\u0026gt; queries = products.stream() .map(product-\u0026gt; new IndexQueryBuilder() .withId(product.getId().toString()) .withObject(product).build()) .collect(Collectors.toList()); return elasticsearchOperations .bulkIndex(queries,IndexCoordinates.of(PRODUCT_INDEX)); } ... } The document to be stored is enclosed within an IndexQuery object. The bulkIndex() method takes as input a list of IndexQuery objects and the name of the Index wrapped inside IndexCoordinates. We get a trace of the REST API for a bulk request when we execute this method:\nSending request POST /_bulk?timeout=1m with parameters: Request body: {\u0026#34;index\u0026#34;:{\u0026#34;_index\u0026#34;:\u0026#34;productindex\u0026#34;,\u0026#34;_id\u0026#34;:\u0026#34;383..35\u0026#34;}} {\u0026#34;_class\u0026#34;:\u0026#34;..Product\u0026#34;,\u0026#34;id\u0026#34;:\u0026#34;383..35\u0026#34;,\u0026#34;name\u0026#34;:\u0026#34;New Apple..phone\u0026#34;,..manufacturer\u0026#34;:\u0026#34;apple\u0026#34;} .. 
{\u0026#34;_class\u0026#34;:\u0026#34;..Product\u0026#34;,\u0026#34;id\u0026#34;:\u0026#34;d7a..34\u0026#34;,..\u0026#34;manufacturer\u0026#34;:\u0026#34;samsung\u0026#34;} Next, we use the index() method to add a single document:\n@Service @Slf4j public class ProductSearchService { private static final String PRODUCT_INDEX = \u0026#34;productindex\u0026#34;; private ElasticsearchOperations elasticsearchOperations; public String createProductIndex(Product product) { IndexQuery indexQuery = new IndexQueryBuilder() .withId(product.getId().toString()) .withObject(product).build(); String documentId = elasticsearchOperations .index(indexQuery, IndexCoordinates.of(PRODUCT_INDEX)); return documentId; } } The trace accordingly shows the REST API PUT request for adding a single document.\nSending request PUT /productindex/_doc/59d..987..: Request body: {\u0026#34;_class\u0026#34;:\u0026#34;..Product\u0026#34;,\u0026#34;id\u0026#34;:\u0026#34;59d..87\u0026#34;,..,\u0026#34;manufacturer\u0026#34;:\u0026#34;dell\u0026#34;} Searching ElasticsearchRestTemplate also has the search() method for searching documents in an index. This search operation resembles Elasticsearch queries and is built by constructing a Query object and passing it to a search method.\nThe Query object is of three variants - NativeQuery, StringQuery, and CriteriaQuery depending on how we construct the query. Let\u0026rsquo;s build a few queries for searching products.\nNativeQuery NativeQuery provides the maximum flexibility for building a query using objects representing Elasticsearch constructs like aggregation, filter, and sort. 
Here is a NativeQuery for searching products matching a particular manufacturer:\n@Service @Slf4j public class ProductSearchService { private static final String PRODUCT_INDEX = \u0026#34;productindex\u0026#34;; private ElasticsearchOperations elasticsearchOperations; public void findProductsByBrand(final String brandName) { QueryBuilder queryBuilder = QueryBuilders .matchQuery(\u0026#34;manufacturer\u0026#34;, brandName); Query searchQuery = new NativeSearchQueryBuilder() .withQuery(queryBuilder) .build(); SearchHits\u0026lt;Product\u0026gt; productHits = elasticsearchOperations .search(searchQuery, Product.class, IndexCoordinates.of(PRODUCT_INDEX)); } } Here we are building a query with a NativeSearchQueryBuilder which uses a MatchQueryBuilder to specify the match query containing the field \u0026ldquo;manufacturer\u0026rdquo;.\nStringQuery A StringQuery gives full control by allowing the use of the native Elasticsearch query as a JSON string as shown here:\n@Service @Slf4j public class ProductSearchService { private static final String PRODUCT_INDEX = \u0026#34;productindex\u0026#34;; private ElasticsearchOperations elasticsearchOperations; public void findByProductName(final String productName) { Query searchQuery = new StringQuery( \u0026#34;{\\\u0026#34;match\\\u0026#34;:{\\\u0026#34;name\\\u0026#34;:{\\\u0026#34;query\\\u0026#34;:\\\u0026#34;\u0026#34;+ productName + \u0026#34;\\\u0026#34;}}}\u0026#34;); SearchHits\u0026lt;Product\u0026gt; products = elasticsearchOperations.search( searchQuery, Product.class, IndexCoordinates.of(PRODUCT_INDEX)); ... } } In this code snippet, we are specifying a simple match query for fetching products with a particular name sent as a method parameter.\nCriteriaQuery With CriteriaQuery we can build queries without knowing any terminology of Elasticsearch. The queries are built using method chaining with Criteria objects. 
Each object specifies some criteria used for searching documents:\n@Service @Slf4j public class ProductSearchService { private static final String PRODUCT_INDEX = \u0026#34;productindex\u0026#34;; private ElasticsearchOperations elasticsearchOperations; public void findByProductPrice(final String productPrice) { Criteria criteria = new Criteria(\u0026#34;price\u0026#34;) .greaterThan(10.0) .lessThan(100.0); Query searchQuery = new CriteriaQuery(criteria); SearchHits\u0026lt;Product\u0026gt; products = elasticsearchOperations .search(searchQuery, Product.class, IndexCoordinates.of(PRODUCT_INDEX)); } } In this code snippet, we are forming a query with CriteriaQuery for fetching products whose price is greater than 10.0 and less than 100.0.\nBuilding a Search Application We will now add a user interface to our application to see the product search in action. The user interface will have a search input box for searching products by name or description. The input box will have an autocomplete feature to show a list of suggestions based on the available products as shown here:\nWe will create auto-complete suggestions for the user\u0026rsquo;s search input, then search for products whose name or description closely matches the search text entered by the user. We will build two search services to implement this use case:\n Fetch search suggestions for the auto-complete function Process search for searching products based on the user\u0026rsquo;s search query  The Service class ProductSearchService will contain methods for search and fetching suggestions.\nThe full-blown application with a user interface is available in the GitHub repo.\nBuilding the Product Search Index The productindex is the same index we had used earlier for running the JUnit tests. 
We will first delete the productindex with Elasticsearch REST API, so that the productindex is created fresh during application startup with products loaded from our sample dataset of 50 fashion-line products:\ncurl -X DELETE http://localhost:9200/productindex We will get the message {\u0026quot;acknowledged\u0026quot;: true} if the delete operation is successful.\nNow, let\u0026rsquo;s create an index for the products in our inventory. We\u0026rsquo;ll use a sample dataset of fifty products to build our index. The products are arranged as separate rows in a CSV file.\nEach row has three attributes - id, name, and description. We want the index to be created during application startup. Note that in real production environments, index creation should be a separate process. We will read each row of the CSV and add it to the product index:\n@SpringBootApplication @Slf4j public class ProductsearchappApplication { ... @PostConstruct public void buildIndex() { esOps.indexOps(Product.class).refresh(); productRepo.saveAll(prepareDataset()); } private Collection\u0026lt;Product\u0026gt; prepareDataset() { Resource resource = new ClassPathResource(\u0026#34;fashion-products.csv\u0026#34;); ... return productList; } } In this snippet, we do some preprocessing by reading the rows from the dataset and passing those to the saveAll() method of the repository to add products to the index. 
On running the application, we can see the following trace logs during application startup:\n...Sending request POST /_bulk?timeout=1m with parameters: Request body: {\u0026#34;index\u0026#34;:{\u0026#34;_index\u0026#34;:\u0026#34;productindex\u0026#34;}} {\u0026#34;_class\u0026#34;:\u0026#34;io.pratik.elasticsearch.productsearchapp.Product\u0026#34;,\u0026#34;name\u0026#34;:\u0026#34;Hornby 2014 Catalogue\u0026#34;,\u0026#34;description\u0026#34;:\u0026#34;Product Desc..talogue\u0026#34;,\u0026#34;manufacturer\u0026#34;:\u0026#34;Hornby\u0026#34;} {\u0026#34;index\u0026#34;:{\u0026#34;_index\u0026#34;:\u0026#34;productindex\u0026#34;}} {\u0026#34;_class\u0026#34;:\u0026#34;io.pratik.elasticsearch.productsearchapp.Product\u0026#34;,\u0026#34;name\u0026#34;:\u0026#34;FunkyBuys..\u0026#34;,\u0026#34;description\u0026#34;:\u0026#34;Size Name:Lar..\u0026amp; Smoke\u0026#34;,\u0026#34;manufacturer\u0026#34;:\u0026#34;FunkyBuys\u0026#34;} {\u0026#34;index\u0026#34;:{\u0026#34;_index\u0026#34;:\u0026#34;productindex\u0026#34;}} . ... Searching Products with Multi-Field and Fuzzy Search Here is how the processSearch() method handles a submitted search request:\n@Service @Slf4j public class ProductSearchService { private static final String PRODUCT_INDEX = \u0026#34;productindex\u0026#34;; private ElasticsearchOperations elasticsearchOperations; public List\u0026lt;Product\u0026gt; processSearch(final String query) { log.info(\u0026#34;Search with query {}\u0026#34;, query); // 1. Create query on multiple fields enabling fuzzy search  QueryBuilder queryBuilder = QueryBuilders .multiMatchQuery(query, \u0026#34;name\u0026#34;, \u0026#34;description\u0026#34;) .fuzziness(Fuzziness.AUTO); Query searchQuery = new NativeSearchQueryBuilder() .withFilter(queryBuilder) .build(); // 2. Execute search  SearchHits\u0026lt;Product\u0026gt; productHits = elasticsearchOperations .search(searchQuery, Product.class, IndexCoordinates.of(PRODUCT_INDEX)); // 3.
Map searchHits to product list  List\u0026lt;Product\u0026gt; productMatches = new ArrayList\u0026lt;Product\u0026gt;(); productHits.forEach(searchHit-\u0026gt;{ productMatches.add(searchHit.getContent()); }); return productMatches; } ... } Here we perform a search on multiple fields - name and description. We also apply fuzziness() to match closely related text, which accounts for spelling errors.\nFetching Suggestions with Wildcard Search Next, we build the autocomplete function for the search textbox. When we type into the search text field, we will fetch suggestions by performing a wildcard search with the characters entered in the search box.\nWe build this function in the fetchSuggestions() method shown here:\n@Service @Slf4j public class ProductSearchService { private static final String PRODUCT_INDEX = \u0026#34;productindex\u0026#34;; public List\u0026lt;String\u0026gt; fetchSuggestions(String query) { QueryBuilder queryBuilder = QueryBuilders .wildcardQuery(\u0026#34;name\u0026#34;, query+\u0026#34;*\u0026#34;); Query searchQuery = new NativeSearchQueryBuilder() .withFilter(queryBuilder) .withPageable(PageRequest.of(0, 5)) .build(); SearchHits\u0026lt;Product\u0026gt; searchSuggestions = elasticsearchOperations.search(searchQuery, Product.class, IndexCoordinates.of(PRODUCT_INDEX)); List\u0026lt;String\u0026gt; suggestions = new ArrayList\u0026lt;String\u0026gt;(); searchSuggestions.getSearchHits().forEach(searchHit-\u0026gt;{ suggestions.add(searchHit.getContent().getName()); }); return suggestions; } } We are using a wildcard query formed by appending * to the search input text, so that if we type \u0026ldquo;red\u0026rdquo; we will get suggestions starting with \u0026ldquo;red\u0026rdquo;. We are restricting the number of suggestions to 5 with the withPageable() method.
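To make the suggestion behavior concrete, here is a plain-Java approximation of what the wildcard query plus the page limit does (illustrative only - this is not Elasticsearch code, and a real wildcard query matches against analyzed terms rather than whole names):

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class Main {

    // Approximation of the suggestion lookup: the wildcard query query + "*"
    // behaves like a case-insensitive prefix match, and PageRequest.of(0, 5)
    // caps the result at five suggestions.
    static List<String> suggest(List<String> names, String query) {
        String prefix = query.toLowerCase(Locale.ROOT);
        return names.stream()
                .filter(name -> name.toLowerCase(Locale.ROOT).startsWith(prefix))
                .limit(5) // same cap as withPageable(PageRequest.of(0, 5))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> names = List.of(
                "Red Dress", "Red Scarf", "Blue Jeans", "Red Hat",
                "Red Shoes", "Red Jacket", "Green Shirt");
        System.out.println(suggest(names, "red"));
        // prints [Red Dress, Red Scarf, Red Hat, Red Shoes, Red Jacket]
    }
}
```

Note how "Red Jacket" makes the cut but a sixth match would not: the page size, not relevance, limits the suggestion list.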
Some screenshots of the search results from the running application can be seen here:\nConclusion In this article, we introduced the main operations of Elasticsearch - indexing documents, bulk indexing, and search - which are provided as REST APIs. The Query DSL in combination with different analyzers makes the search very powerful.\nSpring Data Elasticsearch provides convenient interfaces to access those operations in an application either by using Spring Data Repositories or ElasticsearchRestTemplate.\nFinally, we built an application where we saw how the bulk indexing and search capabilities of Elasticsearch can be used in a close-to-real-life application.\n","date":"December 18, 2020","image":"https://reflectoring.io/images/stock/0019-magnifying-glass-1200x628-branded_hudd3c41ec99aefbb7f273ca91d0ef6792_109335_650x0_resize_q90_box.jpg","permalink":"/spring-boot-elasticsearch/","title":"Using Elasticsearch with Spring Boot"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you want to make time for important things you want to replace busyness with meaningful tasks you want to try out a framework to change your defaults (habits)  Book Facts  Title: Make Time Authors: Jake Knapp and John Zeratsky Word Count: ~60,000 (4 hours at 250 words/minute) Reading Ease: easy Writing Style: conversational, very short chapters, actionable  Overview \u0026ldquo;Make Time\u0026rdquo; is a book by two ex-Silicon Valley workers who created a framework to get the most joy out of their (work) days. This framework they call \u0026ldquo;Make Time\u0026rdquo;, because it helps to reduce busyness and thus make time for the important things.\nThe authors are the creators of the \u0026ldquo;Design Sprint\u0026rdquo; methodology, in which a team takes a week of time to create solutions for problems in a structured manner.
In the Design Sprints they conducted, they came up with tactics to make time for meaningful work and translated that into the \u0026ldquo;Make Time\u0026rdquo; framework.\nThe framework consists of these steps:\n pick a highlight for the day, laser-focus on that highlight for a given time, energize to keep your body in a state in which it can concentrate, and reflect on what you\u0026rsquo;ve been doing to improve in the future.  Most of the book is a collection of tactics you can choose from to implement each of those steps.\nIt\u0026rsquo;s written in a very readable manner; I went through it in about two weeks of 20-30 minute lunch breaks. The chapters are very short, some only a paragraph or two, which - ironically - makes it snackable even when you don\u0026rsquo;t make much time for reading.\nNotes Here are my notes, as usual with some comments in italics.\nIntroduction  by default, we let our calendars be filled with busyness instead of work that is meaningful to us we operate between the \u0026ldquo;Busy Bandwagon\u0026rdquo; of busyness and the \u0026ldquo;Infinity Pools\u0026rdquo; of distraction (social media, email, \u0026hellip;) overcoming the Busy Bandwagon and Infinity Pools with willpower or productivity doesn\u0026rsquo;t work - you need a system, a framework, a habit (James Clear, the author of \u0026ldquo;Atomic Habits\u0026rdquo;, would agree) lessons from conducting Design Sprints:  highlight: have one high-priority goal for the day laser: ban devices to get people more involved energize: take breaks and take a walk to stay energized reflect: perform experiments to find out what works and what doesn\u0026rsquo;t    Highlight  a daily highlight is something between a fine-grained task (that takes minutes to solve) and a long-term goal (that takes weeks to reach) a highlight helps to be intentional with your allocation of attention select your daily highlight out of these categories:  what is urgent? what brings satisfaction? what brings joy?   
\u0026ldquo;You only waste time if you\u0026rsquo;re not intentional on how to spend it.\u0026rdquo; a time frame of 60 to 90 minutes is the sweet spot for a daily highlight  Choose Your Highlight  Write It Down - to keep the highlight in mind and help you be intentional with your allocation of attention Groundhog It - repeat yesterday\u0026rsquo;s highlight if it\u0026rsquo;s not finished yet, or brought you joy Stack Rank Your Life - prioritize the big themes in your life and use this list to choose a highlight Batch the Little Stuff - make a batch of small tasks your highlight The Might-Do List - have a list of options and choose a highlight from it The Burner List - select the most and second most important projects and put their todos on a piece of paper in two columns (\u0026ldquo;front burner\u0026rdquo; for the first, and \u0026ldquo;back burner\u0026rdquo; for the second project) - recreate this list every couple of days Personal Sprint - schedule several days in a row for the same project to stay in the context  Make Time for Your Highlight  Schedule Your Highlight - to commit yourself Block Your Calendar - block the time you\u0026rsquo;re most productive and then be intentional with the blocked time Bulldoze Your Calendar - compress or move meetings to get more highlight time Flake It Till You Make It - bail out on a commitment to make time for your highlight (but don\u0026rsquo;t make this your default) Just Say No - decline commitments that you\u0026rsquo;re not likely going to keep and which would steal away your highlight time Design Your Day - plan your day in detail so you can concentrate on how to do things instead of thinking about what to do Become a Morning Person - make a habit of getting up early to make time for your highlight in the early hours (I can confirm that this works - I trained myself to be a morning person two years ago, and it has allowed me to write a book, multiply the traffic on my blog, work a 9-to-5 day job, and still
spend time with my family in the evenings) Nighttime is Highlight Time - design your nighttime before bed as highlight time (This doesn\u0026rsquo;t work for me because I\u0026rsquo;m too tired after my day job) Quit When You\u0026rsquo;re Done - stop before you\u0026rsquo;re exhausted so you\u0026rsquo;re fresh the next day  Laser  once you\u0026rsquo;ve selected a highlight, work on it in a laser-focused mode \u0026ldquo;Distracted has become the new default.\u0026rdquo; \u0026ldquo;Willpower is not enough to protect your focus.\u0026rdquo; today\u0026rsquo;s Infinity Pools are very effective because they\u0026rsquo;re designed to capture our attention \u0026ldquo;If you want control, you have to redesign your relationship with technology.\u0026rdquo; \u0026ldquo;The best way to defeat distraction is to make it harder to react.\u0026rdquo; it can take days to get \u0026ldquo;into the zone\u0026rdquo; for a certain task - distractions wreak havoc with your focus! (I can relate to that - when I\u0026rsquo;m writing a book chapter or a longer blog post, I usually work on it an hour in the morning and only after a couple of days have I reached a state of \u0026ldquo;flow\u0026rdquo; where I can write almost effortlessly - sadly, the chapter or blog post is almost done, by then :))  Be The Boss Of Your Phone  Try a Distraction-Free Phone - experiment with uninstalling distracting apps from your phone Log Out - log out of the apps that consume your time to make it a hassle to log back in Nix Notifications - turn off notifications Clear Your Homescreen - put everything on screens right and left from the home screen to add more friction Wear a Wrist Watch - so you don\u0026rsquo;t need to look at your phone to know the time Leave Devices Behind - put your phone into a drawer to add friction  Stay Out of Infinity Pools  Skip the Morning Check-In - don\u0026rsquo;t check email or news in the morning to avoid distraction (Once I stopped checking the news and my email first thing in the
morning, I won at least 20 minutes of highlight time every morning!) Block Distraction Kryptonite - if you feel regret after spending a couple of minutes on a distraction, it\u0026rsquo;s kryptonite - block it Ignore the News - read the news weekly instead of daily Put Your Toys Away - clean the desktop, sign out of apps, close browser tabs to avoid distractions the next day (I hate that, by default, IntelliJ opens all the projects I was working on yesterday and that all my browser tabs are still there) Fly Without Wi-Fi - when flying in a plane, turn off the Wi-Fi to get some focus time Put a Timer on the Internet - to remove the potential distraction during highlight time Cancel the Internet - completely get rid of an internet connection in your house and use your phone\u0026rsquo;s mobile data instead, which adds more friction (oh, how my kids would hate me for that) Watch Out for Time Craters - interruptions through social media and anticipation of meetings put craters in your calendar - move them around so they don\u0026rsquo;t affect so much of the day (when I know a meeting is coming up in 30 minutes, my brain gets blocked - I try to batch such meetings into the afternoon when I\u0026rsquo;m not that productive for deep work anyways) Trade Fake Wins for Real Wins - many \u0026ldquo;wins\u0026rdquo; we feel are fake (like going through email) - only the highlight time is a real win Become a Fair-Weather Fan - if being a sports fan consumes a lot of your time, reduce this time to make time for other joyful highlights  Slow Your Inbox  Deal with Email at the End of the Day - you\u0026rsquo;ll have less energy and won\u0026rsquo;t overcommit when answering emails (the probability of you saying \u0026ldquo;no\u0026rdquo; is bigger) Schedule Email Time - block some time in your calendar to go through email to have peace of mind because you know you\u0026rsquo;ll be getting to it eventually Empty Your Inbox Once a Week - to avoid the stress of daily
\u0026ldquo;Inbox Zero\u0026rdquo; Pretend Messages are Letters - paper letters are only delivered once a day and for most emails that\u0026rsquo;s fine, too Be Slow to Respond - everybody can reach you digitally - you don\u0026rsquo;t want to be at the beck and call of internet strangers! Reset Expectations - train your co-workers that you\u0026rsquo;re slow to respond to take pressure off yourself Set Up Send-Only Mail - configure your mail so that you can send email without looking into the inbox, which is the ultimate Infinity Pool Vacation off the Grid - turn off internet access when you leave for a vacation Lock Yourself Out - lock internet/email on your devices for given periods of time  Make TV a Sometimes Treat  \u0026ldquo;If you\u0026rsquo;re constantly exposed to other people\u0026rsquo;s ideas, it can be tough to think up your own.\u0026rdquo; Don\u0026rsquo;t Watch the News - TV news are made to be attention-seeking - better read the news than watch them Put Your TV in the Corner - rearrange your living room so that the TV is not the center of it Ditch your TV for a Projector - a projector is more hassle to set up so it will be reserved for special events Go A-La-Carte Instead of Always On - cancel streaming subscriptions and rent episodes one at a time to change the default If You Love Something, Set it Free - ditch your TV completely for a month and then recap what you did with the time you won  Find Flow  Shut the Door - or put headphones on to avoid interruptions Invent a Deadline - for example, register for a running event before you\u0026rsquo;re prepared for it to create some (positive) pressure Play a Laser Soundtrack - choose a song to play each highlight time to trigger a focus habit Set a Visible Timer - use a timer to make conscious how important the highlight is Avoid the Lure of Fancy Tools - avoid time management tools because they\u0026rsquo;re potentially fragile - instead, go with pen \u0026amp; paper (I use simple Trello boards - 
one for work and one for my other projects - to collect and sort my tasks) Start on Paper - instead of diving into digital tools because you can do pretty much anything on paper Explode Your Highlight - cut it into many small, completable tasks to generate flow (This is a major tactic I use to unblock myself - if I have a list of very doable tasks, I don\u0026rsquo;t need to think about what to do anymore, I just tick off those tasks one by one)  Stay in the Zone  Make a \u0026ldquo;Random Question\u0026rdquo; List - during highlight time, write down anything that comes to your mind that might distract you on a piece of paper to have peace of mind that it will be attended to after your highlight time is over Notice One Breath - breathe in and out a couple of times to regain focus Be Bored - to give your mind a chance to wander (this increases creativity, see the \u0026ldquo;R-Mode\u0026rdquo; in \u0026ldquo;Pragmatic Thinking and Learning\u0026rdquo;) Be Stuck - instead of \u0026ldquo;fleeing\u0026rdquo; into email or other distractions, stare at the blank screen for a bit and let your mind mull the problem unconsciously Take a Day Off - to recharge if other things don\u0026rsquo;t work Go All In - attack your highlight wholeheartedly / let down your guards to create focus (when I consider something a chore or not worth my time, but I still have to do it, I\u0026rsquo;m spending far too much time on it, during which I feel miserable - once I go \u0026ldquo;all-in\u0026rdquo;, I\u0026rsquo;m focused, feel better, and get it done much faster)  Energize  \u0026ldquo;If you want energy for your brain, you need to take care of your body.\u0026rdquo; we\u0026rsquo;re wired for a life of constant movement, a varied diet, and restful sleep from way back when we were hunting mammoths Exercise Every Day - \u0026ldquo;go small and go every day\u0026rdquo; to stimulate your brain and general well-being  \u0026ldquo;There is more to you than how you sweat\u0026rdquo; - 
don\u0026rsquo;t think that you have to do only big workouts   Pound the Pavement - switch the default from \u0026ldquo;drive when possible\u0026rdquo; to \u0026ldquo;walk when possible\u0026rdquo; Inconvenience Yourself - instead of avoiding some inconveniences, embrace them to keep moving Squeeze in a Super Short Workout - a couple of minutes of intense workout are better than an hour of medium-intensity workout  Eat Real Food  Eat Like a Hunter-Gatherer - eat non-processed food Central Park Your Plate - put salad on your plate before anything else Stay Hungry - skip a meal now and then to stay sharp Snack Like a Toddler - make high-quality snacks available throughout the day (like fruit or nuts, not chips and sweets) Go on the Dark Chocolate Plan - dark chocolate contains caffeine and less sugar than normal chocolate, so it doesn\u0026rsquo;t create that much of a sugar crash afterward  Optimize Caffeine  caffeine doesn\u0026rsquo;t give an energy boost, it blocks an energy dip Wake Up Before You Caffeinate - drink the first cup of coffee no sooner than around 9:30 to avoid withdrawal symptoms Caffeinate Before You Crash - have coffee 30 minutes before you usually crash Take a Caffeine Nap - drink a coffee and immediately take a 15-minute nap to get a boost Turbo Your Highlight - have a coffee right before you start your highlight session Learn Your Last Call - if not sleeping well, chances are you\u0026rsquo;re having your last coffee too late Disconnect Sugar - don\u0026rsquo;t mix sugar with coffee because sugar causes a sugar crash  Go Off the Grid  Get Woodsy - take a walk in the forest or park to replenish (I have started the habit of going for a walk through a nearby wood every morning before work and I do feel better overall) Trick Yourself into Meditating - try a guided meditation app Leave Your Headphones at Home - to allow some time for boredom Take Real Breaks - don\u0026rsquo;t waste breaks on social media or TV, change the default to something else
 Make it Personal  Spend Time With Your Tribe - have real conversations with people who give you energy Eat Without Screens - to recharge while eating  Sleep in a Cave  Make Your Bedroom a Bed Room (oh how I had to fight to get the TV out of our bedroom) \u0026ldquo;It\u0026rsquo;s easier to change your environment than to rely on willpower to change your behavior.\u0026rdquo; Fake the Sunset - turn down lights in the evening and turn them back up in the morning to simulate the sun and trick the brain into sleep or wake mode Sneak a Nap - napping makes you smarter! Don\u0026rsquo;t Jet-Lag Yourself - don\u0026rsquo;t build up \u0026ldquo;sleep debt\u0026rdquo;, but don\u0026rsquo;t try to repay it by sleeping long, either - get up at the same time every day Put On Your Own Oxygen Mask First - if you don\u0026rsquo;t take care of yourself first, you won\u0026rsquo;t have the energy to take care of others who might depend on you  Reflect  every day, reflect on the day to check  how focused were you? how energized were you? did you make time for your highlight? which tactics did you try? which tactics do you want to try tomorrow?    Conclusion I liked the book a lot. While many of the tactics are not groundbreaking news, the simple act of choosing a highlight for the day has a lot of potential, I believe.\nAlso, I haven\u0026rsquo;t really been scientific about what helps me concentrate and what doesn\u0026rsquo;t. 
I\u0026rsquo;ll try out reflecting on that every evening to get a sense of what works and what doesn\u0026rsquo;t.\nThis is a very actionable book that I can recommend for every knowledge worker out there.\n","date":"December 15, 2020","image":"https://reflectoring.io/images/covers/make-time-teaser_hu16903ffc118a89675c6eb085ade103d2_256660_650x0_resize_q90_box.jpg","permalink":"/book-review-make-time/","title":"Book Notes: Make Time"},{"categories":["Spring Boot"],"contents":"Wouldn\u0026rsquo;t it be nice to have a codebase that is cut into loosely coupled modules, with each module having a dedicated set of responsibilities?\nThis would mean we can easily find each responsibility in the codebase to add or modify code. It would mean that the codebase is easy to grasp because we would only have to load one module into our brain\u0026rsquo;s working memory at a time.\nAnd, since each module has its own API, it would mean that we can create a reusable mock for each module. When writing an integration test, we just import a mock module and call its API to start mocking away. We no longer have to know every detail about the classes we\u0026rsquo;re mocking.\nIn this article, we\u0026rsquo;re going to look at creating such modules, discuss why mocking whole modules is better than mocking single beans, and then introduce a simple but effective way of mocking complete modules for easy test setup with Spring Boot.\n Example Code This article is accompanied by a working code example on GitHub. What\u0026rsquo;s a Module? 
When I talk about \u0026ldquo;modules\u0026rdquo; in this article, what I mean is this:\n A module is a set of highly cohesive classes that have a dedicated API with a set of associated responsibilities.\n We can combine multiple modules into bigger modules and finally into a complete application.\nA module may use another module by calling its API.\nYou could also call them \u0026ldquo;components\u0026rdquo;, but in this article, I\u0026rsquo;m going to stick with \u0026ldquo;module\u0026rdquo;.\nHow Do I Build a Module? When building an application, I suggest doing a little up-front thinking about how to modularize the codebase. What are going to be the natural boundaries within our codebase?\nDo we have an external system that our application needs to talk to? That\u0026rsquo;s a natural module boundary. We can build a module whose responsibility it is to talk to that external system!\nHave we identified a functional \u0026ldquo;bounded context\u0026rdquo; of use cases that belong together? This is another good module boundary. We\u0026rsquo;ll build a module that implements the use cases in this functional slice of our application!\nThere are more ways to split an application into modules, of course, and often it\u0026rsquo;s not easy to find the boundaries between them. They might even change over time!
All the more important to have a clear structure within our codebase so we can easily move concepts between modules!\nTo make the modules apparent in our codebase, I propose the following package structure:\n each module has its own package each module package has a sub-package api that contains all classes that are exposed to other modules each module package has a sub-package internal that contains:  all classes that implement the functionality exposed by the API a Spring configuration class that contributes the beans to the Spring application context that are needed to implement that API   like a Matryoshka doll, each module\u0026rsquo;s internal sub-package may contain packages with sub-modules, each with their own api and internal packages classes within a given internal package may only be accessed by classes within that package.  This makes for a very clear codebase that is easy to navigate. Read more about this code structure in my article about clear architecture boundaries or look at some code in the code examples.\nNow, that\u0026rsquo;s a nice package structure, but what does that have to do with testing and mocking?\nWhat\u0026rsquo;s Wrong With Mocking Single Beans? As I said in the beginning, we want to look at mocking whole modules instead of single beans. 
But what\u0026rsquo;s wrong with mocking single beans in the first place?\nLet\u0026rsquo;s take a look at a very common way of creating integration tests with Spring Boot.\nLet\u0026rsquo;s say we want to write an integration test for a REST controller that is supposed to create a repository on GitHub and then send an email to the user.\nThe integration test might look like this:\n@WebMvcTest class RepositoryControllerTestWithoutModuleMocks { @Autowired private MockMvc mockMvc; @MockBean private GitHubMutations gitHubMutations; @MockBean private GitHubQueries gitHubQueries; @MockBean private EmailNotificationService emailNotificationService; @Test void givenRepositoryDoesNotExist_thenRepositoryIsCreatedSuccessfully() throws Exception { String repositoryUrl = \u0026#34;https://github.com/reflectoring/reflectoring\u0026#34;; given(gitHubQueries.repositoryExists(...)).willReturn(false); given(gitHubMutations.createRepository(...)).willReturn(repositoryUrl); mockMvc.perform(post(\u0026#34;/github/repository\u0026#34;) .param(\u0026#34;token\u0026#34;, \u0026#34;123\u0026#34;) .param(\u0026#34;repositoryName\u0026#34;, \u0026#34;foo\u0026#34;) .param(\u0026#34;organizationName\u0026#34;, \u0026#34;bar\u0026#34;)) .andExpect(status().is(200)); verify(emailNotificationService).sendEmail(...); verify(gitHubMutations).createRepository(...); } } This test actually looks quite neat, and I have seen (and written) many tests like it. But the devil is in the details, as they say.\nWe\u0026rsquo;re using the @WebMvcTest annotation to set up a Spring Boot application context for testing Spring MVC controllers. The application context will contain all the beans necessary to get the controllers working and nothing else.\nBut our controller needs some additional beans in the application context to work, namely GitHubMutations, GitHubQueries, and EmailNotificationService. 
So, we add mocks of those beans to the application context via the @MockBean annotation.\nIn the test method, we define the state of these mocks in a couple of given() statements, then call the controller endpoint we want to test, and then verify() that certain methods have been called on the mocks.\nSo, what\u0026rsquo;s wrong with this test? Two main things come to mind:\nFirst, to set up the given() and verify() sections, the test needs to know which methods on the mocked beans the controller is calling. This low-level knowledge of implementation details makes the test vulnerable to modifications. Each time an implementation detail changes, we have to update the test as well. This dilutes the value of the test and makes maintaining tests a chore rather than a \u0026ldquo;sometimes routine\u0026rdquo;.\nSecond, the @MockBean annotations will cause Spring to create a new application context for each test (unless the test classes have exactly the same fields). In a codebase with more than a couple of controllers, this will increase the test runtime considerably.\nIf we invest a bit of effort into building a modular codebase as outlined in the previous section, we can get around both of these disadvantages by building reusable mock modules.\nLet\u0026rsquo;s find out how by looking at a concrete example.\nA Modular Spring Boot Application Ok, let\u0026rsquo;s look at how we can implement reusable mock modules with Spring Boot.\nHere\u0026rsquo;s the folder structure of an example application.
You can find the code on GitHub if you want to follow along:\n├── github | ├── api | | ├── \u0026lt;I\u0026gt; GitHubMutations | | ├── \u0026lt;I\u0026gt; GitHubQueries | | └── \u0026lt;C\u0026gt; GitHubRepository | └── internal | ├── \u0026lt;C\u0026gt; GitHubModuleConfiguration | └── \u0026lt;C\u0026gt; GitHubService ├── mail | ├── api | | └── \u0026lt;I\u0026gt; EmailNotificationService | └── internal | ├── \u0026lt;C\u0026gt; EmailModuleConfiguration | ├── \u0026lt;C\u0026gt; EmailNotificationServiceImpl | └── \u0026lt;C\u0026gt; MailServer ├── rest | └── internal | └── \u0026lt;C\u0026gt; RepositoryController └── \u0026lt;C\u0026gt; DemoApplication The application has 3 modules:\n the github module provides an interface to interact with the GitHub API, the mail module provides email functionality, and the rest module provides a REST API to interact with the application.  Let\u0026rsquo;s look into each module in a bit more detail.\nThe GitHub Module The github module provides two interfaces (marked with \u0026lt;I\u0026gt;) as part of its API:\n GitHubMutations, which provides some write operations to the GitHub API, and GitHubQueries, which provides some read operations on the GitHub API.  
This is what the interfaces look like:\npublic interface GitHubMutations { String createRepository(String token, GitHubRepository repository); } public interface GitHubQueries { List\u0026lt;String\u0026gt; getOrganisations(String token); List\u0026lt;String\u0026gt; getRepositories(String token, String organisation); boolean repositoryExists(String token, String repositoryName, String organisation); } It also provides the class GitHubRepository, which is used in the signatures of those interfaces.\nInternally, the github module has the class GitHubService, which implements both interfaces, and the class GitHubModuleConfiguration, which is a Spring configuration that contributes a GitHubService instance to the application context:\n@Configuration class GitHubModuleConfiguration { @Bean GitHubService gitHubService(){ return new GitHubService(); } } Since GitHubService implements the whole API of the github module, this one bean is enough to make the module\u0026rsquo;s API available to other modules in the same Spring Boot application.\nThe Mail Module The mail module is built similarly. Its API consists of a single interface EmailNotificationService:\npublic interface EmailNotificationService { void sendEmail(String to, String subject, String text); } This interface is implemented by the internal bean EmailNotificationServiceImpl.\nNote that I\u0026rsquo;m using a different naming convention in the mail module than in the github module. While the github module has an internal class ending with *Service, the mail module has a *Service class as part of its API. While the github module doesn\u0026rsquo;t use the ugly *Impl suffix, the mail module does.\nI did this on purpose to make the code a bit more realistic. Have you ever seen a codebase (that you didn\u0026rsquo;t write by yourself) that uses the same naming conventions all over the place? I haven\u0026rsquo;t.\nBut if you build modules like we do in this article, it doesn\u0026rsquo;t really matter much.
The ugly *Impl class is hidden behind the module\u0026rsquo;s API anyway.\nInternally, the mail module has the EmailModuleConfiguration class that contributes implementations for the API to the Spring application context:\n@Configuration class EmailModuleConfiguration { @Bean EmailNotificationService emailNotificationService() { return new EmailNotificationServiceImpl(); } } The REST Module The rest module consists of a single REST controller:\n@RestController class RepositoryController { private final GitHubMutations gitHubMutations; private final GitHubQueries gitHubQueries; private final EmailNotificationService emailNotificationService; // constructor omitted  @PostMapping(\u0026#34;/github/repository\u0026#34;) ResponseEntity\u0026lt;Void\u0026gt; createGitHubRepository( @RequestParam(\u0026#34;token\u0026#34;) String token, @RequestParam(\u0026#34;repositoryName\u0026#34;) String repoName, @RequestParam(\u0026#34;organizationName\u0026#34;) String orgName ) { if (gitHubQueries.repositoryExists(token, repoName, orgName)) { return ResponseEntity.status(HttpStatus.BAD_REQUEST).build(); } String repoUrl = gitHubMutations.createRepository( token, new GitHubRepository(repoName, orgName)); emailNotificationService.sendEmail( \u0026#34;user@mail.com\u0026#34;, \u0026#34;Your new repository\u0026#34;, \u0026#34;Here\u0026#39;s your new repository: \u0026#34; + repoUrl); return ResponseEntity.ok().build(); } } The controller calls the github module\u0026rsquo;s API to create a GitHub repository and then sends a mail via the mail module\u0026rsquo;s API to let the user know about the new repository.\nMocking the GitHub Module Now, let\u0026rsquo;s see how we can build a reusable mock for the github module. 
We create a @TestConfiguration class that provides all the beans of the module\u0026rsquo;s API:\n@TestConfiguration public class GitHubModuleMock { private final GitHubService gitHubServiceMock = Mockito.mock(GitHubService.class); @Bean @Primary GitHubService gitHubServiceMock() { return gitHubServiceMock; } public void givenCreateRepositoryReturnsUrl(String url) { given(gitHubServiceMock.createRepository(any(), any())).willReturn(url); } public void givenRepositoryExists(){ given(gitHubServiceMock.repositoryExists( anyString(), anyString(), anyString())).willReturn(true); } public void givenRepositoryDoesNotExist(){ given(gitHubServiceMock.repositoryExists( anyString(), anyString(), anyString())).willReturn(false); } public void assertRepositoryCreated(){ verify(gitHubServiceMock).createRepository(any(), any()); } public void givenDefaultState(String defaultRepositoryUrl){ givenRepositoryDoesNotExist(); givenCreateRepositoryReturnsUrl(defaultRepositoryUrl); } public void assertRepositoryNotCreated(){ verify(gitHubServiceMock, never()).createRepository(any(), any()); } } In addition to providing a mocked GitHubService bean, we have added a bunch of given*() and assert*() methods to this class.\nThe given*() methods allow us to set the mock into a desired state and the assert*() methods allow us to check whether some interaction with the mock has happened after running a test.\nThe @Primary annotation makes sure that if both the mock and the real bean are loaded into the application context, the mock takes precedence.\nMocking the Email Module We build a very similar mock configuration for the mail module:\n@TestConfiguration public class EmailModuleMock { private final EmailNotificationService emailNotificationServiceMock = Mockito.mock(EmailNotificationService.class); @Bean @Primary EmailNotificationService emailNotificationServiceMock() { return emailNotificationServiceMock; } public void givenSendMailSucceeds() { // nothing to do, the mock will simply 
return  } public void givenSendMailThrowsError() { doThrow(new RuntimeException(\u0026#34;error when sending mail\u0026#34;)) .when(emailNotificationServiceMock).sendEmail(anyString(), anyString(), anyString()); } public void assertSentMailContains(String repositoryUrl) { verify(emailNotificationServiceMock).sendEmail(anyString(), anyString(), contains(repositoryUrl)); } public void assertNoMailSent() { verify(emailNotificationServiceMock, never()).sendEmail(anyString(), anyString(), anyString()); } } Using the Mock Modules in a Test Now, with the mock modules in place, we can use them in the integration test of our controller:\n@WebMvcTest @Import({ GitHubModuleMock.class, EmailModuleMock.class }) class RepositoryControllerTest { @Autowired private MockMvc mockMvc; @Autowired private EmailModuleMock emailModuleMock; @Autowired private GitHubModuleMock gitHubModuleMock; @Test void givenRepositoryDoesNotExist_thenRepositoryIsCreatedSuccessfully() throws Exception { String repositoryUrl = \u0026#34;https://github.com/reflectoring/reflectoring.github.io\u0026#34;; gitHubModuleMock.givenDefaultState(repositoryUrl); emailModuleMock.givenSendMailSucceeds(); mockMvc.perform(post(\u0026#34;/github/repository\u0026#34;) .param(\u0026#34;token\u0026#34;, \u0026#34;123\u0026#34;) .param(\u0026#34;repositoryName\u0026#34;, \u0026#34;foo\u0026#34;) .param(\u0026#34;organizationName\u0026#34;, \u0026#34;bar\u0026#34;)) .andExpect(status().is(200)); emailModuleMock.assertSentMailContains(repositoryUrl); gitHubModuleMock.assertRepositoryCreated(); } @Test void givenRepositoryExists_thenReturnsBadRequest() throws Exception { String repositoryUrl = \u0026#34;https://github.com/reflectoring/reflectoring.github.io\u0026#34;; gitHubModuleMock.givenDefaultState(repositoryUrl); gitHubModuleMock.givenRepositoryExists(); emailModuleMock.givenSendMailSucceeds(); mockMvc.perform(post(\u0026#34;/github/repository\u0026#34;) .param(\u0026#34;token\u0026#34;, \u0026#34;123\u0026#34;) 
.param(\u0026#34;repositoryName\u0026#34;, \u0026#34;foo\u0026#34;) .param(\u0026#34;organizationName\u0026#34;, \u0026#34;bar\u0026#34;)) .andExpect(status().is(400)); emailModuleMock.assertNoMailSent(); gitHubModuleMock.assertRepositoryNotCreated(); } } We use the @Import annotation to import the mocks into the application context.\nNote that the @WebMvcTest annotation will cause the real modules to be loaded into the application context as well. That\u0026rsquo;s why we used the @Primary annotation on the mocks so that the mocks take precedence.\nWhat To Do About Misbehaving Modules? A module may misbehave by trying to connect to some external service during startup. The mail module, for example, may create a pool of SMTP connections on startup. This naturally fails when there is no SMTP server available. This means that when we load the module in an integration test, the startup of the Spring context will fail.\n To make the module behave better during tests, we can introduce a configuration property mail.enabled. Then, we annotate the module's configuration class with @ConditionalOnProperty to tell Spring not to load this configuration if the property is set to false. Now, during a test, only the mock module is loaded. Instead of mocking out the specific method calls in the test, we now call the prepared given*() methods on the mock modules. This means the test no longer requires internal knowledge of the classes the test subject is calling.\nAfter executing the code, we can use the prepared assert*() methods to verify whether a repository has been created or a mail has been sent. 
Again, without knowing about the specific underlying method calls.\nIf we need the github or mail modules in another controller, we can use the same mock modules in the test for that controller.\nIf we later decide to build another integration test that uses the real version of some modules, but the mocked versions of other modules, it\u0026rsquo;s a matter of a couple of @Import annotations to build the application context we need.\nThis is the whole idea of modules: we can take the real module A and the mock of module B, and we\u0026rsquo;ll still have a working application that we can run tests against.\nA mock module is our central place for mocking behavior within that module. It can translate high-level mocking expectations like \u0026ldquo;make sure that a repository can be created\u0026rdquo; into low-level calls to mocks of the API beans.\nConclusion By being intentional about what is part of a module\u0026rsquo;s API and what is not, we can build a properly modular codebase with little chance of introducing unwanted dependencies.\nSince we know what is part of the API and what is not, we can build a dedicated mock for the API of each module. We don\u0026rsquo;t care about the internals, we\u0026rsquo;re only mocking the API.\nA mock module can provide an API to mock certain states and to verify certain interactions. 
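As an aside, the given*()/assert*() pattern is independent of Mockito and Spring: any object that records interactions behind a high-level test API gives the same decoupling. Here is a minimal, framework-free sketch of the idea for the mail module (the class and method names are illustrative, not taken from the article's codebase):

```java
import java.util.ArrayList;
import java.util.List;

public class MailModuleFakeDemo {

    // The module's public API, as in the article.
    interface EmailNotificationService {
        void sendEmail(String to, String subject, String text);
    }

    // A hand-rolled fake that records interactions and exposes
    // high-level given*()/query methods instead of raw mock calls.
    static class EmailModuleFake implements EmailNotificationService {
        private final List<String> sentTexts = new ArrayList<>();
        private boolean failOnSend = false;

        @Override
        public void sendEmail(String to, String subject, String text) {
            if (failOnSend) {
                throw new RuntimeException("error when sending mail");
            }
            sentTexts.add(text);
        }

        void givenSendMailThrowsError() {
            failOnSend = true;
        }

        boolean sentMailContains(String snippet) {
            return sentTexts.stream().anyMatch(t -> t.contains(snippet));
        }

        boolean noMailSent() {
            return sentTexts.isEmpty();
        }
    }

    public static void main(String[] args) {
        EmailModuleFake mailModule = new EmailModuleFake();
        mailModule.sendEmail("user@mail.com", "Your new repository",
                "Here's your new repository: https://example.org/repo");
        System.out.println(mailModule.sentMailContains("https://example.org/repo")); // true
        System.out.println(mailModule.noMailSent()); // false
    }
}
```

The Mockito-based @TestConfiguration classes in the article do the same job with less code; the sketch only illustrates that the decoupling comes from the high-level test API, not from the mocking library.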
By using the API of the mock module instead of mocking each single method call, our integration tests become more resilient to change.\n","date":"December 8, 2020","image":"https://reflectoring.io/images/stock/0088-jigsaw-1200x628-branded_hu5d0fbb80fd5a577c9426d368c189788e_197833_650x0_resize_q90_box.jpg","permalink":"/spring-boot-modules-mocking/","title":"Building Reusable Mock Modules with Spring Boot"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you want to become a more structured learner you need some tips and tricks on tapping your creativity you want to become more self-aware of your proficiency  Book Facts  Title: Pragmatic Thinking \u0026amp; Learning Authors: Andy Hunt Word Count: ~ 80.000 (5.5 hours at 250 words / minute) Reading Ease: easy Writing Style: conversational, rather long chapters, practical  Overview {% include book-link.html book=\u0026ldquo;pragmatic-thinking\u0026rdquo; %} is all about our brain and how we can make the most of it. 
Andy Hunt often relates to computer programming in the book, so it\u0026rsquo;s best read as a software developer.\nThe book provides guidance on how to tap our creativity, how to focus, and how to learn deliberately and successfully.\nI\u0026rsquo;ve been applying some of the methods described in this book before even reading it, so it\u0026rsquo;s great to have them confirmed by the book.\nNotes Here are my notes, as usual with some comments in italics.\nIntroduction  \u0026ldquo;We tend to make programming much harder on ourselves than we need.\u0026rdquo;  Journey from Novice to Expert  the Dreyfus Model of Proficiency categorizes learners into five categories:  Novice  needs rules to follow to produce results is not necessarily inclined to learn cannot always succeed because not everything can be put into rules   Advanced Beginner  can work on some tasks on their own doesn\u0026rsquo;t have the big picture, yet has problems with troubleshooting   Competent  exercises deliberate planning solves new problems   Proficient  able to self-improve learns from others' experience understands and applies maxims   Expert  relies on their intuition fails if you impose rules on them needs to have access to the big picture     \u0026ldquo;The only path to a more correct self-assessment is to improve the individual\u0026rsquo;s skill level.\u0026rdquo; it takes 10 years of deliberate practice to become an expert  This Is Your Brain  the brain has two modes  R-mode (rich mode): a rich, intuitive, non-verbal, unconscious, and largely uncontrollable mode that operates in the background (this is what Kahneman calls \u0026ldquo;system 1\u0026rdquo; in his book \u0026ldquo;Thinking, Fast and Slow\u0026rdquo;) L-mode (linear mode): a linear, step-by-step, verbal, conscious mode of thinking (this is what Kahneman calls \u0026ldquo;system 2\u0026rdquo;)   since the R-mode is uncontrollable and generates ideas at inconvenient times, always take something to write with you we learn 
better by synthesis (building things, involves R-mode) than by analysis (only involves L-mode) - this is so true for us as developers! I learn best when \u0026ldquo;playing around\u0026rdquo; with a new technology positive emotions make you more creative negative emotions hurt creativity good design fosters positive emotions and thus creativity (my takeaway: we should always take the time for well-designed software!)  Get In Your Right Mind  involve multiple senses to enrich creativity let the R-mode lead and follow up with L-mode in writing text or code: create an ugly first draft drawing on R-mode and then go over it again with L-mode pair programming is effective because the driver is in verbal L-mode, explaining what they\u0026rsquo;re doing, while the navigator is free to draw on their R-mode step away from the keyboard from time to time to allow your R-mode to produce ideas metaphors are a powerful creativity tool because they combine R-mode with L-mode try to harvest ideas from the unconscious R-mode  Morning Pages: write some journal pages first thing every morning to harvest R-mode ideas while you\u0026rsquo;re not yet fully awake go for a walk and think of nothing to let the unconscious R-mode chime in defocus: \u0026ldquo;bear the problem in mind\u0026rdquo;, but don\u0026rsquo;t actively try to solve it try a \u0026ldquo;whack on the head\u0026rdquo;: change perspective, turn the problem around, invert the problem, imagine you are part of the problem - this may trigger ideas from the R-mode    Debug Your Mind  our mind is bugged with biases  anchoring effect: something has been mentioned, and our mind \u0026ldquo;anchors\u0026rdquo; to that thought need for closure: we want to finish something - better defer a decision until you have more information confirmation bias: we believe information that confirms our suspicions more than other information exposure effect: the longer we\u0026rsquo;re exposed to some idea, the more we think of it Hawthorne 
effect: we behave differently when we feel we\u0026rsquo;re observed by others generational bias: depending on which generation we belong to, we have different tendencies (there\u0026rsquo;s an 8-pages-long discussion of different American generations from the \u0026ldquo;GI generation\u0026rdquo; to the \u0026ldquo;Homeland generation\u0026rdquo;, which I found odd rather than helpful)   know your biases \u0026ldquo;Trust your intuition, but verify\u0026rdquo; get feedback for your intuitive ideas  Learn Deliberately  \u0026ldquo;Learning isn\u0026rsquo;t done to you, it\u0026rsquo;s something you do.\u0026rdquo; a common form of training is \u0026ldquo;sheep dip\u0026rdquo; training  sheep are regularly dipped into a parasite-killing fluid, head and all, to clean them up - a very unpleasant experience humans are regularly \u0026ldquo;dipped\u0026rdquo; into short, context-free trainings to learn about some topic - a very ineffective way of learning   you need continuous goals and feedback to learn have SMART goals to increase the chance of success have a \u0026ldquo;Pragmatic Investment Plan\u0026rdquo;  have a concrete plan diversify - don\u0026rsquo;t learn only one topic, but look to the left and right active investment - re-evaluate your investment regularly invest regularly   a more effective way of learning is peer study groups \u0026ldquo;Reading is the least effective way of learning.\u0026rdquo; (that\u0026rsquo;s why I take notes and write them down in this blog!) 
read deliberately with the SQ3R system  survey: do a quick scan of a topic/book question: write down some questions you have about the topic read: read the book recite: while reading, take notes review: re-read sections, discuss it with others   use mind maps to explore a topic  activates R-mode, because it makes use of multiple senses do it on paper to increase the R-mode experience digital tools are more suited for documentation rather than for exploration   to create a mind map:  dump your brain without too much thinking put it aside, but add things when they come to mind re-write the mind map with a little more thinking and structure   \u0026ldquo;Documenting is more important than documentation.\u0026rdquo;  Gain Experience  exploring/playing with something should come before studying facts about it \u0026ldquo;Build to learn, not learn to build.\u0026rdquo; \u0026ldquo;Play more in order to learn more.\u0026rdquo; when faced with a problem, raise your awareness of that problem  close your eyes and imagine the problem then, imagine a solution \u0026ldquo;First be aware of the what, then think about the how.\u0026rdquo; (This is very helpful for debugging! I like to become really aware of a nasty bug by creating an \u0026ldquo;investigation page\u0026rdquo; in a wiki where I collect all the facts (the \u0026ldquo;what\u0026rdquo;). 
Then, I think about possible causes and how to fix them (the \u0026ldquo;how\u0026rdquo;))   \u0026ldquo;Trying too hard is a guarantee for failure.\u0026rdquo;  pressure kills cognition - it shuts off the R-mode because you\u0026rsquo;re too hectic to listen to it you are least creative when under pressure pressure leads to bad decisions you no longer see options this is a strong parallel to Carol Dweck\u0026rsquo;s \u0026ldquo;Mindset\u0026rdquo;: if you are in a \u0026ldquo;fixed mindset\u0026rdquo; environment you are expected to deliver, no matter what - if you are in a \u0026ldquo;growth mindset\u0026rdquo; environment, you are allowed to fail and learn   \u0026ldquo;If it\u0026rsquo;s OK to fail, you won\u0026rsquo;t.\u0026rdquo; \u0026ldquo;If failure is costly, there will be no experimentation.\u0026rdquo; imagining success will make success more likely - we can condition our minds to be more successful  Manage Focus  focus on the now, pay attention to what you\u0026rsquo;re doing now allocate your attention, not your time meditation exercises increase your attention ability  meditation technique: think of nothing, and let go of thoughts as they come   schedule \u0026ldquo;thinking time\u0026rdquo; to just meditate and unconsciously think of solutions and ideas have a place to collect ideas - for example, a paper notebook  this is an \u0026ldquo;exocortex\u0026rdquo; because it extends our brain\u0026rsquo;s memory once you have a place to collect thoughts and ideas, your brain will automatically create connections between them and create new ideas (I can confirm that once I have a list for a certain category in my paper notebook, ideas keep on coming - I have way more ideas than I have time to pursue)   transcribing raw notes into a cleaner space reinforces learning and creativity (again, I can confirm this - I often transcribe notes about a certain topic onto a new page to \u0026ldquo;start over\u0026rdquo; and it generates new ideas) increase the 
physical cost of context switching to stay in context  make it harder to leave the room make it hard to check email/social media   leave a trail of \u0026ldquo;breadcrumbs\u0026rdquo; so you can easily re-enter your context when you\u0026rsquo;re interrupted create a desktop space on your computer for each task  one for coding one for writing one for reading \u0026hellip;    Beyond Expertise  \u0026ldquo;Always keep a beginner\u0026rsquo;s mind.\u0026rdquo; stay curious  Conclusion The book didn\u0026rsquo;t introduce any ideas that were completely new to me, but it reinforced what I\u0026rsquo;ve read in other books and experienced myself and put everything into the context of a software developer.\nThe R-mode/L-mode metaphor is very similar to the system 1/system 2 metaphor that Daniel Kahneman writes about in \u0026ldquo;Thinking, Fast and Slow\u0026rdquo;. It\u0026rsquo;s nice to read about it in a different context. I have experienced myself often enough that my R-mode/system 1 generates ideas when I\u0026rsquo;m least prepared for them, so I\u0026rsquo;m not leaving the house without my paper notebook anymore.\nIn conclusion, there\u0026rsquo;s some good advice in this book!\n","date":"November 26, 2020","image":"https://reflectoring.io/images/covers/pragmatic-thinking-teaser_hud83a127503a9f768dc8c986c7722c166_66766_650x0_resize_q90_box.jpg","permalink":"/book-review-pragmatic-thinking/","title":"Book Notes: Pragmatic Thinking \u0026 Learning"},{"categories":["Spring Boot"],"contents":"If you are looking for a better way to manage your queries or want to generate dynamic and typesafe queries then you might find your solution in Spring Data JPA Specifications.\n Example Code This article is accompanied by a working code example on GitHub. What Are Specifications? 
Spring Data JPA Specifications is yet another tool at our disposal to perform database queries with Spring or Spring Boot.\nSpecifications are built on top of the Criteria API.\nWhen building a Criteria query we are required to build and manage Root, CriteriaQuery, and CriteriaBuilder objects by ourselves:\n... EntityManager entityManager = getEntityManager(); CriteriaBuilder builder = entityManager.getCriteriaBuilder(); CriteriaQuery\u0026lt;Product\u0026gt; productQuery = builder.createQuery(Product.class); Root\u0026lt;Product\u0026gt; productRoot = productQuery.from(Product.class); ... Specifications build on top of the Criteria API to simplify the developer experience. We simply need to implement the Specification interface:\ninterface Specification\u0026lt;T\u0026gt;{ Predicate toPredicate(Root\u0026lt;T\u0026gt; root, CriteriaQuery\u0026lt;?\u0026gt; query, CriteriaBuilder criteriaBuilder); } Using Specifications we can build atomic predicates, and combine those predicates to build complex dynamic queries.\nSpecifications are inspired by the Domain-Driven Design \u0026ldquo;Specification\u0026rdquo; pattern.\nWhy Do We Need Specifications? 
One of the most common ways to perform queries in Spring Boot is by using Query Methods like these:\ninterface ProductRepository extends JpaRepository\u0026lt;Product, String\u0026gt;, JpaSpecificationExecutor\u0026lt;Product\u0026gt; { List\u0026lt;Product\u0026gt; findAllByNameLike(String name); List\u0026lt;Product\u0026gt; findAllByNameLikeAndPriceLessThanEqual( String name, Double price ); List\u0026lt;Product\u0026gt; findAllByCategoryInAndPriceLessThanEqual( List\u0026lt;Category\u0026gt; categories, Double price ); List\u0026lt;Product\u0026gt; findAllByCategoryInAndPriceBetween( List\u0026lt;Category\u0026gt; categories, Double bottom, Double top ); List\u0026lt;Product\u0026gt; findAllByNameLikeAndCategoryIn( String name, List\u0026lt;Category\u0026gt; categories ); List\u0026lt;Product\u0026gt; findAllByNameLikeAndCategoryInAndPriceBetween( String name, List\u0026lt;Category\u0026gt; categories, Double bottom, Double top ); } The problem with query methods is that we can only specify a fixed number of criteria. Also, the number of query methods increases rapidly as the number of use cases grows.\nAt some point, there are many overlapping criteria across the query methods, and if there is a change in any one of those, we\u0026rsquo;ll have to make changes in multiple query methods.\nAlso, the length of the query method might increase significantly when we have long field names and multiple criteria in our query. Plus, it might take a while for someone to understand such a lengthy query and its purpose:\nList\u0026lt;Product\u0026gt; findAllByNameLikeAndCategoryInAndPriceBetweenAndManufacturingPlace_State(String name, List\u0026lt;Category\u0026gt; categories, Double bottom, Double top, STATE state); With Specifications, we can tackle these issues by creating atomic predicates. And by giving those predicates a meaningful name, we can clearly specify their intent. 
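The fix Specifications offer, composing small, named, atomic predicates into one readable query, can be previewed with plain java.util.function.Predicate before we get to the Specification API itself. The Product record, categories, and price threshold below are invented for illustration:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateCompositionDemo {

    record Product(String name, String category, double price) {}

    // Small, named, reusable predicates -- the same idea Specifications
    // apply to database queries.
    static Predicate<Product> nameLike(String fragment) {
        return p -> p.name().contains(fragment);
    }

    static Predicate<Product> belongsToCategory(List<String> categories) {
        return p -> categories.contains(p.category());
    }

    static Predicate<Product> isPremium(double premiumPrice) {
        return p -> p.price() >= premiumPrice;
    }

    public static void main(String[] args) {
        List<Product> products = List.of(
                new Product("reflectoring mug", "MERCH", 25.0),
                new Product("reflectoring course", "COURSE", 250.0));

        // Combine atomic predicates into one readable, reusable query.
        Predicate<Product> premiumCourses =
                nameLike("reflectoring")
                        .and(belongsToCategory(List.of("COURSE")))
                        .and(isPremium(100.0));

        List<String> result = products.stream()
                .filter(premiumCourses)
                .map(Product::name)
                .collect(Collectors.toList());

        System.out.println(result); // [reflectoring course]
    }
}
```

Specification composes with and()/or() in exactly the same spirit, except that the composed object is translated into SQL instead of being evaluated in memory.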
We\u0026rsquo;ll see how we can convert the above into a much more meaningful query in the section Writing Queries With Specifications.\nSpecifications allow us to write queries programmatically. Because of this, we can build queries dynamically based on user input. We\u0026rsquo;ll see this in more detail in the section Dynamic Queries With Specifications.\nSetting Things Up First, we need to have the Spring Data JPA dependency in our build.gradle file:\n... implementation \u0026#39;org.springframework.boot:spring-boot-starter-data-jpa\u0026#39; annotationProcessor \u0026#39;org.hibernate:hibernate-jpamodelgen\u0026#39; ... We have also added the hibernate-jpamodelgen annotation processor dependency, which will generate static metamodel classes of our entities.\nThe Generated Metamodel The classes generated by the Hibernate JPA model generator will allow us to write queries in a strongly-typed manner.\nFor instance, let\u0026rsquo;s look at the JPA entity Distributor:\n@Entity public class Distributor { @Id private String id; private String name; @OneToOne private Address address; // Getters and setters omitted for brevity  } The metamodel class of the Distributor entity would look like the following:\n@Generated(value = \u0026#34;org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor\u0026#34;) @StaticMetamodel(Distributor.class) public abstract class Distributor_ { public static volatile SingularAttribute\u0026lt;Distributor, Address\u0026gt; address; public static volatile SingularAttribute\u0026lt;Distributor, String\u0026gt; name; public static volatile SingularAttribute\u0026lt;Distributor, String\u0026gt; id; public static final String ADDRESS = \u0026#34;address\u0026#34;; public static final String NAME = \u0026#34;name\u0026#34;; public static final String ID = \u0026#34;id\u0026#34;; } We can now use Distributor_.name in our criteria queries instead of directly using string field names of our entities. 
A major benefit of this is that queries using the metamodel evolve with the entities and are much easier to refactor than string queries.\nWriting Queries With Specifications Let\u0026rsquo;s convert the findAllByNameLike() query mentioned above into a Specification:\nList\u0026lt;Product\u0026gt; findAllByNameLike(String name); An equivalent Specification of this query method is:\nprivate Specification\u0026lt;Product\u0026gt; nameLike(String name){ return new Specification\u0026lt;Product\u0026gt;() { @Override public Predicate toPredicate(Root\u0026lt;Product\u0026gt; root, CriteriaQuery\u0026lt;?\u0026gt; query, CriteriaBuilder criteriaBuilder) { return criteriaBuilder.like(root.get(Product_.NAME), \u0026#34;%\u0026#34;+name+\u0026#34;%\u0026#34;); } }; } With a Java 8 Lambda we can simplify the above to the following:\nprivate Specification\u0026lt;Product\u0026gt; nameLike(String name){ return (root, query, criteriaBuilder) -\u0026gt; criteriaBuilder.like(root.get(Product_.NAME), \u0026#34;%\u0026#34;+name+\u0026#34;%\u0026#34;); } We can also write it in-line at the spot in the code where we need it:\n... Specification\u0026lt;Product\u0026gt; nameLike = (root, query, criteriaBuilder) -\u0026gt; criteriaBuilder.like(root.get(Product_.NAME), \u0026#34;%\u0026#34;+name+\u0026#34;%\u0026#34;); ... 
But this defeats our purpose of reusability, so let\u0026rsquo;s avoid this unless our use case requires it.\nTo execute Specifications we need to extend the JpaSpecificationExecutor interface in our Spring Data JPA repository:\ninterface ProductRepository extends JpaRepository\u0026lt;Product, String\u0026gt;, JpaSpecificationExecutor\u0026lt;Product\u0026gt; { } The JpaSpecificationExecutor interface adds methods which will allow us to execute Specifications, for example, these:\nList\u0026lt;T\u0026gt; findAll(Specification\u0026lt;T\u0026gt; spec); Page\u0026lt;T\u0026gt; findAll(Specification\u0026lt;T\u0026gt; spec, Pageable pageable); List\u0026lt;T\u0026gt; findAll(Specification\u0026lt;T\u0026gt; spec, Sort sort); Finally, to execute our query we can simply call:\nList\u0026lt;Product\u0026gt; products = productRepository.findAll(namelike(\u0026#34;reflectoring\u0026#34;)); We can also take advantage of findAll() functions overloaded with Pageable and Sort in case we are expecting a large number of records in the result or want records in sorted order.\nThe Specification interface also has the public static helper methods and(), or(), and where() that allow us to combine multiple specifications. 
It also provides a not() method that allows us to negate a Specification.\nLet\u0026rsquo;s look at an example:\npublic List\u0026lt;Product\u0026gt; getPremiumProducts(String name, List\u0026lt;Category\u0026gt; categories) { return productRepository.findAll( where(belongsToCategory(categories)) .and(nameLike(name)) .and(isPremium())); } private Specification\u0026lt;Product\u0026gt; belongsToCategory(List\u0026lt;Category\u0026gt; categories){ return (root, query, criteriaBuilder)-\u0026gt; criteriaBuilder.in(root.get(Product_.CATEGORY)).value(categories); } private Specification\u0026lt;Product\u0026gt; isPremium() { return (root, query, criteriaBuilder) -\u0026gt; criteriaBuilder.and( criteriaBuilder.equal( root.get(Product_.MANUFACTURING_PLACE) .get(Address_.STATE), STATE.CALIFORNIA), criteriaBuilder.greaterThanOrEqualTo( root.get(Product_.PRICE), PREMIUM_PRICE)); } Here, we have combined the belongsToCategory(), nameLike(), and isPremium() specifications into one using the where() and and() helper functions. This also reads really nicely, don\u0026rsquo;t you think? Also, notice how isPremium() is giving more meaning to the query.\nCurrently, isPremium() is combining two predicates, but if we want, we can create separate specifications for each of those and combine again with and(). For now, we will keep it as is, because the predicates used in isPremium() are very specific to that query, and if in the future we need to use them in other queries too, then we can always split them up without impacting the clients of the isPremium() function.\nDynamic Queries With Specifications Let\u0026rsquo;s say we want to create an API that allows our clients to fetch all the products and also filter them based on a number of properties such as categories, price, color, etc. 
Here, we don\u0026rsquo;t know beforehand what combination of properties the client is going to use to filter the products.\nOne way to handle this is to write query methods for all possible combinations, but that would require a lot of query methods. And that number would increase combinatorially as we introduce new fields.\nA better solution is to take predicates directly from clients and convert them to database queries using specifications. The client has to simply provide us with the list of Filters, and our backend will take care of the rest. Let\u0026rsquo;s see how we can do this.\nFirst, let\u0026rsquo;s create an input object to take filters from the clients:\npublic class Filter { private String field; private QueryOperator operator; private String value; private List\u0026lt;String\u0026gt; values; // Used in case of the IN operator } We will expose this object to our clients via a REST API.\nSecond, we need to write a function that will convert a Filter to a Specification:\nprivate Specification\u0026lt;Product\u0026gt; createSpecification(Filter input) { switch (input.getOperator()){ case EQUALS: return (root, query, criteriaBuilder) -\u0026gt; criteriaBuilder.equal(root.get(input.getField()), castToRequiredType(root.get(input.getField()).getJavaType(), input.getValue())); case NOT_EQUALS: return (root, query, criteriaBuilder) -\u0026gt; criteriaBuilder.notEqual(root.get(input.getField()), castToRequiredType(root.get(input.getField()).getJavaType(), input.getValue())); case GREATER_THAN: return (root, query, criteriaBuilder) -\u0026gt; criteriaBuilder.gt(root.get(input.getField()), (Number) castToRequiredType( root.get(input.getField()).getJavaType(), input.getValue())); case LESS_THAN: return (root, query, criteriaBuilder) -\u0026gt; criteriaBuilder.lt(root.get(input.getField()), (Number) castToRequiredType( root.get(input.getField()).getJavaType(), input.getValue())); case LIKE: return (root, query, criteriaBuilder) -\u0026gt; 
criteriaBuilder.like(root.get(input.getField()), \u0026#34;%\u0026#34;+input.getValue()+\u0026#34;%\u0026#34;); case IN: return (root, query, criteriaBuilder) -\u0026gt; criteriaBuilder.in(root.get(input.getField())) .value(castToRequiredType( root.get(input.getField()).getJavaType(), input.getValues())); default: throw new RuntimeException(\u0026#34;Operation not supported yet\u0026#34;); } } Here we have supported several operations such as EQUALS, LESS_THAN, IN, etc. We can also add more based on our requirements.\nNow, as we know, the Criteria API allows us to write typesafe queries. So, the values that we provide must be of a type compatible with the type of our field. Filter takes the value as a String, which means we will have to cast the values to a required type before passing it to CriteriaBuilder:\nprivate Object castToRequiredType(Class fieldType, String value) { if(fieldType.isAssignableFrom(Double.class)) { return Double.valueOf(value); } else if(fieldType.isAssignableFrom(Integer.class)) { return Integer.valueOf(value); } else if(Enum.class.isAssignableFrom(fieldType)) { return Enum.valueOf(fieldType, value); } return null; } private Object castToRequiredType(Class fieldType, List\u0026lt;String\u0026gt; value) { List\u0026lt;Object\u0026gt; lists = new ArrayList\u0026lt;\u0026gt;(); for (String s : value) { lists.add(castToRequiredType(fieldType, s)); } return lists; } Finally, we add a function that will combine multiple Filters into a Specification:\nprivate Specification\u0026lt;Product\u0026gt; getSpecificationFromFilters(List\u0026lt;Filter\u0026gt; filter){ Specification\u0026lt;Product\u0026gt; specification = where(createSpecification(filter.remove(0))); for (Filter input : filter) { specification = specification.and(createSpecification(input)); } return specification; } Now, let\u0026rsquo;s try to fetch all the products belonging to the MOBILE or TV_APPLIANCES categories and whose prices are below 1000 using our new shiny dynamic 
specifications query generator.\nFilter categories = Filter.builder() .field(\u0026#34;category\u0026#34;) .operator(QueryOperator.IN) .values(List.of(Category.MOBILE.name(), Category.TV_APPLIANCES.name())) .build(); Filter lowRange = Filter.builder() .field(\u0026#34;price\u0026#34;) .operator(QueryOperator.LESS_THAN) .value(\u0026#34;1000\u0026#34;) .build(); List\u0026lt;Filter\u0026gt; filters = new ArrayList\u0026lt;\u0026gt;(); filters.add(lowRange); filters.add(categories); productRepository.getQueryResult(filters); The above code snippets should do for most filter cases, but there is still a lot of room for improvement, such as allowing queries based on nested entity properties (manufacturingPlace.state) or limiting the fields on which we want to allow filters. Consider this an open-ended problem.\nWhen Should I Use Specifications Over Query Methods? One question that comes to mind is: if we can write any query with Specifications, when should we prefer query methods? Or should we ever prefer them? I believe there are a couple of cases where query methods could come in handy.\nLet\u0026rsquo;s say our entity has only a handful of fields, and it only needs to be queried in a certain way, then why bother writing Specifications when we can simply write a query method?\nAnd if future requirements come in for more queries for the given entity, then we can always refactor it to use Specifications. Also, Specifications won\u0026rsquo;t be helpful in cases where we want to use database-specific features in a query, for example, performing JSON queries with PostgreSQL.\nConclusion Specifications provide us with a way to write reusable queries and also fluent APIs with which we can combine and build more sophisticated queries.\nAll in all, Spring Data JPA Specifications is a great tool whether we want to create reusable predicates or want to generate typesafe queries programmatically.\nThank you for reading! 
You can find the working code at GitHub.\n","date":"November 16, 2020","image":"https://reflectoring.io/images/stock/0059-library-1200x628-branded_hufd1c76fdddcd68370f35d4cc8a896aad_297099_650x0_resize_q90_box.jpg","permalink":"/spring-data-specifications/","title":"Getting Started with Spring Data Specifications"},{"categories":["Java"],"contents":"In the world of microservices and the 6-month release cycle of Java, we often have to change between Java versions multiple times a day.\nSDKMAN! is a tool that helps us to manage multiple JDK installations (and installations of other SDKs) and to configure each codebase to use a specific JDK version without the hassle of changing the JAVA_HOME environment variable.\nMake sure to also check out the article about jEnv which is an alternative tool for the same purpose.\nInstalling SDKMAN! SDKMAN! is easy to install on any platform. The only thing you need is a terminal.\nFor installing and running SDKMAN! on Windows consider using Windows Subsystem for Linux.\nTo install SDKMAN! follow the official installation guide.\nInstalling a JDK From the SDKMAN! Repository SDKMAN! 
offers multiple JDK vendors such as AdoptOpenJDK, Alibaba, Amazon, etc\u0026hellip;\nTo see all the available JDKs simply run: sdk list java.\n================================================================================ Available Java Versions ================================================================================ Vendor | Use | Version | Dist | Status | Identifier -------------------------------------------------------------------------------- AdoptOpenJDK | | 15.0.1.j9 | adpt | | 15.0.1.j9-adpt | | 15.0.1.hs | adpt | | 15.0.1.hs-adpt | | 13.0.2.j9 | adpt | | 13.0.2.j9-adpt | | 13.0.2.hs | adpt | | 13.0.2.hs-adpt | | 12.0.2.j9 | adpt | | 12.0.2.j9-adpt | | 12.0.2.hs | adpt | | 12.0.2.hs-adpt | | 11.0.9.open | adpt | | 11.0.9.open-adpt | | 11.0.9.j9 | adpt | | 11.0.9.j9-adpt | \u0026gt;\u0026gt;\u0026gt; | 11.0.9.hs | adpt | installed | 11.0.9.hs-adpt | | 8.0.272.j9 | adpt | | 8.0.272.j9-adpt | | 8.0.272.hs | adpt | | 8.0.272.hs-adpt Alibaba | | 11.0.8 | albba | | 11.0.8-albba | | 8u262 | albba | | 8u262-albba Amazon | | 15.0.1 | amzn | | 15.0.1-amzn | | 11.0.9 | amzn | | 11.0.9-amzn | | 8.0.272 | amzn | | 8.0.272-amzn ================================================================================ To install the JDK of our choice run: sdk install java \u0026lt;candidate\u0026gt;. For example: sdk install java 15.0.1.j9-adpt.\nSDKMAN! will now download the desired JDK and will ask us if we want to set it as default.\nDownloading: java 15.0.1.j9-adpt In progress... Do you want java 15.0.1.j9-adpt to be set as default? 
(Y/n): If we run sdk list java again, we should now see the installed status next to the version we have just installed:\n================================================================================ Available Java Versions ================================================================================ Vendor | Use | Version | Dist | Status | Identifier -------------------------------------------------------------------------------- AdoptOpenJDK | \u0026gt;\u0026gt;\u0026gt; | 15.0.1.j9 | adpt | installed | 15.0.1.j9-adpt Setting the Global JDK With the 6-month JDK release cycle, we might want to set a sensible global (default) JDK for our computer - for example, an LTS version.\nTo do so run: sdk default java \u0026lt;candidate\u0026gt;. For example: sdk default java 11.0.9.hs-adpt.\nDefault java version set to 11.0.9.hs-adpt Setting the Local JDK Sometimes, we might want to try out a new Java version, but not set it globally. To achieve that, we can apply the new Java version only to the current shell session.\nThis is easy with SDKMAN!. Simply run: sdk use java \u0026lt;candidate\u0026gt;. For example: sdk use java 11.0.9.hs-adpt\nUsing java version 11.0.9.hs-adpt in this shell. Running java --version verifies that we are indeed using the desired version:\nopenjdk version \u0026#34;11.0.9\u0026#34; 2020-10-20 OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.9+11) OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.9+11, mixed mode) Setting Per-Project JDK Usage When we often change versions between different projects, we might want to create an env file where we define the desired JDK version for the project.\nRunning the command sdk env init, we can generate a file named .sdkmanrc:\n# Enable auto-env through the sdkman_auto_env config # Add key=value pairs of SDKs to use below java=11.0.9.hs-adpt For now, it defaults to our default Java version. But let\u0026rsquo;s say that we want to use JDK 15 for this project.
Just change the value of the java key to 15.0.0.hs-adpt:\njava=15.0.0.hs-adpt To apply this we just run the sdk env command in the folder with the .sdkmanrc file:\nUsing java version 15.0.0.hs-adpt in this shell If we want to automatically apply the sdk env command when navigating to the directory, we can change the SDKMAN! configuration, which is located under ~/.sdkman/etc/config. Changing the value of the sdkman_auto_env key from false to true will do the trick.\nUpgrading to a Newer JDK The sdk upgrade command makes it easy to upgrade to a newer version of a JDK. For example, say we want to upgrade our JDK 11 installation from 11.0.8.hs-adpt to 11.0.9.hs-adpt:\nUpgrade: java (15.0.0.hs-adpt, 8.0.265.hs-adpt, 11.0.8.hs-adpt \u0026lt; 11.0.9.hs-adpt) Upgrade candidate(s) and set latest version(s) as default? (Y/n): Y Downloading: java 11.0.9.hs-adpt In progress... Installing: java 11.0.9.hs-adpt Done installing! Setting java 11.0.9.hs-adpt as default. More Than a JDK Manager SDKMAN! is not just a JDK manager; it supports many more SDKs such as Maven, Gradle, Spring Boot, Micronaut, etc\u0026hellip;\nTo see all available SDKs just run the command sdk list.\nConclusion SDKMAN! is a great tool to manage the versions of our favorite tools. To explore all the features of SDKMAN!, visit the official site.\n","date":"November 7, 2020","image":"https://reflectoring.io/images/stock/0087-hammers-1200x628-branded_hue88066370a6d49ee72c2154c6588aa11_128814_650x0_resize_q90_box.jpg","permalink":"/manage-jdks-with-sdkman/","title":"Managing Multiple JDK Installations With SDKMAN!"},{"categories":["Spring Boot"],"contents":"The Twelve-Factor App is a set of guidelines for building cloud-native applications.
By cloud-native, we will mean an application that is portable across environments, easy to update, and scalable enough to take advantage of the elastic capabilities of the cloud.\nThese twelve factors contain best practices on managing configuration data, abstracting library dependencies and backing services, log streaming, and administration.\nToday\u0026rsquo;s frameworks and methods already adhere to many of these principles by design, while some are supported by running the applications inside containers.\nSpring Boot is a popular framework for building microservice applications. In this article, we will look at the changes required to make a Spring Boot application adhere to the twelve factors.\nGoals of the Twelve Factors A common theme running through all the twelve principles is making the application portable to meet the demands of a dynamic environment provisioning typical of cloud platforms. The goals of the Twelve-Factor App as asserted in the documentation are:\n Using declarative formats to automate the setup. Maximizing portability across execution environments Suitable for deployment in Cloud Platforms Minimizing divergence between development and production by enabling continuous deployment for maximum agility Ability to scale up without significant changes to tooling, architecture, or development practices.  We will see these principles in action by applying them to a Spring Boot application.\n1. Codebase - Single Codebase Under Version Control for All Environments  One codebase tracked in revision control, many deploys.\n This helps to establish clear ownership of an application with a single individual or group. The application has a single codebase that evolves with new features, defect fixes, and upgrades to existing features. 
The application owners are accountable for building different versions and deploying to multiple environments like test, stage, and production during the lifetime of the application.\nThis principle advocates having a single codebase that can be built and deployed to multiple environments. Each environment has specific resource configurations like database, configuration data, and API URLs. To achieve this, we need to separate all the environment dependencies into a form that can be specified during the build and run phases of the application.\nThis helps to achieve the first two goals of the Twelve-Factor App - maximizing portability across environments using declarative formats.\nFollowing this principle, we\u0026rsquo;ll have a single Git repository containing the source code of our Spring Boot application. This code is compiled and packaged and then deployed to one or more environments.\nWe configure the application for a specific environment at runtime using Spring profiles and environment-specific properties.\nWe\u0026rsquo;re breaking this rule if we have to change the source code to configure it for a specific environment or if we have separate repositories for different environments like development and production.\n2. Dependencies  Explicitly declare and isolate dependencies.\n Dependencies provide guidelines for reusing code between applications. While the reusable code itself is maintained as a single codebase, it is packaged and distributed in the form of libraries to multiple applications.\nThe most likely dependencies of an application are open-source libraries or libraries built in-house by other teams. Dependencies could also take the form of specific software installed on the host system. We declare dependencies in external files leveraging the dependency management tools of the platform.\nFor the Spring Boot application, we declare the dependencies in a pom.xml file (or build.gradle if we use Gradle). 
Here is an example of a Spring Boot application using spring-boot-starter-web as one of its dependencies:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-web\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; This principle is an evolution from an earlier practice of sharing libraries across applications by storing them in a shared classpath. That approach introduced a coupling with the configuration of the host system.\nThe declarative style of specifying dependencies removes this coupling.\nIn the context of Spring Boot, a dependency management tool like Maven or Gradle gives us:\n Versioning by declaring specific versions of the dependencies with which our application works, and Isolation by bundling dependencies with the application.  3. Config - Externalizing Configuration Properties  Store config in the environment.\n Ideally, the environments are dynamically provisioned in the cloud, so very little information is available while building the application.\nIsolating configuration properties into environment variables makes it easier and faster to deploy the application to different environments without any code changes.\nA few examples of configuration data are database connection URLs and credentials, and URLs of services on which an application depends. These most often have different values across environments. If they are hard-coded in the code or in property files bundled with the application, we need to update the application to deploy it to different environments.\nInstead, a better approach is to externalize the configuration using environment variables. The values of the environment variables are provided at runtime.
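The override pattern can be illustrated without any framework. The sketch below is plain Java, not taken from the article's example project; the variable name DATABASE_URL and the fallback URL are made up for this illustration:

```java
// Minimal sketch of twelve-factor config lookup: an environment
// variable, when present, overrides the default bundled with the app.
public class ConfigLookup {

    // Returns the environment value if set and non-empty, otherwise the
    // default that would normally come from a packaged property file.
    static String resolve(String envVar, String bundledDefault) {
        String fromEnv = System.getenv(envVar);
        return (fromEnv != null && !fromEnv.isBlank()) ? fromEnv : bundledDefault;
    }

    public static void main(String[] args) {
        // DATABASE_URL is a hypothetical variable name for this sketch.
        System.out.println(resolve("DATABASE_URL", "jdbc:h2:mem:testdb"));
    }
}
```

This mirrors, in miniature, what Spring Boot already does for us when it lets environment variables override values declared in property files.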
We can provide the values from the command line if the application is run standalone.\nThe default behavior in Spring Boot applications is to apply the values from environment variables to override any values declared in property files. We can use configuration properties to use the configuration parameters in the code.\n4. Backing Services - Pluggable Data Sources, and Queues  Treat backing services as attached resources.\n This principle provides flexibility to change the backing service implementations without major changes to the application.\nPluggability can be best achieved by using an abstraction like JPA over an RDBMS data source and using configuration properties (like a JDBC URL) to configure the connection.\nThis way, we can just change the JDBC URL to swap out the database. And we can swap out the underlying database by changing the dependency. A snippet of a dependency on H2 database looks like this:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-data-jpa\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;com.h2database\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;h2\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;runtime\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; We can easily replace the H2 database with any other RDBMS like Oracle or MySQL. Similar to JPA, we can use JMS for messaging and SMTP for mails.\n5. Build, Release, Run - Leverage Containers for the Development Workflow  Strictly separate build and run stages.\n We should keep the stages for build, release, and run as separate. This separation is important to maintain application fidelity and integrity.\nThese stages occur in a sequence. 
Each stage has a different objective and produces output that is propagated to the subsequent stage.\nAny code changes, including emergency fixes, should happen in the build stage and follow an established release cycle before being promoted to production. Violating this principle, for example by making a fix directly in a production environment, however small, makes it difficult to propagate the change back to the build stage, disturbs existing branches, and above all increases the risk and overall cost of the release process.\nFor Spring Boot applications, this is easy to achieve with the development workflow for containers:\n Build: we compile the source code and build a Docker image. Release: we tag the image and push it to a registry. Run: we pull the image from the registry and run it as a container instance.  If we are using containers to package and run our application, no application changes are required to adhere to this Twelve-Factor App principle.\n6. Processes - Stateless Applications  Execute the app as one or more stateless processes.\n Stateless processes give the application the ability to scale out quickly to handle a sudden increase in traffic and scale in when the traffic to the system decreases. To make it stateless, we need to store all data outside the application.\nSpring Boot applications execute as a Java process on the host system or inside a container runtime environment like Docker. This principle advocates that the processes should be stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service like a database.\nThis is a shift from the method of using “sticky sessions” in web applications that cache user session data in the memory of the application\u0026rsquo;s process and expect future requests from the same session to be routed to the same process.\nSticky sessions are a violation of twelve-factor.
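To make the "state outside the process" idea concrete, here is a toy sketch of a time-expiring session store. It is purely illustrative plain Java; in a real twelve-factor deployment this data would live in an external backing service, not inside the JVM:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy illustration of time-expiring session storage. Only the expiration
// contract matters here: entries vanish once their TTL has elapsed.
public class ExpiringSessionStore {

    private static class Entry {
        final String value;
        final long expiresAtMillis;
        Entry(String value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();

    public void put(String sessionId, String value, long ttlMillis) {
        entries.put(sessionId, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // Returns null when the session is unknown or its TTL has elapsed.
    public String get(String sessionId) {
        Entry entry = entries.get(sessionId);
        if (entry == null || System.currentTimeMillis() > entry.expiresAtMillis) {
            entries.remove(sessionId);
            return null;
        }
        return entry.value;
    }
}
```

Out-of-process stores offer the same contract without tying a session to a single application instance.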
Session state data should be stored outside the application in a datastore that offers time-expiration, such as Memcached or Redis.\n7. Port Binding - Port Defined as Environment Property  Export services via port binding.\n Port Binding refers to an application binding itself to a particular port and listening to all the requests from interested consumers on that port. The port is declared as an environment variable and provided during execution.\nApplications built following this principle do not depend on a web server. The application is completely self-contained and executes standalone. The web server is packaged as a library and bundled with the application.\nPort binding is one of the fundamental requirements for microservices to be autonomous and self-contained.\nSpring Boot embeds Tomcat in applications and exports HTTP as a service by binding to a port and listening to incoming requests to that port.\nWe can configure the port by setting the server.port configuration property. The default value is 8080.\n8. Concurrency - Stateless Applications Help to Scale Out  Scale out via the process model.\n Traditionally, whenever an application reached the limit of its capacity, the solution was to increase its capacity by adding RAM, CPU, and other resources - a process called vertical scaling.\nHorizontal scaling or \u0026ldquo;scaling out\u0026rdquo;, on the other hand, is a more modern approach, meant to work well with the elastic scalability of cloud environments. Instead of making a single process even larger, we create multiple processes and then distribute the load of our application among those processes.\nSpring Boot does not help us much with this factor. We have to make sure that our application is stateless, and thus can be scaled out to many concurrent workers to support the increased load. All kinds of state should be managed outside the application.\nAnd we also have to make sure to split our applications into multiple smaller applications (i.e. 
microservices) if we want to scale certain processes independently. Scaling is taken care of by container orchestration systems like Kubernetes and Docker Swarm.\n9. Disposability - Leverage Ephemeral Containers  Maximize robustness with fast startup and graceful shutdown.\n Disposability in an application allows it to be started or stopped rapidly.\nThe application cannot scale, deploy, or recover rapidly if it takes a long time to get into a steady state and shut down gracefully. If our application is under increasing load and we need to bring up more instances to handle that load, any delay in startup could mean denied requests while the application is starting up.\nSpring Boot applications should be run inside containers to make them disposable. Containers are ephemeral and can be started or stopped at any moment.\nSo it is important to minimize the startup time and ensure that the application shuts down gracefully when the container stops. Startup time is minimized with lazy initialization of dependent resources and by building optimized container images.\n10. Dev/Prod Parity - Build Once - Ship Anywhere  Keep development, staging, and production as similar as possible.\n The purpose of dev/prod parity is to ensure that the application will work in all environments, ideally with no changes.\nMovement of code across environments has traditionally been a major factor slowing down development velocity. This resulted from a difference in the infrastructure used for development and production.\nContainers made it possible to build once and ship to multiple target environments. They also allow us to package all the dependencies, including the OS.\nSpring Boot applications are packaged in Docker containers and pushed to a Docker registry. Apart from using a Dockerfile to create a Docker image, Spring Boot provides plugins for building OCI images from source with Cloud Native Buildpacks.\n11.
Logs - Publish Logs as Event Streams  Treat logs as event streams.\n The application should only produce logs as a sequence of events. In cloud environments, we have limited knowledge about the instances running the application. The instances can also be created and terminated, for example during elastic scaling.\nA diagnostic process based on logs stored in the file systems of the host instances would be tedious and error-prone.\nSo the responsibility of storing, aggregating, and shipping logs to other systems for further analysis should be delegated to purpose-built software or observability services available in the underlying cloud platform.\nAlso, simplifying the application’s log emission process allows us to reduce our codebase and focus more on the application’s core business value.\nSpring Boot logs only to the console by default and does not write log files. It is preconfigured with Logback as the default logger implementation.\nLogback has a rich ecosystem of log appenders, filters, and shippers, and thus supports many monitoring and visualization tools. All of this is elaborated in configuring logging in Spring Boot.\n12. Admin Processes - Built as API and Packaged with the Application  Run admin/management tasks as one-off processes.\n Most applications need to run one-off tasks for administration and management. The original recommendation emphasizes using programmatic interactive shells (REPL), which are more suited to languages like Python. However, this needs to be adapted to align with current development practices.\nExamples of administrative tasks include database scripts to initialize the database or scripts for fixing bad records.
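One way to picture such one-off tasks shipping inside the application artifact is a small dispatcher with its own entry point. This is a hypothetical sketch; the task names and their bodies are invented for illustration:

```java
import java.util.Map;

// Sketch of dispatching one-off admin tasks that are packaged and
// released together with the application (task bodies are placeholders).
public class AdminTasks {

    private static final Map<String, Runnable> TASKS = Map.of(
        "init-db", () -> System.out.println("running database init script"),
        "fix-records", () -> System.out.println("repairing bad records")
    );

    // Runs the named task once; returns false for unknown task names.
    static boolean runOnce(String name) {
        Runnable task = TASKS.get(name);
        if (task == null) {
            return false;
        }
        task.run();
        return true;
    }

    public static void main(String[] args) {
        if (args.length == 1 && runOnce(args[0])) {
            return;
        }
        System.err.println("usage: AdminTasks <task-name>, one of " + TASKS.keySet());
    }
}
```

A real project would expose these as proper endpoints or CLI commands; the point is that the tasks live in the same codebase and artifact as the application itself.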
In keeping with the Twelve-Factor App\u0026rsquo;s original goals of building for maximum portability, this code should be packaged with the application and released together, and also run in the same environment.\nIn a Spring Boot application, we should expose administrative functions as separate endpoints that are invoked as one-off processes. Adding functions to execute one-off processes will go through the build, test, and release cycle.\nConclusion We looked at the Twelve-Factor principles for building a cloud-native application with Spring Boot. The following table summarizes what we have to do and what Spring Boot does for us to follow the twelve factors:\n   Factor What do we have to do?     Codebase Use one codebase for all environments.   Dependencies Declare all the dependencies in pom.xml or build.gradle.   Config Externalize configuration with environment variables.   Backing Services Build pluggable services by using abstractions like JPA.   Build/Release/Run Build and publish a Docker image.   Processes Build stateless services and store all state information outside the application, for example in a database.   Port Binding Configure port with the server.port environment variable.   Concurrency Build smaller stateless applications (microservices).   Disposability Package the application in a container image.   Dev/prod parity Build container images and ship to multiple environments.   Logs Publish logs to a central log aggregator.   Admin Processes Build one-off processes as API endpoints.    ","date":"November 5, 2020","image":"https://reflectoring.io/images/stock/0086-twelve-1200x628-branded_hu8efc97b23dc597652d6e5cd830cecfe5_132771_650x0_resize_q90_box.jpg","permalink":"/spring-boot-12-factor-app/","title":"12 Factor Apps with Spring Boot"},{"categories":["Java"],"contents":"As developers, we\u0026rsquo;re often working on different codebases at the same time. 
Especially in environments with microservices, we may be switching codebases multiple times a day.\nIn the days when a new Java version was published every couple of years, this was often not a problem, because most codebases needed the same Java version.\nThis changed when the Java release cadence changed to every 6 months. Today, if we\u0026rsquo;re working on multiple codebases, chances are that each codebase is using a different Java version.\njEnv is a tool that helps us to manage multiple JDK installations and configure each codebase to use a specific JDK version without having to change the JAVA_HOME environment variable.\nMake sure to check out the article about SDKMAN!, an alternative tool for managing JDKs (and other tools).\nInstalling jEnv jEnv supports Linux and MacOS operating systems. If you\u0026rsquo;re working with Windows, you\u0026rsquo;ll need to install the Windows Subsystem for Linux (or a bash emulator like GitBash) to use it.\nFollow the installation instructions on the jEnv homepage to install jEnv.\nInstalling a JDK If you\u0026rsquo;re reading this article, chances are that you want to set up a new JDK for a codebase you\u0026rsquo;re working on. Let\u0026rsquo;s download a JDK from the AdoptOpenJDK website.\nChoose the version you want and download it. Extract the .tar.gz file wherever you want.\nThe good thing about jEnv is that we don\u0026rsquo;t need to install the JDK via a package manager like brew, yum, or apt. We can just download a JDK and put it into a folder somewhere.\nYou can still use brew, yum, or apt to install your JDKs, you just need to find out the folder where your package manager has put the JDK afterward.\nAdding a JDK to jEnv To use the new JDK with jEnv, we need to tell jEnv where to find it. 
Let\u0026rsquo;s check first which versions of the JDK jEnv already knows about with the command jenv versions:\n* system (set by /home/tom/.jenv/version) 11 11.0 11.0.8 13 13.0 13.0.2 14 14.0 14.0.2 openjdk64-11.0.8 openjdk64-13.0.2 openjdk64-14.0.2 In my case, I have the JDKs 11, 13, and 14 already installed. Each version is available under three different names.\nLet\u0026rsquo;s say we\u0026rsquo;ve downloaded JDK 15 and extracted it into the folder ~/software/java/jdk-15+36.\nNow, we add the new JDK to jEnv:\njenv add /home/tom/software/java/jdk-15+36/ If we run jenv versions again, we get the following output:\n 11 11.0 11.0.8 13 13.0 13.0.2 14 14.0 14.0.2 15 openjdk64-11.0.8 openjdk64-13.0.2 openjdk64-14.0.2 openjdk64-15 The JDK 15 has been added under the names 15 and openjdk64-15.\nLocal vs. Global JDK jEnv supports the notion of a global JDK and multiple local JDKs.\nThe global JDK is the JDK that will be used if we type java into the command line anywhere on our computer.\nA local JDK is a JDK that is configured for a specific folder only. If we type java into the command line in this folder, it will not use the global JDK, but the local JDK instead.\nWe can use this to configure different JDKs for different projects (as long as they live in different folders).\nSetting the Global JDK First, we check the version of the global JDK:\njenv global The output in my case is:\nsystem This means that the system-installed JDK will be used as a global JDK. The name system is not very helpful because it doesn\u0026rsquo;t say which version it is. Let\u0026rsquo;s change the global JDK to a more meaningful JDK with a version number:\njenv global 11 This command has changed the globally used JDK version to 11. In my case, this was the same version as before, but if I type jenv global, I will now see which JDK version is my global version.\nSetting the Local JDK Remember the JDK 15 we\u0026rsquo;ve downloaded? 
The reason we downloaded it is probably that we\u0026rsquo;re working on a new project that needs JDK 15 to run.\nLet\u0026rsquo;s say this project lives in the folder ~/shiny-project. Let\u0026rsquo;s cd into this folder.\nIf I type java -version now, I get the following result:\nopenjdk version \u0026#34;11.0.8\u0026#34; 2020-07-14 OpenJDK Runtime Environment (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1) OpenJDK 64-Bit Server VM (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1, mixed mode, sharing) That is because JDK 11 is my global JDK.\nLet\u0026rsquo;s change it to JDK 15 for this project:\njenv local 15 Now, type java -version again, and the output will be:\nopenjdk version \u0026#34;15\u0026#34; 2020-09-15 OpenJDK Runtime Environment AdoptOpenJDK (build 15+36) OpenJDK 64-Bit Server VM AdoptOpenJDK (build 15+36, mixed mode, sharing) Calling java in this folder will now always call Java 15 instead of Java 11.\nHow does this work?\nAfter using the jenv local command, you\u0026rsquo;ll find a file called .java-version in the current folder. This file contains the version number of the local JDK.\nDuring installation, jEnv overrides the java command. Each time we call java now, jEnv looks for a .java-version file and if it finds one, starts the JDK version defined in that file. If it doesn\u0026rsquo;t find a .java-version file, it starts the globally configured JDK instead.\nWorking with Maven and Gradle So, if we call java via the command line, it will pick up a locally configured JDK now. Great!\nBut tools like Maven or Gradle still use the system version of the JDK!\nLet\u0026rsquo;s see what we can do about that.\nConfigure jEnv to Work With Maven Making Maven work with the local JDK defined by jEnv is easy. 
We just need to install the maven plugin:\njenv enable-plugin maven\nIf we run mvn -version in our ~/shiny-project folder from above now, we\u0026rsquo;ll get the following output:\nMaven home: .../apache-maven-3.6.3 Java version: 15, vendor: AdoptOpenJDK, runtime: /home/tom/software/java/jdk-15+36 Default locale: en_AU, platform encoding: UTF-8 OS name: \u0026#34;linux\u0026#34;, version: \u0026#34;5.4.0-52-generic\u0026#34;, arch: \u0026#34;amd64\u0026#34;, family: \u0026#34;unix\u0026#34; Maven is using the new JDK 15 now. Yay!\nConfigure jEnv to Work With Gradle In my case, Gradle picked up jEnv\u0026rsquo;s locally configured JDK automatically!\nIf it doesn\u0026rsquo;t work out of the box for you, you can install the gradle plugin analogously to the Maven plugin above:\njenv enable-plugin gradle If we run gradle -version in our ~/shiny-project folder from above now, we\u0026rsquo;ll get the following output:\n------------------------------------------------------------ Gradle 6.5 ------------------------------------------------------------ Build time: 2020-06-02 20:46:21 UTC Revision: a27f41e4ae5e8a41ab9b19f8dd6d86d7b384dad4 Kotlin: 1.3.72 Groovy: 2.5.11 Ant: Apache Ant(TM) version 1.10.7 compiled on September 1 2019 JVM: 15 (AdoptOpenJDK 15+36) OS: Linux 5.4.0-52-generic amd64 Not Picking the Right Java Version? Depending on your context, jenv might not pick the right Java version and you might end up with errors that complain about Java incompatibilities even though you have set a local Java version using jenv local correctly.\nIn this case, you might need to enable the export plugin, which sets the JAVA_HOME variable properly:\njenv enable-plugin export Now, when you run a command like ./gradlew ... or ./mvnw ..., it should pick the correct Java version.\nMore troubleshooting tips can be found on the official troubleshooting page.\nConclusion jEnv is a handy tool to manage multiple JDK versions between different projects. 
With jenv local \u0026lt;version\u0026gt; we can configure a JDK version to be used in the current folder.\n","date":"October 24, 2020","image":"https://reflectoring.io/images/stock/0085-numbers-1200x628-branded_hu44f6c2408174d2b728597d092071bb42_195252_650x0_resize_q90_box.jpg","permalink":"/manage-jdks-with-jenv/","title":"Managing Multiple JDK Installations With jEnv"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you feel you\u0026rsquo;re busy, but not productive you\u0026rsquo;re stretched thin between many different commitments you need arguments to say \u0026ldquo;no\u0026rdquo; to the next person asking you to do something  Book Facts  Title: Essentialism Authors: Greg McKeown Word Count: ~ 90.000 (6 hours at 250 words / minute) Reading Ease: easy Writing Style: conversational, bite-sized chapters, easy to follow  Overview {% include book-link.html book=\u0026ldquo;essentialism\u0026rdquo; %} shows the way of what the author calls an \u0026ldquo;Essentialist\u0026rdquo;. Essentialists don\u0026rsquo;t spend time and energy on the non-essentials in their lives, freeing them up to focus on the essentials.\nThe book is written in an easy-to-read manner, with short chapters that one can read in a lunch break. 
The chapters are each titled with a single succinct verb, in essentialist manner.\nNotes Here are my notes, as usual with some comments in italics.\nTrade-Off  \u0026ldquo;Saying yes to any opportunity by definition requires saying no to several others.\u0026rdquo; an essentialist strategy is based on principled decisions / trade-offs act or be acted upon  Escape  take time to think through options before committing to the best option create a regular space and time free of distractions to do deep work (like thinking about your options and deciding which to take) \u0026ldquo;The faster and busier things get, the more we need to build thinking time into our schedule.\u0026rdquo;  Look  filter out the noise in everything, look for signal maintain a journal and revisit it from time to time to check that you\u0026rsquo;re still doing your essentials ask questions (or the same question again and again) to gain clarity on your direction  Play  play fosters creativity and exploration relax every once in a while  Sleep  \u0026ldquo;The best asset we have for making a contribution to the world is ourselves.\u0026rdquo; protect your best asset enough sleep supports creativity and allows us to make better decisions  Select  having very selective criteria to make decisions makes the decisions easier (either \u0026ldquo;Hell yeah\u0026rdquo;, or \u0026ldquo;No!\u0026rdquo; - also see the book with this title by Derek Sivers) give options a rating between 0 and 100 - everything below 90 is out make your criteria explicit to make decisions almost automatic - write the criteria down ask yourself:  \u0026ldquo;what am I passionate about?\u0026rdquo; \u0026ldquo;what taps my talent?\u0026rdquo; \u0026ldquo;what meets a significant need in the world?\u0026rdquo;    Clarify  when deciding, ask yourself \u0026ldquo;If I didn\u0026rsquo;t have this opportunity, what would I be willing to do to acquire it?\u0026rdquo; (we often value what we already have more than what we 
don\u0026rsquo;t have, so this question might reduce this bias) people thrive when they have a high level of clarity on goals and roles find your essential intents to guide your decisions essential intents are inspirational, concrete, meaningful, and measurable  Dare  when you feel tension between what you feel is right and what someone expects of you, say \u0026ldquo;no\u0026rdquo; \u0026ldquo;Courage is the key to the process of elimination.\u0026rdquo; separate saying \u0026ldquo;no\u0026rdquo; to someone from the relationship to that person think about what you\u0026rsquo;re giving up when you say \u0026ldquo;yes\u0026rdquo; a \u0026ldquo;no\u0026rdquo; gains respect in the long run be slow with a \u0026ldquo;yes\u0026rdquo; and be quick with a \u0026ldquo;no\u0026rdquo;  Uncommit  uncommit from unfruitful projects - don\u0026rsquo;t fall for the sunk-cost bias the endowment effect makes it hard for us to let go of things - pretend you don\u0026rsquo;t own the things you\u0026rsquo;re attached to admit failure to let go of projects pause before answering a request do a \u0026ldquo;reverse pilot\u0026rdquo; to get rid of commitments - try some time without doing it and see what happens (most of the time nothing bad happens and we can let go of the commitment)  Edit  editing is the process of removing things to make something better edit the non-essential things out of your work and life, even if you\u0026rsquo;ve put considerable effort into them make editing a habit to regularly correct your path (a great way of doing this is having a weekly distraction-free \u0026ldquo;rendezvous with yourself\u0026rdquo; and reviewing all the areas of your life for non-essential commitments)  Limit  clear boundaries empower us to concentrate on our goals instead of other people\u0026rsquo;s goals articulate your boundaries to make them clear to yourself write down each time you feel a boundary has been crossed to make boundary violations visible clarify your boundaries 
with colleagues before starting a project together  Buffer  things inevitably take longer than expected - build in a buffer start working on something at the earliest possible moment, not the latest, even if it\u0026rsquo;s just a few minutes of thinking add 50% to all estimations (we should make this a hard rule in software estimations!)  Subtract  invest time in removing obstacles just as you\u0026rsquo;re investing time in essential tasks remove the \u0026ldquo;slowest hiker\u0026rdquo; first (i.e. help the slowest person in a hiking group to make the whole group faster) - what\u0026rsquo;s the obstacle that slows you down most?  Progress  create small goals to make execution almost effortless celebrate small wins \u0026ldquo;Of all forms of human motivation, the most effective is progress.\u0026rdquo; start small and build momentum (this is also good advice for starting a bootstrapped business) follow the \u0026ldquo;minimum viable progress\u0026rdquo; - what\u0026rsquo;s the minimum step towards a goal? 
start \u0026ldquo;early and small\u0026rdquo; instead of \u0026ldquo;late and big\u0026rdquo; \u0026ldquo;Just a few seconds of preparation pay a valuable dividend.\u0026rdquo; visually reward progress (the easiest way is to check off a todo list, but you can be more creative about it)  Flow  don\u0026rsquo;t \u0026ldquo;push through\u0026rdquo; - instead, design a routine that makes execution almost effortless \u0026ldquo;Routine is one of the most powerful tools for removing obstacles.\u0026rdquo; connect existing cues to new routines (also see my book notes for \u0026ldquo;The Power of Habit\u0026rdquo;) do the hardest thing first if you\u0026rsquo;re working on multiple projects, have a theme for each day so your focus for the day is clear  Focus  don\u0026rsquo;t let your mind wander to past failures and successes or future challenges and opportunities - stay in the \u0026ldquo;now\u0026rdquo; to focus write down ideas to \u0026ldquo;get the future out of your head\u0026rdquo; take note of moments when you are fully present in the moment and try to re-create them  Be  clear out the \u0026ldquo;wardrobe of your life\u0026rdquo; to gain clarity (wardrobes are notoriously full of things we no longer need) pause, push back, stop rushing, take control live the moment whenever faced with a decision, ask yourself \u0026ldquo;what is essential?\u0026rdquo;  Conclusion The book is definitely worth a read, giving inspiration to re-think your decisions and plan your future decisions. 
I\u0026rsquo;ll be asking myself more often what really is essential in my life and what isn\u0026rsquo;t.\n","date":"October 21, 2020","image":"https://reflectoring.io/images/covers/essentialism-teaser_hubb23b73c1a6545ea65609fe952b6fc2b_173791_650x0_resize_q90_box.jpg","permalink":"/book-review-essentialism/","title":"Book Notes: Essentialism"},{"categories":["Spring Boot"],"contents":"If you want to integrate extensive full-text search features in your Spring Boot application without having to make major changes, Hibernate Search may be a way to go.\n Example Code This article is accompanied by a working code example on GitHub. Introduction Adding full-text search functionality with Hibernate Search is as easy as adding a dependency and a couple of annotations to your entities.\nWell, this is an oversimplification of the process, but yes, it\u0026rsquo;s easy.\nHibernate Search provides integration with Lucene and Elasticsearch which are highly optimized for full-text search. While Lucene and Elasticsearch handle searches, Hibernate Search provides seamless integration between them and Hibernate.\nWe only need to tell Hibernate Search which entities to index.\nThis kind of setup allows us to redirect our text-based queries to search frameworks and standard SQL queries to our RDBMS database.\nSetting Things Up To get started first we need to add the Hibernate Search dependency (Gradle notation):\nimplementation \u0026#39;org.hibernate:hibernate-search-orm:5.11.5.Final\u0026#39; For this tutorial, we\u0026rsquo;re going to use the Elasticsearch integration. 
The motivation is that it\u0026rsquo;s far easier to scale with Elasticsearch than with Lucene.\nimplementation \u0026#39;org.hibernate:hibernate-search-elasticsearch:5.11.5.Final\u0026#39; Also, we will need to add the following properties to our application.yml file:\nspring: jpa: properties: hibernate: search: default: indexmanager: elasticsearch elasticsearch: host: \u0026lt;Elasticsearch-url\u0026gt; index_schema_management_strategy: drop-and-create required_index_status: yellow A few things to note here:\n default means the following configurations apply to all the indexes. Hibernate Search allows us to apply configurations to a specific index, too. In this case, default must be replaced with the fully qualified class name of the indexed entity. The above configurations are common for all indexes. required_index_status indicates the safest status of the index after which further operations can be performed. The default value is green. If your Elasticsearch setup doesn\u0026rsquo;t have the required number of nodes, the index status will be yellow. Further properties and their details can be found in the Hibernate Search docs.  One more thing to note here is that Hibernate Search v.5 only supports Elasticsearch up to v.5.2.x, though I have been using it with v.6.8, and it\u0026rsquo;s working just fine.\nIf you are using or planning on using Elasticsearch v.7, you might want to use Hibernate Search v.6, which is still in beta at the time of this writing.\nIf you choose to stick with Lucene (which is the default integration), you can still follow along as the APIs are almost identical across integrations.\nHow Does Hibernate Search Work? 
Let\u0026rsquo;s have a look at how Hibernate Search works in general.\nFirst, we need to tell Hibernate what entities we want to index.\nWe can also tell Hibernate how to index the fields of those entities using analyzers and normalizers.\nThen, when we boot up the application, Hibernate will either create, update, or validate index mappings in Elasticsearch, depending on our selected index_schema_management_strategy.\nOnce the application has started, Hibernate Search will keep track of any operations performed on the entities and will apply the same operations to the corresponding indexes in Elasticsearch.\nOnce we have loaded some data into the indexes, we can perform search queries using the Hibernate Search APIs.\nAt search time, Hibernate Search will again apply the same analyzers and normalizers that were used during indexing.\nSome Important Terms Text and Keyword A String field can be mapped to either the text or the keyword type of Elasticsearch.\nThe primary difference between text and a keyword is that a text field will be tokenized while a keyword will not.\nWe can use the keyword type when we want to perform filtering or sorting operations on the field.\nFor instance, let\u0026rsquo;s assume that we have a String field called body, and let\u0026rsquo;s say it has the value \u0026lsquo;Hibernate is fun\u0026rsquo;.\nIf we choose to treat body as text, it will be tokenized into [\u0026lsquo;Hibernate\u0026rsquo;, \u0026lsquo;is\u0026rsquo;, \u0026lsquo;fun\u0026rsquo;] and we will be able to perform queries like body: Hibernate.\nIf we make it a keyword type, a match will only be found if we pass the complete text body: Hibernate is fun (a wildcard will work, though: body: Hibernate*).\nElasticsearch supports numerous other types.\nAnalyzers and Normalizers Analyzers and normalizers are text analysis operations that are performed on text and keyword respectively, before indexing them and searching for them.\nWhen an analyzer is applied on text, it first tokenizes the text and then applies one or more filters such as a lowercase filter (which converts all the text to lowercase) or a stop word filter (which removes common English stop words such as \u0026lsquo;is\u0026rsquo;, \u0026lsquo;an\u0026rsquo;, \u0026lsquo;the\u0026rsquo;, etc.).\nNormalizers are similar to analyzers with the difference that normalizers don\u0026rsquo;t apply a tokenizer.\nOn a given field we can either apply an analyzer or a normalizer.\nTo summarize:\n   Text Keyword     Is tokenized Cannot be tokenized   Is analyzed Can be normalized   Can perform term-based search Can only match exact text    Preparing Entities For Indexing As mentioned in the introduction, to index entities we just need to annotate the entities and their fields with a couple of annotations.\nLet\u0026rsquo;s have a look at those annotations.\n@Indexed Annotation @Entity @Indexed(index = \u0026#34;idx_post\u0026#34;) class Post { .... } As the name suggests, with @Indexed we make this entity eligible for indexing. We have also given the index the name idx_post, which is not required.\nBy default, Hibernate Search will use the fully qualified class name as the index name.\nWith the @Entity annotation from JPA, we map a class to a database table and its fields to the table columns.\nSimilarly, with @Indexed we map a class to an Elasticsearch index and its fields to the document fields in the index (an index is a collection of JSON documents).\nIn the case of @Entity, we have a companion annotation called @Column to map fields, while in the case of @Indexed we have the @Field annotation to do the same.\n@Field Annotation We need to apply the @Field annotation on all the fields that we wish to search or sort by, or that we need for projection.\n@Field has several properties which we can set to customize its behavior. 
By default, it will exhibit the following behavior:\n @Field has a property called name which, when left empty, picks the name of the field on which the annotation is placed. Hibernate Search then uses this name to store the field\u0026rsquo;s value in the index document. Hibernate Search maps this field to Elasticsearch native types. For instance, a field of type String gets mapped to the text type, Boolean to the boolean type, and Date to the date type of Elasticsearch. Elasticsearch also applies a default analyzer on the value. The default analyzer first applies a tokenizer that splits text on non-alphanumeric characters and then applies the lowercase filter. For instance, if the hashTags field has the value \u0026lsquo;#Food#Health\u0026rsquo;, it will be internally stored as ['food', 'health'] after being analyzed.  @Analyzer @Field(name = \u0026#34;body\u0026#34;) @Field(name = \u0026#34;bodyFiltered\u0026#34;, analyzer = @Analyzer(definition = \u0026#34;stop\u0026#34;)) private String body; We can also apply multiple @Field annotations on a single field. Here we have given a different name to the field and have also provided a different analyzer.\nThis allows us to perform different kinds of search operations on the same entity field. We can also pass different analyzers using the analyzer property.\nHere, we have passed the stop value in the analyzer definition, which refers to a built-in Elasticsearch analyzer called \u0026ldquo;Stop Analyzer\u0026rdquo;. It removes common stop words (\u0026lsquo;is\u0026rsquo;, \u0026lsquo;an\u0026rsquo;, etc.) that aren\u0026rsquo;t very helpful while querying.\nHere\u0026rsquo;s a list of Elasticsearch\u0026rsquo;s other built-in analyzers.\n@Normalizer @Entity @Indexed(index = \u0026#34;idx_post\u0026#34;) @NormalizerDef(name = \u0026#34;lowercase\u0026#34;, filters = @TokenFilterDef(factory = LowerCaseFilterFactory.class)) class Post { ... 
@Field(normalizer = @Normalizer(definition = \u0026#34;lowercase\u0026#34;)) @Enumerated(EnumType.STRING) private Tag tag; ... } The tag field, which is an enum, will mostly consist of a single word. We don\u0026rsquo;t need to analyze such fields. So, instead, we can either set the analyze property of @Field to Analyze.NO or we can apply a normalizer. Hibernate will then treat this field as a keyword.\nThe \u0026lsquo;lowercase\u0026rsquo; normalizer that we have used here will be applied both at the time of indexing and searching. So, both \u0026lsquo;MOVIE\u0026rsquo; and \u0026lsquo;movie\u0026rsquo; will be a match.\n@Normalizer can apply one or more filters on the input. In the above example, we have only added the lowercase filter using LowerCaseFilterFactory, but if required we can also add multiple filters such as StopFilterFactory, which removes common English stop words, or SnowballPorterFilterFactory, which performs stemming on the word (stemming is the process of converting a given word to its base word, e.g. \u0026lsquo;Refactoring\u0026rsquo; gets converted to \u0026lsquo;Refactor\u0026rsquo;).\nYou can find a full list of other available filters in the Apache Solr docs.\n@SortableField @Field @SortableField private long likeCount; The @SortableField annotation is a companion annotation of @Field. When we add @SortableField to a field, Elasticsearch will optimize the index for sorting operations over those fields. We can still perform sorting operations over other fields that are not marked with this annotation, but that will incur a performance penalty.\nExclude a Field From Indexing @Field(index = Index.NO, store = Store.YES) private String middle; Index.NO indicates that the field won\u0026rsquo;t be indexed. We won\u0026rsquo;t be able to perform any search operation over it. You might be thinking \u0026ldquo;Why not simply remove the @Field annotation?\u0026rdquo;. 
And the answer is that we still need this field for projection.\nCombine Field Data @Field(store = Store.YES) @Field(name = \u0026#34;fullName\u0026#34;) private String first; @Field(store = Store.YES) @Field(name = \u0026#34;fullName\u0026#34;) private String last; In the section about @Analyzer, we saw that we can map one entity field to multiple index document fields. We can also do the inverse.\nIn the code above, @Field(name = \u0026quot;fullName\u0026quot;) is mapped to both first and last. This way, the index property fullName will have the content of both fields. So, instead of searching over the first and last fields separately, we can directly search over fullName.\nStore Property We can set store to Store.YES when we plan to use the field in a projection. Note that this will require extra space. Plus, Elasticsearch already stores the value in the _source field (you can find more on the source field in the Elasticsearch docs). So, the only reason to set the store property to Store.YES is when we don\u0026rsquo;t want Elasticsearch to look up and extract the value from the _source field.\nWe need to set store to Store.YES when we set Index.NO though, or else Elasticsearch won\u0026rsquo;t store it at all.\n@IndexedEmbedded and @ContainedIn @Entity @Indexed(index = \u0026#34;idx_post\u0026#34;) class Post { ... @ManyToOne @IndexedEmbedded private User user; ... } We use @IndexedEmbedded when we want to perform a search over nested object fields. For instance, let\u0026rsquo;s say we want to search all posts made by a user with the first name \u0026lsquo;Joe\u0026rsquo; (user.first: joe).\n@Entity @Indexed(index = \u0026#34;idx_user\u0026#34;) class User { ... @ContainedIn @OneToMany(mappedBy = \u0026#34;user\u0026#34;) private List\u0026lt;Post\u0026gt; post; } @ContainedIn makes a @OneToMany relationship bidirectional. 
When the values of this entity are updated, its values in the index of the root Post entity will also be updated.\nLoading Current Data Into Elasticsearch Before we perform any queries, we first need to load data into Elasticsearch:\n@Service @RequiredArgsConstructor @Slf4j class IndexingService { private final EntityManager em; @Transactional public void initiateIndexing() throws InterruptedException { log.info(\u0026#34;Initiating indexing...\u0026#34;); FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(em); fullTextEntityManager.createIndexer().startAndWait(); log.info(\u0026#34;All entities indexed\u0026#34;); } } We can call the initiateIndexing() method either at the application startup or create an API in a REST controller to call it.\ncreateIndexer() also takes in class references as input. This gives us more choice over which entities we want to index.\nThis is going to be a one-time thing. After this, Hibernate Search will keep entities in both sources in sync. Unless of course for some reason our database goes out of sync with Elasticsearch in which case this indexing API might come in handy again.\nPerforming Queries With Elasticsearch integration we have two choices for writing queries:\n Hibernate Search query DSL: a nice way to write Lucene queries. If you are familiar with Specifications and the Criteria API you will find it easy to get your head around it. Elasticsearch query: Hibernate Search supports both Elasticsearch native queries and JSON queries.  
In this tutorial, we are only going to look at the Hibernate Search query DSL.\nKeyword Query Now let\u0026rsquo;s say we want to write a query to fetch all records from idx_post where either body or hashTags contains the word \u0026lsquo;food\u0026rsquo;:\n@Component @Slf4j @RequiredArgsConstructor public class SearchService { private final EntityManager entityManager; public List\u0026lt;Post\u0026gt; getPostBasedOnWord(String word){ FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(entityManager); QueryBuilder qb = fullTextEntityManager .getSearchFactory() .buildQueryBuilder() .forEntity(Post.class) .get(); Query foodQuery = qb.keyword() .onFields(\u0026#34;body\u0026#34;,\u0026#34;hashTags\u0026#34;) .matching(word) .createQuery(); FullTextQuery fullTextQuery = fullTextEntityManager .createFullTextQuery(foodQuery, Post.class); return (List\u0026lt;Post\u0026gt;) fullTextQuery.getResultList(); } } Let\u0026rsquo;s go through this code example:\n First, we create an object of FullTextEntityManager, which is a wrapper over our EntityManager. Next, we create a QueryBuilder for the index on which we want to perform a search. We also need to pass the entity class object to it. We use a QueryBuilder to build our Query. Next, we make use of the keyword query keyword(), which allows us to look for a specific word in a field or fields. Then, we pass the word that we want to search for to the matching function. Lastly, we wrap everything in a FullTextQuery and fetch the result list by calling getResultList().  One thing to note here is that although we are performing a query on Elasticsearch, Hibernate will still fire a query on the database to fetch the full entity.\nThis makes sense because, as we saw in the previous section, we didn\u0026rsquo;t store all the fields of the Post entity in the index and those fields still need to be retrieved. 
If we only want to fetch what\u0026rsquo;s stored in our index anyway and think this database call is redundant, we can make use of a Projection.\nRange Queries Let\u0026rsquo;s retrieve all the posts whose likeCount is greater than 1000 and that optionally contain the \u0026lsquo;food\u0026rsquo; hashtag and the \u0026lsquo;Literature\u0026rsquo; tag:\npublic List\u0026lt;Post\u0026gt; getBasedOnLikeCountTags(Long likeCount, String hashTags, String tag){ FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(entityManager); QueryBuilder qb = fullTextEntityManager .getSearchFactory() .buildQueryBuilder() .forEntity(Post.class) .get(); Query likeCountGreater = qb.range() .onField(\u0026#34;likeCount\u0026#34;) .above(likeCount) .createQuery(); Query hashTagsQuery = qb.keyword() .onField(\u0026#34;hashTags\u0026#34;) .matching(hashTags) .createQuery(); Query tagQuery = qb.keyword() .onField(\u0026#34;tag\u0026#34;) .matching(tag) .createQuery(); Query finalQuery = qb.bool() .must(likeCountGreater) .should(tagQuery) .should(hashTagsQuery) .createQuery(); FullTextQuery fullTextQuery = fullTextEntityManager .createFullTextQuery(finalQuery, Post.class); fullTextQuery.setSort(qb.sort().byScore().createSort()); return (List\u0026lt;Post\u0026gt;) fullTextQuery.getResultList(); } For likeCount we are using a range query. Using only above() is equivalent to the \u0026gt;= operator. If we want to exclude the limits, we just call excludeLimit() after above().\nFor the other two fields, we have again used a keyword query.\nNow, it\u0026rsquo;s time to combine all the queries. To do so, we will make use of QueryBuilder\u0026rsquo;s bool() function, which provides us with verbs such as should(), must(), and not().\nWe have used must() for the likeCount query and should() for the rest, as they are optional. 
Optional queries wrapped in should() contribute to the relevance score.\nFuzzy And Wildcard Search Queries Query similarToUser = qb.keyword().fuzzy() .withEditDistanceUpTo(2) .onField(\u0026#34;first\u0026#34;) .matching(first) .createQuery(); Up until now, we used keyword queries to perform exact match searches, but combined with the fuzzy() function they enable us to perform fuzzy searches too.\nFuzzy search gives relevant results even if you have some typos in your query. It gives end-users some flexibility in terms of searching by allowing some degree of error. The threshold of the error to be allowed can be decided by us.\nFor instance, here we have set the edit distance to 2 (the default is also 2, by the way), which means Elasticsearch will match all the words with a maximum of 2 differences to the input. E.g., \u0026lsquo;jab\u0026rsquo; will match \u0026lsquo;jane\u0026rsquo;.\nQuery similarToUser = qb.keyword().wildcard() .onField(\u0026#34;first\u0026#34;) .matching(\u0026#34;s?ring*\u0026#34;) .createQuery(); While fuzzy queries allow us to search even when we have misspelled words in our query, wildcard queries allow us to perform pattern-based searches. 
For instance, a search query with \u0026lsquo;s?ring*\u0026rsquo; will match \u0026lsquo;spring\u0026rsquo;, \u0026lsquo;string\u0026rsquo;, \u0026lsquo;strings\u0026rsquo;, etc.\nHere \u0026lsquo;*\u0026rsquo; indicates zero or more characters and \u0026lsquo;?\u0026rsquo; indicates a single character.\nProjection Projection can be used when we want to fetch data directly from Elasticsearch without making another query to the database.\npublic List\u0026lt;User\u0026gt; getUserByFirstWithProjection(String first, int max, int page){ FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(entityManager); QueryBuilder qb = fullTextEntityManager .getSearchFactory() .buildQueryBuilder() .forEntity(User.class) .get(); Query similarToUser = qb.keyword().fuzzy() .withEditDistanceUpTo(2) .onField(\u0026#34;first\u0026#34;) .matching(first) .createQuery(); Query finalQuery = qb.bool() .must(similarToUser) .createQuery(); FullTextQuery fullTextQuery = fullTextEntityManager.createFullTextQuery( finalQuery, User.class); fullTextQuery.setProjection( FullTextQuery.ID, \u0026#34;first\u0026#34;, \u0026#34;last\u0026#34;, \u0026#34;middle\u0026#34;, \u0026#34;age\u0026#34;); fullTextQuery.setSort(qb.sort() .byField(\u0026#34;age\u0026#34;) .desc() .andByScore() .createSort()); fullTextQuery.setMaxResults(max); fullTextQuery.setFirstResult(page); return getUserList(fullTextQuery.getResultList()); } private List\u0026lt;User\u0026gt; getUserList(List\u0026lt;Object[]\u0026gt; resultList) { List\u0026lt;User\u0026gt; users = new ArrayList\u0026lt;\u0026gt;(); for (Object[] objects : resultList) { User user = new User(); user.setId((String) objects[0]); user.setFirst((String) objects[1]); user.setLast((String) objects[2]); user.setMiddle((String) objects[3]); user.setAge((Integer) objects[4]); users.add(user); } return users; } To use projection we need to pass the list of fields that we want in the output to the setProjection method.\nNow when we fetch the results, Hibernate will return a list of object arrays, which we have to map to the objects we want. Apart from fields, we can also fetch metadata such as the id with FullTextQuery.ID or even the score with FullTextQuery.SCORE.\nPagination FullTextQuery fullTextQuery = fullTextEntityManager.createFullTextQuery( finalQuery, User.class); //... fullTextQuery.setSort(qb.sort() .byField(\u0026#34;age\u0026#34;) .desc() .andByScore() .createSort()); fullTextQuery.setMaxResults(max); fullTextQuery.setFirstResult(page); Finally, let\u0026rsquo;s talk about pagination and sorting, as we don\u0026rsquo;t want to fetch the millions of records that we have stored in our Elasticsearch indexes in a single go.\nTo perform pagination we need two things: the number of results we want per page and the page offset (or page number, to put it plainly).\nWe can call setMaxResults() and setFirstResult() while building our FullTextQuery. The query will then return results accordingly.\nThe query DSL also provides us with a way to define a sort field and order using sort(). We can also perform the sort operation on multiple fields by chaining with andByField().\nFurther Reading That\u0026rsquo;s it! I mean, this is not everything, but I believe it is enough to get you started. For further reading you can explore the following:\n Phrase queries - which allow us to search for complete sentences Simple query strings - a powerful function that can translate string input into a Lucene query. With this, you can allow your platform to take queries directly from end-users. The fields on which the query needs to be performed will still need to be specified. Faceting - faceted search is a technique which allows us to divide the results of a query into multiple categories.  
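As an aside on the fuzzy queries shown earlier: the number passed to withEditDistanceUpTo() is an edit distance, i.e. the Levenshtein distance between two words. The following is a minimal, purely illustrative sketch of how such a distance is computed (this helper class is hypothetical and not part of Hibernate Search or Elasticsearch):

```java
public class EditDistance {

    // Plain Levenshtein distance: the minimum number of single-character
    // insertions, deletions, or substitutions needed to turn a into b.
    static int distance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        // Transforming from/to the empty string costs one edit per character
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(
                        Math.min(d[i - 1][j] + 1,      // deletion
                                 d[i][j - 1] + 1),     // insertion
                        d[i - 1][j - 1] + cost);       // substitution
            }
        }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        // 'jab' -> 'jan' (substitute b with n) -> 'jane' (insert e): 2 edits
        System.out.println(distance("jab", "jane")); // 2
    }
}
```

This is why an edit distance of 2 is enough for \u0026lsquo;jab\u0026rsquo; to match \u0026lsquo;jane\u0026rsquo;: one substitution plus one insertion.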
Conclusion Hibernate Search combined with Elasticsearch becomes a really powerful tool.\nWith Elasticsearch taking care of scaling and availability, and Hibernate Search managing the synchronization, they make a perfect match.\nBut this marriage comes at a cost. Keeping the schemas in the database and Elasticsearch in sync might require manual intervention in some cases.\nPlus, there is also the cost of calling the Elasticsearch API for index updates and queries.\nHowever, if it allows you to deliver more value to your customers in the form of full-text search, then that cost becomes negligible.\nThank you for reading! You can find the working code at GitHub.\n","date":"October 7, 2020","image":"https://reflectoring.io/images/stock/0084-search-1200x628-branded_hu576cd6b54fcead906cd9ae98f616e2ec_204079_650x0_resize_q90_box.jpg","permalink":"/hibernate-search/","title":"Full-Text Search with Hibernate Search and Spring Boot"},{"categories":["Java"],"contents":"Streams, introduced in Java 8, use functional-style operations to process data declaratively. The elements of streams are consumed from data sources such as collections, arrays, or I/O resources like files.\nIn this article, we\u0026rsquo;ll explore the various possibilities of using streams to make life easier when it comes to handling files. We assume that you have a basic knowledge of Java 8 streams. If you are new to streams, you may want to check out this guide.\nIntroduction In the Stream API, there are operations to filter, map, and reduce data in any order without you having to write extra code. 
Here is a classic example:\nList\u0026lt;String\u0026gt; cities = Arrays.asList( \u0026#34;London\u0026#34;, \u0026#34;Sydney\u0026#34;, \u0026#34;Colombo\u0026#34;, \u0026#34;Cairo\u0026#34;, \u0026#34;Beijing\u0026#34;); cities.stream() .filter(a -\u0026gt; a.startsWith(\u0026#34;C\u0026#34;)) .map(String::toUpperCase) .sorted() .forEach(System.out::println); Here we filter a list of cities for names starting with the letter \u0026ldquo;C\u0026rdquo;, convert them to uppercase, and sort them before printing the result to the console.\nThe output is as follows:\nCAIRO COLOMBO As the returned streams are lazily loaded, the elements are not read until they are used (which happens when the terminal operation is called on the stream).\nWouldn’t it be great to apply these SQL-like processing capabilities to files as well? How do we get streams from files? Can we walk through directories and locate matching files using streams? Let us get the answers to these questions.\n Example Code This article is accompanied by a working code example on GitHub. Getting Started Converting files to streams helps us to easily perform many useful operations like\n counting words in the lines, filtering files based on conditions, removing duplicates from the data retrieved, and others. 
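Operations like the ones listed above can be previewed on a plain in-memory list before any files are involved; a small sketch with hypothetical data (the helper names distinctTitles and countWords are made up for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamOps {

    // Remove duplicate lines while preserving encounter order
    static List<String> distinctTitles(List<String> lines) {
        return lines.stream()
                .distinct()
                .collect(Collectors.toList());
    }

    // Count the words across all lines by splitting each line on whitespace
    static long countWords(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.split("\\s+")))
                .filter(word -> !word.isEmpty())
                .count();
    }

    public static void main(String[] args) {
        // Hypothetical data; later sections read such lines from a file instead
        List<String> lines = Arrays.asList(
                "Pride and Prejudice",
                "Anne of Avonlea",
                "Anne of Avonlea"); // duplicate entry

        System.out.println(distinctTitles(lines).size()); // 2
        System.out.println(countWords(lines));            // 9
    }
}
```

The same pipeline shape applies unchanged once the source is a stream of file lines instead of an in-memory list.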
First, let us see how we can obtain streams from files.\nBuilding Streams from Files We can get a stream from the contents of a file line by line by calling the lines() method of the Files class.\nConsider a file bookIndex.txt with the following contents.\nPride and Prejudice - pride-and-prejudice.pdf Anne of Avonlea - anne-of-avonlea.pdf Anne of Green Gables - anne-of-green-gables.pdf Matilda - Matilda.pdf Why Icebergs Float - Why-Icebergs-Float.pdf Using Files.lines() Let us take a look at an example where we read the contents of the above file:\nStream\u0026lt;String\u0026gt; lines = Files.lines(Path.of(\u0026#34;bookIndex.txt\u0026#34;)); lines.forEach(System.out::println); As shown in the example above, the lines() method takes the Path representing the file as an argument. This method does not read all lines into a List, but instead populates the stream lazily as it is consumed, which allows efficient use of memory.\nThe output will be the contents of the file itself.\nUsing BufferedReader.lines() The same results can be achieved by invoking the lines() method on a BufferedReader as well. Here is an example:\nBufferedReader br = Files.newBufferedReader(Paths.get(\u0026#34;bookIndex.txt\u0026#34;)); Stream\u0026lt;String\u0026gt; lines = br.lines(); lines.forEach(System.out::println); As streams are lazily loaded in the above cases (i.e. they generate elements upon request instead of storing them all in memory), reading and processing files will be efficient in terms of memory used.\nUsing Files.readAllLines() The Files.readAllLines() method can also be used to read a file into a List of String objects. 
It is possible to create a stream from this collection by invoking the stream() method on it:\nList\u0026lt;String\u0026gt; strList = Files .readAllLines(Path.of(\u0026#34;bookIndex.txt\u0026#34;)); Stream\u0026lt;String\u0026gt; lines = strList.stream(); lines.forEach(System.out::println); However, this method loads the entire contents of the file in one go and hence is not as memory-efficient as the Files.lines() method.\nImportance of try-with-resources The try-with-resources syntax provides an exception handling mechanism that allows us to declare resources to be used within a Java try-with-resources block.\nWhen the execution leaves the try-with-resources block, the used resources are automatically closed in the correct order (whether the method successfully completes or any exceptions are thrown).\nWe can use try-with-resources to close any resource that implements either AutoCloseable or Closeable.\nStreams are AutoCloseable implementations and need to be closed if they are backed by files.\nLet us now rewrite the code examples from above using try-with-resources:\ntry (Stream\u0026lt;String\u0026gt; lines = Files .lines(Path.of(\u0026#34;bookIndex.txt\u0026#34;))) { lines.forEach(System.out::println); } try (Stream\u0026lt;String\u0026gt; lines = (Files.newBufferedReader(Paths.get(\u0026#34;bookIndex.txt\u0026#34;)) .lines())) { lines.forEach(System.out::println); } The streams will now be automatically closed when the try block is exited.\nUsing Parallel Streams By default, streams are serial, meaning that each step of a process is executed sequentially, one after the other.\nStreams can be easily parallelized, however. 
This means that a source stream can be split into multiple sub-streams executing in parallel.\nEach substream is processed independently in a separate thread and finally merged to produce the final result.\nThe parallel() method can be invoked on any stream to get a parallel stream.\nUsing Stream.parallel() Let us see a simple example to understand how parallel streams work:\ntry (Stream\u0026lt;String\u0026gt; lines = Files.lines(Path.of(\u0026#34;bookIndex.txt\u0026#34;)) .parallel()) { lines.forEach(System.out::println); } Here is the output:\nAnne of Green Gables - anne-of-green-gables.pdf Why Icebergs Float - Why-Icebergs-Float.pdf Pride and Prejudice- pride-and-prejudice.pdf Matilda - Matilda.pdf Anne of Avonlea - anne-of-avonlea.pdf You can see that the stream elements are printed in random order. This is because the order of the elements is not maintained when forEach() is executed in the case of parallel streams.\nParallel streams may perform better only if there is a large set of data to process.\nIn other cases, the overhead might be more than that for serial streams. 
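If we do need the file's order to be preserved while still processing in parallel, the Stream API offers forEachOrdered() as an alternative terminal operation. Here is a minimal self-contained sketch (using an in-memory list instead of the bookIndex.txt file so it runs anywhere):

```java
import java.util.ArrayList;
import java.util.List;

public class OrderedParallelExample {
    public static void main(String[] args) {
        List<String> lines = List.of(
                "Pride and Prejudice", "Anne of Avonlea",
                "Anne of Green Gables", "Matilda", "Why Icebergs Float");

        // forEachOrdered() processes the elements in the encounter order
        // of the source, even though the stream is parallel.
        List<String> result = new ArrayList<>();
        lines.parallelStream().forEachOrdered(result::add);

        System.out.println(result.equals(lines)); // true: order is preserved
    }
}
```

Note that forcing the encounter order re-introduces coordination between the worker threads, so some of the parallel speed-up is lost.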
Hence, it is advisable to do proper performance benchmarking before considering parallel streams.\nReading UTF-Encoded Files What if you need to read UTF-encoded files?\nAll the methods we have seen so far have overloaded versions that also take a charset as an argument.\nConsider a file named input.txt with Japanese characters:\nakarui _ あかるい _ bright Let us see how we can read from this UTF-encoded file:\ntry (Stream\u0026lt;String\u0026gt; lines = Files.lines(Path.of(\u0026#34;input.txt\u0026#34;), StandardCharsets.UTF_8)) { lines.forEach(System.out::println); } In the above case, you can see that we pass StandardCharsets.UTF_8 as an argument to the Files.lines() method, which allows us to read the UTF-encoded file.\nBytes from the file are decoded into characters using the specified charset.\nWe could also have used the overloaded version of BufferedReader for reading the file:\nBufferedReader reader = Files.newBufferedReader(path, StandardCharsets.UTF_8); Using Streams to Process Files Streams support functional programming operations such as filter, map, and find, which we can chain to form a pipeline to produce the necessary results.\nAlso, the Stream API provides ways to do standard file IO tasks such as listing files/folders, traversing the file tree, and finding files.\nLet’s now look at a few such cases to demonstrate how streams make file processing simple. 
We shall use the same file bookIndex.txt that we saw in the first examples.\nFiltering by Data Let us look at an example to understand how the stream obtained by reading this file can be filtered to retain only some of its elements by specifying conditions:\ntry (Stream\u0026lt;String\u0026gt; lines = Files.lines(Path.of(\u0026#34;bookIndex.txt\u0026#34;))) { long i = lines.filter(line -\u0026gt; line.startsWith(\u0026#34;A\u0026#34;)) .count(); System.out.println(\u0026#34;The count of lines starting with \u0026#39;A\u0026#39; is \u0026#34; + i); } In this example, only the lines starting with \u0026ldquo;A\u0026rdquo; are retained by calling the filter() method, and the number of such lines is counted using the count() method.\nThe output is as below:\nThe count of lines starting with \u0026#39;A\u0026#39; is 2 Splitting Words So what if we want to split the lines from this file into words and eliminate duplicates?\ntry (Stream\u0026lt;String\u0026gt; lines = Files.lines(Path.of(\u0026#34;bookIndex.txt\u0026#34;))) { Stream\u0026lt;String\u0026gt; words = lines .flatMap(line -\u0026gt; Stream.of(line.split(\u0026#34;\\\\W+\u0026#34;))); Set\u0026lt;String\u0026gt; wordSet = words.collect(Collectors.toSet()); System.out.println(wordSet); } As shown in the example above, each line from the file can be split into words by invoking the split() method.\nThen we can combine all the individual streams of words into one single stream by invoking the flatMap() method.\nBy collecting the resulting stream into a Set, duplicates can be eliminated.\nThe output is as below:\n[green, anne, Why, Prejudice, Float, pdf, Pride, Avonlea, and, pride, of, prejudice, Matilda, gables, Anne, avonlea, Icebergs, Green, Gables] Reading From CSV Files Into Java Objects If we need to load data from a CSV file into a list of POJOs, how can we achieve it with minimum code?\nAgain, streams come to the rescue.\nWe can write a simple regex-based CSV parser by reading line by line from the file, 
splitting each line based on the comma separator, and then mapping the data into the POJO.\nFor example, assume that we want to read from the CSV file cakes.csv:\n#Cakes 1, Pound Cake,100 2, Red Velvet Cake,500 3, Carrot Cake,300 4, Sponge Cake,400 5, Chiffon Cake,600 We have a class Cake as defined below:\npublic class Cake { private int id; private String name; private int price; ... // constructor and accessors omitted } So how do we populate objects of class Cake using data from the cakes.csv file? Here is an example:\nPattern pattern = Pattern.compile(\u0026#34;,\u0026#34;); try (Stream\u0026lt;String\u0026gt; lines = Files.lines(Path.of(csvPath))) { List\u0026lt;Cake\u0026gt; cakes = lines.skip(1).map(line -\u0026gt; { String[] arr = pattern.split(line); return new Cake( Integer.parseInt(arr[0]), arr[1], Integer.parseInt(arr[2])); }).collect(Collectors.toList()); cakes.forEach(System.out::println); } In the above example, we follow these steps:\n Read the lines one by one using the Files.lines() method to get a stream. Skip the first line by calling the skip() method on the stream as it is the file header. Call the map() method for each line in the file, where each line is split on the comma separator and the data obtained is used to create Cake objects. Use the Collectors.toList() method to collect all the Cake objects into a List.  The output is as follows:\nCake [id=1, name= Pound Cake, price=100] Cake [id=2, name= Red Velvet Cake, price=500] Cake [id=3, name= Carrot Cake, price=300] Cake [id=4, name= Sponge Cake, price=400] Cake [id=5, name= Chiffon Cake, price=600] Browsing, Walking, and Searching for Files java.nio.file.Files has many useful methods that return lazy streams for listing folder contents, navigating file trees, finding files, getting JAR file entries, etc.\nThese can then be filtered, mapped, reduced, and so on using the Java 8 Stream API. 
Let us explore this in more detail.\nConsider the folder structure below, on which the following examples are based.\nListing Directory Contents What if we just want to list the contents of a directory? A simple way to do this is by invoking the Files.list() method, which returns a stream of Path objects representing the files inside the directory passed as the argument.\nListing Directories Let us look at some sample code to list directories:\ntry (Stream\u0026lt;Path\u0026gt; paths = Files.list(Path.of(folderPath))) { paths.filter(Files::isDirectory) .forEach(System.out::println); } In the example, we use Files.list() and apply a filter to the resulting stream of paths to get only the directories printed out to the console.\nThe output might look like this:\nsrc/main/resources/books/non-fiction src/main/resources/books/fiction Listing Regular Files So what if we need to list only regular files and not directories? Let us look at an example:\ntry (Stream\u0026lt;Path\u0026gt; paths = Files.list(Path.of(folderPath))) { paths.filter(Files::isRegularFile) .forEach(System.out::println); } As shown in the above example, we can use the Files::isRegularFile operation to list only the regular files.\nThe output is as below:\nsrc/main/resources/books/bookIndex.txt Walking Recursively The Files.list() method we saw above is non-recursive, meaning it does not traverse the subdirectories. 
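To make this non-recursive behavior concrete, here is a small self-contained sketch; the temporary directory layout (a.txt and sub/b.txt) is made up for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class NonRecursiveListing {
    public static void main(String[] args) throws IOException {
        // Hypothetical layout: root/a.txt and root/sub/b.txt
        Path root = Files.createTempDirectory("listing-demo");
        Files.createFile(root.resolve("a.txt"));
        Path sub = Files.createDirectory(root.resolve("sub"));
        Files.createFile(sub.resolve("b.txt"));

        // Files.list() sees only the direct children of root:
        // a.txt and the sub directory, but not sub/b.txt.
        try (Stream<Path> paths = Files.list(root)) {
            System.out.println(paths.count()); // prints 2
        }
    }
}
```

Only a.txt and the sub directory itself are returned; the nested sub/b.txt is never visited.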
What if we need to visit the subdirectories too?\nThe Files.walk() method returns a stream of Path elements by recursively walking the file tree rooted at a given directory.\nLet’s look at an example to understand more:\ntry (Stream\u0026lt;Path\u0026gt; stream = Files.walk(Path.of(folderPath))) { stream.filter(Files::isRegularFile) .forEach(System.out::println); } In the above example, we filter the stream returned by the Files.walk() method to return only regular files (subfolders are excluded).\nThe output is as below:\nsrc/main/resources/books/non-fiction/Why-Icebergs-Float.pdf src/main/resources/books/fiction/kids/anne-of-green-gables.pdf src/main/resources/books/fiction/kids/anne-of-avonlea.pdf src/main/resources/books/fiction/kids/Matilda.pdf src/main/resources/books/fiction/adults/pride-and-prejudice.pdf src/main/resources/books/bookIndex.txt Finding Files In the previous example, we saw how we can filter the stream obtained from the Files.walk() method. There is a more efficient way of doing this by using the Files.find() method.\nFiles.find() evaluates a BiPredicate (a matcher function) for each file encountered while walking the file tree. The corresponding Path object is included in the returned stream if the BiPredicate returns true.\nLet us look at an example to see how we can use the find() method to find all PDF files anywhere within the given depth of the root folder:\nint depth = Integer.MAX_VALUE; try (Stream\u0026lt;Path\u0026gt; paths = Files.find( Path.of(folderPath), depth, (path, attr) -\u0026gt; { return attr.isRegularFile() \u0026amp;\u0026amp; path.toString().endsWith(\u0026#34;.pdf\u0026#34;); })) { paths.forEach(System.out::println); } In the above example, the find() method returns a stream with all the regular files having the .pdf extension.\nThe depth parameter is the maximum number of levels of directories to visit. A value of 0 means that only the starting file is visited, unless denied by the security manager. 
A value of MAX_VALUE may be used to indicate that all levels should be visited.\nOutput is:\nsrc/main/resources/books/non-fiction/Why-Icebergs-Float.pdf src/main/resources/books/fiction/kids/anne-of-green-gables.pdf src/main/resources/books/fiction/kids/anne-of-avonlea.pdf src/main/resources/books/fiction/kids/Matilda.pdf src/main/resources/books/fiction/adults/pride-and-prejudice.pdf Streaming JAR Files We can also use streams to read the contents of JAR files.\nThe JarFile.stream() method returns an ordered Stream over the ZIP file entries. Entries appear in the stream in the order they appear in the central directory of the ZIP file.\nConsider a JAR file with the following structure.\nSo how do we iterate through the entries of the JAR file? Here is an example which demonstrates this:\ntry (JarFile jFile = new JarFile(jarFile)) { jFile.stream().forEach(file -\u0026gt; System.out.println(file)); } The contents of the JAR file will be iterated and displayed as shown below:\nbookIndex.txt fiction/ fiction/adults/ fiction/adults/pride-and-prejudice.pdf fiction/kids/ fiction/kids/Matilda.pdf fiction/kids/anne-of-avonlea.pdf fiction/kids/anne-of-green-gables.pdf non-fiction/ non-fiction/Why-Icebergs-Float.pdf What if we need to look for specific entries within a JAR file?\nOnce we get the stream from the JAR file, we can always perform a filtering operation to get the matching JarEntry objects:\ntry (JarFile jFile = new JarFile(jarFile)) { Optional\u0026lt;JarEntry\u0026gt; searchResult = jFile.stream() .filter(file -\u0026gt; file.getName() .contains(\u0026#34;Matilda\u0026#34;)) .findAny(); System.out.println(searchResult.get()); } In the above example, we are looking for filenames containing the word “Matilda”. 
So the output will be as follows.\nfiction/kids/Matilda.pdf Conclusion In this article, we discussed how to generate Java 8 streams from files using the API from the java.nio.file.Files class.\nWhen we manage data in files, processing them becomes a lot easier with streams. A low memory footprint due to lazy loading of streams is another added advantage.\nWe saw that using parallel streams is an efficient approach for processing files; however, we need to avoid any operations that require state or order to be maintained.\nTo prevent resource leaks, it is important to use the try-with-resources construct, thus ensuring that the streams are automatically closed.\nWe also explored the rich set of APIs offered by the Files class for manipulating files and directories.\nThe example code used in this article is available on GitHub.\n","date":"September 30, 2020","image":"https://reflectoring.io/images/stock/0083-files-1200x628-branded_hu5871c54ccb54bc23d74b5007369f7d86_155734_650x0_resize_q90_box.jpg","permalink":"/processing-files-using-java-8-streams/","title":"Processing Files With Java 8 Streams"},{"categories":["Spring Boot"],"contents":"In this article, we\u0026rsquo;ll look at Spring component scanning and how to use it. We\u0026rsquo;ll be using a Spring Boot application for all our examples throughout this article.\n Example Code This article is accompanied by a working code example on GitHub. What is Component Scanning? To do dependency injection, Spring creates a so-called application context.\nDuring startup, Spring instantiates objects and adds them to the application context. 
Objects in the application context are called \u0026ldquo;Spring beans\u0026rdquo; or \u0026ldquo;components\u0026rdquo;.\nSpring resolves dependencies between Spring beans and injects Spring beans into other Spring beans' fields or constructors.\nThe process of searching the classpath for classes that should contribute to the application context is called component scanning.\nStereotype Annotations If Spring finds a class annotated with one of several annotations, it will consider this class as a candidate for a Spring bean to be added to the application context during component scanning.\nThere are four main types of Spring components.\n@Component This is a generic stereotype annotation used to indicate that the class is a Spring-managed component. The other stereotypes are specializations of @Component.\n@Controller This indicates that the annotated class is a Spring-managed controller that provides methods annotated with @RequestMapping to answer web requests.\nSpring 4.0 introduced the @RestController annotation which combines both @Controller and @ResponseBody and makes it easy to create RESTful services that return JSON objects.\n@Service We can use the @Service stereotype for classes that contain business logic or classes that belong to the service layer.\n@Repository We can use the @Repository stereotype for DAO classes which are responsible for providing access to database entities.\nIf we are using Spring Data for managing database operations, then we should use the Spring Data Repository interface instead of building our own @Repository-annotated classes.\nWhen to Use Component Scanning Spring provides a mechanism to identify Spring bean candidates explicitly through the @ComponentScan annotation.\nIf the application is a Spring Boot application, then all the packages under the package containing the Spring Boot application class will be covered by an implicit component scan.\nSpring Boot\u0026rsquo;s @SpringBootApplication annotation implies the 
@Configuration, @ComponentScan, and @EnableAutoConfiguration annotations.\nBy default, the @ComponentScan annotation will scan for components in the current package and all its sub-packages. So if your application doesn\u0026rsquo;t have a varying package structure then there is no need for explicit component scanning.\nSpecifying a @Configuration-annotated class in the default package will tell Spring to scan all the classes in all the JARS in the classpath. Don\u0026rsquo;t do that!\nHow to Use @ComponentScan We use the @ComponentScan annotation along with the @Configuration annotation to tell Spring to scan classes that are annotated with any stereotype annotation. The @ComponentScan annotation provides different attributes that we can modify to get desired scanning behavior.\nWe\u0026rsquo;ll be using ApplicationContext\u0026rsquo;s getBeanDefinitionNames() method throughout this article to check out the list of beans that have successfully been scanned and added to the application context:\n@Component class BeanViewer { private final Logger LOG = LoggerFactory.getLogger(getClass()); @EventListener public void showBeansRegistered(ApplicationReadyEvent event) { String[] beanNames = event.getApplicationContext() .getBeanDefinitionNames(); for(String beanName: beanNames) { LOG.info(\u0026#34;{}\u0026#34;, beanName); } } } The above BeanViewer will print all the beans that are registered with the application context. This will help us to check whether our components are loaded properly or not.\nSpring Boot\u0026rsquo;s Implicit Auto Scanning As said earlier, Spring Boot does auto scanning for all the packages that fall under the parent package. Let\u0026rsquo;s look at the folder structure:\n|- io.reflectoring.componentscan (main package) |- SpringComponentScanningApplication.java |- UserService.java (@Service stereotype) |- BeanViewer.java We have created a UserService class with the @Service stereotype in our parent package io.reflectoring.componentscan. 
As said earlier, since these classes are under the parent package where we have our @SpringBootApplication-annotated application class, the component will be scanned by default when we start the Spring Boot application:\n... INFO 95832 --- [main] i.reflectoring.componentscan.BeanViewer : beanViewer INFO 95832 --- [main] i.reflectoring.componentscan.BeanViewer : users ... The above output shows that the beans created for BeanViewer, ExplicitScan, and Users are printed out by our BeanViewer.\nUsing @ComponentScan Without Any Attributes If we have a package that is not under our parent package, or we\u0026rsquo;re not using Spring Boot at all, we can use @ComponentScan along with a @Configuration bean.\nThis will tell Spring to scan the components in the package of this @Configuration class and its sub-packages:\npackage io.reflectoring.birds; @Configuration @ComponentScan public class BirdsExplicitScan { } The birds package is next to the main package of the application, so it\u0026rsquo;s not caught by Spring Boot\u0026rsquo;s default scanning:\n|- io.reflectoring.componentscan |- SpringComponentScanningApplication.java |- io.reflectoring.birds |- BirdsExplicitScan.java (@Configuration) |- Eagle.java (@Component stereotype) |- Sparrow.java (@Component stereotype) If we want to include the BirdsExplicitScan into our Spring Boot application, we have to import it:\n@SpringBootApplication @Import(value= {BirdsExplicitScan.class}) public class SpringComponentScanningApplication { public static void main(String[] args) { SpringApplication.run(SpringComponentScanningApplication.class, args); } } When we start the application, we get the following output:\n... INFO 95832 --- [main] i.reflectoring.componentscan.BeanViewer : beanViewer INFO 95832 --- [main] i.reflectoring.componentscan.BeanViewer : users INFO 84644 --- [main] i.reflectoring.componentscan.BeanViewer : eagle INFO 84644 --- [main] i.reflectoring.componentscan.BeanViewer : sparrow ... 
As we can see in the above output, beans got created for the Eagle and Sparrow classes.\nUsing @ComponentScan with Attributes Let\u0026rsquo;s have a look at attributes of the @ComponentScan annotation that we can use to modify its behavior:\n basePackages: Takes a list of package names that should be scanned for components. basePackageClasses: Takes a list of classes whose packages should be scanned. includeFilters: Enables us to specify what types of components should be scanned. excludeFilters: This is the opposite of includeFilters. We can specify conditions to ignore some of the components based on criteria while scanning. useDefaultFilters: If true, it enables the automatic detection of classes annotated with any stereotypes. If false, only the components that fall under the filter criteria defined by includeFilters and excludeFilters will be included.  To demonstrate the different attributes, let\u0026rsquo;s add some classes to the package io.reflectoring.vehicles (which is not a sub-package of our application main package io.reflectoring.componentscan):\n|- io.reflectoring.componentscan (Main Package) |- ExplicitScan.java (@Configuration) |- io.reflectoring.birds |- io.reflectoring.vehicles |- Car.java |- Hyundai.java (@Component stereotype and extends Car) |- Tesla.java (@Component stereotype and extends Car) |- SpaceX.java (@Service stereotype) |- Train.java (@Service stereotype) Let\u0026rsquo;s see how we can control which classes are loaded during a component scan.\nScanning a Whole Package with basePackages We\u0026rsquo;ll create the ExplicitScan class in the application\u0026rsquo;s main package so it gets picked up by the default component scan. 
Then, we add the io.reflectoring.vehicles package via the basePackages attribute of the @ComponentScan annotation:\npackage io.reflectoring.componentscan; @Configuration @ComponentScan(basePackages= \u0026#34;io.reflectoring.vehicles\u0026#34;) public class ExplicitScan { } If we run the application, we see that all components in the vehicles package are included in the application context:\n... INFO 65476 --- [main] i.reflectoring.componentscan.BeanViewer : hyundai INFO 65476 --- [main] i.reflectoring.componentscan.BeanViewer : spaceX INFO 65476 --- [main] i.reflectoring.componentscan.BeanViewer : tesla INFO 65476 --- [main] i.reflectoring.componentscan.BeanViewer : train ... Including Components with includeFilters Let\u0026rsquo;s see how we can include only classes that extend the Car type for component scanning:\n@Configuration @ComponentScan(basePackages= \u0026#34;io.reflectoring.vehicles\u0026#34;, includeFilters= @ComponentScan.Filter( type=FilterType.ASSIGNABLE_TYPE, classes=Car.class), useDefaultFilters=false) public class ExplicitScan { } With a combination of includeFilters and FilterType, we can tell Spring to include classes that follow specified filter criteria.\nWe used the filter type ASSIGNABLE_TYPE to catch all classes that are assignable to / extend the Car class.\nOther available filter types are:\n ANNOTATION: Match only classes with a specific stereotype annotation. ASPECTJ: Match classes using an AspectJ type pattern expression. ASSIGNABLE_TYPE: Match classes that extend or implement this class or interface. REGEX: Match classes using a regular expression for package names.  
In the above example, we have modified our ExplicitScan class with includeFilters to include components that extend Car.class and we are changing useDefaultFilters = false so that only our specific filters are applied.\nNow, only the Hyundai and Tesla beans are being included in the component scan, because they extend the Car class:\nINFO 68628 --- [main] i.reflectoring.componentscan.BeanViewer : hyundai INFO 68628 --- [main] i.reflectoring.componentscan.BeanViewer : tesla Excluding Components with excludeFilters Similar to includeFilters, we can use FilterType with excludeFilters to exclude classes from getting scanned based on matching criteria.\nLet\u0026rsquo;s modify our ExplicitScan with excludeFilters and tell Spring to exclude classes that extend Car from component scanning.\n@Configuration @ComponentScan(basePackages= \u0026#34;io.reflectoring.vehicles\u0026#34;, excludeFilters= @ComponentScan.Filter( type=FilterType.ASSIGNABLE_TYPE, classes=Car.class)) public class ExplicitScan { } Note that we did not set useDefaultFilters to false, so that, by default, Spring includes all classes in the package.\nThe output shows that the Hyundai and Tesla beans were excluded and only the other two classes in the package were included in the scan:\n... INFO 97832 --- [main] i.reflectoring.componentscan.BeanViewer : spaceX INFO 97832 --- [main] i.reflectoring.componentscan.BeanViewer : train ... Make Your Component Scan as Explicit as Possible Using the @ComponentScan annotation extensively can quickly lead to confusing rules on how your application is made up! Use it sparingly to make your application context rules as explicit as possible.\nA good practice is to explicitly import a @Configuration class with the @Import annotation and add the @ComponentScan annotation to that configuration class to auto-scan only the package of that class. 
This way, we have clean boundaries between the packages of our application.\nConclusion In this article, we\u0026rsquo;ve learned about Spring component stereotypes, what component scanning is, how to use it, and the various attributes we can modify to get the desired scanning behavior.\n","date":"September 24, 2020","image":"https://reflectoring.io/images/stock/0031-matrix-1200x628-branded_hufb3c207f9151b804bbf7fe86cefe5814_184798_650x0_resize_q90_box.jpg","permalink":"/spring-component-scanning/","title":"Component Scanning with Spring Boot"},{"categories":["Spring Boot"],"contents":"Monitoring and observability are essential in distributed environments and they rely on effective health checking mechanisms that can be observed at runtime.\nIn this article, we will build health check functions in Spring Boot applications and make them observable by capturing useful health metrics and integrating with popular monitoring tools.\n Example Code This article is accompanied by a working code example on GitHub. Why Do We Use Health Checks? A distributed system is composed of many moving parts like a database, queues, and other services. Health check functions tell us the status of our running application, like whether the service is slow or not available.\nWe also learn to predict the system health in the future by observing any anomalies in a series of metrics like memory utilization, errors, and disk space. 
This allows us to take mitigating actions like restarting instances, falling back to a redundant instance, or throttling the incoming requests.\nTimely detection and proactive mitigation will ensure that the application is stable and minimize any impact on business functions.\nApart from infrastructure and operations teams, health check metrics and insights derived from them are also becoming useful to the end-users.\nIn an API ecosystem, for instance, with API developers, partners, and third-party developers, the health status of APIs is regularly updated and published in a dashboard, like on this Dashboard by Twitter:\nThe dashboard gives a snapshot of the health status of the Twitter APIs as \u0026ldquo;Operational\u0026rdquo;, \u0026ldquo;Degraded Performance\u0026rdquo;, etc. helping us to understand the current status of those APIs.\nCommon Health Checking Techniques The simplest way of implementing a health check is to periodically check the “heartbeat” of a running application by sending requests to some of its API endpoints and getting a response payload containing the health of the system.\nThese heartbeat endpoints are HTTP GET or HEAD requests that run light-weight processes and do not change the state of the system. The response is interpreted from either the HTTP response status or from specific fields in the response payload.\nAlthough this method can tell us if the application itself is up and running, it does not tell us anything about the services that the application depends on like a database, or another service. So a composite health check made up of the health of dependent systems aggregated together gives a more complete view.\nA composite health check is sometimes also called a \u0026ldquo;deep check\u0026rdquo;.\nA more proactive approach involves monitoring a set of metrics indicating system health. 
These are more useful since they give us early indications of any deteriorating health of the system giving us time to take mitigating measures.\nWe will look at all of these approaches in the subsequent sections.\nAdding a Health Check in Spring Boot We will build a few APIs with Spring Boot and devise mechanisms to check and monitor their health.\nLet us create our application with the Spring Initializr by including the dependencies for web, lombok, webflux, and actuator.\nAdding the Actuator Dependency The Actuator module provides useful insight into the Spring environment for a running application with functions for health checking and metrics gathering by exposing multiple endpoints over HTTP and JMX. We can refer to the full description of the Actuator module in the Actuator Documentation.\nWe added the actuator dependency while creating the application from the Initializr. We can choose to add it later in our pom.xml:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-actuator\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; For gradle, we add our dependency as:\ndependencies { compile(\u0026#34;org.springframework.boot:spring-boot-starter-actuator\u0026#34;) } Checking the Health Status with Zero Configuration We will first build our application created above with Maven or Gradle:\nmvn clean package Running this command will generate the executable in the fat jar format containing the actuator module. Let us execute this jar with:\njava -jar target/usersignup-0.0.1-SNAPSHOT.jar We will now run the application and access the /health endpoint using curl or by hitting the URL from the browser:\ncurl http://localhost:8080/actuator/health Running the curl command gives the output:\n{\u0026#34;status\u0026#34;:\u0026#34;UP\u0026#34;} The status UP indicates the application is running. 
This is derived from an evaluation of the health of multiple components called \u0026ldquo;health indicators\u0026rdquo; in a specific order.\nThe status will show DOWN if any of those health indicator components are \u0026lsquo;unhealthy\u0026rsquo;, for example if a database is not reachable.\nWe will look at health indicators in more detail in the following sections. However, in summary, the UP status from the Actuator health endpoint indicates that the application can operate with full functionality.\nChecking Health Status Details To view some more information about the application\u0026rsquo;s health, we will enable the property management.endpoint.health.show-details in application.properties:\n# Show details of health endpoint management.endpoint.health.show-details=always After we compile and run the application, we get the output with details of the components contributing to the health status:\n{ \u0026#34;status\u0026#34;: \u0026#34;UP\u0026#34;, \u0026#34;components\u0026#34;: { \u0026#34;diskSpace\u0026#34;: { \u0026#34;status\u0026#34;: \u0026#34;UP\u0026#34;, \u0026#34;details\u0026#34;: { \u0026#34;total\u0026#34;: 250685575168, \u0026#34;free\u0026#34;: 12073996288, \u0026#34;threshold\u0026#34;: 10485760, \u0026#34;exists\u0026#34;: true } }, \u0026#34;ping\u0026#34;: { \u0026#34;status\u0026#34;: \u0026#34;UP\u0026#34; } } } We can see in this output that the health status contains a component named diskSpace which is UP with details containing the total, free, and threshold space. 
This HealthIndicator checks available disk space and will report a status of DOWN when the free space drops below the threshold space.\nAggregating Health Status from Multiple Health Indicators Let us add some real-life flavor to our application by adding some APIs that will not only store information in a database but also read from it.\nWe will create three APIs in our application:\n add user activate user fetch users  These APIs will be using a controller, service, and repository class. The repository is based on JPA and uses the in-memory H2 database. The API for fetch users will also use a URL shortener service for shortening the user\u0026rsquo;s profile URL.\nYou can check out the code on GitHub.\nDatabase Health Indicator After we build and run our application as before and check the health status, we can see one additional component for the database named db included under the components key:\n{ \u0026#34;status\u0026#34;: \u0026#34;UP\u0026#34;, \u0026#34;components\u0026#34;: { \u0026#34;db\u0026#34;: { \u0026#34;status\u0026#34;: \u0026#34;UP\u0026#34;, \u0026#34;details\u0026#34;: { \u0026#34;database\u0026#34;: \u0026#34;H2\u0026#34;, \u0026#34;validationQuery\u0026#34;: \u0026#34;isValid()\u0026#34; } }, \u0026#34;diskSpace\u0026#34;: { ... } }, \u0026#34;ping\u0026#34;: { \u0026#34;status\u0026#34;: \u0026#34;UP\u0026#34; } } } The health status is composed of status contributed by multiple components called \u0026ldquo;health Indicators\u0026rdquo; in the Actuator vocabulary.\nIn our case, the health status is composed of health indicators of disk space and database.\nThe database health indicator is automatically added by Spring Boot if it detects a Datasource as we will see in the next section.\nOther Predefined Health Indicators Spring Boot Actuator comes with several predefined health indicators like\n DataSourceHealthIndicator, MongoHealthIndicator, RedisHealthIndicator, or CassandraHealthIndicator.  
Each of them is a Spring bean that implements the HealthIndicator interface and checks the health of that component.\nSpring Boot automatically provides a health indicator for standard components (like a DataSource). The health check provided for a DataSource creates a connection to a database and performs a simple query, such as select 1 from dual, to check that it is working.\nAggregating Health Indicators Spring Boot aggregates all health indicators it finds in the application context to create the result of the /health endpoint we have seen above.\nIf our application uses Redis, a Redis component is added to the endpoint. If we use MongoDB, a MongoDB component is added to the endpoint. And so on.\nThe aggregation is done by an implementation of StatusAggregator which aggregates the statuses from all health indicators into a single overall status.\nSpring Boot auto-configures an instance of SimpleStatusAggregator. We can provide our own implementation of StatusAggregator to supersede the default behavior.\nWe can also disable a particular health indicator using application properties:\nmanagement.health.mongo.enabled=false Checking the Health of APIs with Custom Health Indicators Predefined health indicators do not cover all use cases of a health check.\nFor example, if our API is dependent on any external service, we might like to know if the external service is available. Further, we might like to know the health of the individual APIs rather than the health of the entire application.\nFor this, we will now build two types of custom health checks in our application:\n a health check for individual components with health indicators a composite health check with composite health contributors  Checking the Health of Individual Components In our example, we are using an external service for shortening the URLs. 
We will monitor the availability of this service by building a health indicator for it.\nCreating a custom health indicator is done in two steps:\n Implement the HealthIndicator interface and override the health() method. Register the health indicator class as a Spring bean by adding the @Component annotation (or by using Java Config).  Our custom health indicator for the UrlShortener Service looks like this:\n@Component @Slf4j public class UrlShortenerServiceHealthIndicator implements HealthIndicator { private static final String URL = \u0026#34;https://cleanuri.com/api/v1/shorten\u0026#34;; @Override public Health health() { // check if url shortener service url is reachable  try (Socket socket = new Socket(new java.net.URL(URL).getHost(),80)) { } catch (Exception e) { log.warn(\u0026#34;Failed to connect to: {}\u0026#34;,URL); return Health.down() .withDetail(\u0026#34;error\u0026#34;, e.getMessage()) .build(); } return Health.up().build(); } } In this class, we return the status UP if the URL is reachable; otherwise, we return the status DOWN with an error message.\nComposite Health Checking with Health Contributors Earlier, we added three APIs to our application for adding, activating, and fetching users. It will be very useful to see the health of the individual APIs by checking specific resources on a per-endpoint basis. We will do this with CompositeHealthContributors.\nOur Fetch Users API depends on the database and the URL shortener service. This API can function only if both of these dependencies are available. We could do this in a single health indicator as described in the previous section.\nBut this can be done more elegantly with a CompositeHealthContributor which will combine the health checks from the database and the URL shortener service. The steps for building a composite health check are:\n Implement the CompositeHealthContributor interface in a Spring bean. Mark the contributing health indicators with the HealthContributor interface. 
Override the iterator() method in the CompositeHealthContributor interface with the list of health contributors which are health indicators marked with the HealthContributor interface.  For our example, we will first create a database health indicator and mark it with the HealthContributor interface:\n@Component(\u0026#34;Database\u0026#34;) public class DatabaseHealthContributor implements HealthIndicator, HealthContributor { @Autowired private DataSource ds; @Override public Health health() { try(Connection conn = ds.getConnection()){ Statement stmt = conn.createStatement(); stmt.execute(\u0026#34;select FIRST_NAME,LAST_NAME,MOBILE,EMAIL from USERS\u0026#34;); } catch (SQLException ex) { return Health.outOfService().withException(ex).build(); } return Health.up().build(); } } For checking the health status of the database we execute a query on the USERS table used in the Fetch Users API.\nWe will next mark the URL shortener health indicator we created in the previous section with the HealthContributor interface:\npublic class UrlShortenerServiceHealthIndicator implements HealthIndicator, HealthContributor { ... 
} We will now create the composite health check of our Fetch Users API using the two health contributor components we created above:\n@Component(\u0026#34;FetchUsersAPI\u0026#34;) public class FetchUsersAPIHealthContributor implements CompositeHealthContributor { private Map\u0026lt;String, HealthContributor\u0026gt; contributors = new LinkedHashMap\u0026lt;\u0026gt;(); @Autowired public FetchUsersAPIHealthContributor( UrlShortenerServiceHealthIndicator urlShortenerServiceHealthContributor, DatabaseHealthContributor databaseHealthContributor) { contributors.put(\u0026#34;urlShortener\u0026#34;, urlShortenerServiceHealthContributor); contributors.put(\u0026#34;database\u0026#34;, databaseHealthContributor); } /** * return list of health contributors */ @Override public Iterator\u0026lt;NamedContributor\u0026lt;HealthContributor\u0026gt;\u0026gt; iterator() { return contributors.entrySet().stream() .map((entry) -\u0026gt; NamedContributor.of(entry.getKey(), entry.getValue())).iterator(); } @Override public HealthContributor getContributor(String name) { return contributors.get(name); } } The FetchUsersAPIHealthContributor class will publish the health status of Fetch Users API as UP if:\n the URL shortener service is reachable, and we can run SQL queries on the USERS table used in the API.  With this health indicator of the API added, our health check output now contains the health status of FetchUsers API in the list of components.\n\u0026#34;FetchUsersAPI\u0026#34;: { \u0026#34;status\u0026#34;: \u0026#34;UP\u0026#34;, \u0026#34;components\u0026#34;: { \u0026#34;database\u0026#34;: { \u0026#34;status\u0026#34;: \u0026#34;UP\u0026#34; }, \u0026#34;urlShortener\u0026#34;: { \u0026#34;status\u0026#34;: \u0026#34;UP\u0026#34; } } }, ... 
} The corresponding error output appears when we introduce an error by specifying a non-existent table:\n\u0026#34;FetchUsersAPI\u0026#34;: { \u0026#34;status\u0026#34;: \u0026#34;OUT_OF_SERVICE\u0026#34;, \u0026#34;components\u0026#34;: { \u0026#34;database\u0026#34;: { \u0026#34;status\u0026#34;: \u0026#34;OUT_OF_SERVICE\u0026#34;, \u0026#34;details\u0026#34;: { \u0026#34;error\u0026#34;: \u0026#34;...\u0026#34; } }, \u0026#34;urlShortener\u0026#34;: { \u0026#34;status\u0026#34;: \u0026#34;UP\u0026#34; } } }, This output indicates that the Fetch Users API is out-of-service and cannot serve requests when the database is not set up although the URL shortener service is available.\nHealth Indicators can also be grouped for specific purposes. For example, we can have a group for database health and another for the health of our caches.\nMonitoring Application Health We monitor the health of our application by observing a set of metrics. We will enable the metrics endpoint to get many useful metrics like JVM memory consumed, CPU usage, open files, and many more.\nMicrometer is a library for collecting metrics from JVM-based applications and converting them in a format accepted by the monitoring tools. It is a facade between application metrics and the metrics infrastructure developed by different monitoring systems like Prometheus, New Relic, and many others.\nTo illustrate, we will integrate our Spring Boot application with one of these monitoring systems - Prometheus. 
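The facade idea behind Micrometer can be sketched in plain Java. This is a hedged illustration — MetricsPublisher and MetricsFacade are invented names, not Micrometer's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.DoubleAdder;

// Illustrative sketch of the "facade" idea: application code records
// metrics against one API, and pluggable backends decide how to publish
// them. Names here are made up; Micrometer's real API differs.
interface MetricsPublisher {
    void publish(String name, double value);
}

class MetricsFacade {
    private final ConcurrentMap<String, DoubleAdder> counters = new ConcurrentHashMap<>();
    private final List<MetricsPublisher> publishers = new ArrayList<>();

    void addPublisher(MetricsPublisher p) { publishers.add(p); }

    void increment(String name) {
        counters.computeIfAbsent(name, k -> new DoubleAdder()).add(1.0);
    }

    double value(String name) {
        DoubleAdder adder = counters.get(name);
        return adder == null ? 0.0 : adder.sum();
    }

    // Push the current values to every registered backend (a pull-based
    // backend like Prometheus would instead read them on scrape).
    void flush() {
        counters.forEach((name, adder) ->
                publishers.forEach(p -> p.publish(name, adder.sum())));
    }
}

public class MetricsFacadeDemo {
    public static void main(String[] args) {
        MetricsFacade metrics = new MetricsFacade();
        metrics.addPublisher((name, value) ->
                System.out.println(name + " " + value));
        metrics.increment("user_signup_requests_total");
        metrics.increment("user_signup_requests_total");
        metrics.flush(); // prints "user_signup_requests_total 2.0"
    }
}
```

With Micrometer, a backend-specific registry (for example, a Prometheus registry) plays the publisher role, so application code never depends on the monitoring system directly.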
Prometheus operates on a pull model by scraping metrics from an endpoint exposed by the application instances at fixed intervals.\nWe will first add the micrometer SDK for Prometheus:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.micrometer\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;micrometer-registry-prometheus\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; We can integrate with another monitoring system like New Relic similarly by adding micrometer-registry-newrelic dependency for metric collection. New Relic in contrast to Prometheus works on a push model so we need to additionally configure credentials for New Relic in the Spring Boot application.\nContinuing with our example with Prometheus, we will expose the Prometheus endpoint by updating the management.endpoints.web.exposure.include property in our application.properties.\nmanagement.endpoints.web.exposure.include=health,info,prometheus Here is a snippet of the metrics from the prometheus endpoint - http://localhost:8080/actuator/prometheus:\njvm_threads_daemon_threads 23.0 jvm_buffer_count_buffers{id=\u0026#34;mapped - \u0026#39;non-volatile memory\u0026#39;\u0026#34;,} 0.0 jvm_buffer_count_buffers{id=\u0026#34;mapped\u0026#34;,} 0.0 jvm_buffer_count_buffers{id=\u0026#34;direct\u0026#34;,} 14.0 process_files_open_files 33.0 hikaricp_connections_max{pool=\u0026#34;HikariPool-1\u0026#34;,} 10.0 ... Next, we will add the job in Prometheus with the configuration for scraping the above metrics emitted from our application. This configuration will be saved in prometheus-config.yml.\n- job_name: \u0026#39;user sign up\u0026#39; metrics_path: \u0026#39;/actuator/prometheus\u0026#39; scrape_interval: 5s static_configs: - targets: [\u0026#39;\u0026lt;HOST_NAME\u0026gt;:8080\u0026#39;] This configuration will scrape the metrics at 5-second intervals.\nWe will use Docker to run Prometheus. 
Specify the IP address of the host machine instead of localhost while running in Docker:\ndocker run \\ -p 9090:9090 \\ -v prometheus-config.yml:/etc/prometheus/prometheus.yml \\ prom/prometheus Now we can check our application as a target in Prometheus by visiting the URL - http://localhost:9090/targets:\nAs stated above, due to the Micrometer metrics facade we can integrate with other monitoring tools only by adding the provider-specific Micrometer dependency to the application.\nConfiguring Kubernetes Probes Microservices built with Spring Boot are commonly packaged in containers and deployed to container orchestration systems like Kubernetes. One of the key features of Kubernetes is self-healing, which it does by regularly checking the health of the application and replacing unhealthy instances with healthy instances.\nAmong its many components, the Kubelet ensures that the containers are running and replaced with a healthy instance, anytime it goes down. This is detected using two properties:\n Liveness Check: An endpoint indicating that the application is available. The Kubelet uses liveness probes to know when to restart a container. Readiness Check: The Kubelet uses readiness probes to know when a container is ready to start accepting traffic.  We will enable these two health checks by setting the property in application.properties.\nmanagement.health.probes.enabled=true After this when we compile and run the application, we can see these two health checks in the output of the health endpoint and also two health groups.\nWe can next use these two endpoints to configure HTTP probes for liveness and readiness checks in the container specification when creating the deployment object in Kubernetes. 
This definition of Deployment object along with the Service object is saved in deployment.yaml:\nlivenessProbe: httpGet: path: /actuator/health/liveness  port: 8080 readinessProbe: httpGet: path: /actuator/health/readiness  port: 8080 We will create these objects in Kubernetes by running\nkubectl apply -f deployment.yaml For the HTTP probe, the Kubelet process sends an HTTP request to the specified path and port to perform the liveness and readiness checks.\nConclusion We saw how we can build powerful monitoring and observability capabilities in Spring Boot applications with the help of the Actuator module. We configured health indicators and Kubernetes probes in a microservice application and enabled health check metrics to integrate with monitoring tools like Prometheus.\nObservability is a rapidly evolving area and we should expect to see more features along these lines in future releases of Spring Boot.\nYou can refer to all the source code used in the article on Github.\n","date":"September 22, 2020","image":"https://reflectoring.io/images/stock/0082-ekg-1200x628-branded_hu250e0338abc9286e612ce2ac6bb0f466_160693_650x0_resize_q90_box.jpg","permalink":"/spring-boot-health-check/","title":"Health Checks with Spring Boot"},{"categories":["Java"],"contents":"In this series so far, we have learned about Resilience4j and its Retry, RateLimiter, and TimeLimiter modules. In this article, we will explore the Bulkhead module. We will find out what problem it solves, when and how to use it, and also look at a few examples.\n Example Code This article is accompanied by a working code example on GitHub. What is Resilience4j? Please refer to the description in the previous article for a quick intro into how Resilience4j works in general.\nWhat is a Bulkhead? 
A few years back we had a production issue where one of the servers stopped responding to health checks and the load balancer took the server out of the pool.\nEven as we began investigating the issue, there was a second alert - another server had stopped responding to health checks and had also been taken out of the pool.\nIn a few minutes, every server had stopped responding to health probes and our service was completely down.\nWe were using Redis for caching some data for a couple of features supported by the application. As we found out later, there was some issue with the Redis cluster at the same time and it had stopped accepting new connections. We were using the Jedis library to connect to Redis and the default behavior of that library was to block the calling thread indefinitely until a connection was established.\nOur service was hosted on Tomcat and it had a default request handling thread pool size of 200 threads. So every request which went through a code path that connected to Redis ended up blocking the thread indefinitely.\nWithin minutes, all 2000 threads across the cluster had blocked indefinitely - there were no free threads to even respond to health checks from the load balancer.\nThe service itself supported several features and not all of them required accessing the Redis cache. But when a problem occurred in this one area, it ended up impacting the entire service.\nThis is exactly the problem that bulkhead addresses - it prevents a problem in one area of the service from affecting the entire service.\nWhile what happened to our service was an extreme example, we can see how a slow upstream dependency can impact an unrelated area of the calling service.\nIf we had had a limit of, say, 20 concurrent requests to Redis set on each of the server instances, only those threads would have been affected when the Redis connectivity issue occurred. 
The remaining request handling threads could have continued serving other requests.\nThe idea behind bulkheads is to set a limit on the number of concurrent calls we make to a remote service. We treat calls to different remote services as different, isolated pools and set a limit on how many calls can be made concurrently.\nThe term bulkhead itself comes from its usage in ships where the bottom portion of the ship is divided into sections separated from each other. If there is a breach, and water starts flowing in, only that section gets filled with water. This prevents the entire ship from sinking.\nResilience4j Bulkhead Concepts resilience4j-bulkhead works similar to the other Resilience4j modules. We provide it the code we want to execute as a functional construct - a lambda expression that makes a remote call or a Supplier of some value which is retrieved from a remote service, etc. - and the bulkhead decorates it with the code to control the number of concurrent calls.\nResilience4j provides two types of bulkheads - SemaphoreBulkhead and ThreadPoolBulkhead.\nThe SemaphoreBulkhead internally uses java.util.concurrent.Semaphore to control the number of concurrent calls and executes our code on the current thread.\nThe ThreadPoolBulkhead uses a thread from a thread pool to execute our code. It internally uses a java.util.concurrent.ArrayBlockingQueue and a java.util.concurrent.ThreadPoolExecutor to control the number of concurrent calls.\nSemaphoreBulkhead Let\u0026rsquo;s look at the configurations associated with the semaphore bulkhead and what they mean.\nmaxConcurrentCalls determines the maximum number of concurrent calls we can make to the remote service. We can think of this value as the number of permits that the semaphore is initialized with.\nAny thread which attempts to call the remote service over this limit can either get a BulkheadFullException immediately or wait for some time for a permit to be released by another thread. 
This is determined by the maxWaitDuration value.\nWhen there are multiple threads waiting for permits, the fairCallHandlingEnabled configuration determines if the waiting threads acquire permits in a first-in, first-out order.\nFinally, the writableStackTraceEnabled configuration lets us reduce the amount of information in the stack trace when a BulkheadFullException occurs. This can be useful because without it, our logs could get filled with a lot of similar information when the exception occurs multiple times. Usually when reading logs, just knowing that a BulkheadFullException has occurred is enough.\nThreadPoolBulkhead coreThreadPoolSize, maxThreadPoolSize, keepAliveDuration, and queueCapacity are the main configurations associated with the ThreadPoolBulkhead. ThreadPoolBulkhead internally uses these configurations to construct a ThreadPoolExecutor.\nThe internal ThreadPoolExecutor executes incoming tasks using one of the available, free threads. If no thread is free to execute an incoming task, the task is enqueued for execution later when a thread becomes available. If the queueCapacity has been reached, then the remote call is rejected with a BulkheadFullException.\nThreadPoolBulkhead also has a writableStackTraceEnabled configuration to control the amount of information in the stack trace of a BulkheadFullException.\nUsing the Resilience4j Bulkhead Module Let\u0026rsquo;s see how to use the various features available in the resilience4j-bulkhead module.\nWe will use the same example as the previous articles in this series. Assume that we are building a website for an airline to allow its customers to search for and book flights. 
Our service talks to a remote service encapsulated by the class FlightSearchService.\nSemaphoreBulkhead When using the semaphore-based bulkhead, BulkheadRegistry, BulkheadConfig, and Bulkhead are the main abstractions we work with.\nBulkheadRegistry is a factory for creating and managing Bulkhead objects.\nBulkheadConfig encapsulates the maxConcurrentCalls, maxWaitDuration, writableStackTraceEnabled, and fairCallHandlingEnabled configurations. Each Bulkhead object is associated with a BulkheadConfig.\nThe first step is to create a BulkheadConfig:\nBulkheadConfig config = BulkheadConfig.ofDefaults(); This creates a BulkheadConfig with default values formaxConcurrentCalls(25), maxWaitDuration(0s), writableStackTraceEnabled(true), and fairCallHandlingEnabled(true).\nLet\u0026rsquo;s say we want to limit the number of concurrent calls to 2 and that we are willing to wait 2s for a thread to acquire a permit:\nBulkheadConfig config = BulkheadConfig.custom() .maxConcurrentCalls(2) .maxWaitDuration(Duration.ofSeconds(2)) .build(); We then create a Bulkhead:\nBulkheadRegistry registry = BulkheadRegistry.of(config); Bulkhead bulkhead = registry.bulkhead(\u0026#34;flightSearchService\u0026#34;); Let\u0026rsquo;s now express our code to run a flight search as a Supplier and decorate it using the bulkhead:\nSupplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; flightsSupplier = () -\u0026gt; service.searchFlightsTakingOneSecond(request); Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; decoratedFlightsSupplier = Bulkhead.decorateSupplier(bulkhead, flightsSupplier); Finally, let\u0026rsquo;s call the decorated operation a few times to understand how the bulkhead works. 
We can use CompletableFuture to simulate concurrent flight search requests from users:\nfor (int i=0; i\u0026lt;4; i++) { CompletableFuture .supplyAsync(decoratedFlightsSupplier) .thenAccept(flights -\u0026gt; System.out.println(\u0026#34;Received results\u0026#34;)); } The timestamps and thread names in the output show that out of the 4 concurrent requests, the first two requests went through immediately:\nSearching for flights; current time = 11:42:13 187; current thread = ForkJoinPool.commonPool-worker-3 Searching for flights; current time = 11:42:13 187; current thread = ForkJoinPool.commonPool-worker-5 Flight search successful at 11:42:13 226 Flight search successful at 11:42:13 226 Received results Received results Searching for flights; current time = 11:42:14 239; current thread = ForkJoinPool.commonPool-worker-9 Searching for flights; current time = 11:42:14 239; current thread = ForkJoinPool.commonPool-worker-7 Flight search successful at 11:42:14 239 Flight search successful at 11:42:14 239 Received results Received results The third and the fourth requests were able to acquire permits only 1s later, after the previous requests completed.\nIf a thread is not able to acquire a permit in the 2s maxWaitDuration we specified, a BulkheadFullException is thrown:\nCaused by: io.github.resilience4j.bulkhead.BulkheadFullException: Bulkhead \u0026#39;flightSearchService\u0026#39; is full and does not permit further calls at io.github.resilience4j.bulkhead.BulkheadFullException.createBulkheadFullException(BulkheadFullException.java:49) at io.github.resilience4j.bulkhead.internal.SemaphoreBulkhead.acquirePermission(SemaphoreBulkhead.java:164) at io.github.resilience4j.bulkhead.Bulkhead.lambda$decorateSupplier$5(Bulkhead.java:194) at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700) ... 6 more Apart from the first line, the other lines in the stack trace are not adding much value. 
If the BulkheadFullException occurs multiple times, these stack trace lines would repeat in our log files.\nWe can reduce the amount of information that is generated in the stack trace by setting the writableStackTraceEnabled configuration to false:\nBulkheadConfig config = BulkheadConfig.custom() .maxConcurrentCalls(2) .maxWaitDuration(Duration.ofSeconds(1)) .writableStackTraceEnabled(false) .build(); Now, when a BulkheadFullException occurs, only a single line is present in the stack trace:\nSearching for flights; current time = 12:27:58 658; current thread = ForkJoinPool.commonPool-worker-3 Searching for flights; current time = 12:27:58 658; current thread = ForkJoinPool.commonPool-worker-5 io.github.resilience4j.bulkhead.BulkheadFullException: Bulkhead \u0026#39;flightSearchService\u0026#39; is full and does not permit further calls Flight search successful at 12:27:58 699 Flight search successful at 12:27:58 699 Received results Received results Similar to the other Resilience4j modules we have seen, the Bulkhead also provides additional methods like decorateCheckedSupplier(), decorateCompletionStage(), decorateRunnable(), decorateConsumer() etc. so we can provide our code in other constructs than a Supplier.\nThreadPoolBulkhead When using the thread pool-based bulkhead, ThreadPoolBulkheadRegistry, ThreadPoolBulkheadConfig, and ThreadPoolBulkhead are the main abstractions we work with.\nThreadPoolBulkheadRegistry is a factory for creating and managing ThreadPoolBulkhead objects.\nThreadPoolBulkheadConfig encapsulates the coreThreadPoolSize , maxThreadPoolSize , keepAliveDuration and queueCapacity configurations. 
Each ThreadPoolBulkhead object is associated with a ThreadPoolBulkheadConfig.\nThe first step is to create a ThreadPoolBulkheadConfig:\nThreadPoolBulkheadConfig config = ThreadPoolBulkheadConfig.ofDefaults(); This creates a ThreadPoolBulkheadConfig with default values for coreThreadPoolSize (number of processors available - 1), maxThreadPoolSize (maximum number of processors available), keepAliveDuration (20ms), and queueCapacity (100).\nLet\u0026rsquo;s say we want to limit the number of concurrent calls to 2:\nThreadPoolBulkheadConfig config = ThreadPoolBulkheadConfig.custom() .maxThreadPoolSize(2) .coreThreadPoolSize(1) .queueCapacity(1) .build(); We then create a ThreadPoolBulkhead:\nThreadPoolBulkheadRegistry registry = ThreadPoolBulkheadRegistry.of(config); ThreadPoolBulkhead bulkhead = registry.bulkhead(\u0026#34;flightSearchService\u0026#34;); Let\u0026rsquo;s now express our code to run a flight search as a Supplier and decorate it using the bulkhead:\nSupplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; flightsSupplier = () -\u0026gt; service.searchFlightsTakingOneSecond(request); Supplier\u0026lt;CompletionStage\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt;\u0026gt; decoratedFlightsSupplier = ThreadPoolBulkhead.decorateSupplier(bulkhead, flightsSupplier); Unlike Bulkhead.decorateSupplier() which returned a Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt;, the ThreadPoolBulkhead.decorateSupplier() returns a Supplier\u0026lt;CompletionStage\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt;\u0026gt;. 
This is because the ThreadPoolBulkhead does not execute the code synchronously on the current thread.\nFinally, let\u0026rsquo;s call the decorated operation a few times to understand how the bulkhead works:\nfor (int i=0; i\u0026lt;3; i++) { decoratedFlightsSupplier .get() .whenComplete((r,t) -\u0026gt; { if (r != null) { System.out.println(\u0026#34;Received results\u0026#34;); } if (t != null) { t.printStackTrace(); } }); } The timestamps and thread names in the output show that while the first two requests executed immediately, the third request was queued and later executed by one of the threads that freed up:\nSearching for flights; current time = 16:15:00 097; current thread = bulkhead-flightSearchService-1 Searching for flights; current time = 16:15:00 097; current thread = bulkhead-flightSearchService-2 Flight search successful at 16:15:00 136 Flight search successful at 16:15:00 135 Received results Received results Searching for flights; current time = 16:15:01 151; current thread = bulkhead-flightSearchService-2 Flight search successful at 16:15:01 151 Received results If there are no free threads and no capacity in the queue, a BulkheadFullException is thrown:\nException in thread \u0026#34;main\u0026#34; io.github.resilience4j.bulkhead.BulkheadFullException: Bulkhead \u0026#39;flightSearchService\u0026#39; is full and does not permit further calls at io.github.resilience4j.bulkhead.BulkheadFullException.createBulkheadFullException(BulkheadFullException.java:64) at io.github.resilience4j.bulkhead.internal.FixedThreadPoolBulkhead.submit(FixedThreadPoolBulkhead.java:157) ... other lines omitted ... 
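The rejection behavior just demonstrated can be reproduced with plain java.util.concurrent primitives. This is a hedged sketch of the mechanism, not Resilience4j's actual implementation: a bounded pool plus a bounded ArrayBlockingQueue, where a rejected task plays the role of a BulkheadFullException:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: once all pool threads are busy AND the queue is full,
// ThreadPoolExecutor rejects further work -- analogous to the
// BulkheadFullException thrown by the ThreadPoolBulkhead.
public class PoolRejectionDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2,                       // core and max pool size
                20, TimeUnit.MILLISECONDS,  // keep-alive for extra threads
                new ArrayBlockingQueue<>(1) // queue capacity of 1
        );
        Runnable slowTask = () -> {
            try {
                Thread.sleep(500); // simulate a slow remote call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        int rejected = 0;
        for (int i = 0; i < 5; i++) {
            try {
                pool.execute(slowTask);
            } catch (RejectedExecutionException e) {
                rejected++; // no free thread and no queue capacity left
            }
        }
        System.out.println("Rejected tasks: " + rejected); // prints "Rejected tasks: 2"
        pool.shutdownNow();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

Two tasks run on pool threads, one waits in the queue, and the remaining two are rejected immediately — the same "fail fast when saturated" behavior the bulkhead provides, minus the metrics, events, and configuration Resilience4j adds on top.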
We can use the writableStackTraceEnabled configuration to reduce the amount of information that is generated in the stack trace:\nThreadPoolBulkheadConfig config = ThreadPoolBulkheadConfig.custom() .maxThreadPoolSize(2) .coreThreadPoolSize(1) .queueCapacity(1) .writableStackTraceEnabled(false) .build(); Now, when a BulkheadFullException occurs, only a single line is present in the stack trace:\nSearching for flights; current time = 12:27:58 658; current thread = ForkJoinPool.commonPool-worker-3 Searching for flights; current time = 12:27:58 658; current thread = ForkJoinPool.commonPool-worker-5 io.github.resilience4j.bulkhead.BulkheadFullException: Bulkhead \u0026#39;flightSearchService\u0026#39; is full and does not permit further calls Flight search successful at 12:27:58 699 Flight search successful at 12:27:58 699 Received results Received results Context Propagation Sometimes we store data in a ThreadLocal variable and read it in a different area of the code. We do this to avoid explicitly passing the data as a parameter between method chains, especially when the value is not directly related to the core business logic we are implementing.\nFor example, we might want to log the current user ID or a transaction ID or some request tracking ID to every log statement to make it easier to search logs. Using a ThreadLocal is a useful technique for such scenarios.\nWhen using the ThreadPoolBulkhead, since our code is not executed on the current thread, the data we had stored on ThreadLocal variables will not be available in the other thread.\nLet\u0026rsquo;s look at an example to understand this problem. 
First we define a RequestTrackingIdHolder class, a wrapper class around a ThreadLocal:\nclass RequestTrackingIdHolder { static ThreadLocal\u0026lt;String\u0026gt; threadLocal = new ThreadLocal\u0026lt;\u0026gt;(); static String getRequestTrackingId() { return threadLocal.get(); } static void setRequestTrackingId(String id) { if (threadLocal.get() != null) { threadLocal.set(null); threadLocal.remove(); } threadLocal.set(id); } static void clear() { threadLocal.set(null); threadLocal.remove(); } } The static methods make it easy to set and get the value stored on the ThreadLocal. We next set a request tracking id before calling the bulkhead-decorated flight search operation:\nfor (int i=0; i\u0026lt;2; i++) { String trackingId = UUID.randomUUID().toString(); System.out.println(\u0026#34;Setting trackingId \u0026#34; + trackingId + \u0026#34; on parent, main thread before calling flight search\u0026#34;); RequestTrackingIdHolder.setRequestTrackingId(trackingId); decoratedFlightsSupplier .get() .whenComplete((r,t) -\u0026gt; { // other lines omitted  }); } The sample output shows that this value was not available in the bulkhead-managed thread:\nSetting trackingId 98ff99df-466a-47f7-88f7-5e31fc8fcb6b on parent, main thread before calling flight search Setting trackingId 6b98d73c-a590-4a20-b19d-c85fea783caf on parent, main thread before calling flight search Searching for flights; current time = 19:53:53 799; current thread = bulkhead-flightSearchService-1; Request Tracking Id = null Flight search successful at 19:53:53 824 Received results Searching for flights; current time = 19:53:54 836; current thread = bulkhead-flightSearchService-1; Request Tracking Id = null Flight search successful at 19:53:54 836 Received results To solve this problem, ThreadPoolBulkhead provides a ContextPropagator. ContextPropagator is an abstraction for retrieving, copying and cleaning up values across thread boundaries. 
It defines an interface with methods to get a value from the current thread (retrieve()), copy it to the new executing thread (copy()), and finally clean it up on the executing thread (clear()).\nLet\u0026rsquo;s implement a RequestTrackingIdPropagator:\nclass RequestTrackingIdPropagator implements ContextPropagator { @Override public Supplier\u0026lt;Optional\u0026gt; retrieve() { System.out.println(\u0026#34;Getting request tracking id from thread: \u0026#34; + Thread.currentThread().getName()); return () -\u0026gt; Optional.of(RequestTrackingIdHolder.getRequestTrackingId()); } @Override public Consumer\u0026lt;Optional\u0026gt; copy() { return optional -\u0026gt; { System.out.println(\u0026#34;Setting request tracking id \u0026#34; + optional.get() + \u0026#34; on thread: \u0026#34; + Thread.currentThread().getName()); optional.ifPresent(s -\u0026gt; RequestTrackingIdHolder.setRequestTrackingId(s.toString())); }; } @Override public Consumer\u0026lt;Optional\u0026gt; clear() { return optional -\u0026gt; { System.out.println(\u0026#34;Clearing request tracking id on thread: \u0026#34; + Thread.currentThread().getName()); optional.ifPresent(s -\u0026gt; RequestTrackingIdHolder.clear()); }; } } We provide the ContextPropagator to the ThreadPoolBulkhead by setting it on the ThreadPoolBulkheadConfig:\nThreadPoolBulkheadConfig config = ThreadPoolBulkheadConfig.custom() .maxThreadPoolSize(2) .coreThreadPoolSize(1) .queueCapacity(1) .contextPropagator(new RequestTrackingIdPropagator()) .build(); Now, the sample output shows that the request tracking id was made available in the bulkhead-managed thread:\nSetting trackingId 71d44cb8-dab6-4222-8945-e7fd023528ba on parent, main thread before calling flight search Getting request tracking id from thread: main Setting trackingId 5f9dd084-f2cb-4a20-804b-038828abc161 on parent, main thread before calling flight search Getting request tracking id from thread: main Setting request tracking id 71d44cb8-dab6-4222-8945-e7fd023528ba on thread: 
bulkhead-flightSearchService-1 Searching for flights; current time = 20:07:56 508; current thread = bulkhead-flightSearchService-1; Request Tracking Id = 71d44cb8-dab6-4222-8945-e7fd023528ba Flight search successful at 20:07:56 538 Clearing request tracking id on thread: bulkhead-flightSearchService-1 Received results Setting request tracking id 5f9dd084-f2cb-4a20-804b-038828abc161 on thread: bulkhead-flightSearchService-1 Searching for flights; current time = 20:07:57 542; current thread = bulkhead-flightSearchService-1; Request Tracking Id = 5f9dd084-f2cb-4a20-804b-038828abc161 Flight search successful at 20:07:57 542 Clearing request tracking id on thread: bulkhead-flightSearchService-1 Received results Bulkhead Events Both Bulkhead and ThreadPoolBulkhead have an EventPublisher which generates events of the types\n BulkheadOnCallPermittedEvent, BulkheadOnCallRejectedEvent, and BulkheadOnCallFinishedEvent.  We can listen for these events and log them, for example:\nBulkhead bulkhead = registry.bulkhead(\u0026#34;flightSearchService\u0026#34;); bulkhead.getEventPublisher().onCallPermitted(e -\u0026gt; System.out.println(e.toString())); bulkhead.getEventPublisher().onCallFinished(e -\u0026gt; System.out.println(e.toString())); bulkhead.getEventPublisher().onCallRejected(e -\u0026gt; System.out.println(e.toString())); The sample output shows what\u0026rsquo;s logged:\n2020-08-26T12:27:39.790435: Bulkhead \u0026#39;flightSearch\u0026#39; permitted a call. ... other lines omitted ... 2020-08-26T12:27:40.290987: Bulkhead \u0026#39;flightSearch\u0026#39; rejected a call. ... other lines omitted ... 2020-08-26T12:27:41.094866: Bulkhead \u0026#39;flightSearch\u0026#39; has finished a call. Bulkhead Metrics SemaphoreBulkhead Bulkhead exposes two metrics:\n the maximum number of available permissions (resilience4j.bulkhead.max.allowed.concurrent.calls), and the number of allowed concurrent calls (resilience4j.bulkhead.available.concurrent.calls).  
The resilience4j.bulkhead.max.allowed.concurrent.calls metric is the same as the maxConcurrentCalls that we configure on the BulkheadConfig, while resilience4j.bulkhead.available.concurrent.calls shows how many permits are currently free.\nFirst, we create BulkheadConfig, BulkheadRegistry, and Bulkhead as usual. Then, we create a MeterRegistry and bind the BulkheadRegistry to it:\nMeterRegistry meterRegistry = new SimpleMeterRegistry(); TaggedBulkheadMetrics.ofBulkheadRegistry(registry) .bindTo(meterRegistry); After running the bulkhead-decorated operation a few times, we display the captured metrics:\nConsumer\u0026lt;Meter\u0026gt; meterConsumer = meter -\u0026gt; { String desc = meter.getId().getDescription(); String metricName = meter.getId().getName(); Double metricValue = StreamSupport.stream(meter.measure().spliterator(), false) .filter(m -\u0026gt; m.getStatistic().name().equals(\u0026#34;VALUE\u0026#34;)) .findFirst() .map(m -\u0026gt; m.getValue()) .orElse(0.0); System.out.println(desc + \u0026#34; - \u0026#34; + metricName + \u0026#34;: \u0026#34; + metricValue); }; meterRegistry.forEachMeter(meterConsumer); Here\u0026rsquo;s some sample output:\nThe maximum number of available permissions - resilience4j.bulkhead.max.allowed.concurrent.calls: 8.0 The number of available permissions - resilience4j.bulkhead.available.concurrent.calls: 3.0 ThreadPoolBulkhead ThreadPoolBulkhead exposes five metrics:\n the current length of the queue (resilience4j.bulkhead.queue.depth), the current size of the thread pool (resilience4j.bulkhead.thread.pool.size), the core and maximum sizes of the thread pool (resilience4j.bulkhead.core.thread.pool.size and resilience4j.bulkhead.max.thread.pool.size), and the capacity of the queue (resilience4j.bulkhead.queue.capacity).  First, we create ThreadPoolBulkheadConfig, ThreadPoolBulkheadRegistry, and ThreadPoolBulkhead as usual.
Then, we create a MeterRegistry and bind the ThreadPoolBulkheadRegistry to it:\nMeterRegistry meterRegistry = new SimpleMeterRegistry(); TaggedThreadPoolBulkheadMetrics.ofThreadPoolBulkheadRegistry(registry).bindTo(meterRegistry); After running the bulkhead-decorated operation a few times, we display the captured metrics:\nThe queue capacity - resilience4j.bulkhead.queue.capacity: 5.0 The queue depth - resilience4j.bulkhead.queue.depth: 1.0 The thread pool size - resilience4j.bulkhead.thread.pool.size: 5.0 The maximum thread pool size - resilience4j.bulkhead.max.thread.pool.size: 5.0 The core thread pool size - resilience4j.bulkhead.core.thread.pool.size: 3.0 In a real application, we would export the data to a monitoring system periodically and analyze it on a dashboard.\nGotchas and Good Practices When Implementing Bulkhead Make the Bulkhead a Singleton All calls to a given remote service should go through the same Bulkhead instance. For a given remote service the Bulkhead must be a singleton.\nIf we don\u0026rsquo;t enforce this, some areas of our codebase may make a direct call to the remote service, bypassing the Bulkhead. To prevent this, the actual call to the remote service should be in a core, internal layer and other areas should use the bulkhead decorator exposed by the internal layer.\nHow can we ensure that a new developer understands this intent in the future? Check out Tom\u0026rsquo;s article which shows one way of solving such problems by organizing the package structure to make such intents clear. Additionally, it shows how to enforce this by codifying the intent in ArchUnit tests.\nCombine with Other Resilience4j Modules It\u0026rsquo;s more effective to combine a bulkhead with one or more of the other Resilience4j modules like retry and rate limiter. 
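As a plain-Java sketch of that combination (illustrative only; the class and method names below are ours, not the Resilience4j API, which provides Retry and Bulkhead decorators for this out of the box), a bounded retry loop around a semaphore-guarded call rejects work only after a few spaced attempts:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Illustrative sketch only (not the Resilience4j API): a semaphore-based
// bulkhead wrapped in a bounded retry loop with a fixed wait between attempts.
public class RetryingBulkhead {
    private final Semaphore permits;

    public RetryingBulkhead(int maxConcurrentCalls) {
        this.permits = new Semaphore(maxConcurrentCalls);
    }

    public <T> T call(Supplier<T> operation, int maxAttempts, long waitMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (permits.tryAcquire()) {       // bulkhead: admit only if a permit is free
                try {
                    return operation.get();
                } finally {
                    permits.release();
                }
            }
            try {
                Thread.sleep(waitMillis);     // retry: back off before the next attempt
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while waiting to retry", e);
            }
        }
        // the equivalent of BulkheadFullException once all attempts are exhausted
        throw new IllegalStateException("bulkhead full after " + maxAttempts + " attempts");
    }
}
```

With the real library, the same shape is achieved declaratively by decorating the supplier with both a Bulkhead and a Retry instead of hand-rolling the loop.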
We may want to retry after some delay if there is a BulkheadFullException, for example.\nConclusion In this article, we learned how we can use Resilience4j\u0026rsquo;s Bulkhead module to set a limit on the concurrent calls that we make to a remote service. We learned why this is important and also saw some practical examples on how to configure it.\nYou can play around with a complete application illustrating these ideas using the code on GitHub.\n","date":"September 17, 2020","image":"https://reflectoring.io/images/stock/0081-safe-1200x628-branded_hu3cea99ddea81138af0ed883346ac5ed4_108622_650x0_resize_q90_box.jpg","permalink":"/bulkhead-with-resilience4j/","title":"Implementing Bulkhead with Resilience4j"},{"categories":["Spring Boot"],"contents":"Containers have emerged as the preferred means of packaging an application with all the software and operating system dependencies and then shipping that across to different environments.\nThis article looks at different ways of containerizing a Spring Boot application:\n building a Docker image using a Docker file, building an OCI image from source code with Cloud-Native Buildpack, and optimizing the image at runtime by splitting parts of the JAR into different layers using layered tools.   Example Code This article is accompanied by a working code example on GitHub. Container Terminology We will start with the container terminologies used throughout the article:\n  Container image: a file with a specific format. 
We convert our application into a container image by running a build tool.\n  Container: the runtime instance of a container image.\n  Container engine: the daemon process responsible for running the Container.\n  Container host: the host machine on which the container engine runs.\n  Container registry: the shared location that is used for publishing and distributing the container image.\n  OCI Standard: the Open Container Initiative (OCI) is a lightweight, open governance structure formed under the Linux Foundation. The OCI Image Specification defines industry standards for container image formats and runtimes to ensure that all container engines can run container images produced by any build tool.\n  To containerize an application, we enclose our application inside a container image and publish that image to a shared registry. The container runtime pulls this image from the registry, unpacks the image, and runs the application inside it.\nThe 2.3 release of Spring Boot provides plugins for building OCI images.\nDocker happens to be the most commonly used container implementation and we are using Docker in our examples, so all subsequent reference to a container in this article will mean Docker.\nBuilding a Container Image the Conventional Way It is very easy to create Docker images of Spring Boot applications by adding a few instructions to a Docker file.\nWe first build an executable JAR and as part of the Docker file instructions, copy the executable JAR over a base JRE image after applying necessary customizations.\nLet us create our Spring Boot application from Spring Initializr with dependencies for web, lombok, and actuator. 
We also add a rest controller to expose an API with the GET method.\nCreating a Docker File Next, we containerize this application by adding a Dockerfile:\nFROM adoptopenjdk:11-jre-hotspot ARG JAR_FILE=target/*.jar COPY ${JAR_FILE} application.jar EXPOSE 8080 ENTRYPOINT [\u0026#34;java\u0026#34;,\u0026#34;-jar\u0026#34;,\u0026#34;/application.jar\u0026#34;] Our Docker file contains a base image from adoptopenjdk over which we copy our JAR file and then expose the port 8080 which will listen for requests.\nBuilding the Application We first build the application with Maven or Gradle. We are using Maven here:\nmvn clean package This creates an executable JAR of the application. We need to convert this executable JAR into a Docker image for running in a Docker engine.\nBuilding the Container Image Next, we put this executable JAR in a Docker image by running the docker build command from the root project directory containing the Docker file created earlier:\ndocker build -t usersignup:v1 . We can see our image listed with the command:\ndocker images The output of the above command includes our image usersignup along with the base image adoptopenjdk specified in our Docker file.\nREPOSITORY TAG SIZE usersignup v1 249MB adoptopenjdk 11-jre-hotspot 229MB Viewing the Layers Inside the Container Image Let us see the stack of layers inside the image. We will use the dive tool to view those layers:\ndive usersignup:v1 Here is part of the output from running the Dive command:\nAs we can see the application layer forms a significant part of the image size. We will aim to reduce the size of this layer in the following sections as part of our optimization.\nBuilding a Container Image with Buildpack Buildpacks is a generic term used by various Platform as a Service(PAAS) offerings to build a container image from source code. 
It was started by Heroku in 2011 and has since been adopted by Cloud Foundry, Google App Engine, Gitlab, Knative, and some others.\nAdvantage of Cloud-Native Buildpacks One main advantage of using Buildpack for building images is that changes to the image configuration can be managed in a centralized place (the builder) and propagated to all applications which are using the builder.\nBuildpacks were tightly coupled to the platform. Cloud-Native Buildpacks bring standardization across platforms by supporting the OCI image format which ensures the image can be run by a Docker engine.\nUsing the Spring Boot Plugin The Spring Boot plugin creates OCI images from the source code using a Buildpack. Images are built using the bootBuildImage task (Gradle) or the spring-boot:build-image goal (Maven) and a local Docker installation.\nWe can customize the name of the image required for pushing to the Docker Registry by specifying the name in the image tag:\n\u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;image\u0026gt; \u0026lt;name\u0026gt;docker.io/pratikdas/${project.artifactId}:v1\u0026lt;/name\u0026gt; \u0026lt;/image\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; Let us use Maven to run the build-image goal to build the application and create the container image. We are not using any Docker file now.\nmvn spring-boot:build-image Running this will produce an output similar to:\n[INFO] --- spring-boot-maven-plugin:2.3.3.RELEASE:build-image (default-cli) @ usersignup --- [INFO] Building image \u0026#39;docker.io/pratikdas/usersignup:v1\u0026#39; [INFO] [INFO] \u0026gt; Pulling builder image \u0026#39;gcr.io/paketo-buildpacks/builder:base-platform-api-0.3\u0026#39; 0% . . .. [creator] Adding label \u0026#39;org.springframework.boot.version\u0026#39; .. 
[creator] *** Images (c311fe74ec73): .. [creator] docker.io/pratikdas/usersignup:v1 [INFO] [INFO] Successfully built image \u0026#39;docker.io/pratikdas/usersignup:v1\u0026#39; From the output, we can see the paketo Cloud-Native buildpack being used to build a runnable OCI image. As we did earlier, we can see the image listed as a Docker image by running the command:\ndocker images Output:\nREPOSITORY SIZE paketobuildpacks/run 84.3MB gcr.io/paketo-buildpacks/builder 652MB pratikdas/usersignup 257MB Building a Container Image with Jib Jib is an image builder plugin from Google and provides an alternate method of building a container image from source code.\nWe configure the jib-maven-plugin in pom.xml:\n\u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;com.google.cloud.tools\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;jib-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.5.2\u0026lt;/version\u0026gt; \u0026lt;/plugin\u0026gt; Next, we trigger the Jib plugin with the Maven command to build the application and create the container image. As before, we are not using any Docker file here:\nmvn compile jib:build -Dimage=\u0026lt;docker registry name\u0026gt;/usersignup:v1 We get the following output after running the above Maven command:\n[INFO] Containerizing application to pratikdas/usersignup:v1... . . [INFO] Container entrypoint set to [java, -cp, /app/resources:/app/classes:/app/libs/*, io.pratik.users.UsersignupApplication] [INFO] [INFO] Built and pushed image as pratikdas/usersignup:v1 [INFO] Executing tasks: [INFO] [==============================] 100.0% complete The output shows that the container image is built and pushed to the registry.\nMotivations and Techniques for Building Optimized Images We have two main motivations for optimization:\n Performance: in a container orchestration system, the container image is pulled from the image registry to a host running a container engine. This process is called scheduling. 
Pulling large-sized images from the registry results in long scheduling times in container orchestration systems and long build times in CI pipelines. Security: large-sized images also have a greater surface area for vulnerabilities.  A Docker image is composed of a stack of layers, each representing an instruction in our Dockerfile. Each layer is a delta of the changes over the underlying layer. When we pull the Docker image from the registry, it is pulled by layers and cached in the host.\nSpring Boot uses a \u0026ldquo;fat JAR\u0026rdquo; as its default packaging format. When we inspect the fat JAR, we can see that the application forms a very small part of the entire JAR. This is the part that changes most frequently. The remaining part is composed of the Spring Framework dependencies.\nThe optimization formula centers on isolating the application into a separate layer from the Spring Framework dependencies.\nThe dependencies layer forming the bulk of the fat JAR is downloaded only once and cached in the host system.\nOnly the thin application layer is pulled during application updates and container scheduling as illustrated in this diagram:\nLet\u0026rsquo;s have a look at how to build those optimized images for a Spring Boot application in the next sections.\nBuilding an Optimized Container Image for a Spring Boot Application with Buildpack Spring Boot 2.3 supports layering by extracting parts of the fat JAR into separate layers.
The layering feature is turned off by default and needs to be explicitly enabled with the Spring Boot Maven plugin:\n\u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;layers\u0026gt; \u0026lt;enabled\u0026gt;true\u0026lt;/enabled\u0026gt; \u0026lt;/layers\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; We will use this configuration to generate our container image first with Buildpack and then with Docker in the following sections.\nLet us run the Maven build-image goal to create the container image:\nmvn spring-boot:build-image If we run Dive to see the layers in the resulting image, we can see the application layer (encircled in red) is much smaller, in the range of kilobytes, compared to what we had obtained by using the fat JAR format:\nBuilding an Optimized Container Image for a Spring Boot Application with Docker Instead of using the Maven or Gradle plugin, we can also create a layered JAR Docker image with a Docker file.\nWhen we are using Docker, we need to perform two additional steps for extracting the layers and copying those into the final image.\nThe contents of the resulting JAR after building with Maven with the layering feature turned on will look like this:\nMETA-INF/ . BOOT-INF/lib/ . BOOT-INF/lib/spring-boot-jarmode-layertools-2.3.3.RELEASE.jar BOOT-INF/classpath.idx BOOT-INF/layers.idx The output shows an additional JAR named spring-boot-jarmode-layertools and a layers.idx file.
The layering feature is provided by this additional JAR as explained in the next section.\nExtracting the Dependencies in Separate Layers To view and extract the layers from our layered JAR, we use a system property -Djarmode=layertools to launch the spring-boot-jarmode-layertools JAR instead of the application:\njava -Djarmode=layertools -jar target/usersignup-0.0.1-SNAPSHOT.jar Running this command produces the output containing available command options:\nUsage: java -Djarmode=layertools -jar usersignup-0.0.1-SNAPSHOT.jar Available commands: list List layers from the jar that can be extracted extract Extracts layers from the jar for image creation help Help about any command The output shows the commands list, extract, and help with help being the default. Let us run the command with the list option:\njava -Djarmode=layertools -jar target/usersignup-0.0.1-SNAPSHOT.jar list dependencies spring-boot-loader snapshot-dependencies application We can see the list of dependencies that can be added as layers.\nThe default layers are:\n   Layer name Contents     dependencies any dependency whose version does not contain SNAPSHOT   spring-boot-loader JAR loader classes   snapshot-dependencies any dependency whose version contains SNAPSHOT   application application classes and resources    The layers are defined in a layers.idx file in the order that they should be added to the Docker image. These layers get cached in the host after the first pull since they do not change. Only the updated application layer is downloaded to the host which is faster because of the reduced size.\nBuilding the Image with Dependencies Extracted in Separate Layers We will build the final image in two stages using a method called multi-stage build. 
In the first stage, we will extract the dependencies and in the second stage, we will copy the extracted dependencies to the final image.\nLet us modify our Docker file for multi-stage build:\n# the first stage of our build will extract the layers FROM adoptopenjdk:14-jre-hotspot as builder WORKDIR application ARG JAR_FILE=target/*.jar COPY ${JAR_FILE} application.jar RUN java -Djarmode=layertools -jar application.jar extract # the second stage of our build will copy the extracted layers FROM adoptopenjdk:14-jre-hotspot WORKDIR application COPY --from=builder application/dependencies/ ./ COPY --from=builder application/spring-boot-loader/ ./ COPY --from=builder application/snapshot-dependencies/ ./ COPY --from=builder application/application/ ./ ENTRYPOINT [\u0026#34;java\u0026#34;, \u0026#34;org.springframework.boot.loader.JarLauncher\u0026#34;] We save this configuration in a separate file - Dockerfile2.\nWe build the Docker image using the command:\ndocker build -f Dockerfile2 -t usersignup:v1 . After running this command, we get this output:\nSending build context to Docker daemon 20.41MB Step 1/12 : FROM adoptopenjdk:14-jre-hotspot as builder 14-jre-hotspot: Pulling from library/adoptopenjdk . . Successfully built a9ebf6970841 Successfully tagged userssignup:v1 We can see the Docker image is created with an Image ID and then tagged.\nWe finally run the Dive command as before to check the layers inside the generated Docker image. 
We can specify either the Image ID or tag as input to the Dive command:\ndive userssignup:v1 As we can see in the output, the layer containing the application is only 11 kB now with the dependencies cached in separate layers.\nExtracting Internal Dependencies in Separate Layers We can further reduce the application layer size by extracting any of our custom dependencies in a separate layer instead of packaging them with the application, by declaring them in a YAML-like file named layers.idx:\n- \u0026quot;dependencies\u0026quot;: - \u0026quot;BOOT-INF/lib/\u0026quot; - \u0026quot;spring-boot-loader\u0026quot;: - \u0026quot;org/\u0026quot; - \u0026quot;snapshot-dependencies\u0026quot;: - \u0026quot;custom-dependencies\u0026quot;: - \u0026quot;io/myorg/\u0026quot; - \u0026quot;application\u0026quot;: - \u0026quot;BOOT-INF/classes/\u0026quot; - \u0026quot;BOOT-INF/classpath.idx\u0026quot; - \u0026quot;BOOT-INF/layers.idx\u0026quot; - \u0026quot;META-INF/\u0026quot; In this layers.idx file, we have added a custom layer named custom-dependencies containing organization dependencies under io/myorg/ pulled from a shared repository.\nConclusion In this article, we looked at using Cloud-Native Buildpacks to create the container image directly from source code.
This is an alternative to using Docker for building the container image using the conventional way, by first building the fat executable JAR and then packaging it in a container image by specifying the instructions in a Dockerfile.\nWe also looked at optimizing our container by enabling the layering feature which extracts the dependencies in separate layers that get cached in the host and the thin layer of application is downloaded during scheduling in container runtime engines.\nYou can refer to all the source code used in the article on Github.\nCommand Reference Here is a summary of commands which we used throughout this article for quick reference.\nClean our environment:\ndocker system prune -a Build container image with Docker file:\ndocker build -f \u0026lt;Docker file name\u0026gt; -t \u0026lt;tag\u0026gt; . Build container image from source (without Dockerfile):\nmvn spring-boot:build-image View layers of dependencies. Ensure the layering feature is enabled in spring-boot-maven-plugin before building the application JAR:\njava -Djarmode=layertools -jar application.jar list Extract layers of dependencies. Ensure the layering feature is enabled in spring-boot-maven-plugin before building the application JAR:\n java -Djarmode=layertools -jar application.jar extract View list of container images\ndocker images View layers inside container image (Ensure dive tool is installed):\ndive \u0026lt;image ID or image tag\u0026gt; ","date":"September 5, 2020","image":"https://reflectoring.io/images/stock/0080-containers-1200x628-branded_huad491f94a79bc79a4173308daff66796_259104_650x0_resize_q90_box.jpg","permalink":"/spring-boot-docker/","title":"Creating Optimized Docker Images for a Spring Boot Application"},{"categories":["Spring Boot"],"contents":"Logging is a vital part of all applications and brings benefits not only to us developers but also to ops and business people. 
Spring Boot applications need to capture relevant log data to help us diagnose and fix problems and measure business metrics.\nThe Spring Boot framework is preconfigured with Logback as a default implementation in its opinionated framework. This article looks at different ways of configuring logging in Spring Boot.\n Example Code This article is accompanied by a working code example on GitHub. Why Is Logging Important? The decisions on what to log and where are often strategic and are taken after considering that the application will malfunction in live environments. Logs play a key role in helping the application to recover quickly from any such failures and resume normal operations.\nMaking Errors At Integration Points Visible The distributed nature of today\u0026rsquo;s applications built using microservice architecture introduces a lot of moving parts. As such, it is natural to encounter problems due to temporary interruptions in any of the surrounding systems.\nException logs captured at the integration points enable us to detect the root cause of the interruption and allow us to take appropriate actions to recover with minimum impact on the end-user experience.\nDiagnosing Functional Errors In Production There could be customer complaints of an incorrect transaction amount. To diagnose this, we need to drill into our logs to find the sequence of operations starting from the request payload when the API is invoked until the response payload at the end of API processing.\nAnalyzing Event History Log statements capture a footprint of the application execution. 
We refer to these logs after the fact to analyze any normal or unexpected behavior of the application for a variety of tasks.\nWe can find out the number of users logged in within a particular time window or how many users are actively making use of any newly released feature which is valuable information to plan the changes for future releases.\nMonitoring Observability tools monitor the logs in real-time to gather important metrics useful for both business and operations and can also be configured to raise alarms when these metrics exceed specific thresholds. Developers use logs for debugging and tracing and even to capture important events for build and test runs in CI/CD pipelines.\nSpring Boot\u0026rsquo;s Default Logging Configuration The default logging configuration in Spring Boot is a Logback implementation at the info level for logging the output to console.\nLet us see this behavior in action by creating a Spring Boot application. We generate a minimal application with just the web dependency using start.spring.io. Next, we add some log statements to the application class file:\n@SpringBootApplication public class SpringLoggerApplication { static final Logger log = LoggerFactory.getLogger(SpringLoggerApplication.class); public static void main(String[] args) { log.info(\u0026#34;Before Starting application\u0026#34;); SpringApplication.run(SpringLoggerApplication.class, args); log.debug(\u0026#34;Starting my application in debug with {} args\u0026#34;, args.length); log.info(\u0026#34;Starting my application with {} args.\u0026#34;, args.length); } } After compiling with Maven or Gradle and running the resulting jar file, we can see our log statements getting printed in the console:\n13:21:45.673 [main] INFO io.pratik.springLogger.SpringLoggerApplication - Before Starting application . 
____ _ __ _ _ /\\\\ / ___\u0026#39;_ __ _ _(_)_ __ __ _ \\ \\ \\ \\ ( ( )\\___ | \u0026#39;_ | \u0026#39;_| | \u0026#39;_ \\/ _` | \\ \\ \\ \\ \\\\/ ___)| |_)| | | | | || (_| | ) ) ) ) \u0026#39; |____| .__|_| |_|_| |_\\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v2.3.2.RELEASE) . . . ... : Started SpringLoggerApplication in 3.054 seconds (JVM running for 3.726) ... : Starting my application with 0 args. The first info log is printed, followed by a seven-line banner of Spring and then the next info log. The debug statement is suppressed.\nHigh-Level Logging Configuration Spring Boot offers considerable support for configuring the logger to meet our logging requirements.\nOn a high level, we can modify command-line parameters or add properties to application.properties (or application.yml) to configure some logging features.\nConfiguring the Log Level with a Command-Line Parameter Sometimes we need to see detailed logs to troubleshoot an application behavior. To achieve that, we send our desired log level as an argument when running our application.\njava -jar target/springLogger-0.0.1-SNAPSHOT.jar --trace This will output logs from the trace level upwards: trace, debug, info, warn, and error.\nConfiguring Package-Level Logging Most of the time, we are more interested in the log output of the code we have written instead of log output from frameworks like Spring. We control the logging by specifying package names in the property logging.level.\u0026lt;package-name\u0026gt;:\njava \\\\ -jar target/springLogger-0.0.1-SNAPSHOT.jar \\\\ -Dlogging.level.org.springframework=ERROR \\\\ -Dlogging.level.io.pratik=TRACE Alternatively, we can specify our package in application.properties:\nlogging.level.org.springframework=ERROR logging.level.io.app=TRACE Logging to a File We can write our logs to a file path by setting only one of the properties logging.file.name or logging.file.path in our application.properties.
By default, for file output, the log level is set to info.\n# Output to a file named application.log. logging.file.name=application.log # Output to a file named spring.log in path /Users logging.file.path=/Users If both properties are set, only logging.file.name takes effect.\nNote that the name of these properties changed in Spring Boot 2.2 but the official documentation does not yet reflect this. Our example is working with version 2.3.2.RELEASE.\nApart from the file name, we can override the default logging pattern with the property logging.pattern.file:\n# Logging pattern for file logging.pattern.file= %d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg% Other properties related to the logging file:\n   Property What It Means Value If Not Set     logging.file.max-size maximum total size of log archive before a file is rotated 10 MB   logging.file.max-history how many days\u0026rsquo; worth of rotated log files to be kept 7 Days   logging.file.total-size-cap total size of log archives. Backups are deleted when the total size of log archives exceeds that threshold. not specified   logging.file.clean-history-on-start force log archive cleanup on application startup false    We can apply the same customization in a separate configuration file as we will see in the next section.\nSwitching Off the Banner The Spring banner at the top of the log file does not add any value. We can switch off the banner by setting the property to off in application.properties:\nspring.main.banner-mode=off Changing the Color of Log Output in the Console We can display ANSI color-coded output by setting the spring.output.ansi.enabled property. The possible values are ALWAYS, DETECT, and NEVER.\nspring.output.ansi.enabled=ALWAYS The property spring.output.ansi.enabled is set to DETECT by default. The colored output takes effect only if the target terminal supports ANSI codes.\nSwitching the Logger Implementation Logback starter is part of the default Spring Boot starter.
We can change this to log4j or java util implementations by including their starters and excluding the default spring-boot-starter-logging in pom.xml:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-web\u0026lt;/artifactId\u0026gt; \u0026lt;exclusions\u0026gt; \u0026lt;exclusion\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-logging\u0026lt;/artifactId\u0026gt; \u0026lt;/exclusion\u0026gt; \u0026lt;/exclusions\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.boot\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-boot-starter-log4j2\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; Low-Level Logging Configuration in logback-spring.xml We can isolate the log configuration from the application by specifying the configuration in logback.xml or logback-spring.xml in XML or groovy syntax. Spring recommends using logback-spring.xml or logback-spring.groovy because they are more powerful.\nThe default configuration is comprised of an appender element inside a root configuration tag. 
The pattern is specified inside an encoder element:\n\u0026lt;configuration\u0026gt; \u0026lt;include resource=\u0026#34;/org/springframework/boot/logging/logback/base.xml\u0026#34; /\u0026gt; \u0026lt;appender name=\u0026#34;STDOUT\u0026#34; class=\u0026#34;ch.qos.logback.core.ConsoleAppender\u0026#34;\u0026gt; \u0026lt;encoder\u0026gt; \u0026lt;pattern\u0026gt;%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n \u0026lt;/pattern\u0026gt; \u0026lt;/encoder\u0026gt; \u0026lt;/appender\u0026gt; \u0026lt;/configuration\u0026gt; Logging with Logback Configuration If we set the debug property in the configuration tag to true, we can see the values of the Logback configuration during application startup.\n\u0026lt;configuration debug=\u0026#34;true\u0026#34;\u0026gt; Starting our application with this setting produces output containing the configuration values of Logback used in the application:\n...- About to instantiate appender of type [...ConsoleAppender] ...- About to instantiate appender of type [...RollingFileAppender] ..SizeAndTimeBasedRollingPolicy.. - setting totalSizeCap to 0 Bytes ..SizeAndTimeBasedRollingPolicy.. - ..limited to [10 MB] each. ..SizeAndTimeBasedRollingPolicy.. Will use gz compression ..SizeAndTimeBasedRollingPolicy..use the pattern /var/folders/ ..RootLoggerAction - Setting level of ROOT logger to INFO Tracing Requests Across Microservices Debugging and tracing in microservice applications is challenging since the microservices are deployed and run independently, resulting in their logs being distributed across many individual components.\nWe can correlate our logs and trace requests across microservices by adding tracing information to the logging pattern in logback-spring.xml. Please check out tracing across distributed systems for a more elaborate explanation of distributed tracing.\nAggregating Logs on a Log Server Logs from different microservices are aggregated to a central location. 
For Spring Boot, we need to output logs in a format compatible with the log aggregation software. Let us look at an appender configured for Logstash:\n\u0026lt;appender name=\u0026#34;LOGSTASH\u0026#34; class=\u0026#34;net.logstash.logback.appender.LogstashTcpSocketAppender\u0026#34;\u0026gt; \u0026lt;destination\u0026gt;localhost:4560\u0026lt;/destination\u0026gt; \u0026lt;encoder charset=\u0026#34;UTF-8\u0026#34; class=\u0026#34;net.logstash.logback.encoder.LogstashEncoder\u0026#34; /\u0026gt; \u0026lt;/appender\u0026gt; Here, the LogstashEncoder encodes logs in JSON format and sends them to a log server at localhost:4560. We can then apply various visualization tools to query logs.\nConfiguring Logging Differently For Each Environment We often have different logging formats for local and production runtime environments. Spring profiles are an elegant way to implement different logging for each environment. You can refer to a very good use case in this article about environment-specific logging.\nUsing Lombok to Get a Logger Reference Just as a hint to save some typing: we can use the Lombok annotation @Slf4j to provide a reference to the logger:\n@Service @Slf4j public class UserService { public void getUser(final String userID) { log.info(\u0026#34;Service: Fetching user with id {}\u0026#34;, userID); } } Conclusion In this article, we saw how to use logging in Spring Boot and how to customize it further to suit our requirements. But to fully leverage the benefits, the logging capabilities of the framework need to be complemented with robust and standardized logging practices in engineering teams.\nThese practices will also need to be enforced with a mix of peer reviews and automated code quality tools. 
Everything taken together will ensure that when production errors happen, we have the maximum information available for our diagnosis.\nYou can refer to all the source code used in the article on GitHub.\n","date":"August 24, 2020","image":"https://reflectoring.io/images/stock/0031-matrix-1200x628-branded_hufb3c207f9151b804bbf7fe86cefe5814_184798_650x0_resize_q90_box.jpg","permalink":"/springboot-logging/","title":"Logging In Spring Boot"},{"categories":["Java"],"contents":"In this series so far, we have learned about Resilience4j and its Retry and RateLimiter modules. In this article, we will continue exploring Resilience4j with a look into the TimeLimiter. We will find out what problem it solves, when and how to use it, and also look at a few examples.\n Example Code This article is accompanied by a working code example on GitHub. What is Resilience4j? Please refer to the description in the previous article for a quick intro to how Resilience4j works in general.\nWhat is Time Limiting? Setting a limit on the amount of time we are willing to wait for an operation to complete is called time limiting. If the operation does not complete within the time we specified, we want to be notified about it with a timeout error.\nSometimes, this is also referred to as \u0026ldquo;setting a deadline\u0026rdquo;.\nOne main reason why we would do this is to ensure that we don\u0026rsquo;t make users or clients wait indefinitely. A slow service that does not give any feedback can be frustrating to the user.\nAnother reason we set time limits on operations is to make sure we don\u0026rsquo;t hold up server resources indefinitely. The timeout value that we specify when using Spring\u0026rsquo;s @Transactional annotation is an example - we don\u0026rsquo;t want to hold up database resources for long in this case.\nWhen to Use the Resilience4j TimeLimiter? 
Resilience4j\u0026rsquo;s TimeLimiter can be used to set time limits (timeouts) on asynchronous operations implemented with CompletableFuture.\nThe CompletableFuture class introduced in Java 8 makes asynchronous, non-blocking programming easier. A slow method can be executed on a different thread, freeing up the current thread to handle other tasks. We can provide a callback to be executed when slowMethod() returns:\nint slowMethod() { // time-consuming computation or remote operation  return 42; } CompletableFuture.supplyAsync(this::slowMethod) .thenAccept(System.out::println); The slowMethod() here could be some computation or remote operation. Usually, we want to set a time limit when making an asynchronous call like this. We don\u0026rsquo;t want to wait indefinitely for slowMethod() to return. If slowMethod() takes more than a second, for example, we may want to return a previously computed, cached value or maybe even error out.\nIn Java 8\u0026rsquo;s CompletableFuture, there\u0026rsquo;s no easy way to set a time limit on an asynchronous operation. CompletableFuture implements the Future interface and Future has an overloaded get() method to specify how long we can wait:\nCompletableFuture\u0026lt;Integer\u0026gt; completableFuture = CompletableFuture .supplyAsync(this::slowMethod); Integer result = completableFuture.get(3000, TimeUnit.MILLISECONDS); System.out.println(result); But there\u0026rsquo;s a problem here - the get() method is a blocking call. So it defeats the purpose of using CompletableFuture in the first place, which was to free up the current thread.\nThis is the problem that Resilience4j\u0026rsquo;s TimeLimiter solves - it lets us set a time limit on the asynchronous operation while retaining the benefit of being non-blocking when working with CompletableFuture in Java 8.\nThis limitation of CompletableFuture has been addressed in Java 9. 
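Java 9 closed this gap directly on CompletableFuture. A minimal, self-contained sketch of a non-blocking deadline with orTimeout() - the sleep duration and the timeout value below are made up for illustration:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class OrTimeoutDemo {

  // Hypothetical slow operation standing in for a remote call.
  static int slowMethod() {
    try {
      TimeUnit.MILLISECONDS.sleep(500);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return 42;
  }

  public static void main(String[] args) {
    CompletableFuture<Integer> future = CompletableFuture
        .supplyAsync(OrTimeoutDemo::slowMethod)
        .orTimeout(100, TimeUnit.MILLISECONDS); // non-blocking deadline (Java 9+)

    try {
      future.join();
    } catch (CompletionException e) {
      // The future completed exceptionally with a TimeoutException
      // before slowMethod() could return.
      System.out.println(e.getCause() instanceof TimeoutException); // prints "true"
    }
  }
}
```

Unlike the blocking get(timeout) call shown above, orTimeout() attaches the deadline to the future itself, so the calling thread stays free.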
We can set time limits directly using methods like orTimeout() or completeOnTimeout() on CompletableFuture in Java 9 and above. However, with its metrics and events, Resilience4j\u0026rsquo;s TimeLimiter still provides added value compared to the plain Java 9 solution.\nResilience4j TimeLimiter Concepts The TimeLimiter supports both Future and CompletableFuture. But using it with Future is equivalent to calling Future.get(long timeout, TimeUnit unit). So we will focus on the CompletableFuture in the remainder of this article.\nLike the other Resilience4j modules, the TimeLimiter works by decorating our code with the required functionality - in this case, returning a TimeoutException if an operation did not complete in the specified timeoutDuration.\nWe provide the TimeLimiter a timeoutDuration, a ScheduledExecutorService and the asynchronous operation itself expressed as a Supplier of a CompletionStage. It returns a decorated Supplier of a CompletionStage.\nInternally, it uses the scheduler to schedule a timeout task - the task of completing the CompletableFuture exceptionally with a TimeoutException. If the operation finishes first, the TimeLimiter cancels the internal timeout task.\nAlong with the timeoutDuration, there is another configuration, cancelRunningFuture, associated with a TimeLimiter. This configuration applies to Future only and not CompletableFuture. When a timeout occurs, it cancels the running Future before throwing a TimeoutException.\nUsing the Resilience4j TimeLimiter Module TimeLimiterRegistry, TimeLimiterConfig, and TimeLimiter are the main abstractions in resilience4j-timelimiter.\nTimeLimiterRegistry is a factory for creating and managing TimeLimiter objects.\nTimeLimiterConfig encapsulates the timeoutDuration and cancelRunningFuture configurations. 
Each TimeLimiter object is associated with a TimeLimiterConfig.\nTimeLimiter provides helper methods to create or execute decorators for Future and CompletableFuture Suppliers.\nLet\u0026rsquo;s see how to use the various features available in the TimeLimiter module. We will use the same example as the previous articles in this series. Assume that we are building a website for an airline to allow its customers to search for and book flights. Our service talks to a remote service encapsulated by the class FlightSearchService.\nThe first step is to create a TimeLimiterConfig:\nTimeLimiterConfig config = TimeLimiterConfig.ofDefaults(); This creates a TimeLimiterConfig with default values for timeoutDuration (1000ms) and cancelRunningFuture (true).\nLet\u0026rsquo;s say we want to set a timeout value of 2s instead of the default:\nTimeLimiterConfig config = TimeLimiterConfig.custom() .timeoutDuration(Duration.ofSeconds(2)) .build(); We then create a TimeLimiter:\nTimeLimiterRegistry registry = TimeLimiterRegistry.of(config); TimeLimiter limiter = registry.timeLimiter(\u0026#34;flightSearch\u0026#34;); We want to asynchronously call FlightSearchService.searchFlights() which returns a List\u0026lt;Flight\u0026gt;. 
Let\u0026rsquo;s express this as a Supplier\u0026lt;CompletionStage\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt;\u0026gt;:\nSupplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; flightSupplier = () -\u0026gt; service.searchFlights(request); Supplier\u0026lt;CompletionStage\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt;\u0026gt; origCompletionStageSupplier = () -\u0026gt; CompletableFuture.supplyAsync(flightSupplier); We can then decorate the Supplier using the TimeLimiter:\nScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(); Supplier\u0026lt;CompletionStage\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt;\u0026gt; decoratedCompletionStageSupplier = limiter.decorateCompletionStage(scheduler, origCompletionStageSupplier); Finally, let\u0026rsquo;s call the decorated asynchronous operation:\ndecoratedCompletionStageSupplier.get().whenComplete((result, ex) -\u0026gt; { if (ex != null) { System.out.println(ex.getMessage()); } if (result != null) { System.out.println(result); } }); Here\u0026rsquo;s sample output for a successful flight search that took less than the 2s timeoutDuration we specified:\nSearching for flights; current time = 19:25:09 783; current thread = ForkJoinPool.commonPool-worker-3 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;08/30/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, Flight{flightNumber=\u0026#39;XY 746\u0026#39;, flightDate=\u0026#39;08/30/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}] on thread ForkJoinPool.commonPool-worker-3 And this is sample output for a flight search that timed out:\nException java.util.concurrent.TimeoutException: TimeLimiter \u0026#39;flightSearch\u0026#39; recorded a timeout exception on thread pool-1-thread-1 at 19:38:16 963 Searching for flights; current time = 19:38:18 448; current thread = ForkJoinPool.commonPool-worker-3 Flight search successful at 19:38:18 461 The timestamps 
and thread names above show that the calling thread got a TimeoutException even as the asynchronous operation completed later on the other thread.\nWe would use decorateCompletionStage() if we wanted to create a decorator and re-use it at a different place in the codebase. If we want to create it and immediately execute the Supplier\u0026lt;CompletionStage\u0026gt;, we can use executeCompletionStage() instance method instead:\nCompletionStage\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; decoratedCompletionStage = limiter.executeCompletionStage(scheduler, origCompletionStageSupplier); TimeLimiter Events TimeLimiter has an EventPublisher which generates events of the types TimeLimiterOnSuccessEvent, TimeLimiterOnErrorEvent, and TimeLimiterOnTimeoutEvent. We can listen for these events and log them, for example:\nTimeLimiter limiter = registry.timeLimiter(\u0026#34;flightSearch\u0026#34;); limiter.getEventPublisher().onSuccess(e -\u0026gt; System.out.println(e.toString())); limiter.getEventPublisher().onError(e -\u0026gt; System.out.println(e.toString())); limiter.getEventPublisher().onTimeout(e -\u0026gt; System.out.println(e.toString())); The sample output shows what\u0026rsquo;s logged:\n2020-08-07T11:31:48.181944: TimeLimiter \u0026#39;flightSearch\u0026#39; recorded a successful call. ... other lines omitted ... 2020-08-07T11:31:48.582263: TimeLimiter \u0026#39;flightSearch\u0026#39; recorded a timeout exception. TimeLimiter Metrics TimeLimiter tracks the number of successful, failed, and timed-out calls.\nFirst, we create TimeLimiterConfig, TimeLimiterRegistry, and TimeLimiter as usual. 
Then, we create a MeterRegistry and bind the TimeLimiterRegistry to it:\nMeterRegistry meterRegistry = new SimpleMeterRegistry(); TaggedTimeLimiterMetrics.ofTimeLimiterRegistry(registry) .bindTo(meterRegistry); After running the time-limited operation a few times, we display the captured metrics:\nConsumer\u0026lt;Meter\u0026gt; meterConsumer = meter -\u0026gt; { String desc = meter.getId().getDescription(); String metricName = meter.getId().getName(); String metricKind = meter.getId().getTag(\u0026#34;kind\u0026#34;); Double metricValue = StreamSupport.stream(meter.measure().spliterator(), false) .filter(m -\u0026gt; m.getStatistic().name().equals(\u0026#34;COUNT\u0026#34;)) .findFirst() .map(Measurement::getValue) .orElse(0.0); System.out.println(desc + \u0026#34; - \u0026#34; + metricName + \u0026#34;(\u0026#34; + metricKind + \u0026#34;)\u0026#34; + \u0026#34;: \u0026#34; + metricValue); }; meterRegistry.forEachMeter(meterConsumer); Here\u0026rsquo;s some sample output:\nThe number of timed out calls - resilience4j.timelimiter.calls(timeout): 6.0 The number of successful calls - resilience4j.timelimiter.calls(successful): 4.0 The number of failed calls - resilience4j.timelimiter.calls(failed): 0.0 In a real application, we would export the data to a monitoring system periodically and analyze it on a dashboard.\nGotchas and Good Practices When Implementing Time Limiting Usually, we deal with two kinds of operations - queries (or reads) and commands (or writes). It is safe to time-limit queries because we know that they don\u0026rsquo;t change the state of the system. The searchFlights() operation we saw was an example of a query operation.\nCommands usually change the state of the system. A bookFlights() operation would be an example of a command. When time-limiting a command we have to keep in mind that the command is most likely still running when we timeout. 
A TimeoutException on a bookFlights() call for example doesn\u0026rsquo;t necessarily mean that the command failed.\nWe need to manage the user experience in such cases - perhaps on timeout, we can notify the user that the operation is taking longer than we expected. We can then query the upstream to check the status of the operation and notify the user later.\nConclusion In this article, we learned how we can use Resilience4j\u0026rsquo;s TimeLimiter module to set a time limit on asynchronous, non-blocking operations. We learned when to use it and how to configure it with some practical examples.\nYou can play around with a complete application illustrating these ideas using the code on GitHub.\n","date":"August 18, 2020","image":"https://reflectoring.io/images/stock/0079-stopwatch-1200x628-branded_hud3c835126b8b54498dc5975d82508778_152822_650x0_resize_q90_box.jpg","permalink":"/time-limiting-with-resilience4j/","title":"Implementing Timeouts with Resilience4j"},{"categories":["Java"],"contents":"What are you doing when you\u0026rsquo;ve made a change to a Spring Boot app and want to test it?\nYou probably restart it and go get a coffee or swipe through your Twitter feed until it\u0026rsquo;s up and running again.\nThen, you log back into the app, navigate to where you were before, and check if your changes work.\nSound familiar? That\u0026rsquo;s pretty much how I developed Spring Boot apps for a long time. 
Until I got fed up with it and gave Spring Boot Dev Tools a try.\nIt took me some time to set it up to my satisfaction (and then some more time to build a Gradle plugin that makes the setup easier), but it was worth it.\nThis article explains how Spring Boot Dev Tools works and how to configure it to your Spring Boot application consisting of a single or multiple Gradle modules (it will probably also work with Maven, with some changes, but this article will only show the Gradle configuration).\n Example Code This article is accompanied by a working code example on GitHub. The Perfect Dev Loop Before we start, let\u0026rsquo;s describe what we want to achieve for our developer experience with Spring Boot.\nWe want that any changes we do to files are visible in the running Spring Boot app a couple of seconds later.\nThese files include:\n Java files static assets like Javascript files or CSS HTML templates resources files like properties or other configuration files.  Files that need to be compiled (like Java files), will require a restart of the Spring application context.\nFor files that don\u0026rsquo;t need to be compiled (like HTML templates), we want the turnaround time to be even faster, as they don\u0026rsquo;t require a restart of the application context.\nSo, the dev loop we\u0026rsquo;re aiming for looks like this:\n we start the Spring Boot app via ./gradlew bootrun or ./mvnw spring-boot:run we change a file in our IDE and save it the IDE runs a background task that updates the classpath of the running application our browser window automatically refreshes and shows the changes  How Does Spring Boot Dev Tools Work? 
You might say it\u0026rsquo;s not important to know the details of how Spring Boot Dev Tools works, but since a lot of things can break when auto-reloading files, I think it\u0026rsquo;s good to know how Spring Boot Dev Tools works under the covers.\nHaving a solid understanding will help in finding and fixing inevitable issues when optimizing the dev loop of your project.\nSpring Boot Dev Tools hooks into the classloader of Spring Boot to provide a way to restart the application context on-demand or to reload changed static files without a restart.\nTo do this, Spring Boot Dev Tools divides the application\u0026rsquo;s classpath into two classloaders:\n the base classloader contains rarely changing resources like the Spring Boot JARs or 3rd party libraries the restart classloader contains the files of our application, which are expected to change in our dev loop.  The restart functionality of Spring Boot Dev Tools listens to changes to the files in our application and then throws away and restarts the restart classloader. This is faster than a full restart because only the classes of our application have to be reloaded.\nInstalling a Live Reload Plugin Before configuring Spring Boot Dev Tools, make sure to have a LiveReload plugin installed for your browser. Spring Boot Dev Tools ships with a LiveReload server that will trigger such a plugin and cause the current page to be reloaded automatically.\nThe Chrome plugin shows an icon with two arrows and a dot in the middle. Click on it to activate LiveReload for the currently active browser tab and the dot in the middle will turn black.\nSetting up Dev Tools for a Single-Module App Let\u0026rsquo;s first discuss setting up Spring Boot Dev Tools for the most common case: we have a single Gradle (or Maven) module that contains all the code we\u0026rsquo;re working on. 
We may pull in some 1st party or 3rd party JARs from other projects, but we\u0026rsquo;re not changing their code, so our dev loop only needs to support changes to the code within the Spring Boot module.\nIf you want to play around with a working example, have a look at the app module of my example app on GitHub.\nBasic setup To activate the basic features of Spring Boot Dev Tools, we only need to add it to our dependencies:\nplugins { id \u0026#39;org.springframework.boot\u0026#39; version \u0026#39;2.3.2.RELEASE\u0026#39; } dependencies { developmentOnly(\u0026#34;org.springframework.boot:spring-boot-devtools\u0026#34;) // other dependencies } The Spring Boot Gradle plugin automatically adds the developmentOnly configuration. Any dependency in this configuration will not be included in the production build. In older versions of the Spring Boot plugin, we might have to create the developmentOnly configuration ourselves.\nRestarting on Changes to Java Files With the dev tools declared as a dependency, all we need to do is to start the application with ./gradlew bootrun, change a Java file, and hit \u0026ldquo;compile\u0026rdquo; in our IDE. The changed class will be compiled into the folder /build/classes, which is on the classpath for the running Spring Boot app.\nSpring Boot Dev Tools will notice that a file has changed and trigger a restart of the application context. 
Once that is done, the embedded LiveReload server will call out to the browser plugin which will refresh the page that\u0026rsquo;s currently open in our browser.\nPretty neat.\nBut changing a static file like an HTML template or a Javascript file will also trigger a restart, even though this isn\u0026rsquo;t necessary!\nReloading on Changes to Static Files In addition to restarting, Spring Boot Dev Tools supports reloading without restarting the application context.\nIt will reload any static files that are excluded from a restart in our application.yml:\nspring: devtools: restart: exclude: static/**,templates/**,custom/** Any changes to a file in src/main/resources/static, src/main/resources/templates, and src/main/resources/custom will now trigger a reload instead of a restart.\nTo reload on changing a static file, we need a way to copy the changed files into the classpath of the running app. With Gradle, this is as easy as adding a custom task to build.gradle:\ntask reload(type: Copy) { from \u0026#39;src/main/resources\u0026#39; into \u0026#39;build/resources/main\u0026#39; include \u0026#39;static/**\u0026#39; include \u0026#39;templates/**\u0026#39; include \u0026#39;custom/**\u0026#39; } When we run ./gradlew reload now, all files in src/main/resources/static, src/main/resources/templates, and src/main/resources/custom will be copied into the classpath of the running Spring Boot app. This won\u0026rsquo;t trigger a restart, but the changes will still be visible in the running app almost instantly.\nIf our IDE supports save actions or other shortcuts, we can link this task to a shortcut to quickly update the running app with our changes to static files.\nSetting up Dev Tools for a Multi-Module App The above works quite well already for a single module app, i.e. 
when we\u0026rsquo;re interested in code changes within the Gradle or Maven module that contains our Spring Boot app.\nProperly modularized applications usually consist of multiple build modules.\nIn addition to the main module that contains the Spring Boot application, we may have specialized modules that contribute the UI, a REST API, or a business component from a certain bounded context.\nEach of the submodules is declared as a dependency in the main module and thus will contribute a JAR file to the final Spring Boot JAR (or WAR) file.\nBut Spring Boot Dev Tools only listens for changes in the build folder of the main module and not for changes in a contributing JAR file.\nThat means we have to go the extra mile to trigger a restart or a reload on changes in the contributing modules.\nThe example app on GitHub contains a module named module if you want to have a closer look.\nRestarting on Changes in Java Files of the Module Like with changes to Java files in the main module, we want changes in a Java file of the contributing module to trigger a restart of the application context.\nWe can achieve this with two more custom Gradle tasks in the build.gradle of our main module (or their equivalent in Maven):\ntask restart { dependsOn(classes) dependsOn(\u0026#39;restartModule\u0026#39;) } task restartModule(type: Copy){ from \u0026#39;../module/build/classes/\u0026#39; into \u0026#39;build/classes\u0026#39; dependsOn(\u0026#39;:module:classes\u0026#39;) } In the restart task, we make sure that the classes task of the main module will be called to update the files in the build folder. 
Also, we trigger the restartModule task, which in turn triggers the same task in the module and copies the resulting files into the build folder of the main module.\nCalling ./gradlew restart will now compile all changed classes and resources and update the running app\u0026rsquo;s classpath, triggering a restart.\nThis will work for changes in any file in the main module or the contributing submodule.\nBut again, this will always trigger a restart. For lightweight changes on static resources, we don\u0026rsquo;t want to trigger a restart.\nReloading on Changes in Static Files of the Module So, we create another task, called reload, that doesn\u0026rsquo;t trigger a restart:\ntask reload(type: Copy) { from \u0026#39;src/main/resources\u0026#39; into \u0026#39;build/resources/main\u0026#39; include \u0026#39;static/**\u0026#39; include \u0026#39;templates/**\u0026#39; include \u0026#39;custom/**\u0026#39; dependsOn(\u0026#39;reloadModule\u0026#39;) } task reloadModule(type: Copy){ from \u0026#39;../module/src/main/resources\u0026#39; into \u0026#39;build/resources/main\u0026#39; include \u0026#39;static/**\u0026#39; include \u0026#39;templates/**\u0026#39; include \u0026#39;custom/**\u0026#39; } The task is the same as in the single module example above, with the addition of calling the reloadModule task, which will copy the module\u0026rsquo;s resources into the build folder of the main module to update the running app\u0026rsquo;s classpath.\nNow, as with the single module example, we can call ./gradlew reload to trigger a reload of static resources that does not trigger a restart of the application context.\nAvoiding Classloading Issues If you run into classloading issues when starting a multi-module app with Dev Tools enabled, the cause may be that a contributing module\u0026rsquo;s JAR file was put into the base classloader and not into the restart classloader.\nChanging dependencies between classes across the two classloaders will cause problems.\nTo fix these 
issues, we need to tell Spring Boot Dev Tools to include all the JARs of our contributing modules in the restart class loader. In META-INF/spring-devtools.properties, we need to mark each JAR file that should be part of the restart class loader:\nrestart.include.modules=/devtools-demo.*\\.jar And What if I Have Many Modules? The above works nicely if we have a single module that contributes a JAR file to the main Spring Boot application. But what if we have many modules like that?\nWe can just create a restartModule and a reloadModule task for each of those modules and add them as a dependency to the main tasks restart and reload and it should work fine.\nHowever, note that the more modules are involved during a restart or a reload, the longer it will take to run the Gradle tasks!\nAt some point, we\u0026rsquo;ll have lost most of the speed advantage over just restarting the Spring Boot app manually.\nSo, choose wisely for which modules you want to support reloading and restarting. Most likely, you\u0026rsquo;re not working on all modules at the same time anyways, so you might want to change the configuration to restart and reload only the modules you\u0026rsquo;re currently working on.\nMy Gradle plugin makes configuring multiple modules easy, by the way :).\nDon\u0026rsquo;t Lose Your Session When Spring Boot Dev Tools restarts the application context, any server-side user session will be lost.\nIf we were logged in before the restart, we\u0026rsquo;ll see the login screen again after the restart. We have to log back in and then navigate to the page we\u0026rsquo;re currently working on. This costs a lot of time.\nTo fix this, I suggest storing the session in the database.\nFor this, we need to add this dependency to our build.gradle:\ndependencies { implementation \u0026#39;org.springframework.session:spring-session-jdbc\u0026#39; ... } Then, we need to provide the database tables for Spring Session JDBC to use. 
We can pick one of the schema files, add it to our Flyway or Liquibase scripts, and we\u0026rsquo;re done.\nThe session will now be stored in the database and will survive a restart of the Spring Boot application context.\nNice bonus: the session will also survive a failover from one application instance to another, so we don\u0026rsquo;t have to configure sticky sessions in a load balancer if we\u0026rsquo;re running more than one instance.\nBe aware, though, that everything stored in the session now has to implement the Serializable interface and we have to be a bit more careful with changing the classes that we store in the session to not cause problems to the users when we\u0026rsquo;re updating our application.\nUsing the Spring Boot Dev Tools Gradle Plugin If you don\u0026rsquo;t want to build custom Gradle tasks, have a look at the Spring Boot Dev Tools Gradle Plugin, which I have built to cover most of the use cases described in this article with an easier configuration. Give it a try and let me know what\u0026rsquo;s missing!\nConclusion Updating the classpath of a running app is often considered to be black magic. This tutorial gave some insights into this \u0026ldquo;magic\u0026rdquo; and outlined a plain non-magic way to optimize the turnaround time when developing a Spring Boot application.\nSpring Boot Dev Tools is the tool that makes it possible and my Gradle plugin makes it even easier to configure your project for a quick dev loop.\n","date":"August 13, 2020","image":"https://reflectoring.io/images/stock/0078-hourglass-1200x628-branded_hu8b9f0ddfa8764f98c7d0bb320c1c47dc_128423_650x0_resize_q90_box.jpg","permalink":"/spring-boot-dev-tools/","title":"Optimize Your Dev Loop with Spring Boot Dev Tools"},{"categories":["Spring Boot"],"contents":"Providing an Inversion-of-Control Container is one of the core provisions of the Spring Framework. Spring orchestrates the beans in its application context and manages their lifecycle. 
In this tutorial, we\u0026rsquo;re looking at the lifecycle of those beans and how we can hook into it.\n Example Code This article is accompanied by a working code example on GitHub. What Is a Spring Bean? Let\u0026rsquo;s start with the basics. Every object that is under the control of Spring\u0026rsquo;s ApplicationContext in terms of creation, orchestration, and destruction is called a Spring Bean.\nThe most common way to define a Spring bean is using the @Component annotation:\n@Component class MySpringBean { ... } If Spring\u0026rsquo;s component scanning is enabled, an object of MySpringBean will be added to the application context.\nAnother way is using Spring\u0026rsquo;s Java config:\n@Configuration class MySpringConfiguration { @Bean public MySpringBean mySpringBean() { return new MySpringBean(); } } The Spring Bean Lifecycle When we look into the lifecycle of Spring beans, we can see numerous phases starting from object instantiation up to destruction.\nTo keep it simple, we group them into creation and destruction phases: Let\u0026rsquo;s explain these phases in a little bit more detail.\nBean Creation Phases  Instantiation: This is where everything starts for a bean. Spring instantiates bean objects just like we would manually create a Java object instance. Populating Properties: After instantiating objects, Spring scans the beans that implement Aware interfaces and starts setting relevant properties. Pre-Initialization: Spring\u0026rsquo;s BeanPostProcessors get into action in this phase. The postProcessBeforeInitialization() methods do their job. Also, @PostConstruct-annotated methods run right after them. AfterPropertiesSet: Spring executes the afterPropertiesSet() methods of the beans that implement InitializingBean. Custom Initialization: Spring triggers the initialization methods that we defined in the initMethod attribute of our @Bean annotations. 
Post-Initialization: Spring\u0026rsquo;s BeanPostProcessors are in action for the second time. This phase triggers the postProcessAfterInitialization() methods.  Bean Destruction Phases  Pre-Destroy: Spring triggers @PreDestroy annotated methods in this phase. Destroy: Spring executes the destroy() methods of DisposableBean implementations. Custom Destruction: We can define custom destruction hooks with the destroyMethod attribute in the @Bean annotation and Spring runs them in the last phase.  Hooking Into the Bean Lifecycle There are numerous ways to hook into the phases of the bean lifecycle in a Spring application.\nLet\u0026rsquo;s see some examples for each of them.\nUsing Spring\u0026rsquo;s Interfaces We can implement Spring\u0026rsquo;s InitializingBean interface to run custom operations in the afterPropertiesSet() phase:\n@Component class MySpringBean implements InitializingBean { @Override public void afterPropertiesSet() { //...  } } Similarly, we can implement DisposableBean to have Spring call the destroy() method in the destroy phase:\n@Component class MySpringBean implements DisposableBean { @Override public void destroy() { //...  } } Using JSR-250 Annotations Spring supports the @PostConstruct and @PreDestroy annotations of the JSR-250 specification.\nTherefore, we can use them to hook into the pre-initialization and destroy phases:\n@Component class MySpringBean { @PostConstruct public void postConstruct() { //...  } @PreDestroy public void preDestroy() { //...  
} } Using Attributes of the @Bean Annotation Additionally, when we define our Spring beans we can set the initMethod and destroyMethod attributes of the @Bean annotation in Java configuration:\n@Configuration class MySpringConfiguration { @Bean(initMethod = \u0026#34;onInitialize\u0026#34;, destroyMethod = \u0026#34;onDestroy\u0026#34;) public MySpringBean mySpringBean() { return new MySpringBean(); } } We should note that if we have a public method named close() or shutdown() in our bean, then it is automatically registered as a destruction callback by default:\n@Component class MySpringBean { public void close() { //...  } } However, if we do not want this behavior, we can disable it by setting destroyMethod=\u0026quot;\u0026quot;:\n@Configuration class MySpringConfiguration { @Bean(destroyMethod = \u0026#34;\u0026#34;) public MySpringBean mySpringBean() { return new MySpringBean(); } } XML Configuration For legacy applications, we might still have some beans left in XML configuration. Luckily, we can still configure these attributes in our XML bean definitions.  Using BeanPostProcessor Alternatively, we can make use of the BeanPostProcessor interface to run custom operations before or after a Spring bean initializes and even return a modified bean:\nclass MyBeanPostProcessor implements BeanPostProcessor { @Override public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException { //...  return bean; } @Override public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException { //...  return bean; } } BeanPostProcessor Is Not Bean Specific We should pay attention that Spring's BeanPostProcessors are executed for each bean defined in the Spring context.  
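The ability of postProcessAfterInitialization() to return a different object than the one passed in is what allows Spring to apply proxies around beans. The plain-Java sketch below (no Spring classes involved; the GreetingService interface and all names are made up for illustration) mimics what such a post-processor could return: the original bean wrapped in a JDK dynamic proxy that records method calls.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Hypothetical interface standing in for a Spring bean's interface.
interface GreetingService {
    String greet(String name);
}

public class ProxyingPostProcessorSketch {

    // What a postProcessAfterInitialization() implementation might return:
    // the original bean wrapped in a dynamic proxy that records each call.
    static GreetingService wrapWithCallLog(GreetingService target, List<String> callLog) {
        InvocationHandler handler = (proxy, method, args) -> {
            callLog.add(method.getName()); // record the invoked method name
            return method.invoke(target, args); // delegate to the real bean
        };
        return (GreetingService) Proxy.newProxyInstance(
                GreetingService.class.getClassLoader(),
                new Class<?>[]{GreetingService.class},
                handler);
    }

    public static void main(String[] args) {
        List<String> callLog = new ArrayList<>();
        GreetingService bean = name -> "Hello, " + name;
        GreetingService proxied = wrapWithCallLog(bean, callLog);

        System.out.println(proxied.greet("Spring")); // Hello, Spring
        System.out.println(callLog);                 // [greet]
    }
}
```

Callers keep talking to the GreetingService type and never notice the swap, which is exactly why returning a modified bean from a post-processor is transparent to the rest of the application.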
Using Aware Interfaces Another way of getting into the lifecycle is by using the Aware interfaces:\n@Component class MySpringBean implements BeanNameAware, ApplicationContextAware { @Override public void setBeanName(String name) { //...  } @Override public void setApplicationContext(ApplicationContext applicationContext) throws BeansException { //...  } } There are additional Aware interfaces which we can use to inject certain aspects of the Spring context into our beans.\nWhy Would I Need to Hook Into the Bean Lifecycle? When we need to extend our software with new requirements, it is critical to find the best practices to keep our codebase maintainable in the long run.\nIn Spring Framework, hooking into the bean lifecycle is a good way to extend our application in most cases.\nAcquiring Bean Properties One of the use cases is acquiring the bean properties (like bean name) at runtime. For example, when we do some logging:\n@Component class NamedSpringBean implements BeanNameAware { Logger logger = LoggerFactory.getLogger(NamedSpringBean.class); public void setBeanName(String name) { logger.info(name + \u0026#34; created.\u0026#34;); } } Dynamically Changing Spring Bean Instances In some cases, we need to define Spring beans programmatically. 
This can be a practical solution when we need to re-create and change our bean instances at runtime.\nLet\u0026rsquo;s create an IpToLocationService service which is capable of dynamically updating IpDatabaseRepository to the latest version on-demand:\n@Service class IpToLocationService implements BeanFactoryAware { DefaultListableBeanFactory listableBeanFactory; IpDatabaseRepository ipDatabaseRepository; @Override public void setBeanFactory(BeanFactory beanFactory) throws BeansException { listableBeanFactory = (DefaultListableBeanFactory) beanFactory; updateIpDatabase(); } public void updateIpDatabase(){ String updateUrl = \u0026#34;https://download.acme.com/ip-database-latest.mdb\u0026#34;; AbstractBeanDefinition definition = BeanDefinitionBuilder .genericBeanDefinition(IpDatabaseRepository.class) .addPropertyValue(\u0026#34;file\u0026#34;, updateUrl) .getBeanDefinition(); listableBeanFactory .registerBeanDefinition(\u0026#34;ipDatabaseRepository\u0026#34;, definition); ipDatabaseRepository = listableBeanFactory .getBean(IpDatabaseRepository.class); } } We access the BeanFactory instance with the help of BeanFactoryAware interface. Thus, we dynamically create our IpDatabaseRepository bean with the latest database file and update our bean definition by registering it to the Spring context.\nAlso, we call our updateIpDatabase() method right after we acquire the BeanFactory instance in the setBeanFactory() method. Therefore, we can initially create the first instance of the IpDatabaseRepository bean while the Spring context boots up.\nAccessing Beans From the Outside of the Spring Context Another scenario is accessing the ApplicationContext or BeanFactory instance from outside of the Spring context.\nFor example, we may want to inject the BeanFactory into a non-Spring class to be able to access Spring beans or configurations inside that class. 
The integration between Spring and the Quartz library is a good example to show this use case:\nclass AutowireCapableJobFactory extends SpringBeanJobFactory implements ApplicationContextAware { private AutowireCapableBeanFactory beanFactory; @Override public void setApplicationContext(final ApplicationContext context) { beanFactory = context.getAutowireCapableBeanFactory(); } @Override protected Object createJobInstance(final TriggerFiredBundle bundle) throws Exception { final Object job = super.createJobInstance(bundle); beanFactory.autowireBean(job); return job; } } In this example, we\u0026rsquo;re using the ApplicationContextAware interface to get access to the bean factory and use the bean factory to autowire the dependencies in a Job bean that is initially not managed by Spring.\nAlso, a common Spring - Jersey integration is another clear example of this:\n@Configuration class JerseyConfig extends ResourceConfig { @Autowired private ApplicationContext applicationContext; @PostConstruct public void registerResources() { applicationContext.getBeansWithAnnotation(Path.class).values() .forEach(this::register); } } By marking Jersey\u0026rsquo;s ResourceConfig as a Spring @Configuration, we inject the ApplicationContext and lookup all the beans which are annotated by Jersey\u0026rsquo;s @Path, to easily register them on application startup.\nThe Execution Order Let\u0026rsquo;s write a Spring bean to see the execution order of each phase of the lifecycle:\nclass MySpringBean implements BeanNameAware, ApplicationContextAware, InitializingBean, DisposableBean { private String message; public void sendMessage(String message) { this.message = message; } public String getMessage() { return this.message; } @Override public void setBeanName(String name) { System.out.println(\u0026#34;--- setBeanName executed ---\u0026#34;); } @Override public void setApplicationContext(ApplicationContext applicationContext) throws BeansException { System.out.println(\u0026#34;--- 
setApplicationContext executed ---\u0026#34;); } @PostConstruct public void postConstruct() { System.out.println(\u0026#34;--- @PostConstruct executed ---\u0026#34;); } @Override public void afterPropertiesSet() { System.out.println(\u0026#34;--- afterPropertiesSet executed ---\u0026#34;); } public void initMethod() { System.out.println(\u0026#34;--- init-method executed ---\u0026#34;); } @PreDestroy public void preDestroy() { System.out.println(\u0026#34;--- @PreDestroy executed ---\u0026#34;); } @Override public void destroy() throws Exception { System.out.println(\u0026#34;--- destroy executed ---\u0026#34;); } public void destroyMethod() { System.out.println(\u0026#34;--- destroy-method executed ---\u0026#34;); } } Additionally, we create a BeanPostProcessor to hook into the before and after initialization phases:\nclass MyBeanPostProcessor implements BeanPostProcessor { @Override public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException { if (bean instanceof MySpringBean) { System.out.println(\u0026#34;--- postProcessBeforeInitialization executed ---\u0026#34;); } return bean; } @Override public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException { if (bean instanceof MySpringBean) { System.out.println(\u0026#34;--- postProcessAfterInitialization executed ---\u0026#34;); } return bean; } } Next, we write a Spring configuration to define our beans:\n@Configuration class MySpringConfiguration { @Bean public MyBeanPostProcessor myBeanPostProcessor(){ return new MyBeanPostProcessor(); } @Bean(initMethod = \u0026#34;initMethod\u0026#34;, destroyMethod = \u0026#34;destroyMethod\u0026#34;) public MySpringBean mySpringBean(){ return new MySpringBean(); } } Finally, we write a @SpringBootTest to run our Spring context:\n@SpringBootTest class BeanLifecycleApplicationTests { @Autowired public MySpringBean mySpringBean; @Test public void testMySpringBeanLifecycle() { String message = 
\u0026#34;Hello World\u0026#34;; mySpringBean.sendMessage(message); assertThat(mySpringBean.getMessage()).isEqualTo(message); } } As a result, our test method logs the execution order between the lifecycle phases:\n--- setBeanName executed --- --- setApplicationContext executed --- --- postProcessBeforeInitialization executed --- --- @PostConstruct executed --- --- afterPropertiesSet executed --- --- init-method executed --- --- postProcessAfterInitialization executed --- ... --- @PreDestroy executed --- --- destroy executed --- --- destroy-method executed --- Conclusion In this tutorial, we learned what the bean lifecycle phases are, why, and how we hook into lifecycle phases in Spring.\nSpring has numerous phases in a bean lifecycle as well as many ways to receive callbacks. We can hook into these phases both via annotations on our beans or from a common class as we do in BeanPostProcessor.\nAlthough each method has its purpose, we should note that using Spring interfaces couples our code to the Spring Framework.\nOn the other hand, @PostConstruct and @PreDestroy annotations are a part of the Java API. Therefore, we consider them a better alternative to receiving lifecycle callbacks because they decouple our components even from Spring.\nAll the code examples and more are over on Github for you to play with.\n","date":"August 11, 2020","image":"https://reflectoring.io/images/stock/0017-coffee-beans-1200x628-branded_huece543939443a9c461a0d4760d3503b7_299333_650x0_resize_q90_box.jpg","permalink":"/spring-bean-lifecycle/","title":"Hooking Into the Spring Bean Lifecycle"},{"categories":["Java"],"contents":"In the previous article in this series, we learned about Resilience4j and how to use its Retry module. 
Let\u0026rsquo;s now learn about the RateLimiter - what it is, when and how to use it, and what to watch out for when implementing rate limiting (or \u0026ldquo;throttling\u0026rdquo;, as it\u0026rsquo;s also called).\n Example Code This article is accompanied by a working code example on GitHub. What is Resilience4j? Please refer to the description in the previous article for a quick intro into how Resilience4j works in general.\nWhat is Rate Limiting? We can look at rate limiting from two perspectives - as a service provider and as a service consumer.\nServer-side Rate Limiting As a service provider, we implement rate limiting to protect our resources from overload and Denial of Service (DoS) attacks.\nTo meet our service level agreement (SLA) with all our consumers, we want to ensure that one consumer that is causing a traffic spike doesn\u0026rsquo;t impact the quality of our service to others.\nWe do this by setting a limit on how many requests a consumer is allowed to make in a given unit of time. We reject any requests above the limit with an appropriate response, like HTTP status 429 (Too Many Requests). This is called server-side rate limiting.\nThe rate limit is specified in terms of requests per second (rps), requests per minute (rpm), or similar. Some services have multiple rate limits for different durations (50 rpm and not more than 2500 rph, for example) and different times of day (100 rps during the day and 150 rps at night, for example). The limit may apply to a single user (identified by user id, IP address, API access key, etc.) or a tenant in a multi-tenant application.\nClient-side Rate Limiting As a consumer of a service, we want to ensure that we are not overloading the service provider. Also, we don\u0026rsquo;t want to incur unexpected costs - either monetarily or in terms of quality of service.\nThis could happen if the service we are consuming is elastic. 
Instead of throttling our requests, the service provider might charge us extra for the additional load. Some even ban misbehaving clients for short periods. Rate limiting implemented by a consumer to prevent such issues is called client-side rate limiting.\nWhen to Use RateLimiter? resilience4j-ratelimiter is intended for client-side rate limiting.\nServer-side rate limiting requires things like caching and coordination between multiple server instances, which is not supported by resilience4j. For server-side rate limiting, there are API gateways and API filters like Kong API Gateway and Repose API Filter. Resilience4j\u0026rsquo;s RateLimiter module is not intended to replace them.\nResilience4j RateLimiter Concepts A thread that wants to call a remote service first asks the RateLimiter for permission. If the RateLimiter permits it, the thread proceeds. Otherwise, the RateLimiter parks the thread or puts it in a waiting state.\nThe RateLimiter creates new permissions periodically. When a permission becomes available, the thread is notified and it can then continue.\nThe number of calls that are permitted during a period is called limitForPeriod. How often the RateLimiter refreshes the permissions is specified by limitRefreshPeriod. How long a thread can wait to acquire permission is specified by timeoutDuration. If no permission is available at the end of the wait time, the RateLimiter throws a RequestNotPermitted runtime exception.\nUsing the Resilience4j RateLimiter Module RateLimiterRegistry, RateLimiterConfig, and RateLimiter are the main abstractions in resilience4j-ratelimiter.\nRateLimiterRegistry is a factory for creating and managing RateLimiter objects.\nRateLimiterConfig encapsulates the limitForPeriod, limitRefreshPeriod and timeoutDuration configurations. 
Each RateLimiter object is associated with a RateLimiterConfig.\nRateLimiter provides helper methods to create decorators for the functional interfaces or lambda expressions containing the remote call.\nLet\u0026rsquo;s see how to use the various features available in the RateLimiter module. Assume that we are building a website for an airline to allow its customers to search for and book flights. Our service talks to a remote service encapsulated by the class FlightSearchService.\nBasic Example The first step is to create a RateLimiterConfig:\nRateLimiterConfig config = RateLimiterConfig.ofDefaults(); This creates a RateLimiterConfig with default values for limitForPeriod (50), limitRefreshPeriod(500ns), and timeoutDuration (5s).\nSuppose our contract with the airline\u0026rsquo;s service says that we can call their search API at 1 rps. Then we would create the RateLimiterConfig like this:\nRateLimiterConfig config = RateLimiterConfig.custom() .limitForPeriod(1) .limitRefreshPeriod(Duration.ofSeconds(1)) .timeoutDuration(Duration.ofSeconds(1)) .build(); If a thread is not able to acquire permission in the 1s timeoutDuration specified, it will error out.\nWe then create a RateLimiter and decorate the searchFlights() call:\nRateLimiterRegistry registry = RateLimiterRegistry.of(config); RateLimiter limiter = registry.rateLimiter(\u0026#34;flightSearchService\u0026#34;); // FlightSearchService and SearchRequest creation omitted Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; flightsSupplier = RateLimiter.decorateSupplier(limiter, () -\u0026gt; service.searchFlights(request)); Finally, we use the decorated Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; a few times:\nfor (int i=0; i\u0026lt;3; i++) { System.out.println(flightsSupplier.get()); } The timestamps in the sample output show one request being made every second:\nSearching for flights; current time = 15:29:39 847 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, ... }, ... 
] Searching for flights; current time = 15:29:40 786 ... [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, ... }, ... ] Searching for flights; current time = 15:29:41 791 ... [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, ... }, ... ] If we exceed the limit, we get a RequestNotPermitted exception:\nException in thread \u0026#34;main\u0026#34; io.github.resilience4j.ratelimiter.RequestNotPermitted: RateLimiter \u0026#39;flightSearchService\u0026#39; does not permit further calls at io.github.resilience4j.ratelimiter.RequestNotPermitted.createRequestNotPermitted(RequestNotPermitted.java:43) at io.github.resilience4j.ratelimiter.RateLimiter.waitForPermission(RateLimiter.java:580) ... other lines omitted ... Decorating Methods Throwing Checked Exceptions Suppose we\u0026rsquo;re calling FlightSearchService.searchFlightsThrowingException() which can throw a checked Exception. Then we cannot use RateLimiter.decorateSupplier(). We would use RateLimiter.decorateCheckedSupplier() instead:\nCheckedFunction0\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; flights = RateLimiter.decorateCheckedSupplier(limiter, () -\u0026gt; service.searchFlightsThrowingException(request)); try { System.out.println(flights.apply()); } catch (...) { // exception handling } RateLimiter.decorateCheckedSupplier() returns a CheckedFunction0 which represents a function with no arguments. Notice the call to apply() on the CheckedFunction0 object to invoke the remote operation.\nIf we don\u0026rsquo;t want to work with Suppliers, RateLimiter provides more helper decorator methods like decorateFunction(), decorateCheckedFunction(), decorateRunnable(), decorateCallable(), etc., to work with other language constructs. The decorateChecked* methods are used to decorate methods that throw checked exceptions.\nApplying Multiple Rate Limits Suppose the airline\u0026rsquo;s flight search had multiple rate limits: 2 rps and 40 rpm. 
We can apply multiple limits on the client-side by creating multiple RateLimiters:\nRateLimiterConfig rpsConfig = RateLimiterConfig.custom(). limitForPeriod(2). limitRefreshPeriod(Duration.ofSeconds(1)). timeoutDuration(Duration.ofMillis(2000)).build(); RateLimiterConfig rpmConfig = RateLimiterConfig.custom(). limitForPeriod(40). limitRefreshPeriod(Duration.ofMinutes(1)). timeoutDuration(Duration.ofMillis(2000)).build(); RateLimiterRegistry registry = RateLimiterRegistry.of(rpsConfig); RateLimiter rpsLimiter = registry.rateLimiter(\u0026#34;flightSearchService_rps\u0026#34;, rpsConfig); RateLimiter rpmLimiter = registry.rateLimiter(\u0026#34;flightSearchService_rpm\u0026#34;, rpmConfig); We then decorate the searchFlights() method using both the RateLimiters:\nSupplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; rpsLimitedSupplier = RateLimiter.decorateSupplier(rpsLimiter, () -\u0026gt; service.searchFlights(request)); Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; flightsSupplier = RateLimiter.decorateSupplier(rpmLimiter, rpsLimitedSupplier); The sample output shows 2 requests being made every second and being limited to 40 requests:\nSearching for flights; current time = 15:13:21 246 ... Searching for flights; current time = 15:13:21 249 ... Searching for flights; current time = 15:13:22 212 ... Searching for flights; current time = 15:13:40 215 ... 
Exception in thread \u0026#34;main\u0026#34; io.github.resilience4j.ratelimiter.RequestNotPermitted: RateLimiter \u0026#39;flightSearchService_rpm\u0026#39; does not permit further calls at io.github.resilience4j.ratelimiter.RequestNotPermitted.createRequestNotPermitted(RequestNotPermitted.java:43) at io.github.resilience4j.ratelimiter.RateLimiter.waitForPermission(RateLimiter.java:580) Changing Limits at Runtime If required, we can change the values for limitForPeriod and timeoutDuration at runtime:\nlimiter.changeLimitForPeriod(2); limiter.changeTimeoutDuration(Duration.ofSeconds(2)); This feature is useful if our rate limits vary based on time of day, for example - we could have a scheduled thread to change these values. The new values won\u0026rsquo;t affect the threads that are currently waiting for permissions.\nUsing RateLimiter and Retry Together Let\u0026rsquo;s say we want to retry if we get a RequestNotPermitted exception since it is a transient error. We would create RateLimiter and Retry objects as usual. We then decorate a rate-limited Supplier and wrap it with a Retry:\nSupplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; rateLimitedFlightsSupplier = RateLimiter.decorateSupplier(rateLimiter, () -\u0026gt; service.searchFlights(request)); Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; retryingFlightsSupplier = Retry.decorateSupplier(retry, rateLimitedFlightsSupplier); The sample output shows the request being retried for a RequestNotPermitted exception:\nSearching for flights; current time = 17:10:09 218 ... [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;07/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ...] 2020-07-27T17:10:09.484: Retry \u0026#39;rateLimitedFlightSearch\u0026#39;, waiting PT1S until attempt \u0026#39;1\u0026#39;. 
Last attempt failed with exception \u0026#39;io.github.resilience4j.ratelimiter.RequestNotPermitted: RateLimiter \u0026#39;flightSearchService\u0026#39; does not permit further calls\u0026#39;. Searching for flights; current time = 17:10:10 492 ... 2020-07-27T17:10:10.494: Retry \u0026#39;rateLimitedFlightSearch\u0026#39; recorded a successful retry attempt... [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;07/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ...] The order in which we created the decorators is important. It would not work if we wrapped the Retry with the RateLimiter.\nRateLimiter Events RateLimiter has an EventPublisher which generates events of the types RateLimiterOnSuccessEvent and RateLimiterOnFailureEvent when calling a remote operation to indicate if acquiring a permission was successful or not. We can listen for these events and log them, for example:\nRateLimiter limiter = registry.rateLimiter(\u0026#34;flightSearchService\u0026#34;); limiter.getEventPublisher().onSuccess(e -\u0026gt; System.out.println(e.toString())); limiter.getEventPublisher().onFailure(e -\u0026gt; System.out.println(e.toString())); The sample output shows what\u0026rsquo;s logged:\nRateLimiterEvent{type=SUCCESSFUL_ACQUIRE, rateLimiterName=\u0026#39;flightSearchService\u0026#39;, creationTime=2020-07-21T19:14:33.127+05:30} ... other lines omitted ... RateLimiterEvent{type=FAILED_ACQUIRE, rateLimiterName=\u0026#39;flightSearchService\u0026#39;, creationTime=2020-07-21T19:14:33.186+05:30} RateLimiter Metrics Suppose after implementing client-side throttling we find that the response times of our APIs have increased. This is possible - as we have seen, if permissions are not available when a thread invokes a remote operation, the RateLimiter puts the thread in a waiting state.\nIf our request handling threads are often waiting to get permission, it could mean that our limitForPeriod is too low. 
Perhaps we need to work with our service provider and get additional quota provisioned first.\nMonitoring RateLimiter metrics helps us identify such capacity issues and ensure that the values we\u0026rsquo;ve set on the RateLimiterConfig are working well.\nRateLimiter tracks two metrics: the number of permissions available (resilience4j.ratelimiter.available.permissions), and the number of threads waiting for permissions (resilience4j.ratelimiter.waiting_threads).\nFirst, we create RateLimiterConfig, RateLimiterRegistry, and RateLimiter as usual. Then, we create a MeterRegistry and bind the RateLimiterRegistry to it:\nMeterRegistry meterRegistry = new SimpleMeterRegistry(); TaggedRateLimiterMetrics.ofRateLimiterRegistry(registry) .bindTo(meterRegistry); After running the rate-limited operation a few times, we display the captured metrics:\nConsumer\u0026lt;Meter\u0026gt; meterConsumer = meter -\u0026gt; { String desc = meter.getId().getDescription(); String metricName = meter.getId().getName(); Double metricValue = StreamSupport.stream(meter.measure().spliterator(), false) .filter(m -\u0026gt; m.getStatistic().name().equals(\u0026#34;VALUE\u0026#34;)) .findFirst() .map(m -\u0026gt; m.getValue()) .orElse(0.0); System.out.println(desc + \u0026#34; - \u0026#34; + metricName + \u0026#34;: \u0026#34; + metricValue); }; meterRegistry.forEachMeter(meterConsumer); Here\u0026rsquo;s some sample output:\nThe number of available permissions - resilience4j.ratelimiter.available.permissions: -6.0 The number of waiting threads - resilience4j.ratelimiter.waiting_threads: 7.0 The negative value for resilience4j.ratelimiter.available.permissions shows the number of permissions that have been reserved for requesting threads. 
In a real application, we would export the data to a monitoring system periodically and analyze it on a dashboard.\nGotchas and Good Practices When Implementing Client-side Rate Limiting Make the Rate Limiter a Singleton All calls to a given remote service should go through the same RateLimiter instance. For a given remote service the RateLimiter must be a singleton.\nIf we don\u0026rsquo;t enforce this, some areas of our codebase may make a direct call to the remote service, bypassing the RateLimiter. To prevent this, the actual call to the remote service should be in a core, internal layer and other areas should use a rate-limited decorator exposed by the internal layer.\nHow can we ensure that a new developer understands this intent in the future? Check out Tom\u0026rsquo;s article which shows one way of solving such problems by organizing the package structure to make such intents clear. Additionally, it shows how to enforce this by codifying the intent in ArchUnit tests.\nConfigure the Rate Limiter for Multiple Server Instances Figuring out the right values for the configurations can be tricky. If we are running multiple instances of our service in a cluster, the value for limitForPeriod must account for this.\nFor example, if the upstream service has a rate limit of 100 rps and we have 4 instances of our service, then we would configure 25 rps as the limit on each instance.\nThis assumes, however, that the load on each of our instances will be roughly the same. If that\u0026rsquo;s not the case or if our service itself is elastic and the number of instances can vary, then Resilience4j\u0026rsquo;s RateLimiter may not be a good fit.\nIn that case, we would need a rate limiter that maintains its data in a distributed cache and not in-memory like Resilience4j RateLimiter. But that would impact the response times of our service. Another option is to implement some kind of adaptive rate limiting. 
While Resilience4j may support it in the future, it is not clear when it will be available.\nChoose the Right Timeout For the timeoutDuration configuration value, we should keep the expected response times of our APIs in mind.\nIf we set the timeoutDuration too high, the response times and throughput will suffer. If it is too low, our error rate may increase.\nSince there could be some trial and error involved here, a good practice is to maintain the values we use in RateLimiterConfig like timeoutDuration, limitForPeriod, and limitRefreshPeriod as a configuration outside our service. Then we can change them without changing code.\nTune Client-side and Server-side Rate Limiters Implementing client-side rate limiting does not guarantee that we will never get rate limited by our upstream service.\nSuppose we had a limit of 2 rps from the upstream service and we had configured limitForPeriod as 2 and limitRefreshPeriod as 1s. If we make two requests in the last few milliseconds of the second, with no other calls until then, the RateLimiter would permit them. If we make another two calls in the first few milliseconds of the next second, the RateLimiter would permit them too since two new permissions would be available. But the upstream service could reject these two requests since servers often implement sliding window-based rate limiting.\nTo guarantee that we will never get a rate-limit-exceeded error from an upstream service, we would need to configure the fixed window in the client to be shorter than the sliding window in the service. So if we had configured limitForPeriod as 1 and limitRefreshPeriod as 500ms in the previous example, we would not get a rate limit exceeded error. But then, all three requests after the first one would wait, increasing the response times and reducing the throughput. 
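The window-boundary problem described above can be demonstrated with a toy fixed-window limiter in plain Java. This is an illustration only, not how Resilience4j implements its RateLimiter, and all names are made up; it shows how 2x the configured limit can slip through within a few milliseconds around a window boundary.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy fixed-window rate limiter for illustration purposes.
class FixedWindowLimiter {
    private final int limitForPeriod;
    private final long refreshPeriodMillis;
    private long windowStart;
    private final AtomicInteger used = new AtomicInteger();

    FixedWindowLimiter(int limitForPeriod, long refreshPeriodMillis, long now) {
        this.limitForPeriod = limitForPeriod;
        this.refreshPeriodMillis = refreshPeriodMillis;
        this.windowStart = now;
    }

    boolean tryAcquire(long now) {
        if (now - windowStart >= refreshPeriodMillis) { // new window: refresh permissions
            windowStart = now;
            used.set(0);
        }
        return used.incrementAndGet() <= limitForPeriod;
    }
}

public class WindowBoundaryDemo {
    public static void main(String[] args) {
        // limitForPeriod=2, limitRefreshPeriod=1000ms, as in the example above
        FixedWindowLimiter limiter = new FixedWindowLimiter(2, 1000, 0);

        // Two calls at t=998ms and t=999ms (end of the first window)...
        System.out.println(limiter.tryAcquire(998));  // true
        System.out.println(limiter.tryAcquire(999));  // true
        // ...and two more at t=1001ms and t=1002ms (start of the next window).
        System.out.println(limiter.tryAcquire(1001)); // true
        System.out.println(limiter.tryAcquire(1002)); // true
        // 4 permitted requests within ~4ms: a server measuring any sliding
        // 1-second window around the boundary sees 4 rps, double our limit.
    }
}
```

This is why halving the client window (limitForPeriod=1, limitRefreshPeriod=500ms) keeps us under a server-side sliding 1s window of 2 rps, at the cost of extra waiting.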
Check out this video which talks about the problems with static rate limiting and the advantages of adaptive control.\nConclusion In this article, we learned how we can use Resilience4j\u0026rsquo;s RateLimiter module to implement client-side rate limiting. We looked at the different ways to configure it with practical examples. We learned some good practices and things to keep in mind when implementing rate limiting.\nYou can play around with a complete application illustrating these ideas using the code on GitHub.\n","date":"August 5, 2020","image":"https://reflectoring.io/images/stock/0108-speed-limit-1200x628-branded_hu0f4048910dd781cfc13d6156f43e1822_180547_650x0_resize_q90_box.jpg","permalink":"/rate-limiting-with-resilience4j/","title":"Implementing Rate Limiting with Resilience4j"},{"categories":["Spring Boot"],"contents":"The request/response pattern is well-known and widely used, mainly in synchronous communication. This article shows how to implement this pattern asynchronously with a message broker using the AMQP protocol and Spring Boot.\n Example Code This article is accompanied by a working code example on GitHub. What is the Request/Response Pattern? The request/response interaction between two parties is pretty easy. The client sends a request to the server, the server starts the work and sends the response to the client once the work is done.\nThe best-known example of this interaction is communication via the HTTP protocol, where the request and response are sent through the same channel / the same connection.\nNormally, the client sends the request directly to the server and waits for the response synchronously. In this case, the client has to know the API of the server.\nWhy Do We Need an Async Request/Response Pattern? A software enterprise system consists of many components. These components communicate with each other. Sometimes it is enough just to send a message to another component and not wait for an answer. 
But in many cases, a component may need to get the response to a request.\nWhen we use direct synchronous communication, the client has to know the API of the server. When one component has a large number of different API calls to another component, we\u0026rsquo;re coupling them to each other tightly, and the whole picture can become hard to change.\nTo reduce the coupling a bit we can use a message broker as a central component for communication between the components, instead of a synchronous protocol.\nAsynchronous Communication Since we use messaging for requests and responses, the communication now works asynchronously.\nHere\u0026rsquo;s how it works:\n The client sends the request to the request channel. The server consumes the request from the request channel. The server sends the response to the response channel. The client consumes the response from the response channel.  When the client sends a request, it waits for the response by listening to the response channel. If the client sends many requests, then it expects a response for every request. But how does the client know which response is for which request?\nTo solve this problem, the client should send a unique correlation identifier (correlation ID) along with each request. The server should obtain this identifier and add it to the response. Now the client can assign a response to its request.\nThe important things are:\n We have two channels. One for requests and one for responses. We use a correlation ID on both ends of the communication.  Another point we have to note is that the client has to have a state.\nThe client generates a unique correlation ID, for example, my unique id. Then the client sends the request to the channel and keeps the correlation ID in memory or in a database.\nAfter that, the client waits for the responses in the response channel. 
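This client-side bookkeeping can be sketched independently of any message broker. The class below is a hypothetical helper (not Spring AMQP API): the client registers a pending future under each correlation ID before sending, and the response-channel listener completes the matching future when a response with the same ID arrives.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

/** Broker-agnostic sketch of client-side correlation state. */
public class CorrelatingClient {

    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called when sending a request; returns the future the caller waits on.
    public CompletableFuture<String> register(String correlationId) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        return future;
    }

    // Called by the response-channel listener for every incoming message.
    public void onResponse(String correlationId, String payload) {
        CompletableFuture<String> future = pending.remove(correlationId);
        if (future != null) {
            future.complete(payload);
        }
        // else: response for an unknown or timed-out request; log and drop it
    }
}
```

In a real system the pending map (or a database, as described above) is the client's state, while the server remains stateless and only echoes the correlation ID back.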
Every response from the channel has a correlation ID, and the client has to compare this correlation ID with those in memory to find the respective request and proceed with processing the response in the context of that request.\nThe server, on the other hand, is still stateless. The server just reads the correlation ID from the request channel and sends it back to the response channel along with the response.\nRemote Procedure Call with AMQP Now let\u0026rsquo;s see how we can implement this asynchronous communication with Spring Boot as client and server, and RabbitMQ as a message broker.\nLet\u0026rsquo;s create two Spring Boot applications: a client application that sends the request to the server and waits for the response, and a server application that accepts the request, processes it, and sends the response back to the client.\nWe will use Spring AMQP for sending and receiving messages.\nClient First, we have to add the AMQP starter to the dependencies (Gradle notation):\nimplementation \u0026#39;org.springframework.boot:spring-boot-starter-amqp:2.3.2.RELEASE\u0026#39; Second, we create the configuration of the client application:\n@Configuration class ClientConfiguration { @Bean public DirectExchange directExchange() { return new DirectExchange(\u0026#34;reflectoring.cars\u0026#34;); } @Bean public MessageConverter jackson2MessageConverter() { return new Jackson2JsonMessageConverter(); } } The DirectExchange supports binding to different queues depending on the routing key. In this case, we create an exchange with the name reflectoring.cars. When sending a message to this exchange, the client has to provide a routing key. 
The message broker will forward the message to the queue that is bound to the exchange with the given routing key.\nYou can find more details on the AMQP messaging concepts in the article about events with RabbitMQ.\nWe declare Jackson2JsonMessageConverter as the default MessageConverter to send the messages to the message broker in JSON format.\nNow we are ready to send a request message:\n@Component class StatefulBlockingClient { private final RabbitTemplate template; private final DirectExchange directExchange; public static final String ROUTING_KEY = \u0026#34;old.car\u0026#34;; public void send() { CarDto carDto = CarDto.builder() // ...  .build(); RegistrationDto registrationDto = template.convertSendAndReceiveAsType( directExchange.getName(), ROUTING_KEY, carDto, new ParameterizedTypeReference\u0026lt;\u0026gt;() { }); } } Spring AMQP provides built-in support for the request/response pattern.\nIf we use the method convertSendAndReceiveAsType() of RabbitTemplate, Spring AMQP takes care of the request/response scenario. It creates a callback channel for the response, generates a correlation ID, configures the message broker, and receives the response from the server. The information about the callback queue and correlation ID will be sent to the server too. It is transparent to the caller.\nSince we configured the MessageConverter in the configuration above, it will be used by the template and the carDto will be sent as JSON to the channel.\nServer Now let\u0026rsquo;s create a server application to process the request and create the response. 
First, we create a configuration for the server:\n@Configuration class ServerConfiguration { @Bean public DirectExchange directExchange() { return new DirectExchange(\u0026#34;reflectoring.cars\u0026#34;); } @Bean public Queue queue() { return new Queue(\u0026#34;request\u0026#34;); } @Bean public Binding binding(DirectExchange directExchange, Queue queue) { return BindingBuilder.bind(queue) .to(directExchange) .with(\u0026#34;old.car\u0026#34;); } @Bean public MessageConverter jackson2MessageConverter() { return new Jackson2JsonMessageConverter(); } } We declare the same exchange as on the client side. Then we create a queue for the request and bind it to the exchange with the same routing key old.car that we used in the client.\nAll messages we send to the exchange with this routing key will be forwarded to the request queue. We have to note that we don\u0026rsquo;t configure the callback queue or response configuration at all. Spring AMQP will detect this from the message properties of the request and configure everything automatically.\nNow we have to implement the listener that listens to the request queue:\n@Component class Consumer { @RabbitListener(queues = \u0026#34;#{queue.name}\u0026#34;, concurrency = \u0026#34;3\u0026#34;) public Registration receive(Car car) { return Registration.builder() .id(car.getId()) .date(new Date()) .owner(\u0026#34;Ms. Rabbit\u0026#34;) .signature(\u0026#34;Signature of the registration\u0026#34;) .build(); } } This listener gets messages from the request queue.\nWe declare the Jackson2JsonMessageConverter in the configuration. This converter will convert the String payload of the message to a Car object.\nThe method receive() starts the business logic and returns a Registration object.\nSpring AMQP takes care of the rest again. It will convert the Registration to JSON, add the correlation ID of the request to the response, and send it to the response queue. 
We don\u0026rsquo;t even know the name of the response queue or the value of the correlation ID.\nThe client will get this response from the callback queue, read the correlation ID, and continue working.\nIf we have several threads on the client side that are working in parallel and sending requests, or if we have several methods that use the same request channel, or even if we have many instances of the client, Spring AMQP will always correlate the response message to the sender.\nThat\u0026rsquo;s it. Now the client can call a method that invokes logic on the server side. From the client perspective, this is a normal blocking remote call.\nRetrieving An Asynchronous Result Later Normally the APIs are fast, and the client expects the response after a few milliseconds or seconds.\nBut there are cases when the server takes longer to send the response. It can be because of security policies, high load, or some other long operations on the server-side. While waiting for the response, the client could work on something different and process the response later.\nWe can use AsyncRabbitTemplate to achieve this:\n@Configuration class ClientConfiguration { @Bean public AsyncRabbitTemplate asyncRabbitTemplate( RabbitTemplate rabbitTemplate){ return new AsyncRabbitTemplate(rabbitTemplate); } // Other methods omitted. } We have to declare the bean of AsyncRabbitTemplate in the client configuration. We pass the rabbitTemplate bean to the constructor, because Spring AMQP configured it for us, and we just want to use it asynchronously.\nAfter that, we can use it for sending messages:\n@Component class StatefulFutureClient { public void sendWithFuture() { CarDto carDto = CarDto.builder() // ...  .build(); ListenableFuture\u0026lt;RegistrationDto\u0026gt; listenableFuture = asyncRabbitTemplate.convertSendAndReceiveAsType( directExchange.getName(), ROUTING_KEY, carDto, new ParameterizedTypeReference\u0026lt;\u0026gt;() { }); // do some other work...  
try { RegistrationDto registrationDto = listenableFuture.get(); } catch (InterruptedException | ExecutionException e) { // ...  } } } We use the method with the same signature as with RabbitTemplate, but this method returns an implementation of the ListenableFuture interface. After calling the method convertSendAndReceiveAsType() we can execute other code and then call the method get() on the ListenableFuture to obtain the response from the server. If we call the method get() and the response has not been returned yet, the call blocks and we cannot execute further code until it arrives.\nRegistering a Callback To avoid a blocking call we can register a callback that is called asynchronously when the response message is received. The AsyncRabbitTemplate supports this approach:\n@Component class StatefulCallbackClient { public void sendAsynchronouslyWithCallback() { CarDto carDto = CarDto.builder() // ...  .build(); RabbitConverterFuture\u0026lt;RegistrationDto\u0026gt; rabbitConverterFuture = asyncRabbitTemplate.convertSendAndReceiveAsType( directExchange.getName(), ROUTING_KEY, carDto, new ParameterizedTypeReference\u0026lt;\u0026gt;() {}); rabbitConverterFuture.addCallback(new ListenableFutureCallback\u0026lt;\u0026gt;() { @Override public void onFailure(Throwable ex) { // ...  } @Override public void onSuccess(RegistrationDto registrationDto) { LOGGER.info(\u0026#34;Registration received {}\u0026#34;, registrationDto); } }); } } We declare RabbitConverterFuture as the return type of the method convertSendAndReceiveAsType(). Then we add a ListenableFutureCallback to the RabbitConverterFuture. From here, we can continue processing without waiting for the response. 
The ListenableFutureCallback will be called when the response arrives in the callback queue.\nBoth approaches, using a ListenableFuture and registering a callback, don\u0026rsquo;t require any changes on the server side.\nDelayed Response with a Separate Listener All these approaches work fine with Spring AMQP and RabbitMQ, but there are cases when they have a drawback. The client always has a state. It means if the client sends a request, the client has to keep the correlation ID in memory and assign the response to the request.\nIt means that only the sender of the request can get the response.\nLet\u0026rsquo;s say we have many instances of the client. One instance sends a request to the server and this instance, unfortunately, crashes for some reason and is not available anymore. The response cannot be processed anymore and is lost.\nIn a different case, the server can take longer than usual to process the request and the client doesn\u0026rsquo;t want to wait anymore and times out. Again, the response is lost.\nTo solve this problem we have to let other instances process the response.\nTo achieve this, we create the request sender and the response listener separately.\nFirst, we have to create a response queue and set up a listener that listens to this queue on the client side. Second, we have to take care of the correlation between requests and responses ourselves.\nWe declare the response queue in the client configuration:\n@Configuration class ClientConfiguration { @Bean public Queue response(){ return new Queue(\u0026#34;response\u0026#34;); } // other methods omitted. } Now we send the request to the same exchange as in the example above:\n@Component class StatelessClient { public void sendAndForget() { CarDto carDto = CarDto.builder() // ...  
.build(); UUID correlationId = UUID.randomUUID(); registrationService.saveCar(carDto, correlationId); MessagePostProcessor messagePostProcessor = message -\u0026gt; { MessageProperties messageProperties = message.getMessageProperties(); messageProperties.setReplyTo(replyQueue.getName()); messageProperties.setCorrelationId(correlationId.toString()); return message; }; template.convertAndSend(directExchange.getName(), \u0026#34;old.car\u0026#34;, carDto, messagePostProcessor); } } The first difference to the approach with the remote procedure call is that we generate a correlation ID in the code and don\u0026rsquo;t delegate it to Spring AMQP anymore.\nIn the next step, we save the correlation ID to the database. Another instance of the client, that uses the same database, can read it later. Now, we use the method convertAndSend() and not convertSendAndReceiveAsType(), because we don\u0026rsquo;t want to wait for the response after the call. We send messages in a fire-and-forget manner.\nIt is important to add the information about the correlation ID and the response queue to the message. The server will read this information and send the response to the response queue.\nWe do this by using the MessagePostProcessor. With MessagePostProcessor we can change the message properties. 
In this case, we add the correlation ID we saved in the database and the name of the response queue.\nThe request message now has all the data needed to be processed properly on the server side, so we don\u0026rsquo;t need to change anything on the server side.\nNow we implement the listener that listens to the response queue:\n@Component class ReplyConsumer { @RabbitListener(queues = \u0026#34;#{response.name}\u0026#34;) public void receive(RegistrationDto registrationDto, Message message){ String correlationId = message.getMessageProperties().getCorrelationId(); registrationService.saveRegistration( UUID.fromString(correlationId), registrationDto); } } We use the annotation @RabbitListener for the listener to the response queue. In the method receive() we need the payload of the message and the meta information of the message to read the correlation ID. We easily do it by adding the Message as the second parameter. Now we can read the correlation ID from the message, find the correlated data in the database, and continue with the business logic.\nSince we split the message sender and the listener for responses, we can scale the client application. One instance can send the request and another instance of the client can process the response.\nWith this approach, both sides of the interaction are scalable.\nConclusion Spring AMQP provides support for implementing the request/response pattern with a message broker synchronously or asynchronously. 
With minimal effort, it is possible to create scalable and reliable applications.\nYou\u0026rsquo;ll find a project with sample code on GitHub.\n","date":"August 3, 2020","image":"https://reflectoring.io/images/stock/0077-request-response-1200x628-branded_hub0acddd9d3251f270c0c84786c3942f9_709974_650x0_resize_q90_box.jpg","permalink":"/amqp-request-response/","title":"Request/Response Pattern with Spring AMQP"},{"categories":["Java"],"contents":"In this article, we\u0026rsquo;ll learn about the Maven Wrapper - what problem it solves, how to set it up, and how it works.\nWhy Do We Need the Maven Wrapper? Years ago, I was on a team developing a desktop-based Java application. We wanted to share our artifact with a couple of business users in the field to get some feedback. It was unlikely they had Java installed. Asking them to download, install, and configure version 1.2 of Java (yes, this was that long ago!) to run our application would have been a hassle for them.\nLooking around trying to find how others had solved this problem, I came across this idea of \u0026ldquo;bundling the JRE\u0026rdquo;. The idea was to include within the artifact itself the Java Runtime Environment that our application depended on. Then users don\u0026rsquo;t need to have a particular version or even any version of Java pre-installed - a neat solution to a specific problem.\nOver the years I came across this idea in many places. Today when we containerize our application for cloud deployment, it\u0026rsquo;s the same general idea: encapsulate the dependent and its dependency into a single unit to hide some complexity.\nWhat\u0026rsquo;s this got to do with the Maven Wrapper? Replace \u0026ldquo;business users\u0026rdquo; with \u0026ldquo;other developers\u0026rdquo; and \u0026ldquo;Java\u0026rdquo; with \u0026ldquo;Maven\u0026rdquo; in my story and it\u0026rsquo;s the same problem that the Maven Wrapper solves - we use it to encapsulate our source code and Maven build system. 
This lets other developers build our code without having Maven pre-installed.\nThe Maven Wrapper makes it easy to build our code on any machine, including CI/CD servers. We don\u0026rsquo;t have to worry about installing the right version of Maven on the CI servers anymore!\nSetting Up the Maven Wrapper From the project\u0026rsquo;s root directory (where pom.xml is located), we run this Maven command:\nmvn -N io.takari:maven:0.7.7:wrapper If we want to use a particular Maven version, we can specify it like this:\nmvn -N io.takari:maven:wrapper -Dmaven=3.6.3 This creates two files (mvnw, mvnw.cmd) and a hidden directory (.mvn). mvnw can be used in Unix-like environments and mvnw.cmd can be used in Windows.\nAlong with our code, we check the two files and the .mvn directory and its contents into our source control system like Git. Here\u0026rsquo;s how other developers can now build the code:\n./mvnw clean install Instead of the usual mvn command, they would use mvnw.\nAlternatively, we can set up the wrapper by copying over the mvnw and mvnw.cmd files and the .mvn directory from an existing project.\nStarting with Maven version 3.7.0, the Wrapper will be included as a feature within core Maven itself, making it even more convenient.\nHow Does the Maven Wrapper Work? The .mvn/wrapper directory has a jar file maven-wrapper.jar that downloads the required version of Maven if it\u0026rsquo;s not already present. It installs it in the .m2/wrapper/dists directory under the user\u0026rsquo;s home directory.\nWhere does it download Maven from? This information is present in the .mvn/wrapper/maven-wrapper.properties file:\ndistributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.5.2/apache-maven-3.5.2-bin.zip wrapperUrl=https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar Conclusion In this article, we learned what problem the Maven Wrapper solves, how to use it, and how it works. 
You can read a similar article on this blog on Gradle Wrapper.\n","date":"July 29, 2020","image":"https://reflectoring.io/images/stock/0076-airmail-1200x628-branded_hu11b26946a4345a7ce4c5465e5e627838_150840_650x0_resize_q90_box.jpg","permalink":"/maven-wrapper/","title":"Run Your Maven Build Anywhere with the Maven Wrapper"},{"categories":["AWS"],"contents":"When we build applications with AWS, we access various AWS services for multiple purposes: store files in S3, save some data in DynamoDB, send messages to SQS, write event handlers with lambda functions, and many others.\nHowever, in the early days of development, we prefer to focus on writing application code instead of spending time on setting up the environment for accessing AWS services. Setting up a development environment for using these services is time-consuming and incurs unwanted cost with AWS.\nTo avoid getting bogged down by these mundane tasks, we can use LocalStack to develop and test our applications with mock implementations of these services.\nSimply put, LocalStack is an open-source mock of the real AWS services. It provides a testing environment on our local machine with the same APIs as the real AWS services. We switch to using the real AWS services only in the integration environment and beyond.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n  Example Code This article is accompanied by a working code example on GitHub. Why Use LocalStack? The method of temporarily using dummy (or mock, fake, proxy) objects in place of actual ones is a popular way of running tests for applications with external dependencies. 
Most appropriately, these dummies are called test doubles.\nWe will implement test doubles of our AWS services with LocalStack. LocalStack supports:\n running our applications without connecting to AWS. avoiding the complexity of AWS configuration and focusing on development. running tests in our CI/CD pipeline. configuring and testing error scenarios.  How To Use LocalStack LocalStack is a Python application designed to run as an HTTP request processor while listening on specific ports.\nOur usage of LocalStack is centered around two tasks:\n Running LocalStack. Overriding the AWS endpoint URL with the URL of LocalStack.  LocalStack usually runs inside a Docker container, but we can also run it as a Python application instead.\nRunning LocalStack With Python We first install the LocalStack package using pip:\npip install localstack We then start LocalStack with the \u0026ldquo;start\u0026rdquo; command as shown below:\nlocalstack start This will start LocalStack inside a Docker container.\nRunning LocalStack With Docker We can also run LocalStack directly as a Docker image either with the docker run command or with docker-compose.\nWe will use docker-compose. For that, we download the base version of docker-compose.yml from the GitHub repository of LocalStack and customize it as shown in the next section, or run it without changes if we prefer to use the default configuration:\nTMPDIR=/private$TMPDIR docker-compose up This starts up LocalStack. The part TMPDIR=/private$TMPDIR is required only on macOS.\nCustomizing LocalStack The default behavior of LocalStack is to spin up all the supported services with each of them listening on port 4566. We can override this behavior of LocalStack by setting a few environment variables.\nThe default port 4566 can be overridden by setting the environment variable EDGE_PORT. 
We can also configure LocalStack to spin up a limited set of services by setting a comma-separated list of service names as the value of the environment variable SERVICES:\nversion: \u0026#39;2.1\u0026#39; services: localstack: container_name: \u0026#34;${LOCALSTACK_DOCKER_NAME-localstack_main}\u0026#34; image: localstack/localstack ports: - \u0026#34;4566-4599:4566-4599\u0026#34; - \u0026#34;${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}\u0026#34; environment: - SERVICES=s3,dynamodb In this docker-compose.yml, we set the environment variable SERVICES to the names of the services we want to use in our application (S3 and DynamoDB).\nConnecting With LocalStack We access AWS services via the AWS CLI or from our applications using the AWS SDK (Software Development Kit).\nThe AWS SDK and CLI are an integral part of our toolset for building applications with AWS services. The SDK provides client libraries in all the popular programming languages like Java, Node.js, or Python for accessing various AWS services.\nBoth the AWS SDK and the CLI provide an option of overriding the URL of the AWS API. We usually use this to specify the URL of our proxy server when connecting to AWS services from behind a corporate proxy server. 
We will use this same feature in our local environment for connecting to LocalStack.\nWe do this in the AWS CLI using commands like this:\naws --endpoint-url http://localhost:4566 kinesis list-streams Executing this command will send the requests to the URL of LocalStack specified as the value of the --endpoint-url command-line parameter (localhost on port 4566) instead of the real AWS endpoint.\nWe use a similar approach when using the SDK:\nURI endpointOverride = new URI(\u0026#34;http://localhost:4566\u0026#34;); S3Client s3 = S3Client.builder() .endpointOverride(endpointOverride) // \u0026lt;-- Overriding the endpoint  .region(region) .build(); Here, we have overridden the AWS endpoint of S3 by providing the URL of LocalStack as the parameter to the endpointOverride method in the S3ClientBuilder class.\nCommon Usage Patterns Creating a CLI Profile for LocalStack We start by creating a fake profile in the AWS CLI so that we can later use the AWS CLI for invoking the services provided by LocalStack:\naws configure --profile localstack Here we create a profile named localstack (we can call it whatever we want).\nThis will prompt for the AWS Access Key, Secret Access Key, and an AWS region. We can provide any dummy value for the credentials and a valid region name like us-east-1, but we can\u0026rsquo;t leave any of the values blank.\nUnlike AWS, LocalStack does not validate these credentials but complains if no profile is set. 
So far, it\u0026rsquo;s just like any other AWS profile, which we will use to work with LocalStack.\nRunning CLI Commands Against LocalStack With our profile created, we proceed to execute the AWS CLI commands by passing an additional parameter for overriding the endpoint URL:\naws s3api create-bucket --bucket io.pratik.mybucket --endpoint-url http://localhost:4566 This command creates an S3 bucket in LocalStack.\nWe can also execute a regular CloudFormation template that describes multiple AWS resources:\naws cloudformation create-stack \\ --endpoint-url http://localhost:4566 \\ --stack-name samplestack \\ --template-body file://sample.yaml \\ --profile localstack Similarly, we can run CLI commands for all the services supported and spun up by our instance of LocalStack.\nRunning JUnit Tests Against LocalStack If we want to run tests against the AWS APIs, we can do this from within a JUnit test.\nAt the start of a test, we start LocalStack as a Docker container on a random port and after all tests have finished execution we stop it again:\n@ExtendWith(LocalstackDockerExtension.class) @LocalstackDockerProperties(services = { \u0026#34;s3\u0026#34;, \u0026#34;sqs\u0026#34; }) class AwsServiceClientTest { private static final Logger logger = Logger.getLogger(AwsServiceClient.class.getName()); private static final Region region = Region.US_EAST_1; private static final String bucketName = \u0026#34;io.pratik.mybucket\u0026#34;; private AwsServiceClient awsServiceClient = null; @BeforeEach void setUp() throws Exception { String endpoint = Localstack.INSTANCE.getEndpointS3(); awsServiceClient = new AwsServiceClient(endpoint); createBucket(); } @Test void testStoreInS3() { logger.info(\u0026#34;Executing test...\u0026#34;); awsServiceClient.storeInS3(\u0026#34;image1\u0026#34;); assertTrue(keyExistsInBucket(), \u0026#34;Object created\u0026#34;); } } The code snippet is a JUnit Jupiter test used to test a Java class that stores an object in an S3 bucket. 
LocalstackDockerExtension in the @ExtendWith annotation is the JUnit test runner that pulls and runs the latest LocalStack Docker image and stops the container when tests are complete.\nThe container is configured to spin up the S3 and SQS services with the @LocalstackDockerProperties annotation.\nNote that the LocalStack endpoint is allocated dynamically and is accessed using methods in the form of Localstack.INSTANCE.getEndpointS3() in our setup method. Similarly, we would use methods like Localstack.INSTANCE.getEndpointDynamoDB() to access the dynamically allocated port of other services like DynamoDB.\nUsing LocalStack with Spring Boot Configuring a Spring Boot Application to Use LocalStack Now, we will create a simple customer registration application using the popular Spring Boot framework. Our application will have an API that will take a first name, last name, email, mobile, and a profile picture. This API will save the record in DynamoDB, and store the profile picture in an S3 bucket.\nWe start by creating a Spring Boot REST API using https://start.spring.io with dependencies on the web and Lombok modules.\nNext, we add the AWS dependencies to our pom.xml:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;software.amazon.awssdk\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;dynamodb\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;software.amazon.awssdk\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;s3\u0026lt;/artifactId\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;cloud.localstack\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;localstack-utils\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.2.1\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; We have also added a test-scoped dependency on LocalStack to start the LocalStack container when the JUnit test starts.\nAfter that, we create the 
controller class containing the endpoint and two service classes for invoking the S3 and DynamoDB services.\nWe use the default Spring Boot profile for real AWS services and create an additional profile named \u0026ldquo;local\u0026rdquo; for testing with LocalStack (mock AWS services). The LocalStack URL is configured in application-local.properties:\naws.local.endpoint=http://localhost:4566 Let\u0026rsquo;s now take a look at the service class connecting to DynamoDB:\n@Service public class CustomerProfileStore { private static final String TABLE_NAME = \u0026#34;entities\u0026#34;; private static final Region region = Region.US_EAST_1; private final String awsEndpoint; public CustomerProfileStore(@Value(\u0026#34;${aws.local.endpoint:#{null}}\u0026#34;) String awsEndpoint) { super(); this.awsEndpoint = awsEndpoint; } private DynamoDbClient getDdbClient() { DynamoDbClient dynamoDB = null; try { DynamoDbClientBuilder builder = DynamoDbClient.builder(); // awsEndpoint is set only in local environments  if(awsEndpoint != null) { // override aws endpoint with localstack URL in dev environment  builder.endpointOverride(new URI(awsEndpoint)); } dynamoDB = builder.region(region).build(); } catch(URISyntaxException ex) { log.error(\u0026#34;Invalid url {}\u0026#34;,awsEndpoint); throw new IllegalStateException(\u0026#34;Invalid url \u0026#34;+awsEndpoint,ex); } return dynamoDB; } We inject the URL of LocalStack from the configuration parameter aws.local.endpoint. The value is set only when we run our application using the local profile, else it has the default value null.\nIn the method getDdbClient(), we pass this variable to the endpointOverride() method in the DynamoDbClientBuilder class only if the variable awsEndpoint has a value, which is the case when using the local profile.\nI created the AWS resources (S3 bucket and DynamoDB table) using a CloudFormation template. I prefer this approach instead of creating the resources individually from the console. 
It allows me to create and clean up all the resources with a single command at the end of the exercise, following the principles of Infrastructure as Code.\nRunning the Spring Boot Application First, we start LocalStack with docker-compose as we did before.\nNext, we create our resources with the CloudFormation service:\naws cloudformation create-stack \\ --endpoint-url http://localhost:4566 \\ --stack-name samplestack \\ --template-body file://sample.yaml \\ --profile localstack Here we define the S3 bucket and DynamoDB table in a CloudFormation template file, sample.yaml. After creating our resources, we run our Spring Boot application with the Spring profile named \u0026ldquo;local\u0026rdquo;:\njava -Dspring.profiles.active=local -jar target/customerregistration-1.0.jar I have set 8085 as the port for my application. I tested my API by sending the request using curl. You can also use Postman or any other REST client:\ncurl -X POST -H \u0026#34;Content-Type: application/json\u0026#34; -d \u0026#39;{\u0026#34;firstName\u0026#34;:\u0026#34;Peter\u0026#34;,\u0026#34;lastName\u0026#34;:\u0026#34;Parker\u0026#34;, \u0026#34;email\u0026#34;:\u0026#34;peter.parker@fox.com\u0026#34;, \u0026#34;phone\u0026#34;:\u0026#34;476576576\u0026#34;, \u0026#34;photo\u0026#34;:\u0026#34;iVBORw0KGgo...AAAASUVORK5CYII=\u0026#34; }\u0026#39; http://localhost:8085/customers/ Finally, we run our Spring Boot app connected to the real AWS services by switching to the default profile.\nConclusion We saw how to use LocalStack for testing the integration of our application with AWS services locally. LocalStack also has an enterprise version available with more services and features.\nI hope this will help you feel empowered and have more fun while working with AWS services during development, leading to higher productivity, shorter development cycles, and lower AWS cloud bills.\nYou can refer to all the source code used in the article on GitHub.\nCheck Out the Book!  
This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"July 27, 2020","image":"https://reflectoring.io/images/stock/0074-stack-1200x628-branded_hu068f2b0d815bda96ddb686d2b65ba146_143922_650x0_resize_q90_box.jpg","permalink":"/aws-localstack/","title":"Local Development with AWS on LocalStack"},{"categories":["Spring Boot"],"contents":"In this article, we\u0026rsquo;ll look at how to integrate a Spring Boot application with Apache Kafka and start sending and consuming messages from our application. We\u0026rsquo;ll be going through each section with code examples.\n Example Code This article is accompanied by a working code example on GitHub. Why Kafka? Traditional messaging queues like ActiveMQ and RabbitMQ can handle high throughput and are usually used for long-running or background jobs and for communicating between services.\nKafka is a stream-processing platform built by LinkedIn and currently developed under the umbrella of the Apache Software Foundation. Kafka aims to provide low-latency ingestion of large amounts of event data.\nWe can use Kafka when we have to move a large amount of data and process it in real-time. An example would be when we want to process user behavior on our website to generate product suggestions or monitor events produced by our microservices.\nKafka is built from the ground up with horizontal scaling in mind. We can scale by adding more brokers to the existing Kafka cluster.\nKafka Vocabulary Let\u0026rsquo;s look at the key terminologies of Kafka:\n Producer: A producer is a client that sends messages to the Kafka server to the specified topic. Consumer: Consumers are the recipients who receive messages from the Kafka server. 
Broker: Brokers form a Kafka cluster by sharing information with each other using ZooKeeper. A broker receives messages from producers, and consumers fetch messages from the broker by topic, partition, and offset. Cluster: Kafka is a distributed system. A Kafka cluster contains multiple brokers sharing the workload. Topic: A topic is a category name to which messages are published and from which consumers can receive messages. Partition: Messages published to a topic are spread across a Kafka cluster into several partitions. Each partition can be associated with a broker to allow consumers to read from a topic in parallel. Offset: An offset identifies the position of a message within a partition; Kafka uses it to track the last message it has already delivered to a consumer.  Configuring a Kafka Client We should have a Kafka server running on our machine. If you don\u0026rsquo;t have Kafka set up on your system, take a look at the Kafka quickstart guide. Once we have a Kafka server up and running, a Kafka client can be easily configured with Spring configuration in Java or even more quickly with Spring Boot.\nLet\u0026rsquo;s start by adding the spring-kafka dependency to our pom.xml:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.springframework.kafka\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spring-kafka\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.5.2.RELEASE\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Using Java Configuration Let\u0026rsquo;s now see how to configure a Kafka client using Spring\u0026rsquo;s Java Configuration. 
To split up responsibilities, we have separated KafkaProducerConfig and KafkaConsumerConfig.\nLet\u0026rsquo;s have a look at the producer configuration first:\n@Configuration class KafkaProducerConfig { @Value(\u0026#34;${io.reflectoring.kafka.bootstrap-servers}\u0026#34;) private String bootstrapServers; @Bean public Map\u0026lt;String, Object\u0026gt; producerConfigs() { Map\u0026lt;String, Object\u0026gt; props = new HashMap\u0026lt;\u0026gt;(); props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers); props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class); props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class); return props; } @Bean public ProducerFactory\u0026lt;String, String\u0026gt; producerFactory() { return new DefaultKafkaProducerFactory\u0026lt;\u0026gt;(producerConfigs()); } @Bean public KafkaTemplate\u0026lt;String, String\u0026gt; kafkaTemplate() { return new KafkaTemplate\u0026lt;\u0026gt;(producerFactory()); } } The above example shows how to configure the Kafka producer to send messages. ProducerFactory is responsible for creating Kafka Producer instances.\nKafkaTemplate helps us to send messages to their respective topic. We\u0026rsquo;ll see more about KafkaTemplate in the sending messages section.\nIn producerConfigs() we are configuring a couple of properties:\n BOOTSTRAP_SERVERS_CONFIG - Host and port on which Kafka is running. KEY_SERIALIZER_CLASS_CONFIG - Serializer class to be used for the key. VALUE_SERIALIZER_CLASS_CONFIG - Serializer class to be used for the value. We are using StringSerializer for both keys and values.  
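A serializer simply turns keys and values into bytes before they go on the wire. As a rough plain-Java illustration of what StringSerializer does (it encodes the string as bytes, UTF-8 by default) - the class ToyStringSerializer below is our own stand-in, not part of Kafka:

```java
import java.nio.charset.StandardCharsets;

// Toy stand-in for Kafka's StringSerializer: a serializer receives the
// topic name and a value, and returns the bytes to put on the wire.
class ToyStringSerializer {
    byte[] serialize(String topic, String data) {
        // Kafka serializers return null for null values (tombstones)
        return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
    }
}
```

The real StringSerializer implements Kafka's Serializer interface and is configured via the KEY_SERIALIZER_CLASS_CONFIG and VALUE_SERIALIZER_CLASS_CONFIG properties shown above.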
Now that our producer config is ready, let\u0026rsquo;s create a configuration for the consumer:\n@Configuration class KafkaConsumerConfig { @Value(\u0026#34;${io.reflectoring.kafka.bootstrap-servers}\u0026#34;) private String bootstrapServers; @Bean public Map\u0026lt;String, Object\u0026gt; consumerConfigs() { Map\u0026lt;String, Object\u0026gt; props = new HashMap\u0026lt;\u0026gt;(); props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers); props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); return props; } @Bean public ConsumerFactory\u0026lt;String, String\u0026gt; consumerFactory() { return new DefaultKafkaConsumerFactory\u0026lt;\u0026gt;(consumerConfigs()); } @Bean public KafkaListenerContainerFactory\u0026lt;ConcurrentMessageListenerContainer\u0026lt;String, String\u0026gt;\u0026gt; kafkaListenerContainerFactory() { ConcurrentKafkaListenerContainerFactory\u0026lt;String, String\u0026gt; factory = new ConcurrentKafkaListenerContainerFactory\u0026lt;\u0026gt;(); factory.setConsumerFactory(consumerFactory()); return factory; } } We use ConcurrentKafkaListenerContainerFactory to create containers for methods annotated with @KafkaListener. A KafkaMessageListenerContainer receives all the messages from all topics or partitions on a single thread. We\u0026rsquo;ll see more about message listener containers in the consuming messages section.\nUsing Spring Boot Auto Configuration Spring Boot does most of the configuration automatically, so we can focus on building the listeners and producing the messages. It also provides the option to override the default configuration through application.properties. The Kafka configuration is controlled by the configuration properties with the prefix spring.kafka.*:\nspring.kafka.bootstrap-servers=localhost:9092 spring.kafka.consumer.group-id=myGroup Creating Kafka Topics A topic must exist to start sending messages to it. 
Let\u0026rsquo;s now have a look at how we can create Kafka topics:\n@Configuration class KafkaTopicConfig { @Bean public NewTopic topic1() { return TopicBuilder.name(\u0026#34;reflectoring-1\u0026#34;).build(); } @Bean public NewTopic topic2() { return TopicBuilder.name(\u0026#34;reflectoring-2\u0026#34;).build(); } ... } A KafkaAdmin bean is responsible for creating new topics in our broker. With Spring Boot, a KafkaAdmin bean is automatically registered.\nFor a non-Spring-Boot application, we have to register the KafkaAdmin bean manually:\n@Bean KafkaAdmin admin() { Map\u0026lt;String, Object\u0026gt; configs = new HashMap\u0026lt;\u0026gt;(); configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, ...); return new KafkaAdmin(configs); } To create a topic, we register a NewTopic bean for each topic in the application context. If the topic already exists, the bean will be ignored. We can make use of TopicBuilder to create these beans. KafkaAdmin also increases the number of partitions if it finds that an existing topic has fewer partitions than NewTopic.numPartitions.\nSending Messages Using KafkaTemplate KafkaTemplate provides convenient methods to send messages to topics:\n@Component class KafkaSenderExample { private KafkaTemplate\u0026lt;String, String\u0026gt; kafkaTemplate; ... @Autowired KafkaSenderExample(KafkaTemplate\u0026lt;String, String\u0026gt; kafkaTemplate, ...) { this.kafkaTemplate = kafkaTemplate; ... } void sendMessage(String message, String topicName) { kafkaTemplate.send(topicName, message); } ... } All we need to do is to call the sendMessage() method with the message and the topic name as parameters.\nSpring Kafka also allows us to configure an async callback:\n@Component class KafkaSenderExample { ... 
void sendMessageWithCallback(String message) { ListenableFuture\u0026lt;SendResult\u0026lt;String, String\u0026gt;\u0026gt; future = kafkaTemplate.send(topic1, message); future.addCallback(new ListenableFutureCallback\u0026lt;SendResult\u0026lt;String, String\u0026gt;\u0026gt;() { @Override public void onSuccess(SendResult\u0026lt;String, String\u0026gt; result) { LOG.info(\u0026#34;Message [{}] delivered with offset {}\u0026#34;, message, result.getRecordMetadata().offset()); } @Override public void onFailure(Throwable ex) { LOG.warn(\u0026#34;Unable to deliver message [{}]. {}\u0026#34;, message, ex.getMessage()); } }); } } The send() method of KafkaTemplate returns a ListenableFuture\u0026lt;SendResult\u0026gt;. We can register a ListenableFutureCallback with the listener to receive the result of the send and do some work within an execution context.\nIf we don\u0026rsquo;t want to work with Futures, we can register a ProducerListener instead:\n@Configuration class KafkaProducerConfig { @Bean KafkaTemplate\u0026lt;String, String\u0026gt; kafkaTemplate() { KafkaTemplate\u0026lt;String, String\u0026gt; kafkaTemplate = new KafkaTemplate\u0026lt;\u0026gt;(producerFactory()); ... kafkaTemplate.setProducerListener(new ProducerListener\u0026lt;String, String\u0026gt;() { @Override public void onSuccess( ProducerRecord\u0026lt;String, String\u0026gt; producerRecord, RecordMetadata recordMetadata) { LOG.info(\u0026#34;ACK from ProducerListener message: {} offset: {}\u0026#34;, producerRecord.value(), recordMetadata.offset()); } }); return kafkaTemplate; } } We configured KafkaTemplate with a ProducerListener which allows us to implement the onSuccess() and onError() methods.\nUsing RoutingKafkaTemplate We can use RoutingKafkaTemplate when we have multiple producers with different configurations and we want to select producer at runtime based on the topic name.\n@Configuration class KafkaProducerConfig { ... 
@Bean public RoutingKafkaTemplate routingTemplate(GenericApplicationContext context) { // ProducerFactory with Bytes serializer  Map\u0026lt;String, Object\u0026gt; props = new HashMap\u0026lt;\u0026gt;(); props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers); props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class); props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class); DefaultKafkaProducerFactory\u0026lt;Object, Object\u0026gt; bytesPF = new DefaultKafkaProducerFactory\u0026lt;\u0026gt;(props); context.registerBean(DefaultKafkaProducerFactory.class, \u0026#34;bytesPF\u0026#34;, bytesPF); // ProducerFactory with String serializer  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class); DefaultKafkaProducerFactory\u0026lt;Object, Object\u0026gt; stringPF = new DefaultKafkaProducerFactory\u0026lt;\u0026gt;(props); Map\u0026lt;Pattern, ProducerFactory\u0026lt;Object, Object\u0026gt;\u0026gt; map = new LinkedHashMap\u0026lt;\u0026gt;(); map.put(Pattern.compile(\u0026#34;.*-bytes\u0026#34;), bytesPF); map.put(Pattern.compile(\u0026#34;reflectoring-.*\u0026#34;), stringPF); return new RoutingKafkaTemplate(map); } ... } RoutingKafkaTemplate takes a map of java.util.regex.Pattern and ProducerFactory instances and routes messages to the first ProducerFactory matching a given topic name. If we have two patterns ref.* and reflectoring-.*, the pattern reflectoring-.* should be at the beginning because the ref.* pattern would \u0026ldquo;override\u0026rdquo; it, otherwise.\nIn the above example, we have created two patterns .*-bytes and reflectoring-.*. 
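The first-match lookup can be demonstrated in plain Java with a LinkedHashMap of patterns. The TopicRouter class below is our own toy model of the routing step only - it returns a factory name instead of a real ProducerFactory:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Toy model of RoutingKafkaTemplate's topic routing: an insertion-ordered
// map of patterns, where the first matching pattern wins.
class TopicRouter {
    private final Map<Pattern, String> routes = new LinkedHashMap<>();

    TopicRouter() {
        // same patterns and order as in the example above
        routes.put(Pattern.compile(".*-bytes"), "bytesPF");
        routes.put(Pattern.compile("reflectoring-.*"), "stringPF");
    }

    // Returns the name of the first factory whose pattern matches the topic.
    String route(String topic) {
        return routes.entrySet().stream()
            .filter(e -> e.getKey().matcher(topic).matches())
            .map(Map.Entry::getValue)
            .findFirst()
            .orElseThrow(() -> new IllegalArgumentException("No route for " + topic));
    }
}
```

A topic like reflectoring-bytes matches both patterns, so the ordering of the map entries decides which producer factory is used.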
The topic names ending with \u0026lsquo;-bytes\u0026rsquo; and starting with reflectoring-.* will use ByteArraySerializer and StringSerializer respectively when we use RoutingKafkaTemplate instance.\nConsuming Messages Message Listener A KafkaMessageListenerContainer receives all messages from all topics on a single thread.\nA ConcurrentMessageListenerContainer assigns these messages to multiple KafkaMessageListenerContainer instances to provide multi-threaded capability.\nUsing @KafkaListener at Method Level The @KafkaListener annotation allows us to create listeners:\n@Component class KafkaListenersExample { Logger LOG = LoggerFactory.getLogger(KafkaListenersExample.class); @KafkaListener(topics = \u0026#34;reflectoring-1\u0026#34;) void listener(String data) { LOG.info(data); } @KafkaListener( topics = \u0026#34;reflectoring-1, reflectoring-2\u0026#34;, groupId = \u0026#34;reflectoring-group-2\u0026#34;) void commonListenerForMultipleTopics(String message) { LOG.info(\u0026#34;MultipleTopicListener - {}\u0026#34;, message); } } To use this annotation we should add the @EnableKafka annotation on one of our @Configuration classes. Also, it requires a listener container factory, which we have configured in KafkaConsumerConfig.java.\nUsing @KafkaListener will make this bean method a listener and wrap the bean in MessagingMessageListenerAdapter. We can also specify multiple topics for a single listener using the topics attribute as shown above.\nUsing @KafkaListener at Class Level We can also use the @KafkaListener annotation at class level. If we do so, we need to specify @KafkaHandler at the method level:\n@Component @KafkaListener(id = \u0026#34;class-level\u0026#34;, topics = \u0026#34;reflectoring-3\u0026#34;) class KafkaClassListener { ... 
@KafkaHandler void listen(String message) { LOG.info(\u0026#34;KafkaHandler[String] {}\u0026#34;, message); } @KafkaHandler(isDefault = true) void listenDefault(Object object) { LOG.info(\u0026#34;KafkaHandler[Default] {}\u0026#34;, object); } } When the listener receives messages, it converts them into the target types and tries to match that type against the method signatures to find out which method to call.\nIn the example, messages of type String will be received by listen() and type Object will be received by listenDefault(). Whenever there is no match, the default handler (defined by isDefault=true) will be called.\nConsuming Messages from a Specific Partition with an Initial Offset We can configure listeners to listen to multiple topics, partitions, and a specific initial offset.\nFor example, if we want to receive all the messages sent to a topic from the time of its creation on application startup we can set the initial offset to zero:\n@Component class KafkaListenersExample { ... 
@KafkaListener( groupId = \u0026#34;reflectoring-group-3\u0026#34;, topicPartitions = @TopicPartition( topic = \u0026#34;reflectoring-1\u0026#34;, partitionOffsets = { @PartitionOffset( partition = \u0026#34;0\u0026#34;, initialOffset = \u0026#34;0\u0026#34;) })) void listenToPartitionWithOffset( @Payload String message, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition, @Header(KafkaHeaders.OFFSET) int offset) { LOG.info(\u0026#34;Received message [{}] from partition-{} with offset-{}\u0026#34;, message, partition, offset); } } Since we have specified initialOffset = \u0026quot;0\u0026quot;, we will receive all the messages starting from offset 0 every time we restart the application.\nWe can also retrieve some useful metadata about the consumed message using the @Header() annotation.\nFiltering Messages Spring provides a strategy to filter messages before they reach our listeners:\nclass KafkaConsumerConfig { @Bean KafkaListenerContainerFactory\u0026lt;ConcurrentMessageListenerContainer\u0026lt;String, String\u0026gt;\u0026gt; kafkaListenerContainerFactory() { ConcurrentKafkaListenerContainerFactory\u0026lt;String, String\u0026gt; factory = new ConcurrentKafkaListenerContainerFactory\u0026lt;\u0026gt;(); factory.setConsumerFactory(consumerFactory()); factory.setRecordFilterStrategy(record -\u0026gt; record.value().contains(\u0026#34;ignored\u0026#34;)); return factory; } } Spring wraps the listener with a FilteringMessageListenerAdapter. It takes an implementation of RecordFilterStrategy in which we implement the filter method. Messages that match the filter will be discarded before reaching the listener.\nIn the above example, we have added a filter to discard the messages which contain the word \u0026ldquo;ignored\u0026rdquo;.\nReplying with @SendTo Spring allows sending method\u0026rsquo;s return value to the specified destination with @SendTo:\n@Component class KafkaListenersExample { ... 
@KafkaListener(topics = \u0026#34;reflectoring-others\u0026#34;) @SendTo(\u0026#34;reflectoring-1\u0026#34;) String listenAndReply(String message) { LOG.info(\u0026#34;ListenAndReply [{}]\u0026#34;, message); return \u0026#34;This is a reply sent after receiving message\u0026#34;; } } The Spring Boot default configuration gives us a reply template. Since we are overriding the factory configuration above, the listener container factory must be provided with a KafkaTemplate by using setReplyTemplate() which is then used to send the reply.\nIn the above example, we are sending the reply message to the topic \u0026ldquo;reflectoring-1\u0026rdquo;.\nCustom Messages Let\u0026rsquo;s now look at how to send/receive a Java object. We\u0026rsquo;ll be sending and receiving User objects in our example.\nclass User { private String name; ... } Configuring JSON Serializer \u0026amp; Deserializer To achieve this, we must configure our producer and consumer to use a JSON serializer and deserializer:\n@Configuration class KafkaProducerConfig { ... @Bean public ProducerFactory\u0026lt;String, User\u0026gt; userProducerFactory() { ... configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class); return new DefaultKafkaProducerFactory\u0026lt;\u0026gt;(configProps); } @Bean public KafkaTemplate\u0026lt;String, User\u0026gt; userKafkaTemplate() { return new KafkaTemplate\u0026lt;\u0026gt;(userProducerFactory()); } } @Configuration class KafkaConsumerConfig { ... 
public ConsumerFactory\u0026lt;String, User\u0026gt; userConsumerFactory() { Map\u0026lt;String, Object\u0026gt; props = new HashMap\u0026lt;\u0026gt;(); props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers); props.put(ConsumerConfig.GROUP_ID_CONFIG, \u0026#34;reflectoring-user\u0026#34;); return new DefaultKafkaConsumerFactory\u0026lt;\u0026gt;( props, new StringDeserializer(), new JsonDeserializer\u0026lt;\u0026gt;(User.class)); } @Bean public ConcurrentKafkaListenerContainerFactory\u0026lt;String, User\u0026gt; userKafkaListenerContainerFactory() { ConcurrentKafkaListenerContainerFactory\u0026lt;String, User\u0026gt; factory = new ConcurrentKafkaListenerContainerFactory\u0026lt;\u0026gt;(); factory.setConsumerFactory(userConsumerFactory()); return factory; } ... } Spring Kafka provides JsonSerializer and JsonDeserializer implementations that are based on the Jackson JSON object mapper. These allow us to convert any Java object to a byte[] and back.\nIn the above example, we are creating one more ConcurrentKafkaListenerContainerFactory for JSON serialization. Here, we have configured JsonSerializer.class as our value serializer in the producer config and JsonDeserializer\u0026lt;\u0026gt;(User.class) as our value deserializer in the consumer config.\nFor this, we are creating a separate Kafka listener container userKafkaListenerContainerFactory(). If we have multiple Java object types to be serialized/deserialized, we have to create a listener container for each type as shown above.\nSending Java Objects Now that we have configured our serializer and deserializer, we can send a User object using the KafkaTemplate:\n@Component class KafkaSenderExample { ... @Autowired private KafkaTemplate\u0026lt;String, User\u0026gt; userKafkaTemplate; void sendCustomMessage(User user, String topicName) { userKafkaTemplate.send(topicName, user); } ... 
} Receiving Java Objects We can listen to User objects by using the @KafkaListener annotation:\n@Component class KafkaListenersExample { @KafkaListener( topics = \u0026#34;reflectoring-user\u0026#34;, groupId=\u0026#34;reflectoring-user\u0026#34;, containerFactory=\u0026#34;userKafkaListenerContainerFactory\u0026#34;) void listener(User user) { LOG.info(\u0026#34;CustomUserListener [{}]\u0026#34;, user); } } Since we have multiple listener containers, we are specifying which container factory to use.\nIf we don\u0026rsquo;t specify the containerFactory attribute, it defaults to kafkaListenerContainerFactory, which uses StringSerializer and StringDeserializer in our case.\nConclusion In this article, we covered how we can leverage the Spring support for Apache Kafka to build Kafka-based messaging, with code examples that can help you get started quickly.\nYou can play around with the code on GitHub.\n","date":"July 22, 2020","image":"https://reflectoring.io/images/stock/0075-envelopes-1200x628-branded_hu2f9dd448936f3159981d5b962b2c979c_136735_650x0_resize_q90_box.jpg","permalink":"/spring-boot-kafka/","title":"Using Kafka with Spring Boot"},{"categories":["Java"],"contents":"In this article, we\u0026rsquo;ll start with a quick intro to Resilience4j and then deep dive into its Retry module. We\u0026rsquo;ll learn when and how to use it, and what features it provides. Along the way, we\u0026rsquo;ll also learn a few good practices when implementing retries.\n Example Code This article is accompanied by a working code example on GitHub. What is Resilience4j? Many things can go wrong when applications communicate over the network. Operations can time out or fail because of broken connections, network glitches, unavailability of upstream services, etc. Applications can overload one another, become unresponsive, or even crash.\nResilience4j is a Java library that helps us build resilient and fault-tolerant applications. 
It provides a framework for writing code to prevent and handle such issues.\nWritten for Java 8 and above, Resilience4j works on constructs like functional interfaces, lambda expressions, and method references.\nResilience4j Modules Let\u0026rsquo;s have a quick look at the modules and their purpose:\n   Module Purpose     Retry Automatically retry a failed remote operation   RateLimiter Limit how many times we call a remote operation in a certain period   TimeLimiter Set a time limit when calling remote operation   Circuit Breaker Fail fast or perform default actions when a remote operation is continuously failing   Bulkhead Limit the number of concurrent remote operations   Cache Store results of costly remote operations    Usage Pattern While each module has its abstractions, here\u0026rsquo;s the general usage pattern:\n Create a Resilience4j configuration object Create a Registry object for such configurations Create or get a Resilience4j object from the Registry Code the remote operation as a lambda expression or a functional interface or a usual Java method Create a decorator or wrapper around the code from step 4 using one of the provided helper methods Call the decorator method to invoke the remote operation  Steps 1-5 are usually done one time at application start. 
Let\u0026rsquo;s look at these steps for the retry module:\nRetryConfig config = RetryConfig.ofDefaults(); // ----\u0026gt; 1 RetryRegistry registry = RetryRegistry.of(config); // ----\u0026gt; 2 Retry retry = registry.retry(\u0026#34;flightSearchService\u0026#34;, config); // ----\u0026gt; 3  FlightSearchService searchService = new FlightSearchService(); SearchRequest request = new SearchRequest(\u0026#34;NYC\u0026#34;, \u0026#34;LAX\u0026#34;, \u0026#34;07/21/2020\u0026#34;); Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; flightSearchSupplier = () -\u0026gt; searchService.searchFlights(request); // ----\u0026gt; 4  Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; retryingFlightSearch = Retry.decorateSupplier(retry, flightSearchSupplier); // ----\u0026gt; 5  System.out.println(retryingFlightSearch.get()); // ----\u0026gt; 6 When to Use Retry? A remote operation can be any request made over the network. Usually, it\u0026rsquo;s one of these:\n Sending an HTTP request to a REST endpoint Calling a remote procedure (RPC) or a web service Reading and writing data to/from a data store (SQL/NoSQL databases, object storage, etc.) Sending messages to and receiving messages from a message broker (RabbitMQ/ActiveMQ/Kafka etc.)  We have two options when a remote operation fails - immediately return an error to our client, or retry the operation. If it succeeds on retry, it\u0026rsquo;s great for the clients - they don\u0026rsquo;t even have to know that there was a temporary issue.\nWhich option to choose depends on the error type (transient or permanent), the operation (idempotent or nonidempotent), the client (person or application), and the use case.\nTransient errors are temporary and usually, the operation is likely to succeed if retried. 
Requests being throttled by an upstream service, a connection drop or a timeout due to temporary unavailability of some service are examples.\nA hardware failure or a 404 (Not Found) response from a REST API are examples of permanent errors where retrying won\u0026rsquo;t help.\nIf we want to apply retries, the operation must be idempotent. Suppose the remote service received and processed our request, but an issue occurred when sending out the response. In that case, when we retry, we don\u0026rsquo;t want the service to treat the request as a new one or return an unexpected error (think money transfer in banking).\nRetries increase the response time of APIs. This may not be an issue if the client is another application like a cron job or a daemon process. If it\u0026rsquo;s a person, however, sometimes it\u0026rsquo;s better to be responsive, fail quickly, and give feedback rather than making the person wait while we keep retrying.\nFor some critical use cases, reliability can be more important than response time and we may need to implement retries even if the client is a person. Money transfer in banking or a travel agency booking flights and hotels for a trip are good examples - users expect reliability, not an instantaneous response for such use cases. We can be responsive by immediately notifying the user that we have accepted their request and letting them know once it is completed.\nUsing the Resilience4j Retry Module RetryRegistry, RetryConfig, and Retry are the main abstractions in resilience4j-retry. RetryRegistry is a factory for creating and managing Retry objects. RetryConfig encapsulates configurations like how many times retries should be attempted, how long to wait between attempts etc. Each Retry object is associated with a RetryConfig. 
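Before looking at those helper methods, it helps to see what a retry decorator does conceptually. The hand-rolled sketch below is not Resilience4j\u0026rsquo;s implementation - it only illustrates the idea of wrapping a Supplier in a retry loop, with the attempt count and wait duration that a RetryConfig would carry:

```java
import java.util.function.Supplier;

// Hand-rolled retry decorator (illustration only, not Resilience4j's code).
// Retries the wrapped Supplier on RuntimeException, waiting between attempts.
class RetryDecorator {
    static <T> Supplier<T> decorate(int maxAttempts, long waitMillis, Supplier<T> supplier) {
        return () -> {
            RuntimeException last = null;
            // assumes maxAttempts >= 1
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return supplier.get();
                } catch (RuntimeException e) {
                    last = e;
                    if (attempt < maxAttempts) {
                        try {
                            Thread.sleep(waitMillis);
                        } catch (InterruptedException ie) {
                            Thread.currentThread().interrupt();
                            throw new RuntimeException(ie);
                        }
                    }
                }
            }
            // all attempts exhausted - surface the last failure
            throw last;
        };
    }
}
```

Retry.decorateSupplier() returns this kind of wrapped Supplier, with the retry policy coming from the associated RetryConfig instead of raw parameters.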
Retry provides helper methods to create decorators for the functional interfaces or lambda expressions containing the remote call.\nLet\u0026rsquo;s see how to use the various features available in the retry module. Assume that we are building a website for an airline to allow its customers to search for and book flights. Our service talks to a remote service encapsulated by the class FlightSearchService.\nSimple Retry In a simple retry, the operation is retried if a RuntimeException is thrown during the remote call. We can configure the number of attempts, how long to wait between attempts etc.:\nRetryConfig config = RetryConfig.custom() .maxAttempts(3) .waitDuration(Duration.of(2, SECONDS)) .build(); // Registry, Retry creation omitted  FlightSearchService service = new FlightSearchService(); SearchRequest request = new SearchRequest(\u0026#34;NYC\u0026#34;, \u0026#34;LAX\u0026#34;, \u0026#34;07/31/2020\u0026#34;); Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; flightSearchSupplier = () -\u0026gt; service.searchFlights(request); Supplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; retryingFlightSearch = Retry.decorateSupplier(retry, flightSearchSupplier); System.out.println(retryingFlightSearch.get()); We created a RetryConfig specifying that we want to retry a maximum of 3 times and wait for 2s between attempts. If we used the RetryConfig.ofDefaults() method instead, default values of 3 attempts and 500ms wait duration would be used.\nWe expressed the flight search call as a lambda expression - a Supplier of List\u0026lt;Flight\u0026gt;. The Retry.decorateSupplier() method decorates this Supplier with retry functionality. Finally, we called the get() method on the decorated Supplier to make the remote call.\nWe would use decorateSupplier() if we wanted to create a decorator and re-use it at a different place in the codebase. 
If we want to create it and immediately execute it, we can use executeSupplier() instance method instead:\nList\u0026lt;Flight\u0026gt; flights = retry.executeSupplier( () -\u0026gt; service.searchFlights(request)); Here\u0026rsquo;s sample output showing the first request failing and then succeeding on the second attempt:\nSearching for flights; current time = 20:51:34 975 Operation failed Searching for flights; current time = 20:51:36 985 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;07/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;}, ...] Retrying on Checked Exceptions Now, suppose we want to retry for both checked and unchecked exceptions. Let\u0026rsquo;s say we\u0026rsquo;re calling FlightSearchService.searchFlightsThrowingException() which can throw a checked Exception. Since a Supplier cannot throw a checked exception, we would get a compiler error on this line:\nSupplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; flightSearchSupplier = () -\u0026gt; service.searchFlightsThrowingException(request); We might try handling the Exception within the lambda expression and returning Collections.emptyList(), but this doesn\u0026rsquo;t look good. But more importantly, since we are catching Exception ourselves, the retry doesn\u0026rsquo;t work anymore:\nSupplier\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; flightSearchSupplier = () -\u0026gt; { try { return service.searchFlightsThrowingException(request); } catch (Exception e) { // don\u0026#39;t do this, this breaks the retry!  } return Collections.emptyList(); }; So what should we do when we want to retry for all exceptions that our remote call can throw? 
We can use the Retry.decorateCheckedSupplier() (or the executeCheckedSupplier() instance method) instead of Retry.decorateSupplier():\nCheckedFunction0\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt; retryingFlightSearch = Retry.decorateCheckedSupplier(retry, () -\u0026gt; service.searchFlightsThrowingException(request)); try { System.out.println(retryingFlightSearch.apply()); } catch (...) { // handle exception that can occur after retries are exhausted } Retry.decorateCheckedSupplier() returns a CheckedFunction0 which represents a function with no arguments. Notice the call to apply() on the CheckedFunction0 object to invoke the remote operation.\nIf we don\u0026rsquo;t want to work with Suppliers, Retry provides more helper decorator methods like decorateFunction(), decorateCheckedFunction(), decorateRunnable(), decorateCallable(), etc., to work with other language constructs. The difference between the decorate* and decorateChecked* versions is that the decorate* version retries on RuntimeExceptions and the decorateChecked* version retries on Exception.\nConditional Retry The simple retry example above showed how to retry when we get a RuntimeException or a checked Exception when calling a remote service. In real-world applications, we may not want to retry for all exceptions. For example, if we get an AuthenticationFailedException, retrying the same request will not help. When we make an HTTP call, we may want to check the HTTP response status code or look for a particular application error code in the response to decide if we should retry. Let\u0026rsquo;s see how to implement such conditional retries.\nPredicate-based Conditional Retry Let\u0026rsquo;s say that the airline\u0026rsquo;s flight service initializes flight data in its database regularly. This internal operation takes a few seconds for a given day\u0026rsquo;s flight data. 
If we call the flight search for that day while this initialization is in progress, the service returns a particular error code FS-167. The flight search documentation says that this is a temporary error and that the operation can be retried after a few seconds.\nLet\u0026rsquo;s see how we would create the RetryConfig:\nRetryConfig config = RetryConfig.\u0026lt;SearchResponse\u0026gt;custom() .maxAttempts(3) .waitDuration(Duration.of(3, SECONDS)) .retryOnResult(searchResponse -\u0026gt; searchResponse .getErrorCode() .equals(\u0026#34;FS-167\u0026#34;)) .build(); We use the retryOnResult() method and pass a Predicate that does this check. The logic in this Predicate can be as complex as we want - it could be a check against a set of error codes, or it can be some custom logic to decide if the search should be retried.\nException-based Conditional Retry Suppose we had a general exception FlightServiceBaseException that\u0026rsquo;s thrown when anything unexpected happens during the interaction with the airline\u0026rsquo;s flight service. As a general policy, we want to retry when this exception is thrown. But there is one subclass of SeatsUnavailableException which we don\u0026rsquo;t want to retry on - if there are no seats available on the flight, retrying will not help. We can do this by creating the RetryConfig like this:\nRetryConfig config = RetryConfig.custom() .maxAttempts(3) .waitDuration(Duration.of(3, SECONDS)) .retryExceptions(FlightServiceBaseException.class) .ignoreExceptions(SeatsUnavailableException.class) .build(); In retryExceptions() we specify a list of exceptions. Resilience4j will retry any exception which matches or inherits from the exceptions in this list. We put the ones we want to ignore and not retry into ignoreExceptions(). If the code throws some other exception at runtime, say an IOException, it will also not be retried.\nLet\u0026rsquo;s say that even for a given exception we don\u0026rsquo;t want to retry in all instances. 
Maybe we want to retry only if the exception has a particular error code or a certain text in the exception message. We can use the retryOnException() method in that case:\nPredicate\u0026lt;Throwable\u0026gt; rateLimitPredicate = rle -\u0026gt; (rle instanceof RateLimitExceededException) \u0026amp;\u0026amp; \u0026#34;RL-101\u0026#34;.equals(((RateLimitExceededException) rle).getErrorCode()); RetryConfig config = RetryConfig.custom() .maxAttempts(3) .waitDuration(Duration.of(1, SECONDS)) .retryOnException(rateLimitPredicate) .build(); As in the predicate-based conditional retry, the checks within the predicate can be as complex as required.\nBackoff Strategies Our examples so far had a fixed wait time for the retries. Often we want to increase the wait time after each attempt - this is to give the remote service sufficient time to recover in case it is currently overloaded. We can do this using IntervalFunction.\nIntervalFunction is a functional interface - it\u0026rsquo;s a Function that takes the attempt count as a parameter and returns the wait time in milliseconds.\nRandomized Interval Here we specify a random wait time between attempts:\nRetryConfig config = RetryConfig.custom() .maxAttempts(4) .intervalFunction(IntervalFunction.ofRandomized(2000)) .build(); IntervalFunction.ofRandomized() has a randomizationFactor associated with it. We can set this as the second parameter to ofRandomized(). If it\u0026rsquo;s not set, it takes a default value of 0.5. This randomizationFactor determines the range over which the random value will be spread. 
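The spread is straightforward arithmetic: interval ± interval * randomizationFactor. A quick plain-Java sketch of the computed bounds (an illustration of the arithmetic, not the library code):

```java
public class RandomizedIntervalSketch {

    // Lower and upper bound of the wait time for ofRandomized(baseMillis, factor):
    // [base - base * factor, base + base * factor]
    public static long[] bounds(long baseMillis, double randomizationFactor) {
        long delta = (long) (baseMillis * randomizationFactor);
        return new long[] { baseMillis - delta, baseMillis + delta };
    }

    public static void main(String[] args) {
        long[] range = bounds(2000, 0.5);
        // prints: wait times between 1000ms and 3000ms
        System.out.println("wait times between " + range[0] + "ms and " + range[1] + "ms");
    }
}
```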
So for the default of 0.5 above, the wait times generated will be between 1000ms (2000 - 2000 * 0.5) and 3000ms (2000 + 2000 * 0.5).\nThe sample output shows this behavior:\nSearching for flights; current time = 20:27:08 729 Operation failed Searching for flights; current time = 20:27:10 643 Operation failed Searching for flights; current time = 20:27:13 204 Operation failed Searching for flights; current time = 20:27:15 236 Flight search successful [Flight{flightNumber=\u0026#39;XY 765\u0026#39;, flightDate=\u0026#39;07/31/2020\u0026#39;, from=\u0026#39;NYC\u0026#39;, to=\u0026#39;LAX\u0026#39;},...] Exponential Interval For exponential backoff, we specify two values - an initial wait time and a multiplier. In this method, the wait time increases exponentially between attempts because of the multiplier. For example, if we specified an initial wait time of 1s and a multiplier of 2, the retries would be done after 1s, 2s, 4s, 8s, 16s, and so on. This method is a recommended approach when the client is a background job or a daemon.\nHere\u0026rsquo;s how we would create the RetryConfig for exponential backoff:\nRetryConfig config = RetryConfig.custom() .maxAttempts(6) .intervalFunction(IntervalFunction.ofExponentialBackoff(1000, 2)) .build(); The sample output below shows this behavior:\nSearching for flights; current time = 20:37:02 684 Operation failed Searching for flights; current time = 20:37:03 727 Operation failed Searching for flights; current time = 20:37:05 731 Operation failed Searching for flights; current time = 20:37:09 731 Operation failed Searching for flights; current time = 20:37:17 731 IntervalFunction also provides an exponentialRandomBackoff() method which combines both the approaches above. We can also provide custom implementations of IntervalFunction.\nRetrying Asynchronous Operations The examples we saw until now were all synchronous calls. Let\u0026rsquo;s see how to retry asynchronous operations. 
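The mechanic underneath asynchronous retries is: run the stage, and if it completes exceptionally, schedule the next attempt on an executor after the wait time. A plain-Java sketch of that idea (an illustration only, not the Resilience4j implementation; all names are invented):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class AsyncRetrySketch {

    // Run the stage; on success complete the result, on failure schedule the
    // next attempt on the scheduler after waitMillis, until attempts run out.
    public static <T> CompletableFuture<T> retryAsync(
            Supplier<CompletableFuture<T>> task,
            int attemptsLeft,
            long waitMillis,
            ScheduledExecutorService scheduler) {
        CompletableFuture<T> result = new CompletableFuture<>();
        task.get().whenComplete((value, error) -> {
            if (error == null) {
                result.complete(value);
            } else if (attemptsLeft <= 1) {
                result.completeExceptionally(error); // attempts exhausted
            } else {
                scheduler.schedule(() -> {
                    retryAsync(task, attemptsLeft - 1, waitMillis, scheduler)
                            .whenComplete((v, e) -> {
                                if (e == null) result.complete(v);
                                else result.completeExceptionally(e);
                            });
                }, waitMillis, TimeUnit.MILLISECONDS);
            }
        });
        return result;
    }

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        AtomicInteger calls = new AtomicInteger();
        // A flaky async operation that fails twice before succeeding
        Supplier<CompletableFuture<String>> flakySearch = () ->
                CompletableFuture.supplyAsync(() -> {
                    if (calls.incrementAndGet() < 3) {
                        throw new RuntimeException("Operation failed");
                    }
                    return "Flight search successful";
                });
        // prints: Flight search successful
        System.out.println(retryAsync(flakySearch, 3, 50, scheduler).join());
        scheduler.shutdown();
    }
}
```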
Suppose we were searching for flights asynchronously like this:\nCompletableFuture.supplyAsync(() -\u0026gt; service.searchFlights(request)) .thenAccept(System.out::println); The searchFlights() call happens on a different thread and when it returns, the returned List\u0026lt;Flight\u0026gt; is passed to thenAccept() which just prints it.\nWe can retry asynchronous operations like the one above using the executeCompletionStage() method on the Retry object. This method takes two parameters - a ScheduledExecutorService on which the retry will be scheduled and a Supplier\u0026lt;CompletionStage\u0026gt; that will be decorated. It decorates and executes the CompletionStage and then returns a CompletionStage on which we can call thenAccept() as before:\nScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(); Supplier\u0026lt;CompletionStage\u0026lt;List\u0026lt;Flight\u0026gt;\u0026gt;\u0026gt; completionStageSupplier = () -\u0026gt; CompletableFuture.supplyAsync(() -\u0026gt; service.searchFlights(request)); retry.executeCompletionStage(scheduler, completionStageSupplier) .thenAccept(System.out::println); In a real application, we would use a shared thread pool (Executors.newScheduledThreadPool()) for scheduling the retries instead of the single-threaded scheduled executor shown here.\nRetry Events In all these examples, the decorator has been a black box - we don\u0026rsquo;t know when an attempt fails and the framework code attempts a retry. Suppose, for a given request, we wanted to log some details like the attempt count or the wait time until the next attempt. We can do that using Retry events that are published at different points of execution. 
Retry has an EventPublisher that has methods like onRetry(), onSuccess(), etc.\nWe can collect and log details by implementing these listener methods:\nRetry.EventPublisher publisher = retry.getEventPublisher(); publisher.onRetry(event -\u0026gt; System.out.println(event.toString())); publisher.onSuccess(event -\u0026gt; System.out.println(event.toString())); Similarly, RetryRegistry also has an EventPublisher which publishes events when Retry objects are added or removed from the registry.\nRetry Metrics Retry maintains counters to track how many times an operation\n Succeeded on the first attempt Succeeded after retrying Failed without retrying Failed even after retrying  It updates these counters each time a decorator is executed.\nWhy Capture Metrics? Capturing and regularly analyzing metrics can give us insights into the behavior of upstream services. It can also help identify bottlenecks and other potential problems.\nFor example, if we find that an operation usually fails on the first attempt, we can look into the cause for this. If we find that our requests are getting throttled or that we are getting a timeout when establishing a connection, it could indicate that the remote service needs additional resources or capacity.\nHow to Capture Metrics? Resilience4j uses Micrometer to publish metrics. Micrometer provides a facade over instrumentation clients for monitoring systems like Prometheus, Azure Monitor, New Relic, etc. So we can publish the metrics to any of these systems or switch between them without changing our code.\nFirst, we create RetryConfig and RetryRegistry and Retry as usual. 
Then, we create a MeterRegistry and bind the RetryRegistry to it:\nMeterRegistry meterRegistry = new SimpleMeterRegistry(); TaggedRetryMetrics.ofRetryRegistry(retryRegistry).bindTo(meterRegistry); After running the retryable operation a few times, we display the captured metrics:\nConsumer\u0026lt;Meter\u0026gt; meterConsumer = meter -\u0026gt; { String desc = meter.getId().getDescription(); String metricName = meter.getId().getTag(\u0026#34;kind\u0026#34;); Double metricValue = StreamSupport.stream(meter.measure().spliterator(), false) .filter(m -\u0026gt; m.getStatistic().name().equals(\u0026#34;COUNT\u0026#34;)) .findFirst() .map(m -\u0026gt; m.getValue()) .orElse(0.0); System.out.println(desc + \u0026#34; - \u0026#34; + metricName + \u0026#34;: \u0026#34; + metricValue); }; meterRegistry.forEachMeter(meterConsumer); Here\u0026rsquo;s some sample output:\nThe number of successful calls without a retry attempt - successful_without_retry: 4.0 The number of failed calls without a retry attempt - failed_without_retry: 0.0 The number of failed calls after a retry attempt - failed_with_retry: 0.0 The number of successful calls after a retry attempt - successful_with_retry: 6.0 Of course, in a real application, we would export the data to a monitoring system and view it on a dashboard.\nGotchas and Good Practices When Retrying Often services provide client libraries or SDKs which have a built-in retry mechanism. This is especially true for cloud services. For example, Azure CosmosDB and Azure Service Bus provide client libraries with a built-in retry facility. They allow applications to set retry policies to control the retry behavior.\nIn such cases, it\u0026rsquo;s better to use the built-in retries rather than coding our own. 
If we do need to write our own, we should disable the built-in default retry policy - otherwise, it could lead to nested retries where each attempt from the application causes multiple attempts from the client library.\nSome cloud services document transient error codes. Azure SQL for example, provides a list of error codes for which it expects database clients to retry. It\u0026rsquo;s good to check if service providers have such lists before deciding to add retry for a particular operation.\nAnother good practice is to maintain the values we use in RetryConfig like maximum attempts, wait time, and retryable error codes and exceptions as a configuration outside our service. If we discover new transient errors or we need to tweak the interval between attempts, we can make the change without building and redeploying the service.\nUsually when retrying, there is likely a Thread.sleep() happening somewhere in the framework code. This would be the case for synchronous retries with a wait time between retries. If our code is running in the context of a web application, this Thread will most likely be the web server\u0026rsquo;s request handling thread. So if we do too many retries it would reduce the throughput of our application.\nConclusion In this article, we learned what Resilience4j is and how we can use its retry module to make our applications resilient to temporary errors. We looked at the different ways to configure retries and some examples for deciding between the various approaches. 
We learned some good practices to follow when implementing retries and the importance of collecting and analyzing retry metrics.\nYou can play around with a complete application illustrating these ideas using the code on GitHub.\n","date":"July 16, 2020","image":"https://reflectoring.io/images/stock/0073-broken-1200x628-branded_hu8c9c07dffa3decedec7d1e5022e5b907_194743_650x0_resize_q90_box.jpg","permalink":"/retry-with-resilience4j/","title":"Implementing Retry with Resilience4j"},{"categories":["AWS"],"contents":"The AWS journey started with deploying a Spring Boot application in a Docker container manually and we continued with automatically deploying it with CloudFormation and connecting it to an RDS database instance.\nOn the road to a production-grade, continuously deployable system, we now want to find out how we can deploy a new version of our Docker image without any downtime using CloudFormation and ECS.\nCheck Out the Book!  This article gives only a first impression of what you can do with CloudFormation and ECS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n  Example Code This article is accompanied by a working code example on GitHub. Recap: The CloudFormation Stacks In the previous blog posts of this series, we have created a set of CloudFormation stacks that we\u0026rsquo;ll reuse in this article:\n a network stack that creates a virtual private cloud (VPC) network, a load balancer, and all the wiring that\u0026rsquo;s necessary to deploy a Docker container with Amazon\u0026rsquo;s ECS service, a service stack that takes a Docker image as input and creates an ECS service and task to deploy that image into the VPC created by the network stack.  
You can review both stacks in YML format on Github.\nWe can spin up the stacks using the AWS CLI with this Bash script:\naws cloudformation create-stack \\  --stack-name reflectoring-network \\  --template-body file://network.yml \\  --capabilities CAPABILITY_IAM aws cloudformation wait stack-create-complete --stack-name reflectoring-network aws cloudformation create-stack \\  --stack-name reflectoring-service \\  --template-body file://service.yml \\  --parameters \\  ParameterKey=StackName,ParameterValue=reflectoring-network \\  ParameterKey=ServiceName,ParameterValue=reflectoring-hello-world \\  ParameterKey=ImageUrl,ParameterValue=docker.io/reflectoring/aws-hello-world:latest \\  ParameterKey=ContainerPort,ParameterValue=8080 \\  ParameterKey=HealthCheckPath,ParameterValue=/hello \\  ParameterKey=HealthCheckIntervalSeconds,ParameterValue=90 aws cloudformation wait stack-create-complete --stack-name reflectoring-service The stacks are fairly well configurable, so we can play around with the parameters to deploy any Docker container.\nBeing able to deploy an application by creating CloudFormation stacks is nice and all, but to implement a continuous deployment pipeline, we need to deploy new versions of the Docker image without downtime.\nHow can we do that?\nOptions for Updating a CloudFormation Stack We\u0026rsquo;ll discuss four options to update a running CloudFormation stack:\n The first option is to simply update the stack using CloudFormations update command. We modify the template and/or the parameters and then run the update command. A more secure approach is to use a changeset. This way, we can preview the changes CloudFormation will do and can execute the changes once we\u0026rsquo;re satisfied that CloudFormation will only apply intended changes. Another option is to delete a stack and then re-create it. Finally, we can use the ECS API to replace the ECS task with a new one carrying the new Docker image.  
Let\u0026rsquo;s investigate how we can use each of these options to deploy a new version of a Docker image into our service stack.\nOption 1: Updating the Service Stack Let\u0026rsquo;s say we have started our service stack with the aws cloudformation create-stack command from above. We passed the Docker image docker.io/reflectoring/aws-hello-world:latest into the ImageUrl parameter. The stack has spun up an ECS cluster running 2 Docker containers with that image (2 is the default DesiredCount in the service stack).\nNow, let\u0026rsquo;s say we have published a new version of our Docker image and want to deploy this new version. We can simply run an update-stack command:\naws cloudformation update-stack \\ --stack-name reflectoring-service \\ --use-previous-template \\ --parameters \\ ParameterKey=ImageUrl,ParameterValue=docker.io/reflectoring/aws-hello-world:v3 \\ ... more parameters aws cloudformation wait stack-update-complete --stack-name reflectoring-service To make sure that we haven\u0026rsquo;t accidentally changed anything in the cloudformation template, we\u0026rsquo;re using the parameter --use-previous-template, which takes the template from the previous call to create-stack.\nWe have to be careful to only change the parameters we want to change. In this case, we have only changed the ImageUrl parameter to docker.io/reflectoring/aws-hello-world:v3.\nWe cannot use the popular latest tag to specify the latest version of a Docker image, even though it would point to the same version. That\u0026rsquo;s because CloudFormation compares the input parameters of the update call to the input parameters we used when we created the stack to identify if there was a change. 
If we used docker.io/reflectoring/aws-hello-world:latest in both cases, CloudFormation wouldn\u0026rsquo;t identify a change and would do nothing.\nOnce the update command has run, ECS will spin up two Docker containers with the new image version, drain any connections from the two old containers, send new requests to the new containers, and finally remove the old ones.\nAll this works because we have configured a DesiredCount of 2 and a MaximumPercent of 200 in our ECS service configuration. This allows a maximum of 200% (i.e. 4) of the desired instances to run during the update.\nThat\u0026rsquo;s it. The stack has been updated with a new version of the Docker image.\nThis method is easy, but it has the drawback of being error-prone. We might accidentally change one of the other five parameters or make a change in the stack YML file. All these unwanted changes would automatically be applied!\nOption 2: Avoid Accidental Changes with Changesets If we want to make sure not to apply accidental changes during an aws cloudformation update-stack command, we can use changesets.\nTo create a changeset, we use the create-change-set command:\naws cloudformation create-change-set \\ --change-set-name update-reflectoring-service \\ --stack-name reflectoring-service \\ --use-previous-template \\ --parameters \\ ParameterKey=ImageUrl,ParameterValue=docker.io/reflectoring/aws-hello-world:v4 \\ ... more parameters This command calculates any changes to the currently running stack and stores them for our approval.\nAgain, we pass the --use-previous-template parameter to avoid accidental changes to the stack template. 
We could just as well pass a template file, however, and any changes in that template compared to the one we previously used would be reflected in the changeset.\nAfter creating a changeset, we can review it in the AWS console or with this CLI command:\naws cloudformation describe-change-set \\ --stack-name reflectoring-service \\ --change-set-name update-reflectoring-service This outputs a bunch of JSON or YAML (depending on your preferences), which lists the resources that would be updated when we execute the changeset.\nWhen we\u0026rsquo;re happy with the changes, we can execute the changeset:\naws cloudformation execute-change-set \\ --stack-name reflectoring-service \\ --change-set-name update-reflectoring-service Now the stack will be updated, same as with the update-stack command, and the Docker containers will be replaced with new ones carrying the new Docker image.\nWhile I get the idea of having a manual review step before deploying changes, I find changesets hard to interpret. They list the resources that are being changed, but they don\u0026rsquo;t highlight the attributes of the resources that changed. I imagine it would be very hard to properly review a changeset for potential errors.\nAlso, **a manual review of changesets defeats the purpose of continuous delivery**. We don\u0026rsquo;t want any manual steps between merging the code to the main branch and the actual deployment.\nI guess we could build some fancy automation that validates a changeset for us, but what validations would we program into it? That smells of too much maintenance overhead for me, so I\u0026rsquo;m opting out of changesets for my purposes.\nOption 3: Delete and Re-create a Granular Stack The third, and most destructive, option to deploy a new version of our app is to simply delete and then re-create a CloudFormation stack.\nIn the case of the network and service stack above, that would mean downtime, though! 
If we delete the service stack, the currently running Docker containers would be deleted as well. Only after the new stack with the new Docker image has been created would the application be available again.\nIn some cases, it might be possible to split the CloudFormation stacks into multiple, more granular pieces and then delete and re-create one of the stacks in isolation without causing a downtime. But this doesn\u0026rsquo;t work with ECS and the Fargate deployment option. We\u0026rsquo;d have to delete the ECS::Service resource and that means a downtime.\nThis is not a solution when we want to update a Docker image with ECS and Fargate without downtime.\nOption 4: Update the ECS Service via the API The last option is to call the ECS API directly to update the ECS task (credit for researching this option goes to Philip Riecks, with whom I\u0026rsquo;m currently creating an AWS training resource).\nFor this option, we need to create a JSON file describing the ECS task we want to update. 
That looks something like this (this file is from a different project than the stacks discussed earlier, so it won\u0026rsquo;t match up):\n{ \u0026#34;family\u0026#34;: \u0026#34;aws101-todo-app\u0026#34;, \u0026#34;cpu\u0026#34;: \u0026#34;256\u0026#34;, \u0026#34;memory\u0026#34;: \u0026#34;512\u0026#34;, \u0026#34;requiresCompatibilities\u0026#34;: [ \u0026#34;FARGATE\u0026#34; ], \u0026#34;networkMode\u0026#34;: \u0026#34;awsvpc\u0026#34;, \u0026#34;executionRoleArn\u0026#34;: \u0026#34;\u0026lt;ROLE_ARN\u0026gt;\u0026#34;, \u0026#34;containerDefinitions\u0026#34;: [ { \u0026#34;cpu\u0026#34;: 256, \u0026#34;memory\u0026#34;: 512, \u0026#34;name\u0026#34;: \u0026#34;aws101-todo-app\u0026#34;, \u0026#34;image\u0026#34;: \u0026#34;\u0026lt;IMAGE_URL\u0026gt;\u0026#34;, \u0026#34;portMappings\u0026#34;: [ { \u0026#34;containerPort\u0026#34;: 8080 } ], \u0026#34;logConfiguration\u0026#34;: { \u0026#34;logDriver\u0026#34;: \u0026#34;awslogs\u0026#34;, \u0026#34;options\u0026#34;: { \u0026#34;awslogs-group\u0026#34;: \u0026#34;aws101-todo-app\u0026#34;, \u0026#34;awslogs-region\u0026#34;: \u0026#34;eu-central-1\u0026#34;, \u0026#34;awslogs-stream-prefix\u0026#34;: \u0026#34;aws101-todo-app\u0026#34; } } } ] } To create the above file, we need to research some parameters from the CloudFormation stacks like the ARN of the IAM role that we want to assign to the task.\nThen, we register the task with ECS:\naws ecs register-task-definition --cli-input-json file://ecs-task.json And finally, we update the ECS service that we created with the CloudFormation stack and replace the existing ECS task with the new one:\naws ecs update-service \\ --cluster \u0026lt;ecs-cluster-name\u0026gt; \\ --service \u0026lt;ecs-service-name\u0026gt; \\ --task-definition \u0026lt;ecs-task-arn\u0026gt; \\ This requires the ECS service to be running already, naturally, so we\u0026rsquo;d need to have created the CloudFormation stack before running this command.\nAlso, we need to find out the 
name of the ECS cluster and the ECS Service as well as the ARN (Amazon Resource Name) of the ECS task that we just created.\nCalling the API directly gives us ultimate control over our resources, but I don\u0026rsquo;t particularly like the idea of modifying resources that we have previously created via a CloudFormation stack.\nWhile this example is probably harmless, if we\u0026rsquo;re using APIs to modify resources that we have created with CloudFormation too much, we might put a CloudFormation stack in a state where we can\u0026rsquo;t update it via CloudFormation any more.\nI guess that\u0026rsquo;s not a problem when you\u0026rsquo;re not planning to run updates via CloudFormation anyways, but I like the fact that CloudFormation is managing the resources for me and don\u0026rsquo;t want to interfere with that unless I must.\nConclusion So many different ways to update an ECS task to replace one Docker image with another! Most of the options discussed provide a way to deploy a new version without downtime.\nI\u0026rsquo;ll take advantage of CloudFormation\u0026rsquo;s resource management for now, so I\u0026rsquo;ll stick with the simple update-stack option, at least until I find a reason why that\u0026rsquo;s not working anymore.\nThe AWS Journey By now, we have successfully deployed a highly available Spring Boot application and a (not so highly available) PostgreSQL instance all with running a few commands from the command line. In this article, we have discussed some options to deploy a new version of a Docker image without downtime.\nBut there\u0026rsquo;s more to do on the road to a production-ready, continuously deployable system.\nHere\u0026rsquo;s a list of the questions I want to answer on this journey. If there\u0026rsquo;s a link, it has already been answered with a blog post! If not, stay tuned!\n How can I deploy an application from the web console? How can I deploy an application from the command line? 
How can I implement high availability for my deployed application? How do I set up load balancing? How can I deploy a database in a private subnet and access it from my application? How can I deploy a new version of my application without downtime? (this article) How can I deploy my application from a CI/CD pipeline? How can I deploy my application into multiple environments (test, staging, production)? How can I auto-scale my application horizontally on high load? How can I implement sticky sessions in the load balancer (if I\u0026rsquo;m building a session-based web app)? How can I monitor what’s happening on my application? How can I bind my application to a custom domain? How can I access other AWS resources (like SQS queues and DynamoDB tables) from my application? How can I implement HTTPS?  Check Out the Book!  This article gives only a first impression of what you can do with CloudFormation and ECS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"July 13, 2020","image":"https://reflectoring.io/images/stock/0061-cloud-1200x628-branded_hu34d6aa247e0bb2675461b5a0146d87a8_82985_650x0_resize_q90_box.jpg","permalink":"/aws-cloudformation-ecs-deployment/","title":"The AWS Journey Part 4: Zero-Downtime Deployment with CloudFormation and ECS"},{"categories":["Software Craft"],"contents":"This article gives a quick intro to the Liskov Substitution Principle (LSP), why it\u0026rsquo;s important, and how to use it to validate object-oriented designs. We\u0026rsquo;ll also see some examples and learn how to correctly identify and fix violations of the LSP.\n Example Code This article is accompanied by a working code example on GitHub. What is the LSP? 
At a high level, the LSP states that in an object-oriented program, if we substitute a superclass object reference with an object of any of its subclasses, the program should not break.\nSay we had a method that used a superclass object reference to do something:\nclass SomeClass { void aMethod(SuperClass superClassReference) { doSomething(superClassReference); } // definition of doSomething() omitted } This should work as expected for every possible subclass object of SuperClass that is passed to it. If substituting a superclass object with a subclass object changes the program behavior in unexpected ways, the LSP is violated.\nThe LSP is applicable when there\u0026rsquo;s a supertype-subtype inheritance relationship by either extending a class or implementing an interface. We can think of the methods defined in the supertype as defining a contract. Every subtype is expected to stick to this contract. If a subclass does not adhere to the superclass\u0026rsquo;s contract, it\u0026rsquo;s violating the LSP.\nThis makes sense intuitively - a class\u0026rsquo;s contract tells its clients what to expect. If a subclass extends or overrides the behavior of the superclass in unintended ways, it would break the clients.\nHow can a method in a subclass break a superclass method\u0026rsquo;s contract? There are several possible ways:\n Returning an object that\u0026rsquo;s incompatible with the object returned by the superclass method. Throwing a new exception that\u0026rsquo;s not thrown by the superclass method. Changing the semantics or introducing side effects that are not part of the superclass\u0026rsquo;s contract.  Java and other statically-typed languages prevent 1 (unless we use very generic classes like Object) and 2 (for checked exceptions) by flagging them at compile-time. It\u0026rsquo;s still possible to violate the LSP in these languages via the third way.\nWhy is the LSP Important? LSP violations are a design smell. 
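A tiny invented example of the third kind of violation makes the smell visible - the compiler accepts the override, yet substituting the subclass changes the behavior of code written against the superclass contract:

```java
public class LspSemanticsExample {

    // Superclass contract: increment() increases the counter by exactly 1
    public static class Counter {
        protected int value;
        public void increment() { value++; }
        public int value() { return value; }
    }

    // Compiles fine, but silently breaks the "+1" semantics of the contract
    public static class SkippingCounter extends Counter {
        @Override
        public void increment() { value += 2; }
    }

    // A client written against the superclass contract
    public static int countTo(Counter counter, int times) {
        for (int i = 0; i < times; i++) {
            counter.increment();
        }
        return counter.value();
    }

    public static void main(String[] args) {
        System.out.println(countTo(new Counter(), 3));         // prints: 3
        System.out.println(countTo(new SkippingCounter(), 3)); // prints: 6
    }
}
```

Nothing in the type system flags `SkippingCounter`; the broken contract lives only in documentation and tests, which is exactly why this kind of violation has to be caught by design review rather than by the compiler.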
We may have generalized a concept prematurely and created a superclass where none is needed. Future requirements for the concept might not fit the class hierarchy we have created.\nIf client code cannot substitute a superclass reference with a subclass object freely, it would be forced to do instanceof checks and specially handle some subclasses. If this kind of conditional code is spread across the codebase, it will be difficult to maintain.\nEvery time we add or modify a subclass, we would have to comb through the codebase and change multiple places. This is difficult and error-prone.\nIt also defeats the purpose of introducing the supertype abstraction in the first place which is to make it easy to enhance the program.\nIt may not even be possible to identify all the places and change them - we may not own or control the client code. We could be developing our functionality as a library and providing them to external users, for example.\nViolating the LSP - An Example Suppose we were building the payment module for our eCommerce website. Customers order products on the site and pay using payment instruments like a credit card or a debit card.\nWhen a customer provides their card details, we want to\n validate it, run it through a third-party fraud detection system, and then send the details to a payment gateway for processing.  While some basic validations are required on all cards, there are additional validations needed on credit cards. Once the payment is done, we record it in our database. 
Because of various security and regulatory reasons, we don\u0026rsquo;t store the actual card details in our database, but a fingerprint identifier for it that\u0026rsquo;s returned by the payment gateway.\nGiven these requirements, we might model our classes as below:\nabstract class PaymentInstrument { String name; String cardNumber; String verificationCode; Date expiryDate; String fingerprint; void validate() throws PaymentInstrumentInvalidException { // basic validation on name, expiryDate etc.  if (name == null || name.isEmpty()) { throw new PaymentInstrumentInvalidException(\u0026#34;Name is invalid\u0026#34;); } // other validations  } void runFraudChecks() throws FraudDetectedException { // run checks against a third-party system  } void sendToPaymentGateway() throws PaymentFailedException { // send details to payment gateway (PG) and set fingerprint from  // the payment gateway response  } } class CreditCard extends PaymentInstrument { @Override void validate() throws PaymentInstrumentInvalidException { super.validate(); // additional validations for credit cards  } // other credit card-specific code } class DebitCard extends PaymentInstrument { // debit card-specific code } A different area in our codebase where we process a payment might look something like this:\nclass PaymentProcessor { void process(OrderDetails orderDetails, PaymentInstrument paymentInstrument) { try { paymentInstrument.validate(); paymentInstrument.runFraudChecks(); paymentInstrument.sendToPaymentGateway(); saveToDatabase(orderDetails, paymentInstrument); } catch (...){ // exception handling  } } void saveToDatabase( OrderDetails orderDetails, PaymentInstrument paymentInstrument) { String fingerprint = paymentInstrument.getFingerprint(); // save fingerprint and order details in DB  } } Of course, in an actual production system, there would be many complex aspects to handle. 
The single processor class above might well be a bunch of classes in multiple packages across service and repository layers.\nAll is well and our system is processing payments as expected. At some point, the marketing team decides to introduce reward points to increase customer loyalty. Customers get a small number of reward points for each purchase. They can use the points to buy products on the site.\nIdeally, we should be able to just add a RewardsCard class that extends PaymentInstrument and be done with it. But we find that adding it violates the LSP!\nThere are no fraud checks for Rewards Cards. Details are not sent to payment gateways and there is no concept of a fingerprint identifier. PaymentProcessor breaks as soon as we add RewardsCard.\nWe might try force-fitting RewardsCard into the current class hierarchy by overriding runFraudChecks() and sendToPaymentGateway() with empty, do-nothing implementations.\nThis would still break the application - we might get a NullPointerException from the saveToDatabase() method since the fingerprint would be null. Can we handle it just this once as a special case in saveToDatabase() by doing an instanceof check on the PaymentInstrument argument?\nBut we know that if we do it once, we\u0026rsquo;ll do it again. Soon our codebase will be strewn with multiple checks and special cases to handle the problems created by the incorrect class model. We can imagine the pain this will cause each time we enhance the payment module.\nFor example, what if the business decides to accept Bitcoins? Or marketing introduces a new payment mode like Cash on Delivery?\nFixing the Design Let\u0026rsquo;s revisit the design and create supertype abstractions only if they are general enough to create code that is flexible to requirement changes. 
We will also use the following object-oriented design principles:\n Program to interface, not implementation Encapsulate what varies Prefer composition over inheritance  To start with, what we can be sure of is that our application needs to collect payment - both at present and in the future. It\u0026rsquo;s also reasonable to think that we would want to validate whatever payment details we collect. Almost everything else could change. So let\u0026rsquo;s define the below interfaces:\ninterface IPaymentInstrument { void validate() throws PaymentInstrumentInvalidException; PaymentResponse collectPayment() throws PaymentFailedException; } class PaymentResponse { String identifier; } PaymentResponse encapsulates an identifier - this could be the fingerprint for credit and debit cards or the card number for rewards cards. It could be something else for a different payment instrument in the future. The encapsulation ensures IPaymentInstrument can remain unchanged if future payment instruments have more data.\nPaymentProcessor class now looks like this:\nclass PaymentProcessor { void process( OrderDetails orderDetails, IPaymentInstrument paymentInstrument) { try { paymentInstrument.validate(); PaymentResponse response = paymentInstrument.collectPayment(); saveToDatabase(orderDetails, response.getIdentifier()); } catch (...) 
{ // exception handling  } } void saveToDatabase(OrderDetails orderDetails, String identifier) { // save the identifier and order details in DB  } } There are no runFraudChecks() and sendToPaymentGateway() calls in PaymentProcessor anymore - these are not general enough to apply to all payment instruments.\nLet\u0026rsquo;s add a few more interfaces for other concepts which seem general enough in our problem domain:\ninterface IFraudChecker { void runChecks() throws FraudDetectedException; } interface IPaymentGatewayHandler { PaymentGatewayResponse handlePayment() throws PaymentFailedException; } interface IPaymentInstrumentValidator { void validate() throws PaymentInstrumentInvalidException; } class PaymentGatewayResponse { String fingerprint; } And here are the implementations:\nclass ThirdPartyFraudChecker implements IFraudChecker { // members omitted  @Override void runChecks() throws FraudDetectedException { // external system call omitted  } } class PaymentGatewayHandler implements IPaymentGatewayHandler { // members omitted  @Override PaymentGatewayResponse handlePayment() throws PaymentFailedException { // send details to payment gateway (PG), set the fingerprint  // received from PG on a PaymentGatewayResponse and return  } } class BankCardBasicValidator implements IPaymentInstrumentValidator { // members like name, cardNumber etc. omitted  @Override void validate() throws PaymentInstrumentInvalidException { // basic validation on name, expiryDate etc.  if (name == null || name.isEmpty()) { throw new PaymentInstrumentInvalidException(\u0026#34;Name is invalid\u0026#34;); } // other basic validations  } } Let\u0026rsquo;s build CreditCard and DebitCard abstractions by composing the above building blocks in different ways. We first define a class that implements IPaymentInstrument :\nabstract class BaseBankCard implements IPaymentInstrument { // members like name, cardNumber etc. 
omitted  // below dependencies will be injected at runtime  IPaymentInstrumentValidator basicValidator; IFraudChecker fraudChecker; IPaymentGatewayHandler gatewayHandler; @Override void validate() throws PaymentInstrumentInvalidException { basicValidator.validate(); } @Override PaymentResponse collectPayment() throws PaymentFailedException { PaymentResponse response = new PaymentResponse(); try { fraudChecker.runChecks(); PaymentGatewayResponse pgResponse = gatewayHandler.handlePayment(); response.setIdentifier(pgResponse.getFingerprint()); } catch (FraudDetectedException e) { // exception handling  } return response; } } class CreditCard extends BaseBankCard { // constructor omitted  @Override void validate() throws PaymentInstrumentInvalidException { basicValidator.validate(); // additional validations for credit cards  } } class DebitCard extends BaseBankCard { // constructor omitted } Though CreditCard and DebitCard extend a class, it\u0026rsquo;s not the same as before. Other areas of our codebase now depend only on the IPaymentInstrument interface, not on BaseBankCard. Below snippet shows CreditCard object creation and processing:\nIPaymentGatewayHandler gatewayHandler = new PaymentGatewayHandler(name, cardNum, code, expiryDate); IPaymentInstrumentValidator validator = new BankCardBasicValidator(name, cardNum, code, expiryDate); IFraudChecker fraudChecker = new ThirdPartyFraudChecker(name, cardNum, code, expiryDate); CreditCard card = new CreditCard( name, cardNum, code, expiryDate, validator, fraudChecker, gatewayHandler); paymentProcessor.process(order, card); Our design is now flexible enough to let us add a RewardsCard - no force-fitting and no conditional checks. 
We just add the new class and it works as expected.\nclass RewardsCard implements IPaymentInstrument { String name; String cardNumber; @Override void validate() throws PaymentInstrumentInvalidException { // Rewards card related validations  } @Override PaymentResponse collectPayment() throws PaymentFailedException { PaymentResponse response = new PaymentResponse(); // Steps related to rewards card payment like getting current  // rewards balance, updating balance etc.  response.setIdentifier(cardNumber); return response; } } And here\u0026rsquo;s client code using the new card:\nRewardsCard card = new RewardsCard(name, cardNum); paymentProcessor.process(order, card); Advantages of the New Design The new design not only fixes the LSP violation but also gives us a loosely-coupled, flexible set of classes to handle changing requirements. For example, adding new payment instruments like Bitcoin and Cash on Delivery is easy - we just add new classes that implement IPaymentInstrument.\nBusiness needs debit cards to be processed by a different payment gateway? No problem - we add a new class that implements IPaymentGatewayHandler and inject it into DebitCard. If DebitCard\u0026rsquo;s requirements begin to diverge a lot from CreditCard\u0026rsquo;s, we can have it implement IPaymentInstrument directly instead of extending BaseBankCard - no other class is impacted.\nIf we need an in-house fraud check for RewardsCard, we add an InhouseFraudChecker that implements IFraudChecker, inject it into RewardsCard and only change RewardsCard.collectPayment().\nHow to Identify LSP Violations? 
Some good indicators to identify LSP violations are:\n Conditional logic (using the instanceof operator or object.getClass().getName() to identify the actual subclass) in client code Empty, do-nothing implementations of one or more methods in subclasses Throwing an UnsupportedOperationException or some other unexpected exception from a subclass method  For point 3 above, the exception needs to be unexpected from the superclass\u0026rsquo;s contract perspective. So, if our superclass method\u0026rsquo;s signature explicitly specified that subclasses or implementations could throw an UnsupportedOperationException, then we would not consider it an LSP violation.\nConsider the java.util.List\u0026lt;E\u0026gt; interface\u0026rsquo;s add(E e) method. Since java.util.Arrays.asList(T ...) returns a fixed-size list, client code that adds an element to a List would break if it were passed a List returned by Arrays.asList.\nIs this an LSP violation? No - the List.add(E e) method\u0026rsquo;s contract says implementations may throw an UnsupportedOperationException. Clients are expected to handle this when using the method.\nConclusion The LSP is a very useful idea to keep in mind both when developing a new application and when enhancing or modifying an existing one.\nWhen designing the class hierarchy for a new application, the LSP helps make sure that we are not prematurely generalizing concepts in our problem domain.\nWhen enhancing an existing application by adding or changing a subclass, being mindful of the LSP helps ensure that our changes are in line with the superclass\u0026rsquo;s contract and that the client code\u0026rsquo;s expectations continue to be met.\nYou can play around with a complete application illustrating these ideas using the code on GitHub.\nReferences A keynote address in which Liskov first formulated the principle: Liskov, B. (May 1988). \u0026ldquo;Keynote address - data abstraction and hierarchy\u0026rdquo;. ACM SIGPLAN Notices.
23 (5): 17–34.\n","date":"July 6, 2020","image":"https://reflectoring.io/images/stock/0066-blueprint-1200x628-branded_hu48e3d47c178704853b652b021bfc1958_111939_650x0_resize_q90_box.jpg","permalink":"/lsp-explained/","title":"The Liskov Substitution Principle Explained"},{"categories":["AWS"],"contents":"AWS (Amazon Web Services) is a cloud computing platform with a wide portfolio of services like compute, storage, networking, data, security, and many more.\nThis article provides an overview of the most important AWS services, which are often hidden behind an acronym. I hope it serves as a valuable aid to begin the exploration of AWS. I have selected the AWS services by considering the components required to build and run a customer-facing n-tier application.\nWhile reading this article, you\u0026rsquo;ll come across the IAAS (Infrastructure As A Service) and PAAS (Platform As A Service) categories of services. I have also included services under the serverless category, and services for running containers. I have not included services under specialized subjects like machine learning, IoT, security, and Big Data.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n Choose a Region and Availability Zone Whenever we think of cloud, one of the first decisions we make is where to run our applications. Where are our servers located? We may like to host our applications closer to the location of our customers.\nAWS data centers are located all across the globe. AWS Regions and AZs (Availability Zones) are essential entities of this global infrastructure.\nAn AWS region is composed of multiple AZs. An AZ is a logical data center within a region. 
Each AZ is mapped to physical data centers located in that region, with redundant power, networking, and connectivity.\nAWS resources are bound either to a region, to an AZ, or are global.\nRun Virtual Machines with EC2 Next, we create our VM (Virtual Machine) to run our applications. EC2 (Elastic Compute Cloud) is the service used to create and run VMs. We create the VM as an EC2 instance using a pre-built machine image from AWS (AMI - Amazon Machine Image) or a custom machine image.\nA machine image is similar to a pre-built template containing the operating system with some pre-configured applications installed over it. For example, we can use a machine image for Windows 2016 server with SQL Server or an RHEL Linux with Docker for creating our EC2 instance.\nWe also select an instance family to assign the number of CPUs and RAM for our VM. These range from nano instances, with one virtual CPU, to instance families of high-end configurations with a lot of processing power and memory.\nWe can enable autoscaling to create additional instances when we exceed a certain threshold of capacity utilization. Autoscaling will also take care of terminating instances when our servers are underutilized.\nEach EC2 instance is backed by storage in the form of EBS (Elastic Block Storage) volumes. An EBS volume is block-level storage used to store data that we want to persist beyond the lifetime of our EC2 instances.\nEBS volumes are attached and mounted as disks to our VM. EBS volumes are automatically replicated in the same availability zone to achieve redundancy and high availability.\nDistribute Traffic with ELB ELB (Elastic Load Balancing) is the load balancing service of AWS. ELB load balancers can distribute incoming traffic at the application layer (layer 7) or the transport layer (layer 4) across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. 
A load balancer is a region-level resource.\nWe always have the option of deploying our own, custom load balancer on an EC2 instance. But ELB comes as a fully managed service. It scales automatically to handle the load of our application\u0026rsquo;s traffic and distributes load to our targets in a single AZ, or across multiple AZs, thereby making our applications highly available and fault-tolerant.\nCreate a Network with VPC Our EC2 instances need to communicate with each other to be useful. We will also need to protect these instances. We do this by putting them into a secure private network called VPC (Virtual Private Cloud).\nA VPC is our logically isolated network with private and/or public subnets, route tables and network gateways within an AWS region. A VPC contains a certain range of IP addresses that we can bind to our resources.\nA VPC is divided into multiple subnets, each of them associated with a subset of the IP addresses available to the parent VPC. Our EC2 instances are launched within a subnet and are assigned IP addresses from the subnet\u0026rsquo;s pool of IP addresses.\nOur instances get a different IP address every time we launch an instance. If we need fixed IPs, we reserve them using EIP (Elastic IP) addresses.\nProtect Instances with Security Groups and Access Control Lists We control traffic to an EC2 instance using a Security Group (sometimes abbreviated SG). With a Security Group, we set up rules for incoming traffic (ingress) and outgoing traffic (egress).\nAdditionally, we control traffic for an entire subnet using a network ACL (Access Control List).\nConnect To On-Premises Systems with VPN or DX Enterprises operate hybrid cloud environments to connect their on-premises resources to resources in the VPC. AWS provides two categories of services - VPN (Virtual Private Network) and DX (AWS Direct Connect).\nWith the AWS VPN service, we can create IPsec Site-to-Site VPN tunnels from a VPC to an on-premises network over the public internet.
This is a good option when we have to adhere to corporate rules that require certain systems to be available from the cloud but to run in our own data center.\nThe other way around, with DX, we can establish dedicated ultra-low latency connections from on-premises to AWS. We also reduce network charges and experience a more consistent network performance with higher bandwidth.\nControl Access with IAM IAM (Identity and Access Management) is an all-encompassing service for authentication and authorization in AWS, coming into action from the time we create our AWS account.\nWe create users, groups, and roles with IAM and grant or deny access to resources declaratively with policies. We then provide our identity in the form of a username and password or an access token and secret, to access the AWS resources.\nIAM also provides SSO (Single Sign-on) capabilities by integrating with SAML (Security Assertion Markup Language) and OpenID based identity providers residing within or outside of AWS.\nAn SCP (Service Control Policy) is used to draw permission boundaries across one or more AWS accounts.\nAn STS (Security Token Service) is used to generate a temporary access token to invoke an AWS service, either using the AWS SDK (Software Development Kit) or from the AWS CLI (Command Line Interface).\nStore Objects on S3 S3 (Simple Storage Service) is one of the most widely used services in the AWS portfolio. It is the foundation on which many AWS services are built. It embodies many of the features inherent to the Cloud.\nS3 provides unlimited object storage, scales to any extent, possesses a layered security model, and comes with a simple API. We can store all kinds of objects in S3 like files, images, videos, EBS snapshots, or machine images without worrying about file size or data integrity and durability.\nWe store an object in a container called bucket, with a key and some metadata as object attributes. We apply our access controls on the bucket or the S3 object. 
S3 offers a range of storage classes to store our objects in relevant storage tiers based on our access requirements.\nAdditionally, we can use lifecycle policies to define rules, for example, a lifecycle policy on a bucket for deleting or moving an object after a certain time.\nS3 is widely used in various use cases, like web hosting, data lakes in big data, archiving, and secure log storage. S3 also plays a big part in the migration of different workloads to the cloud.\nStore Data in Databases We could install our own, custom database on an EC2 instance but this will entail the rigmarole of database administration tasks like applying security patches and running scheduled backups. AWS provides managed services for different kinds of databases.\nStore Relational Data with RDS RDS (Relational Database Service) is the managed database offering for relational databases where we can choose between Oracle, SQL Server, MySQL, PostgreSQL, MariaDB, and Aurora.\nWe can select the processing and memory power as well as the VPC the database shall be placed into.\nStore NoSQL with DDB DDB (DynamoDB) is a proprietary NoSQL database of AWS. It is fully managed, fast, and efficient at any scale with single-digit millisecond latency.\nDynamoDB is a key-value and document store. We cannot do things like joins or aggregations with DynamoDB, so we should know the access patterns before deciding to go with Dynamo.\nBeing fully managed, AWS will manage the instances in the DynamoDB fleet to ensure its availability and performance. DynamoDB is also the preferred database to use with Lambda functions.\nExchange Messages with SQS and SNS We apply asynchronous messaging patterns to make our applications resilient and highly available. AWS takes away the complexity of managing our middleware by providing the messaging infrastructure in the form of two managed services - SQS and SNS.\nSQS (Simple Queue Service) is the messaging middleware to send, store, and receive messages.
SQS comes in two flavors:\n Standard queue guarantees at-least-once delivery with best-effort ordering. FIFO (First In First Out) queue preserves the order of messages and guarantees exactly-once processing.  SNS (Simple Notification Service) is a pub-sub messaging middleware. The sender publishes a message to a topic that is subscribed to by one or more consumers.\nWe manage access to queues and topics using resource policies.\nCode Infrastructure With CloudFormation Managing infrastructure as code involves creating infrastructure resources like servers, databases, message queues, and network firewalls on the fly, and disposing of them when no longer required. Creating these AWS resources manually would be tedious and error-prone.\nInstead, we model all the resources in a single CloudFormation (sometimes abbreviated CFN) template and manage them in a single unit called a stack. To make changes, we first generate a change set to see the list of proposed changes before applying them.\nCloudFormation allows us to define our infrastructure \u0026lsquo;stack\u0026rsquo; in YAML or JSON files, each of which specifies a group of resources (think EC2, ELB, Security Groups, RDS instances, etc.).\nWe create and update our infrastructure by interacting with these stacks from the AWS console or CLI, or by integrating them into our CI/CD pipelines. CloudFormation is a widely used service and is integral to all automation and provisioning activities.\nRun Containers On ECS \u0026amp; EKS Our container infrastructure requires a registry like Docker Hub to publish our images and an orchestration system for running the desired number of container instances across multiple host machines like EC2 instances.
AWS provides ECR for the image registry and ECS for container orchestration.\nECR (Elastic Container Registry) provides a private Docker registry for publishing our container images with access controlled by IAM policies.\nECS (Elastic Container Service) is the container orchestration service for running stateless and stateful Docker containers using tasks and services. If you\u0026rsquo;re interested in deploying containers to AWS, have a look at our AWS Journey series.\nEKS (Elastic Kubernetes Service) is Amazon\u0026rsquo;s fully-managed Kubernetes offering. EKS provides a managed control plane and managed worker nodes.\nBoth ECS and EKS come with a Fargate option that frees us from provisioning EC2 instances ourselves. Given a Docker image, AWS Fargate takes care of automatically provisioning and managing the underlying servers.\nServerless Compute with Lambda and SAM With AWS Lambda, we can eliminate the work of provisioning servers of the right capacity.\nLambda is the AWS service for running functions in a serverless model. We provide our function written in one of the supported languages with enough permissions to execute.\nThe server for executing the function is provisioned at the time of invocation. The infrastructure is dynamically scaled, depending on the number of concurrent requests.\nLambda is commonly invoked by events from other AWS services like API Gateway, SQS, SNS, or CloudWatch.\nSAM (Serverless Application Model) is the framework for developing Lambda applications with useful tools like a CLI, a local test environment based on Docker, and integration with developer tools.\nDeliver Content with CloudFront AWS CloudFront is a CDN (Content Delivery Network) service used to serve both static and dynamic content using a global network of AWS POPs (Points of Presence). The content is served to the end-users from the nearest AWS POP to minimize latency.
Some of the common usages are:\n Deliver image and video files stored in an S3 bucket Deliver single-page applications composed of JavaScript, image, and HTML assets in minified or exploded form Deliver an entire web portal, accelerating both the download and upload functionalities in the portal  Other sources of content are web applications running on EC2, or an ELB load balancer routing requests to a fleet of EC2 instances running web applications.\nRoute to Your IP with Route 53 AWS Route 53 is a highly available and scalable DNS (Domain Name System) service. It provides ways to route incoming end-user requests to resources within AWS, like EC2 instances and ELB load balancers, and also to resources outside of AWS, using routing rules based on network latency, geo-proximity, and weighted round-robin.\nGovernance, Compliance \u0026amp; Audit with CloudTrail Security in the cloud works on the principle of shared responsibility (something you will find repeated ad nauseam across the AWS docs). AWS is responsible for the security of the cloud and we are responsible for security in the cloud.\nCloudTrail is switched on by default in an AWS account, but we need to build controls to ensure that nobody switches it off or modifies the generated trails, and that trails are sent to an S3 bucket accessible only to our security teams.\nCloudTrail helps to gain complete visibility into all user activity in the form of events telling you who did what and when. It provides an event history of all the activities done in your AWS account.\nObservability with CloudWatch With the advent of distributed applications, observability has emerged as a key capability to monitor the health of systems and identify the root cause of problems like outages or slowness. CloudWatch was understandably among the first AWS services.\nAWS CloudWatch comprises services for logging, monitoring, and event handling.
We send logs from various AWS resources like EC2 and even our applications to CloudWatch. Resources also emit a set of metrics over which we create alarms to enable us to take remedial actions. CloudWatch Events (renamed to EventBridge) allows us to configure remedial actions in response to any events of our interest.\nConclusion I have put everything together in the mind map below.\nAWS is a behemoth. I tried to give you a peek by covering the main capabilities of the commonly used services. We also saw the elastic nature of services like ELB, S3, and EC2, which can scale based on demand. You can always refer to the AWS documentation to learn more about these services.\nCheck Out the Book!  This article gives only a first impression of what you can do with AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"June 30, 2020","image":"https://reflectoring.io/images/stock/0072-aws-1200x628-branded_hu9a76250b110a464a69a2b6c2cf42c220_109177_650x0_resize_q90_box.jpg","permalink":"/what-is-aws/","title":"What is AWS? A High-Level Overview of the Most Important AWS Services"},{"categories":["Spring Boot"],"contents":"We use a cache to protect the database or to avoid cost-intensive calculations. Spring provides an abstraction layer for implementing a cache. This article shows how to use this abstraction support with Hazelcast as a cache provider.\n Example Code This article is accompanied by a working code example on GitHub. Why Do We Need a Cache Abstraction? If we want to build a Spring Boot application and use a cache, usually we want to execute some typical operations like\n putting data into the cache, reading data from the cache, updating data in the cache, deleting data from the cache.  
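These four operations can be captured in a small abstraction of our own. The sketch below is hypothetical (the interface and class names are made up for illustration); Spring's real abstraction plays the same role through its Cache and CacheManager interfaces, with an in-memory map as one possible provider behind it:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical minimal cache abstraction covering the four typical operations.
interface SimpleCache<K, V> {
    V get(K key);
    void put(K key, V value);  // covers both inserts and updates
    void evict(K key);
}

// One possible provider: a plain in-memory map, conceptually similar to
// the "Simple" ConcurrentHashMap-based provider Spring Boot falls back to.
class MapCache<K, V> implements SimpleCache<K, V> {
    private final ConcurrentHashMap<K, V> store = new ConcurrentHashMap<>();
    @Override public V get(K key) { return store.get(key); }
    @Override public void put(K key, V value) { store.put(key, value); }
    @Override public void evict(K key) { store.remove(key); }
}

public class CacheSketch {
    public static void main(String[] args) {
        SimpleCache<String, String> cache = new MapCache<>();
        cache.put("car-1", "BMW");              // put data into the cache
        System.out.println(cache.get("car-1")); // read data from the cache
        cache.put("car-1", "Audi");             // update data in the cache
        cache.evict("car-1");                   // delete data from the cache
    }
}
```

The point of the interface is that business code depends only on `SimpleCache`, so the map-backed provider could be swapped for any other implementation without touching the callers - the same idea the Spring abstraction applies to providers like Hazelcast.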
We have a lot of technologies available to set up a cache in our application. Each of these technologies, like Hazelcast or Redis, for example, has its own API. If we want to use it in our application, we would have a hard dependency on one of those cache providers.\nThe Spring cache abstraction gives us the possibility to use an abstract API to access the cache. Our business code can use this abstraction level only, without calling the Cache provider\u0026rsquo;s code directly. Spring provides an easy-to-use annotation-based method to implement caching.\nBehind the abstraction, we can choose a dedicated cache provider, but the business logic doesn\u0026rsquo;t need to know anything about the provider.\nThe Spring abstraction layer lets us use a cache independently of the cache provider.\nCache Providers Spring Boot supports several cache providers. If Spring Boot finds a cache provider on the classpath, it tries to find a default configuration for this provider. If it doesn\u0026rsquo;t find a provider, it configures the Simple provider, which is just a ConcurrentHashMap.\nEnabling Spring\u0026rsquo;s Cache Abstraction with @EnableCaching Let\u0026rsquo;s have a look at how to enable caching in a Spring Boot application.\nFirst, we have to add a dependency to the cache starter (Gradle notation):\nimplementation \u0026#39;org.springframework.boot:spring-boot-starter-cache\u0026#39; This starter provides all classes we need to support the cache. These are mainly the interfaces Cache and CacheManager that should be implemented by the provider, and the annotations for the methods and classes that we can use to mark methods as cacheable.\nSecond, we need to enable the cache:\n@Configuration @EnableCaching class EmbeddedCacheConfig { // Other methods omitted.  } The annotation @EnableCaching will start the search for a CacheManger bean to configure the cache provider. After enabling the cache we are ready to use it. 
But we didn\u0026rsquo;t define any cache provider, so as mentioned above a Simple in-memory provider would be used. This simple cache might be good for testing, but we want to use a \u0026ldquo;real\u0026rdquo; cache in production.\nWe need a provider that supports several data structures, a distributed cache, a time-to-live configuration, and so on. Let\u0026rsquo;s use Hazelcast as a cache provider. We could use Hazelcast as a Cache provider directly, but we want to configure it so that we can use the Spring abstraction instead.\nTo use the cache we have to do two things:\n configure the cache provider, and put some annotations on the methods and classes, that should read from and modify the cache.  Configuring Hazelcast as a Cache Provider To use the cache, we don\u0026rsquo;t need to know the cache provider. To configure the cache, however, we need to select a specific provider and configure it accordingly.\nTo add Hazelcast as a cache provider we first have to add Hazelcast libraries:\ncompile(\u0026#34;com.hazelcast:hazelcast:4.0.1\u0026#34;) compile(\u0026#34;com.hazelcast:hazelcast-spring:4.0.1\u0026#34;) The first dependency is the Hazelcast library, and the second one is the implementation of the Spring cache abstraction - amongst others, the implementation of CacheManager and Cache.\nNow Spring Boot will find Hazelcast on the classpath and will search for a Hazelcast configuration.\nHazelcast supports two different cache topologies. We can choose which topology we want to configure.\nConfiguring an Embedded Cache With the embedded topology, every instance of the Spring Boot application starts a member of the cache cluster.\nSince we added Hazelcast to the classpath, Spring Boot will search for the cache configuration of Hazelcast. Spring Boot will set up the configuration for embedded topology if hazelcast.xml or hazelcast.yaml is found on the classpath. 
In these files, we can define cache names, data structures, and other parameters of the cache.\nAnother option is to configure the cache programmatically via Spring\u0026rsquo;s Java config:\nimport com.hazelcast.config.Config; @Configuration @EnableCaching class EmbeddedCacheConfig { @Bean Config config() { Config config = new Config(); MapConfig mapConfig = new MapConfig(); mapConfig.setTimeToLiveSeconds(300); config.getMapConfigs().put(\u0026#34;cars\u0026#34;, mapConfig); return config; } } We add a bean of type Config to the Spring context. This is enough to configure a Hazelcast cache. The Spring cache abstraction will find this configuration and set up a Hazelcast cache with the embedded topology.\nConfiguring a Client-Server Cache In Hazelcast\u0026rsquo;s Client-Server topology the application is a client of a cache cluster.\nSpring\u0026rsquo;s cache abstraction will set up the client-server configuration if hazelcast-client.xml or hazelcast-client.yaml is found on the classpath. Similar to the embedded cache we can also configure the client-server topology programmatically:\n@Configuration @EnableCaching class ClientCacheConfig { @Bean ClientConfig config() { ClientConfig clientConfig = new ClientConfig(); clientConfig.addNearCacheConfig(nearCacheConfig()); return clientConfig; } private NearCacheConfig nearCacheConfig() { NearCacheConfig nearCacheConfig = new NearCacheConfig(); nearCacheConfig.setName(\u0026#34;cars\u0026#34;); nearCacheConfig.setTimeToLiveSeconds(300); return nearCacheConfig; } } We added the ClientConfig bean to the context. Spring will find this bean and configure the CacheManager to use Hazelcast as a client of a Hazelcast cache cluster automatically. Note that it makes sense to use near-cache in the client-server topology.\nUsing the Cache Now we can use the Spring caching annotations to enable the cache on specific methods. 
For demo purposes, we\u0026rsquo;re looking at a Spring Boot application with an in-memory database and JPA for accessing the database.\nWe assume that the operations for accessing the database are slow because of heavy database use. Our goal is to avoid unnecessary operations by using a cache.\nPutting Data into the Cache with @Cacheable We create a CarService to manage car data. This service has a method for reading data:\n@Service class CarService { public Car saveCar(Car car) { return carRepository.save(car); } @Cacheable(value = \u0026#34;cars\u0026#34;) public Car get(UUID uuid) { return carRepository.findById(uuid) .orElseThrow(() -\u0026gt; new IllegalStateException(\u0026#34;car not found\u0026#34;)); } // other methods omitted. } The method saveCar() is supposed to be used only for inserting new cars. Normally we don\u0026rsquo;t need any cache behavior in this case. The car is just stored in the database.\nThe method get() is annotated with @Cacheable. This annotation enables the powerful Spring cache support. The data in the cache is stored using a key-value pattern. Spring Cache uses the parameters of the method as the key and the return value as the value in the cache.\nWhen the method is called the first time, Spring will check if a value with the given key is in the cache. This will not be the case, so the method itself will be executed. This means we have to connect to the database and read data from it. The @Cacheable annotation takes care of putting the result into the cache.\nAfter the first call, the cached value is in the cache and stays there according to the cache configuration.\nWhen the method is called the second time, and the cache value has not been evicted yet, Spring will search for the value by the key. This time it will find one.\nThe value is found in the cache, and the method will not be executed.\nUpdating the Cache with @CachePut The data in the cache is just a copy of the data in the primary storage. 
If this primary storage is changed, the data in the cache may become stale. We can solve this by using the @CachePut annotation:\n@Service class CarService { @CachePut(value = \u0026#34;cars\u0026#34;, key = \u0026#34;#car.id\u0026#34;) public Car update(Car car) { if (carRepository.existsById(car.getId())) { return carRepository.save(car); } throw new IllegalArgumentException(\u0026#34;A car must have an id\u0026#34;); } // other methods omitted. } The body of the update() method will always be executed. Spring will put the result of the method into the cache. In this case, we also defined the key that should be used to update the data in the cache.\nEvicting Data from the Cache with @CacheEvict If we delete data from our primary storage, we would have stale data in the cache. We can annotate the delete() method to update the cache:\n@Service class CarService { @CacheEvict(value = \u0026#34;cars\u0026#34;, key = \u0026#34;#uuid\u0026#34;) public void delete(UUID uuid) { carRepository.deleteById(uuid); } // Other methods omitted. } The @CacheEvict annotation deletes the data from the cache. We can define the key that is used to identify the cache item that should be deleted. We can delete all entries from the cache if we set the attribute allEntries to true.\nCustomizing Key Generation Spring Cache uses SimpleKeyGenerator to calculate the key to be used for retrieving or updating an item in the cache from the method parameters. It\u0026rsquo;s also possible to define a custom key generation by specifying a SpEL expression in the key attribute of the @Cacheable annotation.\nIf that is not expressive enough for our use case, we can use a different key generator. 
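To illustrate what a key generator does — deriving a single cache key from the method parameters — here is a minimal hand-rolled sketch in plain Java. This is illustrative only, not Spring's actual SimpleKeyGenerator; the class name ParamsKey is our own:

```java
import java.util.Arrays;

// Illustrative sketch: a composite cache key derived from method parameters,
// similar in spirit to what a key generator computes. Two keys built from
// equal parameter lists compare equal and hash equally, so repeated calls
// with the same arguments map to the same cache entry.
final class ParamsKey {
    private final Object[] params;

    ParamsKey(Object... params) {
        this.params = params.clone();
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof ParamsKey
                && Arrays.deepEquals(params, ((ParamsKey) o).params);
    }

    @Override
    public int hashCode() {
        return Arrays.deepHashCode(params);
    }
}
```

Correct equals() and hashCode() semantics are exactly what @Cacheable relies on when it looks up a previous result for the same arguments.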
For this, we implement the interface KeyGenerator and declare an instance of it as a Spring bean:\n@Configuration @EnableCaching class EmbeddedCacheConfig { @Bean public KeyGenerator carKeyGenerator() { return new CarKeyGenerator(); } // other methods omitted } Then, we can reference the key generator in the keyGenerator attribute of the @Cacheable annotation by bean name:\n@Service class CarService { @Cacheable(value = \u0026#34;cars\u0026#34;, keyGenerator = \u0026#34;carKeyGenerator\u0026#34;) public Car get(UUID uuid) { return carRepository.findById(uuid) .orElseThrow(() -\u0026gt; new IllegalStateException(\u0026#34;car not found\u0026#34;)); } // other methods omitted. } Conclusion Spring\u0026rsquo;s cache abstraction provides a powerful mechanism to keep cache usage abstract and independent of a cache provider.\nSpring Cache supports a few well-known cache providers, which should be configured in a provider-specific way.\nWith Spring\u0026rsquo;s cache abstraction we can keep our business code and the cache implementation separate.\nYou can play around with a complete Spring Boot application using the Cache abstraction on GitHub.\n","date":"June 27, 2020","image":"https://reflectoring.io/images/stock/0071-disk-1200x628-branded_hu2106704273edaf8554081f1ec02d7286_111877_650x0_resize_q90_box.jpg","permalink":"/spring-boot-cache/","title":"Implementing a Cache with Spring Boot"},{"categories":["Spring Boot"],"contents":"In some applications, we need to protect the database or avoid cost-intensive calculations. We can use a cache for this goal. This article shows how to use Hazelcast as a cache with Spring in a distributed and scalable application.\n Example Code This article is accompanied by a working code example on GitHub. Caching 101 Normally, an application reads data from storage, for example, from a database. If we want to increase the performance of reading or writing data, we can improve the hardware and make it faster. 
But this costs money.\nIf the data in the external storage doesn\u0026rsquo;t change very fast, we can create copies of this data in smaller but much faster storage. These copies are stored temporarily. Usually, we use RAM for such fast storage.\nThis is what we call a cache.\nIf the application wants to access data, it requests the data from the cache. We know that the data in the cache is a copy, and we cannot use it for a long time because the data in the primary storage can change. In this case, we would get a data inconsistency.\nThat\u0026rsquo;s why we need to define the validity time of the data in the cache. Also, we don\u0026rsquo;t want data in the cache that is not frequently requested. This data would only allocate resources of the cache but wouldn\u0026rsquo;t be used. In this case, we configure how long an entry lives in the cache if it is not requested.\nThis is what we call time-to-live (TTL).\nIn a big enterprise system, there can be a cluster of caches. We have to replicate and synchronize the data in this cluster between the caches.\nThis is what we call the write-through concept.\nHazelcast as a Distributed Cache Let\u0026rsquo;s say we have a Spring Boot application, and we want to use a cache in the application. But we also want to be able to scale this application. This means that when we start, for example, three instances of the application, they have to share the cache to keep the data consistent.\nWe solve this problem by using a distributed cache.\nHazelcast is a distributed in-memory object store and provides many features including TTL, write-through, and scalability. We can build a Hazelcast cluster by starting several Hazelcast nodes in a network. Each node is called a member.\nThere are two types of topologies we can implement with Hazelcast:\n embedded cache topology, and client-server topology.  
Let\u0026rsquo;s have a look at how to implement each topology with Spring.\nEmbedded Cache Topology This topology means that every instance of the application has an integrated member:\nIn this case, the application and the cache data are running on the same node. When a new cache entry is written in the cache, Hazelcast takes care of distributing it to the other members. When data is read from the cache, it can be found on the same node where the application is running.\nEmbedded Cache with Spring Let\u0026rsquo;s have a look at how to build a cluster with an embedded Hazelcast cache topology and a Spring application. Hazelcast supports many distributed data structures for caching. We will use a Map because it provides the well-known get and put operations.\nFirst, we have to add the Hazelcast dependency. Hazelcast is just a Java library, so that can be done very easily (Gradle notation):\ncompile group: \u0026#39;com.hazelcast\u0026#39;, name: \u0026#39;hazelcast\u0026#39;, version: \u0026#39;4.0.1\u0026#39; Now let\u0026rsquo;s create a cache client for the application.\n@Component class CacheClient { public static final String CARS = \u0026#34;cars\u0026#34;; private final HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance(); public Car put(String number, Car car){ IMap\u0026lt;String, Car\u0026gt; map = hazelcastInstance.getMap(CARS); return map.putIfAbsent(number, car); } public Car get(String key){ IMap\u0026lt;String, Car\u0026gt; map = hazelcastInstance.getMap(CARS); return map.get(key); } // other methods omitted  } That\u0026rsquo;s it. Now the application has a distributed cache. The most important part of this code is the creation of a cluster member. It happens by calling the method Hazelcast.newHazelcastInstance(). The method getMap() creates a Map in the cache or returns an existing one. 
The only thing we have to do is set the name of the Map.\nWhen we want to scale our application, every new instance will create a new member and this member will join the cluster automatically.\nHazelcast provides several mechanisms for discovering the members. If we don\u0026rsquo;t configure any discovery mechanism, the default one is used, in which Hazelcast tries to find other members in the same network using multicast.\nThis approach has two advantages:\n it\u0026rsquo;s very easy to set up the cluster, and data access is very fast.  We don\u0026rsquo;t need to set up a separate cache cluster. This means we can create a cluster very quickly by adding a couple of lines of code.\nIf we want to read the data from the cluster, the data access is low-latency, because we don\u0026rsquo;t need to send a request to the cache cluster over the network.\nBut it brings drawbacks too. Imagine we have a system that requires one hundred instances of our application. In this cluster topology, it means we would have one hundred cluster members even though we don\u0026rsquo;t need them. This big number of cache members would consume a lot of memory.\nAlso, replication and synchronization would be pretty expensive. Whenever an entry is added or updated in the cache, this entry would be synchronized with other members of the cluster, which causes a lot of network communication.\nAlso, we have to note that Hazelcast is a Java library. That means a member can be embedded in a Java application only.\nWe should use the embedded cache topology when we have to execute high-performance computing with the data from the cache.\nCache Configuration We can configure the cache by passing a Config object into the factory method. 
Let\u0026rsquo;s have a look at a couple of the configuration parameters:\n@Component class CacheClient { public static final String CARS = \u0026#34;cars\u0026#34;; private final HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance(createConfig()); public Config createConfig() { Config config = new Config(); config.addMapConfig(mapConfig()); return config; } private MapConfig mapConfig() { MapConfig mapConfig = new MapConfig(CARS); mapConfig.setTimeToLiveSeconds(360); mapConfig.setMaxIdleSeconds(20); return mapConfig; } // other methods omitted } We can configure every Map or other data structure in the cluster separately. In this case, we configure the Map of cars.\nWith setTimeToLiveSeconds(360) we define how long an entry stays in the cache. After 360 seconds, the entry will be evicted. If the entry is updated, the time-to-live countdown starts over.\nThe method setMaxIdleSeconds(20) defines how long the entry stays in the cache without being touched. An entry is \u0026ldquo;touched\u0026rdquo; with each read operation. If an entry is not touched for 20 seconds, it will be evicted.\nClient-Server Topology This topology means that we set up a separate cache cluster, and our application is a client of this cluster.\nThe members form a separate cluster, and the clients access the cluster from outside.\nTo build a cluster we could create a Java application that sets up a Hazelcast member, but for this example, we\u0026rsquo;ll use a prepared Hazelcast server.\nAlternatively, we can start a Docker container as a cluster member. Every server or Docker container will start a new member of the cluster with the default configuration.\nNow we need to create a client to access the cache cluster. Hazelcast uses TCP socket communication. That\u0026rsquo;s why it\u0026rsquo;s possible to create a client not only with Java. Hazelcast provides a list of clients written in other languages. 
To keep it simple, let\u0026rsquo;s look at how to create a client with Spring.\nFirst, we\u0026rsquo;ll add the dependency on the Hazelcast client:\ncompile group: \u0026#39;com.hazelcast\u0026#39;, name: \u0026#39;hazelcast\u0026#39;, version: \u0026#39;4.0.1\u0026#39; Next, we create a Hazelcast client in a Spring application, similar to what we did for the embedded cache topology:\n@Component class CacheClient { private static final String CARS = \u0026#34;cars\u0026#34;; private HazelcastInstance client = HazelcastClient.newHazelcastClient(); public Car put(String key, Car car){ IMap\u0026lt;String, Car\u0026gt; map = client.getMap(CARS); return map.putIfAbsent(key, car); } public Car get(String key){ IMap\u0026lt;String, Car\u0026gt; map = client.getMap(CARS); return map.get(key); } // other methods omitted  } To create a Hazelcast client we need to call the method HazelcastClient.newHazelcastClient(). Hazelcast will find the cache cluster automatically. After that, we can use the cache by using the Map again. If we put or get data from the Map, the Hazelcast client connects to the cluster to access data.\nNow we can deploy and scale the application and the cache cluster independently. We can have, for example, 50 instances of the application and 5 members of the cache cluster. This is the biggest advantage of this topology.\nIf we have some problems with the cluster, it\u0026rsquo;s easier to identify and fix the issue, since the clients and the cache are separated and not mixed.\nThis approach has drawbacks, too, though.\nFirstly, whenever we write or read the data from the cluster we need network communication. It can take longer than in the approach with the embedded cache. 
This difference is especially significant for read operations.\nSecondly, we have to take care of the version compatibility between the cluster members and the clients.\nWe should use the client-server topology when the deployment of the application is bigger than the cache cluster.\nSince our application now only contains the clients to the cache and not the cache itself, we need to spin up a cache instance in our tests. We can do this very easily by using the Hazelcast Docker image and Testcontainers (see an example on GitHub).\nNear-Cache When we use the client-server topology, we\u0026rsquo;re producing network traffic for requesting data from the cache. This happens in two cases:\n when the client reads data from a cache member, and when a cache member starts the communication with other cache members to synchronize data in the cache.  We can avoid this disadvantage by using a near-cache.\nA near-cache is a local cache that is created on a Hazelcast member or the client. Let\u0026rsquo;s look at how it works when we create a near-cache on a Hazelcast client:\nEvery client creates its own near-cache. When an application requests data from the cache, it first looks for the data in the near-cache. If it doesn\u0026rsquo;t find the data, we call it a cache miss. In this case, the data is requested from the remote cache cluster and added to the near-cache. When the application wants to read this data again, it can find it in the near-cache. 
We call this a cache hit.\nSo, the near-cache is a second-level cache - or a \u0026ldquo;cache of the cache\u0026rdquo;.\nWe can easily configure a near-cache in a Spring application:\n@Component class CacheClient { private static final String CARS = \u0026#34;cars\u0026#34;; private HazelcastInstance client = HazelcastClient.newHazelcastClient(createClientConfig()); private ClientConfig createClientConfig() { ClientConfig clientConfig = new ClientConfig(); clientConfig.addNearCacheConfig(createNearCacheConfig()); return clientConfig; } private NearCacheConfig createNearCacheConfig() { NearCacheConfig nearCacheConfig = new NearCacheConfig(); nearCacheConfig.setName(CARS); nearCacheConfig.setTimeToLiveSeconds(360); nearCacheConfig.setMaxIdleSeconds(60); return nearCacheConfig; } // other methods omitted  } The method createNearCacheConfig() creates the configuration of the near-cache. We add this configuration to the Hazelcast client configuration by calling clientConfig.addNearCacheConfig(). Note that this is the configuration of the near-cache on this client only. Every client has to configure the near-cache itself.\nBy using the near-cache we can reduce network traffic. But it\u0026rsquo;s important to understand that we have to accept possible data inconsistency. Since the near-cache has its own configuration, it will evict the data according to this configuration. If data is updated or evicted in the cache cluster, we can still have stale data in the near-cache. This data will be evicted later according to the eviction configuration and then we\u0026rsquo;ll get a cache miss. Only after the data has been evicted from the near-cache will it be read from the cache cluster again.\nWe should use the near-cache when we read from the cache very often, and when the data in the cache cluster changes only rarely.\nSerialization Java objects are serialized when stored in the cache. 
The Car class from above implements Serializable, so, in this case, Hazelcast will use the standard Java serialization.\nBut the standard Java serialization has drawbacks like high CPU and memory usage.\nWhy Customize Serialization? Imagine we have a scalable system with multiple instances and a cache cluster with a few members. The system is working and cache entries are being stored, read, and evicted from the cache. Now we want to change a Java class whose objects are cached and often used.\nWe need to deploy a new version of the application with this new class and we want to do it without downtime. If we start a rolling update of our application instances, it works fine for the application, but the cache can still have entries of the previous version of the objects.\nHazelcast will not be able to deserialize the old version of the objects and will throw an exception. This means we should create a serializer that supports versioning of cache entries and that is able to serialize and deserialize Java objects of different versions at the same time.\nHazelcast provides us with two options to customize the serialization:\n implement a Hazelcast serialization interface type in the classes that should be serialized, implement a custom serializer and add it to the cache configuration.  Implement the DataSerializable Interface Hazelcast has a few serialization interface types. Let\u0026rsquo;s have a look at the interface DataSerializable. This interface is more CPU and memory efficient than Serializable.\nWe implement this interface in the class Car:\nclass Car implements DataSerializable { private String name; private String number; @Override public void writeData(ObjectDataOutput out) throws IOException { out.writeUTF(name); out.writeUTF(number); } @Override public void readData(ObjectDataInput in) throws IOException { name = in.readUTF(); number = in.readUTF(); } } The methods writeData() and readData() serialize and deserialize the object of the class Car. 
Note that the serialization and the deserialization of the individual fields should be done in the same order.\nThat\u0026rsquo;s it. Hazelcast will now use the serialization methods. But now we have the Hazelcast dependency in the domain object Car.\nWe can use a custom serializer to avoid this dependency.\nConfigure a Custom Serializer First, we have to implement a serializer. Let\u0026rsquo;s take the StreamSerializer:\nclass CarStreamSerializer implements StreamSerializer\u0026lt;Car\u0026gt; { @Override public void write(ObjectDataOutput out, Car car) throws IOException { out.writeUTF(car.getName()); out.writeUTF(car.getNumber()); } @Override public Car read(ObjectDataInput in) throws IOException { return Car.builder() .name(in.readUTF()) .number(in.readUTF()) .build(); } @Override public int getTypeId() { return 1; } } The methods write() and read() serialize and deserialize the object Car, respectively. Again, the fields have to be written and read in the same order. The method getTypeId() returns the identifier of this serializer.\nNext, we have to add this serializer to the configuration:\n@Component class CacheClient { public Config createConfig() { Config config = new Config(); config.addMapConfig(mapConfig()); config.getSerializationConfig() .addSerializerConfig(serializerConfig()); return config; } private SerializerConfig serializerConfig() { return new SerializerConfig() .setImplementation(new CarStreamSerializer()) .setTypeClass(Car.class); } // other methods omitted. } In the method serializerConfig() we let Hazelcast know that it should use CarStreamSerializer for Car objects.\nNow the class Car doesn\u0026rsquo;t need to implement anything and can be just a domain object.\nConclusion The Hazelcast Java library supports setting up the cache cluster with two topologies. The embedded cache topology supports very fast reading for high-performance computing. The client-server topology supports independent scaling of the application and the cache cluster. 
It\u0026rsquo;s very easy to integrate the cluster or write a client for the cluster in a Spring (Boot) application.\nIf you want to play around with a working example, have a look at the code on Github.\n","date":"June 4, 2020","image":"https://reflectoring.io/images/stock/0070-hazelcast-1200x628-branded_hu6fc2c07e67418d9fd9e022779e44c1fe_536645_650x0_resize_q90_box.jpg","permalink":"/spring-boot-hazelcast/","title":"Distributed Cache with Hazelcast and Spring"},{"categories":["Spring Boot"],"contents":"One of the important steps to keep software applications customizable is effective configuration management. Modern frameworks provide out-of-the-box features to externalize configuration parameters.\nFor some configuration parameters it makes sense to fail application startup if they\u0026rsquo;re invalid.\nSpring Boot offers us a neat way of validating configuration parameters. We\u0026rsquo;re going to bind input values to @ConfigurationProperties and use Bean Validation to validate them.\n Example Code This article is accompanied by a working code example on GitHub. Why Do We Need to Validate Configuration Parameters? Doing proper validation of our configuration parameters can be critical sometimes.\nLet\u0026rsquo;s think about a scenario:\nWe wake up early to a frustrated call. Our client complains about not having received their very important report emails from the fancy analysis application we developed. We jump out of bed to debug the issue.\nFinally, we realize the cause. A typo in the e-mail address we defined in the configuration:\napp.properties.report-email-address = manager.analysisapp.com \u0026ldquo;Didn\u0026rsquo;t I validate it? Oh, I see. I had to implement a helper class to read and validate the configuration data and I was so lazy at that moment. Ahh, nevermind, it\u0026rsquo;s fixed right now.\u0026rdquo;\nI lived that scenario, not just once.\nSo, that\u0026rsquo;s the motivation behind this article. 
Let\u0026rsquo;s keep going to see a practical solution to this problem.\nValidating Properties at Startup Binding our configuration parameters to an object is a clean way to maintain them. This way we can benefit from type-safety and find errors earlier.\nSpring Boot has the @ConfigurationProperties annotation to do this binding for the properties defined in application.properties or application.yml files.\nHowever, to validate them we need to follow a couple more steps.\nFirst, let\u0026rsquo;s take a look at our application.properties file:\napp.properties.name = Analysis Application app.properties.send-report-emails = true app.properties.report-type = HTML app.properties.report-interval-in-days = 7 app.properties.report-email-address = manager@analysisapp.com Next, we add the @Validated annotation to our @ConfigurationProperties class along with some Bean Validation annotations on the fields:\n@Validated @ConfigurationProperties(prefix=\u0026#34;app.properties\u0026#34;) class AppProperties { @NotEmpty private String name; private Boolean sendReportEmails; private ReportType reportType; @Min(value = 7) @Max(value = 30) private Integer reportIntervalInDays; @Email private String reportEmailAddress; // getters / setters } To have Spring Boot pick up our AppProperties class, we annotate our @Configuration class with @EnableConfigurationProperties:\n@Configuration @EnableConfigurationProperties(AppProperties.class) class AppConfiguration { // ... 
} When we start the Spring Boot application now with the (invalid) email address from the example above, the application won\u0026rsquo;t start up:\n*************************** APPLICATION FAILED TO START *************************** Description: Binding to target org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under \u0026#39;app.properties\u0026#39; to io.reflectoring.validation.AppProperties failed: Property: app.properties.reportEmailAddress Value: manager.analysisapp.com Reason: must be a well-formed email address Action: Update your application\u0026#39;s configuration Bean Validation API Dependency In order to use the bean validation annotations, we must have the javax.validation.validation-api dependency in our classpath  Additionally, we can also define some default values by initializing the fields of AppProperties:\n@Validated @ConfigurationProperties(prefix=\u0026#34;app.properties\u0026#34;) class AppProperties { // ...  private Boolean sendReportEmails = Boolean.FALSE; private ReportType reportType = ReportType.HTML; // ... 
} Even if we don\u0026rsquo;t define any values for the properties send-report-emails and report-type in application.properties, we will now get the default values Boolean.FALSE and ReportType.HTML, respectively.\nValidate Nested Configuration Objects For some properties, it makes sense to bundle them into a nested object.\nSo, let\u0026rsquo;s create ReportProperties to group the properties related to our very important report:\nclass ReportProperties { private Boolean sendEmails = Boolean.FALSE; private ReportType type = ReportType.HTML; @Min(value = 7) @Max(value = 30) private Integer intervalInDays; @Email private String emailAddress; // getters / setters } Next, we refactor our AppProperties to include our nested object ReportProperties instead of the single properties:\n@Validated @ConfigurationProperties(prefix=\u0026#34;app.properties\u0026#34;) class AppProperties { @NotEmpty private String name; @Valid private ReportProperties report; // getters / setters } We should pay attention to put the @Valid annotation on our nested report field.\nThis tells Spring to validate the properties of the nested objects.\nFinally, we should change the prefix of the report-related properties to report.* in our application.properties file as well:\n... app.properties.report.send-emails = true app.properties.report.type = HTML app.properties.report.interval-in-days = 7 app.properties.report.email-address = manager@analysisapp.com This way, properties with the prefix app.properties will still be bound to the AppProperties class, but properties with the prefix app.properties.report will be bound to the ReportProperties object in the report field.\nValidate Using @Bean Factory Methods We can also trigger validation by binding a properties file to a @Bean factory method with the @ConfigurationProperties annotation:\n@Configuration class AppConfiguration { // ...  
@Bean @Validated @ConfigurationProperties(prefix = \u0026#34;app.third-party.properties\u0026#34;) public ThirdPartyComponentProperties thirdPartyComponentProperties() { return new ThirdPartyComponentProperties(); } // ... } This is particularly useful when we want to bind properties to components defined in third-party libraries or maintained in separate jar files.\nUsing a Custom Spring Validator Even though Bean Validation provides a declarative approach to validate our objects in a reusable way, sometimes we need more to customize our validation logic.\nFor this case, Spring has an independent Validator mechanism to allow dynamic input validation.\nLet\u0026rsquo;s extend our validation to check that the report.email-address has a specific domain like @analysisapp.com:\nclass ReportEmailAddressValidator implements Validator { private static final String EMAIL_DOMAIN = \u0026#34;@analysisapp.com\u0026#34;; public boolean supports(Class clazz) { return ReportProperties.class.isAssignableFrom(clazz); } public void validate(Object target, Errors errors) { ValidationUtils.rejectIfEmptyOrWhitespace(errors, \u0026#34;emailAddress\u0026#34;, \u0026#34;field.required\u0026#34;); ReportProperties reportProperties = (ReportProperties) target; if (!reportProperties.getEmailAddress().endsWith(EMAIL_DOMAIN)) { errors.rejectValue(\u0026#34;emailAddress\u0026#34;, \u0026#34;field.domain.required\u0026#34;, new Object[]{EMAIL_DOMAIN}, \u0026#34;The email address must contain [\u0026#34; + EMAIL_DOMAIN + \u0026#34;] domain.\u0026#34;); } } } Then, we need to register our custom Spring validator with the special method name configurationPropertiesValidator():\n@Configuration class AppConfiguration { // ...  @Bean public static ReportEmailAddressValidator configurationPropertiesValidator() { return new ReportEmailAddressValidator(); } // ... 
} Only if the resulting Spring bean\u0026rsquo;s name is configurationPropertiesValidator will Spring run this validator against all @ConfigurationProperties beans.\nNote that we must define our configurationPropertiesValidator() method as static. This allows Spring to create the bean in a very early stage, before @Configuration classes, to avoid any problems when creating other beans depending on the configuration properties.\nValidator Is Not a Part of Bean Validation Spring's Validator is not related to Bean Validation and works independently after the Bean Validation happens. Its main purpose is to encapsulate the validation logic from any infrastructure or context.  In case we need to define more than one Validator for our configuration properties, we cannot do it by defining bean factory methods, because we can only define one bean named configurationPropertiesValidator.\nInstead of defining a bean factory method, we can move our custom Validator implementation to inside the configuration property classes:\n@Validated @ConfigurationProperties(prefix = \u0026#34;app.properties\u0026#34;) class AppProperties implements Validator { // properties ...  public boolean supports(Class clazz) { return ReportProperties.class.isAssignableFrom(clazz); } public void validate(Object target, Errors errors) { // validation logic  } } By doing so, we can implement a different Validator implementation for each @ConfigurationProperties class.\nConclusion If we want to be safe from input errors, validating our configuration is a good way to go. 
Spring Boot makes it easy with the ways described in this article.\nAll the code examples, and even more to play with, are over on GitHub.\n","date":"May 28, 2020","image":"https://reflectoring.io/images/stock/0051-stop-1200x628-branded_hu8c71944083c02ce8637d75428e8551b3_133770_650x0_resize_q90_box.jpg","permalink":"/validate-spring-boot-configuration-parameters-at-startup/","title":"Validate Spring Boot Configuration Parameters at Startup"},{"categories":["AWS"],"contents":"The AWS journey started with deploying a Spring Boot application in a Docker container manually. In the previous episode, we then automated the deployment with CloudFormation.\nOn the road to a production-grade, continuously deployable system, we now want to extend our CloudFormation templates to automatically provision a PostgreSQL database and connect it to our Spring Boot application.\nThe result will be a reproducible, fully automated deployment of a virtual private network, a PostgreSQL RDS instance, and our Spring Boot application.\nCheck Out the Book!  This article gives only a first impression of what you can do with CloudFormation and RDS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n Code Example This article is accompanied by working code examples of a Spring Boot application and CloudFormation templates on GitHub.\nWhat is RDS? RDS is short for \u0026ldquo;Amazon Relational Database Service\u0026rdquo; and is AWS\u0026rsquo;s managed database service. With RDS, we can create and manage database instances of different types and sizes. 
In this article, we\u0026rsquo;ll be creating a PostgreSQL instance.\nCreating a Spring Boot Application to Test RDS Connectivity We start by creating a simple Spring Boot application that we can later use to check the connectivity to the database so that we know if our setup is working properly.\nI\u0026rsquo;m not going into the details of this application too much since this is not a tutorial about building a Spring Boot application, but it has a single HTTP GET endpoint /hello:\n@RestController class HelloWorldController { private final UserRepository userRepository; HelloWorldController(UserRepository userRepository) { this.userRepository = userRepository; } @GetMapping(\u0026#34;/hello\u0026#34;) String helloWorld(){ Iterable\u0026lt;User\u0026gt; users = userRepository.findAll(); return \u0026#34;Hello AWS! Successfully connected to the database!\u0026#34;; } } We\u0026rsquo;re going to call this endpoint once the application is deployed to AWS to check that it can connect to the database.\nTo configure which database to connect to, we use the Spring Boot default properties in application.yml:\nspring: datasource: url: jdbc:postgresql://localhost:5432/hello username: hello password: hello We\u0026rsquo;re later going to override these properties to tell the application to connect to an AWS PostgreSQL instance.\nFinally, we\u0026rsquo;re packaging the Spring Boot application into a Docker image with this Dockerfile:\nFROM openjdk:8-jdk-alpine ARG JAR_FILE=build/libs/*.jar COPY ${JAR_FILE} app.jar ENTRYPOINT [\u0026#34;java\u0026#34;,\u0026#34;-jar\u0026#34;,\u0026#34;/app.jar\u0026#34;] EXPOSE 8080 I have published this Docker image under the name reflectoring/aws-rds-hello-world to Docker Hub so we can download it from there during deployment.\nThere isn\u0026rsquo;t really much more to this Spring Boot application. 
If you want to see all the details, have a look at the GitHub repository.\nDesigning the CloudFormation Stacks Now that we have a Spring Boot application wrapped in Docker, we can start looking at how to deploy it to AWS and connect it to a database. This picture shows what we\u0026rsquo;re building:\nWe\u0026rsquo;ll create three CloudFormation stacks:\n A network stack that creates a VPC (virtual private cloud) with two public and two private subnets (each pair across two different availability zones for high availability), an internet gateway, and a load balancer that balances traffic between those networks. A database stack that places a single PostgreSQL database instance into the private subnets. A service stack that places a Docker container with our Spring Boot application into each of the public subnets. The application connects to the database.  We have already created most of the network and service stacks in the previous article and will concentrate on additions to those stacks which concern the RDS database.\nWe\u0026rsquo;ll be discussing a single fragment of YAML at a time. You can find the complete CloudFormation templates for the network stack (network.yml), the database stack (database.yml), and the service stack (service.yml) on GitHub.\nSkip to running the stacks if you\u0026rsquo;re not interested in the nitty-gritty details of the stack configuration.\nDesigning the Network Stack The network stack creates all the basic resources we need to run our Spring Boot application and database. 
Compared to the original stack, we\u0026rsquo;re adding private subnets for the database and a security group to control access to those subnets.\nPrivate Subnets We add two private subnets to the network stack:\nPrivateSubnetOne: Type: AWS::EC2::Subnet Properties: AvailabilityZone: Fn::Select: - 0 - Fn::GetAZs: {Ref: \u0026#39;AWS::Region\u0026#39;} VpcId: !Ref \u0026#39;VPC\u0026#39; CidrBlock: \u0026#39;10.0.101.0/24\u0026#39; MapPublicIpOnLaunch: false PrivateSubnetTwo: Type: AWS::EC2::Subnet Properties: AvailabilityZone: Fn::Select: - 1 - Fn::GetAZs: {Ref: \u0026#39;AWS::Region\u0026#39;} VpcId: !Ref \u0026#39;VPC\u0026#39; CidrBlock: \u0026#39;10.0.102.0/24\u0026#39; MapPublicIpOnLaunch: false We have to take care that the CidrBlocks don\u0026rsquo;t overlap with those of the public subnets.\nSetting MapPublicIpOnLaunch to false makes the subnets private.\nDatabase Security Group Next, we create a security group into which we\u0026rsquo;ll later put the database:\nDBSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Access to the RDS instance VpcId: !Ref \u0026#39;VPC\u0026#39; DBSecurityGroupIngressFromECS: Type: AWS::EC2::SecurityGroupIngress Properties: Description: Ingress from the ECS containers to the RDS instance GroupId: !Ref \u0026#39;DBSecurityGroup\u0026#39; IpProtocol: -1 SourceSecurityGroupId: !Ref \u0026#39;ECSSecurityGroup\u0026#39; We allow incoming traffic to the DBSecurityGroup from the ECSSecurityGroup, which is the security group we have created earlier, and into which ECS will deploy our Spring Boot application instances. If we don\u0026rsquo;t allow this, the application cannot access the database.\nDesigning the Database Stack The database stack sets up a PostgreSQL database and all resources it needs to work. 
We\u0026rsquo;ll discuss the whole stack since it\u0026rsquo;s new.\nParameters The database stack needs some configuration parameters:\nParameters: NetworkStackName: Type: String Description: The name of the networking stack that this stack will build upon. DBInstanceClass: Type: String Description: The instance class (i.e. the size) of the RDS instance. Default: \u0026#39;db.t2.micro\u0026#39; DBName: Type: String Description: The name of the database that is created within the PostgreSQL instance. DBUsername: Type: String Description: The master user name for the PostgreSQL instance. The database stack requires a running network stack and the NetworkStackName parameter takes the name of that network stack to refer to some of the network resources.\nWith the DBInstanceClass parameter, we can define what size of database we want to create. We give it the smallest (and cheapest) possible size as a default to save money.\nThe DBName and DBUsername parameters define the name of the database to be created within the PostgreSQL instance and the name of the user to be created.\nSecret Password Next, we create a Secret to be used as a password for the database:\nSecret: Type: \u0026#34;AWS::SecretsManager::Secret\u0026#34; Properties: Name: !Ref \u0026#39;DBUsername\u0026#39; GenerateSecretString: SecretStringTemplate: !Join [\u0026#39;\u0026#39;, [\u0026#39;{\u0026#34;username\u0026#34;: \u0026#34;\u0026#39;, !Ref \u0026#39;DBUsername\u0026#39; ,\u0026#39;\u0026#34;}\u0026#39;]] GenerateStringKey: \u0026#34;password\u0026#34; PasswordLength: 32 ExcludeCharacters: \u0026#39;\u0026#34;@/\\\u0026#39; The SecretStringTemplate property specifies a JSON structure with the user name. The GenerateStringKey property defines that the generated password should be added to this JSON structure in the password field.
The resulting JSON string will look like this:\n{ \u0026#34;username\u0026#34;: \u0026#34;\u0026lt;value of DBUserName parameter\u0026gt;\u0026#34;, \u0026#34;password\u0026#34;: \u0026#34;\u0026lt;generated password\u0026gt;\u0026#34; } We\u0026rsquo;re excluding some characters from the password creation because they are not allowed in Postgres RDS instances. We\u0026rsquo;d get an error message Only printable ASCII characters besides '/', '@', '\u0026quot;', ' ' may be used if the password contains one of these characters.\nWe\u0026rsquo;ll later use the generated password when we\u0026rsquo;re setting up the database.\nDatabase Instance The core of the database stack is, of course, the database instance. A database instance must be associated with a DBSubnetGroup:\nDBSubnetGroup: Type: AWS::RDS::DBSubnetGroup Properties: DBSubnetGroupDescription: Subnet group for the RDS instance DBSubnetGroupName: DBSubnetGroup SubnetIds: - Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;NetworkStackName\u0026#39;, \u0026#39;PrivateSubnetOne\u0026#39;]] - Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;NetworkStackName\u0026#39;, \u0026#39;PrivateSubnetTwo\u0026#39;]] The DBSubnetGroup spans across the two private subnets we created in the network stack. 
A DBSubnetGroup must span across at least two subnets in at least two availability zones.\nNext, we can put a PostgreSQL instance into this subnet group:\n{% raw %} PostgresInstance: Type: AWS::RDS::DBInstance Properties: Engine: postgres EngineVersion: 11.5 AllocatedStorage: 20 AvailabilityZone: Fn::Select: - 0 - Fn::GetAZs: {Ref: \u0026#39;AWS::Region\u0026#39;} DBSubnetGroupName: !Ref \u0026#39;DBSubnetGroup\u0026#39; DBInstanceClass: !Ref \u0026#39;DBInstanceClass\u0026#39; DBName: !Ref \u0026#39;DBName\u0026#39; MasterUsername: !Ref \u0026#39;DBUsername\u0026#39; MasterUserPassword: !Join [\u0026#39;\u0026#39;, [\u0026#39;{{resolve:secretsmanager:\u0026#39;, !Ref Secret, \u0026#39;:SecretString:password}}\u0026#39; ]] PubliclyAccessible: false VPCSecurityGroups: - Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;NetworkStackName\u0026#39;, \u0026#39;DBSecurityGroupId\u0026#39;]] {% endraw %} We define the engine and version and an AllocatedStorage of 20 GB (this is the minimum allowed value).\nWe place the database instance into the previously created DBSubnetGroup.\nThen, we refer to the DBInstanceClass, DBName, and DBUsername parameters we defined as inputs to this CloudFormation stack earlier to set some basic properties of the database.\nThe MasterUserPassword we set to the previously created password. 
For this, we resolve the secret from the Secrets Manager and extract the password field from the JSON object.\nFinally, we restrict public access to the database and place the database into the DBSecurityGroup we have created in the network stack.\nSecret Attachment Next, we attach the secret to the database:\nSecretRDSInstanceAttachment: Type: \u0026#34;AWS::SecretsManager::SecretTargetAttachment\u0026#34; Properties: SecretId: !Ref Secret TargetId: !Ref PostgresInstance TargetType: AWS::RDS::DBInstance This merely associates the secret with the database so that we can take advantage of the secret rotation feature provided by the AWS Secrets Manager.\nOutputs Finally, we need to export some resources from the database stack so that we can use them in the service stack:\nOutputs: EndpointAddress: Description: Address of the RDS endpoint. Value: !GetAtt \u0026#39;PostgresInstance.Endpoint.Address\u0026#39; Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;EndpointAddress\u0026#39; ] ] EndpointPort: Description: Port of the RDS endpoint. Value: !GetAtt \u0026#39;PostgresInstance.Endpoint.Port\u0026#39; Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;EndpointPort\u0026#39; ] ] DBName: Description: The name of the database that is created within the PostgreSQL instance. Value: !Ref DBName Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;DBName\u0026#39; ] ] Secret: Description: Reference to the secret containing the password to the database. 
Value: !Ref \u0026#39;Secret\u0026#39; Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;Secret\u0026#39; ] ] We\u0026rsquo;ll need the EndpointAddress, EndpointPort, DBName, and Secret parameters in the service stack to connect our Spring Boot application to the database.\nDesigning the Service Stack In the service stack, we don\u0026rsquo;t really change much compared to the original stack. The only thing we do is to override some environment variables to pass the database connection to the Spring Boot application.\nParameters We need a new input parameter to capture the name of the database stack:\nParameters: DatabaseStackName: Type: String Description: The name of the database stack with the database this service should connect to. # ... other parameters We\u0026rsquo;ll need the database stack name to import some of its outputs.\nSet the Database Connection The main change is passing Environment variables to the Docker containers that contain our Spring Boot application:\n{% raw %} TaskDefinition: Type: AWS::ECS::TaskDefinition Properties: # ... 
ContainerDefinitions: - Name: !Ref \u0026#39;ServiceName\u0026#39; Cpu: !Ref \u0026#39;ContainerCpu\u0026#39; Memory: !Ref \u0026#39;ContainerMemory\u0026#39; Image: !Ref \u0026#39;ImageUrl\u0026#39; Environment: - Name: SPRING_DATASOURCE_URL Value: !Join - \u0026#39;\u0026#39; - - \u0026#39;jdbc:postgresql://\u0026#39; - Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;DatabaseStackName\u0026#39;, \u0026#39;EndpointAddress\u0026#39;]] - \u0026#39;:\u0026#39; - Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;DatabaseStackName\u0026#39;, \u0026#39;EndpointPort\u0026#39;]] - \u0026#39;/\u0026#39; - Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;DatabaseStackName\u0026#39;, \u0026#39;DBName\u0026#39;]] - Name: SPRING_DATASOURCE_USERNAME Value: !Join - \u0026#39;\u0026#39; - - \u0026#39;{{resolve:secretsmanager:\u0026#39; - Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;DatabaseStackName\u0026#39;, \u0026#39;Secret\u0026#39;]] - \u0026#39;:SecretString:username}}\u0026#39; - Name: SPRING_DATASOURCE_PASSWORD Value: !Join - \u0026#39;\u0026#39; - - \u0026#39;{{resolve:secretsmanager:\u0026#39; - Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;DatabaseStackName\u0026#39;, \u0026#39;Secret\u0026#39;]] - \u0026#39;:SecretString:password}}\u0026#39; # ... {% endraw %} We\u0026rsquo;re setting the environment properties SPRING_DATASOURCE_URL, SPRING_DATASOURCE_USERNAME, and SPRING_DATASOURCE_PASSWORD, which are the default properties used by Spring Boot to create a database connection.\nThe URL will have a value like jdbc:postgresql://\u0026lt;EndpointAddress\u0026gt;:\u0026lt;EndpointPort\u0026gt;/\u0026lt;DBName\u0026gt;, using the respective parameters exported by the database stack.\nWe load the username and password from the Secret we created in the database stack.
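The !Join for the datasource URL simply concatenates the exported endpoint, port, and database name. As a plain-Java sketch of the same assembly (the endpoint, port, and database name below are hypothetical stand-ins for the stack exports):

```java
// Builds the SPRING_DATASOURCE_URL value the same way the !Join in the
// task definition does: jdbc:postgresql://<EndpointAddress>:<EndpointPort>/<DBName>
public class DatasourceUrl {

    static String datasourceUrl(String endpointAddress, int endpointPort, String dbName) {
        return "jdbc:postgresql://" + endpointAddress + ":" + endpointPort + "/" + dbName;
    }

    public static void main(String[] args) {
        // Hypothetical values standing in for the database stack's exports.
        System.out.println(datasourceUrl(
                "mydb.abc123.eu-central-1.rds.amazonaws.com", 5432, "reflectoring"));
    }
}
```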
The dynamic reference {% raw %}{{resolve:..}}{% endraw %} resolves the exported secret as JSON from the database stack and reads the username and password fields from it.\nRunning the Stacks With those changes to the stacks, we can start them one after another.\nStarting the Stacks Will Incur AWS Costs!  Starting a stack is fun because it creates a whole bunch of resources with the click of a button. But this also means that we have to pay for the resources it creates. Starting and stopping all stacks described in this article a couple of times will incur a cost in the ballpark of cents up to a couple of dollars, depending on how often you do it.  It cost me around $20 to start, stop, debug, and re-start the stacks over a week's time to prepare this article.  The network stack has to be up first:\naws cloudformation create-stack \\ --stack-name reflectoring-hello-rds-network \\ --template-body file://network.yml \\ --capabilities CAPABILITY_IAM Once the network stack has reached the status CREATE_COMPLETE, we can start the database stack:\naws cloudformation create-stack \\ --stack-name reflectoring-hello-rds-database \\ --template-body file://database.yml \\ --parameters \\ ParameterKey=DBName,ParameterValue=reflectoring \\ ParameterKey=NetworkStackName,ParameterValue=reflectoring-hello-rds-network \\ ParameterKey=DBUsername,ParameterValue=reflectoring And finally the service stack:\naws cloudformation create-stack \\ --stack-name reflectoring-hello-rds-service \\ --template-body file://service.yml \\ --parameters \\ ParameterKey=NetworkStackName,ParameterValue=reflectoring-hello-rds-network \\ ParameterKey=ServiceName,ParameterValue=reflectoring-hello-rds \\ ParameterKey=ImageUrl,ParameterValue=docker.io/reflectoring/aws-rds-hello-world:latest \\ ParameterKey=ContainerPort,ParameterValue=8080 \\ ParameterKey=HealthCheckPath,ParameterValue=/hello \\ ParameterKey=HealthCheckIntervalSeconds,ParameterValue=90 \\ 
ParameterKey=DatabaseStackName,ParameterValue=reflectoring-hello-rds-database Note that we\u0026rsquo;re starting the service stack with the Docker image reflectoring/aws-rds-hello-world:latest which we have created above.\nTesting the Stacks Once the service stack reaches the status CREATE_COMPLETE, we should test that everything works as expected. For this, we need to find out the public URL of the load balancer which is available in the EC2 console under \u0026ldquo;Load Balancers\u0026rdquo;. There, we find the DNS name of the load balancer, copy that into a browser and add the /hello endpoint. The browser should show the following text:\nHello AWS! Successfully connected to the database! This means that the Spring Boot application could successfully connect to the database.\nTroubleshooting CannotStartContainerError: Error response from dae I saw this error in the CloudFormation console when it tried to start a Docker container in the service stack. The error means that CloudFormation cannot start the Docker container for whatever reason (I couldn\u0026rsquo;t find out what a dae is, though).\nIf you go to the \u0026ldquo;Details\u0026rdquo; section of the ECS task in the ECS console you should see the same error message there. The error message is expandable (which is not obvious). If you expand it, you should see a more helpful error message.\nIn my case, the error was failed to create Cloudwatch log stream: ResourceNotFoundException: The specified log group does not exist. because I had forgotten to create a CloudWatch log stream. 
I added the log stream to the CloudFormation template and all was good.\nThe AWS Journey By now, we have successfully deployed a highly available Spring Boot application and a (not so highly available) PostgreSQL instance all with running a few commands from the command line.\nBut there\u0026rsquo;s more to do on the road to a production-ready, continuously deployable system.\nHere\u0026rsquo;s a list of the questions I want to answer on this journey. If there\u0026rsquo;s a link, it has already been answered with a blog post! If not, stay tuned!\n How can I deploy an application from the web console? How can I deploy an application from the command line? How can I implement high availability for my deployed application? How do I set up load balancing? How can I deploy a database in a private subnet and access it from my application? (this article) How can I deploy my application from a CI/CD pipeline? How can I deploy a new version of my application without downtime? How can I deploy my application into multiple environments (test, staging, production)? How can I auto-scale my application horizontally on high load? How can I implement sticky sessions in the load balancer (if I\u0026rsquo;m building a session-based web app)? How can I monitor what’s happening on my application? How can I bind my application to a custom domain? How can I access other AWS resources (like SQS queues and DynamoDB tables) from my application? How can I implement HTTPS?  Check Out the Book!  
This article gives only a first impression of what you can do with CloudFormation and RDS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"May 19, 2020","image":"https://reflectoring.io/images/stock/0061-cloud-1200x628-branded_hu34d6aa247e0bb2675461b5a0146d87a8_82985_650x0_resize_q90_box.jpg","permalink":"/aws-cloudformation-rds/","title":"The AWS Journey Part 3: Connecting a Spring Boot Application to an RDS Instance with CloudFormation"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you want to know what makes software teams (and their companies) successful you need arguments for moving towards DevOps you are interested in the science behind a survey  Book Facts  Title: Accelerate Authors: Nicole Forsgren, Jez Humble, and Gene Kim Word Count: ~ 55.000 (3.5 hours at 250 words / minute) Reading Ease: medium Writing Style: sometimes dry discussion of survey results  Overview {% include book-link.html book=\u0026ldquo;accelerate\u0026rdquo; %} explains the data from 4 years of surveys conducted for the yearly \u0026ldquo;State of DevOps Report\u0026rdquo;.\nThe questions were designed to model certain \u0026ldquo;constructs\u0026rdquo; like \u0026ldquo;continuous delivery\u0026rdquo;, \u0026ldquo;lean management\u0026rdquo;, or \u0026ldquo;software delivery performance\u0026rdquo; and were evaluated to find the correlations between those constructs.\nA clustering algorithm identified low, medium, and high performers in software delivery.\nThe book is a data-driven discussion of which constructs lead to high-performing software development teams and a successful company overall.\nThe book is written in a sober, data-driven manner, making it a chore to read at some points. 
The key facts of the book are very clear, however, since the survey data leads to satisfyingly clear and easy-to-understand statements like \u0026ldquo;continuous delivery increases software development performance\u0026rdquo;.\nThe authors also explain the survey methodology and science behind it. A little too much, in my opinion, because they kept explaining why this and that is indeed true based on the surveys instead of moving all the discussion about it to the end, where they dedicated some chapters to the survey methodology anyways.\nThe content of the book is very clear and actionable, though.\nNotes Here are my notes, as usual with some comments in italics.\nSummary: Practices that Improve Software Delivery Performance  continuous delivery infrastructure-as-code test data management short-lived VCS branches loosely coupled architectures building internal tools with good UX continuous security limit WIP make work visible transformational leadership experimentation \u0026hellip; (some others I didn\u0026rsquo;t catch)  Accelerate  \u0026ldquo;Maturity models focus on helping an organization arrive at a mature state and then declare themselves done with their journey.\u0026rdquo; (this makes a maturity model a \u0026ldquo;fixed mindset\u0026rdquo; tool, see my review of \u0026ldquo;Mindset\u0026rdquo;) instead of focusing on maturity, organizations should focus on their capabilities  Measuring Performance  bad idea: rewarding dev teams for throughput and ops teams for stability - this creates a wall of confusion: dev throws poor quality software over the wall and ops will implement a painful change management process to protect stability lead time is the time from committing source code into a VCS and the time the code is deployed to production deployment frequency is how often an organization delivers changes to production (this is the equivalent of batch size in production - the smaller the batch size, the higher the deployment frequency) time to restore is 
the time it takes to restore a service after an incident change failure rate is the percentage of production deployments that fail for some reason there is no tradeoff between moving fast and other performance metrics - high performers improve all performance metrics software delivery performance has an impact on organizational performance in general measure these metrics responsibly and without blame - otherwise, they may be misused to judge rather than to learn (again, fixed mindset vs. growth mindset) when talking about \u0026ldquo;performance\u0026rdquo; below, it means being good in the metrics above  Measuring and Changing Culture  continuous delivery has an impact on culture - it drives culture from \u0026ldquo;pathological\u0026rdquo; or \u0026ldquo;bureaucratic\u0026rdquo; to \u0026ldquo;generative\u0026rdquo; (i.e. an open-minded, innovative, growth-mindset culture)  Technical Practices  \u0026ldquo;Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.\u0026rdquo; give developers the time and resources to invest in the process and to fix problems continuous delivery helps to make work more sustainable continuous delivery reduces unplanned work due to higher quality code and less unforeseen fixes having infrastructure-as-code is highly correlated to software delivery performance having tests maintained by an outside party (i.e. 
a dedicated test team) is not correlated to performance having test data management in place is correlated to performance the shorter VCS branches live the better the performance  Architecture  loosely coupled architectures are correlated to performance working on outsourced software correlates negatively to performance having separately testable and deployable artifacts increases performance \u0026ldquo;Inverse Conway maneuver\u0026rdquo;: change your team structure to get a loosely coupled architecture a modular architecture allows us to scale team size -\u0026gt; more deploys for each developer added internal tools with good UX increase performance \u0026ldquo;Architects should focus on engineers and outcomes, not tools or technologies\u0026rdquo;  Integrating Infosec into the Delivery Lifecycle  shifting left on security (i.e. doing it earlier in the project, or even better, doing it continuously) increases performance security teams should provide tools and training to developers instead of conducting security reviews  Management Practices for Software  limits to WIP (work-in-progress) make bottlenecks visible and can therefore lead to improvements in throughput a visual display of work, quality metrics, and productivity metrics improve performance and team culture a process requiring approval by a manager or board is worse than having no change process at all - do peer reviews instead  Product Development  lean product management (small batches, MVPs, regular customer feedback) improves performance, and vice versa (!)  
Making Work Sustainable  \u0026ldquo;deployment pain\u0026rdquo;: the fear and anxiety felt by developers when they deploy changes to production applying CI/CD reduces the deployment pain causes for deployment pain include the system being intolerant to configuration changes, manual changes, and handoffs between teams  Employee Satisfaction, Identity, and Engagement  we can measure employee satisfaction with the \u0026ldquo;net promoter score\u0026rdquo; by asking \u0026ldquo;how likely would you recommend this employer to a friend?\u0026rdquo; on a scale of 1-10 net promoter score is the percentage of promoters (answered 9-10) minus the percentage of detractors (answered 0-6) continuous delivery increases employees' identification with the company \u0026ldquo;The best thing you can do for your products, your company, and your people is institute a culture of experimentation and learning, and invest in the technical and management capabilities that enable it.\u0026rdquo;  Leaders and Managers  transformational leadership is leadership with a clear vision, giving inspiration, and supporting the employees transformational leadership increases performance DevOps can be driven bottom-up, but good leadership makes success more likely  The Science Behind This Book  the research in this book is quantitative and inferential (i.e. it infers things from a set of survey results) having a hypothesis helps avoid \u0026ldquo;fishing for data\u0026rdquo; and thus avoid finding random correlations low, medium, and high performers were identified in the data by a clustering algorithm  Introduction to Psychometrics  create a construct of what you want to model write one survey question for each aspect of that construct the averages of all answers to a construct\u0026rsquo;s questions provide a single score for the construct  Why Use a Survey  we could instead get data from the systems we use (i.e. CI/CD tools, VCS tools, \u0026hellip;) the systems' metrics may not be complete! 
people provide valuable data from outside of the systems a survey protects against \u0026ldquo;bad actors\u0026rdquo; (i.e. respondents who deliberately give false answers) because the majority answers truthfully  The Data for the Project  a bunch of tables and numbers that I didn\u0026rsquo;t find too interesting  Conclusion The book provides valuable insights into what makes companies with a software development team successful. We can use this insight to measure our \u0026ldquo;DevOps metrics\u0026rdquo; (deployment frequency, lead time, mean time to restore, change failure rate).\nKnowing that increasing these metrics means increasing our software delivery performance will make it so much easier to decide on budget, staffing, and other important decisions in our company.\n","date":"May 15, 2020","image":"https://reflectoring.io/images/covers/accelerate-teaser_hu5b5ee6d86e8a2fcda8808b39414a6e23_78979_650x0_resize_q90_box.jpg","permalink":"/book-review-accelerate/","title":"Book Notes: Accelerate"},{"categories":["Spring Boot"],"contents":"Database migration with tools like Flyway or Liquibase requires creating SQL scripts and running them on a database. Although the database is an external dependency, we have to test the SQL scripts, because it is our code. But this code doesn\u0026rsquo;t run in the application that we develop and cannot be tested with unit tests.\nThis article shows how to test database migration scripts with Flyway and Testcontainers in a Spring Boot application and to keep the tests close to production.\n Example Code This article is accompanied by a working code example on GitHub. Key Takeaways  Using an in-memory database for integration tests will cause compatibility issues in our SQL scripts between the in-memory database and the production database. Using Testcontainers, we can easily spin up a Docker container with the production database for our tests.  
Common Practice There is a very common and convenient approach for testing database migration scripts with Flyway at build time.\nIt\u0026rsquo;s a combination of Flyway migration support in Spring Boot and an in-memory database like H2. In this case, the database migration begins whenever the Spring application context starts, and the SQL scripts are executed on an H2 database with Flyway.\nIt\u0026rsquo;s easy and fast. But is it good?\nThe Problem of Using an In-Memory Database for Tests H2 is usually not the database we use in production or other production-like environments. When we test the SQL scripts with the H2 database, we have no idea how the migration would run in the production environment.\nIn-Memory Database in Production  If we use an in-memory database in production, this approach is fine. We can just test the application with an integrated database like H2. In this case, these tests are completely valid and meaningful.  H2 has compatibility modes to disguise itself as other databases. This may include our production database. With these modes, we can start the H2 database and it will, for example, behave like a PostgreSQL database.\nBut there are still differences. The SQL code for an H2 might still look different from the code for PostgreSQL.\nLet\u0026rsquo;s look at this SQL script:\nCREATE TABLE car ( id uuid PRIMARY KEY, registration_number VARCHAR(255), name varchar(64) NOT NULL, color varchar(32) NOT NULL, registration_timestamp INTEGER ); This script can run on an H2 as well as on a PostgreSQL database.\nNow we want to change the type of the column registration_timestamp from INTEGER to timestamp with time zone, and of course we want to migrate the data in this column.
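The conversion we need treats the stored integer as seconds since the Unix epoch. In Java terms, the semantics of the PostgreSQL expression timestamp with time zone 'epoch' + registration_timestamp * interval '1 second' are simply (a sketch of the conversion semantics, not of the migration itself):

```java
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

class RegistrationTimestamp {

    // Java equivalent of the PostgreSQL expression:
    //   timestamp with time zone 'epoch' + registration_timestamp * interval '1 second'
    static OffsetDateTime toTimestamp(int registrationTimestamp) {
        return Instant.ofEpochSecond(registrationTimestamp).atOffset(ZoneOffset.UTC);
    }
}
```

For example, a stored value of 0 maps to 1970-01-01T00:00Z.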
So, we write an SQL script for migrating the registration_timestamp column:\nALTER TABLE car ALTER COLUMN registration_timestamp SET DATA TYPE timestamp with time zone USING timestamp with time zone \u0026#39;epoch\u0026#39; + registration_timestamp * interval \u0026#39;1 second\u0026#39;; This script will not work for H2 with PostgreSQL mode, because the USING clause doesn\u0026rsquo;t work with ALTER TABLE for H2.\nDepending on the database we have in production, we might have database-specific features in the SQL scripts. Another example would be using table inheritance in PostgreSQL with the keyword INHERITS, which isn\u0026rsquo;t supported in other databases.\nWe could, of course, maintain two sets of SQL scripts, one for H2, to be used in the tests, and one for PostgreSQL, to be used in production:\nBut now:\n we have to configure Spring Boot profiles for different folders with scripts, we have to maintain two sets of scripts, and most importantly, we are not able to test scripts from the folder postgresql at build time.  If we want to write a new script with some features that are not supported by H2, we have to write two scripts, one for H2 and one for PostgreSQL.
Also, we have to find a way to achieve the same results with both scripts.\nIf we test the database scripts with the H2 database, and our test is green, we don\u0026rsquo;t know anything about the script V1_2__change_column_type.sql from the folder postgresql.\nThese tests would give us a false sense of security!\nUsing a Production-Like Environment for Testing Database Scripts There is another approach for testing database migration: we can test database migration with an H2 database at build time and then deploy our application into a production-like environment and let the migration scripts run on this environment with the production-like database, for example, PostgreSQL.\nThis approach will alert us if any scripts are not working with the production database, but it still has drawbacks:\n Bugs are discovered too late, it is hard to find errors, and we still have to maintain two sets of SQL scripts.  Let\u0026rsquo;s imagine that we test the migration with the H2 database during build-time of the application, and the tests are green. The next step is delivering and deploying the application to a test environment. It takes time. If the migration in the test environment fails, we\u0026rsquo;ll be notified too late, maybe several minutes later. This slows down the development cycle.\nAlso, this situation is very confusing for developers, because we can\u0026rsquo;t debug errors like in our unit test. Our unit test with H2 was green, after all, and the error only happened in the test environment.\nUsing Testcontainers With Testcontainers we can test the database migration against a Docker container of the production database from our code. On the developer machine or the CI server.\nTestcontainers is a Java library that makes it easy to start up a Docker container from within our tests.\nOf course, we\u0026rsquo;ll have to install Docker to run it. 
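Besides Docker itself, the Testcontainers PostgreSQL module has to be on the test classpath. With Maven, the dependency might look like this (a sketch; the version property is a placeholder to be replaced with a current Testcontainers release):

```xml
<!-- Testcontainers module that provides the PostgreSQLContainer class -->
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>postgresql</artifactId>
  <!-- placeholder: define this property with a current Testcontainers version -->
  <version>${testcontainers.version}</version>
  <scope>test</scope>
</dependency>
```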
After that, we can create some initialization code for testing:\n@ContextConfiguration( initializers = AbstractIntegrationTest.Initializer.class) public class AbstractIntegrationTest { static class Initializer implements ApplicationContextInitializer\u0026lt;ConfigurableApplicationContext\u0026gt; { static PostgreSQLContainer\u0026lt;?\u0026gt; postgres = new PostgreSQLContainer\u0026lt;\u0026gt;(); private static void startContainers() { Startables.deepStart(Stream.of(postgres)).join(); // we can add further containers  // here like rabbitmq or other databases  } private static Map\u0026lt;String, String\u0026gt; createConnectionConfiguration() { return Map.of( \u0026#34;spring.datasource.url\u0026#34;, postgres.getJdbcUrl(), \u0026#34;spring.datasource.username\u0026#34;, postgres.getUsername(), \u0026#34;spring.datasource.password\u0026#34;, postgres.getPassword() ); } @Override public void initialize( ConfigurableApplicationContext applicationContext) { startContainers(); ConfigurableEnvironment environment = applicationContext.getEnvironment(); MapPropertySource testcontainers = new MapPropertySource( \u0026#34;testcontainers\u0026#34;, (Map) createConnectionConfiguration() ); environment.getPropertySources().addFirst(testcontainers); } } } AbstractIntegrationTest is an abstract class that defines a PostgreSQL database and configures the connection to this database. Other test classes that need access to the PostgreSQL database can extend this class.\nIn the @ContextConfiguration annotation, we add an ApplicationContextInitializer that can modify the application context when it starts up. Spring will call the initialize() method.\nWithin initialize(), we first start the Docker container with a PostgreSQL database. The method deepStart() starts all items of the Stream in parallel. We could add additional Docker containers, for instance, RabbitMQ, Keycloak, or another database. 
To keep it simple, we\u0026rsquo;re starting only one Docker container with the PostgreSQL database.\nNext, we call createConnectionConfiguration() to create a map of the database connection properties. The URL to the database, username, and password are created by Testcontainers automatically. Hence, we get them from the container instance postgres and return them.\nIt\u0026rsquo;s also possible to set these parameters manually in the code, but it\u0026rsquo;s better to let Testcontainers generate them. When we let Testcontainers generate the jdbcUrl, it includes the port of the database connection. The random port provides stability and avoids possible conflicts on the machine of another developer or a build server.\nFinally, we add these database connection properties to the Spring context by creating a MapPropertySource and adding it to the Spring Environment. The method addFirst() adds the properties to the context with the highest precedence.\nNow, if we want to test database migration scripts, we have to extend the class and create a unit test.\n@SpringBootTest class TestcontainersApplicationTests extends AbstractIntegrationTest { @Test void migrate() { // migration starts automatically,  // since Spring Boot runs the Flyway scripts on startup  } } The class AbstractIntegrationTest can be used not only for testing database migration scripts but also for any other tests that need a database connection.\nNow we can test the migration of SQL scripts with Flyway by using a PostgreSQL database at build time.\nWe have all dependencies in our code and can spin up a close-to-production test environment anywhere.\nDrawbacks As we mentioned above, we have to install Docker on every machine where we want to build the application. 
This could be a developer laptop or a CI build server.\nAlso, tests interacting with Testcontainers are slower than the same test with an in-memory database, because the Docker container has to be spun up.\nConclusion Testcontainers supports testing the application with unit tests using Docker containers with minimal effort.\nDatabase migration tests with Testcontainers provide production-like database behavior and improve the quality of the tests significantly.\nThere is no need to use an in-memory database for tests.\n","date":"May 8, 2020","image":"https://reflectoring.io/images/stock/0069-testcontainers-1200x628-branded_hu3b772680baa3d43165c81d2cadb1c4a7_781128_650x0_resize_q90_box.jpg","permalink":"/spring-boot-flyway-testcontainers/","title":"Testing Database Migration Scripts with Spring Boot and Testcontainers"},{"categories":["Software Craft"],"contents":"In the first article of my AWS Journey, we deployed a Docker image via the AWS web console. While this works fine, it includes manual work and doesn\u0026rsquo;t provide fine-grained control over the network and other resources we might need.\nThe goal of this journey is to create a production-grade, continuously deployable system, so the next step in this journey is to automate the deployment of Docker images. In this article, we\u0026rsquo;ll use AWS\u0026rsquo;s CloudFormation service to deploy and undeploy a highly available virtual private cloud running multiple instances of our Docker image behind a load balancer - all with a single CLI command.\nCheck Out the Book!  This article gives only a first impression of what you can do with Docker and AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n  Example Code This article is accompanied by a working code example on GitHub. 
What is CloudFormation? CloudFormation is AWS\u0026rsquo;s service for automating the deployment of AWS resources. It allows us to describe the resources we want (networks, load balancers, EC2 instances, \u0026hellip;) in a JSON or YAML template and provides commands within the AWS CLI to spin up those resources and remove them again (among other things).\nThe resources defined in such a template are called a \u0026ldquo;stack\u0026rdquo;. A stack is the unit in which CloudFormation allows us to interact with resources. Each stack can be created and deleted separately and stacks may depend on each other. We\u0026rsquo;ll make use of that later when we\u0026rsquo;re creating a stack containing all the networking resources we need and another one that contains our application.\nTo create a CloudFormation template, we need low-level knowledge of the resources we\u0026rsquo;re going to deploy. Once we have a running CloudFormation template, though, we can start and stop it at any time without having to think about the detailed resources too much.\nNote that CloudFormation is not the only option to codify an AWS infrastructure. We could also use Terraform, a general infrastructure-as-code platform, or CDK, AWS\u0026rsquo;s Cloud Development Kit, which is a wrapper around CloudFormation and allows us to describe AWS resources in different programming languages than JSON and YAML (did I hear anyone say that JSON and YAML are programming languages?). 
There\u0026rsquo;s a bunch of other solutions out there, but these two seem to be the most common ones.\nIn this article, however, we\u0026rsquo;ll use CloudFormation to learn the basics.\nDesigning the CloudFormation Stacks On a very high level, this is what we\u0026rsquo;re going to build in this article:\nWe\u0026rsquo;ll create two CloudFormation stacks:\n A network stack that creates a VPC (virtual private cloud) with two public subnets (each in a different availability zone for high availability), an internet gateway, and a load balancer that balances traffic between those networks. A service stack that places a Docker container with the application we want to run into each of the public networks. For this, we take advantage of ECS (Elastic Container Service) and Fargate, which together abstract away some of the gritty details and make it easier to run a Docker container.  The service stack depends on the network stack, so we start with designing the network stack.\nI didn\u0026rsquo;t start from scratch (I\u0026rsquo;m not smart enough for that) but instead used these CloudFormation templates as a starting point and modified them for simplicity and understanding.\nWe\u0026rsquo;ll be discussing a single fragment of YAML at a time. You can find the complete CloudFormation templates for the network stack (network.yml) and the service stack (service.yml) on GitHub.\nSkip to running the stacks if you\u0026rsquo;re not interested in the nitty-gritty details of the stack configuration.\nDesigning the Network Stack The network stack creates a bunch of AWS resources required to create a network for our application. Let\u0026rsquo;s look at each of the resources in turn.\nVPC VPC: Type: AWS::EC2::VPC Properties: CidrBlock: \u0026#39;10.0.0.0/16\u0026#39; A VPC is rather easy to define. 
The main feature is the IP address space defined by a CIDR (classless inter-domain routing) address block.\n10.0.0.0/16 means that the first 16 bits (10.0) of the CIDR block are used to designate the network and the rest of the bits can be used to create IP addresses. This gives us an IP address range from 10.0.0.0 through 10.0.255.255. More than enough to spin up a couple of Docker containers.\nPublic Subnets Next, we create two public subnets. A subnet is a network in which we can place other resources. A public subnet is a network whose resources get a public IP address which is reachable from the internet:\nPublicSubnetOne: Type: AWS::EC2::Subnet Properties: AvailabilityZone: Fn::Select: - 0 - Fn::GetAZs: {Ref: \u0026#39;AWS::Region\u0026#39;} VpcId: !Ref \u0026#39;VPC\u0026#39; CidrBlock: \u0026#39;10.0.1.0/24\u0026#39; MapPublicIpOnLaunch: true PublicSubnetTwo: Type: AWS::EC2::Subnet Properties: AvailabilityZone: Fn::Select: - 1 - Fn::GetAZs: {Ref: \u0026#39;AWS::Region\u0026#39;} VpcId: !Ref \u0026#39;VPC\u0026#39; CidrBlock: \u0026#39;10.0.2.0/24\u0026#39; MapPublicIpOnLaunch: true We place each subnet into a different AvailabilityZone so that when one zone goes down, the other can still serve traffic. For this, we select the first and second availability zone for the region we\u0026rsquo;re working in, respectively.\nUsing the CidrBlock property, we define the IP address range for each subnet. The first subnet gets the range from 10.0.1.0 to 10.0.1.255 and the second from 10.0.2.0 to 10.0.2.255.\nFinally, we set the property MapPublicIpOnLaunch to true, making those subnets public.\nWhy Are We Using Public Subnets? Because it's easier to set up. 
Putting our Docker containers into private subnets requires a more complicated setup with a NAT gateway and routes from the load balancer in the public subnet to the containers in the private subnet.\nWe will later define security groups to restrict the access to our Docker containers, but note that it's always more secure to put resources into a private subnet, because that puts another layer of abstraction between them and the evil internet.\n Internet Gateway Next, we set up an internet gateway, which will later allow internet traffic to reach our public subnets and which will also allow resources in our public subnets to reach out to the internet:\nInternetGateway: Type: AWS::EC2::InternetGateway GatewayAttachement: Type: AWS::EC2::VPCGatewayAttachment Properties: VpcId: !Ref \u0026#39;VPC\u0026#39; InternetGatewayId: !Ref \u0026#39;InternetGateway\u0026#39; With the VpcId and InternetGatewayId properties, we connect the internet gateway with our VPC from above.\nRouting The internet gateway is connected to our VPC now, but it wouldn\u0026rsquo;t forward any internet traffic to our subnets, yet. We need to define a route table for that:\nPublicRouteTable: Type: AWS::EC2::RouteTable Properties: VpcId: !Ref \u0026#39;VPC\u0026#39; PublicRoute: Type: AWS::EC2::Route DependsOn: GatewayAttachement Properties: RouteTableId: !Ref \u0026#39;PublicRouteTable\u0026#39; DestinationCidrBlock: \u0026#39;0.0.0.0/0\u0026#39; GatewayId: !Ref \u0026#39;InternetGateway\u0026#39; PublicSubnetOneRouteTableAssociation: Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref PublicSubnetOne RouteTableId: !Ref PublicRouteTable PublicSubnetTwoRouteTableAssociation: Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref PublicSubnetTwo RouteTableId: !Ref PublicRouteTable The route table is connected to our VPC via the VpcId property.\nWe define a route from 0.0.0.0/0 (i.e. 
from any possible IP address, meaning the internet) to our internet gateway from above.\nFinally, we associate the route table with both of our public subnets, opening up our subnets for internet traffic.\nDo I Need a NAT Gateway to Access the Internet From Within My Subnet? To access the internet from a custom network like our subnets, we need to translate an internal IP address (e.g. 10.0.1.123) into a global IP address. This is called network address translation (NAT) and is usually done by a router that has both an internal IP address and a global IP address.\nThe internet gateway we set up above will do this translation for us automatically, but only for resources in public subnets. To enable access to the internet from private subnets, we would need to set up a NAT gateway.\n Load Balancer Now that we have two subnets that can get traffic from the outside, we need a way to balance the traffic between them. For this, we create an application load balancer (ALB):\nPublicLoadBalancerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Access to the public facing load balancer VpcId: !Ref \u0026#39;VPC\u0026#39; SecurityGroupIngress: - CidrIp: 0.0.0.0/0 IpProtocol: -1 PublicLoadBalancer: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Scheme: internet-facing Subnets: - !Ref PublicSubnetOne - !Ref PublicSubnetTwo SecurityGroups: [!Ref \u0026#39;PublicLoadBalancerSecurityGroup\u0026#39;] We start with a security group that allows inbound (or \u0026ldquo;ingress\u0026rdquo;) traffic from the internet (0.0.0.0/0) to the load balancer.\nWe have to do this even though we already created a public route from the internet to the internet gateway above because AWS will otherwise assign all resources in the public subnets to a default security group that doesn\u0026rsquo;t allow inbound traffic from the internet.\nNext, we create the load balancer itself with the internet-facing scheme, meaning that it takes inbound traffic from the public, 
and attach it to both subnets and the security group.\nNow, the load balancer needs to know where to forward incoming requests. This is where a target group comes into play:\nDummyTargetGroupPublic: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 6 HealthCheckPath: / HealthCheckProtocol: HTTP HealthCheckTimeoutSeconds: 5 HealthyThresholdCount: 2 Name: \u0026#34;no-op\u0026#34; Port: 80 Protocol: HTTP UnhealthyThresholdCount: 2 VpcId: !Ref \u0026#39;VPC\u0026#39; This is just a dummy \u0026ldquo;no-op\u0026rdquo; target group that takes the traffic and drops it. We will later create a real target group with our Docker containers in the service stack. We need this dummy target group for now so that we can spin up the network stack without the real services.\nFinally, we define a listener that defines the load balancing rules:\nPublicLoadBalancerListener: Type: AWS::ElasticLoadBalancingV2::Listener DependsOn: - PublicLoadBalancer Properties: DefaultActions: - TargetGroupArn: !Ref \u0026#39;DummyTargetGroupPublic\u0026#39; Type: \u0026#39;forward\u0026#39; LoadBalancerArn: !Ref \u0026#39;PublicLoadBalancer\u0026#39; Port: 80 Protocol: HTTP The listener forwards all HTTP traffic on port 80 to the target group we created previously. 
The service stack will later change the target group here as well.\nECS Cluster Now, we create an ECS (Elastic Container Service) cluster that will be responsible for managing our Docker containers:\nECSCluster: Type: AWS::ECS::Cluster ECSSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Access to the ECS containers VpcId: !Ref \u0026#39;VPC\u0026#39; ECSSecurityGroupIngressFromPublicALB: Type: AWS::EC2::SecurityGroupIngress Properties: Description: Ingress from the public ALB GroupId: !Ref \u0026#39;ECSSecurityGroup\u0026#39; IpProtocol: -1 SourceSecurityGroupId: !Ref \u0026#39;PublicLoadBalancerSecurityGroup\u0026#39; ECSSecurityGroupIngressFromSelf: Type: AWS::EC2::SecurityGroupIngress Properties: Description: Ingress from other containers in the same security group GroupId: !Ref \u0026#39;ECSSecurityGroup\u0026#39; IpProtocol: -1 SourceSecurityGroupId: !Ref \u0026#39;ECSSecurityGroup\u0026#39; We define the cluster and a security group that we will later need in the service stack.\nThe security group allows inbound traffic from the load balancer (or, more specifically, from everything in the PublicLoadBalancerSecurityGroup) and inbound traffic from everything in the same security group so that our Docker containers can later talk to each other.\nRoles Finally (yes, we\u0026rsquo;re close to the end!), we set up some roles for everything to work properly.\nFirst, we need to give some permissions to ECS so it can set everything up for us when we spin up the service stack later:\nECSRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Statement: - Effect: Allow Principal: Service: [ecs.amazonaws.com] Action: [\u0026#39;sts:AssumeRole\u0026#39;] Path: / Policies: - PolicyName: ecs-service PolicyDocument: Statement: - Effect: Allow Action: - \u0026#39;ec2:AttachNetworkInterface\u0026#39; - \u0026#39;ec2:CreateNetworkInterface\u0026#39; - \u0026#39;ec2:CreateNetworkInterfacePermission\u0026#39; - 
\u0026#39;ec2:DeleteNetworkInterface\u0026#39; - \u0026#39;ec2:DeleteNetworkInterfacePermission\u0026#39; - \u0026#39;ec2:Describe*\u0026#39; - \u0026#39;ec2:DetachNetworkInterface\u0026#39; - \u0026#39;elasticloadbalancing:DeregisterInstancesFromLoadBalancer\u0026#39; - \u0026#39;elasticloadbalancing:DeregisterTargets\u0026#39; - \u0026#39;elasticloadbalancing:Describe*\u0026#39; - \u0026#39;elasticloadbalancing:RegisterInstancesWithLoadBalancer\u0026#39; - \u0026#39;elasticloadbalancing:RegisterTargets\u0026#39; Resource: \u0026#39;*\u0026#39; This role gives ECS the permission to do some networking stuff (prefix ec2) and some loadbalancing stuff (prefix elasticloadbalancing).\nSince ECS will later do some heavy lifting for us in setting up a bunch of EC2 instances running our Docker images, it needs the permission to create network interfaces.\nAlso, since we created a dummy target group for the load balancer above, it will need the permission to change the load balancer\u0026rsquo;s target group to a new one pointing to our Docker containers.\nWe create another role for our Docker containers:\nECSTaskExecutionRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Statement: - Effect: Allow Principal: Service: [ecs-tasks.amazonaws.com] Action: [\u0026#39;sts:AssumeRole\u0026#39;] Path: / Policies: - PolicyName: AmazonECSTaskExecutionRolePolicy PolicyDocument: Statement: - Effect: Allow Action: - \u0026#39;logs:CreateLogStream\u0026#39; - \u0026#39;logs:PutLogEvents\u0026#39; Resource: \u0026#39;*\u0026#39; Docker containers are abstracted by a \u0026ldquo;Task\u0026rdquo; in ECS, so this role is called ECSTaskExecutionRole. 
We\u0026rsquo;ll apply this role to the ECS tasks later in the service stack.\nThe role merely allows the Docker containers to create logs for now.\nOutputs of the Network Stack In the last step, we\u0026rsquo;re creating a bunch of Outputs that export some of the resources of this stack so we can refer to them in the service stack later:\nOutputs: ClusterName: Description: The name of the ECS cluster Value: !Ref \u0026#39;ECSCluster\u0026#39; Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;ClusterName\u0026#39; ] ] ExternalUrl: Description: The url of the external load balancer Value: !Join [\u0026#39;\u0026#39;, [\u0026#39;http://\u0026#39;, !GetAtt \u0026#39;PublicLoadBalancer.DNSName\u0026#39;]] Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;ExternalUrl\u0026#39; ] ] ECSRole: Description: The ARN of the ECS role Value: !GetAtt \u0026#39;ECSRole.Arn\u0026#39; Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;ECSRole\u0026#39; ] ] ECSTaskExecutionRole: Description: The ARN of the ECS role Value: !GetAtt \u0026#39;ECSTaskExecutionRole.Arn\u0026#39; Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;ECSTaskExecutionRole\u0026#39; ] ] PublicListener: Description: The ARN of the public load balancer\u0026#39;s Listener Value: !Ref PublicLoadBalancerListener Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;PublicListener\u0026#39; ] ] VPCId: Description: The ID of the VPC that this stack is deployed in Value: !Ref \u0026#39;VPC\u0026#39; Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;VPCId\u0026#39; ] ] PublicSubnetOne: Description: Public subnet one Value: !Ref \u0026#39;PublicSubnetOne\u0026#39; Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref 
\u0026#39;AWS::StackName\u0026#39;, \u0026#39;PublicSubnetOne\u0026#39; ] ] PublicSubnetTwo: Description: Public subnet two Value: !Ref \u0026#39;PublicSubnetTwo\u0026#39; Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;PublicSubnetTwo\u0026#39; ] ] ECSSecurityGroup: Description: A security group used to allow ECS containers to receive traffic Value: !Ref \u0026#39;ECSSecurityGroup\u0026#39; Export: Name: !Join [ \u0026#39;:\u0026#39;, [ !Ref \u0026#39;AWS::StackName\u0026#39;, \u0026#39;ECSSecurityGroup\u0026#39; ] ] All resources are exported with the name pattern StackName:ResourceName. CloudFormation will resolve the stack name from the mandatory --stack-name command line parameter we have to provide when creating the stack from the template.\nThe resources can then be imported in another stack with the Fn::ImportValue function.\nDesigning the Service Stack Phew, that was a lot of networking stuff. Now how do we get our service into that network?\nLet\u0026rsquo;s build the service stack. Take comfort in the fact that it is much smaller than the network stack!\nParameters We start the service with some input parameters:\nParameters: StackName: Type: String Description: The name of the networking stack that  these resources are put into. ServiceName: Type: String Description: A human-readable name for the service. HealthCheckPath: Type: String Default: /health Description: Path to perform the healthcheck on each instance. HealthCheckIntervalSeconds: Type: Number Default: 5 Description: Number of seconds to wait between each health check. ImageUrl: Type: String Description: The url of a docker image that will handle incoming traffic. ContainerPort: Type: Number Default: 80 Description: The port number the application inside the docker container  is binding to. ContainerCpu: Type: Number Default: 256 Description: How much CPU to give the container. 1024 is 1 CPU. 
ContainerMemory: Type: Number Default: 512 Description: How much memory in megabytes to give the container. Path: Type: String Default: \u0026#34;*\u0026#34; Description: A path on the public load balancer that this service should be connected to. DesiredCount: Type: Number Default: 2 Description: How many copies of the service task to run. Only the StackName, ServiceName, and ImageUrl parameters are mandatory. The rest has defaults that might require tweaking depending on our application.\nThe StackName parameter must be the name of the network stack, so we can import some of its outputs later.\nRe-route the Load Balancer Remember that we have set up a dummy target group for the load balancer in the network stack? We now create the real target group:\nTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: !Ref \u0026#39;HealthCheckIntervalSeconds\u0026#39; HealthCheckPath: !Ref \u0026#39;HealthCheckPath\u0026#39; HealthCheckProtocol: HTTP HealthCheckTimeoutSeconds: 5 HealthyThresholdCount: 2 TargetType: ip Name: !Ref \u0026#39;ServiceName\u0026#39; Port: !Ref \u0026#39;ContainerPort\u0026#39; Protocol: HTTP UnhealthyThresholdCount: 2 VpcId: Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;StackName\u0026#39;, \u0026#39;VPCId\u0026#39;]] LoadBalancerRule: Type: AWS::ElasticLoadBalancingV2::ListenerRule Properties: Actions: - TargetGroupArn: !Ref \u0026#39;TargetGroup\u0026#39; Type: \u0026#39;forward\u0026#39; Conditions: - Field: path-pattern Values: [!Ref \u0026#39;Path\u0026#39;] ListenerArn: Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;StackName\u0026#39;, \u0026#39;PublicListener\u0026#39;]] Priority: 1 We use some of the input parameters from above here with the !Ref function to set up the health check and the port on which the machines within the TargetGroup should receive traffic.\nWe put the target group into our VPC using the Fn::ImportValue function to import the VPC ID from the 
network stack (which must be up and running before we create the service stack).\nAlso, we replace the LoadBalancerRule we created in the network stack with a new one that points to the new TargetGroup.\nTask Definition Now it\u0026rsquo;s time to set up our Docker containers. In ECS, this is done via a TaskDefinition.\nWith a TaskDefinition, we can define the resources our containers need. ECS will then take care of downloading our Docker image and passing it to Fargate to provision the required EC2 resources:\nTaskDefinition: Type: AWS::ECS::TaskDefinition Properties: Family: !Ref \u0026#39;ServiceName\u0026#39; Cpu: !Ref \u0026#39;ContainerCpu\u0026#39; Memory: !Ref \u0026#39;ContainerMemory\u0026#39; NetworkMode: awsvpc RequiresCompatibilities: - FARGATE ExecutionRoleArn: Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;StackName\u0026#39;, \u0026#39;ECSTaskExecutionRole\u0026#39;]] ContainerDefinitions: - Name: !Ref \u0026#39;ServiceName\u0026#39; Cpu: !Ref \u0026#39;ContainerCpu\u0026#39; Memory: !Ref \u0026#39;ContainerMemory\u0026#39; Image: !Ref \u0026#39;ImageUrl\u0026#39; PortMappings: - ContainerPort: !Ref \u0026#39;ContainerPort\u0026#39; LogConfiguration: LogDriver: \u0026#39;awslogs\u0026#39; Options: awslogs-group: !Ref \u0026#39;ServiceName\u0026#39; awslogs-region: !Ref AWS::Region awslogs-stream-prefix: !Ref \u0026#39;ServiceName\u0026#39; There are a couple of important settings here.\nWith RequiresCompatibilities, we declare that we\u0026rsquo;re using FARGATE, which is the AWS infrastructure that takes care of running our Docker images without us having to provision EC2 instances ourselves. 
If we chose EC2 instead, we\u0026rsquo;d have to do that ourselves.\nUsing Fargate, we have to set the Docker NetworkMode to awsvpc.\nIn ExecutionRoleArn, we refer to the ECSTaskExecutionRole we have defined in the network stack earlier, to give the containers permission to send logs into CloudWatch.\nWithin the ContainerDefinitions comes an important part: we set the Image property to the URL of our Docker image so that ECS can download the image and put it into action.\nWe also define the ContainerPort (i.e. the port the container receives traffic on). Using Fargate, we cannot define a HostPort (i.e. the port the host receives traffic on and passes it to the container port). Instead, the host port is the same as the container port. But that doesn\u0026rsquo;t hurt us much, because the load balancer will translate from HTTP port 80 to the container port for us.\nFinally, we define a LogConfiguration that sends whatever our Docker containers log to the console to CloudWatch logs.\nECS Service The final piece of the puzzle is the ECS Service. 
It connects the load balancer to the task definition and puts docker containers into our public subnets:\nService: Type: AWS::ECS::Service DependsOn: LoadBalancerRule Properties: ServiceName: !Ref \u0026#39;ServiceName\u0026#39; Cluster: Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;StackName\u0026#39;, \u0026#39;ClusterName\u0026#39;]] LaunchType: FARGATE DeploymentConfiguration: MaximumPercent: 200 MinimumHealthyPercent: 50 DesiredCount: !Ref \u0026#39;DesiredCount\u0026#39; NetworkConfiguration: AwsvpcConfiguration: AssignPublicIp: ENABLED SecurityGroups: - Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;StackName\u0026#39;, \u0026#39;ECSSecurityGroup\u0026#39;]] Subnets: - Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;StackName\u0026#39;, \u0026#39;PublicSubnetOne\u0026#39;]] - Fn::ImportValue: !Join [\u0026#39;:\u0026#39;, [!Ref \u0026#39;StackName\u0026#39;, \u0026#39;PublicSubnetTwo\u0026#39;]] TaskDefinition: !Ref \u0026#39;TaskDefinition\u0026#39; LoadBalancers: - ContainerName: !Ref \u0026#39;ServiceName\u0026#39; ContainerPort: !Ref \u0026#39;ContainerPort\u0026#39; TargetGroupArn: !Ref \u0026#39;TargetGroup\u0026#39; We define FARGATE as the LaunchType, because we don\u0026rsquo;t want to provision the computing resources ourselves.\nIn the DeploymentConfiguration block, we tell ECS to have a maximum of double the desired containers and a minimum of half the desired containers running at the same time. ECS needs this leeway during deployments of new versions of the Docker images.\nIn the NetworkConfiguration we pass in the ECSSecurityGroup we defined in the network stack. 
Remember that this security group allows inbound traffic from the load balancer and from other containers in the same security group, which is required for new Docker containers to receive traffic as soon as they join the cluster.\nFinally, we tell the ECS service to run our task definition from above and connect it to the load balancer.\nRunning the Stacks Defining a CloudFormation stack is the hard part. Once a stack template is reliably defined, running stacks becomes a breeze.\nIt took me a lot of time to get here, even though I started with existing templates from GitHub.\nThe dev loop looked something like this:\n tweak the stack template create a stack with the template find out why it\u0026rsquo;s not working delete the stack start from the beginning.  Especially the \u0026ldquo;find out why it\u0026rsquo;s not working\u0026rdquo; part takes a lot of research and time if you\u0026rsquo;re not intimately familiar with all the AWS resources you\u0026rsquo;re using. I put some of the errors that cost me time in the troubleshooting section.\nStarting the Stacks Will Incur AWS Costs!  Starting a stack is fun because it creates a whole bunch of resources with the click of a button. But this also means that we have to pay for the resources it creates. Starting and stopping all stacks described in this article a couple of times will incur a cost in the ballpark of a few cents up to a couple of dollars, depending on how often you do it.  It cost me around $20 to start, stop, debug, and re-start the stacks over a week's time to prepare this article.  Creating the Network Stack Spinning up our stacks is now a matter of running a CLI command. 
Make sure you have the AWS CLI installed if you want to try it yourself.\nLet\u0026rsquo;s start with spinning up our network stack:\naws cloudformation create-stack \\ --stack-name reflectoring-hello-world-network \\ --template-body file://network.yml \\ --capabilities CAPABILITY_IAM We merely select a name for the stack, pass in the YAML template and give CloudFormation the CAPABILITY_IAM capability (i.e. we\u0026rsquo;re OK when CloudFormation creates or modifies identity and access management roles on our behalf).\nWe can check if the stack was successfully created by selecting the CloudFormation service in the AWS console and looking at the list of stacks available to us.\nAlternatively, we can use the AWS CLI and run the command aws cloudformation describe-stacks, which lists the status of all the stacks that are currently running.\nIt should only take a couple of minutes until the stack has reached the status CREATE_COMPLETE.\nCreating the Service Stack Once the network stack is up and running, we can create the service stack:\naws cloudformation create-stack \\ --stack-name reflectoring-hello-world-service \\ --template-body file://service.yml \\ --parameters \\ ParameterKey=StackName,ParameterValue=reflectoring-hello-world-network \\ ParameterKey=ServiceName,ParameterValue=reflectoring-hello-world \\ ParameterKey=ImageUrl,ParameterValue=docker.io/reflectoring/aws-hello-world:latest \\ ParameterKey=ContainerPort,ParameterValue=8080 \\ ParameterKey=HealthCheckPath,ParameterValue=/hello \\ ParameterKey=HealthCheckIntervalSeconds,ParameterValue=90 This looks a little more complicated, but only because we\u0026rsquo;re passing in a bunch of the parameters we defined in the service template.\nThe three mandatory parameters are\n the StackName, which we set to the name we used when creating the network stack, the ServiceName, which is used to name some of the resources created by CloudFormation (look for ServiceName in the YAML snippets in the previous sections), 
and the ImageUrl, which I pointed to the Docker image of my \u0026ldquo;hello world\u0026rdquo; Spring Boot application.  The rest are optional parameters that have sensible defaults, but we need to tweak them to work with the hello world application.\nThe application runs on port 8080, so we have to set this as the value for the ContainerPort parameter.\nAlso, the application only has a single HTTP endpoint, /hello, so we have to configure the health check to use this endpoint, otherwise, the health check will fail.\nBy default, the health check would run every 5 seconds. With the default of 256 CPU units for the ContainerCpu parameter (which is 1/4 vCPU), even the simple hello world Spring Boot application doesn\u0026rsquo;t manage to start up in 5 seconds, so we set the HealthCheckIntervalSeconds to 90.\nTesting the Stacks Assuming that the stacks were created successfully and have both reached the status CREATE_COMPLETE (check this on the CloudFormation page in the AWS console), we can test if the application is working as expected. If something didn\u0026rsquo;t work out, check the troubleshooting section for some hints.\nTo send some test requests to our application, we first need to know the URL of the load balancer. For this, we go to the EC2 page in the AWS Console and click on \u0026ldquo;Load Balancers\u0026rdquo; in the menu on the left. 
Clicking on the load balancer in the list, we see the \u0026ldquo;DNS name\u0026rdquo; in the \u0026ldquo;Description\u0026rdquo; tab at the bottom:\nPaste this URL into the browser, and add /hello to the end, and you should see a \u0026ldquo;Hello World\u0026rdquo; greeting!\nDeleting the Stacks When we\u0026rsquo;re done, we can delete the stacks:\naws cloudformation delete-stack \\ --stack-name reflectoring-hello-world-service Wait a bit until the service stack has reached the status DELETE_COMPLETE before deleting the network stack:\naws cloudformation delete-stack \\ --stack-name reflectoring-hello-world-network Troubleshooting Even though I started with existing CloudFormation templates, I stumbled over a couple of things. In case you have the same problems, here are some hints for troubleshooting.\nECS Service Stuck in Status CREATE_IN_PROGRESS At some point when starting the service stack, it got stuck in the CREATE_IN_PROGRESS status. Looking at the \u0026ldquo;Resources\u0026rdquo; tab of the stack in the CloudFormation page of the AWS console, I saw that this was because the ECSService resource was stuck in this status.\nTo find the root cause of this, I went to the ECS page of the AWS console and clicked on the ECS Cluster. In the \u0026ldquo;Tasks\u0026rdquo; tab, expand the container under \u0026ldquo;Containers\u0026rdquo;. Under \u0026ldquo;Details\u0026rdquo; I found a reason why the containers weren\u0026rsquo;t starting:\nCannotPullContainerError: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) The cause was that the ECS Service didn\u0026rsquo;t have access to the internet.\nMake sure that the ECS Service has a security group that allows outbound traffic to your Docker registry. 
In my case, the Docker registry was on the internet, so my ECS Service had to be in a public subnet, or in a private subnet with a NAT gateway that allows outbound traffic.\nECS Tasks Are Being Restarted Over and Over Another problem I had was that ECS Tasks seemed to be restarting again and again after the service stack had been created successfully.\nI observed this on the ECS page of the AWS console in the list of stopped tasks of my service, which grew longer and longer. Sometimes, a task would show an error message next to the STOPPED status:\nTask failed ELB health checks in (target-group arn:aws:elasticloadbalancing:\u0026lt;your-target-group\u0026gt;)) On the EC2 page of the AWS console, under \u0026ldquo;Load Balancing -\u0026gt; Target Groups -\u0026gt; Select target group -\u0026gt; Registered Targets\u0026rdquo;, I saw this error message:\nHealth checks failed with these codes: [502] To find the source of this, I added the LogConfiguration block to the task definition and restarted the service stack.\nThe logs showed that the Spring Boot app started without error, but that it took 45 seconds, which is a lot for a dummy hello world application (but can be explained by only providing it with 256 CPU units)! And according to the logs, the application shut down shortly after that.\nSince the health check interval was configured to only 5 seconds, the check kept failing and ECS restarted the tasks over and over.\nI increased the health check interval to 90 seconds and it worked.\nThe AWS Journey So, we\u0026rsquo;ve successfully deployed a network and a Docker container to the cloud.\nCloudFormation is a mighty tool that can spin up whole infrastructures in minutes, but you need to understand the AWS resources and their interdependencies to create a template that works.\nWe\u0026rsquo;re still at the beginning of the AWS Journey. 
There\u0026rsquo;s a lot more ground to cover before we arrive at a production-ready, continuously deployable system.\nHere\u0026rsquo;s a list of the questions I want to answer on this journey. If there\u0026rsquo;s a link, it has already been answered with a blog post! If not, stay tuned!\n How can I deploy an application from the web console? How can I deploy an application from the command line? (this article) How can I implement high availability for my deployed application? (this article) How do I set up load balancing? (this article) How can I deploy a database in a private subnet and access it from my application? How can I deploy my application from a CI/CD pipeline? How can I deploy a new version of my application without downtime? How can I deploy my application into multiple environments (test, staging, production)? How can I auto-scale my application horizontally on high load? How can I implement sticky sessions in the load balancer (if I\u0026rsquo;m building a session-based webapp)? How can I monitor what’s happening on my application? How can I bind my application to a custom domain? How can I access other AWS resources (like SQS queues and DynamoDB tables) from my application? How can I implement HTTPS?  Check Out the Book!  
This article gives only a first impression of what you can do with Docker and AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"May 2, 2020","image":"https://reflectoring.io/images/stock/0061-cloud-1200x628-branded_hu34d6aa247e0bb2675461b5a0146d87a8_82985_650x0_resize_q90_box.jpg","permalink":"/aws-cloudformation-deploy-docker-image/","title":"The AWS Journey Part 2: Deploying a Docker Image with AWS CloudFormation"},{"categories":["Spring Boot"],"contents":"Spring Boot provides integration with database migration tools Liquibase and Flyway. This guide provides an overview of Liquibase and how to use it in a Spring Boot application for managing and applying database schema changes.\n Example Code This article is accompanied by a working code example on GitHub. Why Do We Need Database Migration Tools? Database migration tools help us to track, version control, and automate database schema changes. They help us to have a consistent schema across different environments.\nRefer to our guides for more details on why we need database migration tools and for a quick comparison of Liquibase and Flyway.\nIntroduction to Liquibase Liquibase facilitates database migrations with not only plain old SQL scripts, but also with different abstract, database-agnostic formats including XML, YAML, and JSON. When we use non-SQL formats for database migrations, Liquibase generates the database-specific SQL for us. It takes care of variations in data types and SQL syntax for different databases. It supports most of the popular relational databases.\nLiquibase allows enhancements for databases it currently supports through Liquibase extensions. 
These extensions can be used to add support for additional databases as well.\nCore Concepts of Liquibase Let\u0026rsquo;s have a look at the vocabulary of Liquibase:\n  ChangeSet: A changeSet is a set of changes that need to be applied to a database. Liquibase tracks the execution of changes at a ChangeSet level.\n  Change: A change describes a single change that needs to be applied to the database. Liquibase provides several change types like \u0026ldquo;create table\u0026rdquo; or \u0026ldquo;drop column\u0026rdquo; out of the box, which are each an abstraction over a piece of SQL.\n  Changelog: The file which has the list of database changeSets that need to be applied is called a changelog. These changelog files can be in either SQL, YAML, XML, or JSON format.\n  Preconditions: Preconditions are used to control the execution of changelogs or changeSets. They are used to define the state of the database under which the changeSets or changelogs need to be executed.\n  Context: A changeSet can be tagged with a context expression. Liquibase will evaluate this expression to determine if a changeSet should be executed at runtime, given a specific context. You could compare a context expression with environment variables.\n  Labels: The purpose of Labels is similar to that of contexts. The difference is that changeSets are tagged with a list of labels (not expressions), and during runtime, we can pass a label expression to choose the changeSets which match the expression.\n  Changelog Parameters: Liquibase allows us to have placeholders in changelogs, which it dynamically substitutes at runtime.\n  Liquibase creates the two tables databasechangelog and databasechangeloglock when it runs in a database for the first time. It uses the databasechangelog table to keep track of the status of the execution of changeSets and databasechangeloglock to prevent concurrent executions of Liquibase. 
Refer to the docs for more details.\nLiquibase with Spring Boot Now that we went through the basics of Liquibase, let\u0026rsquo;s see how to get Liquibase running in a Spring Boot application.\nSetting Up Liquibase in Spring Boot By default, Spring Boot auto-configures Liquibase when we add the Liquibase dependency to our build file.\nSpring Boot uses the primary DataSource to run Liquibase (i.e. the one annotated with @Primary if there is more than one). In case we need to use a different DataSource, we can mark that bean as @LiquibaseDataSource.\nAlternatively, we can set the spring.liquibase.[url,user,password] properties, so that Spring creates a DataSource on its own and uses it to auto-configure Liquibase.\nBy default, Spring Boot runs Liquibase database migrations automatically on application startup.\nIt looks for a master changelog file in the folder db/changelog within the classpath with the name db.changelog-master.yaml. If we want to use other Liquibase changelog formats or use a different file naming convention, we can configure the spring.liquibase.change-log application property to point to a different master changelog file.\nFor example, to use db/migration/my-master-change-log.json as the master changelog file, we set the following property in application.yml:\nspring: liquibase: changeLog: \u0026#34;classpath:db/migration/my-master-change-log.json\u0026#34; The master changelog can include other changelogs so that we can split our changes up in logical steps.\nRunning Our First Database Migration After setting everything up, let\u0026rsquo;s create our first database migration. 
We\u0026rsquo;ll create the database table user_details in this example.\nLet\u0026rsquo;s create a file with the name db.changelog-master.yaml and place it in src/main/resources/db/changelog:\ndatabaseChangeLog: - include: file: db/changelog/db.changelog-yaml-example.yaml The master file is just a collection of includes that point to changelogs with the actual changes.\nNext, we create the changelog with the first actual changeSet and put it into the file src/main/resources/db/changelog/db.changelog-yaml-example.yaml:\ndatabaseChangeLog: - changeSet: id: create-table-user author: liquibase-demo-service preConditions: - onFail: MARK_RAN not: tableExists: tableName: user_details changes: - createTable: columns: - column: autoIncrement: true constraints: nullable: false primaryKey: true primaryKeyName: user_pkey name: id type: BIGINT - column: constraints: nullable: false name: username type: VARCHAR(250) - column: constraints: nullable: false name: first_name type: VARCHAR(250) - column: name: last_name type: VARCHAR(250) tableName: user_details We used the changeType createTable, which abstracts the creation of a table. Liquibase will convert the above changeSet to the appropriate SQL based on the database that our application uses.\nThe preCondition checks that the user_details table does not exist before executing this change. If the table already exists, Liquibase marks the changeSet as having run successfully without actually running it.\nNow, when we run the Spring Boot application, Liquibase executes the changeSet, which creates the user_details table with user_pkey as the primary key.\nUsing Changelog Parameters Changelog parameters come in very handy when we want to replace placeholders with different values for different environments. 
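To make the mechanism concrete, here is a tiny, self-contained sketch of the placeholder substitution idea. This is purely illustrative and not Liquibase's actual implementation; the class name and the regex are made up for this example:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch (NOT Liquibase's code): replace ${param} tokens
// in a changelog snippet with values from a parameter map.
public class ChangelogParameterDemo {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)}");

    public static String substitute(String template, Map<String, String> parameters) {
        Matcher matcher = PLACEHOLDER.matcher(template);
        StringBuilder result = new StringBuilder();
        while (matcher.find()) {
            // Unknown parameters are left untouched, like group(0) itself.
            String value = parameters.getOrDefault(matcher.group(1), matcher.group(0));
            matcher.appendReplacement(result, Matcher.quoteReplacement(value));
        }
        matcher.appendTail(result);
        return result.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = Map.of("textColumnType", "VARCHAR(250)");
        System.out.println(substitute("type: ${textColumnType}", params)); // prints "type: VARCHAR(250)"
    }
}
```

The real substitution happens inside Liquibase when it parses the changelog; we only supply the parameter values, as shown next.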
We can set these parameters using the application property spring.liquibase.parameters, which takes a map of key/value pairs:\nspring: profiles: docker liquibase: parameters: textColumnType: TEXT contexts: local --- spring: profiles: h2 liquibase: parameters: textColumnType: VARCHAR(250) contexts: local  We set the Liquibase parameter textColumnType to VARCHAR(250) when Spring Boot starts in the h2 profile and to TEXT when it starts in the docker profile (assuming that the docker profile starts up a \u0026ldquo;real\u0026rdquo; database).\nWe can now use this parameter in a changelog:\ndatabaseChangeLog: - changeSet: ... changes: - createTable: columns: ... - column: constraints: nullable: false name: username type: ${textColumnType} Now, when the Spring Boot application runs in the docker profile, it uses TEXT as the column type, and in the h2 profile it uses VARCHAR(250).\nUse the Same Database for All Environments!  The code example assumes the usage of different types of databases in different environments for demonstrating the use of the changelog parameter. Please avoid using different types of databases for different staging environments. Doing so will cause hard-to-debug errors stemming from the differences between environments.  Using Liquibase Context As described earlier, contexts can be used to control which changeSets should run. 
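To build an intuition for how such a context expression is matched against the contexts active at runtime, here's a simplified sketch. It is illustrative only: Liquibase's real evaluator also supports and, not, and parentheses, and the class name is made up for this example:

```java
import java.util.Arrays;
import java.util.Set;

// Simplified illustration (NOT Liquibase's evaluator) of matching a
// context expression such as "test or local" against active contexts.
public class ContextExpressionDemo {

    // The expression matches if any of its "or"-separated contexts is active.
    public static boolean matches(String expression, Set<String> activeContexts) {
        return Arrays.stream(expression.split("\\bor\\b"))
                .map(String::trim)
                .anyMatch(activeContexts::contains);
    }

    public static void main(String[] args) {
        System.out.println(matches("test or local", Set.of("test")));       // true
        System.out.println(matches("test or local", Set.of("production"))); // false
    }
}
```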
Let\u0026rsquo;s use this to add test data in the test and local environments:\n\u0026lt;databaseChangeLog\u0026gt; \u0026lt;changeSet author=\u0026#34;liquibase-docs\u0026#34; id=\u0026#34;loadUpdateData-example\u0026#34; context=\u0026#34;test or local\u0026#34;\u0026gt; \u0026lt;loadUpdateData encoding=\u0026#34;UTF-8\u0026#34; file=\u0026#34;db/data/users.csv\u0026#34; onlyUpdate=\u0026#34;false\u0026#34; primaryKey=\u0026#34;id\u0026#34; quotchar=\u0026#34;\u0026#39;\u0026#34; separator=\u0026#34;,\u0026#34; tableName=\u0026#34;user_details\u0026#34;\u0026gt; \u0026lt;/loadUpdateData\u0026gt; \u0026lt;/changeSet\u0026gt; \u0026lt;/databaseChangeLog\u0026gt; We\u0026rsquo;re using the expression test or local so it runs for these contexts, but not in production.\nWe now need to pass the context to Liquibase using the property spring.liquibase.contexts:\n--- spring: profiles: docker liquibase: parameters: textColumnType: TEXT contexts: test Configuring Liquibase in Spring Boot As a reference, here\u0026rsquo;s a list of all properties that Spring Boot provides to configure the behavior of Liquibase.\n   Property Description     spring.liquibase.changeLog Master changelog configuration path. Defaults to classpath:/db/changelog/db.changelog-master.yaml,   spring.liquibase.contexts Comma-separated list of runtime contexts to use.   spring.liquibase.defaultSchema Schema to use for managed database objects and Liquibase control tables.   spring.liquibase.liquibaseSchema Schema for Liquibase control tables.   spring.liquibase.liquibaseTablespace Tablespace to use for Liquibase objects.   spring.liquibase.databaseChangeLogTable To specify a different table to use for tracking change history. Default is DATABASECHANGELOG.   spring.liquibase.databaseChangeLogLockTable To specify a different table to use for tracking concurrent Liquibase usage. Default is DATABASECHANGELOGLOCK.   
spring.liquibase.dropFirst Indicates whether to drop the database schema before running the migration. Do not use this in production! Default is false.   spring.liquibase.user Login username to connect to the database.   spring.liquibase.password Login password to connect to the database.   spring.liquibase.url JDBC URL of the database to migrate. If not set, the primary configured data source is used.   spring.liquibase.labels Label expression to be used when running liquibase.   spring.liquibase.parameters Parameters map to be passed to Liquibase.   spring.liquibase.rollbackFile File to which rollback SQL is written when an update is performed.   spring.liquibase.testRollbackOnUpdate Whether rollback should be tested before the update is performed. Default is false.    Enabling Logging for Liquibase in Spring Boot Enabling INFO level logging for Liquibase will help to see the changeSets that Liquibase executes during the start of the application. It also helps to identify that the application has not started yet because it is waiting to acquire changeloglock during the startup.\nAdd the following application property in application.yml to enable INFO logs:\nlogging: level: \u0026#34;liquibase\u0026#34; : info Best Practices Using Liquibase   Organizing Changelogs: Create a master changelog file that does not have actual changeSets but includes other changelogs (only YAML, JSON, and XML support using include, SQL does not). Doing so allows us to organize our changeSets in different changelog files. Every time we add a new feature to the application that requires a database change, we can create a new changelog file, add it to version control, and include it in the master changelog.\n  One Change per ChangeSet: Have only one change per changeSet, as this allows easier rollback in case of a failure in applying the changeSet.\n  Don\u0026rsquo;t Modify a ChangeSet: Never modify a changeSet once it has been executed. 
Instead, add a new changeSet if modifications are needed for the change that has been applied by an existing changeSet. Liquibase keeps track of the checksums of the changeSets that it already executed. If an already run changeSet is modified, Liquibase by default will fail to run that changeSet again, and it will not proceed with the execution of other changeSets.\n  ChangeSet Id: Liquibase allows us to have a descriptive name for changeSets. Prefer using a unique descriptive name as the changeSetId instead of using a sequence number. Descriptive names enable multiple developers to add different changeSets without worrying about the next sequence number they need to select for the changeSetId.\n  Reference data management: Use Liquibase to populate reference data and code tables that the application needs. Doing so allows us to deploy the application and the configuration data it needs together. Liquibase provides the changeType loadUpdateData to support this.\n  Use Preconditions: Have preconditions for changeSets. They ensure that Liquibase checks the database state before applying the changes.\n  Test Migrations: Make sure you always test the migrations that you have written locally before applying them in a real non-production or production environment. Always use Liquibase to run database migrations in non-production and production environments instead of manually performing database changes.\n  Running Liquibase automatically during the Spring Boot application startup makes it easy to ship application code changes and database changes together. But in instances like adding indexes to existing database tables with lots of data, the application might take a longer time to start. 
One option is to pre-release the database migrations (releasing database changes ahead of the code that needs them) and run them asynchronously.\nOther Ways of Running Liquibase Liquibase supports a range of other options to run database migrations apart from the Spring Boot integration:\n via Maven plugin via Gradle plugin via Command line via JEE CDI Integration via Servlet Listener  Liquibase has a Java API that we can use in any Java-based application to perform database migrations.\nConclusion Liquibase helps to automate database migrations, and Spring Boot makes it easier to use Liquibase. This guide provided details on how to use Liquibase in a Spring Boot application and some best practices.\nYou can find the example code on GitHub.\nWe also have a guide on using Flyway, another popular alternative for database migrations.\n","date":"April 15, 2020","image":"https://reflectoring.io/images/stock/0060-data-1200x628-branded_hue5f55076dc203147ceba2a59a969fa03_177458_650x0_resize_q90_box.jpg","permalink":"/database-migration-spring-boot-liquibase/","title":"One-Stop Guide to Database Migration with Liquibase and Spring Boot"},{"categories":["Spring Boot"],"contents":"I mistrust tools and products that have the word \u0026ldquo;simple\u0026rdquo; in their name. This was also the case when I had First Contact with AWS\u0026rsquo;s \u0026ldquo;Simple Queue Service\u0026rdquo; or SQS.\nAnd while it is rather simple to send messages to an SQS queue, there are some things to consider when retrieving messages from it. It\u0026rsquo;s not rocket science, but it requires some careful design to build a robust and scalable message handler.\nThis article shows a way of implementing a component that is capable of sending messages to and retrieving messages from an SQS queue in a robust and scalable manner. 
In the end, we\u0026rsquo;ll wrap this component into a Spring Boot starter to be used in our Spring Boot applications.\nGet the SQS Starter Library The code in this article comes from the SQS Starter library that I built for one of my projects. It\u0026rsquo;s available on Maven Central, and I welcome any contributions you might have to make it better.\nIsn\u0026rsquo;t the AWS SDK Good Enough? AWS provides an SDK with functionality to interact with an SQS queue. And it\u0026rsquo;s quite good and easy to use.\nHowever, it\u0026rsquo;s missing a polling mechanism that allows us to pull messages from the queue regularly and process them in near real-time across a pool of message handlers working in parallel.\nThis is exactly what we\u0026rsquo;ll be building in this article.\nAs a bonus, we\u0026rsquo;ll build a message publisher that wraps the AWS SDK and adds a little extra robustness in the form of retries.\nBuilding a Robust Message Publisher Let\u0026rsquo;s start with the easy part and look at publishing messages.\nThe AmazonSQS client, which is part of the AWS SDK, provides the methods sendMessage() and sendMessageBatch() to send messages to an SQS queue.\nIn our publisher, we wrap sendMessage() to create a slightly more high-level message publisher that\n serializes a message object into JSON, sends the message to a specified SQS queue, and retries this if SQS returns an error response:  public abstract class SqsMessagePublisher\u0026lt;T\u0026gt; { private final String sqsQueueUrl; private final AmazonSQS sqsClient; private final ObjectMapper objectMapper; private final RetryRegistry retryRegistry; // constructors ...  
public void publish(T message) { Retry retry = retryRegistry.retry(\u0026#34;publish\u0026#34;); retry.executeRunnable(() -\u0026gt; doPublish(message)); } private void doPublish(T message) { try { SendMessageRequest request = new SendMessageRequest() .withQueueUrl(sqsQueueUrl) .withMessageBody(objectMapper.writeValueAsString(message)); SendMessageResult result = sqsClient.sendMessage(request); if (result.getSdkHttpMetadata().getHttpStatusCode() != 200) { throw new RuntimeException( String.format(\u0026#34;got error response from SQS queue %s: %s\u0026#34;, sqsQueueUrl, result.getSdkHttpMetadata())); } } catch (JsonProcessingException e) { throw new IllegalStateException(\u0026#34;error sending message to SQS: \u0026#34;, e); } } } In the publish() method, we use resilience4j\u0026rsquo;s retry functionality to configure a retry behavior. We can modify this behavior by configuring the RetryRegistry that is passed into the constructor. Note that the AWS SDK provides its own retry behavior, but I opted for the more generic resilience4j library here.\nThe interaction with SQS happens in the internal doPublish() method. Here, we build a SendMessageRequest and send that to SQS via the AmazonSqs client from the Amazon SDK. 
If the returned HTTP status code is not 200, we throw an exception so that the retry mechanism knows something went wrong and will trigger a retry.\nIn our application, we can now simply extend the abstract SqsMessagePublisher class, instantiate that class and call the publish() method to send messages to a queue.\nBuilding a Robust Message Handler Now to the more involved part: building a message handler that regularly polls an SQS queue and fans out the messages it receives to multiple message handlers in a thread pool.\nThe SqsMessageHandler Interface Let\u0026rsquo;s start with the message handler interface:\npublic interface SqsMessageHandler\u0026lt;T\u0026gt; { void handle(T message); Class\u0026lt;T\u0026gt; messageType(); } For each SQS queue, we implement this interface to handle the messages we receive from that queue. Note that we\u0026rsquo;re assuming that all messages in a queue are of the same type!\nThe SqsMessageHandler interface gives us type safety. Instead of having to work with Strings, we can now work with message types.\nBut we still need some infrastructure to get messages from SQS, deserialize them into objects of our message type, and finally pass them to our message handler.\nFetching Messages from SQS Next, we build a SqsMessageFetcher class that fetches messages from an SQS queue:\nclass SqsMessageFetcher { private static final Logger logger = ...; private final AmazonSQS sqsClient; private final SqsMessagePollerProperties properties; // constructor ...  
List\u0026lt;Message\u0026gt; fetchMessages() { ReceiveMessageRequest request = new ReceiveMessageRequest() .withMaxNumberOfMessages(properties.getBatchSize()) .withQueueUrl(properties.getQueueUrl()) .withWaitTimeSeconds((int) properties.getWaitTime().toSeconds()); ReceiveMessageResult result = sqsClient.receiveMessage(request); if (result.getSdkHttpMetadata().getHttpStatusCode() != 200) { logger.error(\u0026#34;got error response from SQS queue {}: {}\u0026#34;, properties.getQueueUrl(), result.getSdkHttpMetadata()); return Collections.emptyList(); } logger.debug(\u0026#34;polled {} messages from SQS queue {}\u0026#34;, result.getMessages().size(), properties.getQueueUrl()); return result.getMessages(); } } Again, we use the AmazonSqs client, but this time to create a ReceiveMessageRequest and return the Messages we received from the SQS queue. We can configure some parameters in the SqsMessagePollerProperties object that we pass into this class.\nAn important detail is that we\u0026rsquo;re configuring the waitTimeSeconds on the request to tell the Amazon SDK to wait up to the configured number of seconds for maxNumberOfMessages messages to become available before returning a list of messages (or an empty list if there weren\u0026rsquo;t any after that time). With these configuration parameters, we have effectively implemented a long polling mechanism if we call our fetchMessages() method regularly.\nNote that we\u0026rsquo;re not throwing an exception in case of a non-success HTTP response code. This is because we\u0026rsquo;re expecting fetchMessages() to be called frequently at short intervals. 
We just hope that the call will succeed the next time.\nPolling Messages The next layer up, we build a SqsMessagePoller class that calls our SqsMessageFetcher at regular intervals to implement the long polling mechanism mentioned earlier:\nclass SqsMessagePoller\u0026lt;T\u0026gt; { private static final Logger logger = ...; private final SqsMessageHandler\u0026lt;T\u0026gt; messageHandler; private final SqsMessageFetcher messageFetcher; private final SqsMessagePollerProperties pollingProperties; private final AmazonSQS sqsClient; private final ObjectMapper objectMapper; private final ThreadPoolExecutor handlerThreadPool; // other methods omitted  private void poll() { List\u0026lt;Message\u0026gt; messages = messageFetcher.fetchMessages(); for (Message sqsMessage : messages) { try { final T message = objectMapper.readValue( sqsMessage.getBody(), messageHandler.messageType()); handlerThreadPool.submit(() -\u0026gt; { messageHandler.handle(message); acknowledgeMessage(sqsMessage); }); } catch (JsonProcessingException e) { logger.warn(\u0026#34;error parsing message: \u0026#34;, e); } } } private void acknowledgeMessage(Message message) { sqsClient.deleteMessage( pollingProperties.getQueueUrl(), message.getReceiptHandle()); } } In the poll() method, we get some messages from the message fetcher. We then deserialize each message from the JSON string we receive from the Amazon SDK\u0026rsquo;s Message object.\nNext, we pass the message object into the handle() method of an SqsMessageHandler instance. We don\u0026rsquo;t do this in the current thread, though, but instead defer the execution to a thread in a special thread pool (handlerThreadPool). This way, we can fan out the processing of messages into multiple concurrent threads.\nAfter a message has been handled, we need to tell SQS that we have handled it successfully. We do this by calling the deleteMessage() API. 
If we didn\u0026rsquo;t, SQS would serve this message again after some time with one of the next calls to our SqsMessageFetcher.\nStarting and Stopping the Polling A piece that is still missing from the puzzle is how to start the polling. You might have noticed that the poll() method is private, so it needs to be called from somewhere within the SqsMessagePoller class.\nSo, we add a start() and a stop() method to the class, allowing us to start and stop the polling:\nclass SqsMessagePoller\u0026lt;T\u0026gt; { private static final Logger logger = ...; private final SqsMessagePollerProperties pollingProperties; private final ScheduledThreadPoolExecutor pollerThreadPool; private final ThreadPoolExecutor handlerThreadPool; void start() { logger.info(\u0026#34;starting SqsMessagePoller\u0026#34;); for (int i = 0; i \u0026lt; pollerThreadPool.getCorePoolSize(); i++) { logger.info(\u0026#34;starting SqsMessagePoller - thread {}\u0026#34;, i); pollerThreadPool.scheduleWithFixedDelay( this::poll, 1, pollingProperties.getPollDelay().toSeconds(), TimeUnit.SECONDS); } } void stop() { logger.info(\u0026#34;stopping SqsMessagePoller\u0026#34;); pollerThreadPool.shutdownNow(); handlerThreadPool.shutdownNow(); } // other methods omitted ...  } With pollerThreadPool, we have introduced a second thread pool. In start(), we schedule a call to our poll() method as a recurring task to this thread pool every couple of seconds after the last call has finished.\nNote that for most cases, it should be enough if the poller thread pool has a single thread. We\u0026rsquo;d need a lot of messages on a queue and a lot of concurrent message handlers to need more than one poller thread.\nIn the stop() method, we just shut down the poller and handler thread pools so that they stop accepting new work.\nRegistering Message Handlers The final part to get everything to work is a piece of code that wires everything together. We\u0026rsquo;ll want to have a registry where we can register a message handler. 
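The scheduling primitive doing the work here is scheduleWithFixedDelay(), which starts the delay for the next run only after the previous run has finished. A minimal, SQS-free demo (shorter delays than the real poller so it runs quickly):

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedDelayDemo {

  public static int pollCountAfter(long millis) {
    ScheduledThreadPoolExecutor pollerThreadPool = new ScheduledThreadPoolExecutor(1);
    AtomicInteger polls = new AtomicInteger();
    // recurring task: run, wait 10ms after it finishes, run again
    pollerThreadPool.scheduleWithFixedDelay(
        polls::incrementAndGet, // stands in for this::poll
        0, 10, TimeUnit.MILLISECONDS);
    try {
      Thread.sleep(millis); // let it poll for a while
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    pollerThreadPool.shutdownNow(); // as in stop()
    return polls.get();
  }

  public static void main(String[] args) {
    // after 300ms with a 10ms delay, many polls have happened
    System.out.println(pollCountAfter(300) > 0);
  }
}
```

After shutdownNow(), no further runs are scheduled, which is exactly the behavior stop() relies on.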
The registry will then take care of creating the message fetcher and poller required to serve messages to the handler.\nBut first, we need a data structure that takes all the configuration parameters needed to register a message handler. We\u0026rsquo;ll call this interface SqsMessageHandlerRegistration:\npublic interface SqsMessageHandlerRegistration\u0026lt;T\u0026gt; { /** * The message handler that shall process the messages polled from SQS. */ SqsMessageHandler\u0026lt;T\u0026gt; messageHandler(); /** * A human-readable name for the message handler. This is used to name * the message handler threads. */ String name(); /** * Configuration properties for the message handler. */ SqsMessageHandlerProperties messageHandlerProperties(); /** * Configuration properties for the message poller. */ SqsMessagePollerProperties messagePollerProperties(); /** * The SQS client to use for polling messages from SQS. */ AmazonSQS sqsClient(); /** * The {@link ObjectMapper} to use for deserializing messages from SQS. */ ObjectMapper objectMapper(); } A registration contains the message handler and everything that\u0026rsquo;s needed to instantiate and configure an SqsMessagePoller and the underlying SqsMessageFetcher.\nWe\u0026rsquo;ll then want to pass a list of such registrations to our registry:\nList\u0026lt;SqsMessageHandlerRegistration\u0026gt; registrations = ...; SqsMessageHandlerRegistry registry = new SqsMessageHandlerRegistry(registrations); registry.start(); ... registry.stop(); The registry takes the registrations and initializes the thread pools, a fetcher, and a poller for each message handler. 
We can then call start() and stop() on the registry to start and stop the message polling.\nThe registry code will look something like this:\nclass SqsMessageHandlerRegistry { private static final Logger logger = ...; private final Set\u0026lt;SqsMessagePoller\u0026lt;?\u0026gt;\u0026gt; pollers; public SqsMessageHandlerRegistry( List\u0026lt;SqsMessageHandlerRegistration\u0026lt;?\u0026gt;\u0026gt; messageHandlerRegistrations) { this.pollers = initializePollers(messageHandlerRegistrations); } private Set\u0026lt;SqsMessagePoller\u0026lt;?\u0026gt;\u0026gt; initializePollers( List\u0026lt;SqsMessageHandlerRegistration\u0026lt;?\u0026gt;\u0026gt; registrations) { Set\u0026lt;SqsMessagePoller\u0026lt;?\u0026gt;\u0026gt; pollers = new HashSet\u0026lt;\u0026gt;(); for (SqsMessageHandlerRegistration\u0026lt;?\u0026gt; registration : registrations) { pollers.add(createPollerForHandler(registration)); logger.info(\u0026#34;initialized SqsMessagePoller \u0026#39;{}\u0026#39;\u0026#34;, registration.name()); } return pollers; } private SqsMessagePoller\u0026lt;?\u0026gt; createPollerForHandler( SqsMessageHandlerRegistration\u0026lt;?\u0026gt; registration) { ... } public void start() { for (SqsMessagePoller\u0026lt;?\u0026gt; poller : this.pollers) { poller.start(); } } public void stop() { for (SqsMessagePoller\u0026lt;?\u0026gt; poller : this.pollers) { poller.stop(); } } } The registry code is pretty straightforward glue code. For each registration, we create a poller. We collect the pollers in a set so that we can reference them in start() and stop().\nIf we call start() on the registry now, each poller will start polling messages from SQS in a separate thread and fan the messages out to message handlers living in a separate thread pool for each message handler.\nCreating a Spring Boot Auto-Configuration The code above will work with plain Java, but I promised to make it work with Spring Boot. 
For this, we can create a Spring Boot starter.\nThe starter consists of a single auto-configuration class:\n@Configuration class SqsAutoConfiguration { @Bean SqsMessageHandlerRegistry sqsMessageHandlerRegistry( List\u0026lt;SqsMessageHandlerRegistration\u0026lt;?\u0026gt;\u0026gt; registrations) { return new SqsMessageHandlerRegistry(registrations); } @Bean SqsLifecycle sqsLifecycle(SqsMessageHandlerRegistry registry) { return new SqsLifecycle(registry); } } In this configuration, we register our registry from above and pass all SqsMessageHandlerRegistration beans into it.\nTo register a message handler, all we have to do now is to add a SqsMessageHandlerRegistration bean to the Spring application context.\nAdditionally, we add a lifecycle bean to the application context:\n@RequiredArgsConstructor class SqsAutoConfigurationLifecycle implements ApplicationListener\u0026lt;ApplicationReadyEvent\u0026gt; { private final SqsMessageHandlerRegistry registry; @Override public void onApplicationEvent(ApplicationReadyEvent event) { registry.start(); } @PreDestroy public void destroy() { registry.stop(); } } This lifecycle bean has the sole job of starting up our registry when the Spring Boot application starts up and stopping it again on shutdown.\nFinally, to make the SqsAutoConfiguration a real auto-configuration, we need to add it to the META-INF/spring.factories file for Spring to pick up on application startup:\norg.springframework.boot.autoconfigure.EnableAutoConfiguration=\\ io.reflectoring.sqs.internal.SqsAutoConfiguration Conclusion In this article, we went through a way of implementing a robust message publisher and message handler to interact with an SQS queue. 
The Amazon SDK provides an easy-to-use interface, but we wrapped it in a layer that adds robustness in the form of retries and scalability in the form of a configurable thread pool to handle messages.\nThe full code explained in this article is available as a Spring Boot starter on GitHub and Maven Central to use at your leisure.\n","date":"April 13, 2020","image":"https://reflectoring.io/images/stock/0035-switchboard-1200x628-branded_hu8b558f13f0313494c9155ce4fc356d65_235224_650x0_resize_q90_box.jpg","permalink":"/spring-robust-sqs-client/","title":"Building a Robust SQS Client with Spring Boot"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you\u0026rsquo;re a manager, you don\u0026rsquo;t know much about software development and want to learn about agile software development you haven\u0026rsquo;t worked in an agile manner before and want to know the key methods you have worked in an agile manner before and want some re-affirmation that you\u0026rsquo;re doing the right thing  Book Facts  Title: The Nature of Software Development Author: Ron Jeffries Word Count: ~ 20.000 (1.5 hours at 250 words / minute) Reading Ease: easy Writing Style: \u0026ldquo;grandfatherly advice\u0026rdquo; (i.e. telling how it\u0026rsquo;s done without going into too much detail)  Overview {% include book-link.html book=\u0026ldquo;nature-of-software-development\u0026rdquo; %} by Ron Jeffries is an opinionated piece about agile software development. Jeffries, as one of the founders of the Agile Manifesto, sees Agile as the \u0026ldquo;natural way\u0026rdquo; of creating software - the way with the least resistance towards getting good results.\nThe book is very short. It\u0026rsquo;s even shorter than the 150 pages suggest, as many pages contain a doodle visualizing the current topic. 
Even then, many pages have a lot of white space, so don\u0026rsquo;t expect an epic tale.\nInstead, the book goes over all important topics of managing software development on a very(!) high level. You won\u0026rsquo;t get very specific advice on how to develop software, but you\u0026rsquo;ll get advice that might help high-level managers to appreciate the benefits of agile software development.\nWhile I agree that Agile is a \u0026ldquo;natural\u0026rdquo; way of software development, I miss some real-life tales that prove what Jeffries is stating in the book. The book is full of generic statements like \u0026ldquo;The nature of the work requires us to test and refactor\u0026rdquo; without going deep into the why and how of it.\nNotes Here are my notes, as usual with some comments in italics.\nValue  software development must be driven by value value is only delivered by shipping features prioritize cheap features with high value  Feature by Feature  activity-based planning is \u0026ldquo;all-or-nothing\u0026rdquo; - it\u0026rsquo;s hard to change things in later phases instead, deliver feature by feature  earlier results more information to steer the project more flexible to changing requirements    Feature Teams  organize teams so that a feature doesn\u0026rsquo;t have to be handed over between teams (I would add that teams should have clear code ownership of software components wherever possible and should only share code ownership of components if not otherwise possible) create cross-team communities of practice to share expertise in certain disciplines  Planning  identify high-value features and plan them - defer low-value features set a time and money budget instead of a feature scope do the important stuff first the team works in intervals, taking as much work into each interval as they think they can do estimation is risky - we spend time trying to improve estimations and compare them to other estimates \u0026ldquo;Pressure is destructive. 
Avoid it.\u0026rdquo; \u0026ldquo;Estimates are likely to be wrong, and they focus our attention on the cost of things rather than on value.\u0026rdquo;  Building  it\u0026rsquo;s critical to have a product vision and to refine it continuously - otherwise, we\u0026rsquo;ll have a bad return on investment \u0026ldquo;done\u0026rdquo; must actually mean \u0026ldquo;done\u0026rdquo; so we can be transparent with our progress and build trust with our stakeholders be nearly bug-free at the end of each iteration to avoid a never-ending test phase at the end of the project  Features and Foundation in Parallel  \u0026ldquo;Everything we build must rest on a solid foundation\u0026rdquo; - Architecture, Design, Infrastructure build minimal versions of many features instead of full versions of few features to get feedback quickly build as much foundation as necessary  Bug-free and Well Designed  prioritize fixing of defects because they reduce certainty and plannability have extensive test coverage to save time fixing bugs and building new features \u0026ldquo;Good designs go bad one decision at a time.\u0026rdquo; \u0026ldquo;The nature of the work requires us to test and refactor.\u0026rdquo;  Value  the definition of value is different in every project context and is not only monetary measuring value on a numerical scale is most often not accurate because we\u0026rsquo;re only estimating instead, compare features and decide which one is more valuable in relation to other features - do that one first  Teams  the role \u0026ldquo;Product Owner\u0026rdquo; implies that a team doesn\u0026rsquo;t own the software - a more fitting name would be \u0026ldquo;Product Champion\u0026rdquo; let the team autonomously decide how to solve problems, but make sure they know the right problems to solve have the team show the software after each iteration an iterative process allows us to learn  Management  to work \u0026ldquo;the natural way\u0026rdquo; we need a commitment from the upper 
management managing \u0026ldquo;the natural way\u0026rdquo; is less about directing and more about staffing and budgeting decisions delegate instead of doing it yourself the job is not to manage according to a plan but to steer towards the best possible solution  Whip the Ponies Harder  \u0026ldquo;Under pressure, teams give up the wrong things.\u0026rdquo; bad test coverage, bad code, and bugs will come from increased pressure we have to be able to say \u0026ldquo;no\u0026rdquo; to new features, otherwise, we\u0026rsquo;re only order takers and not decision-makers instead of putting pressure on the team analyze the sources of delay  To Speed Up, Build with Skill  if the team says they can\u0026rsquo;t deliver an increment in two weeks, give them one week instead to quickly uncover the problems be experts in Test-Driven Development and Acceptance Test-Driven Development to avoid bugs that destroy any planning  Refactoring  inherent difficulty comes from the problem to solve and can\u0026rsquo;t be reduced accidental difficulty comes from a suboptimal solution we need refactoring to reduce accidental difficulty no refactoring leads to erratic progress, which leads to slower progress campground rule: leave the code a little cleaner than you found it  Agile Methods  don\u0026rsquo;t let the agile framework control you it should have room for unplanned interaction  Scaling Agile  \u0026ldquo;Scaling Agile is good business for scaling vendors. It\u0026rsquo;s not necessarily good advice for you.\u0026rdquo; before scaling, prove that you have more product ideas than a single team can build start with a single team and add feature teams if necessary break the codebase into pieces among the feature teams  Conclusion As a software developer who has done agile projects, you won\u0026rsquo;t get much out of this book for yourself. 
But you might learn some arguments for agile practices that you can use to defend your software development style against un-agile managers.\n","date":"April 1, 2020","image":"https://reflectoring.io/images/covers/nature-of-software-development-teaser_huba95281750ef86855b7cc598d6d2c8fd_128172_650x0_resize_q90_box.jpg","permalink":"/book-review-nature-of-software-development/","title":"Book Notes: The Nature of Software Development"},{"categories":["Spring Boot"],"contents":"To \u0026ldquo;listen\u0026rdquo; to an event, we can always implement the \u0026ldquo;listener\u0026rdquo; as another method within the source of the event, but this will tightly couple the event source to the logic of the listener.\nWith real events, we are more flexible than with direct method calls. We can dynamically register and deregister listeners to certain events as we wish. We can also have multiple listeners for the same event.\nThis tutorial gives an overview of how to publish and listen to custom events and explains Spring Boot\u0026rsquo;s built-in events.\n Example Code This article is accompanied by a working code example on GitHub. Why Should I Use Events Instead of Direct Method Calls? Both events and direct method calls fit different situations. With a method call, we make an assertion that, no matter the state of the sending and receiving modules, they need to know this event happened.\nWith events, on the other hand, we just say that an event occurred; which modules are notified about it is not our concern. It\u0026rsquo;s good to use events when we want to pass on the processing to another thread (example: sending an email on some task completion). Also, events come in handy for test-driven development.\nWhat is an Application Event? Spring application events allow us to throw and listen to specific application events that we can process as we wish. Events are meant for exchanging information between loosely coupled components. 
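To illustrate this loose coupling outside of Spring, here is a toy publisher that knows nothing about its listeners. In the real application, Spring's ApplicationEventPublisher and listener registration play this role; this hand-rolled class is purely illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ToyEventBus {

  // the publisher only knows "there are listeners", not who they are
  private final List<Consumer<Object>> listeners = new ArrayList<>();

  public void addListener(Consumer<Object> listener) {
    listeners.add(listener);
  }

  public void publishEvent(Object event) {
    for (Consumer<Object> listener : listeners) {
      listener.accept(event); // synchronous delivery, like Spring's default
    }
  }

  public static void main(String[] args) {
    ToyEventBus bus = new ToyEventBus();
    List<Object> received = new ArrayList<>();
    bus.addListener(received::add);                           // listener 1
    bus.addListener(e -> System.out.println("got: " + e));    // listener 2
    bus.publishEvent("UserCreatedEvent");
    System.out.println(received.size()); // prints 1
  }
}
```

We can add or remove listeners without touching the publishing code, which is the whole point of the event-based style.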
As there is no direct coupling between publishers and subscribers, it enables us to modify subscribers without affecting the publishers and vice-versa.\nLet\u0026rsquo;s see how we can create, publish and listen to custom events in a Spring Boot application.\nCreating an ApplicationEvent We can publish application events using the Spring Framework’s event publishing mechanism.\nLet\u0026rsquo;s create a custom event called UserCreatedEvent by extending ApplicationEvent:\nclass UserCreatedEvent extends ApplicationEvent { private String name; UserCreatedEvent(Object source, String name) { super(source); this.name = name; } ... } The source which is being passed to super() should be the object on which the event occurred initially or an object with which the event is associated.\nSince Spring 4.2, we can also publish objects as an event without extending ApplicationEvent:\nclass UserRemovedEvent { private String name; UserRemovedEvent(String name) { this.name = name; } ... } Publishing an ApplicationEvent We use the ApplicationEventPublisher interface to publish our events:\n@Component class Publisher { private final ApplicationEventPublisher publisher; Publisher(ApplicationEventPublisher publisher) { this.publisher = publisher; } void publishEvent(final String name) { // Publishing event created by extending ApplicationEvent  publisher.publishEvent(new UserCreatedEvent(this, name)); // Publishing an object as an event  publisher.publishEvent(new UserRemovedEvent(name)); } } When the object we\u0026rsquo;re publishing is not an ApplicationEvent, Spring will automatically wrap it in a PayloadApplicationEvent for us.\nListening to an Application Event Now that we know how to create and publish a custom event, let\u0026rsquo;s see how we can listen to the event. An event can have multiple listeners doing different work based on application requirements.\nThere are two ways to define a listener. 
We can either use the @EventListener annotation or implement the ApplicationListener interface. In either case, the listener class has to be managed by Spring.\nAnnotation-Driven Starting with Spring 4.1, it\u0026rsquo;s possible to simply annotate a method of a managed bean with @EventListener to automatically register an ApplicationListener matching the signature of the method:\n@Component class UserRemovedListener { @EventListener ReturnedEvent handleUserRemovedEvent(UserRemovedEvent event) { // handle UserRemovedEvent ...  return new ReturnedEvent(); } @EventListener void handleReturnedEvent(ReturnedEvent event) { // handle ReturnedEvent ...  } ... } No additional configuration is necessary with annotation-driven configuration enabled. Our method can listen to several events. If we want to define it with no parameter at all, we can specify the event types on the annotation itself. Example: @EventListener({ContextStartedEvent.class, ContextRefreshedEvent.class}).\nFor methods annotated with @EventListener that have a non-void return type, Spring will publish the result as a new event for us. In the above example, the ReturnedEvent returned by the first method will be published and then handled by the second method.\nSpring allows our listener to be triggered only in certain circumstances if we specify a SpEL condition:\n@Component class UserRemovedListener { @EventListener(condition = \u0026#34;#event.name eq \u0026#39;reflectoring\u0026#39;\u0026#34;) void handleConditionalListener(UserRemovedEvent event) { // handle UserRemovedEvent  } } The event will only be handled if the expression evaluates to true or one of the following strings: \u0026ldquo;true\u0026rdquo;, \u0026ldquo;on\u0026rdquo;, \u0026ldquo;yes\u0026rdquo;, or \u0026ldquo;1\u0026rdquo;. Method arguments are exposed via their names. 
The condition expression also exposes a “root” variable referring to the raw ApplicationEvent (#root.event) and the actual method arguments (#root.args).\nIn the above example, the listener will be triggered with UserRemovedEvent only when the #event.name has the value \u0026#39;reflectoring\u0026#39;.\nImplementing ApplicationListener Another way to listen to an event is to implement the ApplicationListener interface:\n@Component class UserCreatedListener implements ApplicationListener\u0026lt;UserCreatedEvent\u0026gt; { @Override public void onApplicationEvent(UserCreatedEvent event) { // handle UserCreatedEvent  } } As long as the listener object is registered in the Spring application context, it will receive events. When Spring routes an event, it uses the signature of our listener to determine if it matches an event or not.\nAsynchronous Event Listeners By default, Spring events are synchronous, meaning the publisher thread blocks until all listeners have finished processing the event.\nTo make an event listener run in async mode, all we have to do is use the @Async annotation on that listener:\n@Component class AsyncListener { @Async @EventListener void handleAsyncEvent(String event) { // handle event  } } To make the @Async annotation work, we also have to annotate one of our @Configuration classes or the @SpringBootApplication class with @EnableAsync.\nThe above code example also shows that we can use Strings as events. Use at your own risk. It\u0026rsquo;s better to use data types specific for our use case so as not to conflict with other events.\nTransaction-Bound Events Spring allows us to bind an event listener to a phase of the current transaction. 
This allows events to be used with more flexibility when the outcome of the current transaction matters to the listener.\nWhen we annotate our method with @TransactionalEventListener, we get an extended event listener that is aware of the transaction:\n@Component class UserRemovedListener { @TransactionalEventListener(phase=TransactionPhase.AFTER_COMPLETION) void handleAfterUserRemoved(UserRemovedEvent event) { // handle UserRemovedEvent  } } UserRemovedListener will only be invoked when the current transaction completes.\nWe can bind the listener to the following phases of the transaction:\n AFTER_COMMIT: The event will be handled when the transaction gets committed successfully. We can use this if our event listener should only run if the current transaction was successful. AFTER_COMPLETION: The event will be handled when the transaction commits or is rolled back. We can use this to perform cleanup after transaction completion, for example. AFTER_ROLLBACK: The event will be handled after the transaction has rolled back. BEFORE_COMMIT: The event will be handled before the transaction commit. We can use this to flush transactional O/R mapping sessions to the database, for example.  Spring Boot’s Application Events Spring Boot provides several predefined ApplicationEvents that are tied to the lifecycle of a SpringApplication.\nSome events are triggered before the ApplicationContext is created, so we cannot register a listener on those as a @Bean. 
We can register listeners for these events by adding the listener manually:\n@SpringBootApplication public class EventsDemoApplication { public static void main(String[] args) { SpringApplication springApplication = new SpringApplication(EventsDemoApplication.class); springApplication.addListeners(new SpringBuiltInEventsListener()); springApplication.run(args); } } We can also register our listeners regardless of how the application is created by adding a META-INF/spring.factories file to our project and reference our listener(s) by using the org.springframework.context.ApplicationListener key:\norg.springframework.context.ApplicationListener= com.reflectoring.eventdemo.SpringBuiltInEventsListener\nclass SpringBuiltInEventsListener implements ApplicationListener\u0026lt;SpringApplicationEvent\u0026gt;{ @Override public void onApplicationEvent(SpringApplicationEvent event) { // handle event  } } Once we make sure that our event listener is registered properly, we can listen to all of Spring Boot\u0026rsquo;s SpringApplicationEvents. 
Let\u0026rsquo;s have a look at them, in the order of their execution during application startup.\nApplicationStartingEvent An ApplicationStartingEvent is fired at the start of a run but before any processing, except for the registration of listeners and initializers.\nApplicationEnvironmentPreparedEvent An ApplicationEnvironmentPreparedEvent is fired when the Environment to be used in the context is available.\nSince the Environment will be ready at this point, we can inspect and modify it before it\u0026rsquo;s used by other beans.\nApplicationContextInitializedEvent An ApplicationContextInitializedEvent is fired when the ApplicationContext is ready and ApplicationContextInitializers are called but bean definitions are not yet loaded.\nWe can use this to perform a task before beans are initialized in the Spring container.\nApplicationPreparedEvent An ApplicationPreparedEvent is fired when the ApplicationContext is prepared but not refreshed.\nThe Environment is ready for use and bean definitions will be loaded.\nContextRefreshedEvent A ContextRefreshedEvent is fired when an ApplicationContext is refreshed.\nThe ContextRefreshedEvent comes from Spring directly and not from Spring Boot and does not extend SpringApplicationEvent.\nWebServerInitializedEvent If we\u0026rsquo;re using a web server, a WebServerInitializedEvent is fired after the web server is ready. 
ServletWebServerInitializedEvent and ReactiveWebServerInitializedEvent are the servlet and reactive variants, respectively.\nThe WebServerInitializedEvent does not extend SpringApplicationEvent.\nApplicationStartedEvent An ApplicationStartedEvent is fired after the context has been refreshed but before any application and command-line runners have been called.\nApplicationReadyEvent An ApplicationReadyEvent is fired to indicate that the application is ready to service requests.\nIt is advised not to modify the internal state at this point since all initialization steps will be completed.\nApplicationFailedEvent An ApplicationFailedEvent is fired if there is an exception and the application fails to start. This can happen at any time during startup.\nWe can use this to perform tasks like executing a script or notifying about the startup failure.\nConclusion Events are designed for simple communication among Spring beans within the same application context. As of Spring 4.2, the infrastructure has been significantly improved and offers an annotation-based model as well as the ability to publish any arbitrary event.\nYou can find the example code on GitHub.\n","date":"March 31, 2020","image":"https://reflectoring.io/images/stock/0065-java-1200x628-branded_hu49f406cdc895c98f15314e0c34cfd114_116403_650x0_resize_q90_box.jpg","permalink":"/spring-boot-application-events-explained/","title":"Spring Boot Application Events Explained"},{"categories":["Spring Boot"],"contents":"Dependency injection is an approach to implement loose coupling among the classes in an application.\nThere are different ways of injecting dependencies and this article explains why constructor injection should be the preferred way.\n Example Code This article is accompanied by a working code example on GitHub. What is Dependency Injection?  Dependency: An object usually requires objects of other classes to perform its operations. We call these objects dependencies. 
Injection: The process of providing the required dependencies to an object.  Thus dependency injection helps in implementing inversion of control (IoC). This means that the responsibility of object creation and injecting the dependencies is given to the framework (i.e. Spring) instead of the class creating the dependency objects by itself.\nWe can implement dependency injection with:\n constructor-based injection, setter-based injection, or field-based injection.  Constructor Injection In constructor-based injection, the dependencies required for the class are provided as arguments to the constructor:\n@Component class Cake { private Flavor flavor; Cake(Flavor flavor) { Objects.requireNonNull(flavor); this.flavor = flavor; } Flavor getFlavor() { return flavor; } ... } Before Spring 4.3, we had to add an @Autowired annotation to the constructor. With newer versions, this is optional if the class has only one constructor.\nIn the Cake class above, since we have only one constructor, we don\u0026rsquo;t have to specify the @Autowired annotation. Consider the below example with two constructors:\n@Component class Sandwich { private Topping toppings; private Bread breadType; Sandwich(Topping toppings) { this.toppings = toppings; } @Autowired Sandwich(Topping toppings, Bread breadType) { this.toppings = toppings; this.breadType = breadType; } ... } When we have a class with multiple constructors, we need to explicitly add the @Autowired annotation to any one of the constructors so that Spring knows which constructor to use to inject the dependencies.\nSetter Injection In setter-based injection, we declare the required dependencies as fields of the class and set their values using setter methods. We have to annotate the setter method with the @Autowired annotation.\nThe Cookie class requires an object of type Topping. 
The Topping object is provided as an argument in the setter method of that property:\n@Component class Cookie { private Topping toppings; @Autowired void setTopping(Topping toppings) { this.toppings = toppings; } Topping getTopping() { return toppings; } ... } Spring will find the @Autowired annotation and call the setter to inject the dependency.\nField Injection With field-based injection, Spring assigns the required dependencies directly to fields annotated with the @Autowired annotation.\nIn this example, we let Spring inject the Topping dependency via field injection:\n@Component class IceCream { @Autowired private Topping toppings; Topping getToppings() { return toppings; } void setToppings(Topping toppings) { this.toppings = toppings; } } Combining Field and Setter Injection What will happen if we add @Autowired to both a field and a setter? Which method will Spring use to inject the dependency?\n@Component class Pizza { @Autowired private Topping toppings; Topping getToppings() { return toppings; } @Autowired void setToppings(Topping toppings) { this.toppings = toppings; } } In the above example, we have added the @Autowired annotation to both the setter and the field. In this case, Spring injects the dependency using setter injection.\nNote that it\u0026rsquo;s bad practice to mix injection types on a single class as it makes the code less readable.\nWhy Should I Use Constructor Injection? Now that we have seen the different types of injection, let\u0026rsquo;s go through some of the advantages of using constructor injection.\nAll Required Dependencies Are Available at Initialization Time We create an object by calling a constructor. If the constructor expects all required dependencies as parameters, then we can be 100% sure that the class will never be instantiated without its dependencies injected.\nThe IoC container makes sure that all the arguments provided in the constructor are available before passing them into the constructor. 
This helps in preventing the infamous NullPointerException.\nConstructor injection is extremely useful since we do not have to write separate business logic everywhere to check if all the required dependencies are loaded, thus reducing code complexity.\nWhat About Optional Dependencies? With setter injection, Spring allows us to specify optional dependencies by adding @Autowired(required = false) to a setter method. This is not possible with constructor injection, since required = false would be applied to all constructor arguments.\n We can still provide optional dependencies with constructor injection using Java\u0026rsquo;s Optional type.  Identifying Code Smells Constructor injection helps us to identify if our bean is dependent on too many other objects. If our constructor has a large number of arguments, this may be a sign that our class has too many responsibilities. We may want to think about refactoring our code to better address proper separation of concerns.\nPreventing Errors in Tests Constructor injection simplifies writing unit tests. The constructor forces us to provide valid objects for all dependencies. Using mocking libraries like Mockito, we can create mock objects that we can then pass into the constructor.\nWe can also pass mocks via setters, of course, but if we add a new dependency to a class, we may forget to call the setter in the test, potentially causing a NullPointerException in the test.\nConstructor injection ensures that our test cases are executed only when all the dependencies are available. It\u0026rsquo;s not possible to have half-created objects in unit tests (or anywhere else for that matter).\nImmutability Constructor injection helps in creating immutable objects because a constructor\u0026rsquo;s signature is the only possible way to create objects. Once we create a bean, we cannot alter its dependencies anymore. 
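The immutability argument can be made concrete with plain Java. In this sketch (the Sandwich and Topping names echo the article's examples, but these classes are illustrative), the constructor-injected variant can declare its dependency final, while the setter-injected variant cannot:

```java
class Topping {
    private final String name;

    Topping(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }
}

// Constructor injection allows a final field: after construction,
// the dependency reference can never be swapped out.
class ImmutableSandwich {
    private final Topping topping;

    ImmutableSandwich(Topping topping) {
        this.topping = topping;
    }

    Topping getTopping() {
        return topping;
    }
}

// Setter injection forces a non-final field: any caller can
// replace the dependency at any time after construction.
class MutableSandwich {
    private Topping topping;

    void setTopping(Topping topping) {
        this.topping = topping;
    }

    Topping getTopping() {
        return topping;
    }
}
```

For ImmutableSandwich, the compiler itself enforces the immutability: there is simply no code path that can change the topping once the object exists.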
With setter injection, it\u0026rsquo;s possible to inject the dependency after creation, thus leading to mutable objects which, among other things, may not be thread-safe in a multi-threaded environment and are harder to debug due to their mutability.\nConclusion Constructor injection makes code more robust. It allows us to create immutable objects, preventing NullPointerExceptions and other errors.\nYou can find the code example on GitHub.\n","date":"March 28, 2020","image":"https://reflectoring.io/images/stock/0068-injection-1200x628-branded_hucf9acc1e1521027b25f1f4aca76f2eb0_89846_650x0_resize_q90_box.jpg","permalink":"/constructor-injection/","title":"Why You Should Use Constructor Injection in Spring"},{"categories":["Simplify"],"contents":"As knowledge workers, we software developers are very vulnerable to distractions. Have you counted the number of context switches you\u0026rsquo;ve had today?\nI\u0026rsquo;m always on the hunt for the perfect method to organize my daily work. While I don\u0026rsquo;t think I have reached perfection (or ever will, for that matter), I\u0026rsquo;m pretty satisfied with my current system, which I explain in this article.\nHopefully, you\u0026rsquo;ll find bits and pieces of it useful for managing your own work.\nChallenges in Our Work Day A lot is going on in a workday, especially when working in a team. Before looking at a solution, let\u0026rsquo;s discuss some of the challenges we\u0026rsquo;re facing.\nDistractions Have you counted how often you are interrupted on a regular workday? A teammate asking a question, a meeting, or a butterfly flying past the window. Everything has the potential to distract us from the work we\u0026rsquo;re currently doing.\nWorking from home in the current COVID-19 pandemic, I\u0026rsquo;m especially distracted. 
My kids wanting attention, a Slack message I just couldn\u0026rsquo;t ignore, or thinking about my supply of toilet paper and the money I invested in stocks just before the crash.\nI can try very hard to resist these interruptions, and I\u0026rsquo;m sometimes successful, but I found that resistance is futile in most cases.\nAsynchronicity Software development is often inherently asynchronous (except if you\u0026rsquo;re practicing pair programming or mob programming for most of the day, which I usually don\u0026rsquo;t).\nA workflow often looks like this:\n We\u0026rsquo;re coding a little and create a pull request. A teammate reviews the pull request at their own pace and leaves some comments. When we\u0026rsquo;re free again, we review the comments and action them, passing the pull request back to the teammate for approval. \u0026hellip; and so on.  Or maybe someone asks a question in Slack which we want to answer, but not right now. We tell them we\u0026rsquo;ll get back to them.\nIn a remote working situation, we\u0026rsquo;re usually working even more asynchronously than in the office. We can\u0026rsquo;t just tap a teammate on the shoulder to resolve an issue on the spot (actually, with chat tools, we can, but we tend not to). Instead, we post a question and wait for the answer. When the answer comes, we\u0026rsquo;re switching contexts.\nForgetting Stuff Distractions and asynchronicity both lead to context switching. Like an operating system switches between processes, we\u0026rsquo;re switching between tasks. An operating system has access to multiple processor cores which can each work on one process at a time. We have only one core.\nContext switching leads to forgetting stuff. When a distraction comes our way, we have to decide to either continue working on the task we\u0026rsquo;ve been working on or to divert our attention to the distraction. We can\u0026rsquo;t do both at once. 
We might forget the things we\u0026rsquo;re not currently paying attention to.\nIf a teammate asked a question while I was busy and I said I\u0026rsquo;d come back to them later, chances are that I would forget. Personally, I\u0026rsquo;m utterly bad at remembering stuff.\nI don\u0026rsquo;t trust my brain to remember things, especially while I\u0026rsquo;m busy with other stuff. If you can do that, you have my respect :).\nAnxiety If I can\u0026rsquo;t trust myself to remember stuff, I get anxious. I get a nagging feeling that there was something I wanted to do but I don\u0026rsquo;t remember what.\nOr worse, I know exactly what I need to do the next day and keep thinking of it through the night, even though I would really like to sleep.\nI\u0026rsquo;ve had a pretty bad case of such anxiety in a software project early in my career when I was first given technical responsibility for a 5-person-year project. I couldn\u0026rsquo;t sleep or eat. Had it gone on for longer than it did, it would have ended in burnout.\nTo reduce anxiety, it helps to have an external memory aid where we can put ideas and tasks and be sure that they\u0026rsquo;re still there the next day.\nMy System for Organizing Work Distractions and asynchronicity lead to context switching. Context switching leads to forgetting stuff and anxiety. Anxiety leads to suffering \u0026hellip; you get the idea.\nSo we need a system for capturing our work. We need to trust this system to remind us what\u0026rsquo;s important and lead us through the day.\nMy system is a visual board, using Trello as a tool. It has one column for each day of the workweek (i.e. Monday through Friday) and one column for \u0026ldquo;Next week\u0026rdquo;:\nIt\u0026rsquo;s not a Kanban or Scrum board! The columns don\u0026rsquo;t represent a status. 
It\u0026rsquo;s more like a calendar that I can use to organize my work day.\nHaving a board like that in place, we \u0026ldquo;just\u0026rdquo; need to build some habits around it.\nHere\u0026rsquo;s what I\u0026rsquo;m doing with the board.\nPlan the Week on Monday Morning On Monday morning, I take some time to organize my board for the upcoming week.\nI copy the board from last week into a new one. I go through all the columns and remove the tasks marked with \u0026ldquo;done\u0026rdquo;. Then, I look through all the tasks that are left and decide what I want to tackle in the upcoming week (most of these will come from the \u0026ldquo;next week\u0026rdquo; column, where they might have waited for a couple of weeks).\nNext, I distribute the tasks I want to tackle over the work days. Usually, not more than 2-3 tasks per day, to leave room for unexpected work (which always comes).\nFrom this habit, I get a sense of security that I haven\u0026rsquo;t forgotten anything important over the weekend.\nBookmark the Board I have a bookmark to my board in my browser\u0026rsquo;s bookmark bar. Every week, I update that bookmark to link to the new board.\nThis way, the board is always very easily accessible.\nThat\u0026rsquo;s important when I quickly want to note something that I would otherwise forget.\nWrite a Card for Everything I find that it helps my mental health tremendously to note down stuff as soon as I learn of it. I just don\u0026rsquo;t trust my brain to remember things.\nSo, I add a card to the board for just about everything as soon as I think of it:\n talk to a teammate about the breaking build, expense the WFH equipment I bought, or work on that task from the sprint backlog.  
During meetings, I add a card for each action item I take from the meeting so I don\u0026rsquo;t forget it.\nIf I have to follow up with someone from an asynchronous communication (like an email or an interrupted Slack chat), I create a card for it.\nI pretty much create a card for everything, so I\u0026rsquo;m sure it\u0026rsquo;s in my trusted system (and not in my untrustworthy brain).\nMark Tasks as Done When I\u0026rsquo;m done with a task, I label it as \u0026ldquo;done\u0026rdquo;. Labels in Trello are colored, so I can quickly see what I\u0026rsquo;ve achieved in a day.\nLike many teams, we do a daily standup meeting to catch up with each other (currently, we do this remotely). Before the standup, I refresh my brain and look at the \u0026ldquo;done\u0026rdquo; tasks from yesterday and the planned tasks for today on the board.\nMarking tasks as \u0026ldquo;done\u0026rdquo; is satisfying and helps in remembering the things I\u0026rsquo;ve done. If I need to research something, I can even look up the \u0026ldquo;done\u0026rdquo; tasks from 3 weeks ago in that week\u0026rsquo;s version of the board.\nBatch Similar Work Items Sometimes I use labels to mark tasks of a certain type. Some tasks are coding, some tasks are talking to people, some tasks are reading.\nWhen I plan my day or week, I can then group similar tasks. This way, I can plan an hour of uninterrupted reading, for example, ignoring Slack and emails for this time. Or I can resolve some of my asynchronous communications. Or do some serious coding for an hour or two (this is my favorite, by the way).\nBatching similar tasks helps to reduce context switches and to generate a feeling of \u0026ldquo;flow\u0026rdquo;. 
After having done a batch of tasks, it\u0026rsquo;s especially satisfying to mark them as \u0026ldquo;done\u0026rdquo; on the board.\nPlan the Next Day In the Evening At the end of a work day, I take a minute to take stock of today and to plan the next day.\nDid I mark all the tasks I finished as \u0026ldquo;done\u0026rdquo; on the board? If not, I do it now.\nWhich tasks are left over from today that need attention tomorrow? I move those tasks to the next day on the board.\nAre there tasks that weren\u0026rsquo;t that urgent after all? I move them to the \u0026ldquo;next week\u0026rdquo; column.\nAre there tasks that turned out to be unnecessary? I label those as \u0026ldquo;abandoned\u0026rdquo;.\nWith my next working day roughly planned, I know that I can go back to my system tomorrow and continue where I left off. I can sleep well tonight.\nGroom the Backlog Using the system as outlined above will lead to an overflowing \u0026ldquo;next week\u0026rdquo; column sooner rather than later. All the tasks that weren\u0026rsquo;t so urgent are dumped there, added up over several weeks.\nSo, every once in a while, I have a \u0026ldquo;Rendezvous with myself\u0026rdquo; where I go through the \u0026ldquo;next week\u0026rdquo; column and decide on each task whether to keep it in the system or flush it.\nIdeally, I do this every week, but I\u0026rsquo;m still struggling to build that habit.\nConclusion There are certainly more habits to build around a system like this to organize work, but the above is a report of what I\u0026rsquo;m currently doing with it. Hopefully, some of it sparks ideas for your own system of managing work.\nThe system gives me security that I don\u0026rsquo;t forget anything, protecting my mental health.\nIt\u0026rsquo;s easy to start, satisfying to work with, and it provides daily and weekly triggers to engage with it - three important factors for creating habits. 
Which doesn\u0026rsquo;t mean that I\u0026rsquo;m not struggling now and then.\nWhat are you doing to organize your work? I\u0026rsquo;m curious to know. Let me know in the comments!\nFurther Reading Some of the ideas behind the system outlined in this article come from books, which you can read up on in my book reviews:\n The Power of Habit Atomic Habits Deep Work Everybody Writes  ","date":"March 22, 2020","image":"https://reflectoring.io/images/stock/0067-todo-1200x628-branded_hu5bbc7ccfba0d83bff6435397d9bf47a3_115459_650x0_resize_q90_box.jpg","permalink":"/organizing-work/","title":"My System for Organizing Work in a Distracted World"},{"categories":["Spring Boot"],"contents":"When we\u0026rsquo;re building software, we want to build for \u0026ldquo;-ilities\u0026rdquo;: understandability, maintainability, extensibility, and - trending right now - decomposability (so we can decompose a monolith into microservices if the need arises). Add your favorite \u0026ldquo;-ility\u0026rdquo; to that list.\nMost - perhaps even all - of those \u0026ldquo;-ilities\u0026rdquo; go hand in hand with clean dependencies between components.\nIf a component depends on all other components, we don\u0026rsquo;t know what side effects a change to one component will have, making the codebase hard to maintain and even harder to extend and decompose.\nOver time, the component boundaries in a codebase tend to deteriorate. Bad dependencies creep in and make it harder to work with the code. This has all kinds of bad effects. Most notably, development gets slower.\nThis is all the more important if we\u0026rsquo;re working on a monolithic codebase that covers many different business areas or \u0026ldquo;bounded contexts\u0026rdquo;, to use Domain-Driven Design lingo.\nHow can we protect our codebase from unwanted dependencies? With careful design of bounded contexts and persistent enforcement of component boundaries. 
This article shows a set of practices that help in both regards when working with Spring Boot.\n Example Code This article is accompanied by a working code example on GitHub. Package-Private Visibility What helps with enforcing component boundaries? Reducing visibility.\nIf we use package-private visibility on \u0026ldquo;internal\u0026rdquo; classes, only classes in the same package have access. This makes it harder to add unwanted dependencies from outside of the package.\nSo, just put all classes of a component into the same package and make only those classes public that we need outside of the component. Problem solved?\nNot in my opinion.\nIt doesn\u0026rsquo;t work if we need sub-packages within our component.\nWe\u0026rsquo;d have to make classes in sub-packages public so they can be used in other sub-packages, opening them up to the whole world.\nI don\u0026rsquo;t want to be restricted to a single package for my component! Maybe my component has sub-components that I don\u0026rsquo;t want to expose to the outside. Or maybe I just want to sort the classes into separate buckets to make the codebase easier to navigate. I need those sub-packages!\nSo, yes, package-private visibility helps in avoiding unwanted dependencies, but on its own, it\u0026rsquo;s a half-assed solution at best.\nA Modular Approach to Bounded Contexts What can we do about it? We can\u0026rsquo;t rely on package-private visibility by itself. Let\u0026rsquo;s look at an approach for keeping our codebase clean of unwanted dependencies using a smart package structure, package-private visibility where possible, and ArchUnit as an enforcer where we can\u0026rsquo;t use package-private visibility.\nExample Use Case We discuss the approach alongside an example use case. Say we\u0026rsquo;re building a billing component that looks like this:\nThe billing component exposes an invoice calculator to the outside. 
The invoice calculator generates an invoice for a certain customer and time period.\nTo use Domain-Driven Design (DDD) language: the billing component implements a bounded context that provides billing use cases. We want that context to be as independent as possible from other bounded contexts. We\u0026rsquo;ll use the terms \u0026ldquo;component\u0026rdquo; and \u0026ldquo;bounded context\u0026rdquo; synonymously in the rest of the article.\nFor the invoice calculator to work, it needs to synchronize data from an external order system in a daily batch job. This batch job pulls the data from an external source and puts it into the database.\nOur component has three sub-components: the invoice calculator, the batch job, and the database code. All of those components potentially consist of a couple of classes. The invoice calculator is a public component and the batch job and database components are internal components that should not be accessible from outside of the billing component.\nAPI Classes vs. Internal Classes Let\u0026rsquo;s take a look at the package structure I propose for our billing component:\nbilling ├── api └── internal ├── batchjob | └── internal └── database ├── api └── internal Each component and sub-component has an internal package containing, well, internal classes, and an optional api package containing - you guessed right - API classes that are meant to be used by other components.\nThis package separation between internal and api gives us a couple of advantages:\n We can easily nest components within one another. It\u0026rsquo;s easy to guess that classes within an internal package are not to be used from outside of it. It\u0026rsquo;s easy to guess that classes within an internal package may be used from within its sub-packages. The api and internal packages give us a handle to enforce dependency rules with ArchUnit (more on that later). 
We can use as many classes or sub-packages within an api or internal package as we want and we still have our component boundaries cleanly defined.  Classes within an internal package should be package-private if possible. But even if they are public (and they need to be public if we use sub-packages), the package structure defines clean and easy-to-follow boundaries.\nInstead of relying on Java\u0026rsquo;s insufficient support of package-private visibility, we have created an architecturally expressive package structure that can easily be enforced by tools.\nNow, let\u0026rsquo;s look into those packages.\nInverting Dependencies to Expose Package-Private Functionality Let\u0026rsquo;s start with the database sub-component:\ndatabase ├── api | ├── + LineItem | ├── + ReadLineItems | └── + WriteLineItems └── internal └── o BillingDatabase + means a class is public, o means that it\u0026rsquo;s package-private.\nThe database component exposes an API with two interfaces ReadLineItems and WriteLineItems, which allow us to read line items of a customer\u0026rsquo;s order from the database and write them to it, respectively. The LineItem domain type is also part of the API.\nInternally, the database sub-component has a class BillingDatabase which implements the two interfaces:\n@Component class BillingDatabase implements WriteLineItems, ReadLineItems { ... } There may be some helper classes around this implementation, but they\u0026rsquo;re not relevant to this discussion.\nNote that this is an application of the Dependency Inversion Principle.\nInstead of the api package depending on the internal package, the dependency is the other way around. 
This gives us the freedom to do in the internal package whatever we want, as long as we implement the interfaces in the api package.\nIn the case of the database sub-component, for instance, we don\u0026rsquo;t care what database technology is used to query the database.\nLet\u0026rsquo;s have a peek into the batchjob sub-component, too:\nbatchjob └── internal └── o LoadInvoiceDataBatchJob The batchjob sub-component doesn\u0026rsquo;t expose an API to other components at all. It simply has a class LoadInvoiceDataBatchJob (and potentially some helper classes), that loads data from an external source on a daily basis, transforms it, and feeds it into the billing component\u0026rsquo;s database via the WriteLineItems interface:\n@Component @RequiredArgsConstructor class LoadInvoiceDataBatchJob { private final WriteLineItems writeLineItems; @Scheduled(fixedRate = 5000) void loadDataFromBillingSystem() { ... writeLineItems.saveLineItems(items); } } Note that we use Spring\u0026rsquo;s @Scheduled annotation to regularly check for new items in the billing system.\nFinally, the content of the top-level billing component:\nbilling ├── api | ├── + Invoice | └── + InvoiceCalculator └── internal ├── batchjob ├── database └── o BillingService The billing component exposes the InvoiceCalculator interface and Invoice domain type. Again, the InvoiceCalculator interface is implemented by an internal class, called BillingService in the example. BillingService accesses the database via the ReadLineItems database API to create a customer invoice from multiple line items:\n@Component @RequiredArgsConstructor class BillingService implements InvoiceCalculator { private final ReadLineItems readLineItems; @Override public Invoice calculateInvoice( Long userId, LocalDate fromDate, LocalDate toDate) { List\u0026lt;LineItem\u0026gt; items = readLineItems.getLineItemsForUser( userId, fromDate, toDate); ... 
} } Now that we have a clean structure in place, we need dependency injection to wire it all together.\nWiring It Together with Spring Boot To wire everything together to an application, we make use of Spring\u0026rsquo;s Java Config feature and add a Configuration class to each module\u0026rsquo;s internal package:\nbilling └── internal ├── batchjob | └── internal | └── o BillingBatchJobConfiguration ├── database | └── internal | └── o BillingDatabaseConfiguration └── o BillingConfiguration These configurations tell Spring to contribute a set of Spring beans to the application context.\nThe database sub-component configuration looks like this:\n@Configuration @EnableJpaRepositories @ComponentScan class BillingDatabaseConfiguration { } With the @Configuration annotation, we\u0026rsquo;re telling Spring that this is a configuration class that contributes Spring beans to the application context.\nThe @ComponentScan annotation tells Spring to include all classes that are in the same package as the configuration class (or a sub-package) and annotated with @Component as beans into the application context. This will load our BillingDatabase class from above.\nInstead of @ComponentScan, we could also use @Bean-annotated factory methods within the @Configuration class.\nUnder the hood, to connect to the database, the database module uses Spring Data JPA repositories. We enable these with the @EnableJpaRepositories annotation.\nThe batchjob configuration looks similar:\n@Configuration @EnableScheduling @ComponentScan class BillingBatchJobConfiguration { } Only the @EnableScheduling annotation is different. 
We need this to enable the @Scheduled annotation in our LoadInvoiceDataBatchJob bean.\nFinally, the configuration of the top-level billing component looks pretty boring:\n@Configuration @ComponentScan class BillingConfiguration { } With the @ComponentScan annotation, this configuration makes sure that the sub-component @Configurations are picked up by Spring and loaded into the application context together with their contributed beans.\nWith this, we have a clean separation of boundaries not only in the dimension of packages but also in the dimension of Spring configurations.\nThis means that we can target each component and sub-component separately, by addressing its @Configuration class. For example, we can:\n Load only one (sub-)component into the application context within a @SpringBootTest integration test. Enable or disable specific (sub-)components by adding a @Conditional... annotation to that sub-component\u0026rsquo;s configuration. Replace the beans contributed to the application context by a (sub-)component without affecting other (sub-)components.  We still have a problem, though: the classes in the billing.internal.database.api package are public, meaning they can be accessed from outside of the billing component, which we don\u0026rsquo;t want.\nLet\u0026rsquo;s address this issue by adding ArchUnit to the game.\nEnforcing Boundaries with ArchUnit ArchUnit is a library that allows us to run assertions on our architecture. This includes checking if dependencies between certain classes are valid or not according to rules we can define ourselves.\nIn our case, we want to define the rule that all classes in an internal package are not used from outside of this package. 
This rule would make sure that classes within the billing.internal.*.api packages are not accessible from outside of the billing.internal package.\nMarking Internal Packages To have a handle on our internal packages when creating architecture rules, we need to mark them as \u0026ldquo;internal\u0026rdquo; somehow.\nWe could do it by name (i.e. consider all packages with the name \u0026ldquo;internal\u0026rdquo; as internal packages), but we also might want to mark packages with a different name, so we create the @InternalPackage annotation:\n@Target(ElementType.PACKAGE) @Retention(RetentionPolicy.RUNTIME) @Documented public @interface InternalPackage { } In all our internal packages, we then add a package-info.java file with this annotation:\n@InternalPackage package io.reflectoring.boundaries.billing.internal.database.internal; import io.reflectoring.boundaries.InternalPackage; This way, all internal packages are marked and we can create rules around this.\nVerifying That Internal Packages Are Not Accessed from the Outside We now create a test that validates that the classes in our internal packages are not accessed from the outside:\nclass InternalPackageTests { private static final String BASE_PACKAGE = \u0026#34;io.reflectoring\u0026#34;; private final JavaClasses analyzedClasses = new ClassFileImporter().importPackages(BASE_PACKAGE); @Test void internalPackagesAreNotAccessedFromOutside() throws IOException { List\u0026lt;String\u0026gt; internalPackages = internalPackages(BASE_PACKAGE); for (String internalPackage : internalPackages) { assertPackageIsNotAccessedFromOutside(internalPackage); } } private List\u0026lt;String\u0026gt; internalPackages(String basePackage) { Reflections reflections = new Reflections(basePackage); return reflections.getTypesAnnotatedWith(InternalPackage.class).stream() .map(c -\u0026gt; c.getPackage().getName()) .collect(Collectors.toList()); } void assertPackageIsNotAccessedFromOutside(String internalPackage) { noClasses() .that() 
.resideOutsideOfPackage(packageMatcher(internalPackage)) .should() .dependOnClassesThat() .resideInAPackage(packageMatcher(internalPackage)) .check(analyzedClasses); } private String packageMatcher(String fullyQualifiedPackage) { return fullyQualifiedPackage + \u0026#34;..\u0026#34;; } } In internalPackages(), we make use of the reflections library to collect all packages annotated with our @InternalPackage annotation.\nFor each of these packages, we then call assertPackageIsNotAccessedFromOutside(). This method uses ArchUnit\u0026rsquo;s DSL-like API to make sure that \u0026ldquo;classes that reside outside of the package should not depend on classes that reside within the package\u0026rdquo;.\nThis test will now fail if someone adds an unwanted dependency to a public class in an internal package.\nBut we still have one problem: what if we rename the base package (io.reflectoring in this case) in a refactoring?\nThe test will then still pass, because it won\u0026rsquo;t find any packages within the (now non-existent) io.reflectoring package. 
If it doesn\u0026rsquo;t have any packages to check, it can\u0026rsquo;t fail.\nSo, we need a way to make this test refactoring-safe.\nMaking the Architecture Rules Refactoring-Safe To make our test refactoring-safe, we verify that packages exist:\nclass InternalPackageTests { private static final String BASE_PACKAGE = \u0026#34;io.reflectoring\u0026#34;; @Test void internalPackagesAreNotAccessedFromOutside() throws IOException { // make it refactoring-safe in case we\u0026#39;re renaming the base package  assertPackageExists(BASE_PACKAGE); List\u0026lt;String\u0026gt; internalPackages = internalPackages(BASE_PACKAGE); for (String internalPackage : internalPackages) { // make it refactoring-safe in case we\u0026#39;re renaming the internal package  assertPackageIsNotAccessedFromOutside(internalPackage); } } void assertPackageExists(String packageName) { assertThat(analyzedClasses.containPackage(packageName)) .as(\u0026#34;package %s exists\u0026#34;, packageName) .isTrue(); } private List\u0026lt;String\u0026gt; internalPackages(String basePackage) { ... } void assertPackageIsNotAccessedFromOutside(String internalPackage) { ... } } The new method assertPackageExists() uses ArchUnit to make sure that the package in question is contained within the classes we\u0026rsquo;re analyzing.\nWe do this check only for the base package. We don\u0026rsquo;t do this check for the internal packages, because we know they exist. 
After all, we have identified those packages by the @InternalPackage annotation within the internalPackages() method.\nThis test is now refactoring-safe and will fail, as it should, if we rename packages.\nConclusion This article presents an opinionated approach to using packages to modularize a Java application and combines this with Spring Boot as a dependency injection mechanism and with ArchUnit to fail tests when someone has added an inter-module dependency that is not allowed.\nThis allows us to develop components with clear APIs and clear boundaries, avoiding a big ball of mud.\nLet me know your thoughts in the comments!\nYou can find an example application using this approach on GitHub.\nIf you\u0026rsquo;re interested in other ways of dealing with component boundaries with Spring Boot, you might find the moduliths project interesting.\n","date":"March 14, 2020","image":"https://reflectoring.io/images/stock/0065-boundary-1200x628-branded_hucefcc2b4e529c3d944bd5d2010fcecc7_283542_650x0_resize_q90_box.jpg","permalink":"/java-components-clean-boundaries/","title":"Clean Architecture Boundaries with Spring Boot and ArchUnit"},{"categories":["Spring Boot"],"contents":"Following an API-first approach, we specify an API before we start coding. Via API description languages, teams can collaborate without having implemented anything yet.\nThose description languages specify endpoints, security schemes, object schemas, and much more. Moreover, most of the time we can also generate code from such a specification.\nOften, an API specification also becomes the documentation of the API.\n Example Code This article is accompanied by a working code example on GitHub. Benefits of API-First To start working on an integration between components or systems, a team needs a contract. In our case, the contract is the API specification. API-first helps teams to communicate with each other, without implementing a thing. 
It also enables teams to work in parallel.\nWhere the API-first approach shines is in building a better API: focusing on the functionality it needs to provide, and only that. Minimalistic APIs mean less code to maintain.\nCreating an API Spec with the Swagger Editor Let\u0026rsquo;s create our own OpenAPI specification in a YAML document. To make it easier to follow, we\u0026rsquo;ll split the discussion into separate parts of the YAML document we\u0026rsquo;re creating.\nIf you want to learn more details about the OpenAPI Specification, you can visit its GitHub repository.\nGeneral Information We start with some general information about our API at the top of our document:\nopenapi: 3.0.2 info: title: Reflectoring description: \u0026#34;Tutorials on Spring Boot and Java.\u0026#34; termsOfService: http://swagger.io/terms/ contact: email: petros.stergioulas94@gmail.com license: name: Apache 2.0 url: http://www.apache.org/licenses/LICENSE-2.0.html version: 0.0.1-SNAPSHOT externalDocs: description: Find out more about Reflectoring url: https://reflectoring.io/about/ servers: - url: https://reflectoring.swagger.io/v2 The openapi field allows us to define the version of the OpenAPI spec that our document follows.\nWithin the info section, we add some information about our API. The fields should be pretty self-explanatory.\nFinally, in the servers section, we provide a list of servers that implement the API.\nTags Then comes some additional metadata about our API:\ntags: - name: user description: Operations about user externalDocs: description: Find out more about our store url: http://swagger.io The tags section provides fields for additional metadata which we can use to make our API more readable and easier to follow. We can add multiple tags, but each tag should be unique.\nPaths Next, we\u0026rsquo;ll describe some paths. 
A path holds information about an individual endpoint and its operations:\npaths: /user/{username}: get: tags: - user summary: Get user by user name operationId: getUserByName parameters: - name: username in: path description: \u0026#39;The name that needs to be fetched. \u0026#39; required: true schema: type: string responses: 200: description: successful operation content: application/json: schema: $ref: \u0026#39;#/components/schemas/User\u0026#39; 404: description: User not found content: {} The $ref field allows us to refer to objects in a self-defined schema. In this case we refer to the User schema object (see the next section about Components).\nThe summary is a short description of what the operation does.\nWith the operationId, we can define a unique identifier for the operation. We can think about it as our method name.\nFinally, the responses object allows us to define the outcomes of an operation. We must define at least one successful response code for any operation call.\nComponents The objects of the API are all described in the components section. The objects defined within the components object will not affect the API unless they are explicitly referenced from properties outside the components object, as we have seen above:\ncomponents: schemas: User: type: object properties: id: type: integer format: int64 username: type: string firstName: type: string ... 
more attributes userStatus: type: integer description: User Status format: int32 securitySchemes: reflectoring_auth: type: oauth2 flows: implicit: authorizationUrl: http://reflectoring.swagger.io/oauth/dialog scopes: write:users: modify users read:users: read users api_key: type: apiKey name: api_key in: header The schemas section allows us to define the objects we want to use in our API.\nIn the securitySchemes section, we can define security schemes that can be used by the operations.\nThere are two possible ways to make use of security schemes.\nFirst, we can add a security scheme to a specific operation using the security field:\npaths: /user/{username}: get: tags: - user summary: Get user by user name security: - api_key: [] In the above example, we explicitly specify that the path /user/{username} is secured with the api_key scheme we defined above.\nHowever, if we want to apply security to the whole project, we just need to specify it as a top-level field:\nsecurity: - api_key: [] Now, all of our paths are secured with the api_key scheme.\nGenerating Code From an API Specification Having defined an API, we\u0026rsquo;ll now create code from the YAML document above.\nWe\u0026rsquo;ll take a look at two different approaches to generating the code:\n using the Swagger Editor to generate code manually, and using the OpenAPI Maven plugin to generate code from a Maven build.  Generating Code from Swagger Editor Although this is an approach that I wouldn\u0026rsquo;t take, let\u0026rsquo;s talk about it and discuss why I think it\u0026rsquo;s a bad idea.\nLet\u0026rsquo;s go over to Swagger Editor and paste our YAML file into it. 
Then, we select Generate Server from the menu and pick what kind of a server we\u0026rsquo;d like to generate (I went with \u0026ldquo;Spring\u0026rdquo;).\nSo why is this a bad idea?\nFirst, the code that was generated for me is using Java 7 and Spring Boot 1.5.22, both of which are quite outdated.\nSecond, if we make a change to the specification (and changes happen all the time), we\u0026rsquo;d have to copy-and-paste the files that were changed manually.\nGenerating Code with the OpenAPI Maven plugin A better alternative is to generate the code from within a Maven build with the OpenAPI Maven plugin.\nLet\u0026rsquo;s take a look at the folder structure. I chose to use a multi-module maven project, where we have two projects:\n app, an application that implements the API from our specification. specification, whose only job is to provide the API Specification for our app.  The folder structure looks like this:\nspring-boot-openapi ├── app │ └── pom.xml │ └── src │ └── main │ └── java │ └── io.reflectoring │ └── OpenAPIConsumerApp.java ├── specification │ └── pom.xml │ └── src │ └── resources │ └── openapi.yml └── pom.xml For the sake of simplicity, we omit the test folders.\nOur app is a simple Spring Boot project that we can automatically generate on start.spring.io, so let\u0026rsquo;s focus on the pom.xml from the specification module, where we configure the OpenAPI Maven plugin:\n\u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.openapitools\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;openapi-generator-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;4.2.3\u0026lt;/version\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;generate\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;inputSpec\u0026gt; ${project.basedir}/src/main/resources/openapi.yml \u0026lt;/inputSpec\u0026gt; 
\u0026lt;generatorName\u0026gt;spring\u0026lt;/generatorName\u0026gt; \u0026lt;apiPackage\u0026gt;io.reflectoring.api\u0026lt;/apiPackage\u0026gt; \u0026lt;modelPackage\u0026gt;io.reflectoring.model\u0026lt;/modelPackage\u0026gt; \u0026lt;supportingFilesToGenerate\u0026gt; ApiUtil.java \u0026lt;/supportingFilesToGenerate\u0026gt; \u0026lt;configOptions\u0026gt; \u0026lt;delegatePattern\u0026gt;true\u0026lt;/delegatePattern\u0026gt; \u0026lt;/configOptions\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;/plugin\u0026gt; You can see the full pom.xml file on GitHub.\nFor this tutorial, we\u0026rsquo;re using the spring generator.\nSimply running the command ./mvnw install will generate code that implements our OpenAPI specification!\nTaking a look into the folder target/generated-sources/openapi/src/main/java/io/reflectoring/model, we find the code for the User model we defined in our YAML:\n@javax.annotation.Generated(...) public class User { @JsonProperty(\u0026#34;id\u0026#34;) private Long id; @JsonProperty(\u0026#34;username\u0026#34;) private String username; @JsonProperty(\u0026#34;firstName\u0026#34;) private String firstName; // ... more properties  @JsonProperty(\u0026#34;userStatus\u0026#34;) private Integer userStatus; // ... getters and setters  } The generator does not only generate the models but also the endpoints. Let\u0026rsquo;s take a quick look at what we generated:\npublic interface UserApiDelegate { default Optional\u0026lt;NativeWebRequest\u0026gt; getRequest() { return Optional.empty(); } /** * POST /user : Create user * Create user functionality * * @param body Created user object (required) * @return successful operation (status code 200) * @see UserApi#createUser */ default ResponseEntity\u0026lt;Void\u0026gt; createUser(User body) { return new ResponseEntity\u0026lt;\u0026gt;(HttpStatus.NOT_IMPLEMENTED); } // ... 
omit deleteUser, getUserByName and updateUser } Of course, the generator cannot generate our business logic for us, but it does generate interfaces like UserApiDelegate above for us to implement.\nIt also creates a UserApi interface which delegates calls to UserApiDelegate:\n@Validated @Api(value = \u0026#34;user\u0026#34;, description = \u0026#34;the user API\u0026#34;) public interface UserApi { default UserApiDelegate getDelegate() { return new UserApiDelegate() {}; } /** * POST /user : Create user * Create user functionality * * @param body Created user object (required) * @return successful operation (status code 200) */ @ApiOperation(value = \u0026#34;Create user\u0026#34;, nickname = \u0026#34;createUser\u0026#34;, notes = \u0026#34;Create user functionality\u0026#34;, tags={ \u0026#34;user\u0026#34;, }) @ApiResponses(value = { @ApiResponse(code = 200, message = \u0026#34;successful operation\u0026#34;) }) @RequestMapping(value = \u0026#34;/user\u0026#34;, method = RequestMethod.POST) default ResponseEntity\u0026lt;Void\u0026gt; createUser( @ApiParam(value = \u0026#34;Created user object\u0026#34; ,required=true ) @Valid @RequestBody User body) { return getDelegate().createUser(body); } // ... other methods omitted } The generator also creates a Spring controller for us that implements the UserApi interface:\n@javax.annotation.Generated(...) @Controller @RequestMapping(\u0026#34;${openapi.reflectoring.base-path:/v2}\u0026#34;) public class UserApiController implements UserApi { private final UserApiDelegate delegate; public UserApiController( @Autowired(required = false) UserApiDelegate delegate) { this.delegate = Optional.ofNullable(delegate) .orElse(new UserApiDelegate() {}); } @Override public UserApiDelegate getDelegate() { return delegate; } } Spring will inject our implementation of UserApiDelegate into the controller\u0026rsquo;s constructor if it finds it in the application context. 
Otherwise, the default implementation will be used.\nLet\u0026rsquo;s start our application and hit the GET endpoint /v2/user/{username}.\ncurl -I http://localhost:8080/v2/user/Petros HTTP/1.1 501 Content-Length: 0 But why do we get a 501 response (Not Implemented)?\nBecause we did not implement the UserApiDelegate interface and the UserApiController used the default one, which returns HttpStatus.NOT_IMPLEMENTED.\nNow let\u0026rsquo;s implement the UserApiDelegate:\n@Service public class UserApiDelegateImpl implements UserApiDelegate { @Override public ResponseEntity\u0026lt;User\u0026gt; getUserByName(String username) { User user = new User(); user.setId(123L); user.setFirstName(\u0026#34;Petros\u0026#34;); // ... omit other initialization  return ResponseEntity.ok(user); } } It\u0026rsquo;s important to add a @Service or @Component annotation to the class so that Spring can pick it up and inject it into the UserApiController.\nIf we run curl http://localhost:8080/v2/user/Petros again now, we\u0026rsquo;ll receive a valid JSON response:\n{ \u0026#34;id\u0026#34;: 123, \u0026#34;firstName\u0026#34;: \u0026#34;Petros\u0026#34;, // ... omit other properties } The UserApiDelegate is the single source of truth. That enables us to make fast changes in our API. For example, if we change the specification and generate it again, we only have to implement the newly generated methods.\nThe good thing is that if we don\u0026rsquo;t implement them, our application doesn\u0026rsquo;t break. By default, those endpoints simply return HTTP status 501 (Not Implemented).\nIn my opinion, generating code with the OpenAPI Maven plugin instead of the Swagger Editor is the better choice. That\u0026rsquo;s because it gives us more control over the options. 
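The 501-until-implemented behavior we just saw follows directly from the delegate pattern in the generated code. Here is a framework-free sketch of that pattern; the class names and the plain int status codes are illustrative stand-ins for the generated UserApi types and ResponseEntity, not the generated code itself:

```java
// Sketch of the generated delegate pattern, without Spring.
// 501 stands in for HttpStatus.NOT_IMPLEMENTED, 200 for HttpStatus.OK.
interface Delegate {
    // Default implementation: endpoint not implemented yet.
    default int createUser(String body) {
        return 501;
    }
}

class Controller {
    private final Delegate delegate;

    // Falls back to the default implementation when no delegate is supplied,
    // mirroring what the generated UserApiController constructor does.
    Controller(Delegate delegate) {
        this.delegate = (delegate != null) ? delegate : new Delegate() {};
    }

    int createUser(String body) {
        return delegate.createUser(body);
    }
}
```

Constructed without a delegate, createUser() answers 501; once a delegate that overrides the method is supplied, the same controller answers 200.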
The plugin provides some configuration options, and with Git as a version control tool, we can safely track any changes in both pom.xml and openapi.yml.\nConclusion With OpenAPI we can create an API specification that we can share among teams to communicate contracts. The OpenAPI Maven plugin allows us to generate boilerplate code for Spring Boot from such a specification so that we only need to implement the business logic ourselves.\nYou can browse the example code on GitHub.\n","date":"March 12, 2020","image":"https://reflectoring.io/images/stock/0066-blueprint-1200x628-branded_hu48e3d47c178704853b652b021bfc1958_111939_650x0_resize_q90_box.jpg","permalink":"/spring-boot-openapi/","title":"API-First Development with Spring Boot and Swagger"},{"categories":["Spring Boot"],"contents":"Spring MVC provides a very convenient programming model for creating web controllers. We declare a method signature and the method arguments will be resolved automatically by Spring. We can make it even more convenient by letting Spring pass custom objects from our domain into controller methods so we don\u0026rsquo;t have to map them each time.\n Example Code This article is accompanied by a working code example on GitHub. Why Would I Want Custom Arguments in My Web Controllers? Let\u0026rsquo;s say we\u0026rsquo;re building an application managing Git repositories similar to GitHub.\nTo identify a certain GitRepository entity, we use a GitRepositoryId value object instead of a simple Long value. This way, we cannot accidentally confuse a repository ID with a user ID, for example.\nNow, we\u0026rsquo;d like to use a GitRepositoryId instead of a Long in the method signatures of our web controllers so we don\u0026rsquo;t have to do that conversion ourselves.\nAnother use case is when we want to extract some context object from the URL path for all our controllers. 
For example, think of the repository name on GitHub: every URL starts with a repository name.\nSo, each time we have a repository name in a URL, we\u0026rsquo;d like to have Spring automatically convert that repository name to a full-blown GitRepository entity and pass it into our web controller for further processing.\nIn the following sections, we\u0026rsquo;re looking at a solution for each of these use cases.\nConverting Primitives into Value Objects with a Converter Let\u0026rsquo;s start with the simple one.\nUsing a Custom Value Object in a Controller Method Signature We want Spring to automatically convert a path variable into a GitRepositoryId object:\n@RestController class GitRepositoryController { @GetMapping(\u0026#34;/repositories/{repoId}\u0026#34;) String getSomething(@PathVariable(\u0026#34;repoId\u0026#34;) GitRepositoryId repositoryId) { // ... load and return repository  } } We\u0026rsquo;re binding the repositoryId method parameter to the {repoId} path variable. Spring will now try to create a GitRepositoryId object from the String value in the path.\nOur GitRepositoryId is a simple value object:\n@Value class GitRepositoryId { private final long value; } We use the Lombok annotation @Value so we don\u0026rsquo;t have to create constructors and getters ourselves.\nCreating a Test Let\u0026rsquo;s create a test and see if it passes:\n@WebMvcTest(controllers = GitRepositoryController.class) class GitRepositoryIdConverterTest { @Autowired private MockMvc mockMvc; @Test void resolvesGitRepositoryId() throws Exception { mockMvc.perform(get(\u0026#34;/repositories/42\u0026#34;)) .andExpect(status().isOk()); } } This test performs a GET request to the endpoint /repositories/42 and checks if the response HTTP status code is 200 (OK).\nBy running the test before having the solution in place, we can make sure that we actually have a problem to solve. 
It turns out, we do, because running the test will result in an error like this:\nFailed to convert value of type \u0026#39;java.lang.String\u0026#39; to required type \u0026#39;...GitRepositoryId\u0026#39;; nested exception is java.lang.IllegalStateException: Cannot convert value of type \u0026#39;java.lang.String\u0026#39; to required type \u0026#39;...GitRepositoryId\u0026#39;: no matching editors or conversion strategy found Building a Converter Fixing this is rather easy. All we need to do is to implement a custom Converter:\n@Component class GitRepositoryIdConverter implements Converter\u0026lt;String, GitRepositoryId\u0026gt; { @Override public GitRepositoryId convert(String source) { return new GitRepositoryId(Long.parseLong(source)); } } Since all input from HTTP requests is considered a String, we need to build a Converter that converts a String value to a GitRepositoryId.\nBy adding the @Component annotation, we make this converter known to Spring. Spring will then automatically apply this converter to all controller method arguments of type GitRepositoryId.\nIf we run the test now, it\u0026rsquo;s green.\nProviding a valueOf() Method Instead of building a converter, we can also provide a static valueOf() method on our value object:\n@Value class GitRepositoryId { private final long value; public static GitRepositoryId valueOf(String value){ return new GitRepositoryId(Long.parseLong(value)); } } In effect, this method does the same as the converter we built above (converting a String into a value object).\nIf a method like this is available on an object that is used as a parameter in a controller method, Spring will automatically call it to do the conversion without the need of a separate Converter bean.\nResolving Custom Arguments with a HandlerMethodArgumentResolver The above solution with the Converter only works because we\u0026rsquo;re using Spring\u0026rsquo;s @PathVariable annotation to bind the method parameter to a variable in the URL 
path.\nNow, let\u0026rsquo;s say that ALL our URLs start with the name of a Git repository (called a URL-friendly \u0026ldquo;slug\u0026rdquo;) and we want to minimize boilerplate code:\n We don\u0026rsquo;t want to pollute our code with lots of @PathVariable annotations. We don\u0026rsquo;t want every controller to have to check if the repository slug in the URL is valid. We don\u0026rsquo;t want every controller to have to load the repository data from the database.  We can achieve this by building a custom HandlerMethodArgumentResolver.\nUsing a Custom Object in a Controller Method Signature Let\u0026rsquo;s start with how we expect the controller code to look:\n@RestController @RequestMapping(path = \u0026#34;/{repositorySlug}\u0026#34;) class GitRepositoryController { @GetMapping(\u0026#34;/contributors\u0026#34;) String listContributors(GitRepository repository) { // list the contributors of the GitRepository ...  } // more controller methods ...  } In the class-level @RequestMapping annotation, we define that all requests start with a {repositorySlug} variable.\nThe listContributors() method will be called when someone hits the path /{repositorySlug}/contributors/. The method requires a GitRepository object as an argument so that it knows which git repository to work with.\nWe now want to create some code that will be applied to ALL controller methods and\n checks the database if a repository with the given {repositorySlug} exists if the repository doesn\u0026rsquo;t exist, returns HTTP status code 404 if the repository exists, hydrates a GitRepository object with the repository data and passes that into the controller method.  
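The three requirements above boil down to a lookup-or-fail step that we can sketch in plain Java. The Map-backed store and the type names below are illustrative stand-ins for the sketch, not the article's actual Spring classes:

```java
import java.util.Map;
import java.util.Optional;

// Illustrative stand-in for the article's GitRepository entity.
class GitRepo {
    final long id;
    final String slug;
    GitRepo(long id, String slug) { this.id = id; this.slug = slug; }
}

// Illustrative stand-in for a "not found" signal that maps to HTTP 404.
class RepoNotFoundException extends RuntimeException {}

class RepoResolver {
    private final Map<String, GitRepo> store; // stand-in for the repository finder

    RepoResolver(Map<String, GitRepo> store) { this.store = store; }

    // Check the store for the slug; throw (-> 404) if absent,
    // otherwise hand back the hydrated repository object.
    GitRepo resolve(String slug) {
        return Optional.ofNullable(store.get(slug))
                .orElseThrow(RepoNotFoundException::new);
    }
}
```

The resolver the article builds next performs exactly this check once per request, so no individual controller has to repeat it.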
Creating a Test Again, let\u0026rsquo;s start with a test to define our requirements:\n@WebMvcTest(controllers = GitRepositoryController.class) class GitRepositoryArgumentResolverTest { @Autowired private MockMvc mockMvc; @MockBean private GitRepositoryFinder repositoryFinder; @Test void resolvesSiteSuccessfully() throws Exception { given(repositoryFinder.findBySlug(\u0026#34;my-repo\u0026#34;)) .willReturn(Optional.of(new GitRepository(1L, \u0026#34;my-repo\u0026#34;))); mockMvc.perform(get(\u0026#34;/my-repo/contributors\u0026#34;)) .andExpect(status().isOk()); } @Test void notFoundOnUnknownSlug() throws Exception { given(repositoryFinder.findBySlug(\u0026#34;unknownSlug\u0026#34;)) .willReturn(Optional.empty()); mockMvc.perform(get(\u0026#34;/unknownSlug/contributors\u0026#34;)) .andExpect(status().isNotFound()); } } We have two test cases:\nThe first checks the happy path. If the GitRepositoryFinder finds a repository with the given slug, we expect the HTTP status code to be 200 (OK).\nThe second test checks the error path. If the GitRepositoryFinder doesn\u0026rsquo;t find a repository with the given slug, we expect the HTTP status code to be 404 (NOT FOUND).\nIf we run the test without doing anything, we\u0026rsquo;ll get an error like this:\nCaused by: java.lang.AssertionError: Expecting actual not to be null This means that the GitRepository object passed into the controller methods is null.\nCreating a HandlerMethodArgumentResolver Let\u0026rsquo;s fix that. 
We do this by implementing a custom HandlerMethodArgumentResolver:\n@RequiredArgsConstructor class GitRepositoryArgumentResolver implements HandlerMethodArgumentResolver { private final GitRepositoryFinder repositoryFinder; @Override public boolean supportsParameter(MethodParameter parameter) { return parameter.getParameter().getType() == GitRepository.class; } @Override public Object resolveArgument( MethodParameter parameter, ModelAndViewContainer mavContainer, NativeWebRequest webRequest, WebDataBinderFactory binderFactory) { String requestPath = ((ServletWebRequest) webRequest) .getRequest() .getPathInfo(); String slug = requestPath .substring(0, requestPath.indexOf(\u0026#34;/\u0026#34;, 1)) .replaceAll(\u0026#34;^/\u0026#34;, \u0026#34;\u0026#34;); return repositoryFinder.findBySlug(slug) .orElseThrow(NotFoundException::new); } } In resolveArgument(), we extract the first segment of the request path, which should contain our repository slug.\nThen, we feed this slug into GitRepositoryFinder to load the repository from the database.\nIf GitRepositoryFinder doesn\u0026rsquo;t find a repository with that slug, we throw a custom NotFoundException. Otherwise, we return the GitRepository object we found in the database.\nRegistering the HandlerMethodArgumentResolver Now, we have to make our GitRepositoryArgumentResolver known to Spring Boot:\n@Component @RequiredArgsConstructor class GitRepositoryArgumentResolverConfiguration implements WebMvcConfigurer { private final GitRepositoryFinder repositoryFinder; @Override public void addArgumentResolvers( List\u0026lt;HandlerMethodArgumentResolver\u0026gt; resolvers) { resolvers.add(new GitRepositoryArgumentResolver(repositoryFinder)); } } We implement the WebMvcConfigurer interface and add our GitRepositoryArgumentResolver to the list of resolvers. 
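As an aside, the path-parsing step inside resolveArgument() is easy to test in isolation. A plain-Java sketch of that logic, with an added guard for single-segment paths (the inline version above assumes at least two path segments and would otherwise throw a StringIndexOutOfBoundsException):

```java
// Extracts the first path segment (the "slug") from a request path,
// mirroring the parsing inside resolveArgument().
class SlugExtractor {
    static String extractSlug(String requestPath) {
        // "/my-repo/contributors" -> keep everything up to the second "/"
        int end = requestPath.indexOf("/", 1);
        String firstSegment = (end == -1) ? requestPath : requestPath.substring(0, end);
        // Strip the leading "/" to get the bare slug.
        return firstSegment.replaceAll("^/", "");
    }
}
```

For "/my-repo/contributors" this yields "my-repo"; for a bare "/my-repo" the guard keeps it from blowing up.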
Don\u0026rsquo;t forget to make this configurer known to Spring Boot by adding the @Component annotation.\nMapping NotFoundException to HTTP Status 404 Finally, we want to map our custom NotFoundException to the HTTP status code 404. We do this by creating a controller advice:\n@ControllerAdvice class ErrorHandler { @ExceptionHandler(NotFoundException.class) ResponseEntity\u0026lt;?\u0026gt; handleHttpStatusCodeException(NotFoundException e) { return ResponseEntity.status(e.getStatusCode()).build(); } } The @ControllerAdvice annotation will register the ErrorHandler class to be applied to all web controllers.\nIn handleHttpStatusCodeException() we return a ResponseEntity with HTTP status code 404 in case of a NotFoundException.\nWhat Arguments Can We Pass into Web Controller Methods by Default? There\u0026rsquo;s a whole bunch of method arguments that Spring supports by default so that we don\u0026rsquo;t have to add any custom argument resolvers. The complete list is available in the docs.\nConclusion With Converters, we can convert web controller method arguments annotated with @PathVariables or @RequestParams to value objects.\nWith a HandlerMethodArgumentResolver, we can resolve any method argument type. This is used heavily by the Spring framework itself, for example, to resolve method arguments annotated with @ModelAttribute or @PathVariable or to resolve arguments of type RequestEntity or Model.\nYou can view the example code on GitHub.\n","date":"March 6, 2020","image":"https://reflectoring.io/images/stock/0065-java-1200x628-branded_hu49f406cdc895c98f15314e0c34cfd114_116403_650x0_resize_q90_box.jpg","permalink":"/spring-boot-argumentresolver/","title":"Custom Web Controller Arguments with Spring MVC and Spring Boot"},{"categories":["Spring Boot"],"contents":"Systems with user management require authentication. If we use password-based authentication, we have to handle users' passwords in our system. 
This article shows how to encode and store passwords securely with Spring Security.\n Example Code This article is accompanied by a working code example on GitHub. Password Handling If we want to authenticate the user on the server side, we have to follow these steps:\n Get the user name and password from the user who wants to authenticate. Find the user name in the storage, usually a database. Compare the password the user provided with the user\u0026rsquo;s password from the database.  Let\u0026rsquo;s have a look at some best (and worst) practices of how to do that.\nSaving Passwords as Plain Text We have to deal with the fact that we have to save users' passwords in our system for comparison during authentication.\nObviously, it is a bad idea to save passwords as plain text in the database.\nWe should assume that an attacker can steal the database with passwords or get access to the passwords by other methods like SQL injection.\nIn this case, the attacker could use the password right away to access the application. So we need to save the passwords in a form that the attacker can\u0026rsquo;t use for authentication.\nHashing Hashing solves the problem of immediate access to the system with exposed passwords.\nHashing is a one-way function that converts the input into a string of characters, normally of fixed length.\nIf the data is hashed, it\u0026rsquo;s very hard to convert the hash back to the original input and it\u0026rsquo;s also very hard to find the input to get the desired output.\nWe have to hash the password in two cases:\n When the user registers in the application, we hash the password and save it to the database. When the user wants to authenticate, we hash the provided password and compare it with the password hash from the database.  Now, when attackers get the hash of a password, they are not able to use it for accessing the system. Any attempt to find the plain text from the hash value requires a huge effort from the attacker. 
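The one-way, fixed-length behavior described above can be demonstrated with nothing but the JDK. SHA-256 serves purely as an illustration here; being a fast general-purpose hash, it is not a good choice for real password storage:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Demonstrates the basic properties of a hash function with the JDK's
// MessageDigest. SHA-256 is used for illustration only.
class HashDemo {
    static String sha256(String input) {
        try {
            byte[] hash = MessageDigest.getInstance("SHA-256")
                    .digest(input.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(hash);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always present
        }
    }
}
```

The same input always yields the same hash, a minimally different input yields a completely different hash, and every output encodes exactly 32 bytes regardless of input length.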
A brute force attack can be very expensive if the password is long enough.\nAttackers can still have success using rainbow tables, however. A rainbow table is a table with precomputed hashes for many passwords. There are many rainbow tables available on the internet and some of them contain millions of passwords.\nSalting the Password To prevent an attack with rainbow tables, we can use salted passwords. A salt is a sequence of randomly generated bytes that is hashed along with the password. The salt is stored in the storage and doesn\u0026rsquo;t need to be protected.\nWhenever the user tries to authenticate, the user\u0026rsquo;s password is hashed with the saved salt and the result should match the stored password.\nThe probability that the combination of the password and the salt is precomputed in a rainbow table is very small. If the salt is long and random enough, it is practically impossible to find the hash in a rainbow table.\nSince the salt is not a secret, attackers are still able to start a brute force attack, though.\nA salt can make the attack difficult for the attacker, but hardware is getting more efficient. We must assume fast-evolving hardware with which the attacker can calculate billions of hashes per second.\nThus, hashing and salting are necessary - but not enough.\nPassword Hashing Functions Hash functions were not created only for hashing passwords. General-purpose hash functions are designed to be very fast.\nIf we can hash passwords very fast, though, then an attacker can run a brute force attack very fast too.\nThe solution is to make password hashing slow.\nBut how slow can it be? It should not be so slow as to be unacceptable for the user, but slow enough to make a brute force attack take a prohibitively long time.\nWe don\u0026rsquo;t need to develop the slow hashing on our own. Several algorithms have been developed especially for password hashing:\n bcrypt, scrypt, PBKDF2, argon2, and others.  
They use complicated cryptographic algorithms and deliberately consume resources like CPU or memory.\nWork Factor The work factor is a configuration parameter of the encoding algorithms that we can increase with growing hardware power.\nEvery password encoder has its own work factor. The work factor influences the speed of the password encoding. For instance, bcrypt has the parameter strength. The algorithm will make 2 to the power of strength iterations to calculate the hash value. The bigger the number, the slower the encoding.\nPassword Handling with Spring Security Now let\u0026rsquo;s see how Spring Security supports these algorithms and how we can handle passwords with them.\nPassword Encoders First, let\u0026rsquo;s have a look at the password encoders of Spring Security. All password encoders implement the interface PasswordEncoder.\nThis interface defines the method encode() to convert the plain password into the encoded form and the method matches() to compare a plain password with the encoded password.\nEvery encoder has a default constructor that creates an instance with the default work factor. We can use other constructors for tuning the work factor.\nBCryptPasswordEncoder int strength = 10; // work factor of bcrypt  BCryptPasswordEncoder bCryptPasswordEncoder = new BCryptPasswordEncoder(strength, new SecureRandom()); String encodedPassword = bCryptPasswordEncoder.encode(plainPassword); BCryptPasswordEncoder has the parameter strength. The default value in Spring Security is 10. It\u0026rsquo;s recommended to use a SecureRandom as the salt generator, because it provides cryptographically strong random numbers.\nThe output looks like this:\n$2a$10$EzbrJCN8wj8M8B5aQiRmiuWqVvnxna73Ccvm38aoneiJb88kkwlH2 Note that in contrast to simple hash algorithms like SHA-256 or MD5, the output of bcrypt contains meta-information about the version of the algorithm, work factor, and salt. 
We don\u0026rsquo;t need to save this information separately.\nPbkdf2PasswordEncoder String pepper = \u0026#34;pepper\u0026#34;; // secret key used by password encoding int iterations = 200000; // number of hash iterations int hashWidth = 256; // hash width in bits  Pbkdf2PasswordEncoder pbkdf2PasswordEncoder = new Pbkdf2PasswordEncoder(pepper, iterations, hashWidth); pbkdf2PasswordEncoder.setEncodeHashAsBase64(true); String encodedPassword = pbkdf2PasswordEncoder.encode(plainPassword); The PBKDF2 algorithm was not designed for password encoding but for key derivation from a password. Key derivation is usually needed when we want to encrypt some data with a password, but the password is not strong enough to be used as an encryption key.\nPbkdf2PasswordEncoder runs the hash algorithm over the plain password many times. It generates a salt, too. We can define how long the output can be and additionally use a secret called pepper to make the password encoding more secure.\nThe output looks like this:\nlLDINGz0YLUUFQuuj5ChAsq0GNM9yHeUAJiL2Be7WUh43Xo3gmXNaw==\nThe salt is saved within, but we have to save the number of iterations and the hash width separately. The pepper should be kept secret.\nThe default number of iterations is 185000 and the default hash width is 256 bits.\nSCryptPasswordEncoder int cpuCost = (int) Math.pow(2, 14); // factor to increase CPU costs int memoryCost = 8; // increases memory usage int parallelization = 1; // currently not supported by Spring Security int keyLength = 32; // key length in bytes int saltLength = 64; // salt length in bytes  SCryptPasswordEncoder sCryptPasswordEncoder = new SCryptPasswordEncoder( cpuCost, memoryCost, parallelization, keyLength, saltLength); String encodedPassword = sCryptPasswordEncoder.encode(plainPassword); With the scrypt algorithm, we can configure not only the CPU cost but also the memory cost. 
This way, we can make an attack even more expensive.\nThe output looks like this:\n$e0801$jRlFuIUd6eAZcuM1wKrzswD8TeKPed9wuWf3lwsWkStxHs0DvdpOZQB32cQJnf0lq/dxL+QsbDpSyyc9Pnet1A==$P3imAo3G8k27RccgP5iR/uoP8FgWGSS920YnHj+CRVA= This encoder puts the work factor parameters and the salt in the result string, so there is no additional information to save.\nArgon2PasswordEncoder int saltLength = 16; // salt length in bytes int hashLength = 32; // hash length in bytes int parallelism = 1; // currently not supported by Spring Security int memory = 4096; // memory costs int iterations = 3; Argon2PasswordEncoder argon2PasswordEncoder = new Argon2PasswordEncoder( saltLength, hashLength, parallelism, memory, iterations); String encodedPassword = argon2PasswordEncoder.encode(plainPassword); Argon2 is the winner of the Password Hashing Competition in 2015. This algorithm, too, allows us to tune CPU and memory costs. The Argon2 encoder saves all the parameters in the result string. If we want to use this password encoder, we\u0026rsquo;ll have to import the BouncyCastle crypto library.\nSetting Up a Password Encoder in Spring Boot To see how it works in Spring Boot, let\u0026rsquo;s create an application with REST APIs and password-based authentication supported by Spring Security. The passwords are stored in a relational database.\nTo keep it simple, in this example we send the user credentials with every HTTP request. 
It means the application must start authentication whenever the client wants to access the API.\nConfiguring a Password Encoder First, we create an API we want to protect with Spring Security:\n@RestController class CarResources { @GetMapping(\u0026#34;/cars\u0026#34;) public Set\u0026lt;Car\u0026gt; cars() { return Set.of( new Car(\u0026#34;vw\u0026#34;, \u0026#34;black\u0026#34;), new Car(\u0026#34;bmw\u0026#34;, \u0026#34;white\u0026#34;)); } } Our goal is to provide access to the resource /cars for authenticated users only, so we create a configuration with Spring Security rules:\n@Configuration @EnableWebSecurity class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity httpSecurity) throws Exception { httpSecurity .csrf() .disable() .authorizeRequests() .antMatchers(\u0026#34;/registration\u0026#34;) .permitAll() .anyRequest() .authenticated() .and() .httpBasic(); } // ...  } This code creates rules that require authentication for all endpoints except /registration and enables HTTP basic authentication.\nWhenever an HTTP request is sent to the application, Spring Security now checks if the header contains Authorization: Basic \u0026lt;credentials\u0026gt;.\nIf the header is not set, the server responds with HTTP status 401 (Unauthorized).\nIf Spring Security finds the header, it starts the authentication.\nTo authenticate, Spring Security needs user data with user names and password hashes. That\u0026rsquo;s why we have to implement the UserDetailsService interface. This interface loads user-specific data and needs read-only access to user data:\n@Service class DatabaseUserDetailsService implements UserDetailsService { private final UserRepository userRepository; private final UserDetailsMapper userDetailsMapper; // constructor ...  
@Override public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException { UserCredentials userCredentials = userRepository.findByUsername(username); return userDetailsMapper.toUserDetails(userCredentials); } } In the service we implement the method loadUserByUsername(), which loads user data from the database.\nAn implementation of the AuthenticationProvider interface will use the UserDetailsService to perform the authentication logic.\nThere are many implementations of this interface, but we are interested in DaoAuthenticationProvider because we store the data in the database:\n@Configuration @EnableWebSecurity class SecurityConfiguration extends WebSecurityConfigurerAdapter { private final DatabaseUserDetailsService databaseUserDetailsService; // constructor ...  @Bean public AuthenticationProvider daoAuthenticationProvider() { DaoAuthenticationProvider provider = new DaoAuthenticationProvider(); provider.setPasswordEncoder(passwordEncoder()); provider.setUserDetailsService(this.databaseUserDetailsService); return provider; } @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder(); } // ...  } We created a DaoAuthenticationProvider and passed in a BCryptPasswordEncoder. That\u0026rsquo;s all we need to do to enable password encoding and password matching.\nNow we have to take one more step to complete the configuration. We set the DatabaseUserDetailsService on the DaoAuthenticationProvider. After that, DaoAuthenticationProvider can get the user data to execute the authentication. Spring Security takes care of the rest.\nIf a client sends an HTTP request with the basic authentication header, Spring Security will read this header, load data for the user, and try to match the password using BCryptPasswordEncoder. If the password matches, the request will be passed through. 
If not, the server will respond with HTTP status 401.\nImplementing User Registration To add a user to the system, we need to implement an API for registration:\n@RestController class RegistrationResource { private final UserRepository userRepository; private final PasswordEncoder passwordEncoder; // constructor ...  @PostMapping(\u0026#34;/registration\u0026#34;) @ResponseStatus(code = HttpStatus.CREATED) public void register(@RequestBody UserCredentialsDto userCredentialsDto) { UserCredentials user = UserCredentials.builder() .enabled(true) .username(userCredentialsDto.getUsername()) .password(passwordEncoder.encode(userCredentialsDto.getPassword())) .roles(Set.of(\u0026#34;USER\u0026#34;)) .build(); userRepository.save(user); } } As we defined in the Spring Security rules, access to /registration is open for everybody. We use the PasswordEncoder that is defined in the Spring Security configuration to encode the password.\nIn this example, the passwords are encoded with the bcrypt algorithm because we configured a BCryptPasswordEncoder as the PasswordEncoder bean. The code just saves the new user to the database. After that, the user is ready to authenticate.\nUpgrading The Work Factor There are cases where we should increase the work factor of the password encoding for an existing application that uses a PasswordEncoder.\nMaybe the work factor set years ago is not strong enough anymore today. Or maybe the work factor we use today will not be secure in a couple of years. In these cases, we should increase the work factor of the password encoding.\nAlso, the application might get better hardware. In this case, we can increase work factors without significantly increasing authentication time. Spring Security supports the update of the work factor for many encoding algorithms.\nTo achieve this, we have to do two things. 
First, we need to implement the UserDetailsPasswordService interface:\n@Service @Transactional class DatabaseUserDetailPasswordService implements UserDetailsPasswordService { private final UserRepository userRepository; private final UserDetailsMapper userDetailsMapper; // constructor ...  @Override public UserDetails updatePassword(UserDetails user, String newPassword) { UserCredentials userCredentials = userRepository.findByUsername(user.getUsername()); userCredentials.setPassword(newPassword); return userDetailsMapper.toUserDetails(userCredentials); } } In the method updatePassword() we simply set the new password on the user in the database.\nSecond, we make this interface known to the AuthenticationProvider:\n@Configuration @EnableWebSecurity class SecurityConfiguration extends WebSecurityConfigurerAdapter { private final DatabaseUserDetailPasswordService databaseUserDetailPasswordService; private final DatabaseUserDetailsService databaseUserDetailsService; // constructor ...  @Bean public AuthenticationProvider daoAuthenticationProvider() { DaoAuthenticationProvider provider = new DaoAuthenticationProvider(); provider.setPasswordEncoder(passwordEncoder()); provider.setUserDetailsPasswordService( this.databaseUserDetailPasswordService); provider.setUserDetailsService(this.databaseUserDetailsService); return provider; } // ... } That\u0026rsquo;s it. 
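Conceptually, what happens on every successful login can be modeled with a few lines of plain Java. The sketch below is a simplified, self-contained model with assumed names, not Spring Security's actual classes (Spring's real hook for this check is the PasswordEncoder.upgradeEncoding() method):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified, self-contained model of the automatic upgrade flow.
// All names here are illustrative, not Spring Security's real classes.
interface SimpleEncoder {
    String encode(String rawPassword);
    boolean matches(String rawPassword, String encodedPassword);
    boolean upgradeEncoding(String encodedPassword); // true if the hash should be re-encoded
}

class InMemoryPasswordStore {
    final Map<String, String> passwordsByUser = new HashMap<>();

    void updatePassword(String username, String newEncodedPassword) {
        passwordsByUser.put(username, newEncodedPassword);
    }
}

class UpgradingAuthenticator {
    private final SimpleEncoder encoder;
    private final InMemoryPasswordStore store;

    UpgradingAuthenticator(SimpleEncoder encoder, InMemoryPasswordStore store) {
        this.encoder = encoder;
        this.store = store;
    }

    boolean authenticate(String username, String rawPassword) {
        String stored = store.passwordsByUser.get(username);
        if (stored == null || !encoder.matches(rawPassword, stored)) {
            return false;
        }
        // After a successful match we still have the raw password in hand,
        // so a weakly encoded hash can be transparently re-encoded and saved.
        if (encoder.upgradeEncoding(stored)) {
            store.updatePassword(username, encoder.encode(rawPassword));
        }
        return true;
    }
}
```

The constraint this models is important: the raw password is only available during a successful authentication, which is why the upgrade can happen only at login time.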
Now, whenever a user starts the authentication, Spring Security compares the work factor in the encoded password of the user with the current work factor of the PasswordEncoder.\nIf the current work factor is stronger, the authentication provider will encode the password of the user with the current password encoder and update it using DatabaseUserDetailPasswordService automatically.\nFor example, if passwords are currently encoded with a BCryptPasswordEncoder of strength 5, we can just add a password encoder of strength 10:\n@Configuration @EnableWebSecurity class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder(10); } // ... } With each login, passwords are now migrated from strength 5 to 10 automatically.\nUsing Multiple Password Encodings in the Same Application Some applications live very long. Long enough that the standards and best practices for password encoding change.\nImagine we support an application with thousands of users and this application uses plain SHA-1 hashing for password encoding. It means all passwords are stored in the database as SHA-1 hashes.\nNow, to raise security, we want to use scrypt for all new users.\nTo encode and match passwords using different algorithms in the same application, we can use DelegatingPasswordEncoder. This encoder delegates the encoding to another encoder using prefixes:\n@Configuration @EnableWebSecurity class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Bean public PasswordEncoder passwordEncoder() { return PasswordEncoderFactories.createDelegatingPasswordEncoder(); } // ... } The simplest way is to let PasswordEncoderFactories generate the DelegatingPasswordEncoder for us. This factory generates a DelegatingPasswordEncoder that supports all encoders of Spring Security for matching.\nDelegatingPasswordEncoder has one default encoder. 
The PasswordEncoderFactories sets BCryptPasswordEncoder as the default encoder. Now, when user data is saved during registration, the password encoder will encode the password and add a prefix at the beginning of the result string. The encoded password looks like this:\n{bcrypt}$2a$10$4V9kA793Pi2xf94dYFgKWuw8ukyETxWb7tZ4/mfco9sWkwvBQndxW When the user with this password wants to authenticate, DelegatingPasswordEncoder can recognize the prefix and choose the suitable encoder for matching.\nIn the example with the old SHA-1 passwords, we have to run an SQL script that prefixes all password hashes with {SHA-1}. From this moment, DelegatingPasswordEncoder can match the SHA-1 passwords when the users want to authenticate.\nBut let\u0026rsquo;s say we don\u0026rsquo;t want to use BCryptPasswordEncoder as the new default encoder, but SCryptPasswordEncoder instead. We can set the default password encoder after creating the DelegatingPasswordEncoder:\n@Configuration @EnableWebSecurity class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Bean public PasswordEncoder passwordEncoder() { DelegatingPasswordEncoder delegatingPasswordEncoder = (DelegatingPasswordEncoder) PasswordEncoderFactories .createDelegatingPasswordEncoder(); delegatingPasswordEncoder .setDefaultPasswordEncoderForMatches(new SCryptPasswordEncoder()); return delegatingPasswordEncoder; } // ... 
} We can also take full control of which encoders should be supported if we create a DelegatingPasswordEncoder on our own:\n@Configuration @EnableWebSecurity class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Bean public PasswordEncoder passwordEncoder() { String encodingId = \u0026#34;scrypt\u0026#34;; Map\u0026lt;String, PasswordEncoder\u0026gt; encoders = new HashMap\u0026lt;\u0026gt;(); encoders.put(encodingId, new SCryptPasswordEncoder()); encoders.put(\u0026#34;SHA-1\u0026#34;, new MessageDigestPasswordEncoder(\u0026#34;SHA-1\u0026#34;)); return new DelegatingPasswordEncoder(encodingId, encoders); } // ... } This code creates a password encoder that supports SHA-1 and scrypt for matching and uses scrypt for encoding new passwords. Now we have users in the database with both password encodings, SHA-1 and scrypt, and the application supports both.\nMigrating Password Encoding If the passwords in the database are encoded with an old, easily attackable algorithm, then we might want to migrate the passwords to another encoding. To migrate a password to another encoding, we have to encode the plain-text password.\nOf course, we don\u0026rsquo;t have the plain password in the database and we can\u0026rsquo;t compute it without huge effort. Also, we don\u0026rsquo;t want to force users to migrate their passwords. But we can start a slow, gradual migration.\nLuckily, we don\u0026rsquo;t need to implement this logic on our own. Spring Security can migrate passwords to the default password encoding. DelegatingPasswordEncoder compares the encoding algorithm after every successful authentication. 
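The {id} prefix stored with every hash is what makes this comparison cheap. A much-simplified sketch of the delegation idea, with assumed names and without Spring's error handling (this is not the actual DelegatingPasswordEncoder implementation), could look like this:

```java
import java.util.Map;

// Illustrative sketch of prefix-based delegation; names are assumed,
// not Spring Security's real implementation.
interface NamedEncoder {
    String encode(String rawPassword);
    boolean matches(String rawPassword, String encodedPassword);
}

class PrefixDelegatingEncoder implements NamedEncoder {
    private final String defaultId;
    private final Map<String, NamedEncoder> encodersById;

    PrefixDelegatingEncoder(String defaultId, Map<String, NamedEncoder> encodersById) {
        this.defaultId = defaultId;
        this.encodersById = encodersById;
    }

    @Override
    public String encode(String rawPassword) {
        // new passwords are always encoded with the default encoder,
        // and the result is marked with that encoder's id
        return "{" + defaultId + "}" + encodersById.get(defaultId).encode(rawPassword);
    }

    @Override
    public boolean matches(String rawPassword, String encodedPassword) {
        // the prefix identifies the encoder that produced the stored hash
        int end = encodedPassword.indexOf('}');
        String id = encodedPassword.substring(1, end);
        String hash = encodedPassword.substring(end + 1);
        return encodersById.get(id).matches(rawPassword, hash);
    }
}
```

Because each stored hash carries its encoder id, SHA-1 and scrypt hashes can coexist in the same user table and still be matched correctly.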
If the encoding algorithm of the password is different from the current password encoder, the DaoAuthenticationProvider will update the encoded password with the current password encoder and override it in the database using DatabaseUserDetailPasswordService.\nIf the password encoder we\u0026rsquo;re currently using gets old and insecure in a couple of years, we can just set another, more secure password encoder as the default encoder. After that, Spring Security will gradually migrate all passwords to the new encoding automatically.\nCalculating the Optimal Work Factor How do we choose a suitable work factor for the password encoder? Spring Security recommends tuning the password encoder to take about one second to verify the password. But this time depends on the hardware on which the application runs.\nIf the same application runs on different hardware for different customers, we can\u0026rsquo;t set the best work factor at compile time.\nBut we can calculate a good work factor when starting the application:\n@Configuration @EnableWebSecurity class SecurityConfiguration extends WebSecurityConfigurerAdapter { private final BcCryptWorkFactorService bcCryptWorkFactorService; // constructor ...  @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder( bcCryptWorkFactorService.calculateStrength()); } // ... } The method calculateStrength() returns the work factor that is needed to encode the password so that it takes about one second. The method is executed when the application starts on the current hardware. If the application starts on a different machine, the best work factor for that hardware will be found automatically. Note that this method can take several seconds. It means the start of the application will be slower than usual.\nConclusion Spring Security supports many password encoders, for both old and modern algorithms. Also, Spring Security provides methods to work with multiple password encodings in the same application. 
We can change the work factor of password encodings or migrate from one encoding to another without affecting users.\nYou can find the example code on GitHub.\n","date":"March 4, 2020","image":"https://reflectoring.io/images/stock/0064-password-1200x628-branded_huf0288656e93cd9a99ef718344415a6ea_94120_650x0_resize_q90_box.jpg","permalink":"/spring-security-password-handling/","title":"Handling Passwords with Spring Boot and Spring Security"},{"categories":["Software Craft"],"contents":" \u0026ldquo;Clients should not be forced to depend upon interfaces that they do not use.\u0026rdquo; — Robert Martin, paper \u0026ldquo;The Interface Segregation Principle\u0026rdquo;\n Abstraction is the heart of object-oriented design. It allows the client to be unconcerned with the implementation details of functionality. In Java, abstraction is achieved through abstract classes and interfaces. This article explains the idea of the Interface Segregation Principle, which is the \u0026ldquo;I\u0026rdquo; in the SOLID principles.\n Example Code This article is accompanied by a working code example on GitHub. What Is an Interface? An Interface is a set of abstractions that an implementing class must follow. We define the behavior but don\u0026rsquo;t implement it:\ninterface Dog { void bark(); } Taking the interface as a template, we can then implement the behavior:\nclass Poodle implements Dog { public void bark(){ // poodle-specific implementation  } } What Is the Interface Segregation Principle? The Interface Segregation Principle (ISP) states that a client should not be exposed to methods it doesn\u0026rsquo;t need. 
Declaring methods in an interface that the client doesn\u0026rsquo;t need pollutes the interface and leads to a \u0026ldquo;bulky\u0026rdquo; or \u0026ldquo;fat\u0026rdquo; interface.\nReasons to Follow the Interface Segregation Principle Let\u0026rsquo;s look at an example to understand why the Interface Segregation Principle is helpful.\nWe\u0026rsquo;ll create some code for a burger place where a customer can order a burger, fries or a combo of both:\ninterface OrderService { void orderBurger(int quantity); void orderFries(int fries); void orderCombo(int quantity, int fries); } Since a customer can order fries, or a burger, or both, we decided to put all order methods in a single interface.\nNow, to implement a burger-only order, we are forced to throw an exception in the orderFries() method:\nclass BurgerOrderService implements OrderService { @Override public void orderBurger(int quantity) { System.out.println(\u0026#34;Received order of \u0026#34;+quantity+\u0026#34; burgers\u0026#34;); } @Override public void orderFries(int fries) { throw new UnsupportedOperationException(\u0026#34;No fries in burger only order\u0026#34;); } @Override public void orderCombo(int quantity, int fries) { throw new UnsupportedOperationException(\u0026#34;No combo in burger only order\u0026#34;); } } Similarly, for a fries-only order, we\u0026rsquo;d also need to throw an exception in the orderBurger() method.\nAnd this is not the only downside of this design. The BurgerOrderService and FriesOrderService classes will also have unwanted side effects whenever we make changes to our abstraction.\nLet\u0026rsquo;s say we decided to accept an order of fries in units such as pounds or grams. In that case, we most likely have to add a unit parameter to orderFries(). 
This change will also affect BurgerOrderService even though it\u0026rsquo;s not implementing this method!\nBy violating the ISP, we face the following problems in our code:\n Client developers are confused by the methods they don\u0026rsquo;t need. Maintenance becomes harder because of side effects: a change in an interface forces us to change classes that don\u0026rsquo;t even use the changed method.  Violating the ISP also leads to violation of other principles like the Single Responsibility Principle.\nCode Smells for ISP Violations and How to Fix Them Whether working solo or in larger teams, it helps to identify problems in code early. So, let\u0026rsquo;s discuss some code smells which could indicate a violation of the ISP.\nA Bulky Interface In bulky interfaces, there are too many operations, but for most objects, these operations are not used. The ISP tells us that we should need most or all methods of an interface, but in a bulky interface, we most commonly only need a few of them in each case. Also, when testing a bulky interface, we have to identify which dependencies to mock and potentially have a giant test setup.\nUnused Dependencies Another indication of an ISP violation is when we have to pass null or an equivalent value into a method. In our example, we can use orderCombo() to place a burger-only order by passing zero as the fries parameter. This client does not require the fries dependency, so we should have a separate method in a different interface to order fries.\nMethods Throwing Exceptions As in our burger example, if we encounter an UnsupportedOperationException, a NotImplementedException, or similar exceptions, it smells like a design problem related to the ISP. 
It might be a good time to refactor these classes.\nRefactoring Code Smells For example, we can refactor our burger place code to have separate interfaces for BurgerOrderService and FriesOrderService:\ninterface BurgerOrderService { void orderBurger(int quantity); } interface FriesOrderService { void orderFries(int fries); } When we have an external dependency, we can use the adapter pattern to abstract away the unwanted methods. The adapter pattern makes two incompatible interfaces compatible by using an adapter class.\nFor example, let\u0026rsquo;s say that OrderService is an external dependency that we can\u0026rsquo;t modify but need to use to place an order. We will use the Object Adapter Pattern to adapt OrderService to our target interface, i.e. BurgerOrderService. For this, we will create the OrderServiceObjectAdapter class which holds a reference to the external OrderService.\nclass OrderServiceObjectAdapter implements BurgerOrderService { private final OrderService adaptee; public OrderServiceObjectAdapter(OrderService adaptee) { this.adaptee = adaptee; } @Override public void orderBurger(int quantity) { adaptee.orderBurger(quantity); } } Now when a client wants to use BurgerOrderService, we can use the OrderServiceObjectAdapter to wrap the external dependency:\nclass Main{ public static void main(String[] args){ OrderService orderService = ...; BurgerOrderService burgerService = new OrderServiceObjectAdapter(orderService); burgerService.orderBurger(4); } } As we can see, we are still using the methods provided by the OrderService interface, but the client now only depends on the method orderBurger(). We are using the OrderService interface as an external dependency, but we have successfully restructured the code to avoid the side effects of an ISP violation.\nSo, Should Interfaces Always Have a Single Method? 
Applying the ISP to the extreme will result in single-method interfaces, also known as role interfaces.\nThis solution solves the problem of ISP violations. Still, it can result in a violation of cohesion in interfaces, resulting in a scattered codebase that is hard to maintain. For example, the Collection interface in Java has many methods like size() and isEmpty() which are often used together, so it makes sense for them to be in a single interface.\nThe Interface Segregation Principle and Other SOLID Principles The SOLID principles are closely related to one another. The ISP is particularly closely associated with the Liskov Substitution Principle (LSP) and the Single Responsibility Principle (SRP).\nIn our burger place example, we have thrown an UnsupportedOperationException in BurgerOrderService, which is a violation of the LSP as the child is not actually extending the functionality of the parent but instead restricting it.\nThe SRP states that a class should only have a single reason to change. If we violate the ISP and define unrelated methods in the interface, the interface will have multiple reasons to change - one for each of the unrelated clients that need to change.\nAnother interesting relation of the ISP is with the Open/Closed Principle (OCP), which states that a class should be open for extension but closed for modification. In our burger place example, we have to modify OrderService to add another order type. Had we implemented OrderService to take a generic Order object as a parameter, we would not only have saved ourselves from a potential OCP violation but also have solved the ISP violation:\ninterface OrderService { void submitOrder(Order order); } Conclusion The ISP is a straightforward principle that is also easy to violate by adding methods to existing interfaces that the clients don\u0026rsquo;t need. 
ISP is also closely related to other SOLID principles.\nThere are many code smells that can help us to identify and then fix ISP violations. Still, we have to remember that an overly aggressive implementation of any principle can lead to other issues in the codebase.\nThe example code used in this article is available on GitHub.\n","date":"March 3, 2020","image":"https://reflectoring.io/images/stock/0063-interface-1200x628-branded_hu8c3a5b7a897a90fddea1af1e185fffb6_93041_650x0_resize_q90_box.jpg","permalink":"/interface-segregation-principle/","title":"Interface Segregation Principle: Everything You Need to Know"},{"categories":["Software Craft"],"contents":"This article explains the Single Responsibility Principle (SRP): what does it practically mean? And when and how should we apply it?\nWhat Does the Single Responsibility Principle Say? The Single Responsibility Principle may feel a bit vague at first. Let\u0026rsquo;s try to deconstruct it and look at what it actually means.\nThe Single Responsibility Principle applies to the software that we develop on different levels: methods, classes, modules, and services (collectively, I\u0026rsquo;ll call all these things components later in this article). So, the SRP states that each component should have a single responsibility.\nThis phrase is a little more concrete, but it still doesn\u0026rsquo;t explain what a responsibility is and how small or large a responsibility should be for each particular method, class, module, or service.\nTypes Of Responsibilities Instead of defining a responsibility in abstract terms, it may be more intuitive to list the actual types of responsibilities. 
Here are some examples (they are derived from Adam Warski\u0026rsquo;s classification of objects in applications which he distilled in his thought-provoking post about dependency injection in Scala):\nBusiness Logic For example, extracting a phone number from text, converting an XML document into JSON, or classifying a money transaction as fraud. On the level of classes and above, a business logic responsibility is knowing how to do (or encapsulating) the business function: for example, a class knowing how to convert XML documents into JSON, or a service encapsulating the detection of fraud transactions.\nExternal Integration On the lowest level, this can be an integration between modules within the application, such as putting a message into a queue which is processed by another subsystem. Then, there are integrations with the system, such as logging or checking the system time (System.currentTimeMillis()). Finally, there are integrations with external systems, such as database transactions, reading from or writing to a distributed message queue such as Kafka, or RPC calls to other services.\nOn the level of classes, modules, and services, an external integration responsibility is knowing how to integrate (or encapsulating integration with) the external part: for example, a class knowing how to read the system time (which is exactly what java.time.Clock is), or a service encapsulating talking with an external API.\nData A profile of a person on a website, a JSON document, a message. Embodying a piece of data can only be a responsibility of a class (object), but not of a method, module, or service. A specific kind of data is configuration: a collection of parameters for some other method, class, or system.\nControl Flow A piece of an application\u0026rsquo;s control flow, execution, or data flow. 
An example of this responsibility is a method that orchestrates calls to components that each have other responsibilities:\nvoid processTransaction(Transaction t) { if (isFraud(t)) { // Business logic  // External integration: logging  logger.log(\u0026#34;Detected fraud transaction {}\u0026#34;, t); // Integration with external service  alertingService.sendAlert(new FraudTransactionAlert(t)); } } On the level of classes, an example of a data flow responsibility may be a BufferedLogger class which buffers logging statements in memory and manages a separate background thread that takes statements from the buffer and writes them to actual external logger:\nclass BufferedLogger implements Logger { private final Logger delegate; private final ExecutorService backgroundWorker; private final BlockingQueue\u0026lt;Statement\u0026gt; buffer; BufferedLogger(Logger delegate) { this.delegate = delegate; this.backgroundWorker = newSingleThreadExecutor(); this.buffer = new ArrayBlockingQueue\u0026lt;\u0026gt;(100); backgroundWorker.execute(this::writeStatementsInBackground); } @Override public void log(Statement s) { putUninterruptibly(buffer, s); } private void writeStatementsInBackground() { while (true) { Statement s = takeUninterruptibly(buffer); delegate.log(s); } } } Method writeStatementsInBackground() itself has a control flow responsibility.\nIn a distributed system, examples of services with a control or data flow responsibility could be a proxy, a load balancer, or a service transparently caching responses from or buffering requests to some other service.\nHow Small or Large Should a Responsibility Be? I hope the examples above give some more grounded sense of what a responsibility of a method, class, module, or service could be. However, they still provide no actionable guidance on how finely we should chop responsibilities between the components of your system. 
For example:\n Should conversion from XML to JSON be a responsibility of a single method (or a class), or should we split it between two methods? One translates XML into a tree, and another serializes a tree into JSON? Or should these be separate methods belonging to a single class? Should individual types of interactions with an external service (such as different RPC operations) be responsibilities of separate classes, or should they all belong to a single class? Or, perhaps, should interactions be grouped, such as read operations going to one class and write operations going to another? How should we split responsibilities across (micro)services?  Uncle Bob Martin (who first proposed the Single Responsibility Principle) suggests that components should be broken down until each one has only one reason to change. However, to me, this criterion still doesn\u0026rsquo;t feel very instructive. Consider the processTransaction method above. There may be many reasons to change it:\n Increment counters of normal and fraudulent transactions to gather statistics. Enrich or reformat the logging statement. Wrap sending an alert into error-handling try-catch and log a failure to send an alert.  Does this mean that the processTransaction() method is too large and we should split it further into smaller methods? According to Uncle Bob, we probably should, but many other people may think that processTransaction is already small enough.\nLet\u0026rsquo;s return to the purpose of using the Single Responsibility Principle. Obviously, it\u0026rsquo;s to improve the overall quality of the codebase and of its production behavior (Carlo Pescio calls these two domains artifact space and runtime space, respectively).\nSo, what will ultimately help us to apply the Single Responsibility Principle effectively is making clearer for ourselves how SRP affects the quality of the code and the running application. 
The optimal scope of the responsibility for a component highly depends on the context:\n The responsibility itself (i. e. what the component actually does) The non-functional requirements to the application or the component we\u0026rsquo;re developing How long we plan to support the code in the future How many people will work with this code Etc.  However, this shouldn\u0026rsquo;t intimidate us. We should just split (or merge) components while we see that the software qualities we\u0026rsquo;re interested in keep improving.\nThus, the next step is to analyze how the Single Responsibility Principle affects the specific software qualities.\nThe Impact Of the Single Responsibility Principle On Different Software Qualities Understandability and Learning Curve When we split responsibilities between smaller methods and classes, usually the system becomes easier to learn overall. We can learn bite-sized components one at a time, iteratively. When we jump into a new codebase, we can learn fine-grained components as we need them, ignoring the internals of the other components which are not yet relevant for us.\nIf you have ever worked with code in which the Single Responsibility Principle was not regarded much, you probably remember the frustration when you stumble upon a three-hundred-line method or a thousand-line class about which you need to understand something (probably a little thing), but to figure that out, you are forced to read through the whole method or the class. This not only takes a lot of time and mental energy, but also fills the \u0026ldquo;memory cache\u0026rdquo; of your brain with junk information that is completely irrelevant at the moment.\nHowever, it\u0026rsquo;s possible to take the separation of concerns so far that it might actually become harder to understand the logic. 
Returning to the processTransaction() example, consider the following way of implementing it:\nclass TransactionProcessor { private final TransactionInstrumentation instrumentation; ... void processTransaction(Transaction t) { if (isFraud(t)) { instrumentation.detectedFraud(t); } } } class TransactionInstrumentation { private final Logger logger; private final AlertingService alertingService; ... void detectedFraud(Transaction t) { logger.log(\u0026#34;Detected fraud transaction {}\u0026#34;, t); alertingService.sendAlert(new FraudTransactionAlert(t)); } } We extracted the observation part of the logic into a separate TransactionInstrumentation class. This approach is not unreasonable. Compared to the original version, it aids the flexibility and the testability of the code, as we will discuss below in this article. (In fact, I took the idea directly from the excellent article about domain-oriented observability by Pete Hodgson.)\nOn the other hand, it smears the logic so thin across multiple classes and methods that it would take longer to learn it than the original, at least for me.\nExtracting responsibilities into separate modules or services (rather than just classes) doesn\u0026rsquo;t help to further improve understandability per se; however, it may help with other qualities related to the learning curve: the discoverability of the functionality (for example, through service API discovery) and the observability of the system, which we will discuss below.\nUnderstandability itself is somewhat less important when we work on the code alone, rather than in a team. But don\u0026rsquo;t abuse this - we tend to underestimate how quickly we forget the details of the code on which we worked just a little while ago and how hard it is to relearn its purpose :).\nFlexibility We can easily combine independent components (via separate control flow components) in different ways for different purposes or depending on configuration. 
Let\u0026rsquo;s take TransactionProcessor again:\nclass TransactionProcessor { private final AlertingService alertingService; ... void processTransaction(Transaction t) { if (isFraud(t)) { logger.log(\u0026#34;Detected fraud transaction {}\u0026#34;, t); alertingService.sendAlert(new FraudTransactionAlert(t)); } } private boolean isFraud(Transaction t) { ... } } To allow the operators of the system to disable alerting, we can create a NoOpAlertingService and make it configurable for TransactionProcessor via dependency injection. On the other hand, if the sendAlert() responsibility was not separated into the AlertingService interface, but rather was just a method in TransactionProcessor, to make alerting configurable we would have to add a boolean field sendAlerts to the class.\nImagine now that we want to analyze historical transactions in a batch process. Since the isFraud() method (that is, the fraud detection responsibility) is a part of TransactionProcessor, this method is called during batch processing. If online and batch processing require different initialization logic, TransactionProcessor has to provide a different constructor for each use case. On the other hand, if fraud detection was a concern of a separate FraudDetection class, we could prevent TransactionProcessor from swelling.\nWe can notice a pattern: it\u0026rsquo;s still possible to support different use cases and configuration for a component with multiple responsibilities, but only by increasing the size and the complexity of the component itself, like adding flags and conditional logic. Little by little, this is how big ball of mud systems (aka monoliths) and runaway methods and classes emerge. When each component has a single responsibility, we can keep the complexity of any single one of them limited.\nWhat about the \u0026ldquo;lean\u0026rdquo; approach of splitting responsibilities only when we actually need to make them configurable? 
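The dependency-injection approach above can be sketched as follows. This is a minimal illustration only: the AlertingService interface, NoOpAlertingService, RecordingAlertingService, and the simplified String-based signatures are assumptions modeled on the article's example, not code taken from it.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: alerting becomes configurable through dependency
// injection instead of a boolean sendAlerts flag inside TransactionProcessor.
interface AlertingService {
    void sendAlert(String alert);
}

// Operators can disable alerting by injecting this implementation.
class NoOpAlertingService implements AlertingService {
    @Override
    public void sendAlert(String alert) {
        // intentionally does nothing
    }
}

// A recording implementation: the "enabled" case here, also handy in tests.
class RecordingAlertingService implements AlertingService {
    final List<String> alerts = new ArrayList<>();

    @Override
    public void sendAlert(String alert) {
        alerts.add(alert);
    }
}

class TransactionProcessor {
    private final AlertingService alertingService;

    TransactionProcessor(AlertingService alertingService) {
        this.alertingService = alertingService;
    }

    void processTransaction(String transaction) {
        // Fraud detection omitted for brevity; assume every transaction
        // in this sketch is flagged as fraud.
        alertingService.sendAlert("Detected fraud transaction " + transaction);
    }
}
```

The point of the design: switching between the two behaviors requires no change to TransactionProcessor itself, and no conditional logic inside it.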
I think this is a good strategy if applied with moderation. It is similar to Martin Fowler\u0026rsquo;s idea of preparatory refactoring. Keep in mind, however, that if we don\u0026rsquo;t keep responsibilities separate from early on, the code for them may grow to have many subtle interdependencies, so it might take much more effort to split them apart further down the road. And to do this, we might also need to spend time relearning the workings of the code in more detail than we would like to.\nReusability It becomes possible to reuse components when they have a single, narrow responsibility. The FraudDetection class from the previous section is an example of this: we could reuse it in online processing and batch processing components. To do this in the artifact space, we could pull it into a shared library. Another direction is to move fraud detection into a separate microservice: we can think about this as reusability in the runtime space. The FraudDetection class within our application would then turn from carrying business logic into an integration point for the new external service.\nMost methods with a narrow responsibility shouldn\u0026rsquo;t have side effects and shouldn\u0026rsquo;t depend on the state of the class, which enables sharing and calling them from any place. In other words, the Single Responsibility Principle nudges us toward a functional programming style.\nPro tip: thinking about responsibilities helps to notice unrelated subproblems hiding in our methods and classes. When we extract them, we can then see opportunities to reuse them in other places. Moving unrelated subproblems out of the way keeps a component at a single level of abstraction, which makes it easier to understand the logic of the component.  Testability It\u0026rsquo;s easier to write and maintain tests for methods and classes with focused, independent concerns. This is what the Humble Object pattern is all about.
Let\u0026rsquo;s continue playing with TransactionProcessor:\nclass TransactionProcessor { void processTransaction(Transaction t) { boolean isFraud; // Some logic detecting that the transaction is fraud,  // many lines of code omitted  ... if (isFraud) { logger.log(\u0026#34;Detected fraud transaction {}\u0026#34;, t); alertingService.sendAlert(new FraudTransactionAlert(t)); } } } In this variant, there is no separate isFraud() method. processTransaction() combines fraud detection and the reporting logic.\nThen, to test the fraud detection, we may need to mock the alertingService, which pollutes the test code with boilerplate. Not only does it take effort to set up mocks in the first place, but mock-based tests also tend to break every time we change anything in the production code. Such tests become a permanent maintenance burden.\nAlternatively, to test the fraud detection logic in the example above, we could intercept and check the logging output. However, this is also cumbersome, and it hinders the ability to execute tests in parallel.\nIt\u0026rsquo;s simpler to test a separate isFraud() method. But we would still need to construct a TransactionProcessor object and to pass some dummy Logger and AlertingService objects into it.\nSo, it\u0026rsquo;s even easier to test the variant with the separate FraudDetection class. 
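A minimal sketch of that last variant. The amount-based signature and the threshold rule are invented for illustration; the article never defines the actual detection logic.

```java
// Hypothetical FraudDetection class with a single, narrow responsibility.
// Because it touches no logger, no alerting service, and no shared state,
// a test can call it directly -- no mocks, no object graph to construct.
class FraudDetection {
    private final long amountThresholdCents;

    FraudDetection(long amountThresholdCents) {
        this.amountThresholdCents = amountThresholdCents;
    }

    // A pure function of its inputs and configuration.
    boolean isFraud(long amountCents) {
        return amountCents > amountThresholdCents;
    }
}
```

A test then reduces to plain input/output checks such as new FraudDetection(10_000).isFraud(25_000), with no dummy Logger or AlertingService in sight.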
Notice that to test the intermediate version without a separate FraudDetection class, we often find ourselves changing the visibility of the method under test (isFraud(), in this example) from private to default (package-private).\nChanging visibility of a method and the @VisibleForTesting annotation are clues to think about whether it\u0026rsquo;s better to split the responsibilities of the enclosing class.\nPete Hodgson also explains how extracting observability like the alerting feature into a separate class (like TransactionInstrumentation) enables clearer, more focused tests.\nIn contrast to methods and classes, smaller (micro)services complicate the local setup for integration testing. Docker Compose is a godsend, but it doesn\u0026rsquo;t solve the problem fully.\nDebuggability When methods and classes are focused on a single concern, we can write equally focused tests for them. If tests covering only a single production method or class fail we immediately know where the bug is and thus we don\u0026rsquo;t need to debug. Sometimes, debugging may become a large portion of the development process: for example, Michael Malis reports that for him, it used to take as much as a quarter of the total time.\nWhen we still have to debug, we can accelerate the debugging loop by testing isolated pieces of functionality without building large graphs of objects through dependency injection or spinning up databases in Testcontainers.\nHowever, keep in mind that many bugs are due to one component incorrectly using another. Mistakes happen exactly in the integration of real components. 
So, it\u0026rsquo;s important to have both narrowly focused unit tests to quickly fix certain types of errors without lengthy debugging, and more integration-like tests to check that components use each other properly.\nObservability and Operability Having methods with single responsibilities also helps to quickly pinpoint performance problems because the results of profiling become more informative. At the top of a profiler\u0026rsquo;s output, we can see the methods that perform badly and will know what exact responsibilities they have.\nWhen components (not only methods and classes, but also modules and distributed services) are connected with queues (either in-memory, in-process Queues, or distributed message brokers such as Kafka), we can easily monitor the sizes of the backlogs in the pipeline. Matt Welsh, the engineer who proposed the staged event-driven architecture (SEDA), regarded this observability of load and resource bottlenecks as the most important contribution of SEDA.\nDecoupled services could be scaled up and down independently in response to the changing load, without overuse of resources. Within an application, we can control the distribution of CPU resources between method, class, and module responsibilities by sizing the corresponding thread pools. ThreadPoolExecutor even supports dynamic reconfiguration in runtime via the setCorePoolSize() method.\nWhen microservices have focused responsibilities, it also helps to investigate incidents. If we monitor the request success rates and health status of each service and see that one service which connects to a particular database is failing or unavailable, we may assume that the root problem lies in this database rather than any other part of the system.\nHowever, despite the advantages of finer-grained monitoring and scaling, splitting responsibilities between smaller services generally increases the burden of operating the system. 
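The dynamic reconfiguration mentioned above can be sketched like this; the pool sizes are arbitrary example values, not a recommendation.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolResizeSketch {
    public static void main(String[] args) {
        // A stage of the pipeline starts with a small pool
        // (core 2, max 4, 60s keep-alive).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // Later, e.g. when monitoring shows a growing backlog, shift more
        // CPU resources to this stage without restarting the application.
        // Raise the maximum first so the new core size stays within bounds.
        pool.setMaximumPoolSize(8);
        pool.setCorePoolSize(8);

        if (pool.getCorePoolSize() != 8) {
            throw new AssertionError("unexpected core pool size");
        }
        pool.shutdown();
    }
}
```

Note the ordering: since JDK 9, setCorePoolSize() rejects a value larger than the current maximum pool size, so the maximum must be raised first.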
Smaller services mean more work:\n Setting up and operating intermediate message queues (like Kafka) between the services. DevOps: setting up and managing separate delivery pipelines, monitoring, configuration, machine and container images. Deployment and orchestration: Kubernetes doesn\u0026rsquo;t fully alleviate it. To ensure rollback safety, the deployments should be multi-phase, and shared state and messages sent between services should be versioned.  Reliability Reliability is the first software quality in the list that we mostly hurt, not help, when we split responsibilities into smaller components.\nIf engineered properly (an important caveat!), a microservice architecture can increase reliability: when one service is sick, others can still serve something for the users. However, the inherent fallibility of distributed systems hits harder: more remote communications between services mean more ways things can go wrong, including network partition or degradation.\nDiscussing the pros and the cons of microservices is not the main goal of this article, but there are plenty of good materials on this topic, the reliability aspect in particular: 1, 2, 3.\nCode Size Smaller responsibility of each component means that there are more components in total in the system.\nEach method needs a signature declaration. Each class needs constructors, static factory methods, field declarations, and other ceremony. Each module needs a separate configuration class and a dependency injection setup. Each service needs separate configuration files, startup scripts, CI/CD/orchestration infrastructure, and so on.\nTherefore, the more focused the responsibilities of our components, the more code we will need to write. This impacts the long-term maintainability much less than all the qualities discussed above: understandability, flexibility, reusability, etc.
However, it means that it takes more time and effort to develop the first version of the application with finely separated responsibilities than with larger components.\nPerformance This shouldn\u0026rsquo;t be a concern normally, but for the sake of completeness, we should note that a large number of smaller classes may impact the application startup time. An entry in the Spring blog has a nice chart illustrating this:\nHaving lots of small methods taxes the application performance through method calls and returns. This is not a problem at hotspots thanks to method inlining, but in applications with a \u0026ldquo;flat\u0026rdquo; performance profile (no obvious hotspots), an excessive number of method calls might considerably affect the cumulative throughput.\nOn the higher level, the size of services might significantly impact the efficiency of the distributed system due to the costs of RPC calls and message serialization.\nSummary The Single Responsibility Principle applies to software components on all levels: methods, classes, modules, and distributed services.\nThe Single Responsibility Principle itself doesn\u0026rsquo;t include guidance about how large or small a responsibility for a component should be. 
The optimal size depends on the specific component, the type of the application, the current development priorities, and other context.\nWe should analyze how making the responsibilities of components smaller or larger affects the qualities of the code and the system that we are developing.\nIf we are writing proof-of-concept or throwaway code, or the relative cost of time to market / penalty for missing some deadline is super high, it\u0026rsquo;s important to keep in mind that following the Single Responsibility Principle \u0026ldquo;properly\u0026rdquo; requires more effort and therefore may delay the delivery time.\nIn other cases, we should split up responsibilities into separate methods and classes as long as the flexibility, reusability, testability, debuggability, and observability of the software keep improving, and while the code doesn\u0026rsquo;t bloat too much and we still see the \u0026ldquo;forest\u0026rdquo; of the logic behind the \u0026ldquo;trees\u0026rdquo; of small methods and classes (in more formal language, the understandability of the code doesn\u0026rsquo;t begin to deteriorate).\nThis may sound overwhelming, but of course, this analysis shouldn\u0026rsquo;t be done for each and every method and class in separation, but instead done infrequently to establish a guideline on the project, or just to train our intuition.\nOn the level of the distributed system, the trade-off is much less in favor of extracting (micro)services with more narrow responsibilities: discoverability, flexibility, reusability, and observability improve, but testability, operability, reliability, and performance mostly decline. On the other hand, the Single Responsibility Principle probably shouldn\u0026rsquo;t be the first thing to consider when sizing microservices. Most people in the industry think that it\u0026rsquo;s more important to follow the team boundaries, bounded contexts, and aggregates (the last two are concepts from Domain-Driven Design).\n P. 
S.: I explore the idea of analyzing software design practices and principles through the lenses of distinct software qualities such as understandability, testability, performance, and so on in the Software Design project on Wikiversity.\n","date":"February 26, 2020","image":"https://reflectoring.io/images/stock/0062-lego-1200x628-branded_hufb30a8c04e18112c57ea4d7a1876037e_191149_650x0_resize_q90_box.jpg","permalink":"/single-responsibility-principle/","title":"Single Responsibility Principle Unpacked"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you always wanted to know what your HR people mean when they say \u0026ldquo;we should all have a growth mindset\u0026rdquo; you are looking for a shift of your mindset towards learning you want to read stories about important people and what effects their mindset have on their environments  Book Facts  Title: Mindset Author: Carol S. Dweck Word Count: ~ 105.000 (7 hours at 250 words / minute) Reading Ease: medium Writing Style: storytelling backed with scientific research  Overview The proposition of {% include book-link.html book=\u0026ldquo;mindset\u0026rdquo; %} by Carol S. Dweck sounds a little esoteric: something like \u0026ldquo;control your mindset and you\u0026rsquo;ll be a winner\u0026rdquo;.\nIn the book, she discusses two mindsets:\n the \u0026ldquo;fixed\u0026rdquo; mindset, in which we believe that our capabilities are fixed and won\u0026rsquo;t change much, no matter how hard we try, and the \u0026ldquo;growth\u0026rdquo; mindset, in which we believe that we can do a lot more than we currently can, just by learning hard enough.  
Dweck tells plausible stories from her psychological research that show the effects both mindsets have on people and gives some tips on how we can choose our mindset.\nNotes Here are my notes, as usual with some comments in italics.\nThe Mindsets  the major factor to achieve expertise is purposeful engagement - not a predefined, fixed amount of intelligence in a fixed mindset, we believe that our intelligence towards a certain skill is a given and cannot change much over time in a growth mindset, we believe that our intelligence towards a certain skill can improve over time  Inside the Mindsets  the growth mindset is all about learning - challenge and interest come hand in hand the fixed mindset is all about the pressure to perform and not losing face - we can\u0026rsquo;t improve, so we have to make do with what we have the fixed mindset supports only personal success because we want to prove that our fixed ability is good enough the growth mindset supports shared success because we focus on learning and not so much on comparing our respective ability in the fixed mindset we\u0026rsquo;re not trying hard because we don\u0026rsquo;t believe that we can improve in the fixed mindset, only results count, while in the growth mindset we count every learning on the way as a success in the fixed mindset, we\u0026rsquo;re prone to give up on hard challenges because we\u0026rsquo;re telling ourselves that we\u0026rsquo;re not good enough - we\u0026rsquo;re more likely to pull through hard challenges in the growth mindset This makes a growth mindset the foundation for habit change as described in The Power of Habit and Atomic Habits.  
The Truth About Ability and Accomplishment  accomplishment comes with effort and only the growth mindset allows effort growth-minded people tend to learn for the sake of learning - not for acing an exam - and tend to get better results not all of us can become prodigies, but an essential piece for becoming one is an urge for learning - a growth mindset what one person can learn, almost everyone else can learn, given the appropriate learning conditions test scores only measure the current snapshot of intelligence and not where a student\u0026rsquo;s intelligence can end up the fixed mindset limits achievement, because we don\u0026rsquo;t believe in improvement \u0026ldquo;Just because people can do something with little or no training, it doesn\u0026rsquo;t mean that others can\u0026rsquo;t do it with training.\u0026rdquo; don\u0026rsquo;t praise someone\u0026rsquo;s ability - praise the effort it took instead to put them into a growth mindset  Sports: The Mindset of a Champion  success is more about the process than about ability (strong parallel to Atomic Habits, where the author says that we should focus on the process instead of the outcomes to foster habit change) having great innate ability can be a curse because everyone keeps saying to you - consciously or not - that you don\u0026rsquo;t have to expend effort great sportspeople often have a certain base talent, but they keep working on themselves to improve fixed sportspeople blame outside forces when they lose (there\u0026rsquo;s an interesting story about John McEnroe in the book)  Business: Mindset and Leadership  \u0026ldquo;A company that cannot self-correct cannot thrive.\u0026rdquo; many companies have failed due to the fixed mindset of a top manager the fixed mindset strives for feeling good in the short term and doesn\u0026rsquo;t support long-term decisions very well a CEO with a fixed mindset may put the whole company into a fixed mindset - stopping innovation a fixed mindset leads to using the
word \u0026ldquo;I\u0026rdquo; where a growth mindset allows using the word \u0026ldquo;we\u0026rdquo; growth-minded leaders are guides - fixed-minded leaders are judges a fixed mindset causes groupthink - everybody thinks alike and no one disagrees leaders are made, not born companies can have a collective mindset - either a culture of continuous development or a culture of genius  Relationships: Mindsets in Love (or not)  fixed-minded people feel humiliated by being rejected in a relationship and want revenge, while growth-minded people tend to get over it faster in fixed-mindset relationships partners think they were meant to be or not and if they were meant to be, they should be good with each other in every situation - this creates misunderstandings and conflict \u0026ldquo;A no-effort relationship is a doomed relationship.\u0026rdquo; \u0026ldquo;Choosing a partner is choosing a set of problems.\u0026rdquo; a growth-minded relationship gives room for partners to support each other\u0026rsquo;s development a fixed mindset tries to protect one from rejection at all cost - this can lead to shyness and coldness towards other people bullying is a symptom of the fixed mindset - it\u0026rsquo;s about judging others bullying victims also often have a fixed mindset which makes them perfect targets for fixed-minded bullies - this mindset makes them want to judge back and seek revenge  Parents, Teachers, and Coaches  praising children\u0026rsquo;s intelligence harms their innovation and performance  \u0026ldquo;if success means I\u0026rsquo;m smart, then failure means I\u0026rsquo;m dumb\u0026rdquo; this may lead to avoidance of challenges for fear of failure don\u0026rsquo;t praise for speed and perfection - they\u0026rsquo;re the enemies of hard work   \u0026ldquo;Don\u0026rsquo;t judge. 
Teach.\u0026rdquo; the growth mindset is supported by an atmosphere of trust, not judgment \u0026ldquo;The fixed mindset makes people complicated.\u0026rdquo; continued success can lull us into a fixed mindset don\u0026rsquo;t praise effort that wasn\u0026rsquo;t there don\u0026rsquo;t praise effort as a consolation prize only instead, praise the process but tie the praise to the outcome \u0026ldquo;Great contributions to society are born out of curiosity and deep understanding.\u0026rdquo; - learn to understand, not to memorize  Changing Mindsets  our beliefs influence how we interpret the world they can lead to exaggerated feelings of depression or superiority an 8-session growth workshop has changed the beliefs of school kids with noticeable effects like better grades and enhanced motivation when you have trouble following through with a plan, define when, where, and how you want to do it in great detail to increase the chances that you really do it (strong parallel to The Power of Habit, where the author says that making a plan helps follow-through of habit change) when confronted with failure or rejection think about your reaction to it - there is always a growth-minded way of dealing with it that makes you stronger a growth mindset enables you to change your environment by changing yourself (strong parallel to The 7 Habits of Effective People, where the author says that to change your environment, you have to change first) willpower alone isn\u0026rsquo;t enough to change things - you need a system to facilitate change (strong parallel to Atomic Habits, where the author says that we should concentrate on a process of change instead of the outcomes)  in a fixed mindset, we don\u0026rsquo;t see this system   the journey to change:  accept your fixed mindset (we all have it) find your fixed mindset triggers give your fixed persona a name educate your fixed persona each time it shows up    Conclusion It seems like magic: just switch your mindset to a growth 
mindset and a bunch of positive things will happen to you. But it\u0026rsquo;s not that easy, of course.\nThe book tells fascinating stories about people with a growth mindset and people with a fixed mindset that make it apparent that our mindset affects us and our environment. There are strong parallels to other books I\u0026rsquo;ve read about habit building, so different authors came to the same conclusions.\n","date":"February 25, 2020","image":"https://reflectoring.io/images/covers/mindset-teaser_hubdb11c14ba021233bda5ee37833f8042_23680_650x0_resize_q90_box.jpg","permalink":"/book-review-mindset/","title":"Book Notes: Mindset"},{"categories":["Spring Boot"],"contents":"Spring Boot simplifies database migrations by providing integration with Flyway, one of the most widely used database migration tools. This guide presents various options of using Flyway as part of a Spring Boot application, as well as running it within a CI build. We\u0026rsquo;ll also cover the main advantages of having Database Migrations Done Right.\n Example Code This article is accompanied by a working code example on GitHub. Why Do We Need Database Migrations? I\u0026rsquo;ve worked on a project where all database changes were deployed manually. Over time, more people joined and, naturally, they started asking questions:\n What state is the database in on this environment? Has a specific script already been applied or not? Has this hotfix in production been deployed in other environments afterward? How can I set up a new database instance to a specific or the latest state?  Answering these questions required one of us to check the SQL scripts to find out if someone has added a column, modified a stored procedure, or similar things. 
If we multiply the time spent on all these checks by the number of environments and add the time spent on aligning the database state, then we get a decent amount of time lost.\nAutomatic database migrations with Flyway or similar tools allow us to:\n Create a database from scratch. Have a single source of truth for the version of the database state. Have a reproducible state of the database in local and remote environments. Automate the deployment of database changes, which helps to minimize human errors.  Enter Flyway Flyway facilitates database migration while providing:\n Well-structured and easy-to-read documentation. An option to integrate with an existing database. Support for almost all known schema-based databases. A wide variety of running and configuration options.  Let\u0026rsquo;s see how to get Flyway running.\nWriting Our First Database Migration Flyway tries to find user-provided migrations both on the filesystem and on the Java classpath. By default, it loads all files in the folder db/migration within the classpath that conform to the configured naming convention.
We can change this behavior by configuring the locations property.\nSQL-based Migration Flyway has a naming convention for database migration scripts which can be adjusted to our needs using the following configuration properties in application.properties (or application.yml):\nspring.flyway.sql-migration-prefix=V spring.flyway.repeatable-sql-migration-prefix=R spring.flyway.sql-migration-separator=__ spring.flyway.sql-migration-suffixes=.sql Let\u0026rsquo;s create our first migration script V1__init.sql:\nCREATE TABLE test_user( id INT AUTO_INCREMENT PRIMARY KEY, username VARCHAR(255) NOT NULL UNIQUE, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL ); test_user is just an example table that stores some user details.\nThe SQL we\u0026rsquo;re using in this article will run in an H2 in-memory database, so keep in mind that it might not work with other databases.\nJava-Based Migration If we have a case that requires more dynamic database manipulation, we can create a Java-based migration.
This is handy for modifying BLOB \u0026amp; CLOB columns, for instance, or for bulk data changes like generating random data or recalculating column values.\nFile naming rules are similar to SQL-based migrations, but overriding them requires us to implement the JavaMigration interface.\nLet\u0026rsquo;s create V2__InsertRandomUsers.java and have a look at its extended capabilities:\npackage db.migration; import org.flywaydb.core.api.migration.BaseJavaMigration; import org.flywaydb.core.api.migration.Context; import org.springframework.jdbc.core.JdbcTemplate; import org.springframework.jdbc.datasource.SingleConnectionDataSource; public class V2__InsertRandomUsers extends BaseJavaMigration { public void migrate(Context context) { final JdbcTemplate jdbcTemplate = new JdbcTemplate( new SingleConnectionDataSource(context.getConnection(), true)); // Create 10 random users  for (int i = 1; i \u0026lt;= 10; i++) { jdbcTemplate.execute(String.format(\u0026#34;insert into test_user\u0026#34; + \u0026#34; (username, first_name, last_name) values\u0026#34; + \u0026#34; (\u0026#39;%d@reflectoring.io\u0026#39;, \u0026#39;Elvis_%d\u0026#39;, \u0026#39;Presley_%d\u0026#39;)\u0026#34;, i, i, i)); } } } We can execute any logic we want within a Java migration and thus have all the flexibility to implement more dynamic database changes.\nRunning Flyway We use an H2 database in in-memory mode for this article, so we can simplify database access settings. We need to add its dependency to our build file (Gradle notation):\nruntimeOnly \u0026#39;com.h2database:h2\u0026#39; Flyway supports a range of different options to run database migrations:\n via command line via Java API, via Maven and Gradle plugins, and via community plugins and integrations including Spring Boot.  
Let\u0026rsquo;s have a look at each of them and discuss their pros and cons.\nSpring Boot Auto-Configuration Having a supported DataSource implementation as a dependency in the classpath is enough for Spring Boot to instantiate that DataSource and make it available for running database queries. This DataSource is automatically passed on to auto-configure Flyway when we add the following dependency to our build file (Gradle notation):\nimplementation \u0026#39;org.flywaydb:flyway-core\u0026#39; By default, Spring Boot runs Flyway database migrations automatically on application startup.\nIn case we put our migrations in different locations from the default folder, we can provide a comma-separated list of one or more classpath: or filesystem: locations in the spring.flyway.locations property in application.properties:\nspring.flyway.locations=classpath:db/migration,filesystem:/another/migration/directory Using Spring Boot auto-configuration is the simplest approach and requires minimal effort to support database migrations out of the box.\nJava API Non-Spring applications can still benefit from Flyway. Again, we need to add flyway as a dependency (Gradle notation):\nimplementation \u0026#39;org.flywaydb:flyway-core\u0026#39; Now we only need to configure and run the core class Flyway as part of application initialization:\nimport org.flywaydb.core.Flyway; public class MyApplication { public static void main(String[] args) { DataSource dataSource = ... Flyway flyway = Flyway.configure().dataSource(dataSource).load(); flyway.migrate(); // Start the rest of the application  } } Calling flyway.migrate() will now execute all database migrations that haven\u0026rsquo;t been executed before.\nGradle Plugin We can use the Flyway Gradle plugin for Spring-based applications as well as for plain Java applications if we don\u0026rsquo;t want to run migrations automatically at startup. 
The plugin takes all the configuration out of our application and into the Gradle script:\nplugins { // Other plugins...  id \u0026#34;org.flywaydb.flyway\u0026#34; version \u0026#34;6.2.3\u0026#34; } flyway { url = \u0026#39;jdbc:h2:mem:\u0026#39; locations = [ // Add this when Java-based migrations are used  \u0026#39;classpath:db/migration\u0026#39; ] } After successful configuration we can call the following command in our terminal:\n./gradlew flywayMigrate --info Here we use Gradle Wrapper to call the flywayMigrate task which executes all previously not-run database migrations. The --info parameter sets Gradle log level to info, which allows us to see Flyway output.\nThe Gradle plugin supports all Flyway commands by providing corresponding tasks, following the pattern flyway\u0026lt;Command\u0026gt;.\nCommand Line We can also run Flyway via command line. This option allows us to have an independent tool which doesn\u0026rsquo;t require installation or integration with our application.\nFirst, we need to download the relevant archive for our operating system and extract it.\nNext, we should create our SQL-based migrations in a folder named sql or jars in case of Java-based migrations. The jar folder must contain our Java migrations packed into jar files.\nAs with other running options, we can override the default configuration by modifying the flyway.conf file located in the conf folder. Here is a minimal configuration for H2 database:\nflyway.url=jdbc:h2:mem: flyway.user=sa Calling the Flyway executable is different for each operating system. On macOS/Linux we must call:\ncd flyway-\u0026lt;version\u0026gt; ./flyway migrate On Windows:\ncd flyway-\u0026lt;version\u0026gt; flyway.cmd migrate Placeholders Placeholders come in very handy when we want to abstract from differences between environments. A good example is using a different schema name in development and production environments:\nCREATE TABLE ${schema_name}.test_user( ... 
); By default, we can use Ant-style placeholders, but when we run Flyway with Spring Boot, we can easily override it by changing the following properties in application.properties:\nspring.flyway.placeholder-prefix=${ spring.flyway.placeholder-replacement=true spring.flyway.placeholder-suffix=} # spring.flyway.placeholders.* spring.flyway.placeholders.schema_name=test Tips Basic usage of Flyway is simple, but database migration can get complicated. Here are some thoughts about how to get database migration right.\nIncremental Mindset Flyway tries to enforce incremental database changes. That means we shouldn\u0026rsquo;t update already applied migrations, except repeatable ones. By default, we should use versioned migrations that will only be run once and will be skipped in subsequent migrations.\nSometimes we have to make manual changes directly on the database server, but we want to have them in our migration scripts as well so that we can transport them to other environments. So, we change a Flyway script after it has already been applied. If we run another migration sometime later, we get the following error:\n* What went wrong: Execution failed for task \u0026#39;:flywayMigrate\u0026#39;. \u0026gt; Error occurred while executing flywayMigrate Validate failed: Migration checksum mismatch for migration version 1 -\u0026gt; Applied to database : -883224591 -\u0026gt; Resolved locally : -1438254535 This is because we changed the script and Flyway has a different checksum recorded for it.\nFixing this is easy: we simply call the repair command, which generates the following output:\nRepair of failed migration in Schema History table \u0026#34;PUBLIC\u0026#34;.\u0026#34;flyway_schema_history\u0026#34; not necessary. No failed migration detected. Repairing Schema History table for version 1 (Description: init, Type: SQL, Checksum: -1438254535) ... 
Successfully repaired schema history table \u0026#34;PUBLIC\u0026#34;.\u0026#34;flyway_schema_history\u0026#34; (execution time 00:00.026s). Manual cleanup of the remaining effects of the failed migration may still be required. Flyway has now updated the checksum of migration script version 1 to the local value so that future migrations won\u0026rsquo;t cause this error again.\nSupport of Undo I guess we all have been in a situation where the latest production database changes should be reverted. We should be aware that Flyway supports the undo command in the professional edition only. Undo migrations are defined with the U prefix, which can be changed via the undoSqlMigrationPrefix property. The undo script for our migration script from above would look like this:\nDROP TABLE test_user; Executing the above migration would produce this output:\nCurrent version of schema \u0026#34;PUBLIC\u0026#34;: 1 Undoing migration of schema \u0026#34;PUBLIC\u0026#34; to version 1 - init Successfully undid 1 migration to schema \u0026#34;PUBLIC\u0026#34; (execution time 00:00.024s). I\u0026rsquo;ve created a free alternative, which is capable of handling the rollback of previously applied changes for a PostgreSQL database.\nDatabase Migration as Part of a CI/CD Process  \u0026ldquo;If it can be automated, it should be automated\u0026rdquo; - Unknown\n This quote is also applicable to delivering database changes to different environments (test, stage, prod, etc.).\nWe need to make sure that our local database changes will work on all other servers. The most common approach is to use a CI/CD build to emulate a real deployment.\nOne of the most widely used CI/CD servers is Jenkins. 
Let\u0026rsquo;s define a pipeline using the Flyway Gradle plugin to execute the database migrations:\npipeline { agent any stages { stage(\u0026#39;Checkout\u0026#39;) { steps { checkout scm } } stage(\u0026#39;Apply Database Migrations\u0026#39;) { steps { script { if (isUnix()) { sh \u0026#39;./gradlew flywayMigrate --info\u0026#39; } else { bat \u0026#39;gradlew.bat flywayMigrate --info\u0026#39; } } } } } } We call ./gradlew flywayMigrate to run the SQL scripts against the database. We have to make sure, of course, that the Flyway Gradle plugin is configured against the correct database. We could even create multiple configurations so that we can migrate to different databases (staging, production, \u0026hellip;) in different CI/CD pipelines.\nThe same command can easily be integrated into the pipelines of CI/CD tools other than Jenkins.\nConclusion Implementing automated database migration with Flyway makes us confident when dealing with database changes and their distribution to target environments.\nAnother popular alternative to Flyway is Liquibase, which will be the subject of a future blog post.\nYou can find the example code on GitHub.\n","date":"February 21, 2020","image":"https://reflectoring.io/images/stock/0060-data-1200x628-branded_hue5f55076dc203147ceba2a59a969fa03_177458_650x0_resize_q90_box.jpg","permalink":"/database-migration-spring-boot-flyway/","title":"One-Stop Guide to Database Migration with Flyway and Spring Boot"},{"categories":["AWS"],"contents":"Amazon Web Services is a beast. It offers so many different cloud services that my natural reaction was to be intimidated. But not for long! I intend to tame that beast one blog post at a time until I have a production-grade, continuously deployable system!\nWe\u0026rsquo;ll start this series by creating a small win to boost our motivation: we\u0026rsquo;ll deploy a Docker image using the AWS Management Console. 
In a real-world scenario with multiple images and a more complex setup, we\u0026rsquo;d want to automate deployments using scripts and the AWS command-line interface. But using the web-based Management Console is a good way to get our bearings.\nCheck Out the Book!  This article gives only a first impression of what you can do with Docker and AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n  Example Code This article is accompanied by a working code example on GitHub. Prerequisites Before we start, there are some things to set up to get this tutorial going smoothly.\nOptionally, if you want to create your own Docker image, you need to have an account at hub.docker.com. Once logged in, you need to create a repository. You can give it any name you want, but aws-hello-world is a good candidate. We\u0026rsquo;ll later use this repository to publish our Docker image so that AWS can load it from there.\nYou can also use a different Docker registry (Amazon ECR, Artifactory, Docker\u0026rsquo;s own Registry, or any of a list of other products), but we\u0026rsquo;ll use the public Docker Hub in this tutorial.\nNext, you\u0026rsquo;ll need an AWS account. Go to aws.amazon.com/console/ and sign up.\nNote that, as of yet, I find Amazon\u0026rsquo;s pricing for its cloud services very opaque. 
I can\u0026rsquo;t guarantee that this tutorial won\u0026rsquo;t incur costs for your AWS account, but it hasn\u0026rsquo;t for me (Update: after the month ended, I actually got a bill of $0.36 for playing around with AWS to write this article).\nFinally, if you want to create and publish your own Docker image, you need to have Docker installed.\nPreparing a Docker Image Let\u0026rsquo;s start with creating and publishing a Docker image that we can then deploy to AWS. If you want to skip this part, you can just use the Docker image reflectoring/aws-hello-world:latest, which is available here, and move on to the next chapter.\nCreating the Docker Image For this tutorial, we\u0026rsquo;ll create a simple Docker image from a Hello World application I created. You can pull it from GitHub to build the Docker image yourself.\nThe example application is a simple Hello World application that prints \u0026ldquo;Hello World\u0026rdquo; when you open the \u0026ldquo;/hello\u0026rdquo; endpoint in a browser.\nTo build a Docker image, the application has a Dockerfile:\nFROM openjdk:8-jdk-alpine ARG JAR_FILE=build/libs/*.jar COPY ${JAR_FILE} app.jar ENTRYPOINT [\u0026#34;java\u0026#34;,\u0026#34;-jar\u0026#34;,\u0026#34;/app.jar\u0026#34;] EXPOSE 8080 This is a Dockerfile wrapping our Spring Boot application. It starts with an OpenJDK image on top of an Alpine Linux distribution and takes the path to a JAR file as an argument. It then copies this JAR file into app.jar within its own filesystem and runs java -jar app.jar to start the application. Finally, we\u0026rsquo;re telling Docker that the application exposes port 8080, which is for documentation purposes more than for real effect.\nNext, we have to build the Java application with ./gradlew clean build. 
This will create the file build/libs/aws-hello-world-0.0.1-SNAPSHOT.jar, which Docker picks up by default because we specified build/libs/*.jar as the default value for the JAR_FILE argument in our Docker file.\nNow, we can build the Docker image. From the folder containing the Dockerfile, we run:\ndocker build -t reflectoring/aws-hello-world:latest . To check if everything worked out, we can run\ndocker images | grep aws-hello-world which will display all Docker images available locally that contain aws-hello-world in their name.\nTesting the Docker Image Let\u0026rsquo;s check if the Docker image we just built actually works. We start the image up with docker run:\ndocker run -p 8081:8080 reflectoring/aws-hello-world:latest With -p we define that whatever is available on port 8080 within the container, Docker will make available via the port 8081 on the host computer. In other words, requests to port 8081 on the host computer (the host port) will be forwarded to port 8080 within the container (the container port).\nWithout specifying these ports, Docker won\u0026rsquo;t expose a port on which we can access the application.\nWhen the Docker container has successfully started up, we should see log output similar to this:\n... Tomcat started on port(s) 8080 (http) with context path \u0026#39;\u0026#39; ... Started AwsHelloWorldApplication in 3.222 seconds ... Once we see this output, we can type http://localhost:8081/hello into a browser and should be rewarded with a \u0026ldquo;Hello World\u0026rdquo; message.\nPublishing the Docker Image To deploy a Docker image to AWS, it needs to be available in a Docker registry so that AWS can download it from there. So, let\u0026rsquo;s publish our image.\nWe can choose to publish our Docker image in any Docker registry we want, even our own, as long as AWS can reach it from the internet. 
We\u0026rsquo;ll publish it to Docker Hub, which is Docker\u0026rsquo;s official registry.\nFor business-critical applications that we don\u0026rsquo;t want to share with the world, we should have our own private Docker registry, but for this tutorial, we\u0026rsquo;ll just share the Docker image publicly.\nFirst, we need to log in with Docker:\ndocker login registry-1.docker.io We\u0026rsquo;re prompted to enter the credentials to our Docker Hub account.\nWe can leave out the registry-1.docker.io part because Docker will use this as a default. If we want to publish to a different registry, we need to replace this address.\nNext, we push the Docker image to the registry:\ndocker push reflectoring/aws-hello-world:latest Requested Access to the Resource is Denied?  If you get errors like denied: requested access to the resource is denied when pushing to Docker Hub this means that you don't have permission to the account you want to publish the image under. Make sure that the account name (the name before the \"/\") is correct and that it's either your account name or the name of an organization that you have access to.  AWS Concepts So, we\u0026rsquo;ve got a Docker image ready to be deployed to AWS. Before we start working with AWS, let\u0026rsquo;s learn some high-level AWS vocabulary that we\u0026rsquo;ll need.\nECS - Elastic Container Service ECS is the \u0026ldquo;entry point\u0026rdquo; service that allows us to run Docker containers on AWS infrastructure. Under the hood, it uses a bunch of other AWS services to get things done.\nTask A task is AWS domain language for a wrapper around one or more Docker containers. 
A task instance is what AWS considers an instance of our application.\nService A service wraps a task and provides security rules to control access and potentially load balancing rules to balance traffic across multiple task instances.\nCluster A cluster provides a network and scaling rules for the tasks of a service.\nDeploying a Docker Image Using the Management Console We\u0026rsquo;ll configure a task, service, and cluster using the \u0026ldquo;Get Started\u0026rdquo; wizard provided in the web-based management console. This wizard is very convenient to use, but it\u0026rsquo;s very limited in its feature set. We don\u0026rsquo;t have all the configuration options available.\nAlso, by definition, deploying containers via the web-based wizard is a manual process and cannot be automated. In real-world scenarios, we want to automate deployments and will need to use the AWS CLI.\nIf you want to follow along, open the ECS start page in your browser and click on the \u0026ldquo;Get started\u0026rdquo; button. It should take no more than a couple of minutes to get a container up and running!\nConfiguring the Container First, we configure the Docker container:\nWe can select a pre-defined Docker image or choose our own. We want to use the Docker image we published previously, so we\u0026rsquo;ll click on the \u0026ldquo;Configure\u0026rdquo; button in the \u0026ldquo;custom\u0026rdquo; box to open the \u0026ldquo;Edit container\u0026rdquo; form and will be prompted to enter a bunch of information:\n Container name: An arbitrary name for the container. Image: The URL to the Docker image. If you have published your image in a Docker registry different from Docker Hub, check with that registry what the URL to your image looks like. 
We\u0026rsquo;ll use the URL docker.io/reflectoring/aws-hello-world:latest, pointing to the Docker image we prepared above. Private repository authentication: if the Docker image is private, we need to provide authentication credentials here. We\u0026rsquo;ll skip this, as our image is public. Memory Limits: We\u0026rsquo;ll leave the default (i.e. no memory limits). This should definitely be thought out and set in a production deployment, though! Port mappings: Here we can define the container port, i.e. the port that our application exposes. The Spring Boot application in the aws-hello-world Docker image exposes port 8080, so we have to put this port here. The container port doubles as the host port and I have found no way of changing that using the web wizard. This means that we\u0026rsquo;ll have to add :8080 to the URL when we want to access our application later.  In the \u0026ldquo;Advanced container configuration\u0026rdquo; section, we could configure more, but we\u0026rsquo;ll leave everything else in the default configuration for now.\nConfiguring the Task Next, we configure the task, which wraps our Docker image:\nWe leave everything in the default setting except the name, so we can find the task later.\nConfiguring the Service Next, the wizard takes us to a screen configuring the service that\u0026rsquo;s going to wrap the task we just configured:\nAgain, we just change the name and leave everything in the default setting.\nConfiguring the Cluster We do the same with the cluster configuration:\nChange the name, leave the rest on default, hit \u0026ldquo;Next\u0026rdquo;.\nTesting the Service After checking everything again and hitting the \u0026ldquo;Create\u0026rdquo; button, we\u0026rsquo;ll be redirected to a screen showing the steps AWS performs to set everything up:\nWhen all steps are completed, hit the \u0026ldquo;View service\u0026rdquo; button, and we\u0026rsquo;ll see a screen 
like this:\nThis screen shows a whole bunch of information about the status of the service we have just started. But where do we find the URL it\u0026rsquo;s available at so that we can test it out?\nTo find the URL of our application in the web console, do the following:\n Click on the cluster name to see the status of the cluster. In the \u0026ldquo;Tasks\u0026rdquo; tab, click the name of the task to see the status of the task. Click the \u0026ldquo;ENI Id\u0026rdquo; of the task to see the status of the network interface of that task (ENI = Elastic Network Interface). On the status page of the network interface, we finally find the public IPv4 address we can use to access our freshly deployed service.  If you have deployed the aws-hello-world container from above, add :8080/hello to that URL, put it in your browser, and you should see the message \u0026ldquo;Hello AWS!\u0026rdquo;.\nDone. We\u0026rsquo;ve just deployed our first Docker container to AWS!\nBut How Do The Pros Do It? Deploying a Docker container via a web UI is nice and all, but it doesn\u0026rsquo;t scale. We need a human each time we want to deploy a new version of the application, doing a bunch of manual stuff.\nAnd the web UI doesn\u0026rsquo;t even provide all the knobs and dials we might need. We\u0026rsquo;ve seen already that we can\u0026rsquo;t even provide a host port to do proper port forwarding.\nSo, there\u0026rsquo;s a bunch of things we\u0026rsquo;re missing out on when using the web UI.\nAll the stuff we\u0026rsquo;re missing (and more) is available in the AWS CLI, a command-line interface that we can use in scripts to remote control pretty much everything that\u0026rsquo;s going on in our AWS account.\nKeep your eyes peeled for a follow-up article doing the same as we did here with the AWS CLI.\nThe AWS Journey So, we\u0026rsquo;ve successfully deployed a Docker container. That\u0026rsquo;s only the beginning of the story. 
Having an application running in the cloud opens a huge range of follow-up questions.\nHere\u0026rsquo;s a list of the questions I want to answer on this journey. If there\u0026rsquo;s a link, it has already been answered with a blog post! If not, stay tuned!\n How can I deploy an application from the web console? (this article) How can I deploy an application from the command line? How can I implement high availability for my deployed application? How do I set up load balancing? How can I deploy a database in a private subnet and access it from my application? How can I deploy my application from a CI/CD pipeline? How can I deploy a new version of my application without downtime? How can I deploy my application into multiple environments (test, staging, production)? How can I auto-scale my application horizontally on high load? How can I implement sticky sessions in the load balancer (if I\u0026rsquo;m building a session-based webapp)? How can I monitor what’s happening on my application? How can I bind my application to a custom domain? How can I access other AWS resources (like SQS queues and DynamoDB tables) from my application? How can I implement HTTPS?  Conclusion The AWS web interface is intimidating. If we know where to look, though, we can deploy a Docker container in a matter of minutes. But this is a manual process and the web interface only provides basic means of configuration.\nIn real-world scenarios, we need to use the AWS CLI to create production-grade configurations and to deploy those from within an automated CI/CD pipeline.\nCheck Out the Book!  
This article gives only a first impression of what you can do with Docker and AWS.\nIf you want to go deeper and learn how to deploy a Spring Boot application to the AWS cloud and how to connect it to cloud services like RDS, Cognito, and SQS, make sure to check out the book Stratospheric - From Zero to Production with Spring Boot and AWS!\n ","date":"February 15, 2020","image":"https://reflectoring.io/images/stock/0061-cloud-1200x628-branded_hu34d6aa247e0bb2675461b5a0146d87a8_82985_650x0_resize_q90_box.jpg","permalink":"/aws-deploy-docker-image-via-web-console/","title":"The AWS Journey Part 1: Deploying Your First Docker Image"},{"categories":["Spring Boot"],"contents":"To \u0026ldquo;listen\u0026rdquo; to an event, we can always write the \u0026ldquo;listener\u0026rdquo; as another method within the source of the event, but this will tightly couple the event source to the logic of the listener.\nWith real events, we are more flexible than with direct method calls. We can dynamically register and deregister listeners to certain events as we wish. We can also have multiple listeners for the same event.\nThis tutorial gives an overview of how to publish and listen to custom events and explains Spring Boot\u0026rsquo;s built-in events.\n Example Code This article is accompanied by a working code example on GitHub. Why Should I Use Events Instead of Direct Method Calls? Both events and direct method calls fit different situations. A method call is like making an assertion that, no matter the state of the sending and receiving modules, they need to know this event happened.\nWith events, on the other hand, we just say that an event occurred, and which modules get notified is not our concern. It\u0026rsquo;s good to use events when we want to pass on the processing to another thread (example: sending an email on some task completion). Also, events come in handy for test-driven development.\nWhat is an Application Event? 
Spring application events allow us to publish and listen to specific application events that we can process as we wish. Events are meant for exchanging information between loosely coupled components. As there is no direct coupling between publishers and subscribers, it enables us to modify subscribers without affecting the publishers and vice-versa.\nLet\u0026rsquo;s see how we can create, publish and listen to custom events in a Spring Boot application.\nCreating an ApplicationEvent We can publish application events using the Spring Framework’s event publishing mechanism.\nLet\u0026rsquo;s create a custom event called UserCreatedEvent by extending ApplicationEvent:\nclass UserCreatedEvent extends ApplicationEvent { private String name; UserCreatedEvent(Object source, String name) { super(source); this.name = name; } ... } The source that is passed to super() can be an object on which the event initially occurred or an object with which the event is associated.\nSince Spring 4.2, we can also publish objects as an event without extending ApplicationEvent:\nclass UserRemovedEvent { private String name; UserRemovedEvent(String name) { this.name = name; } ... } Publishing an ApplicationEvent We use the ApplicationEventPublisher interface to publish our events.\nWhen the object we\u0026rsquo;re publishing is not an ApplicationEvent, Spring will wrap it in a PayloadApplicationEvent for us:\n@Component class Publisher { @Autowired private ApplicationEventPublisher publisher; void publishEvent(final String name) { // Publishing event created by extending ApplicationEvent \tpublisher.publishEvent(new UserCreatedEvent(this, name)); // Publishing an object as an event \tpublisher.publishEvent(new UserRemovedEvent(name)); } } Listening to an Application Event Now that we know how to create and publish a custom event, let\u0026rsquo;s see how we can listen to the event. 
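Before diving into the Spring APIs: under the hood, this mechanism is an application of the classic observer pattern. The following plain-Java sketch is purely illustrative (it is not Spring's actual implementation, and all names in it are invented) and shows the decoupling that publishing and listening boil down to:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative only: a tiny event bus. Publishers only know the bus,
// never the listeners, which is exactly the decoupling events give us.
class SimpleEventBus {

    private final List<Consumer<Object>> listeners = new ArrayList<>();

    void register(Consumer<Object> listener) {
        listeners.add(listener);
    }

    void publish(Object event) {
        // Synchronous by default, just like Spring's events: the
        // publishing thread invokes every registered listener in turn.
        for (Consumer<Object> listener : listeners) {
            listener.accept(event);
        }
    }
}
```

In Spring, the role of this bus is played by the ApplicationContext itself, and listeners are registered as beans rather than by hand.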
An event can have multiple listeners doing different work based on the application requirements.\nThere are two ways to define a listener. We can either use the @EventListener annotation or implement the ApplicationListener interface. In either case, the listener class has to be managed by Spring.\nAnnotation-Driven @Component class UserRemovedListener { @EventListener ReturnedEvent handleUserRemovedEvent(UserRemovedEvent event) { System.out.println(String.format(\u0026#34;User removed (@EventListener): %s\u0026#34;, event.getName())); // Spring will send ReturnedEvent as a new event \treturn new ReturnedEvent(); } // Listener to receive the event returned by Spring \t@EventListener void handleReturnedEvent(ReturnedEvent event) { System.out.println(\u0026#34;Returned Event Called\u0026#34;); } ... } Starting with Spring 4.1, it\u0026rsquo;s possible to simply annotate a method of a managed bean with @EventListener to automatically register an ApplicationListener matching the signature of the method. No additional configuration is necessary with annotation-driven configuration enabled. Our method can listen to several events. If we want to define it with no parameter at all, the event types can be specified on the annotation itself, for example: @EventListener({ContextStartedEvent.class, ContextRefreshedEvent.class}).\nFor methods annotated with @EventListener that have a non-void return type, Spring will publish the result as a new event for us.\n@Component class UserRemovedListener { ... @EventListener(condition = \u0026#34;#event.name eq \u0026#39;reflectoring\u0026#39;\u0026#34;) void handleConditionalListener(UserRemovedEvent event) { System.out.println(String.format(\u0026#34;User removed (Conditional): %s\u0026#34;, event.getName())); } } Spring allows our listener to be triggered only in certain circumstances if we specify a condition by defining a boolean SpEL expression. 
The event will only be handled if the expression evaluates to true or one of the following strings: \u0026ldquo;true\u0026rdquo;, \u0026ldquo;on\u0026rdquo;, \u0026ldquo;yes\u0026rdquo;, or \u0026ldquo;1\u0026rdquo;. Method arguments are exposed via their names. The condition expression also exposes a “root” variable referring to the raw ApplicationEvent (#root.event) and the actual method arguments (#root.args).\nIn the above example, the listener will be triggered with UserRemovedEvent only when #event.name has the value 'reflectoring'.\nImplementing ApplicationListener @Component class UserCreatedListener implements ApplicationListener\u0026lt;UserCreatedEvent\u0026gt; { @Override public void onApplicationEvent(UserCreatedEvent event) { System.out.println(String.format(\u0026#34;User created: %s\u0026#34;, event.getName())); } } In the above example, we have created a listener by implementing ApplicationListener, and the generic type parameter represents the type of event we want to listen to. It is now possible to define our ApplicationListener implementation with nested generics information in the event type. When dispatching an event, the signature of our listener is used to determine if it matches said incoming event.\nAsynchronous Event Listeners By default, Spring events are synchronous, meaning the publisher thread blocks until all listeners have finished processing the event.\n@Component class AsyncListener { @Async @EventListener void handleAsyncEvent(String event) { System.out.println(String.format(\u0026#34;Async event received: %s\u0026#34;, event)); } } To make an event listener run in async mode, all we have to do is use the @Async annotation on that listener. To make the @Async annotation work, we also have to annotate one of our @Configuration classes or the @SpringBootApplication class with @EnableAsync.\nTransaction-Bound Events Spring allows us to bind an event listener to a phase of the current transaction. 
This allows events to be used with more flexibility when the outcome of the current transaction actually matters to the listener.\n@Component class UserRemovedListener { @TransactionalEventListener(condition = \u0026#34;#event.name eq \u0026#39;reflectoring\u0026#39;\u0026#34;, phase=TransactionPhase.AFTER_COMPLETION) void handleAfterUserRemoved(UserRemovedEvent event) { System.out.println(String.format(\u0026#34;User removed (@TransactionalEventListener): %s\u0026#34;, event.getName())); } } The transaction module implements an EventListenerFactory that looks for the new @TransactionalEventListener annotation. So when we annotate our method with @TransactionalEventListener, we get an extended event listener that is aware of the transaction:\nThe UserRemovedListener will only be invoked when the current transaction completes.\nWe can bind the listener to the following phases of the transaction:\n AFTER_COMMIT: Event will be fired when the transaction is committed successfully. We can perform further operations once the main transaction commit has completed. AFTER_COMPLETION: Event will be fired when the transaction commits or rolls back. We can perform cleanup after transaction completion. AFTER_ROLLBACK: Event will be fired after the transaction has rolled back. BEFORE_COMMIT: Event will be fired before the transaction commits. We can flush transactional O/R mapping sessions to the database.  Spring Boot’s Application Events Spring Boot provides a number of predefined ApplicationEvents that are tied to the lifecycle of a SpringApplication.\nSome events are actually triggered before the ApplicationContext is created, so we cannot register a listener on those as a @Bean. 
We can register listeners for these events by adding the listener manually:\n@SpringBootApplication public class EventsDemoApplication { public static void main(String[] args) { SpringApplication springApplication = new SpringApplication(EventsDemoApplication.class); springApplication.addListeners(new SpringBuiltInEventsListener()); springApplication.run(args); } } We can also register our listeners regardless of how the application is created by adding a META-INF/spring.factories file to our project and referencing our listener(s) by using the org.springframework.context.ApplicationListener key:\norg.springframework.context.ApplicationListener = com.reflectoring.eventdemo.SpringBuiltInEventsListener\nclass SpringBuiltInEventsListener implements ApplicationListener\u0026lt;SpringApplicationEvent\u0026gt;{ @Override public void onApplicationEvent(SpringApplicationEvent event) { System.out.println(\u0026#34;SpringApplicationEvent Received - \u0026#34; + event); } } Once we make sure that our event listener is registered properly, we can listen to all of Spring Boot\u0026rsquo;s SpringApplicationEvents.\nBelow is the list of SpringApplicationEvents in the order of their execution:\nApplicationContextInitializedEvent An ApplicationContextInitializedEvent is fired when the ApplicationContext is ready and ApplicationContextInitializers are called but bean definitions are not yet loaded. It can be used to perform tasks before beans are initialized in the Spring container.\nApplicationEnvironmentPreparedEvent An ApplicationEnvironmentPreparedEvent is fired when the Environment to be used in the context is available. Since the environment is ready at this point, we can inspect and modify it if required.\nApplicationFailedEvent An ApplicationFailedEvent is fired if there is an exception and the application fails to start. 
It can be used to perform tasks like executing a script or sending a notification on failure.\nApplicationPreparedEvent An ApplicationPreparedEvent is fired when the ApplicationContext is prepared but not refreshed. The environment is ready for use and bean definitions will be loaded.\nApplicationReadyEvent An ApplicationReadyEvent is fired to indicate that the application is ready to service requests. It is advised not to modify the internal state at this point, since all initialization steps are completed.\nApplicationStartedEvent An ApplicationStartedEvent is fired after the context has been refreshed but before any application and command-line runners have been called.\nApplicationStartingEvent An ApplicationStartingEvent is fired at the start of a run but before any processing, except for the registration of listeners and initializers.\nContextRefreshedEvent A ContextRefreshedEvent is fired when an ApplicationContext is refreshed.\nWebServerInitializedEvent A WebServerInitializedEvent is fired after the WebServer is ready. ServletWebServerInitializedEvent and ReactiveWebServerInitializedEvent are the servlet and reactive variants, respectively.\nConclusion Events are designed for simple communication among Spring beans within the same application context. 
As of Spring 4.2, the infrastructure has been significantly improved and offers an annotation-based model as well as the ability to publish any arbitrary event.\nHowever, for more sophisticated enterprise needs, the Spring Integration project provides complete support for building lightweight, pattern-oriented, event-driven applications that build upon the well-known Spring programming model.\n","date":"February 13, 2020","image":"https://reflectoring.io/images/stock/0058-motorway-junction-1200x628-branded_hua289a663b32b971eb8621dc44c8dafac_322530_650x0_resize_q90_box.jpg","permalink":"/spring-boot-application-events-explained/","title":"Spring Boot Application Events Explained"},{"categories":["Spring Boot"],"contents":"Multitenancy applications allow different customers to work with the same application without seeing each other\u0026rsquo;s data. That means we have to set up a separate data store for each tenant. And as if that\u0026rsquo;s not hard enough, if we want to make some changes to the database, we have to do it for every tenant.\nThis article shows how to implement a Spring Boot application with a data source for each tenant and how to use Flyway to make updates to all tenant databases at once.\n Example Code This article is accompanied by a working code example on GitHub. General Approach To work with multiple tenants in an application we\u0026rsquo;ll have a look at:\n how to bind an incoming request to a tenant, how to provide the data source for the current tenant, and how to execute SQL scripts for all tenants at once.  Binding a Request to a Tenant When the application is used by many different tenants, every tenant has their own data. 
This means that the business logic executed with each request sent to the application must work with the data of the tenant who sent the request.\nThat\u0026rsquo;s why we need to assign every request to an existing tenant.\nThere are different ways to bind an incoming request to a specific tenant:\n sending a tenantId with a request as part of the URI, adding a tenantId to the JWT token, including a tenantId field in the header of the HTTP request, and many more\u0026hellip;.  To keep it simple, let\u0026rsquo;s consider the last option. We\u0026rsquo;ll include a tenantId field in the header of the HTTP request.\nIn Spring Boot, to read the header from a request, we implement the WebRequestInterceptor interface. This interface allows us to intercept a request before it\u0026rsquo;s received in the web controller:\n@Component public class HeaderTenantInterceptor implements WebRequestInterceptor { public static final String TENANT_HEADER = \u0026#34;X-tenant\u0026#34;; @Override public void preHandle(WebRequest request) throws Exception { ThreadTenantStorage.setTenantId(request.getHeader(TENANT_HEADER)); } // other methods omitted  } In the method preHandle(), we read every request\u0026rsquo;s tenantId from the header and forward it to ThreadTenantStorage.\nThreadTenantStorage is a storage that contains a ThreadLocal variable. 
By storing the tenantId in a ThreadLocal we can be sure that every thread has its own copy of this variable and that the current thread has no access to another tenantId:\npublic class ThreadTenantStorage { private static ThreadLocal\u0026lt;String\u0026gt; currentTenant = new ThreadLocal\u0026lt;\u0026gt;(); public static void setTenantId(String tenantId) { currentTenant.set(tenantId); } public static String getTenantId() { return currentTenant.get(); } public static void clear(){ currentTenant.remove(); } } The last step in configuring the tenant binding is to make our interceptor known to Spring:\n@Configuration public class WebConfiguration implements WebMvcConfigurer { private final HeaderTenantInterceptor headerTenantInterceptor; public WebConfiguration(HeaderTenantInterceptor headerTenantInterceptor) { this.headerTenantInterceptor = headerTenantInterceptor; } @Override public void addInterceptors(InterceptorRegistry registry) { registry.addWebRequestInterceptor(headerTenantInterceptor); } } Don't Use Sequential Numbers as Tenant IDs!  Sequential numbers are easy to guess. All you have to do as a client is to add or subtract from your own tenantId, modify the HTTP header, and voilà, you'll have access to another tenant's data.  Better use a UUID, as it's all but impossible to guess and people won't accidentally confuse one tenant ID with another. Better yet, verify that the logged-in user actually belongs to the specified tenant in each request.  Configuring a DataSource For Each Tenant There are different possibilities to separate data for different tenants. We can\n use a different schema for each tenant, or use a completely different database for each tenant.  
From the application\u0026rsquo;s perspective, schemas and databases are abstracted by a DataSource, so, in the code, we can handle both approaches in the same way.\nIn a Spring Boot application, we usually configure the DataSource in application.yaml using properties with the prefix spring.datasource. But we can define only one DataSource with these properties. To define multiple DataSources we need to use custom properties in application.yaml:\ntenants: datasources: vw: jdbcUrl: jdbc:h2:mem:vw driverClassName: org.h2.Driver username: sa password: password bmw: jdbcUrl: jdbc:h2:mem:bmw driverClassName: org.h2.Driver username: sa password: password In this case, we configured data sources for two tenants: vw and bmw.\nTo get access to these DataSources in our code, we can bind the properties to a Spring bean using @ConfigurationProperties:\n@Component @ConfigurationProperties(prefix = \u0026#34;tenants\u0026#34;) public class DataSourceProperties { private Map\u0026lt;Object, Object\u0026gt; datasources = new LinkedHashMap\u0026lt;\u0026gt;(); public Map\u0026lt;Object, Object\u0026gt; getDatasources() { return datasources; } public void setDatasources(Map\u0026lt;String, Map\u0026lt;String, String\u0026gt;\u0026gt; datasources) { datasources .forEach((key, value) -\u0026gt; this.datasources.put(key, convert(value))); } public DataSource convert(Map\u0026lt;String, String\u0026gt; source) { return DataSourceBuilder.create() .url(source.get(\u0026#34;jdbcUrl\u0026#34;)) .driverClassName(source.get(\u0026#34;driverClassName\u0026#34;)) .username(source.get(\u0026#34;username\u0026#34;)) .password(source.get(\u0026#34;password\u0026#34;)) .build(); } } In DataSourceProperties, we build a Map with the data source names as keys and the DataSource objects as values. 
Now we can add a new tenant to application.yaml and the DataSource for this new tenant will be loaded automatically when the application is started.\nThe default configuration of Spring Boot has only one DataSource. In our case, however, we need a way to load the right data source for a tenant, depending on the tenantId from the HTTP request. We can achieve this by using an AbstractRoutingDataSource.\nAbstractRoutingDataSource can manage multiple DataSources and routes between them. We can extend AbstractRoutingDataSource to route between our tenants' Datasources:\npublic class TenantRoutingDataSource extends AbstractRoutingDataSource { @Override protected Object determineCurrentLookupKey() { return ThreadTenantStorage.getTenantId(); } } The AbstractRoutingDataSource will call determineCurrentLookupKey() whenever a client requests a connection. The current tenant is available from ThreadTenantStorage, so the method determineCurrentLookupKey() returns this current tenant. This way, TenantRoutingDataSource will find the DataSource of this tenant and return a connection to this data source automatically.\nNow, we have to replace Spring Boot\u0026rsquo;s default DataSource with our TenantRoutingDataSource:\n@Configuration public class DataSourceConfiguration { private final DataSourceProperties dataSourceProperties; public DataSourceConfiguration(DataSourceProperties dataSourceProperties) { this.dataSourceProperties = dataSourceProperties; } @Bean public DataSource dataSource() { TenantRoutingDataSource customDataSource = new TenantRoutingDataSource(); customDataSource.setTargetDataSources( dataSourceProperties.getDatasources()); return customDataSource; } } To let the TenantRoutingDataSource know which DataSources to use, we pass the map of DataSources from our DataSourceProperties into setTargetDataSources().\nThat\u0026rsquo;s it. 
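The routing mechanism above hinges on nothing more than a thread-local key and a map lookup. Here is a self-contained, JDK-only sketch of that idea (no Spring involved; plain Strings stand in for the DataSource objects, and the names merely mirror the article's classes):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// JDK-only sketch of the routing idea: a thread-local tenant ID selects
// the target from a map, the way TenantRoutingDataSource picks a
// DataSource via determineCurrentLookupKey(). Strings stand in for
// real DataSource objects here.
public class TenantRoutingDemo {

    private static final ThreadLocal<String> currentTenant = new ThreadLocal<>();
    private static final Map<String, String> targets = Map.of(
        "vw", "jdbc:h2:mem:vw",
        "bmw", "jdbc:h2:mem:bmw");

    static String determineCurrentLookupKey() {
        return currentTenant.get();
    }

    static String routedTarget() {
        return targets.get(determineCurrentLookupKey());
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, String> seen = new ConcurrentHashMap<>();

        Runnable request = () -> {
            // the interceptor would set this from the X-tenant header;
            // here we simply reuse the thread name as the tenant ID
            currentTenant.set(Thread.currentThread().getName());
            seen.put(Thread.currentThread().getName(), routedTarget());
            currentTenant.remove(); // avoid leaking the ID to the next request
        };

        Thread vw = new Thread(request, "vw");
        Thread bmw = new Thread(request, "bmw");
        vw.start(); bmw.start();
        vw.join(); bmw.join();

        System.out.println(seen.get("vw"));  // jdbc:h2:mem:vw
        System.out.println(seen.get("bmw")); // jdbc:h2:mem:bmw
    }
}
```

Each thread resolves only the target belonging to the tenant ID it stored itself, which is exactly how TenantRoutingDataSource hands out a connection to the right database per request. Note the remove() call: with pooled request threads, a stale tenant ID must not leak into the next request.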
Each HTTP request will now have its own DataSource depending on the tenantId in the HTTP header.\nMigrating Multiple SQL Schemas at Once If we want to have version control over the database state with Flyway and make changes to it like adding a column, adding a table, or dropping a constraint, we have to write SQL scripts. With Spring Boot\u0026rsquo;s Flyway support we just need to deploy the application and new scripts are executed automatically to migrate the database to the new state.\nTo enable Flyway for all of our tenants' data sources, first we have to disable the preconfigured property for automated Flyway migration in application.yaml:\nspring: flyway: enabled: false If we don\u0026rsquo;t do this, Flyway will try to migrate scripts to the current DataSource when starting the application. But during startup, we don\u0026rsquo;t have a current tenant, so ThreadTenantStorage.getTenantId() would return null and the application would crash.\nNext, we want to apply the Flyway-managed SQL scripts to all DataSources we defined in the application. We can iterate over our DataSources in a @PostConstruct method:\n@Configuration public class DataSourceConfiguration { private final DataSourceProperties dataSourceProperties; public DataSourceConfiguration(DataSourceProperties dataSourceProperties) { this.dataSourceProperties = dataSourceProperties; } @PostConstruct public void migrate() { for (Object dataSource : dataSourceProperties .getDatasources() .values()) { DataSource source = (DataSource) dataSource; Flyway flyway = Flyway.configure().dataSource(source).load(); flyway.migrate(); } } } Whenever the application starts, the SQL scripts are now executed for each tenant\u0026rsquo;s DataSource.\nIf we want to add a new tenant, we just put a new configuration in application.yaml and restart the application to trigger the SQL migration. 
The new tenant\u0026rsquo;s database will be updated to the current state automatically.\nIf we don\u0026rsquo;t want to re-compile the application for adding or removing a tenant, we can externalize the configuration of tenants (i.e. not bake application.yaml into the JAR or WAR file). Then, all it needs to trigger the Flyway migration is a restart.\nConclusion Spring Boot provides good means to implement a multi-tenant application. With interceptors, it\u0026rsquo;s possible to bind the request to a tenant. Spring Boot supports working with many data sources and with Flyway we can execute SQL scripts across all of those data sources.\nYou can find the code examples on GitHub.\n","date":"February 4, 2020","image":"https://reflectoring.io/images/stock/0059-library-1200x628-branded_hufd1c76fdddcd68370f35d4cc8a896aad_297099_650x0_resize_q90_box.jpg","permalink":"/flyway-spring-boot-multitenancy/","title":"Multitenancy Applications with Spring Boot and Flyway"},{"categories":["Java"],"contents":"I recently had a rough time refactoring a multi-threaded, reactive message processor. It just didn\u0026rsquo;t seem to be working the way I expected. It was failing in various ways, each of which took me a while to understand. But it finally clicked.\nThis article provides a complete example of a reactive stream that processes items in parallel and explains all the pitfalls I encountered. It should be a good intro for developers that are just starting with reactive, and it also provides a working solution for creating a reactive batch processing stream for those that are looking for such a solution.\nWe\u0026rsquo;ll be using RxJava 3, which is an implementation of the ReactiveX specification. It should be relatively easy to transfer the code to other reactive libraries.\n Example Code This article is accompanied by a working code example on GitHub. 
The Batch Processing Use Case Let\u0026rsquo;s start with a literally painted picture of what we\u0026rsquo;re trying to achieve:\nWe want to create a paginating processor that fetches batches (or pages) of items (we\u0026rsquo;ll call them \u0026ldquo;messages\u0026rdquo;) from a source. This source can be a queue system, or a REST endpoint, or any other system providing input messages for us.\nOur batch processor loads these batches of messages from a dedicated \u0026ldquo;coordinator\u0026rdquo; thread, splits the batch into single messages and forwards each single message to one of several worker threads. We want this coordination work to be done in a separate thread so that we don\u0026rsquo;t block the current thread of our application.\nIn the figure above, the coordinator thread loads pages of 3 messages at a time and forwards them to a thread pool of 2 worker threads to be processed. When all messages of a page have been processed, the coordinator thread loads the next batch of messages and forwards these, too. If the source runs out of messages, the coordinator thread waits for the source to generate more messages and continues its work.\nIn a nutshell, these are the requirements to our batch processor:\n The fetching of messages must take place in a different thread (a coordinator thread) so we don\u0026rsquo;t block the application\u0026rsquo;s thread. The processor can fan out the message processing to an arbitrary configurable number of worker threads. If the message source has more messages than our worker thread pool can handle, we must not reject those incoming messages but instead wait until the worker threads have capacity again.  Why Reactive? So, why implement this multi-threaded batch processor in the reactive programming model instead of in the usual imperative way? 
Reactive is hard, isn\u0026rsquo;t it?\nHard to learn, hard to read, even harder to debug.\nBelieve me, I had my share of cursing the reactive programming model, and I think all of the above statements are true. But I can\u0026rsquo;t help but admire the elegance of the reactive way, especially when it\u0026rsquo;s about working with multiple threads.\nIt requires much less code and, once you have understood it, it even makes sense (this is a lame statement, but I wanted to express my joy in finally having understood it)!\nSo, let\u0026rsquo;s understand this thing.\nDesigning a Batch Processing API First, let\u0026rsquo;s define the API of this batch processor we want to create.\nMessageSource A MessageSource is where the messages come from:\ninterface MessageSource { Flowable\u0026lt;MessageBatch\u0026gt; getMessageBatches(); } It\u0026rsquo;s a simple interface that returns a Flowable of MessageBatch objects. This Flowable can be a steady stream of messages, or a paginated one like in the figure above, or whatever else. The implementation of this interface decides how messages are being fetched from a source.\nMessageHandler At the other end of the reactive stream is the MessageHandler:\ninterface MessageHandler { enum Result { SUCCESS, FAILURE } Result handleMessage(Message message); } The handleMessage() method takes a single message as input and returns a success or failure Result. The Message and Result types are placeholders for whatever types our application needs.\nReactiveBatchProcessor Finally, we have a class named ReactiveBatchProcessor that will later contain the heart of our reactive stream implementation. 
We\u0026rsquo;ll want this class to have an API like this:\nReactiveBatchProcessor processor = new ReactiveBatchProcessor( messageSource, messageHandler, threads, threadPoolQueueSize); processor.start(); We pass a MessageSource and a MessageHandler to the processor so that it knows from where to fetch the messages and where to forward them for processing. Also, we want to configure the size of the worker thread pool and the size of the queue of that thread pool (a ThreadPoolExecutor can have a queue of tasks that is used to buffer tasks when all threads are currently busy).\nTesting the Batch Processing API In test-driven development fashion, let\u0026rsquo;s write a failing test before we start with the implementation.\nNote that I didn\u0026rsquo;t actually build it in TDD fashion, because I didn\u0026rsquo;t know how to test this before playing around with the problem a bit. But from a didactic point of view, I think it\u0026rsquo;s good to start with the test to get a grasp for the requirements:\nclass ReactiveBatchProcessorTest { @Test void allMessagesAreProcessedOnMultipleThreads() { int batches = 10; int batchSize = 3; int threads = 2; int threadPoolQueueSize = 10; MessageSource messageSource = new TestMessageSource(batches, batchSize); TestMessageHandler messageHandler = new TestMessageHandler(); ReactiveBatchProcessor processor = new ReactiveBatchProcessor( messageSource, messageHandler, threads, threadPoolQueueSize); processor.start(); await() .atMost(10, TimeUnit.SECONDS) .pollInterval(1, TimeUnit.SECONDS) .untilAsserted(() -\u0026gt; assertEquals( batches * batchSize, messageHandler.getProcessedMessages())); assertEquals(threads, messageHandler.threadNames().size(), String.format( \u0026#34;expecting messages to be executed on %d threads!\u0026#34;, threads)); } } Let\u0026rsquo;s take this test apart.\nSince we want to unit-test our batch processor, we don\u0026rsquo;t want a real message source or message handler. 
Hence, we create a TestMessageSource that generates 10 batches of 3 messages each and a TestMessageHandler that processes a single message by simply logging it, waiting 500ms, counting the number of messages it has processed and counting the number of threads it has been called from. You can find the implementation of both classes in the GitHub repo.\nThen, we instantiate our not-yet-implemented ReactiveBatchProcessor, giving it 2 threads and a thread pool queue with capacity for 10 messages.\nNext, we call the start() method on the processor, which should trigger the coordination thread to start fetching message batches from the source and passing them to the 2 worker threads.\nSince none of this takes place in the main thread of our unit test, we now have to pause the current thread to wait until the coordinator and worker threads have finished their job. For this, we make use of the Awaitility library.\nThe await() method allows us to wait at most 10 seconds until all messages have been processed (or fail if the messages have not been processed within that time). To check if all messages have been processed, we compare the number of expected messages (batches x messages per batch) to the number of messages that our TestMessageHandler has counted so far.\nFinally, after all messages have been successfully processed, we ask the TestMessageHandler for the number of different threads it has been called from to assert that all threads of our thread pool have been used in processing the messages.\nOur task is now to build an implementation of ReactiveBatchProcessor that passes this test.\nImplementing the Reactive Batch Processor We\u0026rsquo;ll implement the ReactiveBatchProcessor in a couple of iterations. 
Each iteration has a flaw that shows one of the pitfalls of reactive programming that I fell for when solving this problem.\nIteration #1 - Working on the Wrong Thread Let\u0026rsquo;s have a look at the first implementation to get a grasp of the solution:\nclass ReactiveBatchProcessorV1 { // ...  void start() { // WARNING: this code doesn\u0026#39;t work as expected  messageSource.getMessageBatches() .subscribeOn(Schedulers.from(Executors.newSingleThreadExecutor())) .doOnNext(batch -\u0026gt; logger.log(batch.toString())) .flatMap(batch -\u0026gt; Flowable.fromIterable(batch.getMessages())) .flatMapSingle(m -\u0026gt; Single.just(messageHandler.handleMessage(m)) .subscribeOn(threadPoolScheduler(threads, threadPoolQueueSize))) .subscribeWith(new SimpleSubscriber\u0026lt;\u0026gt;(threads, 1)); } } The start() method sets up a reactive stream that fetches MessageBatches from the source.\nWe subscribe to this Flowable\u0026lt;MessageBatch\u0026gt; on a single new thread. This is the thread I called \u0026ldquo;coordinator thread\u0026rdquo; earlier.\nNext, we flatMap() each MessageBatch into a Flowable\u0026lt;Message\u0026gt;. This step allows us to only care about Messages further downstream and ignore the fact that each message is part of a batch.\nThen, we use flatMapSingle() to pass each Message into our MessageHandler. Since the handler has a blocking interface (i.e. it doesn\u0026rsquo;t return a Flowable or Single), we wrap the result with Single.just(). We subscribe to these Singles on a thread pool with the specified number of threads and the specified threadPoolQueueSize.\nFinally, we subscribe to this reactive stream with a simple subscriber that initially pulls enough messages down the stream so that all worker threads are busy and pulls one more message each time a message has been processed.\nLooks good, doesn\u0026rsquo;t it? 
Spot the error if you want to make a game of it :).\nThe test is failing with a ConditionTimeoutException indicating that not all messages have been processed within the timeout. Processing is too slow. Let\u0026rsquo;s look at the log output:\n1580500514456 Test worker: subscribed 1580500514472 pool-1-thread-1: MessageBatch{messages=[1-1, 1-2, 1-3]} 1580500514974 pool-1-thread-1: processed message 1-1 1580500515486 pool-1-thread-1: processed message 1-2 1580500515987 pool-1-thread-1: processed message 1-3 1580500515987 pool-1-thread-1: MessageBatch{messages=[2-1, 2-2, 2-3]} 1580500516487 pool-1-thread-1: processed message 2-1 1580500516988 pool-1-thread-1: processed message 2-2 1580500517488 pool-1-thread-1: processed message 2-3 ... In the logs, we see that our stream has been subscribed to on the Test worker thread, which is the main thread of the JUnit test, and then everything else takes place on the thread pool-1-thread-1.\nAll messages are processed sequentially instead of in parallel!\nThe reason, of course, is that messageHandler.handleMessage() is called in a blocking fashion. The Single.just() doesn\u0026rsquo;t defer the execution to the thread pool!\nThe solution is to wrap it in a Single.defer(), as shown in the next code example.\nIs defer() an Anti-Pattern?  I hear people say that using defer() is an anti-pattern in reactive programming. I don't share that opinion, at least not in a black-or-white sense.  It's true that defer() wraps blocking (= not reactive) code and that this blocking code is not really part of the reactive stream. The blocking code cannot use features of the reactive programming model and thus is probably not taking full advantage of the CPU resources.  But there are cases in which we just don't need the reactive programming model - performance may be good enough without it. 
Think of developers implementing the (blocking) MessageHandler interface - they don't have to think about the complexities of reactive programming, making their job so much easier. I believe that it's OK to make things blocking just to make them easier to understand - assuming performance isn't an issue.  The downside of blocking code within a reactive stream is, of course, that we can run into the pitfall I described above. So, if you use blocking code within a reactive stream, make sure to defer() it!  Iteration #2 - Working On Too Many Thread Pools Ok, we learned that we need to defer() blocking code, so it\u0026rsquo;s not executed on the current thread. This is the fixed version:\nclass ReactiveBatchProcessorV2 { // ...  void start() { // WARNING: this code doesn\u0026#39;t work as expected  messageSource.getMessageBatches() .subscribeOn(Schedulers.from(Executors.newSingleThreadExecutor())) .doOnNext(batch -\u0026gt; logger.log(batch.toString())) .flatMap(batch -\u0026gt; Flowable.fromIterable(batch.getMessages())) .flatMapSingle(m -\u0026gt; Single.defer(() -\u0026gt; Single.just(messageHandler.handleMessage(m))) .subscribeOn(threadPoolScheduler(threads, threadPoolQueueSize))) .subscribeWith(new SimpleSubscriber\u0026lt;\u0026gt;(threads, 1)); } } With the Single.defer() in place, the message processing should now take place in the worker threads:\n1580500834588 Test worker: subscribed 1580500834603 pool-1-thread-1: MessageBatch{messages=[1-1, 1-2, 1-3]} 1580500834618 pool-1-thread-1: MessageBatch{messages=[2-1, 2-2, 2-3]} ... some more message batches 1580500835117 pool-3-thread-1: processed message 1-1 1580500835117 pool-5-thread-1: processed message 1-3 1580500835117 pool-4-thread-1: processed message 1-2 1580500835118 pool-8-thread-1: processed message 2-3 1580500835118 pool-6-thread-1: processed message 2-1 1580500835118 pool-7-thread-1: processed message 2-2 ... some more messages expecting messages to be executed on 2 threads! 
==\u0026gt; expected:\u0026lt;2\u0026gt; but was:\u0026lt;30\u0026gt; This time, the test fails because the messages are processed on 30 different threads! We expected only 2 threads, because that\u0026rsquo;s the pool size we passed into the factory method threadPoolScheduler(), which is supposed to create a ThreadPoolExecutor for us. Where do the other 28 threads come from?\nLooking at the log output, it becomes clear that each message is processed not only in its own thread but in its own thread pool.\nThe reason for this is, once again, that threadPoolScheduler() is called in the wrong thread. It\u0026rsquo;s called for each message that is returned from our message handler.\nThe solution is easy: store the result of threadPoolScheduler() in a variable and use the variable instead.\nIteration #3 - Rejected Messages So, here\u0026rsquo;s the next version, without creating a separate thread pool for each message:\nclass ReactiveBatchProcessorV3 { // ...  void start() { // WARNING: this code doesn\u0026#39;t work as expected  Scheduler scheduler = threadPoolScheduler(threads, threadPoolQueueSize); messageSource.getMessageBatches() .subscribeOn(Schedulers.from(Executors.newSingleThreadExecutor())) .doOnNext(batch -\u0026gt; logger.log(batch.toString())) .flatMap(batch -\u0026gt; Flowable.fromIterable(batch.getMessages())) .flatMapSingle(m -\u0026gt; Single.defer(() -\u0026gt; Single.just(messageHandler.handleMessage(m))) .subscribeOn(scheduler)) .subscribeWith(new SimpleSubscriber\u0026lt;\u0026gt;(threads, 1)); } } Now, it should finally work, shouldn\u0026rsquo;t it? 
Let\u0026rsquo;s look at the test output:\n1580501297031 Test worker: subscribed 1580501297044 pool-3-thread-1: MessageBatch{messages=[1-1, 1-2, 1-3]} 1580501297056 pool-3-thread-1: MessageBatch{messages=[2-1, 2-2, 2-3]} 1580501297057 pool-3-thread-1: MessageBatch{messages=[3-1, 3-2, 3-3]} 1580501297057 pool-3-thread-1: MessageBatch{messages=[4-1, 4-2, 4-3]} 1580501297058 pool-3-thread-1: MessageBatch{messages=[5-1, 5-2, 5-3]} io.reactivex.exceptions.UndeliverableException: The exception could not be delivered to the consumer ... Caused by: java.util.concurrent.RejectedExecutionException: Task ... rejected from java.util.concurrent.ThreadPoolExecutor@4a195f69[ Running, pool size = 2, active threads = 2, queued tasks = 10, completed tasks = 0]\tThe test hasn\u0026rsquo;t even started to process messages and yet it fails due to a RejectedExecutionException!\nIt turns out that this exception is thrown by a ThreadPoolExecutor when all of its threads are busy and its queue is full. Our ThreadPoolExecutor has two threads and we passed 10 as the threadPoolQueueSize, so it has a capacity of 2 + 10 = 12. The 13th message will cause exactly the above exception if the message handler blocks the two threads long enough.\nThe solution to this is to re-queue a rejected task by implementing a RejectedExecutionHandler and adding this to our ThreadPoolExecutor:\nclass WaitForCapacityPolicy implements RejectedExecutionHandler { @Override public void rejectedExecution( Runnable runnable, ThreadPoolExecutor threadPoolExecutor) { try { threadPoolExecutor.getQueue().put(runnable); } catch (InterruptedException e) { throw new RejectedExecutionException(e); } } } Since a ThreadPoolExecutor\u0026rsquo;s queue is a BlockingQueue, the put() operation will wait until the queue has capacity again. 
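To see this policy in isolation, here is a self-contained, JDK-only sketch (not the article's test setup; the pool and queue are shrunk to a capacity of 2 so the third task triggers a rejection immediately):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates the WaitForCapacityPolicy idea: instead of throwing
// RejectedExecutionException, the submitting thread blocks in put()
// until the executor's queue has room again.
public class WaitForCapacityDemo {

    static class WaitForCapacityPolicy implements RejectedExecutionHandler {
        @Override
        public void rejectedExecution(Runnable runnable, ThreadPoolExecutor executor) {
            try {
                executor.getQueue().put(runnable); // blocks until a slot frees up
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RejectedExecutionException(e);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // 1 worker thread + queue of 1 => capacity 2; the 3rd submit is
        // rejected and re-queued by the policy instead of failing.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<>(1),
            new WaitForCapacityPolicy());

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 3; i++) {
            executor.execute(() -> {
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
                done.incrementAndGet();
            });
        }

        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("completed tasks: " + done.get()); // completed tasks: 3
    }
}
```

The third execute() call is rejected, and the policy parks the submitting thread in put() until the worker frees a queue slot, so all three tasks complete instead of one being dropped.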
Since this happens in our coordinator thread, no new messages will be fetched from the source until the ThreadPoolExecutor has capacity.\nIteration #4 - Works as Expected Here\u0026rsquo;s the version that finally passes our test:\nclass ReactiveBatchProcessor { // ...  void start() { Scheduler scheduler = threadPoolScheduler(threads, threadPoolQueueSize); messageSource.getMessageBatches() .subscribeOn(Schedulers.from(Executors.newSingleThreadExecutor())) .doOnNext(batch -\u0026gt; logger.log(batch.toString())) .flatMap(batch -\u0026gt; Flowable.fromIterable(batch.getMessages())) .flatMapSingle(m -\u0026gt; Single.defer(() -\u0026gt; Single.just(messageHandler.handleMessage(m))) .subscribeOn(scheduler)) .subscribeWith(new SimpleSubscriber\u0026lt;\u0026gt;(threads, 1)); } private Scheduler threadPoolScheduler(int poolSize, int queueSize) { return Schedulers.from(new ThreadPoolExecutor( poolSize, poolSize, 0L, TimeUnit.SECONDS, new LinkedBlockingDeque\u0026lt;\u0026gt;(queueSize), new WaitForCapacityPolicy() )); } } Within the threadPoolScheduler() method, we add our WaitForCapacityPolicy() to re-queue rejected tasks.\nThe log output of the test now looks complete:\n1580601895022 Test worker: subscribed 1580601895039 pool-3-thread-1: MessageBatch{messages=[1-1, 1-2, 1-3]} 1580601895055 pool-3-thread-1: MessageBatch{messages=[2-1, 2-2, 2-3]} 1580601895056 pool-3-thread-1: MessageBatch{messages=[3-1, 3-2, 3-3]} 1580601895057 pool-3-thread-1: MessageBatch{messages=[4-1, 4-2, 4-3]} 1580601895058 pool-3-thread-1: MessageBatch{messages=[5-1, 5-2, 5-3]} 1580601895558 pool-1-thread-2: processed message 1-2 1580601895558 pool-1-thread-1: processed message 1-1 1580601896059 pool-1-thread-2: processed message 1-3 1580601896059 pool-1-thread-1: processed message 2-1 1580601896059 pool-3-thread-1: MessageBatch{messages=[6-1, 6-2, 6-3]} 1580601896560 pool-1-thread-2: processed message 2-2 1580601896560 pool-1-thread-1: processed message 2-3 ... 
1580601901565 pool-1-thread-2: processed message 9-1 1580601902066 pool-1-thread-2: processed message 10-1 1580601902066 pool-1-thread-1: processed message 9-3 1580601902567 pool-1-thread-2: processed message 10-2 1580601902567 pool-1-thread-1: processed message 10-3 1580601902567 pool-1-thread-1: completed Looking at the timestamps, we see that two messages are always processed at approximately the same time, followed by a pause of 500 ms. That is because our TestMessageHandler is waiting for 500 ms for each message. Also, the messages are processed by two threads in the same thread pool pool-1, as we wanted.\nAlso, we can see that the message batches are fetched in a single thread of a different thread pool pool-3. This is our coordinator thread.\nAll of our requirements are fulfilled. Mission accomplished.\nConclusion The conclusion I draw from the experience of implementing a reactive batch processor is that the reactive programming model is very hard to grasp in the beginning and you only come to admire its elegance once you have overcome the learning curve. And the reactive stream shown in this example is a very simple one, yet it still took four iterations to get right!\nBlocking code within a reactive stream has a high potential of introducing errors with the threading model. In my opinion, however, this doesn\u0026rsquo;t mean that every single line of code should be reactive. It\u0026rsquo;s much easier to understand (and thus maintain) blocking code. 
We should check that everything is being processed on the expected threads, though, by looking at log output or even better, by creating unit tests.\nFeel free to play around with the code examples on GitHub.\n","date":"February 3, 2020","image":"https://reflectoring.io/images/stock/0058-motorway-junction-1200x628-branded_hua289a663b32b971eb8621dc44c8dafac_322530_650x0_resize_q90_box.jpg","permalink":"/rxjava-reactive-batch-processing/","title":"Reactive Multi-Threading with RxJava - Pitfalls and Solutions"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you want to raise your awareness of your habits you are looking for a framework to help in starting or stopping habits you enjoy real-world stories about habits  Overview {% include book-link.html book=\u0026ldquo;atomic-habits\u0026rdquo; %} by James Clear claims to provide \u0026ldquo;An Easy and Proven Way to Build Good Habits and Break Bad Ones\u0026rdquo;, and it does that.\nThis doesn\u0026rsquo;t mean that you will magically transform into your best self after reading the book, but the book actually provides a framework for building and breaking habits and the explanations behind it. If you really want to change your habits, you can do it with this framework.\nClear builds on top of the \u0026ldquo;habit loop\u0026rdquo; from The Power of Habit and provides rules on how to address each step in the loop to make a habit stick or unstick.\nHe also connects our habits to our identity, reminding me a lot of the concepts I read in The 7 Habits of Highly Effective People.\nLikes and Dislikes The book is written in easy-to-read, conversational language and structured into short chapters, making it a quick and entertaining read.\nThe rules for habit-making are always accompanied by one or more real-world stories. 
This gives credibility to the rules and provides some satisfying \u0026ldquo;aha\u0026rdquo;-moments.\nThe only thing that bothered me a little is that the \u0026ldquo;Atomic\u0026rdquo; part of the title was only really relevant in the first couple of chapters.\nKey Takeaways Here are my notes on the book. I added some comments in italics.\nThe Surprising Power of Atomic Habits  making tiny improvements leads to an aggregation of marginal gains small improvements usually pay off much later, just like when you change the course of an airplane \u0026ldquo;Success is the product of daily habits - not once-in-a-lifetime transformations.\u0026rdquo; habits need to persist long enough for the tiny improvements to accumulate to something noteworthy focus on a system to make changes instead of on the goals you want to achieve  How Habits Shape Your Identity (and Vice Versa)  change must be driven by our identity, not by goals pride is a prime motivator for changing established habits to build habits, you need to know who you want to be decide who you want to be, then prove it to yourself with small wins every day  to decide who you want to be, you can create a \u0026ldquo;personal mission statement\u0026rdquo; that Stephen Covey writes about in The 7 Habits of Highly Effective People    How To Build Better Habits  the habit loop consists of:  a cue (noticing the reward) a craving (wanting the reward) a response (obtaining the reward; this is what Charles Duhigg calls the \u0026ldquo;routine\u0026rdquo; in The Power of Habit) a reward that satisfies us funny and totally irrelevant analogy to software development: the habit loop is like an endless loop in code - a thread that continually scans for cues to trigger a habit routine   to establish a new habit, you must address each part of the habit loop:  cue: make it obvious craving: make it attractive routine: make it easy reward: make it satisfying    The Man Who Didn\u0026rsquo;t Look Right  you don\u0026rsquo;t 
need to be aware of a cue to start a habit - it can be automatic (like a paramedic telling a man \u0026ldquo;he doesn\u0026rsquo;t look right\u0026rdquo; and needs medical attention without being able to tell why) you can only change a habit, however, if you\u0026rsquo;re aware of it pointing at something and calling it out can make an unconscious action conscious a habit scorecard (a list of habits marked as good, bad, or neutral) helps to make habits conscious  The Best Way to Start a New Habit  having a plan greatly increases follow-through plan a time and location to start a habit time and location are the most common cues for habits habit stacking is the process of chaining habits so that one habit is the cue to another  Motivation is Overrated  the environment (when and where) is more important for forming habits than intrinsic motivation visual cues are more powerful than others - they are more obvious to us you can design your environment - put visual cues around you to trigger habits if you want to separate habits, associate a certain environment with one habit only (for instance to separate work from personal time) it\u0026rsquo;s easier to build a new habit in a new environment (I\u0026rsquo;m currently experiencing this first hand, after having moved to Australia)  The Secret to Self-Control  a radical change in environment (like returning home from war) can re-set habits completely (this includes hard habits like a heroin addiction) people with high \u0026ldquo;self-control\u0026rdquo; usually structure their life to make things easy for them, so they don\u0026rsquo;t actually need self-control the best way to get rid of a bad habit is to reduce exposure to the cue of that habit  How to Make Habits Irresistible  there are supernormal stimuli that make things irresistible, like certain combinations of fat and sugar \u0026ldquo;We have the brains of our ancestors but temptations they never had to face\u0026rdquo; \u0026ldquo;Desire is the 
engine that drives behavior\u0026rdquo; - once a habit is formed, dopamine is released on the cue, not on the reward! to make a behavior attractive, combine it with something tempting - Clear calls this temptation bundling  The Role of Family and Friends  habits that are \u0026ldquo;normal\u0026rdquo; in our environment are the most attractive - \u0026ldquo;One of the deepest human desires is to belong\u0026rdquo; to support habit change, join a culture where your desired behavior is normal and where you already have something in common with the others  How to Find and Fix the Causes of Bad Habits  our behavior is controlled by predictions - each cue produces a prediction a prediction leads to a feeling - a craving making or breaking a habit is often just a mind shift - tell yourself about the benefits or drawbacks often enough and you will change create a (de)-motivation ritual to support this  Walk Slowly But Never Backward  thinking about the best way to do something is \u0026ldquo;motion\u0026rdquo; but not \u0026ldquo;action\u0026rdquo; we often think we\u0026rsquo;re making progress while in motion, even though we\u0026rsquo;re not taking action you make a habit easy by repetition - not by preparing, planning, or other ways of procrastination habits are built by repetition - not by time  The Law of Least Effort  we are motivated to do whatever takes the least effort  Daniel Kahneman, in \u0026ldquo;Thinking, Fast and Slow\u0026rdquo; (review pending, I\u0026rsquo;m currently reading it), says our mind has two modes: one for doing things automatically and one for doing things consciously - doing things consciously costs more effort so we always take the automatic (habitual) route, if possible   design your environment to make habits easier the secret of Japanese companies' \u0026ldquo;lean production\u0026rdquo; was to reduce obstacles from the production process wherever possible - we can apply that to our habits set up your environment so that good habits are easy and bad habits are hard  How to 
Stop Procrastinating  our habits lead us through many decisive moments every day - work out or not? TV or not? when you start a new habit, make it less than 2 minutes - this can form a gateway habit that triggers a routine that may take more effort the secret is to learn to show up - the actual routine will form eventually  How to Make Good Habits Inevitable and Bad Habits Impossible  automation using technology or other humans can work to establish and fight habits - Clear calls this a commitment device:  a shutdown timer to disable internet access after 10pm to get to bed earlier boxing half of your dinner before eating to reduce the amount you eat   strategic one-time decisions can shape future habits - like removing the TV from the bedroom  The Cardinal Rule of Behavior Change  \u0026ldquo;What is immediately rewarded is repeated. What is immediately punished is avoided.\u0026rdquo; we prioritize the present over the future create an immediate-return environment for your habits to make them satisfying \u0026ldquo;The costs of your good habits are in the present. 
The costs of your bad habits are in the future.\u0026rdquo;  How to Stick With Good Habits Every Day  use a habit tracker to make a habit obvious, attractive, and satisfying \u0026ldquo;Don\u0026rsquo;t break the chain\u0026rdquo; is a powerful mantra - often attributed to Jerry Seinfeld, who writes one joke every day, even if it\u0026rsquo;s a bad one habit tracking keeps you focused on the process, not the result if you slip, start a new streak the next day never slip twice in a row - slipping twice leads to an \u0026ldquo;all-or-nothing\u0026rdquo; mindset which doesn\u0026rsquo;t help in building a habit we optimize for what we measure - so make sure to measure the right thing  How an Accountability Partner Can Change Everything  an accountability partner adds an immediate cost to slipping a habit a habit contract signed by you and your partner can give extra motivation  The Truth About Talent (When Genes Matter and When They Don\u0026rsquo;t)  habits are easier to establish when they play into natural abilities and inclinations genes matter, but it\u0026rsquo;s more productive to focus on your own fulfillment  The Goldilocks Rule: How to Stay Motivated in Life and Work  we experience peak motivation on tasks that are just hard enough to challenge us variable rewards, e.g. rewards in 50% of cases and no reward in the other 50%, increase the dopamine rush for each reward (this explains the ridiculous amount of time I have spent in loot-based computer games like Diablo and World of Warcraft) \u0026ldquo;Professionals stick to the schedule. 
Amateurs let it get in the way.\u0026rdquo; you have to embrace boredom to effectively stick to a habit (also see Deep Work, in which Cal Newport has dedicated a whole chapter to \u0026ldquo;Embracing Boredom\u0026rdquo;)  The Downside of Creating Good Habits  improvement usually stagnates with time - you need deliberate practice to overcome this establish a system of regular review and reflection don\u0026rsquo;t let your habits lock you into an identity you don\u0026rsquo;t want  Conclusion {% include book-link.html book=\u0026ldquo;atomic-habits\u0026rdquo; %} is an entertaining book that provided some \u0026ldquo;aha\u0026rdquo;-moments for me. If you are interested in habits, this is a definite reading recommendation.\nThe rules in this book are highly actionable, more so than in other books I\u0026rsquo;ve read about habits. But don\u0026rsquo;t let that lull you into blissful idleness - the changes won\u0026rsquo;t come just by reading the book.\n","date":"January 18, 2020","image":"https://reflectoring.io/images/covers/atomic-habits-teaser_hua3839ef5528827622e05fc9a290daeb2_553159_650x0_resize_q90_box.jpg","permalink":"/book-review-atomic-habits/","title":"Book Review: Atomic Habits"},{"categories":["Software Craft"],"contents":"Self-contained systems (SCS) are systems that have no tight coupling to other systems. They can be developed, deployed and operated on their own. With continuous delivery mechanisms, we can easily deploy an application. But what if our SCS contains a database and we want to deliver a change to its configuration?\nThis article shows a way of implementing continuous delivery for database configuration using Kubernetes and Flyway.\n Example Code This article is accompanied by a working code example on GitHub. The Problem When developing a self-contained system (SCS), we may want to have the whole database inside the SCS to avoid tight coupling to other systems. If we want to deliver an SCS, then our goal is to start the SCS with one click. 
To achieve this goal, we have to:\n set up an empty database, configure the database for access by the application, deploy the application and connect the application to the database.  We\u0026rsquo;ll want to automate all these steps to create a smooth delivery pipeline to the customer. Technologies like Docker make it easy to set up an empty database for step 1. Also, we can easily deploy an application as a Docker container for step 3. But how can we automatically configure the database and how can we control different versions for this configuration?\nUsing the declarative approach of Kubernetes as a container orchestration system, we can easily build up the automated process to deliver the SCS. We can just declare the desired state of components like a database or application and Kubernetes takes care of the rest. But it\u0026rsquo;s not possible to declare the desired database configuration with its schemas, users, privileges and so on, because it is the internal configuration of the database.\nImagine we have a continuous delivery pipeline for our SCS. This pipeline does the three steps from the list above. When we start the pipeline for the first time, it will create and configure the database and deploy the application. If we later make a change in the business logic, the pipeline will run these steps and Kubernetes will not detect any changes on the database, but on the business logic application. In this case, Kubernetes is responsible for updating our application, and this works fine.\nBut we also want to be able to make changes to the database configuration (e.g. to change some permissions) and have that change deploy automatically.\nLet\u0026rsquo;s find out how to do that with Kubernetes and Flyway.\nGeneral Approach It\u0026rsquo;s possible to configure a database with SQL scripts. This means we can code the configuration of the database and use this code in the delivery process. 
Thanks to tools like Flyway or Liquibase we can use version control systems for these SQL configuration scripts.\nNote that we\u0026rsquo;re not talking about SQL scripts that create the tables for the data model, but about scripts that change the configuration of the database itself.\nProject Structure The example project is an SCS with a PostgreSQL Database and a very simple Spring Boot application called \u0026ldquo;Post Service\u0026rdquo;. The project consists of two parts:\n a k8s folder with Kubernetes manifest files a src folder with source code for the Post Service  Kubernetes Objects Let\u0026rsquo;s have a look at the Kubernetes configuration.\nBase Configuration In the folder k8s/base we create a ConfigMap and Secret with the connection properties for the Post Service application.\nA ConfigMap is a Kubernetes object where we can put our external configuration so that Kubernetes can apply this configuration to our system at runtime:\n# base/configmap.yml apiVersion: v1 kind: ConfigMap metadata: name: post-configmap namespace: migration data: spring.datasource.username: post_service spring.datasource.driver-class-name: org.postgresql.Driver spring.datasource.url: jdbc:postgresql://postgres:5432/post_service A Secret is a Kubernetes Object for storing sensitive data. Similar to a ConfigMap, a Secret can also be read at runtime:\n# base/secret.yml apiVersion: v1 kind: Secret metadata: name: post-secret namespace: migration type: Opaque data: spring.datasource.password: bXlfc2VydmljZV9wYXNzd29yZA== The data from configmap.yml and the password from secret.yml are the data for connecting to the database with the user post_service. After deploying the Spring Boot application, our application and database should be connected and ready to use.\nDatabase Configuration With the scripts from k8s/postgres, we create a PostgreSQL Database as a StatefulSet with an admin user. 
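A side note on the Secret manifests above: the values under data (like bXlfc2VydmljZV9wYXNzd29yZA==) are base64-encoded, not encrypted. A small sketch using Java's standard Base64 API to produce and verify such values:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SecretEncoding {

    public static void main(String[] args) {
        // Encode a plain-text password the way it must appear
        // in the 'data' section of a Kubernetes Secret:
        String encoded = Base64.getEncoder()
                .encodeToString("my_service_password".getBytes(StandardCharsets.UTF_8));
        System.out.println(encoded); // bXlfc2VydmljZV9wYXNzd29yZA==

        // Decode a value from a manifest to verify what it holds:
        String decoded = new String(
                Base64.getDecoder().decode("bXlzZWNyZXQ="), StandardCharsets.UTF_8);
        System.out.println(decoded); // mysecret
    }
}
```

Since base64 is trivially reversible, these values should be treated as plain-text secrets when deciding what to commit to a repository.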
A StatefulSet is a Kubernetes Object that defines components with unique, persistent identities and with persistent disk storage. A database is a prime use case for a StatefulSet.\nThe password for the user is read from another secret:\n# postgres/postgres-secret.yml apiVersion: v1 kind: Secret metadata: name: postgres namespace: migration type: Opaque data: password: bXlzZWNyZXQ= We need this password later to run the SQL scripts as the admin user.\nDatabase Scripts Now let\u0026rsquo;s look at the most interesting part of the Kubernetes files. Our goal is to automate the creation of the schema, users, privileges and other configuration. We can create all of this with SQL scripts.\nFortunately, we can use Flyway as a Docker container to execute the SQL configuration scripts. Since we are now in the world of containers we can use the official Flyway Docker container.\nFirst, we create a ConfigMap with the SQL scripts:\n# migration/migration_configmap.yml apiVersion: v1 kind: ConfigMap metadata: name: postgres-configmap namespace: migration data: V1_1__create_user.sql: | CREATE USER ${username} WITH LOGIN NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT NOREPLICATION CONNECTION LIMIT -1 ENCRYPTED PASSWORD \u0026#39;${password}\u0026#39;; V1_2__create_db.sql: | CREATE DATABASE ${username} WITH OWNER = ${username} ENCODING = \u0026#39;UTF8\u0026#39; CONNECTION LIMIT = -1; V1_3__grant_privileges.sql: | GRANT ALL ON DATABASE ${username} TO ${username} To be able to use this ConfigMap, we can mount it as a volume so that the Flyway Docker container can see the scripts as files.\nDatabase Migration Job Next, we create a Kubernetes Job. A Job is a Kubernetes Object that includes a Docker container. 
It runs only once and terminates immediately after the container is finished, which is exactly what we need for the execution of the SQL configuration scripts:\n# migration/migration-job.yml apiVersion: batch/v1 kind: Job metadata: name: migration-job namespace: migration spec: template: spec: containers: - name: flyway image: boxfuse/flyway:5.2.4 args: - info - repair - migrate - info env: - name: FLYWAY_URL value: jdbc:postgresql://postgres:5432/postgres - name: FLYWAY_USER value: admin - name: FLYWAY_PASSWORD valueFrom: secretKeyRef: name: postgres key: password - name: FLYWAY_PLACEHOLDER_REPLACEMENT value: \u0026#34;true\u0026#34; - name: FLYWAY_PLACEHOLDERS_USERNAME valueFrom: configMapKeyRef: name: post-configmap key: spring.datasource.username - name: FLYWAY_PLACEHOLDERS_PASSWORD valueFrom: secretKeyRef: name: post-secret key: spring.datasource.password volumeMounts: - mountPath: /flyway/sql name: sql volumes: - name: sql configMap: name: postgres-configmap restartPolicy: Never The Job wraps the Flyway Docker container and runs it on startup. The postgres-configmap is mounted into the Job. This means that Flyway will find the scripts on the filesystem and start the migration. After this migration, the schema and the user will be created in the database.\nYou might have noticed that there are some placeholders in the SQL scripts. In the Job, we expose values from the post-configmap ConfigMap and the post-secret Secret we have created above as environment variables. We then make use of Flyway\u0026rsquo;s feature to replace placeholders with values from environment variables. We activate this by setting the FLYWAY_PLACEHOLDER_REPLACEMENT environment variable to true.\nIn the Job, the postgres-configmap is mounted to the path /flyway/sql, which is the default path for SQL migrations with Flyway. 
The environment variables FLYWAY_USER and FLYWAY_PASSWORD define the user executing the scripts, which is the admin user.\nSubsequent Database Migrations After the initial database scripts have been executed, the database, user, and privileges are created and are under Flyway\u0026rsquo;s control. If we want to change our database configuration in the future, we just have to add a new SQL script to the postgres-configmap and start the Job again. Ideally, our CD pipeline does this automatically. We then just have to push changes to the postgres-configmap to our code repository.\nLet\u0026rsquo;s assume that the Job was triggered again by the pipeline. The database is already created and configured. There are no changes in the postgres-configmap containing our SQL scripts. The Flyway migration will run anyway, but Flyway will not detect any changes, and thus do nothing.\nFlyway will only execute scripts that have not been executed on the database before, so we can run the Job as often as we want. Flyway will only do something if there are new scripts in the postgres-configmap.\nDeploying the Application Now the database is prepared to be used from our Spring Boot application. To access the database from the Spring Boot application, it has to read the data source configuration from post-configmap and post-secret. 
Luckily, there is a Spring Boot starter for this.\nTo use the starter, we need to add these dependencies to our application (Gradle notation):\nimplementation \u0026#39;org.springframework.cloud:spring-cloud-starter-kubernetes-config:1.0.4.RELEASE\u0026#39; implementation \u0026#39;org.springframework.cloud:spring-cloud-starter-kubernetes:1.0.4.RELEASE\u0026#39; Then we have to tell Spring Boot to load the properties from the post-configmap and post-secret Kubernetes objects in the configuration file bootstrap.yml:\n# bootstrap.yml spring: application: name: post-service cloud: kubernetes: config: name: post-configmap secrets: name: post-secret Don\u0026rsquo;t confuse bootstrap.yml with application.yml!\nThat\u0026rsquo;s it. The values\n spring.datasource.username, spring.datasource.driver-class-name , spring.datasource.url, and spring.datasource.password  are read from post-configmap and post-secret and Spring Boot can use them. The same values are used by the Flyway Job.\nBy the way, this Spring Boot application starts its own database migration with Flyway, but this time with Spring Boot support instead of Kubernetes support. The application should only create and modify tables, though, and leave the lower-level configuration to the Kubernetes job.\nNotes  I used minikube to run the SCS locally. For more security don\u0026rsquo;t put unencrypted secrets into the files in the repository!  Conclusion When delivering a self-contained system, we can take advantage of Flyway and Kubernetes to automate database creation and configuration. Flyway enables us to implement a continuous migration of SQL configuration scripts within the delivery process. We can automate the database configuration for the case when we want to start an SCS from scratch and use the same mechanism for updating the configuration in an already existing database. 
This is helpful when we have different environments with different database states.\n","date":"January 15, 2020","image":"https://reflectoring.io/images/stock/0018-cogs-1200x628-branded_huddc0bdf9d6d0f4fdfef3c3a64a742934_149789_650x0_resize_q90_box.jpg","permalink":"/flyway-kubernetes-continuous-database-configuration/","title":"Continuous Database Configuration with Flyway and Kubernetes"},{"categories":["Meta"],"contents":"There it is. 2019 is gone. 2020 is here. I feel old.\nAs I did last year, I take this time to reflect on my year 2019 and to set some plans for 2020. This is more for myself than anyone else, but you might enjoy some of the insights into the blogging and habit building I did in 2019.\nThe Blog Let\u0026rsquo;s start with a review of this blog in 2019. The blog started its life in 2018, so 2019 has been its second year with any noteworthy traffic. And it has been a great year in all regards.\nBlog Posts I have published 27 blog posts in 2019. That\u0026rsquo;s a little more than last year:\nThe bar chart shows that I\u0026rsquo;ve been pretty consistent in writing. This is thanks to my habit of writing a bit every day.\nTraffic I had almost 600k users (as Google Analytics counts them) reading the blog over 2019:\nThose users added up to more than 1 million page views. That\u0026rsquo;s more than three times the traffic of the previous year.\nThis increase was thanks to concentrating my writing efforts on comprehensive tutorials around Spring Boot that go deep into a topic. Also, my articles not only explain how to do something, but why. The most successful articles are all of this type:\nMailing List I started the mailing list in the middle of 2018 with Drip, then migrated to MailChimp a couple of months later because Drip was too expensive ($40 a month is just too much when you don\u0026rsquo;t make any money with your blog).\nThis year, I migrated the mailing list again, because MailChimp fucked their customers. 
They changed their prices without announcement, charging not only for the number of subscribers but also for subscribers that had unsubscribed from my list. All this while increasing the prices overall. This is not acceptable.\nThis year, I ended up with MailerLite. They have solid pricing tiers that match the growth of my mailing list, so all is good. I currently have ~1,250 subscribers and pay $15 a month, which is fine.\nFor some months while working on my book, I gave it away for free to subscribers of the mailing list, which had quite an effect on the subscription rate. I stopped that when the book was finished, though, because it was too much work to give away for free forever. I\u0026rsquo;ll have to think of another welcome present that provides enough value to make people want to subscribe.\nSince July (when I migrated to MailerLite), I sent 10,550 emails in 10 newsletters:\nThat\u0026rsquo;s pretty much one mailing every two weeks, which I had set as a goal.\nThe mails each contain at least one link to new content on the blog and one more interesting tidbit - either another link to new content or a link to something else that I think the readers might find interesting.\nAccording to the stats, the mails are received well:\nThese numbers are much higher than my research about mailing lists would have suggested! Thanks for reading those mails, everybody! Let me know what I should change (if anything).\nMaking Money with the Blog I haven\u0026rsquo;t been making any money with the blog until recently.\nAs soon as my book was half-finished, I advertised it in the sidebar of the blog. People started buying it even though it wasn\u0026rsquo;t finished yet. Thanks to those readers! That added to my motivation to finish it because I didn\u0026rsquo;t want to disappoint readers that have paid real money for the book. 
That\u0026rsquo;s a good forcing function and a great benefit of self-publishing.\nI\u0026rsquo;ve played around with other ways to monetize the blog. Amazon affiliate links for my book reviews don\u0026rsquo;t work because no one reads my book reviews (which is OK, because I write those reviews more for myself than for anyone else). And I don\u0026rsquo;t want to plaster the page with affiliate ads just so that someone clicks on it.\nIn October, I was approached by Carbon Ads and asked if I would like to serve their ads. They serve developer-specific ads only, in a very subtle way. I could even filter out ads for MailChimp to get revenge for their pricing stunt.\nCurrently, I\u0026rsquo;m earning about $100 per month with those ads, which is great. From that, I grudgingly pay $9 each month to Disqus to remove their filthy ads from the site. I kept getting ads like \u0026ldquo;Her Belly Keeps Growing, Doctor Sees Scan And Calls Authorities\u0026rdquo; or \u0026ldquo;She dips a tea bag in a sink full of dirty dishes\u0026hellip; You will too when you see why!\u0026rdquo;. I don\u0026rsquo;t know what I did to deserve those ads, but I want to keep them as far away from my audience as possible, so I pay Disqus for it. I wonder if that\u0026rsquo;s their plan all along\u0026hellip;.\nI\u0026rsquo;d like to replace Disqus with another comment service, but don\u0026rsquo;t have a plan for it, yet.\nNew Layout \u0026amp; Design In August I decided that I was bored by the visual theme of the blog. I used the Minimal Mistakes Jekyll theme, which I had originally chosen for exactly its plainness. But now I wanted a little more excitement.\nI finally chose to base my new design on the webmag HTML theme, which I had to transfer into a Jekyll theme first. It was less work than I would have expected and I learned a lot about Jekyll.\nAs a person who is notoriously bad in CSS and design in general, I\u0026rsquo;m proud of the result.\nI still take feedback, though. 
All you designers out there, let me know what I did wrong.\nThe Book In April 2019, I self-published my (then unfinished) book on Leanpub. I started with a very low price (I think it was $4.99) and added a Dollar with each chapter that I published. To my surprise, people actually started buying it. This was very rewarding and gave me the motivation to finish it so as not to disappoint the readers.\nI finally finished the book in October, just in time for my journey to the other end of the world.\nThe book has sold more than 750 times since April and, while the book was unfinished, I gave away about the same number of copies for free as an incentive to join my mailing list. Leanpub gives me 80% of the customer price, which is quite a lot compared to the usual 10 to 20 percent old-school publishers pay.\nIn total, I made about $5.500 with the book in a little more than half a year\u0026rsquo;s time, not counting the sales from the print version, which is published in the old-school way by Packt. And this old-school way means that I get a royalty statement each quarter for the second-to-last quarter. This means I\u0026rsquo;ll know how the book sales are going half a year after the fact. A great way for an author to know if they\u0026rsquo;re on the right track\u0026hellip;. I\u0026rsquo;m glad I took the self-publishing route.\nAll in all, the book was - and still is - a very rewarding experience. Thanks to all the readers that bought the book and especially to those that have provided feedback to make it better! I\u0026rsquo;d like to write another one at some point, but I have to find another topic that drives me, yet.\nHabits 2019 has been my habit-building year. I\u0026rsquo;ve read a lot about habits: The Power of Habit, The 7 Habits of Highly Effective People, Deep Work, and Atomic Habits (review pending). 
I also read Everybody Writes, which teaches about writing habits.\nAfter having read all these books, and having successfully established some habits, I\u0026rsquo;m convinced that every one of us can do pretty much everything if we build the habits to support it.\nLet\u0026rsquo;s look at the two habits I have been very consistent about during the last year.\nWrite a Bit Every Day I have a full-time developer job to pay the bills. The writing I\u0026rsquo;m doing on my blog and in my book is a hobby. I do it because I like the feeling of having created something helpful to others. Of course, it\u0026rsquo;s also fun to earn some money with it, but this hasn\u0026rsquo;t been the case at the start of last year.\nI also have a wife and two kids that I like to spend some time with now and then.\nYou can imagine that it\u0026rsquo;s hard to get any writing done in these circumstances.\nYou can also imagine that my reward for writing (the feeling of having created something helpful) only sets in after I have actually finished writing something. This is a vicious cycle because if I don\u0026rsquo;t get anything done, I\u0026rsquo;m not motivated to write more, and vice versa.\nSo, I concluded that I need to write a little every day to create a feeling of progress that gets me going. And this works best if it\u0026rsquo;s at a fixed time and place.\nIn the evenings after work, I\u0026rsquo;m too mentally exhausted to do any good writing. I can get something done, but it\u0026rsquo;s hard and I get distracted very easily. This was hard to keep up, so I didn\u0026rsquo;t regularly do it.\nThis left the mornings before work. I can\u0026rsquo;t concentrate, though, with my family up and about in the house, so I decided to rise at 5.30 each morning to get some \u0026ldquo;deep work\u0026rdquo; time before my family gets up. 
This was hard in the beginning, but my sleep cycle adjusted automatically after a couple of weeks and now I usually go to bed at 10 pm most days so I still get the 7 hours of sleep I need.\nThese 1.5 hours each morning make all the difference! I\u0026rsquo;m still a painfully slow writer in my opinion (I guess I get 300-500 words done in a morning, and that\u0026rsquo;s not counting researching and example code). But I get a bit done every day. I feel progress. I get regular rewards for finishing a blog post, or a chapter. This habit allowed me to work on my book and my blog in parallel: a chapter in the book in one week, a new blog post the next week, and so on.\nAfter a year of rising early to write (or sometimes do other productive things) each morning - with only the odd day in between where I didn\u0026rsquo;t do it - this habit is so ingrained that I experience a feeling of loss when I skip my morning session. This feeling is enough motivation to get me up early the next morning.\nMeaningful Reading Reading nonfiction books is very inspiring. I get new ideas for my writing, for my work as a software developer and my life in general.\nI\u0026rsquo;ve read a nonfiction book every once in a while before, but I wanted to read more, get done faster, and better retain the material of the books I\u0026rsquo;ve read.\nSo, another habit I committed to last year is what I call \u0026ldquo;meaningful reading\u0026rdquo;. I maintain a list of books I\u0026rsquo;d like to read at some point. I add to this list when I learn about a book in a talk, podcast, or another book. From that list, I make sure that I have a stack of books ready to read on my desk. 
Each time I\u0026rsquo;m done with a book I immediately choose the next one from that stack.\nI take notes while reading to make the topic stick in my useless memory and I use those notes to write up a summary of the book on my blog to make it stick even better (and to be able to look it up when my useless memory finally deserts me).\nAll book reviews together make up perhaps 1% of the blog\u0026rsquo;s traffic, so no one reads them. But that\u0026rsquo;s OK because I do it for myself. It\u0026rsquo;s all about the process of taking notes and then transforming those notes into a summary. The process is what keeps me going.\nAnd so does the paper notebook I take my notes in. In a weird way, this notebook motivates me to read more because my completionist self wants me to fill it up with my illegible notes.\nSo, the idea of taking notes and writing up a summary alone gives me the motivation to start and finish a book. But still, I need a regular time and place to read.\nSo I started reading in the lunch break at work every day for about 30 minutes. Since I moved to Australia and traded my commute by car for a commute by bus, I\u0026rsquo;m reading each morning on the bus, and sometimes in the afternoon on the way back, but I\u0026rsquo;m usually too spent after a day\u0026rsquo;s worth of knowledge work. Taking notes on the bus is hard, by the way, because all Australian bus drivers seem to brake hard at the last possible moment just to spite me.\nI got through 11 nonfiction books this way over the last year. I\u0026rsquo;m reading more than before, and I retain more of the material in memory than before. It\u0026rsquo;s a big win!\nA New Job at the End of the World As if all of the above wasn\u0026rsquo;t enough change for 2019, I accepted a job offer with Atlassian in Sydney. So, my family and I moved from Germany to Australia in October and everyone is starting a new life here.\nSo far, everything works out nicely (except for the bush fires). The job is great. 
My family seems to get along. I have more time for my family since I\u0026rsquo;m usually home earlier.\nWe don\u0026rsquo;t know how long we\u0026rsquo;re going to stay Down Under. For now, we\u0026rsquo;re here to stay.\nPlans For 2020 Let\u0026rsquo;s look at the New Year\u0026rsquo;s resolutions for 2020. Most importantly, I want to stick to the habits I described above. But there\u0026rsquo;s more I want to achieve this year:\n I want to read (at least) 15 nonfiction books from cover to cover I want to prepare a fun talk connecting psychology and habits with software development I want to start (and perhaps even finish?) another writing project (perhaps on the same topic?) I want to speak at 3 or more conferences or meetups I want to double the visitors to my blog by working together with other authors and editing their work I want to build a habit of working out to get my neglected body into shape I definitely need to take surfing lessons while I\u0026rsquo;m in Sydney!  So, what are your goals for the year? I found that sharing my goals with the world helps me to stick to my word.\nHave a great 2020!\n","date":"January 11, 2020","image":"https://reflectoring.io/images/stock/0057-sparkler-1200x628-branded_hu72d153006a6ae904228dca3e91d9e620_202896_650x0_resize_q90_box.jpg","permalink":"/review-2019/","title":"On Blogging, Writing, and Habits: This Was 2019"},{"categories":["Spring Boot"],"contents":"Spring provides a mighty tool for grouping configuration properties into so-called profiles, allowing us to activate a bunch of configurations with a single profile parameter. 
Spring Boot builds on top of that by allowing us to configure and activate profiles externally.\nProfiles are perfect for setting up our application for different environments, but they\u0026rsquo;re also tempting in other use cases.\nRead on to learn how profiles work, what use cases they support and in which cases we should rather not use them.\n Example Code This article is accompanied by a working code example on GitHub. What Do Profiles Control? Activating a certain profile can have a huge effect on a Spring Boot application, but under the hood, a profile can merely control two things:\n a profile may influence the application properties, and a profile may influence which beans are loaded into the application context.  Let\u0026rsquo;s look at how to do both.\nProfile-Specific Properties In Spring Boot, we can create a file named application.yml that contains configuration properties for our application (we can also use a file named application.properties, but I\u0026rsquo;ll only refer to the YAML version from now on).\nBy default, if an application.yml file is found in the root of the classpath, or next to the executable JAR, the properties in this file will be made available in the Spring Boot application context.\nUsing profiles, we can create an additional file application-foo.yml whose properties will only be loaded when the foo profile is active.\nLet\u0026rsquo;s look at an example. 
We have two YAML files:\n# application.yml helloMessage: \u0026#34;Hello!\u0026#34; # application-foo.yml helloMessage: \u0026#34;Hello Foo!\u0026#34; And we have a bean that takes the helloMessage property as a constructor argument:\n@Component class HelloBean { private static final Logger logger = ...; HelloBean(@Value(\u0026#34;${helloMessage}\u0026#34;) String helloMessage) { logger.info(helloMessage); } } Depending on whether the foo profile is active, HelloBean will print a different message to the logger.\nWe can also specify all profiles in a single YAML file called application.yml using the multi-document syntax:\nhelloMessage: \u0026#34;Hello!\u0026#34; --- spring: profiles: foo helloMessage: \u0026#34;Hello Foo!\u0026#34; By specifying the property spring.profiles in each section separated by --- we define the target profile for the properties in that section. If it\u0026rsquo;s missing, the properties belong to the default profile.\nI\u0026rsquo;m a fan of using separate files, however, because it makes it much easier to find properties for a certain profile and even to compare them between profiles. Even the reference manual says that the multi-document syntax can lead to unexpected behavior.\nProfile-Specific Beans With properties, we can already control many things like connection strings to databases or URLs to external systems that should have different values in different profiles.\nBut with profiles, we can also control which beans are loaded into Spring\u0026rsquo;s application context.\nLet\u0026rsquo;s look at an example:\n@Component @Profile(\u0026#34;foo\u0026#34;) class FooBean { private static final Logger logger = ...; @PostConstruct void postConstruct(){ logger.info(\u0026#34;loaded FooBean!\u0026#34;); } } The FooBean is automatically picked up by Spring Boot\u0026rsquo;s classpath scan because we used the @Component annotation. But we\u0026rsquo;ll only see the log output in the postConstruct() method if the foo profile is active. 
Otherwise, the bean will not be instantiated and not be added to the application context.\nIt works similarly with beans defined via @Bean in a @Configuration class:\n@Configuration class BaseConfiguration { private static final Logger logger = ...; @Bean @Profile(\u0026#34;bar\u0026#34;) BarBean barBean() { return new BarBean(); } } The factory method barBean() will only be called if the bar profile is active. If the profile is not active, there will be no BarBean instance available in the application context.\nUse Profile-Specific Beans Responsibly!  Adding certain beans to the application context for one profile, but not for another, can quickly add complexity to our application! We always have to pause and think if a bean is available in a particular profile or not, otherwise, this may cause NoSuchBeanDefinitionExceptions when other beans depend on it!  Most use cases can and should be implemented using profile-specific properties instead of profile-specific beans. This makes the configuration of our application easier to understand because everything specific to a profile is collected in a single application.yml file and we don't have to scan our codebase to find out which beans are actually loaded for which profile.  Read more about why you should avoid the @Profile annotation in this article.  How to Activate Profiles? Spring only acts on a profile if it\u0026rsquo;s activated. Let\u0026rsquo;s look at the different ways to activate a profile.\nThe Default Profile The default profile is always active. Spring Boot loads all properties in application.yml into the default profile. We could rename the configuration file to application-default.yml and it would work the same.\nOther profiles will always be evaluated on top of the default profile. This means that if a property is defined in the default profile, but not in the foo profile, the property value will be populated from the default profile. 
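As a quick sketch of this fallback behavior (the property names here are invented for illustration, not taken from the example project):

```yaml
# application.yml (default profile)
database-url: "localhost:1234"
log-level: "INFO"

# application-foo.yml (only loaded when the foo profile is active)
log-level: "DEBUG"
```

With the foo profile active, log-level resolves to DEBUG, while database-url falls back to localhost:1234 from the default profile.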
This is very handy for defining default values that are valid across all profiles.\nVia Environment Variable To activate other profiles than the default profile, we have to let Spring know which profiles we want to activate.\nThe first way to do this is via the environment variable SPRING_PROFILES_ACTIVE:\nexport SPRING_PROFILES_ACTIVE=foo,bar java -jar profiles-0.0.1-SNAPSHOT.jar This will activate the profiles foo and bar.\nVia Java System Property We can achieve the same using the Java system property spring.profiles.active:\njava -Dspring.profiles.active=foo -jar profiles-0.0.1-SNAPSHOT.jar If the system property is set, the environment variable SPRING_PROFILES_ACTIVE will be ignored.\nIt\u0026rsquo;s important to put the -D... before the -jar..., otherwise the system property won\u0026rsquo;t have an effect.\nProgrammatically We can also influence the profile of our application programmatically when starting the application:\n@SpringBootApplication public class ProfilesApplication { public static void main(String[] args) { SpringApplication application = new SpringApplication(ProfilesApplication.class); application.setAdditionalProfiles(\u0026#34;baz\u0026#34;); application.run(args); } } This will activate the baz profile in addition to all profiles that have been activated by either the environment variable or the system property.\nI can\u0026rsquo;t think of a good use case that justifies this, though. It\u0026rsquo;s always better to configure the application using external environment variables or system properties instead of baking it into the code.\nActivating a Profile in Tests with @ActiveProfiles In tests, using system properties or environment variables to activate a profile would be very awkward, especially if we have different tests that need to activate different profiles.\nThe Spring Test library gives us the @ActiveProfiles annotation as an alternative. 
We simply annotate our test and the Spring context used for this test will have the specified profiles activated:\n@SpringBootTest @ActiveProfiles({\u0026#34;foo\u0026#34;, \u0026#34;bar\u0026#34;}) class FooBarProfileTest { @Test void test() { // test something  } } It\u0026rsquo;s important to note that the @ActiveProfiles annotation will create a new application context for each combination of profiles that are encountered when running multiple tests. This means that the application context will not be re-used between tests with different profiles which will cause longer test times, depending on the size of the application.\nChecking Which Profiles are Active To check which profiles are active, we can simply have a look at the log output. Spring Boot logs the active profiles on each application start:\n... i.r.profiles.ProfilesApplication: The following profiles are active: foo We can also check which profiles are active programmatically:\n@Component class ProfileScannerBean { private static final Logger logger = ...; private Environment environment; ProfileScannerBean(Environment environment) { this.environment = environment; } @PostConstruct void postConstruct(){ String[] activeProfiles = environment.getActiveProfiles(); logger.info(\u0026#34;active profiles: {}\u0026#34;, Arrays.toString(activeProfiles)); } } We simply inject the Environment into a bean and call the getActiveProfiles() method to get all active profiles.\nWhen To Use Profiles? Now that we know how to use profiles let\u0026rsquo;s discuss in which cases we should use them.\nUsing a Profile for Each Environment The prime use case for profiles is configuring our application for one of multiple environments.\nLet\u0026rsquo;s discuss an example.\nThere might be a local environment that configures the application to run on the developer machine. This profile might configure a database url to point to localhost instead of to an external database. 
So we put the localhost URL into application-local.yml.\nThen, there might be a prod profile for the production environment. This profile uses a real database and so we set the database url to connect to the real database in application-prod.yml.\nI would advise putting an invalid value into the default profile (i.e. into application.yml) so that the application fails fast if we forget to override it in a profile-specific configuration. If we put a valid URL like test-db:1234 into the default profile we might get an ugly surprise when we forget to override it and the production environment unknowingly connects to the test database\u0026hellip;\nOur configuration files then might look like this:\n# application.yml database-url: \u0026#34;INVALID!\u0026#34; # application-local.yml database-url: \u0026#34;localhost:1234\u0026#34; # application-prod.yml database-url: \u0026#34;the-real-db:1234\u0026#34; For each environment, we now have a pre-configured set of properties that we can simply activate using one of the methods above.\nUsing a Profile for Tests Another sensible use case for profiles is creating a test profile to be used in Spring Boot integration tests. All we have to do to activate this profile in a test is to annotate the test class with @ActiveProfiles(\u0026quot;test\u0026quot;) and everything is set up for the test.\nUsing the same properties as above, our application-test.yml might look like this:\n# application-test.yml database-url: \u0026#34;jdbc:h2:mem:testDB\u0026#34; We have set the database url to point to an in-memory database that is used during tests.\nBasically, we have created an additional environment called test.\nIf we have a set of integration tests that interact with a test database, we might also want to create a separate integrationTest profile pointing to a different database:\n# application-integrationTest.yml database-url: \u0026#34;the-integration-db:1234\u0026#34; Don't Re-Use Environments for Tests!  
Don't re-use another environment (like `local`) for tests, even if the properties are the same. In this case, copy application-local.yml into application-test.yml and use the test profile. The properties will diverge at some point and we don't want to have to search which property values belong to which profile then!  When Not to Use Profiles? Profiles are powerful and we might be tempted to use them for other use cases than the ones described above. Here\u0026rsquo;s my take on why that is a bad idea more often than not.\nDon\u0026rsquo;t Use Profiles For \u0026ldquo;Application Modes\u0026rdquo; This is probably debatable because profiles seem to be a perfect solution to this, but I would argue not to use profiles to create different \u0026ldquo;modes\u0026rdquo; of an application.\nFor example, our application could have a master mode and a worker mode. We\u0026rsquo;d create a master and a worker profile and add different beans to the application context depending on these profiles:\n@Configuration @Profile(\u0026#34;master\u0026#34;) public class MasterConfiguration { // @Bean definitions needed for a master } @Configuration @Profile(\u0026#34;worker\u0026#34;) public class WorkerConfiguration { // @Bean definitions needed for a worker } In a different use case, our application might have a mock mode, to be used in tests, that mocks all outgoing HTTP calls instead of calling the real services. We\u0026rsquo;d have a mock profile that replaces our output ports with mocks:\n@Configuration class BaseConfiguration { @Bean @Profile(\u0026#34;mock\u0026#34;) OutputPort mockedOutputPort(){ return new MockedOutputPort(); } @Bean @Profile(\u0026#34;!mock\u0026#34;) OutputPort realOutputPort(){ return new RealOutputPort(); } } So, why do I consider this to be problematic?\nFirst, we have to look into the code to see which profiles are available and what they do. That is if we haven\u0026rsquo;t documented them outside of the code, but who does that, right? 
We see these @Profile annotations in the code and ask ourselves what this profile does exactly. Each time. Better to use a set of properties that are clearly documented in application.yml and can be overridden for a specific environment or a specific test.\nSecond, we have a combinatorial effect when using profiles for multiple application modes. Which combinations of modes are compatible? Does the application still work when we combine the worker profile with the mock profile? What happens if we activate the master and the worker profile at the same time? We\u0026rsquo;re more likely to understand the effect of these combinations if we\u0026rsquo;re looking at them at a property level instead of at a profile level. So, again, a set of central properties in application.yml for the same effect is easier to grasp.\nThe final reason why I find this problematic is that we\u0026rsquo;re creating a different application with each profile! Each \u0026ldquo;mode\u0026rdquo; of the application needs to be tested with each valid combination of other \u0026ldquo;modes\u0026rdquo;. It\u0026rsquo;s easy to forget to test a specific combination of modes if they\u0026rsquo;re not aligned with the environment profiles.\nDon’t Use Profiles For Feature Flags For similar reasons, I believe that we shouldn\u0026rsquo;t use profiles for feature flags.\nA feature flag is an on/off switch for a specific feature. We could model this as a profile enable-foo that controls the loading of a couple of beans.\nBut if we use feature flags for what they\u0026rsquo;re intended (i.e. to enable trunk-based development and speed up our deployments), we\u0026rsquo;re bound to collect a bunch of feature flags over time. If we create a profile for each flag, we\u0026rsquo;ll be drowning in the combinatorial hell I described in the previous section.\nAlso, profiles are too cumbersome to evaluate at runtime. 
To check if a feature is enabled or disabled, we\u0026rsquo;ll have to use if/else blocks more often than not, and calling environment.getActiveProfiles() for this check is awkward at best.\nBetter to configure a boolean property for each feature and inject it into our beans with @Value(\u0026quot;${feature.foo.enabled}\u0026quot;) boolean featureEnabled.\nFeature flags should be a simple property with a very narrow scope instead of an application-wide profile. Better yet, use a dedicated feature flag tool.\nDon’t Use Profiles That Align With Environments I\u0026rsquo;ve seen profiles like test-db (configures a database to be used in tests) and local-only (configures who knows what for local testing). These profiles clearly align with the test and the local environment, respectively. So, the database configuration in the test-db profile should move into the test profile, and the configuration in the local-only profile should move into the local profile.\nAs a general rule, profiles that contain the name of an environment in their name should be consolidated into a single profile with the name of that environment to reduce combinatorial effects. A few environment profiles are much easier to maintain than many profiles that we have to combine to create a valid environment configuration.\nDon’t Use spring.profiles.active In application.yml! As we\u0026rsquo;ve seen above, profiles are activated using the spring.profiles.active property. This is useful for external configuration via environment variable or similar.\nWe could also add the property spring.profiles.active to one of our application.yml files to activate a certain set of profiles by default.\nThis only works in the default application.yml file, however, and not in the profile-specific application-\u0026lt;profile\u0026gt;.yml files. 
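A minimal sketch of what does and doesn't work (the profile names foo and bar are invented for illustration):

```yaml
# application.yml - this works: foo is now active by default
spring:
  profiles:
    active: foo

# application-foo.yml - this has NO effect: a profile-specific
# file cannot activate further profiles
spring:
  profiles:
    active: bar
```

Spring Boot ignores the second declaration by design.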
Otherwise, in a profile, we could activate another set of profiles, which could activate another set of profiles, which could activate another set of profiles until no one knows where those profiles come from anymore. Spring Boot doesn\u0026rsquo;t support this profile-ception, and that\u0026rsquo;s a good thing!\nSo, using spring.profiles.active might lead to misunderstandings when developers expect spring.profiles.active to work in profile-specific YAML files.\nAlso, activating a profile in application.yml would make it active by default. If it\u0026rsquo;s active by default, why would we need a profile for it?\nConclusion Profiles are a great tool to provide configuration properties for different environments like local development and a test, staging, and production environment. We create a set of properties we need, apply different values to those properties depending on the environment and activate the profile via command-line parameter or environment variable. In my opinion, this is the best (and should be the only) use of profiles.\nAs soon as we use profiles for different things like feature flags or application modes, things might get hard to understand and hard to maintain very quickly.\nYou can find the example code from this article on GitHub.\nUse profiles for environments and think very hard before using a profile for something different.\n","date":"January 2, 2020","image":"https://reflectoring.io/images/stock/0056-colors-1200x628-branded_hu1f0f2ae699f8c150df4c1cc5b3061948_206706_650x0_resize_q90_box.jpg","permalink":"/spring-boot-profiles/","title":"One-Stop Guide to Profiles with Spring Boot"},{"categories":["Software Craft"],"contents":"I visited the YOW! conference in Sydney in December 2019. What better way to persist what I learned than in a blog post? 
This way I can look it up again when I have forgotten it (not \u0026ldquo;if\u0026rdquo;, but \u0026ldquo;when\u0026rdquo; - the half-life of things my brain can remember is usually not more than a day or two).\nAnd of course, you can read it up, too, even if you weren\u0026rsquo;t there. The things that modern technology can do\u0026hellip;\nSo read on to be inspired by the insights of some great speakers. There\u0026rsquo;s some interesting stuff in here!\nGene Kim on the Ideals of DevOps In his talk \u0026ldquo;The Unicorn Project and the 5 Ideals\u0026rdquo;, Gene Kim talked about the business value of DevOps and the ideals that get us there.\nHe started by making clear that the value of DevOps is much higher than we would expect. For instance, high-performing teams deploy their software 208x more often than low-performing teams. Also,\n they have a 106x faster lead time (time from starting work to having it deployed to production), they have a 7x better deployment success rate, they have a 2.6x better Mean Time to Restore in case of a failed deployment, they spend 23% more time on developing features, and they are 2.2x more likely to recommend their company as a great place to work.  Ideal 1: Locality and Simplicity The organizational environment must support DevOps. We can measure it with the \u0026ldquo;Lunch Factor\u0026rdquo;: how many people do we have to talk to over lunch to get things done? Reduce the lunch factor to a minimum.\n Have fewer teams that can deploy more often independently. Have an architecture that allows a fast turnaround time. Developers must be empowered to work independently, which includes running most integration tests without an integrated staging environment (note: done right, contract testing might be a means to achieve that). Developers should not have to understand a big codebase. Developers must have the authority to deploy code to production. 
Developers must not have to wait for other business units to get things done for them (like enterprise architecture review, database schemas, \u0026hellip;).  Ideal 2: Focus, Flow, and Joy Developers should solve the business problem and not spend most of their time on configuration and infrastructure tasks.\n Put the infrastructure into a platform. Reduce the lead time between code check-in and feedback to a minimum to connect the cause to the effect; ideally, know within seconds if a feature works or not. Do trunk-based development to reduce the PR review and merge overhead.  Ideal 3: Improvement of Daily Work We can only improve if we get feedback, so we should put as much feedback into our daily work as possible to get a little better each day.\n Toyota had a cord along the assembly line that is expected to be pulled by any worker that sees a problem (the Andon Cord). If it\u0026rsquo;s pulled, they have 55 seconds to fix the problem, otherwise, the assembly line stops. The cord is pulled something like 3500 times a day (talk about feedback). Big companies like Microsoft, Google, and Amazon only survived their technical debt because at some point they froze feature development. The build time of Nokia\u0026rsquo;s own operating system Symbian was 48 hours before they \u0026ldquo;pulled the cord\u0026rdquo; and went with another OS (didn\u0026rsquo;t help them in the long run, though\u0026hellip;). Always take 20% off the development cycle to fix (or avoid) tech debt. Enable greatness by having some engineers solely concentrate on dev productivity. Have a virtual Andon Cord to fix things right when they happen.  Ideal 4: Psychological Safety Only a psychologically safe environment allows for new ideas, innovation, and learning from errors. 
An environment where you get bullied for each production error does not support a fast DevOps cycle, because everyone wants to be 100% certain that it works, which results in big deployments with slow processes and more things that can go wrong per deployment.\nIdeal 5: Customer Focus Finally, focus on the things that bring value to the customers instead of building functional silos and long internal decision chains within the organization.\nReading Suggestions  The Unicorn Project by Gene Kim Team of Teams by General Stanley McChrystal Flow by Mihaly Csikszentmihalyi Transforming Nokia by Risto Siilasmaa Inspired by Marty Cagan  Aino Corry on Retrospective Antipatterns Antipatterns are as useful as patterns if we learn from them. They might first be perceived as patterns and might only later be identified as antipatterns. Then, it\u0026rsquo;s time to refactor them so they can really become a pattern we should follow.\nAino Corry introduced 6 antipatterns in her talk \u0026ldquo;Retrospective Antipatterns\u0026rdquo; that she has learned about in her agile career.\nAntipattern #1: Prime Directive Ignorance The prime directive of retrospectives says that the participants must believe that everyone did the best job they could and that they must believe that a retro can really bring change.\nIf the participants or the facilitator don\u0026rsquo;t believe in that, a retrospective cannot have the effect it should have.\nAntipattern #2: The Wheel of Fortune If we collect problems and talk about solutions without identifying their root causes, we could just as well spin a wheel of fortune for the same effect. Better go through all 5 steps of a retrospective:\n Set the stage - make everyone say something to warm them up. Gather data - look into the past to find out what we might want to change. Generate insights - find the root causes of things we want to change. Decide what to do. Close the retrospective.  
Not part of the talk, but if you\u0026rsquo;re looking for inspiration about what to do in each of the above steps, have a look at the Retromat.\nAntipattern #3: The Disillusioned Facilitator If the facilitator doesn\u0026rsquo;t believe in the activities they are using, the retrospective cannot have the desired effect. Only use those activities that you believe can spark change in the group you are facilitating. Not every group likes the same retro activities.\nAntipattern #4: In the Soup If you hear things like \u0026ldquo;we can\u0026rsquo;t do anything about this\u0026rdquo; or \u0026ldquo;we just have to accept this\u0026rdquo; it\u0026rsquo;s a sign that people think their fate is controlled by outside factors.\nA way out of this is to use the \u0026ldquo;in the soup\u0026rdquo; analogy. There are things we can completely control ourselves, there are things we can influence by nudging someone outside of the team and there are things we can\u0026rsquo;t do anything about - those are \u0026ldquo;in the soup\u0026rdquo; (Note: if you want to go deeper into this analogy, I recommend you read Stephen Covey\u0026rsquo;s book The 7 Habits of Highly Effective People - he calls it the \u0026ldquo;Circle of Influence\u0026rdquo;).\nDraw three concentric circles on a whiteboard (things we can do something about, things we can influence, and the soup) and put solutions on post-its into the circle depending on how much influence we have over implementing them. This makes it obvious which solutions we can actually do something about.\nAntipattern #5: DIY Retrospectives I obviously didn\u0026rsquo;t pay enough attention to this part of the talk since I can\u0026rsquo;t remember the reasoning behind this antipattern.\nAntipattern #6: Disregard of Preparation People coming unprepared is a general problem in meetings. 
This is especially true for remote meetings (Aino included this video in her talk - it paints a hilarious picture of video conferences).\nSend an email the day before and another one 15 minutes before to make people aware of preparations.\nIf you don\u0026rsquo;t get the preparation you want from the participants, like everyone in a video conference to share their faces and concentrate on the meeting, make it embarrassing for them.\nReading Suggestions  Antipatterns for Retrospectives by Aino Corry Project Retrospectives by Norm Kerth  Todd Montgomery on Pride and Quality In his talk \u0026ldquo;Level Up: Quality, Security, and Safety\u0026rdquo;, Todd Montgomery took a stance for taking pride in our work as software developers to increase software quality.\nOur industry creates data breaches and other major incidents non-stop. That should give us pause. Knight Capital fell victim to a programming error that bought stock high and sold it low, losing several hundred million dollars within an hour.\nMany people outside the software industry expect software to be clunky or to not work as it should.\nWe\u0026rsquo;re the only industry that has EULAs that absolve us of responsibility. Imagine that in other engineering disciplines!\nThings that don\u0026rsquo;t help in creating secure and safe (= high-quality) software:\n Languages, frameworks, and methodologies don\u0026rsquo;t matter for software quality. The same is true for the recruiting process of developers and using the latest technologies like AI, ML, and Reactive. Code reviews and code coverage alone don\u0026rsquo;t create high-quality software, though they might have an effect if applied together (100% code coverage might even motivate developers not to implement any error handling at all, because it\u0026rsquo;s so hard to test). Any kind of dogma holds us back and might cause harm.  
Things that do help in creating secure and safe (= high-quality) software:\n Getting to the bottom of errors. Analyze the root causes, even if it\u0026rsquo;s hard! Using specs for communication. Early requirements analysis. Early domain expertise. A culture of accountability.  The thing that really helps in creating safe and secure software is taking pride in the work. As Steve Jobs said:\n When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.\n Develop a taste for software development. Our sense of taste will then find out very quickly if something doesn\u0026rsquo;t taste good. Care about the work. This leads automatically to accountability and responsibility.\nReading Suggestions  Putt\u0026rsquo;s Law and the Successful Technocrat by Archibald Putt  Martin Thompson on Good-Mannered Protocols In his talk \u0026ldquo;Interaction Protocols: It\u0026rsquo;s all about Good Manners\u0026rdquo;, Martin Thompson made a case for thinking deeply about the protocols we\u0026rsquo;re creating when building software. Think of little-endian vs. big-endian byte order. The terms little-endian and big-endian originate from Gulliver\u0026rsquo;s Travels where the little-endians and the big-endians fight over which side to open an egg on (I really did not know that! Probably because I watched the movie in German.). Which side to open an egg on is a protocol, and a good or a bad manner depending on who you ask.\nWhen we think about how our components should interact, however, we talk about API design and not protocol design. But an API is a very narrow view of the interaction between components. It doesn\u0026rsquo;t include the sequence of things. 
To create robust interaction, we need to think about the sequence of things, and ask ourselves the following questions:\n What if things were called in a different order? What if things don\u0026rsquo;t arrive?  Studies show that 25% of production outages are caused by missing or buggy error handling.\nMartin went on to discuss some interaction design best practices:\n Use a binary format instead of a text-based format for interaction protocols. The data centers of the world use more energy than the airline industry. Much of that comes from processing wasteful text-based data formats. It could be reduced by using a more efficient binary format. Synchronous interaction is also part of the energy waste. Instead of waiting for a response, think about using async to utilize the processing power we have. Batch the things we send over the line. We wouldn\u0026rsquo;t send someone to fetch things from the supermarket for us one-by-one. We would send someone to get all the things we need in one go. Beware of \u0026ldquo;snake oil protocols\u0026rdquo;. Protocols like 2PC/XA are broken. Guaranteed delivery doesn\u0026rsquo;t exist. Instead, implement a feedback and recovery mechanism if something doesn\u0026rsquo;t reach its destination. A computer should check if the other computer it talked to has understood what it wanted to say (we humans do it all the time when talking to someone else).  Edith Harbaugh on Feature Flagging In her talk \u0026ldquo;Mistakes we made - Patterns \u0026amp; Anti-Patterns For Effective Feature Flagging\u0026rdquo;, Edith Harbaugh - CEO and co-founder of the feature flagging service LaunchDarkly - introduced some good and bad practices when working with feature flags.\nFeature flags make it possible to reduce deployment cycle time. We cannot afford big \u0026ldquo;we need to make everything perfect\u0026rdquo;-type deployments anymore. 
Instead, we deploy code behind feature flags and release it to a fraction of the user base to test it.\nEdith discussed the following feature flagging best practices:\n Use feature flags to create a kill switch for features. Simply turn it off if it doesn\u0026rsquo;t work and fix it on Monday (instead of on the weekend). No big rollbacks needed. We can use feature flags in a single branch instead of one branch for each feature. This enables trunk-based development. Use feature flags to create controlled rollouts. At first, only expose a fraction of the users to the new feature and progressively increase this fraction to make sure the feature works as expected. A side-effect of controlled rollouts is that we can let users opt in to a beta test and only activate certain features for the beta testers without having to set up a whole new environment for the tests. Conversely, we can block a feature for certain users if we need to. With feature flags, we can test in production if we activate the features only for ourselves. \u0026ldquo;We always test in production - sometimes we\u0026rsquo;re just lucky enough to have tested before that\u0026rdquo;. Instead of trying (and failing) to reproduce errors in a staging environment, make the production environment observable enough to support error analysis. In subscription models, we can wrap features that are available for a certain subscription level with long-lived feature flags. This way we can easily move features between subscription levels. Feature flags easily allow the sunsetting of features that we no longer want to have in the system.  She mentioned the following antipatterns:\n Ambiguously named feature flags lead to misunderstandings and activation or deactivation of the wrong feature. Think long and hard when naming feature flags. Have a naming pattern. Overused feature flags, i.e. using a single feature flag for multiple things, can lead to feature flags with mixed responsibilities and unwanted side effects. 
We don\u0026rsquo;t understand what the feature flag does anymore. Overlapping feature flags might conflict with each other and may have unwanted side effects on each other. Dangerous feature flags are feature flags that wrap a very important feature without which many users couldn\u0026rsquo;t work with the software anymore. Either remove the feature flag completely or at least don\u0026rsquo;t make it too easy to disable (e.g. don\u0026rsquo;t put a button on a dashboard that allows disabling it - there\u0026rsquo;s a true story behind that). Leftover feature flags are feature flags that stay in the system and are not maintained anymore. They become technical debt. What\u0026rsquo;s worse: when the feature flag server isn\u0026rsquo;t available for some reason, the feature flags fall back to the default value we defined 2 years ago, which might not be the value we want anymore.  Troy Hunt on Security Breaches In his keynote \u0026ldquo;Rise of the Breaches\u0026rdquo;, security researcher Troy Hunt told of big security breaches, shared stories about how he and other security researchers have privately disclosed security issues before they could be exploited, and described the often naive behavior of the companies responsible for the security issues.\nTroy is the creator of haveibeenpwned.com, where he collects data that has been made publicly available by security breaches so you can check if your email has been among the leaked accounts (Sit down before you paste your email address in there, you have almost certainly been pwned)!\nToday, it has never been easier to search for and exploit security holes in software that is available via the internet. All it takes is a Google search for pages with an \u0026ldquo;id\u0026rdquo; parameter in the URL and a freely downloadable tool that exploits this parameter to inject SQL. If successful, the tool will automatically list all database tables and let you pick which ones you want to copy. 
A 77 million-pound data breach in the UK was carried out by a 17-year-old. Anyone can do it! There are YouTube videos and tutorials freely available!\nIoT makes things worse. Usually, you have an IoT device accompanied by an app. The app talks to a server on the internet, which then talks to the IoT device. By simply sniffing the traffic between the app and the server, you can find very embarrassing security holes:\n  Nissan had a companion app to its model LEAF that allowed users, among other things, to control the heating of the car. By simply replacing your own serial number in the URL with that of another car, you could control the other car\u0026rsquo;s heating. From anywhere across the globe. And the serial number is printed on the windshield for everyone to see.\n  With the TicTocTrack smartwatch, TicToc has created a watch for kids that allows parents to track where their kids are and to control who is allowed to call them via the watch. Sadly, you can replace your own \u0026ldquo;family ID\u0026rdquo; in the URL with that of another family so you can see the locations of other children. You can also add yourself to the allowed caller list and call those children. They don\u0026rsquo;t even need to accept the call, because the watch accepts it automatically. The TicTocTrack has been banned in Germany but is still available in Australia and other countries (just in case you still need a Christmas present for your kids).\n  Confronted with security holes like these, companies often defend their behavior instead of quickly putting a stop to security issues.\nSimon Brown on Software Design In his talk \u0026ldquo;The lost art of software design\u0026rdquo;, Simon Brown made a case for doing a little up-front design instead of \u0026ldquo;just using a whiteboard\u0026rdquo; and getting started with development right away.\nIt\u0026rsquo;s crucial to make some decisions early in a project (programming language, microservices vs. 
monolith, \u0026hellip;) so that the developers have some architectural guardrails to guide their work. But people hesitate to make such decisions in the design phase of a project, often due to misinterpreted Agile values. When writing a technical book alone, you want to have an outline before starting, so why not have an outline when building software with tens of people?\nGoing for an MVP is often used as an excuse for not doing any design work, since \u0026ldquo;the design will emerge eventually\u0026rdquo;. But doing no upfront design can lead to each iteration of the MVP becoming a complete refactoring of the architecture.\nThe Agile Manifesto even promotes design work:\n Continuous attention to technical excellence and good design enhances agility.\n So good architecture is an enabler for agility, not an inhibitor.\nUp-front design doesn\u0026rsquo;t need to be perfect; we don\u0026rsquo;t want to do BDUF (Big Design Up-Front). But instead of re-inventing the architecture with each iteration of an MVP (i.e. start with a skateboard and evolve it into a scooter, then a bike, and finally a car), we can begin with a \u0026ldquo;primitive whole\u0026rdquo; (i.e. start with a very small car without an engine, and evolve that car to its final state).\nEvolutionary architecture is doable, but it\u0026rsquo;s very hard.\nAs a side effect, design work helps in creating estimates.\nSo, how much up-front design should we do? We should stop when\n we understand the architecture drivers we understand the quality attributes we understand the constraints we understand the significant design decisions to be made we can share a technical vision we\u0026rsquo;re comfortable with the risk.  
Reading Suggestions  Building Evolutionary Architectures by Neal Ford, Rebecca Parsons, and Patrick Kua  Sarah Wells on Mature Microservices In her talk \u0026ldquo;Mature microservices and how to operate them\u0026rdquo;, Sarah Wells - responsible for operations and reliability at the Financial Times - shared her experience in managing microservices.\nIn a microservice environment, we never understand the whole picture because it\u0026rsquo;s too complex. We might not know how to access the database of a specific service to quickly fix a problem. Most problems we encounter are new.\nSo why not go back to monoliths? Because we want to be able to make cheap experiments. We want to quickly implement a certain feature and test it in production. We want to release many times a day.\nOptimizing For Speed In the old days, we did deployments into production by following a spreadsheet with \u0026gt;50 steps to be performed by managers, developers, and ops people. The Financial Times had all of 12 releases a year (note: I find that 12 times a year was still pretty good in the old days \u0026hellip; I remember quarterly or even yearly deployments in my projects from back then, and, yes, we had that spreadsheet, too). Features we built took 8 weeks or more to get into production.\nIf we want to be able to get features out there fast and to do experiments, we need to optimize for speed:\n automated release pipelines automated testing as part of the pipeline continuous integration zero-downtime deployments test and deploy services independently little to no coordination between teams for deployments the person who built it is the person who deploys it no filling out forms or collecting permissions for deploying a change  Operating Microservices To operate microservices successfully, DevOps is key. 
Only the developers really know what\u0026rsquo;s going on and can fix things.\nBesides DevOps, other factors help:\n Make things like infrastructure (queues, databases, \u0026hellip;) someone else\u0026rsquo;s (= your cloud provider\u0026rsquo;s) problem so that building stuff takes days instead of weeks. Bake resilience into your services because things will fail. Bake observability (logs, metrics) into your services because reproducing errors locally will be impossible. Don\u0026rsquo;t measure every metric you can think of. Request rates, error rates, and request durations for the main business use cases go a long way. In case of incidents, mitigate now and investigate later instead of trying to fix the root cause directly. Automate things as soon as they get painful. Use a service mesh to take care of communication between services.  When People Move On Naturally, people leave a team and new people join it. Even whole teams are dissolved. What happens to the services the team owned, then? In the DevOps world, services must be owned by a team.\nThe following actions have helped the Financial Times with changing people and teams:\n Either invest in a service or abandon it. Don\u0026rsquo;t let it live without an owner. Build a graph database to manage properties of the services like ownership and dependencies and to build support tools atop that graph. Practice failures to gain trust. Use unique service codes to identify services across different supporting tools. Build a searchable runbook library. Keep the runbooks near the source code (i.e. in the repository) so you don\u0026rsquo;t have to search for them. Make a game of who has the best documentation. Automatic and manual reviews of the documentation.  
Reading Suggestions  Accelerate by Nicole Forsgren, Jez Humble, and Gene Kim Continuous Delivery by Jez Humble and David Farley  ","date":"December 12, 2019","image":"https://reflectoring.io/images/stock/0055-yow2019-1200x628-branded_hu2304493210e8c78a8f2110ee7756924d_473093_650x0_resize_q90_box.jpg","permalink":"/yow2019-report/","title":"Insights from YOW! 2019"},{"categories":["Spring Boot"],"contents":"Sometimes we just need to run a snippet of code on application startup, be it only to log that a certain bean has loaded or the application is ready to process requests.\nSpring Boot offers at least 5 different ways of executing code on startup, so which one should we choose? This article gives an overview of those different ways and explains when to use which one.\nLet\u0026rsquo;s start by looking at some use cases, though.\n Example Code This article is accompanied by a working code example on GitHub. Why Would I Want to Execute Code at Startup? The most critical use case of doing something at application startup is when we want our application to start processing certain data only when everything is set up to support that processing.\nImagine our application is event-driven and pulls events from a queue, processes them, and then sends new events to another queue. In this case, we want the application to start pulling events from the source queue only if the connection to the target queue is ready to receive events. So we include some startup logic that activates the event processing once the connection to the target queue is ready.\nIn a more conventional setting, our application responds to HTTP requests, loads data from a database, and stores data back to the database. 
We want to start responding to HTTP requests only once the database connection is ready to do its work, otherwise, we would be serving responses with HTTP status 500 until the connection was ready.\nSpring Boot takes care of many of those scenarios automatically and will activate certain connections only when the application is \u0026ldquo;warm\u0026rdquo;.\nFor custom scenarios, though, we need a way to react to application startup with custom code. Spring and Spring Boot offer several ways of doing this.\nLet\u0026rsquo;s have a look at each of them in turn.\nCommandLineRunner CommandLineRunner is a simple interface we can implement to execute some code after the Spring application has successfully started up:\n@Component @Order(1) class MyCommandLineRunner implements CommandLineRunner { private static final Logger logger = ...; @Override public void run(String... args) throws Exception { if(args.length \u0026gt; 0) { logger.info(\u0026#34;first command-line parameter: \u0026#39;{}\u0026#39;\u0026#34;, args[0]); } } } When Spring Boot finds a CommandLineRunner bean in the application context, it will call its run() method after the application has started up and pass in the command-line arguments with which the application has been started.\nWe can now start the application with a command-line parameter like this:\njava -jar application.jar --foo=bar This will produce the following log output:\nfirst command-line parameter: \u0026#39;--foo=bar\u0026#39; As we can see, the parameter is not parsed but instead interpreted as a single parameter with the value --foo=bar. We\u0026rsquo;ll later see how an ApplicationRunner parses arguments for us.\nNote the Exception in the signature of run(). Even though we don\u0026rsquo;t need to add it to the signature in our case, because we\u0026rsquo;re not throwing an exception, it shows that Spring Boot will handle exceptions in our CommandLineRunner. 
Spring Boot considers a CommandLineRunner to be part of the application startup and will abort the startup when it throws an exception.\nSeveral CommandLineRunners can be put in order using the @Order annotation.\nWhen we want to access simple space-separated command-line parameters, a CommandLineRunner is the way to go.\nDon't @Order too much!  While the @Order annotation is very convenient to put certain startup logic fragments into a sequence, it's also a sign that those startup fragments have a dependency on each other. We should strive to have as few dependencies as possible to create a maintainable codebase.  What's more, the @Order annotation creates a hard-to-understand logical dependency instead of an easy-to-catch compile-time dependency. Future you might wonder about the @Order annotation and delete it, causing Armageddon on the way.  ApplicationRunner We can use an ApplicationRunner instead if we want the command-line arguments parsed:\n@Component @Order(2) class MyApplicationRunner implements ApplicationRunner { private static final Logger logger = ...; @Override public void run(ApplicationArguments args) throws Exception { logger.info(\u0026#34;ApplicationRunner#run()\u0026#34;); logger.info(\u0026#34;foo: {}\u0026#34;, args.getOptionValues(\u0026#34;foo\u0026#34;)); } } The ApplicationArguments object gives us access to the parsed command-line arguments. Each argument can have multiple values because they might be used more than once in the command-line. We can get an array of the values for a specific parameter by calling getOptionValues().\nLet\u0026rsquo;s start the application with the foo parameter again:\njava -jar application.jar --foo=bar The resulting log output looks like this:\nfoo: [bar] As with CommandLineRunner, an exception in the run() method will abort application startup and several ApplicationRunners can be put in sequence using the @Order annotation. 
The sequence created by @Order is shared between CommandLineRunners and ApplicationRunners.\nWe\u0026rsquo;ll want to use an ApplicationRunner if we need to create some global startup logic with access to complex command-line arguments.\nApplicationListener If we don\u0026rsquo;t need access to command-line parameters, we can tie our startup logic to Spring\u0026rsquo;s ApplicationReadyEvent:\n@Component @Order(0) class MyApplicationListener implements ApplicationListener\u0026lt;ApplicationReadyEvent\u0026gt; { private static final Logger logger = ...; @Override public void onApplicationEvent(ApplicationReadyEvent event) { logger.info(\u0026#34;ApplicationListener#onApplicationEvent()\u0026#34;); } } The ApplicationReadyEvent is fired only after the application is ready (duh) so that the above listener will execute after all the other solutions described in this article have done their work.\nMultiple ApplicationListeners can be put in an order with the @Order annotation. The order sequence is shared only with other ApplicationListeners and not with ApplicationRunners or CommandLineRunners.\nAn ApplicationListener listening for the ApplicationReadyEvent is the way to go if we need to create some global startup logic without access to command-line parameters. We can still access environment parameters by injecting them with Spring Boot\u0026rsquo;s support for configuration properties.\n@PostConstruct Another simple solution to create startup logic is by providing an initializing method that is called by Spring during bean creation. 
All we have to do is to add the @PostConstruct annotation to a method:\n@Component @DependsOn(\u0026#34;myApplicationListener\u0026#34;) class MyPostConstructBean { private static final Logger logger = ...; @PostConstruct void postConstruct(){ logger.info(\u0026#34;@PostConstruct\u0026#34;); } } This method will be called by Spring once the bean of type MyPostConstructBean has been successfully instantiated.\nThe @PostConstruct method is called right after the bean has been created by Spring, so we cannot order it freely with the @Order annotation, as it may depend on other Spring beans that are @Autowired into our bean.\nInstead, it will be called after all beans it depends on have been initialized. If we want to add an artificial dependency, and thus create an order, we can use the @DependsOn annotation (same warnings apply as for the @Order annotation!).\nA @PostConstruct method is inherently tied to a specific Spring bean so it should be used for the initialization logic of this single bean only.\nFor global initialization logic, a CommandLineRunner, ApplicationRunner, or ApplicationListener provides a better solution.\nInitializingBean Very similar in effect to the @PostConstruct solution, we can implement the InitializingBean interface and let Spring call a certain initializing method:\n@Component class MyInitializingBean implements InitializingBean { private static final Logger logger = ...; @Override public void afterPropertiesSet() throws Exception { logger.info(\u0026#34;InitializingBean#afterPropertiesSet()\u0026#34;); } } Spring will call the afterPropertiesSet() method during application startup. As the name suggests, we can be sure that all the properties of our bean have been populated by Spring. 
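Since the relative timing of these callbacks can be confusing, here is a small sketch of a single bean that logs from each lifecycle step (this bean is not from the original example code; the class name and log messages are made up). Spring invokes the steps in a fixed order: constructor first, then dependency injection, then @PostConstruct, then afterPropertiesSet():

```java
import javax.annotation.PostConstruct;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.stereotype.Component;

// Hypothetical demo bean that logs each lifecycle step. On startup, the log
// lines appear in this order: constructor, @PostConstruct, afterPropertiesSet().
@Component
class LifecycleOrderBean implements InitializingBean {

  private static final Logger logger =
      LoggerFactory.getLogger(LifecycleOrderBean.class);

  // 1. The constructor runs first. With constructor injection, every
  //    dependency passed in here is already fully initialized.
  LifecycleOrderBean() {
    logger.info("constructor");
  }

  // 2. Runs after the bean is constructed and all @Autowired members are set.
  @PostConstruct
  void init() {
    logger.info("@PostConstruct");
  }

  // 3. Runs right after this bean's @PostConstruct methods.
  @Override
  public void afterPropertiesSet() {
    logger.info("afterPropertiesSet()");
  }
}
```

Note that @PostConstruct and afterPropertiesSet() of a single bean run back-to-back; the ordering that matters in practice is the ordering between different beans, which is what @DependsOn influences.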
If we\u0026rsquo;re using @Autowired on certain properties (which we shouldn\u0026rsquo;t - we should use constructor injection instead), Spring will have injected beans into those properties before calling afterPropertiesSet() - same as with @PostConstruct.\nWith both InitializingBean and @PostConstruct we must be careful not to depend on state that has been initialized in the afterPropertiesSet() or @PostConstruct method of another bean. That state may not have been initialized yet, causing a NullPointerException.\nIf possible, we should use constructor injection and initialize everything we need in the constructor, because that makes this kind of error impossible.\nConclusion There are many ways of executing code during the startup of a Spring Boot application. Although they look similar, each one behaves slightly differently or provides different features, so they all have a right to exist.\nWe can influence the sequence of different startup beans with the @Order annotation but should only use this as a last resort, because it introduces a difficult-to-grasp logical dependency between those beans.\nIf you want to see all solutions at work, have a look at the GitHub repository.\n","date":"December 2, 2019","image":"https://reflectoring.io/images/stock/0039-start-1200x628-branded_hu0e786b71aef533dc2d1f5d8371554774_82130_650x0_resize_q90_box.jpg","permalink":"/spring-boot-execute-on-startup/","title":"Executing Code on Spring Boot Application Startup"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you\u0026rsquo;re new to programming and want to learn some coding best practices you\u0026rsquo;re a coding veteran and want to confirm your own coding best practices you met a \u0026ldquo;Clean Code\u0026rdquo; fanatic and want to check out if the book really is as black-and-white as they are preaching (this was my main reason to read it)  Overview I guess \u0026ldquo;Clean Code\u0026rdquo; by Robert 
C. Martin doesn\u0026rsquo;t need an introduction. I knew the book well before I read it. Even though I\u0026rsquo;m quite comfortable with my own coding best practices, I read it to confirm my coding practices and to be able to discuss it with any fanatic \u0026ldquo;Clean Code\u0026rdquo; disciple I happen to meet.\nThe book contains mostly small and easily digestible chapters, which get (a lot) longer and (a lot) more tiring towards the end of the book. It uses Java for most code examples and even some Java-specific frameworks in the discussions, so you get the most out of it if you\u0026rsquo;re familiar with Java.\nThe book starts with isolated coding practices around naming and functions and becomes broader and more general towards the end, discussing systems, concurrency, and code smells.\nLikes and Dislikes I agree with most of the clean code practices discussed in the book. However, they should be applied with common sense instead of being followed dogmatically.\nSome quotes from the book are very black-and-white, like\n \u0026ldquo;Duplication may be the root of all evil\u0026rdquo;,\n or\n \u0026ldquo;Comments are always failures\u0026rdquo;.\n Taken out of context, these quotes may very well recruit \u0026ldquo;Clean Code\u0026rdquo; fanatics who split all of their code into one-line methods and argue about every comment in your code, even if they are justified.\nIn most cases, though, Martin softens up the meaning of those rules and explains when it makes sense to break them.\nThe book contains a lot of quotes about coding, however, that are very valid even when taken out of context. This is my favorite:\n \u0026ldquo;To write clean code, you must first write dirty code and then clean it.\u0026rdquo;\n This is exactly how I create clean code :).\nThe first half of the book is very concise and fun to read, as it explains clean coding practices.\nThe second half is tiresome to read as it contains a lot of very long code examples that are hard to follow. 
Most of the value of the book is in the first half, though.\nKey Takeaways Here are my notes of the book in my own words. I added some comments in italics.\nClean Code  code will always be needed to translate vague requirements into \u0026ldquo;perfectly executed programs\u0026rdquo; bad code can bring a whole company down due to maintenance nightmares it\u0026rsquo;s our responsibility to create clean code even if deadlines are looming - communication is key we usually won\u0026rsquo;t make a deadline by cutting corners \u0026ldquo;bad code tempts the mess to grow\u0026rdquo; (this is an instance of the \u0026ldquo;Broken Windows\u0026rdquo; Theory - learn about broken windows in my article about code coverage and in my book)  Meaningful Names  meaningful names matter because programming is a social activity and we have to be able to talk about it keep interface names clean (i.e. without prefixed \u0026ldquo;I\u0026rdquo;, for instance) and prefer to rename the implementation instead (call it \u0026ldquo;\u0026hellip;Impl\u0026rdquo;, if you must) - we don\u0026rsquo;t want clients to know that they\u0026rsquo;re using an interface  Functions  functions should be short functions should not mix levels of abstraction a way to hide big switch statements is to hide them in a factory and use polymorphism to implement behavior that depends on the switch value functions should have as few parameters as possible - understandability suffers with each added parameter functions should not have side effects functions should not need a double-take to understand functions should avoid output parameters functions should separate commands from queries code is refactored into small functions - it\u0026rsquo;s not created that way  Comments  inaccurate comments are worse than no comments at all - and comments tend to become inaccurate over time we cannot always avoid comments - sometimes we need comments that are informative, explaining, warning, or amplifying intent there are many 
more types of bad comments than of good comments javadoc should not be seen as mandatory - it often only adds clutter commented-out source code creates questions, not answers, so it should be deleted a comment should not require further explanation (to explain the comment)  Formatting  a source file should read like a newspaper article - starting with the general idea and increasing in detail as you read on use vertical distance (empty lines) to separate thoughts and vertical density to group things together most formatting can and should be automated with tools  Objects and Data Structures  don\u0026rsquo;t add getters and setters by default - instead, provide an abstract interface to the data so that you can modify the implementation without having to think about the clients adding a new data structure to procedural code is hard because all functions have to change adding a new function to object-oriented code is hard because all objects have to change an object hides its internals, a data structure doesn\u0026rsquo;t objects expose behavior and hide data (i.e. adding or changing behavior is hard while adding or changing the underlying data is easy) data structures have no behavior and expose data (i.e. 
adding behavior is easy while adding or changing data is hard) we don\u0026rsquo;t need objects all the time - sometimes a data structure will do  Error Handling  error handling should not obscure the business logic checked exceptions violate the Open/Closed Principle - every method between the throwing method and the handling method needs to declare it  Boundaries  wrap third-party code so as not to expose its externals to your system use \u0026ldquo;learning tests\u0026rdquo; to try out third-party code before integrating it into your codebase integration of third-party code should be covered by \u0026ldquo;boundary tests\u0026rdquo; so that we know if a new version of the library will work as expected  Unit Tests  the bulk of unit tests created when practicing TDD can become a management problem dirty tests are worse than no tests - they will reduce understanding and take more time to change than the production code the causal chain of dirty tests:  dirty tests developers complain developers throw tests away developers fear changes in production code production code rots defect rate climbs   tests enable change test code must be made easy to read don\u0026rsquo;t stick to the \u0026ldquo;one assertion per test\u0026rdquo; rule dogmatically a good test follows the FIRST rules:  fast independent repeatable self-validating timely    Classes  a class should have a single reason to change \u0026ldquo;A system with many small classes has no more moving parts than a system with a few large classes.\u0026rdquo; classes are maximally cohesive if every method manipulates or accesses each instance variable if a class loses cohesion, split it  Systems  separate bootstrapping logic from business logic (you might like the chapter \u0026ldquo;Assembling the Application\u0026rdquo; in my own book) systems can grow iteratively if we maintain proper separation of concerns the rest of this chapter is just a sequence of shallow discussions on EJB, AOP, and other concepts.  
Emergence  making sure the system is testable helps us create better designs because we\u0026rsquo;re building small classes that are easy to test refactor to remove duplication refactor to separate responsibilities take pride in workmanship to improve code continuously - the design will emerge  Concurrency  concurrency is hard to get right, so clean code is especially important concurrency is a responsibility and should be separated from other responsibilities to avoid concurrency issues restrict scope, return object copies, and make threads independent keep synchronized methods small  Successive Refinement  \u0026ldquo;To write clean code, you must first write dirty code and then clean it.\u0026rdquo; this chapter contains 60 pages with code examples making it a chore to read - I skipped most of it\u0026hellip;  JUnit  refactoring is iterative - each refactoring may invalidate a previous refactoring this chapter contains very long code examples, making it a chore to read - I skipped most of it\u0026hellip;  Refactoring SerialDate  a rather boring discussion about how Uncle Bob refactored the SerialDate class into clean code  Smells and Heuristics  this chapter contains a valuable list of code smells which don\u0026rsquo;t make sense to include in this summary I believe that this chapter violates the Single Responsibility Principle by explaining both smells and heuristics \u0026hellip; why not two separate chapters?  Conclusion The first half of \u0026ldquo;Clean Code\u0026rdquo; is a worthy read and helps to establish or confirm good coding practices. 
It\u0026rsquo;s easy - even fun - to read about the reasoning behind the clean coding practices.\nYou might want to skip the second half, though, as it feels like a chore to read and, in my opinion, doesn\u0026rsquo;t bring as much value.\nBe careful with out-of-context quotes from this book, as they tend to be very black-and-white.\n","date":"November 19, 2019","image":"https://reflectoring.io/images/covers/clean-code-teaser_hu19e98324c1ec6e18b342dd2a5a4e2e17_47072_650x0_resize_q90_box.jpg","permalink":"/book-review-clean-code/","title":"Book Review: Clean Code"},{"categories":["Spring Boot"],"contents":"Sometimes we need some structured, static data in our application. Perhaps the static data is a workaround until we have built the full feature that stores the data in the database and allows users to maintain the data themselves. Or we just need a way to easily maintain and access rarely changing data without the overhead of storing it in a database.\nUse cases might be:\n maintaining a large enumeration containing structured information that changes every once in a while - we don\u0026rsquo;t want to use enums in code because we don\u0026rsquo;t want to recompile the whole application for each change, or displaying static data in an application, like the name and address of the CEO in the letterhead of an invoice or a \u0026ldquo;Quote of the Day\u0026rdquo; on a web page, or using any structured data you can think of that you don\u0026rsquo;t want to maintain in code nor in the database.  With its @ConfigurationProperties feature, Spring Boot supports access to structured data from one or more configuration files.\nIn this article, we\u0026rsquo;ll have a look at:\n how to create a configuration file with the data, how to create an integration test that verifies the setup, and how to access the data in the application.  
We\u0026rsquo;ll take the \u0026ldquo;Quote of the Day\u0026rdquo; use case as an example (I actually built that a couple weeks back as a farewell present to my previous team :)).\n Example Code This article is accompanied by a working code example on GitHub. Storing Static Data in a Config File First, we create a YAML file quotes.yml that contains our static data:\nstatic: quotes: - text: \u0026#34;A clever person solves a problem. A wise person avoids it.\u0026#34; author: \u0026#34;Albert Einstein\u0026#34; - text: \u0026#34;Adding manpower to a late software project makes it later.\u0026#34; author: \u0026#34;Fred Brooks\u0026#34; If you prefer properties files over YAML, you can use that instead. It\u0026rsquo;s just easier to represent nested data structures with YAML.\nIn our case, each quote has a text and an author. Each quote will later be represented in a Quote object.\nNote that we prefixed the data with static:quotes. This is necessary to create a unique namespace because Spring Boot will later merge the content of this config file with the rest of its configuration.\nMaking Spring Boot Aware of the Config File Now we have to make Spring Boot aware of this configuration file. 
We can do this by setting the system property spring.config.location each time we start the Spring Boot application:\n-Dspring.config.location=./,./quotes.yml This tells Spring Boot to search for an application.properties or application.yml file in the current folder (which is the default) and to additionally load the file quotes.yml.\nThis is all we need to do for Spring Boot to load our YAML file and expose the content within our application.\nAccessing the Static Data Now to the code.\nFirst off, we need a Quote data structure that serves as a vessel for the configuration data:\n@ConstructorBinding class Quote { private final String text; private final String author; Quote(String text, String author) { this.text = text; this.author = author; } // getters and setters omitted  } The Quote class only has simple String properties. If we have more complex data types, we can make use of custom converters that convert the configuration parameters (which are always Strings) to the custom types we need.\nNote that Quotes are immutable, taking all their state in the constructor. Because of this, we need to add the @ConstructorBinding annotation to the class, telling Spring Boot to use the constructor for instantiation. Otherwise, we\u0026rsquo;ll get a binding error (see box below).\nNext, we take advantage of Spring Boot\u0026rsquo;s @ConfigurationProperties feature to bind the static data to a QuotesProperties object:\n@Component @ConfigurationProperties(\u0026#34;static\u0026#34;) public class QuotesProperties { private final List\u0026lt;Quote\u0026gt; quotes; public QuotesProperties(List\u0026lt;Quote\u0026gt; quotes) { this.quotes = quotes; } public List\u0026lt;Quote\u0026gt; getQuotes(){ return this.quotes; } } This is where our namespace prefix comes into play. The QuotesProperties class is bound to the namespace static and the quotes prefix in the config file binds to the field of the same name.\nGetting a \"Binding failed\" error?  
Spring Boot\u0026rsquo;s error messages are a little opaque when the binding of a configuration property fails. You might get an error message like Binding to target ... failed ... property was left unbound without knowing the root cause.  In my case, the root cause was always that I did not provide a default constructor and getters and setters in one of the classes that act as a data structure for the configuration properties (Quote, in this case). By default, Spring Boot uses a no-args constructor and setters to create and populate an object. This does not allow immutable objects, however.  If we want immutable objects, as is the case with Quote, we need to add the @ConstructorBinding annotation to tell Spring Boot to use the constructor.  Verifying Access to the Static Data To test if our static data works as expected, we can create a simple integration test:\n@SpringBootTest( properties = { \u0026#34;spring.config.location = ./,file:./quotes.yml\u0026#34; } ) class QuotesPropertiesTest { @Autowired private QuotesProperties quotesProperties; @Test void staticQuotesAreLoaded() { assertThat(quotesProperties.getQuotes()).hasSize(2); } } The most important part of this test is setting the spring.config.location property to tell Spring Boot to pick up our quotes.yml file.\nThen, we can simply inject the QuotesProperties bean and assert that it contains the quotes we expect.\nUsing the Static Data Finally, having the QuotesProperties bean in place and tested, we can now simply inject it into any other bean to do whatever we need with our quotes. 
For instance, we can build a scheduler that logs a random quote every 5 seconds:\n@Configuration @EnableScheduling public class RandomQuotePrinter { private static final Logger logger = LoggerFactory.getLogger(RandomQuotePrinter.class); private final Random random = new Random(); private final QuotesProperties quotesProperties; public RandomQuotePrinter(QuotesProperties quotesProperties) { this.quotesProperties = quotesProperties; } @Scheduled(fixedRate = 5000) void printRandomQuote(){ int index = random.nextInt(quotesProperties.getQuotes().size()); Quote quote = quotesProperties.getQuotes().get(index); logger.info(\u0026#34;\u0026#39;{}\u0026#39; - {}\u0026#34;, quote.getText(), quote.getAuthor()); } } Conclusion With @ConfigurationProperties, Spring Boot makes it easy to load configuration from external sources, especially from local configuration files. These files can contain custom complex data structures and thus are ideal for static data that we don\u0026rsquo;t want to maintain within our source code or the database.\nYou can find the code to this article on github.\n","date":"November 12, 2019","image":"https://reflectoring.io/images/stock/0031-matrix-1200x628-branded_hufb3c207f9151b804bbf7fe86cefe5814_184798_650x0_resize_q90_box.jpg","permalink":"/spring-boot-static-data/","title":"Static Data with Spring Boot"},{"categories":["Java","Software Craft","Spring Boot"],"contents":"The term \u0026ldquo;Hexagonal Architecture\u0026rdquo; has been around for a long time. Long enough that the primary source on this topic has been offline for a while and has only recently been rescued from the archives.\nI found, however, that there are very few resources about how to actually implement an application in this architecture style. 
The goal of this article is to provide an opinionated way of implementing a web application in the hexagonal style with Java and Spring.\nIf you\u0026rsquo;d like to dive deeper into the topic, have a look at my book.\n Example Code This article is accompanied by a working code example on GitHub. What is \u0026ldquo;Hexagonal Architecture\u0026rdquo;? The main feature of \u0026ldquo;Hexagonal Architecture\u0026rdquo;, as opposed to the common layered architecture style, is that the dependencies between our components point \u0026ldquo;inward\u0026rdquo;, towards our domain objects:\nThe hexagon is just a fancy way to describe the core of the application that is made up of domain objects, use cases that operate on them, and input and output ports that provide an interface to the outside world.\nLet\u0026rsquo;s have a look at each of the stereotypes in this architecture style.\nDomain Objects In a domain rich with business rules, domain objects are the lifeblood of an application. Domain objects can contain both state and behavior. The closer the behavior is to the state, the easier the code will be to understand, reason about, and maintain.\nDomain objects don\u0026rsquo;t have any outward dependency. They\u0026rsquo;re pure Java and provide an API for use cases to operate on them.\nBecause domain objects have no dependencies on other layers of the application, changes in other layers don\u0026rsquo;t affect them. They can evolve free of dependencies. This is a prime example of the Single Responsibility Principle (the \u0026ldquo;S\u0026rdquo; in \u0026ldquo;SOLID\u0026rdquo;), which states that components should have only one reason to change. For our domain object, this reason is a change in business requirements.\nHaving a single responsibility lets us evolve our domain objects without having to take external dependencies in regard. This evolvability makes the hexagonal architecture style perfect for when you\u0026rsquo;re practicing Domain-Driven Design. 
While developing, we just follow the natural flow of dependencies: we start coding in the domain objects and go outward from there. If that\u0026rsquo;s not Domain-Driven, then I don\u0026rsquo;t know what is.\nUse Cases We know use cases as abstract descriptions of what users are doing with our software. In the hexagonal architecture style, it makes sense to promote use cases to first-class citizens of our codebase.\nA use case in this sense is a class that handles everything around, well, a certain use case. As an example let\u0026rsquo;s consider the use case \u0026ldquo;Send money from one account to another\u0026rdquo; in a banking application. We\u0026rsquo;d create a class SendMoneyUseCase with a distinct API that allows a user to transfer money. The code contains all the business rule validations and logic that are specific to the use case and thus cannot be implemented within the domain objects. Everything else is delegated to the domain objects (there might be a domain object Account, for instance).\nSimilar to the domain objects, a use case class has no dependency on outward components. When it needs something from outside of the hexagon, we create an output port.\nInput and Output Ports The domain objects and use cases are within the hexagon, i.e. within the core of the application. Every communication to and from the outside happens through dedicated \u0026ldquo;ports\u0026rdquo;.\nAn input port is a simple interface that can be called by outward components and that is implemented by a use case. The component calling such an input port is called an input adapter or \u0026ldquo;driving\u0026rdquo; adapter.\nAn output port is again a simple interface that can be called by our use cases if they need something from the outside (database access, for instance). This interface is designed to fit the needs of the use cases, but it\u0026rsquo;s implemented by an outside component called an output or \u0026ldquo;driven\u0026rdquo; adapter. 
If you\u0026rsquo;re familiar with the SOLID principles, this is an application of the Dependency Inversion Principle (the \u0026ldquo;D\u0026rdquo; in SOLID), because we\u0026rsquo;re inverting the dependency from the use cases to the output adapter using an interface.\nWith input and output ports in place, we have very distinct places where data enters and leaves our system, making it easy to reason about the architecture.\nAdapters The adapters form the outer layer of the hexagonal architecture. They are not part of the core but interact with it.\nInput adapters or \u0026ldquo;driving\u0026rdquo; adapters call the input ports to get something done. An input adapter could be a web interface, for instance. When a user clicks a button in a browser, the web adapter calls a certain input port to call the corresponding use case.\nOutput adapters or \u0026ldquo;driven\u0026rdquo; adapters are called by our use cases and might, for instance, provide data from a database. An output adapter implements a set of output port interfaces. Note that the interfaces are dictated by the use cases and not the other way around.\nThe adapters make it easy to exchange a certain layer of the application. If the application should be usable from a fat client additionally to the web, we add a fat client input adapter. If the application needs a different database, we add a new persistence adapter implementing the same output port interfaces as the old one.\nShow Me Some Code! After the brief introduction to the hexagonal architecture style above, let\u0026rsquo;s finally have a look at some code. 
Translating the concepts of an architecture style into code is always subject to interpretation and flavor, so please don\u0026rsquo;t take the following code examples as given, but instead as inspiration to creating your own style.\nThe code examples are all from my \u0026ldquo;BuckPal\u0026rdquo; example application on GitHub and revolve around the use case of transferring money from one account to another. Some code snippets are slightly modified for the purpose of this blog post, so have a look at the repo for the original code.\nBuilding a Domain Object We start by building a domain object that serves our use case. We create an Account class that manages withdrawals and deposits to an account:\n@AllArgsConstructor(access = AccessLevel.PRIVATE) public class Account { @Getter private final AccountId id; @Getter private final Money baselineBalance; @Getter private final ActivityWindow activityWindow; public static Account account( AccountId accountId, Money baselineBalance, ActivityWindow activityWindow) { return new Account(accountId, baselineBalance, activityWindow); } public Optional\u0026lt;AccountId\u0026gt; getId(){ return Optional.ofNullable(this.id); } public Money calculateBalance() { return Money.add( this.baselineBalance, this.activityWindow.calculateBalance(this.id)); } public boolean withdraw(Money money, AccountId targetAccountId) { if (!mayWithdraw(money)) { return false; } Activity withdrawal = new Activity( this.id, this.id, targetAccountId, LocalDateTime.now(), money); this.activityWindow.addActivity(withdrawal); return true; } private boolean mayWithdraw(Money money) { return Money.add( this.calculateBalance(), money.negate()) .isPositiveOrZero(); } public boolean deposit(Money money, AccountId sourceAccountId) { Activity deposit = new Activity( this.id, sourceAccountId, this.id, LocalDateTime.now(), money); this.activityWindow.addActivity(deposit); return true; } @Value public static class AccountId { private Long value; } } An Account can 
have many associated Activitys that each represents a withdrawal or a deposit to that account. Since we don\u0026rsquo;t always want to load all activities for a given account, we limit it to a certain ActivityWindow. To still be able to calculate the total balance of the account, the Account class has the baselineBalance attribute containing the balance of the account at the start time of the activity window.\nAs you can see in the code above, we build our domain objects completely free of dependencies to the other layers of our architecture. We\u0026rsquo;re free to model the code how we see fit, in this case creating a \u0026ldquo;rich\u0026rdquo; behavior that is very close to the state of the model to make it easier to understand.\nWe can use external libraries in our domain model if we choose to, but those dependencies should be relatively stable to prevent forced changes to our code. In the case above, we included Lombok annotations, for instance.\nThe Account class now allows us to withdraw and deposit money to a single account, but we want to transfer money between two accounts. 
So, we create a use case class that orchestrates this for us.\nBuilding an Input Port Before we actually implement the use case, however, we create the external API to that use case, which will become an input port in our hexagonal architecture:\npublic interface SendMoneyUseCase { boolean sendMoney(SendMoneyCommand command); @Value @EqualsAndHashCode(callSuper = false) class SendMoneyCommand extends SelfValidating\u0026lt;SendMoneyCommand\u0026gt; { @NotNull private final AccountId sourceAccountId; @NotNull private final AccountId targetAccountId; @NotNull private final Money money; public SendMoneyCommand( AccountId sourceAccountId, AccountId targetAccountId, Money money) { this.sourceAccountId = sourceAccountId; this.targetAccountId = targetAccountId; this.money = money; this.validateSelf(); } } } By calling sendMoney(), an adapter outside of our application core can now invoke this use case.\nWe aggregated all the parameters we need into the SendMoneyCommand value object. This allows us to do the input validation in the constructor of the value object. In the example above we even used the Bean Validation annotation @NotNull, which is validated in the validateSelf() method. 
This way the actual use case code is not polluted with noisy validation code.\nNow we need an implementation of this interface.\nBuilding a Use Case and Output Ports In the use case implementation, we use our domain model to make a withdrawal from the source account and a deposit to the target account:\n@RequiredArgsConstructor @Component @Transactional public class SendMoneyService implements SendMoneyUseCase { private final LoadAccountPort loadAccountPort; private final AccountLock accountLock; private final UpdateAccountStatePort updateAccountStatePort; @Override public boolean sendMoney(SendMoneyCommand command) { LocalDateTime baselineDate = LocalDateTime.now().minusDays(10); Account sourceAccount = loadAccountPort.loadAccount( command.getSourceAccountId(), baselineDate); Account targetAccount = loadAccountPort.loadAccount( command.getTargetAccountId(), baselineDate); AccountId sourceAccountId = command.getSourceAccountId(); AccountId targetAccountId = command.getTargetAccountId(); accountLock.lockAccount(sourceAccountId); if (!sourceAccount.withdraw(command.getMoney(), targetAccountId)) { accountLock.releaseAccount(sourceAccountId); return false; } accountLock.lockAccount(targetAccountId); if (!targetAccount.deposit(command.getMoney(), sourceAccountId)) { accountLock.releaseAccount(sourceAccountId); accountLock.releaseAccount(targetAccountId); return false; } updateAccountStatePort.updateActivities(sourceAccount); updateAccountStatePort.updateActivities(targetAccount); accountLock.releaseAccount(sourceAccountId); accountLock.releaseAccount(targetAccountId); return true; } } Basically, the use case implementation loads the source and target account from the database, locks the accounts so that no other transactions can take place at the same time, makes the withdrawal and deposit, and finally writes the new state of the accounts back to the database.\nAlso, by using @Component, we make this service a Spring bean to be injected into any components that need access to the SendMoneyUseCase input port without having a dependency on the actual implementation.\nFor loading and 
storing the accounts from and to the database, the implementation depends on the output ports LoadAccountPort and UpdateAccountStatePort, which are interfaces that we will later implement within our persistence adapter.\nThe shape of the output port interfaces is dictated by the use case. While writing the use case we may find that we need to load certain data from the database, so we create an output port interface for it. Those ports may be re-used in other use cases, of course. In our case, the output ports look like this:\npublic interface LoadAccountPort { Account loadAccount(AccountId accountId, LocalDateTime baselineDate); } public interface UpdateAccountStatePort { void updateActivities(Account account); } Building a Web Adapter With the domain model, use cases, and input and output ports, we have now completed the core of our application (i.e. everything within the hexagon). This core doesn\u0026rsquo;t help us, though, if we don\u0026rsquo;t connect it with the outside world. Hence, we build an adapter that exposes our application core via a REST API:\n@RestController @RequiredArgsConstructor public class SendMoneyController { private final SendMoneyUseCase sendMoneyUseCase; @PostMapping(path = \u0026#34;/accounts/send/{sourceAccountId}/{targetAccountId}/{amount}\u0026#34;) void sendMoney( @PathVariable(\u0026#34;sourceAccountId\u0026#34;) Long sourceAccountId, @PathVariable(\u0026#34;targetAccountId\u0026#34;) Long targetAccountId, @PathVariable(\u0026#34;amount\u0026#34;) Long amount) { SendMoneyCommand command = new SendMoneyCommand( new AccountId(sourceAccountId), new AccountId(targetAccountId), Money.of(amount)); sendMoneyUseCase.sendMoney(command); } } If you\u0026rsquo;re familiar with Spring MVC, you\u0026rsquo;ll find that this is a pretty boring web controller. It simply reads the needed parameters from the request path, puts them into a SendMoneyCommand and invokes the use case. 
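To make the dependency direction tangible, here is a minimal, self-contained sketch (names simplified from the article\u0026rsquo;s BuckPal example, and the business logic stubbed out) of a second \u0026ldquo;driving\u0026rdquo; adapter next to the web controller, a command-line adapter that calls the very same input port:

```java
// Both adapters depend only on the input port interface,
// never on the concrete service class inside the hexagon.

interface SendMoneyPort {                            // input port (simplified)
    boolean sendMoney(long sourceId, long targetId, long amount);
}

class SendMoneyServiceStub implements SendMoneyPort { // use case implementation (stubbed)
    @Override
    public boolean sendMoney(long sourceId, long targetId, long amount) {
        return amount > 0;                            // stand-in for the real business rules
    }
}

// A second driving adapter: a CLI next to the web controller.
class SendMoneyCliAdapter {
    private final SendMoneyPort useCase;              // injected port, not the service

    SendMoneyCliAdapter(SendMoneyPort useCase) {
        this.useCase = useCase;
    }

    boolean run(String[] args) {
        return useCase.sendMoney(
                Long.parseLong(args[0]),
                Long.parseLong(args[1]),
                Long.parseLong(args[2]));
    }
}

public class HexagonSketch {
    public static void main(String[] args) {
        SendMoneyPort useCase = new SendMoneyServiceStub();
        boolean ok = new SendMoneyCliAdapter(useCase)
                .run(new String[]{"1", "2", "100"});
        System.out.println(ok ? "transfer accepted" : "transfer rejected");
    }
}
```

Swapping the CLI for the REST controller (or adding both) requires no change inside the hexagon, which is exactly the point of the port interfaces.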
In a more complex scenario, the web controller may also check authentication and authorization and do more sophisticated mapping of JSON input, for example.\nThe above controller exposes our use case to the world by mapping HTTP requests to the use case\u0026rsquo;s input port. Let\u0026rsquo;s now see how we can connect our application to a database by connecting the output ports.\nBuilding a Persistence Adapter While an input port is implemented by a use case service, an output port is implemented by a persistence adapter. Say we use Spring Data JPA as the tool of choice for managing persistence in our codebase. A persistence adapter implementing the output ports LoadAccountPort and UpdateAccountStatePort might then look like this:\n@RequiredArgsConstructor @Component class AccountPersistenceAdapter implements LoadAccountPort, UpdateAccountStatePort { private final AccountRepository accountRepository; private final ActivityRepository activityRepository; private final AccountMapper accountMapper; @Override public Account loadAccount( AccountId accountId, LocalDateTime baselineDate) { AccountJpaEntity account = accountRepository.findById(accountId.getValue()) .orElseThrow(EntityNotFoundException::new); List\u0026lt;ActivityJpaEntity\u0026gt; activities = activityRepository.findByOwnerSince( accountId.getValue(), baselineDate); Long withdrawalBalance = orZero(activityRepository .getWithdrawalBalanceUntil( accountId.getValue(), baselineDate)); Long depositBalance = orZero(activityRepository .getDepositBalanceUntil( accountId.getValue(), baselineDate)); return accountMapper.mapToDomainEntity( account, activities, withdrawalBalance, depositBalance); } private Long orZero(Long value){ return value == null ? 
0L : value; } @Override public void updateActivities(Account account) { for (Activity activity : account.getActivityWindow().getActivities()) { if (activity.getId() == null) { activityRepository.save(accountMapper.mapToJpaEntity(activity)); } } } } The adapter implements the loadAccount() and updateActivities() methods required by the implemented output ports. It uses Spring Data repositories to load data from and save data to the database and an AccountMapper to map Account domain objects into AccountJpaEntity objects which represent an account within the database.\nAgain, we use @Component to make this a Spring bean that can be injected into the use case service above.\nIs it Worth the Effort? People often ask themselves whether an architecture like this is worth the effort (I include myself here). After all, we have to create port interfaces and we have to map between multiple representations of the domain model. There may be a domain model representation within the web adapter and another one within the persistence adapter.\nSo, is it worth the effort?\nAs a professional consultant my answer is of course \u0026ldquo;it depends\u0026rdquo;.\nIf we\u0026rsquo;re building a CRUD application that simply stores and retrieves data, an architecture like this is probably overhead. If we\u0026rsquo;re building an application with rich business rules that can be expressed in a rich domain model that combines state with behavior, then this architecture really shines because it puts the domain model in the center of things.\nDive Deeper The above only gives an idea of what a hexagonal architecture might look like in real code. There are other ways of doing it, so feel free to experiment and find the way that best fits your needs. Also, the web and persistence adapters are just examples of adapters to the outside. 
There may be adapters to other third party systems or other user-facing frontends.\nIf you want to dive deeper into this topic, have a look at my book which goes into much more detail and also discusses things like testing, mapping strategies, and shortcuts.\n","date":"November 3, 2019","image":"https://reflectoring.io/images/stock/0054-bee-1200x628-branded_hu178224517b326c40da4b12810c856ac9_134300_650x0_resize_q90_box.jpg","permalink":"/spring-hexagonal/","title":"Hexagonal Architecture with Java and Spring"},{"categories":["Java","Software Craft"],"contents":"In short, no. Feel free to jump right ahead to the section on bad practices. If you want to read a bit more on the why and how of immutables, have a look at the rest of this article.\nImmutable objects are a way to create safer software that is easier to maintain. Why is that? And what should we do and what not when implementing them? This article provides answers.\nIf you\u0026rsquo;re interested in creating immutable objects in Java, also have a look at the article about the Immutables Java library.\n Example Code This article is accompanied by a working code example on GitHub. What\u0026rsquo;s an Immutable? The definition of an immutable object is rather short:\n An object whose state cannot be changed after construction is called an immutable object.\n However clear this definition is, there are still enough questions to write a 2000+-word article about immutables.\nIn this article, we\u0026rsquo;ll explore why immutable objects are a good idea, how to (and how not to) implement them, and finally discuss some use cases in which they shine.\nWhy Should I Make an Object Immutable? It\u0026rsquo;s good to know what an immutable object is, but why should we use them? Here is a (most certainly incomplete) list of reasons why immutable objects are a good idea. 
Let me know in the comments if you find more reasons.\nYou Know What to Expect from an Immutable Since the state of an immutable cannot change, we know what to expect from it. If we follow some of the best practices below, we know that the state of the object is valid throughout the object\u0026rsquo;s lifetime.\nNowhere in the code can the state be changed to potentially introduce inconsistencies that may lead to runtime errors.\nAn Immutable Is a Gate Keeper for Valid State If implemented correctly, an immutable object validates the state it is constructed with and only lets itself be instantiated if the state is valid.\nThis means that no one can create an instance of an immutable in an invalid state. This goes back to the first reason: we can not only expect the immutable object to have the same state through its lifetime, but also a valid state.\nNo more null-checks or other validations strewn across the codebase. All those validations take place within the immutable object.\nCompilers Love Immutables Because immutables are so predictable, compilers love them.\nSince immutable fields usually use the final keyword, compilers can tell us when such a field has not been initialized.\nAnd since the whole state of an immutable object has to be passed into the constructor, the compiler can tell us when we forget to pass a certain field. This is especially handy when we\u0026rsquo;re adding a field to an existing immutable object. 
The compiler will point out all the places where we have to add that new field in the client code.\nBecause compilers love immutables, we should love them, too.\nImmutable Best Practices Let\u0026rsquo;s have a look at how to implement an immutable.\nA Basic Immutable A very basic immutable class looks like this:\nclass User { private final Long id; private final String name; User(Long id, String name) { this.id = id; this.name = name; } } The main features are that the fields are final, telling the compiler that their values must not change once initialized and that all field values are passed into the constructor.\nUse Lombok\u0026rsquo;s @RequiredArgsConstructor Instead of writing the constructor by hand, we can use Lombok to generate the constructor for us:\n@RequiredArgsConstructor class User { private final Long id; private final String name; } @RequiredArgsConstructor generates a constructor that takes values for all final fields as parameters.\nNote that if we change the order of the fields, Lombok will automatically change the order of the parameters. This is the price to pay for automatic code generation.\nA Factory Method for Each Valid Combination of Fields An immutable object may have fields that are optional so that their value is null. Passing null into a constructor is a code smell, however, because we assume knowledge of the inner workings of the immutable. 
Instead, the immutable should provide a factory method for each valid combination of fields:\n@RequiredArgsConstructor(access = AccessLevel.PRIVATE) class User { private final Long id; private final String name; static User existingUser(Long id, String name){ return new User(id, name); } static User newUser(String name){ return new User(null, name); } } The User class may have an empty ID because we somehow have to instantiate users that have not been saved to the database yet.\nInstead of providing a single constructor into which we would have to pass a null ID, we have created a static factory method to which we only have to pass the name. Internally, the immutable then passes a null ID to the private constructor.\nWe can give the factory methods names like newUser and existingUser to make their intent clear.\nMake Optional Fields Obvious In the User class from above, the ID is an optional field and may be null. We don\u0026rsquo;t want every client of the User class to fall prey to potential NullPointerExceptions, so we can make the getter return an Optional:\n@RequiredArgsConstructor(access = AccessLevel.PRIVATE) class User { private final Long id; private final String name; static User existingUser(Long id, String name){ return new User(id, name); } static User newUser(String name){ return new User(null, name); } Optional\u0026lt;Long\u0026gt; getId() { return Optional.ofNullable(id); } } Any client calling getId() will immediately know that the value might be empty and will act accordingly.\nDon't Use Optional as a Field or Argument Type  Instead of using Long as the field type for the user ID, we could have used Optional\u0026lt;Long\u0026gt;, right? This would make it obvious at a glance at the field declarations that the ID may be empty.  This is bad practice, however, since an Optional may also be null. 
This would mean that each time we work with the value of the ID field within the User class, we would have to first check if the Optional is null and then check if it has a value or is empty.  The same argument holds for passing an Optional as a parameter into a method.  Self-Validate To only allow valid state, an immutable may check within its constructor(s) if the passed-in values are valid according to the business rules of the class:\nclass User { private final Long id; private final String name; User(Long id, String name) { if(id \u0026lt; 0) { throw new IllegalArgumentException(\u0026#34;id must be \u0026gt;= 0!\u0026#34;); } if(name == null || \u0026#34;\u0026#34;.equals(name)) { throw new IllegalArgumentException(\u0026#34;name must not be null or empty!\u0026#34;); } this.id = id; this.name = name; } // additional methods omitted ... } This way we can always be certain that we have an object with a valid state in our hands.\nAlso, the validation is very close to the validated fields (as opposed to the validation code being in some service at the other end of the codebase), making it easy to find and maintain together with the fields.\nSelf-Validate with Bean Validation Instead of validating our immutable by hand as we did above, we can also take advantage of the declarative approach of the Bean Validation library:\nclass User extends SelfValidating\u0026lt;User\u0026gt;{ @Min(0) private final Long id; @NotEmpty private final String name; User(Long id, String name) { this.id = id; this.name = name; this.validateSelf(); } } We simply add Bean Validation annotations to mark validation rules and then call validateSelf() as the last statement in the constructor.\nThe validateSelf() method is implemented in the parent class SelfValidating and might look like this:\npublic abstract class SelfValidating\u0026lt;T\u0026gt; { private Validator validator; public SelfValidating() { ValidatorFactory factory = Validation.buildDefaultValidatorFactory(); validator = 
factory.getValidator(); } /** * Evaluates all Bean Validations on the attributes of this * instance. */ protected void validateSelf() { Set\u0026lt;ConstraintViolation\u0026lt;T\u0026gt;\u0026gt; violations = validator.validate((T) this); if (!violations.isEmpty()) { throw new ConstraintViolationException(violations); } } } If you\u0026rsquo;re not familiar with all the ins and outs of Bean Validation, have a look at my articles about Bean Validation and validation anti-patterns.\nImmutable Bad Practices Some patterns don\u0026rsquo;t work well with immutables. Let\u0026rsquo;s discuss some of them.\nDon\u0026rsquo;t Use Builders A builder is a class whose goal is to make object instantiation easy. Instead of calling a constructor which takes all field values as arguments, we call fluent builder methods to set the state of an object step-by-step:\nUser user = User.builder() .id(42L) .build(); This is especially helpful if we have a lot of fields, since it\u0026rsquo;s more readable than a call to a constructor with many parameters.\nUsing a builder to create an immutable object instance is not a good idea, however. Look at the code above: we called the build() method after only initializing the id field. The name field is still empty.\nIf the User class also requires a value for the name field, the builder will probably simply pass null into the constructor and object instantiation will fail at runtime. If we have not implemented any kind of validation, object creation won\u0026rsquo;t fail at all and we have an immutable with an unexpected null value.\nWe have just tricked the compiler into believing that we\u0026rsquo;re creating a valid object.
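A minimal, hand-rolled sketch of this trap (the builder below imitates what a generated builder would look like; it is not the article's code): the call chain compiles, but the resulting object silently carries a null name.

```java
public class BuilderPitfall {

    static class User {
        private final Long id;
        private final String name;

        User(Long id, String name) {
            this.id = id;
            this.name = name; // no validation: null slips through
        }

        String getName() {
            return name;
        }

        static Builder builder() {
            return new Builder();
        }

        // hand-rolled builder, similar to what Lombok's @Builder would generate
        static class Builder {
            private Long id;
            private String name;

            Builder id(Long id) {
                this.id = id;
                return this;
            }

            Builder name(String name) {
                this.name = name;
                return this;
            }

            User build() {
                // the compiler cannot tell that name was never set
                return new User(id, name);
            }
        }
    }

    public static void main(String[] args) {
        User user = User.builder().id(42L).build(); // compiles, but ...
        System.out.println(user.getName()); // prints: null
    }
}
```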
Had we used the factory methods from above, the compiler would know which combinations of fields are valid and which are not at compile time.\nDon\u0026rsquo;t Use Withers If you search the web for immutables, you may come across the pattern of using so-called \u0026ldquo;wither\u0026rdquo; methods to \u0026ldquo;change the state\u0026rdquo; of an immutable:\n@RequiredArgsConstructor class User { private final Long id; private final String name; User withId(Long id) { return new User(id, this.name); } User withName(String name) { return new User(this.id, name); } } Wither methods are similar to setters, except that they usually start with the with... prefix.\nThe class in the code above is still technically immutable since its fields are final and the wither methods each return a new object instead of manipulating the state of the current object.\nThis pattern works against the idea of an immutable, though. We\u0026rsquo;re using an immutable as if it were mutable. If we see wither methods like this used on an immutable, we should check if the class should rather be mutable because that\u0026rsquo;s what the code implies.\nThere may be valid use cases for immutables with wither methods, but I would at least be skeptical if I found an immutable using this pattern.\nDon\u0026rsquo;t Use Setters It\u0026rsquo;s obvious that an immutable shouldn\u0026rsquo;t have a setter, because its fields are final and cannot be changed. However, similar to withers described above, we might implement setters so that they return a new object:\n@RequiredArgsConstructor class User { private final Long id; private final String name; User setId(Long id) { return new User(id, this.name); } User setName(String name) { return new User(this.id, name); } } Don\u0026rsquo;t do this. At first glance, the class looks like it\u0026rsquo;s mutable. 
And it might be used like a mutable class.\nIf you find yourself using setter methods like this often, the class should probably be mutable after all.\nDon\u0026rsquo;t Provide Getters by Default Often, it\u0026rsquo;s no more than a reflex to have the IDE (or Lombok) create getters and setters for us. Setters are out of the question for an immutable object, but what about getters?\nLet\u0026rsquo;s look at a different version of our User class:\n@Getter @RequiredArgsConstructor class User { private final Long id; private final List\u0026lt;String\u0026gt; roles; } Instead of a name, the user now has a list of roles. We have also added Lombok\u0026rsquo;s @Getter annotation to create getters for us.\nNow, we work with this class:\nUser user = new User(42L, Arrays.asList(\u0026#34;role1\u0026#34;, \u0026#34;role2\u0026#34;)); user.getRoles().add(\u0026#34;admin\u0026#34;); Even though we did not provide setters and made all fields final, this User class is not immutable. We can simply access the list of roles via its getter and change its state.\nSo, we should not provide getters by default. If we do provide getters, we should make sure that the type of the field is immutable (like Long or String) or that we return a copy of the field value instead of a reference to it.\nFor this reason, we should use Lombok\u0026rsquo;s @Value annotation (which is intended to be used for creating immutable value objects) with care because it creates getters for all fields by default.\nUse Cases for Immutables Now that we\u0026rsquo;ve talked a lot about why and how to build immutables, let\u0026rsquo;s discuss some actual use cases where they shine.\nConcurrency If we\u0026rsquo;re working with concurrent threads that access the same objects, it\u0026rsquo;s best if those objects are immutable.
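A small sketch of this benefit (the User class and the thread counts are made up for illustration): a hundred tasks read the same immutable object concurrently without any locks, and none of them can ever observe a half-updated state because there is no state to change.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ImmutableSharing {

    // immutable: final fields, no setters
    static class User {
        private final Long id;
        private final String name;

        User(Long id, String name) {
            this.id = id;
            this.name = name;
        }

        String greeting() {
            return "Hello, " + name + " (#" + id + ")";
        }
    }

    public static void main(String[] args) throws InterruptedException {
        User shared = new User(42L, "Alice");
        AtomicBoolean consistent = new AtomicBoolean(true);

        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100; i++) {
            pool.submit(() -> {
                // no synchronization needed: the shared object's state can never change
                if (!shared.greeting().equals("Hello, Alice (#42)")) {
                    consistent.set(false);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(consistent.get()); // prints: true
    }
}
```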
This way, we cannot introduce any bugs that arise from accidentally modifying the state of an object in one of the threads.\nIn concurrency code, we should make objects mutable only if we have to.\nValue Objects Value objects are objects that represent a certain value and not a certain entity. Thus, they have a value (which may consist of more than one field) and no identity.\nExamples of value objects are:\n Java\u0026rsquo;s wrappers of primitives like Long and Integer a Money object representing a certain amount of money a Weight object representing a certain weight a Name object representing the name of a person a UserId object representing a certain numerical User-ID a TaxIdentificationNumber object representing a \u0026hellip; wait for it \u0026hellip; tax identification number \u0026hellip;  Since value objects represent a specific value, that value must not change. So, they must be immutable.\nImagine passing a Long object with value 42 to a third-party method only to have that method change the value to 13 \u0026hellip; scary, isn\u0026rsquo;t it? Can\u0026rsquo;t happen with an immutable.\nData Transfer Objects Another use case for immutables is when we need to transport data between systems or components that do not share the same data model. In this case, we can create a shared Data Transfer Object (DTO) that is created from the data of the source component and then passed to the target component.\nAlthough DTOs don\u0026rsquo;t necessarily have to be immutable, it helps to keep the state of a DTO in a single place instead of scattered over the codebase.\nImagine we have a large DTO with tens of fields which are set and re-set over hundreds of lines of code, depending on certain conditions, before the DTO is sent over the line to a remote system (I\u0026rsquo;ve seen it happen!).
In case of an error, we\u0026rsquo;ll have a hard time finding out where the value of a specific field came from.\nIf we make the DTO immutable (or close to immutable) instead, with dedicated factory methods for valid state combinations, there are only a few entry points for the state of the object, easing debugging and maintenance considerably.\nDomain Objects Even domain objects can benefit from the concepts of immutability.\nLet\u0026rsquo;s define a domain object as an object with an identity that is loaded from the database, manipulated for a certain use case, and then stored back into the database, usually within a database transaction. There are certainly more general and complete definitions of a domain object out there, but for the sake of discussion, this should do.\nA domain object is most certainly not immutable, but we will benefit from making it as immutable as possible.\nAs an example, let\u0026rsquo;s look at this Account class from my clean architecture example application \u0026ldquo;BuckPal\u0026rdquo;:\n@AllArgsConstructor(access = AccessLevel.PRIVATE) public class Account { private final AccountId id; private final Money baselineBalance; @Getter private final ActivityWindow activityWindow; public static Account withoutId( Money baselineBalance, ActivityWindow activityWindow) { return new Account(null, baselineBalance, activityWindow); } public static Account withId( AccountId accountId, Money baselineBalance, ActivityWindow activityWindow) { return new Account(accountId, baselineBalance, activityWindow); } public Optional\u0026lt;AccountId\u0026gt; getId(){ return Optional.ofNullable(this.id); } public Money calculateBalance() { // calculate balance from baselineBalance and ActivityWindow  } public boolean withdraw(Money money, AccountId targetAccountId) { // add a negative Activity to the ActivityWindow  } public boolean deposit(Money money, AccountId sourceAccountId) { // add a positive Activity to the ActivityWindow  } } An Account can 
collect an unbounded number of Activitys over the years, which can either be positive (deposits) or negative (withdrawals). For the use case of depositing or withdrawing money to/from the account, we\u0026rsquo;re not loading the complete list of activities (which might be too large for processing), but instead only load the latest 10 or so activities into an ActivityWindow. To still be able to calculate the total account balance, the account has the field baselineBalance with the balance the account had just before the oldest activity in the window.\nAll fields are final, so an Account seems to be immutable at first glance. The deposit() and withdraw() methods manipulate the state of the associated ActivityWindow, however, so it\u0026rsquo;s not immutable after all. These methods are better than standard getters and setters, though, because they provide very targeted entry points for manipulation that may even contain business rules that would otherwise be scattered over some services in the codebase.\nIn short, we\u0026rsquo;re making as many of the domain object\u0026rsquo;s fields as possible immutable and providing focused manipulation methods if we cannot get around it. An architecture style that supports this kind of domain object is the Hexagonal Architecture explained hands-on in my book about clean architecture.\n\u0026ldquo;Stateless\u0026rdquo; Service Objects Even so-called \u0026ldquo;stateless\u0026rdquo; service objects usually have some kind of state. Usually, a service has dependencies to components that provide database access for loading and updating data:\n@RequiredArgsConstructor @Service @Transactional public class SendMoneyService { private final LoadAccountPort loadAccountPort; private final UpdateAccountStatePort updateAccountStatePort; // stateless methods omitted } In this service, the objects in loadAccountPort and updateAccountStatePort provide database access.
These fields don\u0026rsquo;t make the service \u0026ldquo;stateful\u0026rdquo;, though, because their value doesn\u0026rsquo;t usually change during the runtime of the application.\nIf the values don\u0026rsquo;t change, why not make them immutable from the start? We can simply make the fields final and provide a matching constructor (in this case with Lombok\u0026rsquo;s @RequiredArgsConstructor). What we get from this is the compiler complaining about missing dependencies at compile time instead of the JRE complaining later at runtime.\nConclusion Every time we add a field to a class we should make it immutable (i.e. final) by default. If there is a reason to make it mutable, that\u0026rsquo;s fine, but unnecessary mutability increases the chance of introducing bugs and maintainability issues by unintentionally changing state.\nWhat\u0026rsquo;s your take on immutables?\nThe example code is available on GitHub.\n","date":"September 25, 2019","image":"https://reflectoring.io/images/stock/0053-rock-wave-1200x628-branded_hu3a5ac648bdd0aff4921db546c574641b_205532_650x0_resize_q90_box.jpg","permalink":"/java-immutables/","title":"Immutables in Java - Are Setters Allowed?"},{"categories":["Spring Boot","Java"],"contents":"Mockito is a very popular library to support testing. It allows us to replace real objects with \u0026ldquo;mocks\u0026rdquo;, i.e. with objects that are not the real thing and whose behavior we can control within our test.\nThis article gives a quick intro to the how and why of Mockito and Spring Boot\u0026rsquo;s integration with it.\n Example Code This article is accompanied by a working code example on GitHub. The System Under Test Before we dive into the details of mocking, let\u0026rsquo;s take a look at the application we\u0026rsquo;re going to test. 
We\u0026rsquo;ll use some code based on the payment example application \u0026ldquo;buckpal\u0026rdquo; of my book.\nThe system under test for this article will be a Spring REST controller that accepts requests to transfer money from one account to another:\n@RestController @RequiredArgsConstructor public class SendMoneyController { private final SendMoneyUseCase sendMoneyUseCase; @PostMapping(path = \u0026#34;/sendMoney/{sourceAccountId}/{targetAccountId}/{amount}\u0026#34;) ResponseEntity sendMoney( @PathVariable(\u0026#34;sourceAccountId\u0026#34;) Long sourceAccountId, @PathVariable(\u0026#34;targetAccountId\u0026#34;) Long targetAccountId, @PathVariable(\u0026#34;amount\u0026#34;) Integer amount) { SendMoneyCommand command = new SendMoneyCommand( sourceAccountId, targetAccountId, amount); boolean success = sendMoneyUseCase.sendMoney(command); if (success) { return ResponseEntity .ok() .build(); } else { return ResponseEntity .status(HttpStatus.INTERNAL_SERVER_ERROR) .build(); } } } The controller passes the input on to an instance of SendMoneyUseCase which is an interface with a single method:\npublic interface SendMoneyUseCase { boolean sendMoney(SendMoneyCommand command); @Value @Getter @EqualsAndHashCode(callSuper = false) class SendMoneyCommand { private final Long sourceAccountId; private final Long targetAccountId; private final Integer money; public SendMoneyCommand( Long sourceAccountId, Long targetAccountId, Integer money) { this.sourceAccountId = sourceAccountId; this.targetAccountId = targetAccountId; this.money = money; } } } Finally, we have a dummy service implementing the SendMoneyUseCase interface:\n@Slf4j @Component public class SendMoneyService implements SendMoneyUseCase { public SendMoneyService() { log.info(\u0026#34;\u0026gt;\u0026gt;\u0026gt; constructing SendMoneyService! 
\u0026lt;\u0026lt;\u0026lt;\u0026#34;); } @Override public boolean sendMoney(SendMoneyCommand command) { log.info(\u0026#34;sending money!\u0026#34;); return false; } } Imagine that there is some wildly complicated business logic going on in this class in place of the logging statements.\nFor most of this article, we\u0026rsquo;re not interested in the actual implementation of the SendMoneyUseCase interface. After all, we want to mock it away in our test of the web controller.\nWhy Mock? Why should we use a mock instead of a real service object in a test?\nImagine the service implementation above has a dependency to a database or some other third-party system. We don\u0026rsquo;t want to have our test run against the database. If the database isn\u0026rsquo;t available, the test will fail even though our system under test might be completely bug-free. The more dependencies we add in a test, the more reasons a test has to fail. And most of those reasons will be the wrong ones. If we use a mock instead, we can mock all those potential failures away.\nAside from reducing failures, mocking also reduces our tests' complexity and thus saves us some effort. It takes a lot of boilerplate code to set up a whole network of correctly-initialized objects to be used in a test. 
Using mocks, we only have to \u0026ldquo;instantiate\u0026rdquo; one mock instead of the whole rat-tail of objects the real object might need.\nIn summary, we want to move from a potentially complex, slow, and flaky integration test towards a simple, fast, and reliable unit test.\nSo, in a test of our SendMoneyController above, instead of a real instance of SendMoneyUseCase, we want to use a mock with the same interface whose behavior we can control as needed in the test.\nMocking with Mockito (and without Spring) As a mocking framework, we\u0026rsquo;ll use Mockito, since it\u0026rsquo;s well-rounded, well-established, and well-integrated into Spring Boot.\nBut the best kind of test doesn\u0026rsquo;t use Spring at all, so let\u0026rsquo;s first look at how to use Mockito in a plain unit test to mock away unwanted dependencies.\nPlain Mockito Test The plainest way to use Mockito is to simply instantiate a mock object using Mockito.mock() and then pass the resulting mock object into the class under test:\npublic class SendMoneyControllerPlainTest { private SendMoneyUseCase sendMoneyUseCase = Mockito.mock(SendMoneyUseCase.class); private SendMoneyController sendMoneyController = new SendMoneyController(sendMoneyUseCase); @Test void testSuccess() { // given  SendMoneyCommand command = new SendMoneyCommand(1L, 2L, 500); given(sendMoneyUseCase .sendMoney(eq(command))) .willReturn(true); // when  ResponseEntity response = sendMoneyController .sendMoney(1L, 2L, 500); // then  then(sendMoneyUseCase) .should() .sendMoney(eq(command)); assertThat(response.getStatusCode()) .isEqualTo(HttpStatus.OK); } } We create a mock instance of SendMoneyUseCase and pass this mock into the constructor of SendMoneyController.
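To see what Mockito does for us under the hood, here is a hand-rolled sketch without any framework (the trimmed-down interface and simplified controller below are illustrative stand-ins, not the article's actual classes): a test double is just another implementation of the interface that returns canned answers and records calls.

```java
public class HandRolledStubExample {

    // trimmed-down stand-in for the use case interface
    interface SendMoneyUseCase {
        boolean sendMoney(long sourceAccountId, long targetAccountId, int amount);
    }

    // a hand-rolled "mock": canned answer plus call recording
    static class SendMoneyStub implements SendMoneyUseCase {
        boolean cannedResult = true;
        int invocations = 0;

        @Override
        public boolean sendMoney(long sourceAccountId, long targetAccountId, int amount) {
            invocations++;
            return cannedResult;
        }
    }

    // simplified stand-in for the controller: it depends only on the interface
    static class SendMoneyController {
        private final SendMoneyUseCase useCase;

        SendMoneyController(SendMoneyUseCase useCase) {
            this.useCase = useCase;
        }

        int sendMoney(long source, long target, int amount) {
            return useCase.sendMoney(source, target, amount) ? 200 : 500;
        }
    }

    public static void main(String[] args) {
        SendMoneyStub stub = new SendMoneyStub();
        SendMoneyController controller = new SendMoneyController(stub);

        System.out.println(controller.sendMoney(1L, 2L, 500)); // prints: 200
        System.out.println(stub.invocations); // prints: 1 -- "verification" by hand
    }
}
```

Mockito generates this kind of double for us at runtime, with a much richer stubbing and verification API on top.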
The controller doesn\u0026rsquo;t know that it\u0026rsquo;s a mock and will treat it just like the real thing.\nIn the test itself, we can use Mockito\u0026rsquo;s given() to define the behavior we want the mock to have and then() to check if certain methods have been called as expected. You can find more on Mockito\u0026rsquo;s mocking and verification methods in the docs.\nWeb Controllers Should Be Integration-tested!  Don't do this at home! The code above is just an example of how to create mocks. Testing a Spring Web Controller with a unit test like this only covers a fraction of the potential errors that can happen in production. The unit test above verifies that a certain response code is returned, but it does not integrate with Spring to check if the input parameters are parsed correctly from an HTTP request, or if the controller listens to the correct path, or if exceptions are transformed into the expected HTTP response, and so on.  Web controllers should instead be tested in integration with Spring as discussed in my article about the @WebMvcTest annotation.  Using Mockito Annotations with JUnit Jupiter Mockito provides some handy annotations that reduce the manual work of creating mock instances and passing them into the object we\u0026rsquo;re about to test.\nWith JUnit Jupiter, we need to apply the MockitoExtension to our test:\n@ExtendWith(MockitoExtension.class) class SendMoneyControllerMockitoAnnotationsJUnitJupiterTest { @Mock private SendMoneyUseCase sendMoneyUseCase; @InjectMocks private SendMoneyController sendMoneyController; @Test void testSuccess() { ... } } We can then use the @Mock and @InjectMocks annotations on fields of the test.\nFields annotated with @Mock will then automatically be initialized with a mock instance of their type, just as if we had called Mockito.mock() by hand.\nMockito will then try to instantiate fields annotated with @InjectMocks by passing all mocks into a constructor.
Note that we need to provide such a constructor for Mockito to work reliably. If Mockito doesn\u0026rsquo;t find a constructor, it will try setter injection or field injection, but the cleanest way is still a constructor. You can read about the algorithm behind this in Mockito\u0026rsquo;s Javadoc.\nUsing Mockito Annotations with JUnit 4 With JUnit 4, it\u0026rsquo;s very similar, except that we need to use MockitoJUnitRunner instead of MockitoExtension:\n@RunWith(MockitoJUnitRunner.class) public class SendMoneyControllerMockitoAnnotationsJUnit4Test { @Mock private SendMoneyUseCase sendMoneyUseCase; @InjectMocks private SendMoneyController sendMoneyController; @Test public void testSuccess() { ... } } Mocking with Mockito and Spring Boot There are times when we have to rely on Spring Boot to set up an application context for us because it would be too much work to instantiate the whole network of classes manually.\nWe may not want to test the integration between all the beans in a certain test, however, so we need a way to replace certain beans within Spring\u0026rsquo;s application context with a mock. Spring Boot provides the @MockBean and @SpyBean annotations for this purpose.\nAdding a Mock Spring Bean with @MockBean A prime example for using mocks is using Spring Boot\u0026rsquo;s @WebMvcTest to create an application context that contains all the beans necessary for testing a Spring web controller:\n@WebMvcTest(controllers = SendMoneyController.class) class SendMoneyControllerWebMvcMockBeanTest { @Autowired private MockMvc mockMvc; @MockBean private SendMoneyUseCase sendMoneyUseCase; @Test void testSendMoney() { ... } } The application context created by @WebMvcTest will not pick up our SendMoneyService bean (which implements the SendMoneyUseCase interface), even though it is marked as a Spring bean with the @Component annotation. 
We have to provide a bean of type SendMoneyUseCase ourselves, otherwise, we\u0026rsquo;ll get an error like this:\nNo qualifying bean of type \u0026#39;io.reflectoring.mocking.SendMoneyUseCase\u0026#39; available: expected at least 1 bean which qualifies as autowire candidate. Instead of instantiating SendMoneyService ourselves or telling Spring to pick it up, potentially pulling in a rat-tail of other beans in the process, we can just add a mock implementation of SendMoneyUseCase to the application context.\nThis is easily done by using Spring Boot\u0026rsquo;s @MockBean annotation. The Spring Boot test support will then automatically create a Mockito mock of type SendMoneyUseCase and add it to the application context so that our controller can use it. In the test method, we can then use Mockito\u0026rsquo;s given() and when() methods just like above.\nThis way we can easily create a focused web controller test that instantiates only the objects it needs.\nReplacing a Spring Bean with @MockBean Instead of adding a new (mock) bean, we can use @MockBean similarly to replace a bean that already exists in the application context with a mock:\n@SpringBootTest @AutoConfigureMockMvc class SendMoneyControllerSpringBootMockBeanTest { @Autowired private MockMvc mockMvc; @MockBean private SendMoneyUseCase sendMoneyUseCase; @Test void testSendMoney() { ... } } Note that the test above uses @SpringBootTest instead of @WebMvcTest, meaning that the full application context of the Spring Boot application will be created for this test. This includes our SendMoneyService bean, as it is annotated with @Component and lies within the package structure of our application class.\nThe @MockBean annotation will cause Spring to look for an existing bean of type SendMoneyUseCase in the application context. 
If it exists, it will replace that bean with a Mockito mock.\nThe net result is the same: in our test, we can treat the sendMoneyUseCase object like a Mockito mock.\nThe difference is that the SendMoneyService bean will be instantiated when the initial application context is created before it\u0026rsquo;s replaced with the mock. If SendMoneyService did something in its constructor that requires a dependency to a database or third-party system that\u0026rsquo;s not available at test time, this wouldn\u0026rsquo;t work. Instead of using @SpringBootTest, we\u0026rsquo;d have to create a more focused application context and add the mock to the application context before the actual bean is instantiated.\nSpying on a Spring Bean with @SpyBean Mockito also allows us to spy on real objects. Instead of mocking away an object completely, Mockito creates a proxy around the real object and simply monitors which methods are being called so that we can later verify if a certain method has been called or not.\nSpring Boot provides the @SpyBean annotation for this purpose:\n@SpringBootTest @AutoConfigureMockMvc class SendMoneyControllerSpringBootSpyBeanTest { @Autowired private MockMvc mockMvc; @SpyBean private SendMoneyUseCase sendMoneyUseCase; @Test void testSendMoney() { ... } } @SpyBean works just like @MockBean. Instead of adding a bean to or replacing a bean in the application context, it simply wraps the bean in Mockito\u0026rsquo;s proxy. In the test, we can then use Mockito\u0026rsquo;s then() to verify method calls just as above.\nWhy Do My Spring Tests Take So Long? If we use @MockBean and @SpyBean a lot in our tests, running the tests will take a lot of time. This is because Spring Boot creates a new application context for each test, which can be an expensive operation depending on the size of the application context.\nConclusion Mockito makes it easy for us to mock away objects that we don\u0026rsquo;t want to test right now.
This allows us to reduce integration overhead in our tests and can even transform an integration test into a more focused unit test.\nSpring Boot makes it easy to use Mockito\u0026rsquo;s mocking features in Spring-supported integration tests by using the @MockBean and @SpyBean annotations.\nAs easy as these Spring Boot features are to include in our tests, we should be aware of the cost: each test may create a new application context, potentially increasing the runtime of our test suite noticeably.\nThe code examples are available on GitHub.\n","date":"September 18, 2019","image":"https://reflectoring.io/images/stock/0052-mock-1200x628-branded_hu6cd8324df61b792144dc37534f748771_62678_650x0_resize_q90_box.jpg","permalink":"/spring-boot-mock/","title":"Mocking with (and without) Spring Boot"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you want to know how Basecamp builds software with bets instead of estimation you are interested in an agile process beyond Scrum \u0026amp; Co. you want inspiration on how to \u0026ldquo;shape\u0026rdquo; work before it can be implemented by programmers and designers  Overview In his book \u0026ldquo;Shape Up\u0026rdquo;, Ryan Singer describes the workflow and set of techniques Basecamp has developed over the years to build their project management and collaboration software with the same name.\n\u0026ldquo;Shape Up\u0026rdquo; covers the process from \u0026ldquo;shaping\u0026rdquo; raw ideas into low-risk, time-boxed projects to finally implementing the solution in small teams within 6-week cycles. It also discusses how to fight scope creep and monitor progress during a cycle.\nWhile the term \u0026ldquo;agile\u0026rdquo; is not mentioned once throughout the text, I consider this workflow to be a welcome opinion on agile that goes beyond Scrum \u0026amp; Co.
and provides a sustainable way of building software.\nThe book is available online for free, to be read in a browser, or to be downloaded as a PDF.\nLikes and Dislikes As a software engineer, I welcome the fact that the book has been programmed rather than written. It\u0026rsquo;s nicely formatted for screen reading and keeps a bookmark where you have left off.\nIf you have done a couple of years of software development, you can relate very well to the problems of requirements engineering, estimation, and getting things done, for which \u0026ldquo;Shape Up\u0026rdquo; offers opinionated but logical solutions.\nI like that \u0026ldquo;Shape Up\u0026rdquo; uses a set of very rich analogies like \u0026ldquo;shaping\u0026rdquo; instead of \u0026ldquo;requirements engineering\u0026rdquo;, \u0026ldquo;appetite\u0026rdquo; instead of \u0026ldquo;allowed time frame\u0026rdquo;, or \u0026ldquo;betting on pitches\u0026rdquo; instead of \u0026ldquo;planning projects\u0026rdquo;. This makes it so much more interesting and easier to grasp.\nThe text is accompanied by hand-drawn figures which do a great job of explaining the concepts.\nKey Takeaways Here are my notes from reading the book, along with a map of some of the keywords that I have assembled while reading \u0026hellip;.\nIntroduction  first focus on your ability to ship, then on shipping the right thing in a nutshell, Shape Up is to first shape a project, then bet that it can be finished by a small, self-dependent team within six weeks  Principles of Shaping  wireframes are too concrete to shape work - they allow no creativity words are too abstract to shape work - they don\u0026rsquo;t describe well enough what should be built unshaped work is risky and unknown shaped work is rough, solved, and bounded shaping cannot be scheduled - keep it on a separate track from building so as not to let it delay the whole process shaping is done privately to give shapers the option to shelve things and get them back out later
Set Boundaries  shaping is influenced by an \u0026ldquo;appetite\u0026rdquo; defining how much time we\u0026rsquo;re willing to spend on a certain set of features an appetite starts with a number and ends with a design - the opposite of an estimate part of shaping is to set a time boundary on the work done by a small team of 1 designer and 1-2 programmers a small batch means a project can be finished by a team in 2 weeks a big batch means a project can be finished by a team in 6 weeks the timebox forces the team to constantly make decisions to meet the appetite  Find the Elements  breadboarding is a technique used in electrical engineering to do the wiring without a chassis breadboarding can be used in software development by sketching places (web pages) and affordances (buttons etc.) and connecting them to mark transitions fat marker sketches can be used for visual problems as they make it impossible to add too much detail both shaping techniques avoid adding a bias to the shaped work and give room to the designers and programmers who will implement it  Risks and Rabbit Holes  go through a use case in slow motion to find holes in the shaping explicitly mark features as \u0026ldquo;out of bounds\u0026rdquo; if they threaten the appetite invite a technical expert to find any time bombs in the shaped work  Write the Pitch  the goal of a pitch is to present it to deciders who may bet upon it include the problem, the appetite (time frame), the solution, rabbit holes, and no-gos the problem makes it possible to evaluate the solution the appetite prevents discussion about out-of-scope solutions the solution can be presented with sketches or breadboards  Bets, not Backlogs  backlogs encourage constant reviewing, grooming, and organizing - you feel like you\u0026rsquo;re behind all the time pitches are brought up by different people in different departments - everyone tracks the pitches they have interest in themselves there is no central backlog - important ideas will come
back  Bet Six Weeks  the common two-week sprint is too short for the overhead it brings a six-week cycle is the result of Basecamp experimenting over the years there\u0026rsquo;s a two-week \u0026ldquo;cooldown\u0026rdquo; phase after each cycle in which developers are free to follow up on work they\u0026rsquo;re invested in teams change from cycle to cycle and consist of a designer, 1-2 programmers, and a tester the \u0026ldquo;betting table\u0026rdquo; is a conference call with the highest stakeholders to decide which pitches make it into the next cycle the highest stakeholders must place the bets so that there is no higher authority to veto the bets and waste time and effort in the process there are no interruptions during a cycle - if something comes up, it can usually wait until the next cycle the \u0026ldquo;circuit breaker\u0026rdquo; rule says that projects do not get an extension if they are not finished within the cycle only one cycle is planned - a clean slate after each cycle avoids debts to carry around  Hand Over Responsibility  teams are assigned projects by the betting table teams need full autonomy to keep the big picture in mind and take responsibility for the whole thing a project is only done when it\u0026rsquo;s deployed the first days of a cycle can be silent - the teams need to orient themselves  Get One Piece Done  make progress in vertical slices rather than horizontal layers - have something to show and try out early it doesn\u0026rsquo;t have to be all or nothing - a simple UI may be enough to enable some backend work and vice versa prioritize work by their size (small things first) and their novelty (novel things first to reduce risks)  Map the Scopes  a \u0026ldquo;scope\u0026rdquo; is a vertical slice of a project - organized by feature and not by person or skill scopes provide a language to talk about the project where tasks are too granular scope mapping (grouping tasks into scopes) is done continuously during the project and not 
up-front - you cannot know the interdependencies in advance  Show Progress  work is like a hill: while going uphill you\u0026rsquo;re not yet certain about the unknowns; when finally going downhill you know what to do tasks can be visualized on a hill chart push the riskiest task uphill first push multiple tasks over the top of the hill before doing all the downhill work to reduce overall risks  Decide When to Stop  we have to live with the fact that shipping on time means shipping something imperfect compare the currently finished work to the baseline (what the user currently can do with the software) to decide when to stop don\u0026rsquo;t compare the currently finished work to an ideal \u0026ldquo;scope grows like grass\u0026rdquo; - so the teams need to have the authority to cut the grass  Move On  don\u0026rsquo;t commit to feature requests before they have been shaped  Conclusion I believe Basecamp when they say they\u0026rsquo;re using the \u0026ldquo;Shape Up\u0026rdquo; way of doing things successfully to create their software. After all, they have spent years tuning their process to come up with this.\nI also believe that - with some experimentation on parameters like cycle duration - \u0026ldquo;Shape Up\u0026rdquo; can be applied to other software development environments. 
Management must be 100% behind it, though - half-hearted management buy-in is, I\u0026rsquo;m afraid, the main reason for process changes to fail.\nEven if you don\u0026rsquo;t want to change your process you\u0026rsquo;ll get some inspiration and some helpful tools out of the book - like using breadboarding for sketching user interactions or hill charts for monitoring progress.\nIn conclusion, this is a clear reading recommendation for anyone working in software development, and perhaps especially those who are currently struggling to adapt to an agile method.\n","date":"September 11, 2019","image":"https://reflectoring.io/images/covers/shape-up_hu138af778a0e2a7c5cd5748a92ec81673_370826_650x0_resize_box_3.png","permalink":"/book-review-shape-up/","title":"Book Review: Shape Up"},{"categories":["Spring Boot","Java"],"contents":"Bean Validation is the de-facto standard for implementing validation logic in the Java ecosystem and it\u0026rsquo;s a great tool to have around.\nIn recent projects, however, I have been thinking a bit deeper about Bean Validation and have identified some practices I consider anti-patterns.\nAnti-Pattern Disclaimer   As with every discussion about patterns and anti-patterns, there's some opinion and personal experience involved. An anti-pattern in one context may very well be a best practice in another context (and vice-versa), so please don't take the discussion below as religious rules but as a trigger for thinking and constructive discussion on the topic.  Anti-Pattern #1: Validating Only in the Persistence Layer With Spring, it\u0026rsquo;s very easy to set up Bean Validation in the persistence layer. 
Say we have an entity with some bean validation annotations and an associated Spring Data repository:\n@Entity public class Person { @Id @GeneratedValue private Long id; @NotEmpty private String name; @NotNull @Min(0) private Integer age; // getters and setters omitted  } public interface PersonRepository extends CrudRepository\u0026lt;Person, Long\u0026gt; { // default CRUD methods provided by CrudRepository  } As long as we have a bean validation implementation like Hibernate Validator on the classpath, each call to the save() method of the repository will trigger a validation. If the state of the passed-in Person object is not valid according to the bean validation annotations, a ConstraintViolationException will be thrown.\nSo far, so good. This is pretty easy to set up and with the knowledge that everything will be validated before it\u0026rsquo;s sent to the database, we gain a sense of safety.\nBut is the persistence layer the right place to validate?\nI think it should at least not be the only place to validate.\nIn a common web application, the persistence layer is the bottom-most layer. We usually have a business layer and a web layer above. Data flows into the web layer, through the business layer and finally arrives in the persistence layer.\nIf we only validate in the persistence layer, we accept the risk that the web and business layer work with invalid data!\nInvalid data may lead to severe errors in the business layer (if we expect the data in the business layer to be valid) or to ultra-defensive programming with manual validation checks sprinkled all over the business layer (once we have learned that the data in the business layer cannot be trusted).\nIn conclusion, the input to the business layer should be valid already. 
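One way to act on this advice is to validate at the business-service boundary so that everything below it can trust its input. The following is a minimal plain-Java sketch (the RegisterPersonService name and the guard clauses are illustrative assumptions, not code from the article; with Spring, annotating the service with @Validated and the parameter with @Valid achieves the same declaratively):

```java
// Minimal Person value object mirroring the entity's fields
class Person {
    final String name;
    final Integer age;
    Person(String name, Integer age) { this.name = name; this.age = age; }
}

// Hypothetical business service that rejects invalid input at its boundary.
// The guard clauses mirror the persistence-layer constraints
// (@NotEmpty name, @NotNull @Min(0) age), so the persistence layer
// becomes a safety net rather than the only validation point.
class RegisterPersonService {
    Person register(Person person) {
        if (person.name == null || person.name.isEmpty()) {
            throw new IllegalArgumentException("name must not be empty");
        }
        if (person.age == null || person.age < 0) {
            throw new IllegalArgumentException("age must be a non-negative number");
        }
        // business logic below can now rely on valid data
        return person;
    }
}
```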
Validation in the persistence layer can then act as an additional safety net, but not as the only place for validation.\nAnti-Pattern #2: Validating with a Shotgun Instead of validating too little, however, we can certainly validate too much. This is not a problem specific to Bean Validation, but one with validation in general.\nData is validated using Bean Validation before it enters the system through the web layer. The web controller transforms the incoming data into an object that it can pass to a business service. The business service doesn\u0026rsquo;t trust the web layer, so it validates this object again using Bean Validation.\nBefore executing the actual business logic, the business service then programmatically checks every single constraint we can think of so that absolutely nothing can go wrong. Finally, the persistence layer validates the data again before it\u0026rsquo;s stored in the database.\nThis \u0026ldquo;shotgun validation\u0026rdquo; may sound like a good defensive approach to validation, but it leads to more problems than gains in my experience.\nFirst, if we use Bean Validation in a lot of places, we\u0026rsquo;ll have Bean Validation annotations everywhere. If in doubt, we\u0026rsquo;ll add Bean Validation annotations to an object even though it might not be validated after all. In the end, we\u0026rsquo;re spending time on adding and modifying validation rules that might never even be executed.\nSecond, validating everywhere leads to well-intentioned, but ultimately wrong validation rules. Imagine we\u0026rsquo;re validating a person\u0026rsquo;s first and last name to have a minimum of three characters. This was not a requirement, but we added this validation anyway because not validating is considered rude in our environment. 
Some day we\u0026rsquo;ll get an error report that says that a person named \u0026ldquo;Ed Sheeran\u0026rdquo; has failed to register in our system and has just started a shit storm on Twitter.\nWe've Always Done It This Way  As you may have noticed, a strong argument for shotgun validation is \"because we've always done it this way\". When developers on your team justify any decision with this argument you have my permission to slap them - be gentle the first time.  Third, validating everywhere slows down development. If we have validation rules sprinkled all over the code base, some in Bean Validation annotations and some in plain code, some of them might be in the way of a new feature we\u0026rsquo;re building. But we cannot just remove those validations, can we? Someone must have put them there for a reason, after all. If we use validation excessively, this reason is often \u0026ldquo;because we\u0026rsquo;ve always done it this way\u0026rdquo;, but we can\u0026rsquo;t be sure that there\u0026rsquo;s not more to it. We\u0026rsquo;re slowed down because we have to think through each validation before we can apply our changes.\nFinally, when validation rules are all over the code and we come across an unexpected validation error, we don\u0026rsquo;t know where to look to fix it. We have to find out where the validation was triggered, which can be hard if we\u0026rsquo;re using Bean Validation declaratively with @Validated and @Valid. Then, we need to search through our objects to find the responsible Bean Validation annotation. This is especially hard with nested objects.\nIn short, instead of validating everything, everywhere, we should have a clear and focused validation strategy.\nAnti-Pattern #3: Using Validation Groups for Use Case Validations The Bean Validation JSR provides a feature called validation groups. 
This feature allows us to associate validation annotations with certain groups so that we can choose which group to validate:\npublic class Person { @Null(groups = ValidateForCreate.class) @NotNull(groups = ValidateForUpdate.class) private Long id; @NotEmpty private String name; @NotNull @Min(value = 18, groups = ValidateForAdult.class) @Min(value = 0, groups = ValidateForChild.class) private int age; // getters and setters omitted  } When a Person is validated for creation, the id field is expected to be null. If it\u0026rsquo;s validated for update, the id field is expected not to be null.\nSimilarly, when a Person is validated in a use case that expects the person to be an adult, it is expected to have a minimum age of 18. If it\u0026rsquo;s validated as a child, the age is expected to be at least 0 instead.\nThese validations are triggered in a use case by stating which groups we want to validate:\n@Service @Validated class RegisterPersonService { @Validated({ValidateForAdult.class, ValidateForCreate.class}) void registerAdult(@Valid Person person) { // do something  } @Validated({ValidateForChild.class, ValidateForCreate.class}) void registerChild(@Valid Person person) { // do something  } } The @Validated annotation is a Spring annotation that validates the input to a method before it\u0026rsquo;s called, but validation groups can just as well be used without Spring.\nSo, what\u0026rsquo;s wrong with validation groups?\nFirst of all, we\u0026rsquo;re deliberately violating the Single Responsibility Principle. The Person model class knows the validation rules for all the use cases it is validated for. The model class has to change if a validation specific to a certain use case changes.\nSecond, it\u0026rsquo;s plain hard to read. The example above is still simple, but you can imagine that it grows hard to understand with more use cases and more fields. 
It grows even harder to read if we use the @ConvertGroup annotation, which allows converting one group into another for a nested object.\nInstead of using validation groups, I propose the following:\n Use Bean Validation annotations only for syntactic validation that applies to all use cases. Add query methods for semantic information to the model class. In the case above, we would add the methods hasId() and isAdult(). In the use case code, call these query methods to validate the data semantically for the use case.  This way, the use case-specific semantics are validated in the use case code where they belong and the model code is free of the dependency to the use case. At the same time, the business rules are still encoded in a \u0026ldquo;rich\u0026rdquo; domain model class and accessible via query methods.\nValidate Consciously Bean Validation is a great tool to have at our fingertips, but with great tools comes great responsibility (sounds a bit trite but it\u0026rsquo;s spot-on if you ask me).\nInstead of using Bean Validation for everything and validating everywhere, we should have a clear validation strategy that tells us where to validate and when to use which tool for validation.\nWe should separate syntactic validation from semantic validation. 
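The proposal above can be sketched in code. The hasId() and isAdult() method names come from the article itself; the surrounding use-case class is an illustrative assumption:

```java
// Model class: only syntactic constraints would remain as Bean Validation
// annotations (omitted here); semantics are exposed as query methods.
class Person {
    private final Long id;
    private final String name;
    private final int age;

    Person(Long id, String name, int age) {
        this.id = id;
        this.name = name;
        this.age = age;
    }

    boolean hasId() { return id != null; }
    boolean isAdult() { return age >= 18; }
}

// Hypothetical use case: the use-case-specific rules live here,
// not in annotations on the model class.
class RegisterAdultUseCase {
    void registerAdult(Person person) {
        if (person.hasId()) {
            throw new IllegalArgumentException("a new person must not have an id yet");
        }
        if (!person.isAdult()) {
            throw new IllegalArgumentException("person must be an adult");
        }
        // ... proceed with registration
    }
}
```

Compared to the validation-group version, the model no longer references ValidateForCreate or ValidateForAdult, and each use case states its own rules in plain code.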
Syntactic validation is a perfect use case for the declarative style supported by Bean Validation annotations, while semantic validation is more readable in plain code.\nIf you\u0026rsquo;re interested in a deeper discussion of validation in the context of software architecture, have a look at my book.\nLet me know your thoughts about validation in the comments.\n","date":"September 7, 2019","image":"https://reflectoring.io/images/stock/0051-stop-1200x628-branded_hu8c71944083c02ce8637d75428e8551b3_133770_650x0_resize_q90_box.jpg","permalink":"/bean-validation-anti-patterns/","title":"Bean Validation Anti-Patterns"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you are interested in how habits work in individuals, organizations, and societies you want to trigger some self-reflection on your habits you enjoy real-life stories that read like a novel  Overview Similar to The 7 Habits of Highly Effective People, I read {% include book-link.html book=\u0026ldquo;power-of-habit\u0026rdquo; %} in my quest for self-improvement, efficiency, and effectiveness.\nWhile \u0026ldquo;The 7 Habits\u0026rdquo; concentrates more on interpersonal relations and the mindset to be effective in life, \u0026ldquo;The Power of Habit\u0026rdquo; by Charles Duhigg explains what habits are, how they develop and how they can be changed.\nThe book is divided into three parts with a couple of chapters each. The first part discusses the term \u0026ldquo;habit\u0026rdquo; as we usually understand it: as a habit of an individual. Duhigg doesn\u0026rsquo;t stop with individual habits, though, but goes on to discuss organizational habits in the second part and the habits of society in the third part.\nLikes \u0026amp; Dislikes Reading the book almost feels like reading a novel. Each chapter tells one or two (real) stories that drive the topic, written with suspense. 
Embedded within those stories are the well-researched scientific facts behind our habits.\nThe explanations are comprehensible enough to satisfy my engineer\u0026rsquo;s need for logic. Every fact in the book is based on interviews the author conducted with scientists, entrepreneurs, medical doctors, patients, and more.\nThe above makes the book a very digestible, accessible, and interesting read \u0026hellip; I have nothing to add to the \u0026ldquo;dislike\u0026rdquo; side.\nKey Takeaways Here are my notes from reading the book \u0026hellip;.\nThe Habit Loop  changing a single \u0026ldquo;keystone habit\u0026rdquo; can have a great effect on other habits as well the outer brain is responsible for conscious thought while the central brain is responsible for automatic behavior like habits the more often we do something the less brain activity is needed to do it even people who have lost the ability of conscious remembering can find their way home from a regular walk the habit loop consists of a cue, a routine, and a reward the brain shuts down during the routine  The Craving Brain  habits induce a craving for the reward - making us stick with the routine until the craving is satisfied to create a habit, we need to cultivate a craving to drive the habit loop - such a craving may even be a mild obsession brushing teeth only became a habit for half the world population after the producers added an ingredient that added the fresh and cool feeling - people soon craved this feeling  The Golden Rule of Habit Change  habits are there to stay - they cannot be removed, but they can be changed to change a habit, we have to identify the cue and replace the routine - the reward should stay in place or be replaced with a similar reward successfully broken habits can return after a stressful event believing that change is possible or believing in some higher power can help to overcome such stressful events without falling back into old routines - this is easier in a community 
than alone  Keystone Habits  a keystone habit is a person\u0026rsquo;s or organization\u0026rsquo;s habit that, when changed, triggers many other habits organizational processes act like habits with cue, routine, and reward physical exercise is a keystone habit for many people as they also change their eating habits and smoke less a \u0026ldquo;small win\u0026rdquo; often indirectly paves the way for other small wins, ultimately leading to bigger change grit - the ability to stick to something despite drawbacks - is more of a success factor than high grades or physical fitness the habits of an organization make up its culture  Starbuck\u0026rsquo;s Habit of Success  willpower is the single most important keystone habit for individuals self-discipline has a bigger effect on academic success than intellect Starbucks teaches their baristas self-discipline to create a more comfortable environment for the customers willpower is learnable but can be depleted after using a lot of it willpower can be trained by deliberately planning cues, routines, and rewards training willpower improves many other habits like healthy eating, smoking, and exercising giving people control instead of commands greatly improves their capability for willpower (software engineering note: this is a central idea of the agile manifesto with its focus on individuals and interactions over processes and tools)  The Power of a Crisis  bad institutional habits (like an environment of arrogant doctors in a hospital) can lead to disaster (like surgery on the wrong side of the brain) organizations are guided by habits even if it seems they are making rational choices habits reduce uncertainty during a crisis, habits become malleable to change (software engineering note: if something went wrong in a project, a retrospective may help to change the habits that led to the problems) Quote by Rahm Emanuel (Obama\u0026rsquo;s Chief of Staff): \u0026ldquo;You never let a crisis go to waste\u0026rdquo; publicly 
speaking about errors is a way to change habits habits within an organization create truces between divisions - often, though, these truces prevent opportunities  How Target Knows What You Want Before You Do  similar to crises, major life events are facilitators for habit change to change a habit, the new routine must be as similar as possible to the old one sandwich new habits between old habits to improve the chances of adoption - radio DJs sandwiched new songs between known hits to make them successful  Saddleback Church and the Montgomery Bus Boycott  a movement starts because of social habits and strong personal ties - like the social ties of Rosa Parks after she refused to vacate a seat in a bus for a white man a movement grows because of the habits of a community (weak personal ties) a movement endures because leaders give the participants new habits (like Martin Luther King did) friendship triggers a social habit of support when a friend is attacked peer pressure triggers social habits that encourage us to conform to group behavior - even if the group only consists of \u0026ldquo;weak tie\u0026rdquo;-acquaintances  The Neurology of Free Will  habits can become so strong that they overpower free will - for instance, gambling or drinking people with brain damage may lose some of their free will - this may look very similar to people with strong habits a man who murdered his wife while in a sleep terror - a form of sleepwalking, which is automatic behavior similar to a habit - because he wanted to defend her against imaginary attackers was exonerated a woman with a bad gambling habit who became indebted to the casino that abused this habit was successfully sued by that casino it\u0026rsquo;s easier to sympathize with a devastated widower who killed his wife while sleepwalking than with a housewife who gambled her family\u0026rsquo;s money - to society, this seems fair since the widower wasn\u0026rsquo;t aware of his habits while the gambling woman was habits 
can only be modified if we\u0026rsquo;re aware of them the most important ingredient to changing habits is the habit of believing that we can change  A Reader\u0026rsquo;s Guide to Using These Ideas  identify the routine experiment with rewards   consciously switch the reward with something else and track results for a couple of days find out which rewards satisfy the craving then find out which craving it is that drives the habit  isolate the cue   identify the cue by asking questions each time the habit strikes where am I? what time is it? what\u0026rsquo;s my emotional state? who else is around? \u0026hellip; after a couple of days you may find out what cue is starting the habit by combining the answers to those questions  have a plan   write down a plan to change the habit  Conclusion If you\u0026rsquo;re interested in what makes us tick and how to change it, {% include book-link.html book=\u0026ldquo;power-of-habit\u0026rdquo; %} is a very good starting point to dive into the topic.\nIt\u0026rsquo;s well-researched, well-written, easily digestible and gets my unconditional reading recommendation (well, with the one condition that you should be interested in the topic of habits\u0026hellip;).\n","date":"September 2, 2019","image":"https://reflectoring.io/images/covers/power-of-habit-teaser_hu0b4d333dbd66feaabb771617c8c6b8ce_57435_650x0_resize_q90_box.jpg","permalink":"/book-review-the-power-of-habit/","title":"Book Review: The Power of Habit"},{"categories":["Spring Boot"],"contents":"There are certain cross-cutting concerns that we don\u0026rsquo;t want to implement from scratch for each Spring Boot application we\u0026rsquo;re building. Instead, we want to implement those features once and include them into any application as needed.\nIn Spring Boot, the term used for a module that provides such cross-cutting concerns is \u0026ldquo;starter\u0026rdquo;. 
A starter makes it easy to include a certain set of features to \u0026ldquo;get started\u0026rdquo; with them.\nSome example use cases for a Spring Boot starter are:\n providing a configurable and/or default logging configuration or making it easy to log to a central log server providing a configurable and/or default security configuration providing a configurable and/or default error handling strategy providing an adapter to a central messaging infrastructure integrating a third-party library and making it configurable to use with Spring Boot \u0026hellip;  In this article, we\u0026rsquo;ll build a Spring Boot starter that allows a Spring Boot application to easily send and receive Events over an imaginary central messaging infrastructure.\n Example Code This article is accompanied by a working code example on GitHub. Spring Boot Starter Vocabulary Before we dive into the details of creating a Spring Boot starter, let\u0026rsquo;s discuss some keywords that will help to understand the workings of a starter.\nWhat\u0026rsquo;s the Application Context? In a Spring application, the application context is the network of objects (or \u0026ldquo;beans\u0026rdquo;) that makes up the application. It contains our web controllers, services, repositories and whatever (usually stateless) objects we might need for our application to work.\nWhat\u0026rsquo;s a Spring Configuration? A class annotated with the @Configuration annotation serves as a factory for beans that are added to the application context. It may contain factory methods annotated with @Bean whose return values are automatically added to the application context by Spring.\nIn short, a Spring configuration contributes beans to the application context.\nWhat\u0026rsquo;s an Auto-Configuration? An auto-configuration is a @Configuration class that is automatically discovered by Spring. 
As soon as an auto-configuration is found on the classpath, it is evaluated and the configuration\u0026rsquo;s contribution is added to the application context.\nAn auto-configuration may be conditional so that its activation depends on external factors like a certain configuration parameter having a specific value.\nWhat\u0026rsquo;s an Auto-Configure Module? An auto-configure module is a Maven or Gradle module that contains an auto-configuration class. This way, we can build modules that automatically contribute to the application context, adding a certain feature or providing access to a certain external library. All we have to do to use it in our Spring Boot application is to include a dependency to it in our pom.xml or build.gradle.\nThis method is heavily used by the Spring Boot team to integrate Spring Boot with external libraries.\nWhat\u0026rsquo;s a Spring Boot Starter? Finally, a Spring Boot Starter is a Maven or Gradle module with the sole purpose of providing all dependencies necessary to \u0026ldquo;get started\u0026rdquo; with a certain feature. This usually means that it\u0026rsquo;s a solitary pom.xml or build.gradle file that contains dependencies to one or more auto-configure modules and any other dependencies that might be needed.\nIn a Spring Boot application, we then only need to include this starter to use the feature.\nCombining Auto-Configuration and Starter in a Single Module  The reference manual proposes to separate auto-configuration and starter each into a distinct Maven or Gradle module to separate the concern of auto-configuration from the concern of dependency management.  This may be a bit over-engineered in environments where we're not building an open-source library that is used by thousands of users. In this article, we're combining both concerns into a single starter module.  
Building a Starter for Event Messaging Let\u0026rsquo;s discover how to implement a starter with an example.\nImagine we\u0026rsquo;re working in a microservice environment and want to implement a starter that allows the services to communicate with each other asynchronously. The starter we\u0026rsquo;re building will provide the following features:\n an EventPublisher bean that allows us to send events to a central messaging infrastructure an abstract EventListener class that can be implemented to subscribe to certain events from the central messaging infrastructure.  Note that the implementation in this article will not actually connect to a central messaging infrastructure, but instead provide a dummy implementation. The goal of this article is to showcase how to build a Spring Boot starter and not how to do messaging, after all.\nSetting Up the Gradle Build Since a starter is a cross-cutting concern across multiple Spring Boot applications, it should live in its own codebase and have its own Maven or Gradle module. 
We\u0026rsquo;ll use Gradle as the build tool of choice, but it works very similarly with Maven.\nTo get the basic Spring Boot features into our starter, we need to declare a dependency on the basic Spring Boot starter in our build.gradle file:\nplugins { id \u0026#39;io.spring.dependency-management\u0026#39; version \u0026#39;1.0.8.RELEASE\u0026#39; id \u0026#39;java\u0026#39; } dependencyManagement { imports { mavenBom(\u0026#34;org.springframework.boot:spring-boot-dependencies:2.1.7.RELEASE\u0026#34;) } } dependencies { implementation \u0026#39;org.springframework.boot:spring-boot-starter\u0026#39; testImplementation \u0026#39;org.springframework.boot:spring-boot-starter-test\u0026#39; } The full file is available on GitHub.\nTo get the version of the basic starter that is compatible with a certain Spring Boot version, we\u0026rsquo;re using the Spring Dependency Management plugin to include the BOM (bill of materials) of that specific version.\nThis way, Gradle looks up the compatible version of the starter (and the versions of any other dependencies Spring Boot needs) in this BOM and we don\u0026rsquo;t have to declare it manually.\nProviding an Auto-Configuration As an entry point to the features of our starter, we provide a @Configuration class:\n@Configuration class EventAutoConfiguration { @Bean EventPublisher eventPublisher(List\u0026lt;EventListener\u0026gt; listeners){ return new EventPublisher(listeners); } } This configuration includes all the @Bean definitions we need to provide the features of our starter. 
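The article calls the EventPublisher a dummy implementation but never shows it. A minimal plain-Java sketch of what it might look like follows (the Event and EventListener shapes here are assumptions; the real starter's EventListener additionally filters by EventListenerProperties as shown later):

```java
import java.util.List;

// Minimal event model for the sketch
class Event {
    private final String type;
    Event(String type) { this.type = type; }
    String getType() { return type; }
}

// Listener contract; the starter's abstract class layers filtering on top of this
abstract class EventListener {
    void receive(Event event) { onEvent(event); }
    protected abstract void onEvent(Event event);
}

// Dummy publisher: instead of talking to a central messaging infrastructure,
// it fans each event out directly to all listeners Spring injected
// from the application context
class EventPublisher {
    private final List<EventListener> listeners;
    EventPublisher(List<EventListener> listeners) { this.listeners = listeners; }
    void publish(Event event) {
        for (EventListener listener : listeners) {
            listener.receive(event);
        }
    }
}
```

This matches the factory method above: Spring collects every EventListener bean into the injected list, so any listener an application contributes is automatically wired into the publisher.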
In this case, we simply add an EventPublisher bean to the application context.\nOur dummy implementation of the EventPublisher needs to know all EventListeners so it can deliver the events to them, so we let Spring inject the list of all EventListeners available in the application context.\nTo make our configuration an auto-configuration, we list it in the file META-INF/spring.factories:\norg.springframework.boot.autoconfigure.EnableAutoConfiguration=\\ io.reflectoring.starter.EventAutoConfiguration Spring Boot searches through all spring.factories files it finds on the classpath and loads the configurations declared within.\nWith the EventAutoConfiguration class in place, we now have an automatically activated single point of entry for our Spring Boot starter.\nMaking it Optional It\u0026rsquo;s always a good idea to allow the features of a Spring Boot starter to be disabled. This is especially important when providing access to an external system like a messaging service. That service won\u0026rsquo;t be available in a test environment, for instance, so we want to shut the feature down during tests.\nWe can make our entry point configuration optional by using Spring Boot\u0026rsquo;s conditional annotations:\n@Configuration @ConditionalOnProperty(value = \u0026#34;eventstarter.enabled\u0026#34;, havingValue = \u0026#34;true\u0026#34;) @ConditionalOnClass(name = \u0026#34;io.reflectoring.KafkaConnector\u0026#34;) class EventAutoConfiguration { ... 
} By using ConditionalOnProperty we tell Spring to only include the EventAutoConfiguration (and all the beans it declares) into the application context if the property eventstarter.enabled is set to true.\nThe @ConditionalOnClass annotation tells Spring to only activate the auto-configuration when the class io.reflectoring.KafkaConnector is on the classpath (this is just a dummy class to showcase the use of conditional annotations).\nMaking it Configurable For a library that is used in multiple applications, like our starter, it\u0026rsquo;s also a good idea to make the behavior as configurable as possible.\nImagine that an application is only interested in certain events. To make this configurable per application, we could provide a list of the enabled events in an application.yml (or application.properties) file:\neventstarter: listener: enabled-events: - foo - bar To make these properties easily accessible within the code of our starter, we can provide a @ConfigurationProperties class:\n@ConfigurationProperties(prefix = \u0026#34;eventstarter.listener\u0026#34;) @Data class EventListenerProperties { /** * List of event types that will be passed to {@link EventListener} * implementations. All other events will be ignored. */ private List\u0026lt;String\u0026gt; enabledEvents = Collections.emptyList(); } We enable the EventListenerProperties class by annotating our entry point configuration with @EnableConfigurationProperties:\n@Configuration @EnableConfigurationProperties(EventListenerProperties.class) class EventAutoConfiguration { ... 
} And finally, we can let Spring inject the EventListenerProperties bean anywhere we need it, for instance within our abstract EventListener class to filter out the events we\u0026rsquo;re not interested in:\n@RequiredArgsConstructor public abstract class EventListener { private final EventListenerProperties properties; public void receive(Event event) { if(isEnabled(event) \u0026amp;\u0026amp; isSubscribed(event)){ onEvent(event); } } private boolean isSubscribed(Event event) { return event.getType().equals(getSubscribedEventType()); } private boolean isEnabled(Event event) { return properties.getEnabledEvents().contains(event.getType()); } } Creating IDE-friendly Configuration Metadata With eventstarter.enabled and eventstarter.listener.enabled-events we have specified two configuration parameters for our starter. It would be nice if those parameters would be auto-completed when a developer starts typing event... within a configuration file.\nSpring Boot provides an annotation processor that collects metadata about configuration parameters from all @ConfigurationProperties classes it finds. We simply include it in our build.gradle file:\ndependencies { ... annotationProcessor \u0026#39;org.springframework.boot:spring-boot-configuration-processor\u0026#39; } This annotation processor will generate the file META-INF/spring-configuration-metadata.json that contains metadata about the configuration parameters in our EventListenerProperties class. 
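For illustration, the generated metadata for the enabled-events parameter might look roughly like this (a sketch of the file's general shape, not the processor's exact output; it assumes the EventListenerProperties class lives in the io.reflectoring.starter package):

```json
{
  "groups": [
    {
      "name": "eventstarter.listener",
      "type": "io.reflectoring.starter.EventListenerProperties"
    }
  ],
  "properties": [
    {
      "name": "eventstarter.listener.enabled-events",
      "type": "java.util.List<java.lang.String>",
      "sourceType": "io.reflectoring.starter.EventListenerProperties",
      "description": "List of event types that will be passed to EventListener implementations. All other events will be ignored."
    }
  ]
}
```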
This metadata includes the Javadoc on the fields so be sure to make the Javadoc as clear as possible.\nIn IntelliJ, the Spring Assistant plugin will read this metadata and provide auto-completion for those properties.\nThis still leaves the eventstarter.enabled property, though, since it\u0026rsquo;s not listed in a @ConfigurationProperties class.\nWe can add this property manually by creating the file META-INF/additional-spring-configuration-metadata.json:\n{ \u0026#34;properties\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;eventstarter.enabled\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;java.lang.Boolean\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Enables or disables the EventStarter completely.\u0026#34; } ] } The annotation processor will then automatically merge the contents of this file with the automatically generated file for IDE tools to pick up. The format of this file is documented in the reference manual.\nImproving Startup Time For each auto-configuration class on the classpath, Spring Boot has to evaluate the conditions encoded within the @Conditional... annotations to decide whether to load the auto-configuration and all the classes it needs. Depending on the size and number of starters in a Spring Boot application, this can be a very expensive operation and affect startup time.\nThere is yet another annotation processor that generates metadata about the conditions of all auto-configurations. Spring Boot reads this metadata during startup and can filter out configurations whose conditions are not met without actually having to inspect those classes.\nFor this metadata to be generated, we simply need to add the annotation processor to our starter module:\ndependencies { ... 
annotationProcessor \u0026#39;org.springframework.boot:spring-boot-autoconfigure-processor\u0026#39; } During the build, the metadata will be generated into the META-INF/spring-autoconfigure-metadata.properties file, which will look something like this:\nio.reflectoring.starter.EventAutoConfiguration= io.reflectoring.starter.EventAutoConfiguration.ConditionalOnClass=io.reflectoring.KafkaConnector io.reflectoring.starter.EventAutoConfiguration.Configuration= I\u0026rsquo;m not sure why the metadata contains the @ConditionalOnClass condition but not the @ConditionalOnProperty condition. If you know why, please let me know in the comments.\nUsing the Starter Now that the starter is polished, it\u0026rsquo;s ready to be included in a Spring Boot application.\nThis is as simple as adding a single dependency in the build.gradle file:\ndependencies { ... implementation project(\u0026#39;:event-starter\u0026#39;) } In the example above, the starter is a module within the same Gradle build, so we don\u0026rsquo;t use the fully-qualified Maven coordinates to identify the starter.\nWe can now configure the starter using the configuration parameters we have introduced above. Hopefully, our IDE will evaluate the configuration metadata we created and auto-complete the parameter names for us.\nTo use our event starter, we can now inject an EventPublisher into our beans and use it to publish events. Also, we can create beans that extend the EventListener class to receive and act on events.\nA working example application is available on GitHub.\nConclusion Wrapping certain features into a starter to use them in any Spring Boot application is only a matter of a few simple steps. 
Provide an auto-configuration, make it configurable, and polish it with some auto-generated metadata to improve performance and usability.\n","date":"August 30, 2019","image":"https://reflectoring.io/images/stock/0039-start-1200x628-branded_hu0e786b71aef533dc2d1f5d8371554774_82130_650x0_resize_q90_box.jpg","permalink":"/spring-boot-starter/","title":"Quick Guide to Building a Spring Boot Starter"},{"categories":["Java"],"contents":"Remember the days when we had to manually download every single JAR file that our project needed to run? And not only the JAR files we directly depended upon, mind you, but even those JAR files that our dependencies and our dependencies' dependencies needed to work!\nLuckily, those days are over. Today, build tools like Maven and Gradle take care of resolving our dependencies. They do this following the rules of scopes and configurations that we put into the build script.\nThis has a downside, however. Years ago, when we downloaded each of the direct and transitive dependencies manually, we could decide for each of those dependencies if we really needed it for our project to compile and run. Today, we pay less attention to specifying the correct scopes or configurations, which often results in too many dependencies being available at compile time.\nWhat\u0026rsquo;s Dependency Pollution? Say we have a project X. It depends on libraries A and B. And C is a consumer of project X.\nC has a transitive dependency to A and B because X needs A and B to function.\nNow, imagine these dependencies are available at compile time, meaning\n X can use classes of A and B in its code, and C can use classes of X, A, and B in its code.  The dependencies of X leak into the compile-time classpath of C. This is what I\u0026rsquo;ll call \u0026ldquo;dependency pollution\u0026rdquo;.\nWhy are we only talking about compile-time dependencies?  
This article only discusses the problems of too many compile-time dependencies and not those of too many runtime dependencies.  An unwanted compile-time dependency is more invasive because it allows binding the consumer's code to an external project, which may cause the problems discussed below.  An unwanted runtime dependency, on the other hand, will probably only bloat our final build artifact with a JAR file that we don't need (yes, there are scenarios in which a wrong runtime dependency can cause problems, but these are a completely different type of problem).  Problems of Dependency Pollution Let\u0026rsquo;s talk about the implications of polluting the compile time of consumers with transitive dependencies.\nAccidental Dependencies The first problem that can easily occur is that of an accidental compile-time dependency.\nFor instance, the developer of C may decide to use some classes of library A in her code. She may not be aware that A is actually a dependency of X and not a dependency of C itself, and the IDE will happily offer her those classes, since they are on the compile-time classpath.\nNow, the developers of X decide that with the next version of X, they no longer need library A. They sell this as a minor update that is completely backward-compatible because they haven\u0026rsquo;t changed the API of X at all.\nWhen the developer of C updates to this next version of X, she will get compile errors even though the update of X has been backward-compatible because the classes of A are no longer available. And she hasn\u0026rsquo;t even changed a single line of code.\nFact is, if we propagate our compile-time dependencies to our consumer\u0026rsquo;s compile time, the consumer may accidentally create compile-time dependencies she doesn\u0026rsquo;t really want to have. 
And she has to change her code if some other project changes its dependencies.\nShe loses control over her code.\nUnnecessary Recompiles Now, imagine that A, B, C, and X are modules within our own project.\nEvery time there is a change in the code of module A or B, module C has to be recompiled, even when module C doesn\u0026rsquo;t even use the code of A or B.\nThis is again because, through X, C has a transitive compile-time dependency to A and B. And the build tools happily (and rightly) recompile all consumers of a module that was modified.\nThis may not be an issue if the modules in a project are rather static. But if they are modified more often, this leads to unnecessarily long build times.\nUnnecessary Reasons to Change The problems discussed above boil down to a violation of the Single Responsibility Principle (SRP), which, freely interpreted, says that a module should have only one reason to change.\nLet\u0026rsquo;s interpret the SRP so that the one reason to change a module should be a change in the requirements of that module.\nAs we have seen above, however, we might have to modify the code of C even if the requirements of C haven\u0026rsquo;t changed a bit. Instead, we have given control over to the developers of A and B. If they change something in their code, we have to follow suit.\nIf a module only has one reason to change, we keep control of our own code. With transitive compile-time dependencies, we lose that control.\nGradle\u0026rsquo;s Solution What support do today\u0026rsquo;s build tools offer to avoid unwanted transitive compile-time dependencies?\nWith Maven, sadly, we have exactly the case outlined above. Every dependency in the compile scope is copied to the compile scope of the downstream consumer.\nWith Gradle, however, we have more control over dependencies, allowing us to reduce dependency pollution.\nUse the implementation Configuration The solution Gradle offers is fairly easy. 
If we have a compile-time dependency, we add it to the implementation configuration instead of the compile configuration (which has been deprecated in favor of implementation for some time now).\nSo, if the dependency of X to A is declared to the implementation configuration, C no longer has a transitive compile-time dependency to A. C can no longer accidentally use classes of A. If C needs to use classes of A, we have to declare the dependency to A explicitly.\nIf we do want to expose a certain dependency as a compile-time dependency, for example, if X uses classes of B as part of its API, we have the option to use the api configuration instead.\nMigrate from compile to implementation If a module you\u0026rsquo;re developing is still using the deprecated compile configuration, consider it a service to your consumers to migrate to the newer implementation configuration. It will reduce pollution to your consumers' compile-time classpath.\nHowever, make sure to notify your consumers of the change, because they might have used some classes from your dependencies. Don\u0026rsquo;t sell it as a backward-compatible update, because it will be a breaking change at least for some.\nThe consumers will have to check if their modules still compile after the change. 
If they don\u0026rsquo;t, they were using a transitive dependency that is no longer available and they have to declare that dependency themselves (or get rid of it, if it wasn\u0026rsquo;t intentional).\nConclusion If we leak our dependencies into our consumers' compile-time classpath, they may lose control over their code.\nKeeping transitive dependencies in check so that they don\u0026rsquo;t pollute consumer compile-time classpaths seems like a daunting task, but it\u0026rsquo;s fairly easy to do with Gradle\u0026rsquo;s implementation configuration.\n","date":"July 28, 2019","image":"https://reflectoring.io/images/stock/0001-network-1200x628-branded_hu72d229b68bf9f2a167eb763930d4c7d5_172647_650x0_resize_q90_box.jpg","permalink":"/gradle-pollution-free-dependencies/","title":"Pollution-Free Dependency Management with Gradle"},{"categories":["Java"],"contents":"One of the key features of a build tool for Java is dependency management. We declare that we want to use a certain third-party library in our own project and the build tool takes care of downloading it and adding it to the classpath at the right times in the build lifecycle.\nMaven has been around as a build tool for a long time. 
It\u0026rsquo;s stable and still well liked in the Java community.\nGradle emerged as an alternative to Maven quite some time ago, heavily relying on Maven dependency infrastructure, but providing a more flexible way to declare dependencies.\nWhether you\u0026rsquo;re moving from Maven to Gradle or you\u0026rsquo;re just interested in the different ways of declaring dependencies in Maven or Gradle, this article will give an overview.\nWhat\u0026rsquo;s a Scope / Configuration? A Maven pom.xml file or a Gradle build.gradle file specifies the steps necessary to create a software artifact from our source code. This artifact can be a JAR file or a WAR file, for instance.\nIn most non-trivial projects, we rely on third-party libraries and frameworks. So, another task of build tools is to manage the dependencies to those third-party libraries and frameworks.\nSay we want to use the SLF4J logging library in our code. In a Maven pom.xml file, we would declare the following dependency:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.slf4j\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;slf4j-api\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.7.26\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;compile\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; In a Gradle build.gradle file, the same dependency would look like this:\nimplementation \u0026#39;org.slf4j:slf4j-api:1.7.26\u0026#39; Both Maven and Gradle allow us to define different groups of dependencies. These dependency groups are called \u0026ldquo;scopes\u0026rdquo; in Maven and \u0026ldquo;configurations\u0026rdquo; in Gradle.\nEach of those dependency groups has different characteristics and answers the following questions differently:\n In which steps of the build lifecycle will the dependency be made available? Will it be available at compile time? At runtime? At compile and runtime of tests? Is the dependency transitive? 
Will it be exposed to consumers of our own project, so that they can use it, too? If so, will it leak into the consumers' compile time and / or the consumers' runtime? Is the dependency included in the final build artifact? Will the WAR or JAR file of our own project include the JAR file of the dependency?  In the above example, we added the SLF4J dependency to the Maven compile scope and the Gradle implementation configuration, which can be considered the defaults for Maven and Gradle, respectively.\nLet\u0026rsquo;s look at the semantics of all those scopes and configurations.\nMaven Scopes Maven provides 6 scopes for Java projects.\nWe\u0026rsquo;re not going to look at the system and import scopes, however, since they are rather exotic.\ncompile The compile scope is the default scope. We can use it when we have no special requirements for declaring a certain dependency.\n   When available? Leaks into consumers' compile time? Leaks into consumers' runtime? Included in Artifact?     compile timeruntimetest compile timetest runtime yes yes yes    Note that the compile scope leaks into the compile time, thus promoting dependency pollution.\nprovided We can use the provided scope to declare a dependency that will not be included in the final build artifact.\nIf we rely on the Servlet API in our project, for instance, and we deploy to an application server that already provides the Servlet API, then we would add the dependency to the provided scope.\n   When available? Leaks into consumers' compile time? Leaks into consumers' runtime? Included in Artifact?     compile timeruntimetest compile timetest runtime no no no    
runtime We use the runtime scope for dependencies that are not needed at compile time, like when we\u0026rsquo;re compiling against an API and only need the implementation of that API at runtime.\nAn example is SLF4J where we include slf4j-api to the compile scope and an implementation of that API (like slf4j-log4j12 or logback-classic) to the runtime scope.\n   When available? Leaks into consumers' compile time? Leaks into consumers' runtime? Included in Artifact?     runtimetest runtime no yes yes    test We can use the test scope for dependencies that are only needed in tests and that should not be available in production code.\nExample dependencies for this scope are testing frameworks like JUnit, Mockito, or AssertJ.\n   When available? Leaks into consumers' compile time? Leaks into consumers' runtime? Included in Artifact?     test compile timetest runtime no no no    Gradle Configurations Gradle has a more diverse set of configurations. This is the result of Gradle being younger and more actively developed, and thus able to adapt to more use cases.\nLet\u0026rsquo;s look at the standard configurations of Gradle\u0026rsquo;s Java Library Plugin. Note that we have to declare the plugin in the build script to get access to the configurations:\nplugins { id \u0026#39;java-library\u0026#39; } implementation The implementation configuration should be considered the default. 
We use it to declare dependencies that we don’t want to expose to our consumers' compile time.\nThis configuration was introduced to replace the deprecated compile configuration to avoid polluting the consumer\u0026rsquo;s compile time with dependencies we actually don\u0026rsquo;t want to expose.\n   When available? Leaks into consumers' compile time? Leaks into consumers' runtime? Included in Artifact?     compile timeruntimetest compile timetest runtime no yes yes    api We use the api configuration to declare dependencies that are part of our API, i.e. for dependencies that we explicitly want to expose to our consumers.\nThis is the only standard configuration that exposes dependencies to the consumers' compile time.\n   When available? Leaks into consumers' compile time? Leaks into consumers' runtime? Included in Artifact?     compile timeruntimetest compile timetest runtime yes yes yes    compileOnly The compileOnly configuration allows us to declare dependencies that should only be available at compile time, but are not needed at runtime.\nAn example use case for this configuration is an annotation processor like Lombok, which modifies the bytecode at compile time. After compilation it\u0026rsquo;s not needed anymore, so the dependency is not available at runtime.\n   When available? Leaks into consumers' compile time? Leaks into consumers' runtime? Included in Artifact?     compile time no no no    runtimeOnly The runtimeOnly configuration allows us to declare dependencies that are not needed at compile time, but will be available at runtime, similar to Maven\u0026rsquo;s runtime scope.\nAn example is again SLF4J where we include slf4j-api to the implementation configuration and an implementation of that API (like slf4j-log4j12 or logback-classic) to the runtimeOnly configuration.\n   When available? Leaks into consumers' compile time? Leaks into consumers' runtime? Included in Artifact?     
runtime no yes yes    testImplementation Similar to implementation, but dependencies declared with testImplementation are only available during compilation and runtime of tests.\nWe can use it for declaring dependencies to testing frameworks like JUnit or Mockito that we only need in tests and that should not be available in the production code.\n   When available? Leaks into consumers' compile time? Leaks into consumers' runtime? Included in Artifact?     test compile timetest runtime no no no    testCompileOnly Similar to compileOnly, but dependencies declared with testCompileOnly are only available during compilation of tests and not at runtime.\nI can\u0026rsquo;t think of a specific example, but there may be some annotation processors similar to Lombok that are only relevant for tests.\n   When available? Leaks into consumers' compile time? Leaks into consumers' runtime? Included in Artifact?     test compile time no no no    testRuntimeOnly Similar to runtimeOnly, but dependencies declared with testRuntimeOnly are only available during runtime of tests and not at compile time.\nAn example would be declaring a dependency to the JUnit Jupiter Engine, which runs our unit tests, but which we don’t compile against.\n   When available? Leaks into consumers' compile time? Leaks into consumers' runtime? Included in Artifact?     test runtime no no no    Combining Gradle Configurations Since the Gradle configurations are very specific, sometimes we might want to combine their features. In this case, we can declare a dependency with more than one configuration. 
For example, if we want a compileOnly dependency to also be available at test compile time, we additionally declare it to the testCompileOnly configuration:\ndependencies { compileOnly \u0026#39;org.projectlombok:lombok:1.18.8\u0026#39; testCompileOnly \u0026#39;org.projectlombok:lombok:1.18.8\u0026#39; } To remove the duplicate declaration, we could also tell Gradle that we want the testCompileOnly configuration to include everything from the compileOnly configuration:\nconfigurations { testCompileOnly.extendsFrom compileOnly } dependencies { compileOnly \u0026#39;org.projectlombok:lombok:1.18.8\u0026#39; } Do this with care, however, since we\u0026rsquo;re losing flexibility in declaring dependencies every time we\u0026rsquo;re combining two configurations this way.\nMaven Scopes vs. Gradle Configurations Maven scopes don\u0026rsquo;t translate perfectly to Gradle configurations because Gradle configurations are more granular. However, here\u0026rsquo;s a table that translates between Maven scopes and Gradle configurations with a few notes about differences:\n   Maven Scope Equivalent Gradle Configuration     compile api if the dependency should be exposed to consumers, implementation if not   provided compileOnly (note that the provided Maven scope is also available at runtime while the compileOnly Gradle configuration is not)   runtime runtimeOnly   test testImplementation    Conclusion Gradle, being the younger build tool, provides a lot more flexibility in declaring dependencies. 
We have finer control over whether dependencies are available in tests, at runtime or at compile time.\nFurthermore, with the api and implementation configurations, Gradle allows us to explicitly specify which dependencies we want to expose to our consumers, reducing dependency pollution for the consumers.\n","date":"July 24, 2019","image":"https://reflectoring.io/images/stock/0002-telescope-1200x628-branded_hue331fac9ffa4d67ff3a1dbbc916d6c36_106537_650x0_resize_q90_box.jpg","permalink":"/maven-scopes-gradle-configurations/","title":"Maven Scopes and Gradle Configurations Explained"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you want to become a better writer (especially, but not exclusively, in online marketing) you are writing for a blog or marketing department you want to learn some tricks on how to keep writing  Overview The book {% include book-link.html book=\u0026ldquo;everybody-writes\u0026rdquo; %} by Ann Handley aims to be a book for everyone since everybody writes in their daily routine in emails, social media, or on a blog or website.\nThe book gives very concrete tips on writing, building the habits around it, grammar and usage of words, how to tell a story and on marketing things with words.\nLikes \u0026amp; Dislikes The book is a very easy read. I like the fact that it contains over 70 very short chapters, each with a very narrow focus on a certain aspect of writing, publishing or marketing. This makes it easy to use as a reference when you want to look something up.\nThe language is pleasantly conversational. The author blends in words like \u0026ldquo;discombobulated\u0026rdquo; or \u0026ldquo;Frankenword\u0026rdquo;, which breaks one of her own rules (but then, one of her rules is to break a rule here and there to stay interesting, so she\u0026rsquo;s not contradicting herself).\nThe second half of the book is focused on writing in marketing, even though the chapter titles sometimes suggest differently. 
So, despite the \u0026ldquo;Everybody\u0026rdquo; in the title, I don\u0026rsquo;t think this book is in fact for everybody. If you\u0026rsquo;re writing for a blog or a marketing department, however, this book contains invaluable advice.\nKey Takeaways Since the book is neatly structured in a lot of very small chapters, and I\u0026rsquo;m always keen on taking away something from every chapter, there are a lot of takeaways.\nHow to Write Better  the quality of content can be measured by the formula quality = utility x inspiration x empathy - it\u0026rsquo;s zero if any of the components is zero writing is a habit, not an art - write every day to improve don\u0026rsquo;t just punch emails into the keyboard - use them as writing practice write when you\u0026rsquo;re freshest to get the most out of it there is no single formula for structuring content like \u0026ldquo;intro / body / conclusion\u0026rdquo; - that would be boring to write and read be clear, be brief, and focus on the reader start sentences with the important re-frame your idea of the text to view it from the reader\u0026rsquo;s perspective to keep the focus on the reader \u0026ldquo;think before ink\u0026rdquo; - have an idea of the why and the what before starting to write find a workflow to organize your thoughts into an outline and make it a habit produce an \u0026ldquo;ugly first draft\u0026rdquo; without caring about spelling or grammar and then rewrite it to be more productive \u0026ldquo;relentlessly, unremittingly, obstinately focus on the reader\u0026rdquo; - best done when taking the reader\u0026rsquo;s place while rewriting or editing humor comes with the rewrite, not with the draft replace \u0026ldquo;I\u0026rdquo; and \u0026ldquo;we\u0026rdquo; with \u0026ldquo;you\u0026rdquo; to get the reader\u0026rsquo;s interest during a rewrite, first focus on paragraphs and then on single words (for trimming a hedge, you\u0026rsquo;ll first use a chainsaw and then a more precise pair of shears) start 
a page with \u0026ldquo;Dear XXX\u0026rdquo; (and delete it later) to create a more conversational writing voice, where \u0026ldquo;XXX\u0026rdquo; is a persona of your readers the first one or two paragraphs in a draft are usually superfluous - delete them or rewrite them ruthlessly make sure that modifiers like \u0026ldquo;only\u0026rdquo; clearly modify only the part of a sentence you want them to make the lead (first paragraphs) and the kicker (last paragraphs) of a text extra good - a good lead makes the reader read the content and a good kicker makes the reader sad that it\u0026rsquo;s over tell specific stories instead of generic ones - tell a story about 40-year-old Mr. Smith instead of \u0026ldquo;a middle-aged man\u0026rdquo; use analogies instead of simple adjectives to pique the interest writing should always make things clearer and aim to make sense of the world, not only in how-tos make things simple but don\u0026rsquo;t assume the reader is dumb find someone to write / review with to help make writing a habit don\u0026rsquo;t write by committee or nothing will get done a good editor (a human, not a computer program) drastically improves quality make readability a primary concern - use checks like the Flesch readability test leave something unfinished at the end of a writing session to take momentum into the next session set a daily word goal, not a time goal (start with something easily achievable and work upwards) set a deadline and don\u0026rsquo;t let yourself push it  Grammar and Usage  use real words instead of made-up marketing buzzwords avoid stitched-together \u0026ldquo;Frankenwords\u0026rdquo; like \u0026ldquo;Awesomesauce\u0026rdquo; or \u0026hellip; \u0026ldquo;Frankenword\u0026rdquo; don\u0026rsquo;t use technical-sounding words in non-technical contexts (\u0026ldquo;bandwidth\u0026rdquo;, \u0026ldquo;radar screen\u0026rdquo;, \u0026hellip;) use active over passive - it makes the text sound livelier use strong verbs over weak verbs to 
make the text clearer most adverbs only add bloat and can be ditched use clichés sparingly (\u0026ldquo;the rubber meets the road\u0026rdquo;, \u0026ldquo;drink from the fire hose\u0026rdquo;, \u0026hellip;) as they cheapen the text if using \u0026ldquo;this\u0026rdquo;, \u0026ldquo;these\u0026rdquo;, or \u0026ldquo;those\u0026rdquo;, make completely clear what you mean with it one-sentence paragraphs are fine to make a point use \u0026ldquo;further\u0026rdquo; for figurative distance and \u0026ldquo;farther\u0026rdquo; for actual distance an eggcorn is a misheard phrase that still makes sense, but in another way (\u0026ldquo;dutch tape\u0026rdquo; vs. \u0026ldquo;duct tape\u0026rdquo;) don\u0026rsquo;t moralize (i.e. don\u0026rsquo;t start sentences with \u0026ldquo;don\u0026rsquo;t\u0026rdquo; or \u0026ldquo;avoid\u0026rdquo;)  Story Rules  if you\u0026rsquo;re marketing, tell a story that\u0026rsquo;s bigger than just your company don\u0026rsquo;t tell who you are, tell why you matter to the reader choose 3-4 adjectives to define your writing voice and stick to them analogies are more powerful than examples  Publishing Rules  writing for a brand is a type of journalism be scrupulously trustworthy keep your eyes open for \u0026ldquo;content moments\u0026rdquo; that can spark content that make you a thought leader only write about what the reader will find useful to know acknowledge other points of view than your own to build trust if you haven\u0026rsquo;t understood something, ask or research - you owe it to the reader fact-check diligently - trust and credibility are the cornerstones of publishing always disclose potential conflicts of interest cite primary sources, not secondary sources cite as you write, otherwise you may forget citing if you curate other people\u0026rsquo;s content, add some value of your own ask for permission before using other people\u0026rsquo;s content if you use other people\u0026rsquo;s content, attribute it properly back your text 
with trustworthy data  13 Things Marketers Write  ideal blog post length: 1500 words ideal email subject length: 50 characters (6-10 words) ideal line length on a website: 12 words ideal paragraph length: 4 lines ideal title tag: 55 characters ideal meta description: \u0026lt;= 155 characters use Twitter as a sounding board for ideas don\u0026rsquo;t overuse hashtags in social media posts use humor when possible - everybody likes to laugh posts on facebook must have an image make social media content \u0026ldquo;snackable\u0026rdquo; use unique words to describe yourself on social media in email, talk directly to the reader writing for a landing page should be hyper-focused include a \u0026ldquo;curiosity gap\u0026rdquo; in headlines (a gap in the reader\u0026rsquo;s knowledge that makes him / her itch to read the article) on a homepage, use \u0026ldquo;you speak\u0026rdquo; and add 2-4 calls to action, not more even the \u0026ldquo;about us\u0026rdquo; page should bring value to the reader and not just state boring facts about the company an infographic should tell a story - hypothesis, narrative, call to action every blog post should have a (non-stock) image writing an annual review with successes, failures, changes, and growth adds a human side to your business  Conclusion As you see from the lists above, {% include book-link.html book=\u0026ldquo;everybody-writes\u0026rdquo; %} offers a lot of applicable content. The lists above just include the things I took away, and they may be different for you.\nThe book is very pleasant to read and makes it easy to take away valuable tips. 
I now have a big list of things I want to do to improve my writing :).\nIf you\u0026rsquo;re already writing a blog or marketing copy - or you\u0026rsquo;re planning to start - this book is a definite recommendation.\n","date":"July 17, 2019","image":"https://reflectoring.io/images/covers/everybody-writes-teaser_hu3907b257e8f4398575197c2080946d27_66119_650x0_resize_q90_box.jpg","permalink":"/book-review-everybody-writes/","title":"Book Review: Everybody Writes"},{"categories":["Spring Boot"],"contents":"The Spring Initializr is a great way to quickly create a Spring Boot application from scratch. It creates a single Gradle file that we can expand upon to grow our application.\nWhen projects become bigger, however, we might want to split our codebase into multiple build modules for better maintainability and understandability.\nThis article shows how to split up a Spring Boot application into multiple build modules with Gradle.\n Example Code This article is accompanied by a working code example on GitHub. What\u0026rsquo;s a Module? As we\u0026rsquo;ll be using the word \u0026ldquo;module\u0026rdquo; a lot in this tutorial, let\u0026rsquo;s first define what a module is.\nA module \u0026hellip;\n \u0026hellip; has a codebase that is separate from other modules' code, \u0026hellip; is transformed into a separate artifact (JAR file) during a build, and \u0026hellip; can define dependencies to other modules or third-party libraries.  A module is a codebase that can be maintained and built separately from other modules' codebases.\nHowever, a module is still part of a parent build process that builds all modules of our application and combines them into a single artifact like a WAR file.\nWhy Do We Need Multiple Modules? Why would we make the effort to split up our codebase into multiple modules when everything works just fine with a single, monolithic module?\nThe main reason is that a single monolithic codebase is susceptible to architectural decay. 
Within a codebase, we usually use packages to demarcate architectural boundaries. But packages in Java aren\u0026rsquo;t very good at protecting those boundaries (more about this in the chapter \u0026ldquo;Enforcing Architecture Boundaries\u0026rdquo; of my book). Suffice it to say that the dependencies between classes within a single monolithic codebase tend to quickly degrade into a big ball of mud.\nIf we split up the codebase into multiple smaller modules, each with clearly defined dependencies to other modules, we take a big step towards an easily maintainable codebase.\nThe Example Application Let\u0026rsquo;s take a look at the modular example web application we\u0026rsquo;re going to build in this tutorial. The application is called \u0026ldquo;BuckPal\u0026rdquo; and shall provide online payment functionality. It follows the hexagonal architecture style described in my book, which splits the codebase into separate, clearly defined architectural elements. For each of those architectural elements, we\u0026rsquo;ll create a separate Gradle build module, as indicated by the following folder structure:\n├── adapters | ├── buckpal-persistence | | ├── src | | └── build.gradle | └── buckpal-web | ├── src | └── build.gradle ├── buckpal-application | ├── src | └── build.gradle ├── common | ├── src | └── build.gradle ├── buckpal-configuration | ├── src | └── build.gradle ├── build.gradle └── settings.gradle Each module is in a separate folder with Java sources, a build.gradle file, and distinct responsibilities:\n The top-level build.gradle file configures build behavior that is shared between all sub-modules so that we don\u0026rsquo;t have to duplicate things in the sub-modules. The buckpal-configuration module contains the actual Spring Boot application and any Spring Java Configuration that puts together the Spring application context. To create the application context, it needs access to the other modules, each of which provides certain parts of the application. 
I have also seen this module called infrastructure in other contexts. The common module provides certain classes that can be accessed by all other modules. The buckpal-application module holds classes that make up the \u0026ldquo;application layer\u0026rdquo;: services that implement use cases which query and modify the domain model. The adapters/buckpal-web module implements the web layer of our application, which may call the use cases implemented in the application module. The adapters/buckpal-persistence module implements the persistence layer of our application.  In the rest of this article, we\u0026rsquo;ll look at how to create a separate Gradle module for each of those application modules. Since we\u0026rsquo;re using Spring, it makes sense to cut our Spring application context into multiple Spring modules along the same boundaries, but that\u0026rsquo;s a story for a different article.\nParent Build File To include all modules in the parent build, we first need to list them in the settings.gradle file in the parent folder:\ninclude \u0026#39;common\u0026#39; include \u0026#39;adapters:buckpal-web\u0026#39; include \u0026#39;adapters:buckpal-persistence\u0026#39; include \u0026#39;buckpal-configuration\u0026#39; include \u0026#39;buckpal-application\u0026#39; Now, if we call ./gradlew build in the parent folder, Gradle will automatically resolve any dependencies between the modules and build them in the correct order, regardless of the order they are listed in settings.gradle.\nFor instance, the common module will be built before all other modules since all other modules depend on it.\nIn the parent build.gradle file, we now define basic configuration that is shared across all sub-modules:\nplugins { id \u0026#34;io.spring.dependency-management\u0026#34; version \u0026#34;1.0.8.RELEASE\u0026#34; } subprojects { group = \u0026#39;io.reflectoring.reviewapp\u0026#39; version = \u0026#39;0.0.1-SNAPSHOT\u0026#39; apply plugin: \u0026#39;java\u0026#39; apply 
plugin: \u0026#39;io.spring.dependency-management\u0026#39; apply plugin: \u0026#39;java-library\u0026#39; repositories { jcenter() } dependencyManagement { imports { mavenBom(\u0026#34;org.springframework.boot:spring-boot-dependencies:2.1.7.RELEASE\u0026#34;) } } } First of all, we include the Spring Dependency Management Plugin which provides us with the dependencyManagement closure that we\u0026rsquo;ll use later.\nThen, we define a shared configuration within the subprojects closure. Everything within subprojects will be applied to all sub-modules.\nThe most important part within subprojects is the dependencyManagement closure. Here, we can define any dependencies to Maven artifacts in a certain version. If we need one of those dependencies within a sub-module, we can specify it in the sub-module without providing a version number since the version number will be loaded from the dependencyManagement closure.\nThis allows us to specify version numbers in a single place instead of spreading them over multiple modules, very similar to the \u0026lt;dependencyManagement\u0026gt; element in Maven\u0026rsquo;s pom.xml files.\nThe only dependency we added in the example is the dependency to the Maven BOM (bill of materials) of Spring Boot. This BOM includes all dependencies that a Spring Boot application potentially might need in the exact version that is compatible with a given Spring Boot version (2.1.7.RELEASE in this case). Thus, we don\u0026rsquo;t need to list every single dependency on our own and potentially get the version wrong.\nAlso, note that we apply the java-library plugin to all sub-modules. 
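To illustrate the distinction the java-library plugin introduces (the artifact coordinates in this snippet are hypothetical and not part of the example project), a module can declare which dependencies belong to its public API and which are internal details:

```groovy
dependencies {
    // api: consumers of this module also see this dependency
    // on their own compile classpath
    api project(':common')

    // implementation: an internal detail that stays off the
    // compile classpath of consuming modules
    implementation 'org.apache.commons:commons-lang3:3.9'
}
```

Because api dependencies leak into consumers while implementation dependencies do not, preferring implementation keeps module boundaries tight and speeds up incremental builds.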
This enables us to use the api and implementation configurations which allow us to define finer-grained dependency scopes.\nModule Build Files In a module build file, we now simply add the dependencies the module needs.\nThe file adapters/buckpal-persistence/build.gradle looks like this:\ndependencies { implementation project(\u0026#39;:common\u0026#39;) implementation project(\u0026#39;:buckpal-application\u0026#39;) implementation \u0026#39;org.springframework.boot:spring-boot-starter-data-jpa\u0026#39; // ... more dependencies } The persistence module depends on the common and the application module. The common module is used by all modules, so this dependency is natural. The dependency to the application module comes from the fact that we\u0026rsquo;re following a hexagonal architecture style in which the persistence module implements interfaces located in the application layer, thus acting as a persistence \u0026ldquo;plugin\u0026rdquo; for the application layer.\nMore importantly, however, we add the dependency to spring-boot-starter-data-jpa which provides Spring Data JPA support for a Spring Boot application. Note that we did not add a version number because the version is automatically resolved from the spring-boot-dependencies BOM in the parent build file. In this case, we\u0026rsquo;ll get the version that is compatible with Spring Boot 2.1.7.RELEASE.\nNote that we added the spring-boot-starter-data-jpa dependency to the implementation configuration. This means that this dependency does not leak into the compile time of the modules that include the persistence module as a dependency. 
This keeps us from accidentally using JPA classes in modules where we don\u0026rsquo;t want them.\nThe build file for the web layer, adapters/buckpal-web/build.gradle, looks similar, just with a dependency to spring-boot-starter-web instead:\ndependencies { implementation project(\u0026#39;:common\u0026#39;) implementation project(\u0026#39;:buckpal-application\u0026#39;) implementation \u0026#39;org.springframework.boot:spring-boot-starter-web\u0026#39; // ... more dependencies } Our modules have access to all the classes they need to build a web or persistence layer for a Spring Boot application, without having unnecessary dependencies.\nThe web module knows nothing about persistence and vice versa. As a developer, we cannot accidentally add persistence code to the web layer or web code to the persistence layer without consciously adding a dependency to a build.gradle file. This helps to avoid the dreaded big ball of mud.\nSpring Boot Application Build File Now, all we have to do is to aggregate those modules into a single Spring Boot application. We do this in the buckpal-configuration module.\nIn the buckpal-configuration/build.gradle build file, we add the dependencies to all of our modules:\nplugins { id \u0026#34;org.springframework.boot\u0026#34; version \u0026#34;2.1.7.RELEASE\u0026#34; } dependencies { implementation project(\u0026#39;:common\u0026#39;) implementation project(\u0026#39;:buckpal-application\u0026#39;) implementation project(\u0026#39;:adapters:buckpal-persistence\u0026#39;) implementation project(\u0026#39;:adapters:buckpal-web\u0026#39;) implementation \u0026#39;org.springframework.boot:spring-boot-starter\u0026#39; // ... more dependencies } We also add the Spring Boot Gradle plugin that, among other things, gives us the bootRun Gradle task. 
We can now start the application with Gradle using ./gradlew bootRun.\nAlso, we add the obligatory @SpringBootApplication-annotated class to the source folder of the buckpal-configuration module:\n@SpringBootApplication public class BuckPalApplication { public static void main(String[] args) { SpringApplication.run(BuckPalApplication.class, args); } } This class needs access to the SpringBootApplication and SpringApplication classes, which the spring-boot-starter dependency provides.\nConclusion In this tutorial, we\u0026rsquo;ve seen how to split up a Spring Boot application into multiple Gradle modules with the help of the Spring Dependency Management Plugin for Gradle. We can follow this approach to split an application up along technical layers like in the example application on GitHub, or along functional boundaries, or both.\nA very similar approach can be used with Maven.\nIf you\u0026rsquo;d like another perspective on the topic, there\u0026rsquo;s also a Spring guide on creating a multi-module Spring Boot application that talks about different aspects.\n","date":"June 28, 2019","image":"https://reflectoring.io/images/stock/0010-gray-lego-1200x628-branded_hu463ec94a0ba62d37586d8dede4e932b0_190778_650x0_resize_q90_box.jpg","permalink":"/spring-boot-gradle-multi-module/","title":"Building a Multi-Module Spring Boot Application with Gradle"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you\u0026rsquo;re ready to change the way you think about your interaction with the world around you you want to be more effective in communicating with those around you you want to learn some reasons why you should change your habits  Overview I read the book \u0026ldquo;The 7 Habits of Highly Effective People\u0026rdquo; by Stephen Covey in my quest to shape my habits to be more productive at the things I do. 
Mostly at the things I do for work like my software developer job or this blog.\nHowever, this book didn\u0026rsquo;t give me the magic formula of how to become more productive at work. Instead, it\u0026rsquo;s about changing my own paradigms and my view of the world in order to become more effective, i.e. how to make things happen in the way I want them to, especially in dealing with other people. And the way to do this is to change yourself, as the subtitle \u0026ldquo;Powerful Lessons in Personal Change\u0026rdquo; suggests (which I didn\u0026rsquo;t read before ordering the book, obviously).\nEven though the book didn\u0026rsquo;t directly deliver what I have originally been looking for, it was very enlightening to read and definitely made me think about myself and my impact on the world around me. And ultimately, I\u0026rsquo;m even certain that this will make me more productive in the long run.\nThe book was published first when I went to elementary school, so it\u0026rsquo;s been out there for some thirty years now. As the title suggests, it\u0026rsquo;s structured around the 7 Habits that Covey has identified as main drivers for an effective and principle-based life.\nThe paperback edition has around 350 pages, so it\u0026rsquo;s quite a hunk to read, at least if you\u0026rsquo;re only used to reading fiction and tech books like me. But the content always drew me further, so I managed to read it completely without skipping anything.\nLikes \u0026amp; Dislikes The writing style is very conversational, without being too casual, which I liked very much. 
I also liked the sometimes very personal stories the author uses to explain the habits and why they work.\nI also liked that the rationale behind each of the habits is explained in a logical way and that each chapter ends with a few valuable suggestions on how we could apply the things we learned in that chapter.\nOn the \u0026ldquo;dislike\u0026rdquo; side is the fact that the chapters are each very long (one for each habit). I would have liked shorter chapters better, each one focused on a certain aspect of a habit. This would have made reading through the whole book easier for me.\nAt some point in the first quarter of the book, the author mentions how his religious beliefs help him with the habits. As a convinced atheist, I was afraid that the book would now turn into a recruiting text, which luckily was not the case.\nKey Takeaways Be Proactive  act, don\u0026rsquo;t be acted upon work on things within your circle of influence instead of complaining about things outside of your circle of influence don\u0026rsquo;t think deterministically, as if everything is pre-determined (have a \u0026ldquo;growth mindset\u0026rdquo; instead of a \u0026ldquo;fixed mindset\u0026rdquo;, even though he didn\u0026rsquo;t use these words) responsibility = response-ability - we can choose how to respond to a situation control your feelings proactively instead of letting them control you work on what you are, not on what you have if you made a mistake, admit it and correct it look at the weaknesses of others with compassion, not with accusation use proactive language instead of reactive language (think \u0026ldquo;I will \u0026hellip;\u0026rdquo; and \u0026ldquo;I choose \u0026hellip;\u0026rdquo; instead of \u0026ldquo;If only \u0026hellip;\u0026rdquo;, \u0026ldquo;I can\u0026rsquo;t \u0026hellip;\u0026rdquo;, and \u0026ldquo;I have to\u0026hellip;\u0026quot;)  Begin with the End in Mind  identify the different roles you have in life (father, husband, software engineer, 
\u0026hellip;) and define short-term and long-term goals for each of those roles create a personal mission statement defining your most important principles think not only of producing, but also of your long-term production capability visualize what you want to achieve  Put First Things First  do the important, not the urgent, in order to keep the important from getting urgent use the Eisenhower Matrix with four quadrants to identify what is important and urgent to you say \u0026ldquo;no\u0026rdquo; to things that are unimportant according to your mission statement plan weekly and daily to make sure you do the right things invest in training people to be able to delegate to them  Think Win/Win  thinking win (for yourself) / lose (for your counterpart) is a low-trust attitude thinking lose / win is a \u0026ldquo;nice guy finishes last\u0026rdquo; attitude and is not healthy in the long run thinking win / win frees up the mind for new solutions that are good for both \u0026ldquo;win / win or no deal\u0026rdquo; is an option to make both sides concentrate on win / win a win / win attitude requires the \u0026ldquo;abundance mentality\u0026rdquo; (there is enough for everyone) as opposed to the \u0026ldquo;scarcity mentality\u0026rdquo; as a manager, win / win greatly increases the number of people you can manage since you set goals for them instead of micro-managing them companies have great leverage to create a win / win attitude by setting up a suitable compensation system (among other things)  Seek First to Understand, Then to Be Understood  communication is the most important skill in life listening without interrupting allows the speaker to open up empathic listening is listening with the goal to understand don\u0026rsquo;t use autobiographical responses (responses comparing yourself to the speaker) like \u0026ldquo;when I was your age, I \u0026hellip;\u0026rdquo;, or \u0026ldquo;I would do it differently \u0026hellip;\u0026rdquo; instead re-phrase what you heard 
and repeat what you understood about how the speaker feels without understanding your counterpart you cannot create a win / win situation if you want to persuade someone, you have to understand him / her first (and then create a win / win situation)  Synergize  synergy allows 1 + 1 to be 100 instead of just 2 embrace different opinions as a chance to find a third win / win alternative to synergize on you cannot synergize with people that have the same opinion as you do synergy is only possible when understanding your counterpart and thinking win / win  Sharpen the Saw  \u0026ldquo;investment in yourself is the single most powerful investment you can ever make in life\u0026rdquo; investing in all dimensions of yourself is the basis to becoming a principled person that is able to follow the 6 habits above  physical dimension (sports, exercise) spiritual dimension (reading, music, religion, meditation) mental dimension (reading, organizing, planning, journaling) social / emotional dimension (understanding others, creating win / win situations)   a daily \u0026ldquo;private victory\u0026rdquo; (investing in yourself in the physical, spiritual or mental dimensions) is the basis for personal security and thus allows for a daily \u0026ldquo;public victory\u0026rdquo; in the social / emotional dimension  Conclusion Even though I was a little sceptical about \u0026ldquo;7 magical habits\u0026rdquo; at first, the habits are explained in logical order and made perfect sense once I understood them (the \u0026ldquo;logical\u0026rdquo; part is especially important for me, since I\u0026rsquo;m a very rational person). They are backed with stories from the author\u0026rsquo;s life which are interwoven with the text in a natural way, without disturbing the reading flow.\nThis book gets my definite reading recommendation. At the least, you\u0026rsquo;ll go through life a little more self-aware after reading it. 
At best, you might change yourself to become more secure in life and more effective in defining and achieving goals. I\u0026rsquo;m still deciding on which of the two it was for me\u0026hellip; .\n","date":"June 26, 2019","image":"https://reflectoring.io/images/covers/7-habits-teaser_hubcc6a2ea38bea42324ce7026b00a29a3_87081_650x0_resize_q90_box.jpg","permalink":"/book-review-7-habits/","title":"Book Review: The 7 Habits of Highly Effective People"},{"categories":["Java"],"contents":"I recently had a conversation about exception handling. I argued that business exceptions are a good thing because they clearly mark the possible failures of a business method. If a rule is violated, the business method throws a \u0026ldquo;business\u0026rdquo; exception that the client has to handle. If it\u0026rsquo;s a checked exception, the business rule is even made apparent in the method signature - at least the cases in which it fails.\nMy counterpart argued that failing business rules shouldn\u0026rsquo;t be exceptions because of multiple reasons. Having thought about it a bit more, I came to the conclusion that he was right. And I came up with even more reasons than he enumerated during our discussion.\nRead on to find out what distinguishes a business exception from a technical exception and why technical exceptions are the only true exceptions.\nTechnical Exceptions Let\u0026rsquo;s start with technical exceptions. These exceptions are thrown when something goes wrong that we cannot fix and usually cannot respond to in any sensible way.\nAn example is Java\u0026rsquo;s built-in IllegalArgumentException. If someone provides an argument to a method that does not follow the contract of that method, the method may throw an IllegalArgumentException.\nWhen we call a method and get an IllegalArgumentException thrown into our face, what can we do about it?\nWe can only fix the code.\nIt\u0026rsquo;s a programming error. 
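To make this concrete, here is a minimal sketch of such a contract check (the Thruster class and its burn() method are invented for illustration; they are not part of the article's example code):

```java
class Thruster {

    // Guard clause: enforce the method contract and fail fast.
    // Passing an illegal value is a programming error on the
    // caller's side, so a technical exception is appropriate.
    void burn(int seconds) {
        if (seconds <= 0) {
            throw new IllegalArgumentException(
                "seconds must be positive but was " + seconds);
        }
        // ... ignite the thruster for the given duration ...
    }
}
```

There is no sensible runtime recovery for a caller that triggers this exception; the fix is to correct the calling code or to validate user input before it reaches this method.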
If the illegal argument value comes from a user, it should have been validated earlier and an error message provided to the user. If the illegal argument comes from somewhere else in the code, we have to fix it there. In any case, someone screwed up somewhere else.\nA technical exception is usually derived from Java\u0026rsquo;s RuntimeException, meaning that it doesn\u0026rsquo;t have to be declared in a method signature.\nBusiness Exceptions Now, what\u0026rsquo;s a business exception?\nA business exception is thrown when a business rule within our application is violated:\nclass Rocket { private int fuel; void takeOff() throws NotEnoughFuelException { if (this.fuel \u0026lt; 50) { throw new NotEnoughFuelException(); } lockDoors(); igniteThrusters(); } } In this example, the Rocket only takes off if it has enough fuel. If it doesn\u0026rsquo;t have enough fuel, it throws an exception with the very imaginative name of NotEnoughFuelException.\nIt\u0026rsquo;s up to the client of the above code to make sure that the business rule (providing at least 50 units of fuel before takeoff) is satisfied. 
If the business rule is violated, the client has to handle the exception (for example by filling the fuel tank and then trying again).\nNow that we\u0026rsquo;re on the same page about technical and business exceptions, let\u0026rsquo;s look at the reasons why business exceptions are a bad idea.\n#1: Exceptions Should not be an Expected Outcome First of all, just by looking at the meaning of the word \u0026ldquo;exception\u0026rdquo;, we\u0026rsquo;ll see that a business exception as defined above isn\u0026rsquo;t actually an exception.\nLet\u0026rsquo;s look at some definitions of the word \u0026ldquo;exception\u0026rdquo;:\n A person or thing that is excluded from a general statement or does not follow a rule (Oxford Dictionary).\n  An instance or case not conforming to the general rule (dictionary.com).\n  Someone or something that is not included in a rule, group, or list or that does not behave in the expected way (Cambridge Dictionary).\n All three definitions say that an exception is something that does not follow a rule, which makes it unexpected.\nComing back to our example, you could say that we have used the NotEnoughFuelException as an exception to the rule \u0026ldquo;fuel tanks must contain at least 50 units of fuel\u0026rdquo;. I say, however, that we have used the NotEnoughFuelException to define the (inverted) rule \u0026ldquo;fuel tanks must not contain less than 50 units of fuel\u0026rdquo;.\nAfter all, we have added the exception to the signature of the takeOff() method. What is that if not defining some sort of expected outcome that\u0026rsquo;s relevant for the client code to know about?\nTo sum up, exceptions should be exceptions. Exceptions should not be an expected outcome. 
Otherwise we defy the English language.\n#2: Exceptions are Expensive What should the client code do if it encounters a NotEnoughFuelException?\nProbably, it will fill the fuel tanks and try again:\nclass FlightControl { void start(){ Rocket rocket = new Rocket(); try { rocket.takeOff(); } catch (NotEnoughFuelException e) { rocket.fillTanks(); rocket.takeOff(); } } } As soon as the client code reacts to an exception by executing a different branch of business code, we have misused the concept of exceptions for flow control.\nUsing try/catch for flow control creates code that is\n expensive to understand (because we need more time to understand it), and expensive to execute (because the JVM has to create a stacktrace when the exception is constructed).  And, unlike in fashion, expensive is usually bad in software engineering.\nExceptions without Stacktraces?  In a comment I was made aware that Java's exception constructors allow passing in a parameter writableStackTrace that, when set to false, will cause the exception not to create a stacktrace, thus reducing the performance overhead. Use at your own peril.  #3: Exceptions Hinder Reusability The takeOff() method, as implemented above, will always check for fuel before igniting the thrusters.\nImagine that the funding for the space program has been reduced and we can\u0026rsquo;t afford to fill the fuel tanks anymore. We have to cut corners and start the rocket with less fuel (I hope it doesn\u0026rsquo;t work that way, but at least in the software industry this seems to be common practice).\nOur business rule has just changed. How do we change the code to reflect this? 
We want to be able to still execute the fuel check, so we don\u0026rsquo;t have to change a lot of code once the funding returns.\nSo, we could add a parameter to the method so that the NotEnoughFuelException is thrown conditionally:\nclass Rocket { private int fuel; void takeOff(boolean checkFuel) throws NotEnoughFuelException { if (checkFuel \u0026amp;\u0026amp; this.fuel \u0026lt; 50) { throw new NotEnoughFuelException(); } lockDoors(); igniteThrusters(); } } Ugly, isn\u0026rsquo;t it? And the client code still has to handle the NotEnoughFuelException even if it passes false into the takeOff() method.\nUsing an exception for a business rule prohibits reusability in contexts where the business rule should not be validated. And workarounds like the one above are ugly and expensive to read.\n#4: Exceptions May Interfere with Transactions If you have ever worked with Java\u0026rsquo;s or Spring\u0026rsquo;s @Transactional annotation to demarcate transaction boundaries, you will probably have thought about how exceptions affect transaction behavior.\nTo sum up the way Spring handles exceptions:\n If a runtime exception bubbles out of a method that is annotated with @Transactional, the transaction is marked for rollback. If a checked exception bubbles out of a method that is annotated with @Transactional, the transaction is not marked for rollback (= nothing happens).  
The reasoning behind this is that a checked exception is a valid return value of the method (which makes a checked exception an expected outcome) while a runtime exception is unexpected.\nLet\u0026rsquo;s assume the Rocket class has a @Transactional annotation.\nBecause our NotEnoughFuelException is a checked exception, our try/catch from above would work as expected, without rolling back the current transaction.\nIf NotEnoughFuelException was a runtime exception instead, we could still try to handle the exception like above, only to run into a TransactionRolledBackException or a similar exception as soon as the transaction commits.\nSince the transaction handling code is hidden away behind a simple @Transactional annotation, we\u0026rsquo;re not really aware of the impact of our exceptions. Imagine someone refactoring a checked exception to a runtime exception. Every time this exception now occurs, the transaction will be rolled back where it wasn\u0026rsquo;t before. Dangerous, isn\u0026rsquo;t it?\n#5: Exceptions Evoke Fear Finally, using exceptions to mark failing business rules invokes fear in developers that are trying to understand the codebase, especially if they\u0026rsquo;re new to the project.\nAfter all, each exception marks something that can go wrong, doesn\u0026rsquo;t it? There are so many exceptions to have in mind when working with the code, and we have to handle them all!\nThis tends to make developers very cautious (in the negative sense of the word). Where they would otherwise feel free to refactor code, they will feel restrained instead.\nHow would you feel looking at an unknown codebase that\u0026rsquo;s riddled with exceptions and try/catch blocks, knowing you have to work with that code for the next couple years?\nWhat to Do Instead of Business Exceptions? The alternative to using business exceptions is pretty simple. 
Just use plain code to validate your business rules instead of exceptions:\nclass Rocket { private int fuel; void takeOff() { lockDoors(); igniteThrusters(); } boolean hasEnoughFuel(){ return this.fuel \u0026gt;= 50; } } class FlightControl { void startWithFuelCheck(){ Rocket rocket = new Rocket(); if(!rocket.hasEnoughFuel()){ rocket.fillTanks(); } rocket.takeOff(); } void startWithoutFuelCheck(){ Rocket rocket = new Rocket(); rocket.takeOff(); } } Instead of forcing each client to handle a NotEnoughFuelException, we let the client check if there is enough fuel available. With this simple change, we have achieved the following:\n If we stumble upon an exception, it really is an exception, as the expected control flow doesn\u0026rsquo;t throw an exception at all (#1). We have used normal code for normal control flow which is much more readable than try/catch blocks (#2). The takeOff() method is reusable in different contexts, like taking off with less than optimal fuel (#3). We have no exception that might or might not interfere with any database transactions (#4). We have no exception that evokes fear in the new guy who just joined the team (#5).  You might notice that this solution moves the responsibility of checking for business rules one layer up, from the Rocket class to the FlightControl class. This might feel like we\u0026rsquo;re giving up control of our business rules, since the clients of the Rocket class now have to check for the business rules themselves.\nYou might notice, too, however, that the business rule itself is still in the Rocket class, within the hasEnoughFuel() method. The client only has to invoke the business rule, not know about the internals.\nYes, we have moved a responsibility away from our domain object. 
But we have gained a lot of flexibility, readability, and understandability on the way.\nConclusion Using exceptions, both checked and unchecked, for marking failed business rules makes code less readable and flexible for several reasons.\nBy moving the invocation of business rules out of a domain object and into a use case, we can avoid having to throw an exception when a business rule fails. The use case decides if the business rule should be validated or not, since there might be valid reasons not to validate a certain rule.\nWhat are your reasons to use / not to use business exceptions?\n","date":"June 4, 2019","image":"https://reflectoring.io/images/stock/0011-exception-1200x628-branded_hu5c84ec643e645bced334d00cceee0833_119970_650x0_resize_q90_box.jpg","permalink":"/business-exceptions/","title":"5 Reasons Why Business Exceptions Are a Bad Idea"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you want to get more productive at knowledge work (yes, software development is knowledge work!) you feel like you\u0026rsquo;re drowning in \u0026ldquo;shallow\u0026rdquo; work like emails and phone calls you need arguments to defend a change of work style towards your colleagues and your boss  Overview In his book \u0026ldquo;Deep Work\u0026rdquo;, Cal Newport gives a name to the productive state of \u0026ldquo;flow\u0026rdquo; most of us like to attain at work but which we can rarely maintain for more than a couple of minutes when the next emergency interrupts our train of thought.\nNewport defines \u0026ldquo;Deep Work\u0026rdquo; as:\n \u0026ldquo;Deep Work: Professional activities performed in a state of distraction-free concentration that push your cognitive capabilities to their limit. 
These efforts create new value, improve your skill, and are hard to replicate.\u0026rdquo;\n The book is all about how to create an environment in which Deep Work is possible and how to reduce the time spent on \u0026ldquo;Shallow Work\u0026rdquo;:\n \u0026ldquo;Shallow Work: Noncognitively demanding, logistical-style tasks, often performed while distracted. These efforts tend to not create much new value in the world and are easy to replicate.\u0026rdquo;\n The book is structured in two parts. The first part motivates Deep Work by stating that Deep Work is valuable, rare and meaningful. The second part describes four rules that help to facilitate Deep Work.\nLikes \u0026amp; Dislikes As you might guess from the use of words like \u0026ldquo;noncognitively demanding\u0026rdquo; in the above definition of Shallow Work, Cal Newport is an academic. And he lets us know that he is a very successful one on every other page of the book. The writing style is pleasantly conversational, however.\nNewport tells a lot of anecdotes about his own academic work and about that of other important people in the world. The anecdotes always prove a certain point, so they definitely serve a purpose. In my opinion, however, the points could have been proven with fewer anecdotes and fewer words, as the anecdotes take up a significant share of the text.\nThe book starts a little slow. I had some trouble staying motivated through the first part which goes into details about why Deep Work is important. The chapters are very long, with sub-headings in between. I like it better if chapters are short and I can read through them in a single session.\nThe second part of the book was worth every cent, however. It provides very actionable tips on how to plan for Deep Work and how to make the best of the time you set aside for it.\nKey Takeaways The second part of the book is full of tips on doing Deep Work. 
Here are my key takeaways in no particular order:\n Schedule time for Deep Work, ideally in a rhythmic fashion to establish a habit. Schedule every minute of your day in order to keep shallow distractions at bay. Consciously decide for every entry in your schedule if it\u0026rsquo;s deep or shallow to set the mood. Take breaks from focus - don\u0026rsquo;t take breaks from distraction. Schedule breaks from focused work regularly. Set impossible deadlines. The only way to keep an impossible deadline is focused work. Give yourself a budget of Shallow Work and don\u0026rsquo;t overspend it. Ritualize where you work and how you work. Create rules that help you to focus. Quit social media because it\u0026rsquo;s a shallow distraction. Be hard to reach to avoid shallow distractions. You needn\u0026rsquo;t be alone for Deep Work. Collaborative Deep Work is possible (Newport calls it the \u0026ldquo;Whiteboard Effect\u0026rdquo;). This doesn\u0026rsquo;t mean that Open Space is the best office layout, though. Execute like a business. Focus on the important, measure your deep work time and results and keep track of them on a scoreboard, and do a regular review. This is called the \u0026ldquo;4 Disciplines of Execution\u0026rdquo; (4DX) Framework Have a weekly rendezvous with yourself to review your achievements and plan out the next week. Don\u0026rsquo;t extend your work day into the evening to do Deep Work, because it\u0026rsquo;s most likely not productive. Establish a \u0026ldquo;shutdown ritual\u0026rdquo; to follow every day after work in which you check the status of today\u0026rsquo;s tasks and your calendar for the next day. This helps to free your mind to let go until the next day. Take downtimes away from work seriously. They help to recharge. Meditate productively on Deep Work problems when running, driving, or otherwise not mentally engaged. Identify the high-level goals you want to reach and the key activities that help you reach them.  
Conclusion Even though I don\u0026rsquo;t particularly like the anecdotal writing style, \u0026ldquo;Deep Work\u0026rdquo; was very enlightening. Once through the book, I started to apply some of the tips and successfully created a habit of doing deep work every morning before work which allowed me to write most of my eBook, prepare and hold two conference talks and write articles that tripled the visitors to my blog - all within about 5 months.\nI definitely recommend it for anyone who is interested in creating a productive environment for cognitively demanding work.\n","date":"May 12, 2019","image":"https://reflectoring.io/images/covers/deep-work-teaser_hu2106e815d014beac7a6bd17d359f8252_143065_650x0_resize_q90_box.jpg","permalink":"/book-review-deep-work/","title":"Book Review: Deep Work"},{"categories":["Spring Boot"],"contents":"As users of a web application, we\u0026rsquo;re expecting pages to load quickly and only show the information that\u0026rsquo;s relevant to us. For pages that show a list of items, this means only displaying a portion of the items, and not all of them at once.\nOnce the first page has loaded quickly, the UI can provide options like filters, sorting and pagination that help the user to quickly find the items he or she is looking for.\nIn this tutorial, we\u0026rsquo;re examining Spring Data\u0026rsquo;s paging support and creating examples of how to use and configure it along with some information about how it works under the covers.\n Example Code This article is accompanied by a working code example on GitHub. Paging vs. Pagination The terms \u0026ldquo;paging\u0026rdquo; and \u0026ldquo;pagination\u0026rdquo; are often used as synonyms. They don\u0026rsquo;t exactly mean the same, however. 
After consulting various web dictionaries, I\u0026rsquo;ve cobbled together the following definitions, which I\u0026rsquo;ll use in this text:\nPaging is the act of loading one page of items after another from a database, in order to preserve resources. This is what most of this article is about.\nPagination is the UI element that provides a sequence of page numbers to let the user choose which page to load next.\nInitializing the Example Project We\u0026rsquo;re using Spring Boot to bootstrap a project in this tutorial. You can create a similar project by using Spring Initializr and choosing the following dependencies:\n Web JPA H2 Lombok  I additionally replaced JUnit 4 with JUnit 5, so that the resulting dependencies look like this (Gradle notation):\ndependencies { implementation \u0026#39;org.springframework.boot:spring-boot-starter-data-jpa\u0026#39; implementation \u0026#39;org.springframework.boot:spring-boot-starter-web\u0026#39; compileOnly \u0026#39;org.projectlombok:lombok\u0026#39; annotationProcessor \u0026#39;org.projectlombok:lombok\u0026#39; runtimeOnly \u0026#39;com.h2database:h2\u0026#39; testImplementation(\u0026#39;org.junit.jupiter:junit-jupiter:5.4.0\u0026#39;) testImplementation(\u0026#39;org.springframework.boot:spring-boot-starter-test\u0026#39;){ exclude group: \u0026#39;junit\u0026#39;, module: \u0026#39;junit\u0026#39; } } Spring Data\u0026rsquo;s Pageable No matter if we want to do conventional pagination, infinite scrolling or simple \u0026ldquo;previous\u0026rdquo; and \u0026ldquo;next\u0026rdquo; links, the implementation in the backend is the same.\nIf the client only wants to display a \u0026ldquo;slice\u0026rdquo; of a list of items, it needs to provide some input parameters that describe this slice. In Spring Data, these parameters are bundled within the Pageable interface. 
It provides the following methods, among others (comments are mine):\npublic interface Pageable { // number of the current page  int getPageNumber(); // size of the pages  int getPageSize(); // sorting parameters  Sort getSort(); // ... more methods } Whenever we want to load only a slice of a full list of items, we can use a Pageable instance as an input parameter, as it provides the number of the page to load as well as the size of the pages. Through the Sort class, it also allows us to define fields to sort by and the direction in which they should be sorted (ascending or descending).\nThe most common way to create a Pageable instance is to use the PageRequest implementation:\nPageable pageable = PageRequest.of(0, 5, Sort.by( Order.asc(\u0026#34;name\u0026#34;), Order.desc(\u0026#34;id\u0026#34;))); This will create a request for the first page with 5 items ordered first by name (ascending) and second by id (descending). Note that the page index is zero-based by default!\nConfusion with java.awt.print.Pageable?  When working with Pageable, you'll notice that your IDE will sometimes propose to import java.awt.print.Pageable instead of Spring's Pageable class. Since we most probably don't need any classes from the java.awt package, we can tell our IDE to ignore it altogether.  In IntelliJ, go to \"General - Editor - Auto Import\" in the settings and add java.awt.* to the list labelled \"Exclude from import and completion\".  In Eclipse, go to \"Java - Appearance - Type Filters\" in the preferences and add java.awt.* to the package list.  Spring Data\u0026rsquo;s Page and Slice While Pageable bundles the input parameters of a paging request, the Page and Slice interfaces provide metadata for a page of items that is returned to the client (comments are mine):\npublic interface Page\u0026lt;T\u0026gt; extends Slice\u0026lt;T\u0026gt;{ // total number of pages  int getTotalPages(); // total number of items  long getTotalElements(); // ... 
more methods  } public interface Slice\u0026lt;T\u0026gt; { // current page number  int getNumber(); // page size  int getSize(); // number of items on the current page  int getNumberOfElements(); // list of items on this page  List\u0026lt;T\u0026gt; getContent(); // ... more methods  } With the data provided by the Page interface, the client has all the information it needs to provide pagination functionality.\nWe can use the Slice interface instead, if we don\u0026rsquo;t need the total number of items or pages, for instance if we only want to provide \u0026ldquo;previous page\u0026rdquo; and \u0026ldquo;next page\u0026rdquo; buttons and have no need for \u0026ldquo;first page\u0026rdquo; and \u0026ldquo;last page\u0026rdquo; buttons.\nThe most common implementation of the Page interface is provided by the PageImpl class:\nPageable pageable = ...; List\u0026lt;MovieCharacter\u0026gt; listOfCharacters = ...; long totalCharacters = 100; Page\u0026lt;MovieCharacter\u0026gt; page = new PageImpl\u0026lt;\u0026gt;(listOfCharacters, pageable, totalCharacters); Paging in a Web Controller If we want to return a Page (or Slice) of items from a web controller, the controller method needs to accept a Pageable parameter that defines the paging parameters, pass it on to the database, and then return a Page object to the client.\nActivating Spring Data Web Support Paging has to be supported by the underlying persistence layer in order to deliver paged answers to any queries. 
This is why the Pageable and Page classes originate from the Spring Data module, and not, as one might suspect, from the Spring Web module.\nIn a Spring Boot application with auto-configuration enabled (which is the default), we don\u0026rsquo;t have to do anything since it will load the SpringDataWebAutoConfiguration by default, which includes the @EnableSpringDataWebSupport annotation that loads the necessary beans.\nIn a plain Spring application without Spring Boot, we have to use @EnableSpringDataWebSupport on a @Configuration class ourselves:\n@Configuration @EnableSpringDataWebSupport class PaginationConfiguration { } If we\u0026rsquo;re using Pageable or Sort arguments in web controller methods without having activated Spring Data Web support, we\u0026rsquo;ll get exceptions like these:\njava.lang.NoSuchMethodException: org.springframework.data.domain.Pageable.\u0026lt;init\u0026gt;() java.lang.NoSuchMethodException: org.springframework.data.domain.Sort.\u0026lt;init\u0026gt;() These exceptions mean that Spring tries to create a Pageable or Sort instance and fails because they don\u0026rsquo;t have a default constructor.\nThis is fixed by the Spring Data Web support, since it adds the PageableHandlerMethodArgumentResolver and SortHandlerMethodArgumentResolver beans to the application context, which are responsible for finding web controller method arguments of types Pageable and Sort and populating them with the values of the page, size, and sort query parameters.\nAccepting a Pageable Parameter With the Spring Data Web support enabled, we can simply use a Pageable as an input parameter to a web controller method and return a Page object to the client:\n@RestController @RequiredArgsConstructor class PagedController { private final MovieCharacterRepository characterRepository; @GetMapping(path = \u0026#34;/characters/page\u0026#34;) Page\u0026lt;MovieCharacter\u0026gt; loadCharactersPage(Pageable pageable) { return characterRepository.findAllPage(pageable); } 
} An integration test shows that the query parameters page, size, and sort are now evaluated and \u0026ldquo;injected\u0026rdquo; into the Pageable argument of our web controller method:\n@WebMvcTest(controllers = PagedController.class) class PagedControllerTest { @MockBean private MovieCharacterRepository characterRepository; @Autowired private MockMvc mockMvc; @Test void evaluatesPageableParameter() throws Exception { mockMvc.perform(get(\u0026#34;/characters/page\u0026#34;) .param(\u0026#34;page\u0026#34;, \u0026#34;5\u0026#34;) .param(\u0026#34;size\u0026#34;, \u0026#34;10\u0026#34;) .param(\u0026#34;sort\u0026#34;, \u0026#34;id,desc\u0026#34;) // \u0026lt;-- no space after comma!  .param(\u0026#34;sort\u0026#34;, \u0026#34;name,asc\u0026#34;)) // \u0026lt;-- no space after comma!  .andExpect(status().isOk()); ArgumentCaptor\u0026lt;Pageable\u0026gt; pageableCaptor = ArgumentCaptor.forClass(Pageable.class); verify(characterRepository).findAllPage(pageableCaptor.capture()); PageRequest pageable = (PageRequest) pageableCaptor.getValue(); assertThat(pageable).hasPageNumber(5); assertThat(pageable).hasPageSize(10); assertThat(pageable).hasSort(\u0026#34;name\u0026#34;, Sort.Direction.ASC); assertThat(pageable).hasSort(\u0026#34;id\u0026#34;, Sort.Direction.DESC); } } The test captures the Pageable parameter passed into the repository method and verifies that it has the properties defined by the query parameters.\nNote that I used a custom AssertJ assertion to create readable assertions on the Pageable instance.\nAlso note that in order to sort by multiple fields, we must provide the sort query parameter multiple times. Each may consist of simply a field name, assuming ascending order, or a field name with an order, separated by a comma without spaces. 
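This "field,direction" convention can be sketched in plain Java. The following is a toy parser for illustration only, not Spring's actual implementation; the class name SortParamDemo is made up:

```java
import java.util.Locale;

// Toy illustration of the "field,direction" format of the sort query
// parameter -- not Spring's actual parser, just a sketch of the convention.
public class SortParamDemo {

    // Splits "name,asc" into a field and a direction; a bare field name
    // defaults to ascending order.
    public static String[] parseSortParam(String param) {
        String[] parts = param.split(",");
        String field = parts[0];
        String direction = parts.length > 1
                ? parts[1].toUpperCase(Locale.ROOT)
                : "ASC";
        return new String[] { field, direction };
    }

    public static void main(String[] args) {
        String[] first = parseSortParam("id,desc");
        String[] second = parseSortParam("name");
        System.out.println(first[0] + " " + first[1]);   // field "id", descending
        System.out.println(second[0] + " " + second[1]); // field "name", defaults to ascending
    }
}
```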
If there is a space between the field name and the order, the order will not be evaluated.\nAccepting a Sort Parameter Similarly, we can use a standalone Sort argument in a web controller method:\n@RestController @RequiredArgsConstructor class PagedController { private final MovieCharacterRepository characterRepository; @GetMapping(path = \u0026#34;/characters/sorted\u0026#34;) List\u0026lt;MovieCharacter\u0026gt; loadCharactersSorted(Sort sort) { return characterRepository.findAllSorted(sort); } } Naturally, a Sort object is populated only with the value of the sort query parameter, as this test shows:\n@WebMvcTest(controllers = PagedController.class) class PagedControllerTest { @MockBean private MovieCharacterRepository characterRepository; @Autowired private MockMvc mockMvc; @Test void evaluatesSortParameter() throws Exception { mockMvc.perform(get(\u0026#34;/characters/sorted\u0026#34;) .param(\u0026#34;sort\u0026#34;, \u0026#34;id,desc\u0026#34;) // \u0026lt;-- no space after comma!!!  .param(\u0026#34;sort\u0026#34;, \u0026#34;name,asc\u0026#34;)) // \u0026lt;-- no space after comma!!!  
.andExpect(status().isOk()); ArgumentCaptor\u0026lt;Sort\u0026gt; sortCaptor = ArgumentCaptor.forClass(Sort.class); verify(characterRepository).findAllSorted(sortCaptor.capture()); Sort sort = sortCaptor.getValue(); assertThat(sort).hasSort(\u0026#34;name\u0026#34;, Sort.Direction.ASC); assertThat(sort).hasSort(\u0026#34;id\u0026#34;, Sort.Direction.DESC); } } Customizing Global Paging Defaults If we don\u0026rsquo;t provide the page, size, or sort query parameters when calling a controller method with a Pageable argument, it will be populated with default values.\nSpring Boot uses the @ConfigurationProperties feature to bind the following properties to a bean of type SpringDataWebProperties:\nspring.data.web.pageable.size-parameter=size spring.data.web.pageable.page-parameter=page spring.data.web.pageable.default-page-size=20 spring.data.web.pageable.one-indexed-parameters=false spring.data.web.pageable.max-page-size=2000 spring.data.web.pageable.prefix= spring.data.web.pageable.qualifier-delimiter=_ The values above are the default values. Some of these properties are not self-explanatory, so here\u0026rsquo;s what they do:\n with size-parameter we can change the name of the size query parameter with page-parameter we can change the name of the page query parameter with default-page-size we can define the default of the size parameter if no value is given with one-indexed-parameters we can choose if the page parameter starts with 0 or with 1 with max-page-size we can choose the maximum value allowed for the size query parameter (values larger than this will be reduced) with prefix we can define a prefix for the page and size query parameter names (not for the sort parameter!)  The qualifier-delimiter property is a very special case. 
We can use the @Qualifier annotation on a Pageable method argument to provide a local prefix for the paging query parameters:\n@RestController class PagedController { @GetMapping(path = \u0026#34;/characters/qualifier\u0026#34;) Page\u0026lt;MovieCharacter\u0026gt; loadCharactersPageWithQualifier( @Qualifier(\u0026#34;my\u0026#34;) Pageable pageable) { ... } } This has a similar effect to the prefix property from above, but it also applies to the sort parameter. The qualifier-delimiter is used to delimit the prefix from the parameter name. In the example above, only the query parameters my_page, my_size and my_sort are evaluated.\nspring.data.web.* Properties are not evaluated?  If changes to the configuration properties above have no effect, the SpringDataWebProperties bean is probably not loaded into the application context.  One reason for this could be that you have used @EnableSpringDataWebSupport to activate the pagination support. This will override SpringDataWebAutoConfiguration, in which the SpringDataWebProperties bean is created. Use @EnableSpringDataWebSupport only in a plain Spring application.  Customizing Local Paging Defaults Sometimes we might want to define default paging parameters for a single controller method only. For this case, we can use the @PageableDefault and @SortDefault annotations:\n@RestController class PagedController { @GetMapping(path = \u0026#34;/characters/page\u0026#34;) Page\u0026lt;MovieCharacter\u0026gt; loadCharactersPage( @PageableDefault(page = 0, size = 20) @SortDefault.SortDefaults({ @SortDefault(sort = \u0026#34;name\u0026#34;, direction = Sort.Direction.DESC), @SortDefault(sort = \u0026#34;id\u0026#34;, direction = Sort.Direction.ASC) }) Pageable pageable) { ... 
} } If no query parameters are given, the Pageable object will now be populated with the default values defined in the annotations.\nNote that the @PageableDefault annotation also has a sort property, but if we want to define multiple fields to sort by in different directions, we have to use @SortDefault.\nPaging in a Spring Data Repository Since the pagination features described in this article come from Spring Data, it comes as no surprise that Spring Data has complete support for pagination. This support is, however, explained very quickly, since we only have to add the right parameters and return values to our repository interfaces.\nPassing Paging Parameters We can simply pass a Pageable or Sort instance into any Spring Data repository method:\ninterface MovieCharacterRepository extends CrudRepository\u0026lt;MovieCharacter, Long\u0026gt; { List\u0026lt;MovieCharacter\u0026gt; findByMovie(String movieName, Pageable pageable); @Query(\u0026#34;select c from MovieCharacter c where c.movie = :movie\u0026#34;) List\u0026lt;MovieCharacter\u0026gt; findByMovieCustom( @Param(\u0026#34;movie\u0026#34;) String movieName, Pageable pageable); @Query(\u0026#34;select c from MovieCharacter c where c.movie = :movie\u0026#34;) List\u0026lt;MovieCharacter\u0026gt; findByMovieSorted( @Param(\u0026#34;movie\u0026#34;) String movieName, Sort sort); } Even though Spring Data provides a PagingAndSortingRepository, we don\u0026rsquo;t have to use it to get paging support. 
It merely provides two convenience findAll methods, one with a Sort and one with a Pageable parameter.\nReturning Page Metadata If we want to return page information to the client instead of a simple list, we simply let our repository methods return a Slice or a Page:\ninterface MovieCharacterRepository extends CrudRepository\u0026lt;MovieCharacter, Long\u0026gt; { Page\u0026lt;MovieCharacter\u0026gt; findByMovie(String movieName, Pageable pageable); @Query(\u0026#34;select c from MovieCharacter c where c.movie = :movie\u0026#34;) Slice\u0026lt;MovieCharacter\u0026gt; findByMovieCustom( @Param(\u0026#34;movie\u0026#34;) String movieName, Pageable pageable); } Every method returning a Slice or Page must have exactly one Pageable parameter, otherwise Spring Data will complain with an exception on startup.\nConclusion The Spring Data Web support makes paging easy in plain Spring applications as well as in Spring Boot applications. It\u0026rsquo;s a matter of activating it and then using the right input and output parameters in controller and repository methods.\nWith Spring Boot\u0026rsquo;s configuration properties, we have fine-grained control over the defaults and parameter names.\nThere are some potential catches though, some of which I have described in the text above, so you don\u0026rsquo;t have to trip over them.\nIf you\u0026rsquo;re missing anything about paging with Spring in this tutorial, let me know in the comments.\nYou can find the example code used in this article on GitHub.\n","date":"March 31, 2019","image":"https://reflectoring.io/images/stock/0012-pages-1200x628-branded_hufb8ee3f5c23483830eda0bab846d2b56_155969_650x0_resize_q90_box.jpg","permalink":"/spring-boot-paging/","title":"Paging with Spring Boot"},{"categories":["Spring Boot"],"contents":"Every application above play size requires some parameters at startup. 
These parameters may, for instance, define which database to connect to, which locale to support or which logging level to apply.\nThese parameters should be externalized, meaning that we should not bake them into a deployable artifact but instead provide them as a command-line argument or a configuration file when starting the application.\nWith the @ConfigurationProperties annotation, Spring Boot provides a convenient way to access such parameters from within the application code.\nThis tutorial goes into the details of this annotation and shows how to use it to configure a Spring Boot application module.\n Example Code This article is accompanied by a working code example on GitHub. Using @ConfigurationProperties to Configure a Module Imagine we\u0026rsquo;re building a module in our application that is responsible for sending emails. In local tests, we don\u0026rsquo;t want the module to actually send emails, so we need a parameter to disable this functionality. Also, we want to be able to configure a default subject for these mails, so we can quickly identify emails in our inbox that have been sent from a test environment.\nSpring Boot offers many different options to pass parameters like these into an application. 
In this article, we choose to create an application.properties file with the parameters we need:\nmyapp.mail.enabled=true myapp.mail.default-subject=This is a Test Within our application, we could now access the values of these properties by asking Spring\u0026rsquo;s Environment bean or by using the @Value annotation, among other things.\nHowever, there\u0026rsquo;s a more convenient and safer way to access those properties by creating a class annotated with @ConfigurationProperties:\n@ConfigurationProperties(prefix = \u0026#34;myapp.mail\u0026#34;) class MailModuleProperties { private Boolean enabled = Boolean.TRUE; private String defaultSubject; // getters / setters  } The basic usage of @ConfigurationProperties is pretty straightforward: we provide a class with fields for each of the external properties we want to capture. Note the following:\n The prefix defines which external properties will be bound to the fields of the class. The class\u0026rsquo;s property names must match the names of the external properties according to Spring Boot\u0026rsquo;s relaxed binding rules. We can define default values by simply initializing a field with a value. The class itself can be package private. The class\u0026rsquo;s fields must have public setters.  If we inject a bean of type MailModuleProperties into another bean, this bean can now access the values of those external configuration parameters in a type-safe manner.\nHowever, we still have to make our @ConfigurationProperties class known to Spring so it will be loaded into the application context.\nActivating @ConfigurationProperties For Spring Boot to create a bean of the MailModuleProperties class, we need to add it to the application context in one of several ways.\nFirst, we can simply let it be part of a component scan by adding the @Component annotation:\n@Component @ConfigurationProperties(prefix = \u0026#34;myapp.mail\u0026#34;) class MailModuleProperties { // ... 
} This obviously only works if the class is within a package that is scanned for Spring\u0026rsquo;s stereotype annotations via @ComponentScan, which by default is any class in the package structure below the main application class.\nWe can achieve the same result using Spring\u0026rsquo;s Java Configuration feature:\n@Configuration class MailModuleConfiguration { @Bean public MailModuleProperties mailModuleProperties(){ return new MailModuleProperties(); } } As long as the MailModuleConfiguration class is scanned by the Spring Boot application, we\u0026rsquo;ll have access to a MailModuleProperties bean in the application context.\nAlternatively, we can use the @EnableConfigurationProperties annotation to make our class known to Spring Boot:\n@Configuration @EnableConfigurationProperties(MailModuleProperties.class) class MailModuleConfiguration { } Which is the Best Way to activate a @ConfigurationProperties Class?  All of the above ways are equally valid. I would suggest, however, modularizing your application and having each module provide its own @ConfigurationProperties class with only the properties it needs as we have done for the mail module in the code above. This makes it easy to refactor properties in one module without affecting other modules.  For this reason, I would not recommend using @EnableConfigurationProperties on the application class itself, as is shown in many other tutorials, but instead on a module-specific @Configuration class which might also make use of package-private visibility to hide the properties from the rest of the application.  Failing on Unconvertible Properties What happens if we define a property in our application.properties that cannot be interpreted correctly? 
Say we provide the value 'foo' for our enabled property that expects a boolean:\nmyapp.mail.enabled=foo By default, Spring Boot will refuse to start the application with an exception:\njava.lang.IllegalArgumentException: Invalid boolean value \u0026#39;foo\u0026#39; If, for any reason, we don\u0026rsquo;t want Spring Boot to fail in cases like this, we can set the ignoreInvalidFields parameter to true (default is false):\n@ConfigurationProperties(prefix = \u0026#34;myapp.mail\u0026#34;, ignoreInvalidFields = true) class MailModuleProperties { private Boolean enabled = Boolean.TRUE; // getters / setters } In this case, Spring Boot will set the enabled field to the default value we defined in the Java code. If we don\u0026rsquo;t initialize the field in the Java code, it would be null.\nFailing on Unknown Properties What happens if we have provided certain properties in our application.properties file that our MailModuleProperties class doesn\u0026rsquo;t know?\nmyapp.mail.enabled=true myapp.mail.default-subject=This is a Test myapp.mail.unknown-property=foo By default, Spring Boot will simply ignore properties that could not be bound to a field in a @ConfigurationProperties class.\nWe might, however, want to fail startup when there is a property in the configuration file that is not actually bound to a @ConfigurationProperties class. 
Maybe we have previously used this configuration property but it has been removed since, so we want to be triggered to remove it from the application.properties file as well.\nIf we want startup to fail on unknown properties, we can simply set the ignoreUnknownFields parameter to false (default is true):\n@ConfigurationProperties(prefix = \u0026#34;myapp.mail\u0026#34;, ignoreUnknownFields = false) class MailModuleProperties { private Boolean enabled = Boolean.TRUE; private String defaultSubject; // getters / setters } We\u0026rsquo;ll now be rewarded with an exception on application startup that tells us that a certain property could not be bound to a field in our MailModuleProperties class since there was no matching field:\norg.springframework.boot.context.properties.bind.UnboundConfigurationPropertiesException: The elements [myapp.mail.unknown-property] were left unbound. Deprecation Warning  The parameter ignoreUnknownFields is to be deprecated in a future Spring Boot version. The reason is that we could have two @ConfigurationProperties classes bound to the same namespace. A property might be known to one of those classes and unknown to the other, causing a startup failure although we have two perfectly valid configurations.  
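As an aside, the "relaxed binding" mentioned earlier maps kebab-case property names like default-subject onto camelCase field names like defaultSubject. The naming conversion can be sketched in plain Java; this is a toy illustration of the convention, not Spring Boot's actual binder, and the class name RelaxedBindingDemo is made up:

```java
// Toy illustration of one of Spring Boot's relaxed binding rules:
// the kebab-case property name "default-subject" binds to the
// camelCase field "defaultSubject". Not Spring's implementation,
// just a sketch of the naming convention.
public class RelaxedBindingDemo {

    // Converts a kebab-case property name to the camelCase field name it binds to.
    public static String toFieldName(String propertyName) {
        StringBuilder sb = new StringBuilder();
        boolean upperNext = false;
        for (char c : propertyName.toCharArray()) {
            if (c == '-') {
                upperNext = true; // drop the dash, uppercase the next character
            } else {
                sb.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toFieldName("default-subject")); // defaultSubject
        System.out.println(toFieldName("enabled"));         // enabled
    }
}
```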
Validating @ConfigurationProperties on Startup If we want to make sure that the configuration parameters passed into the application are valid, we can add bean validation annotations to the fields and the @Validated annotation to the class itself:\n@ConfigurationProperties(prefix = \u0026#34;myapp.mail\u0026#34;) @Validated class MailModuleProperties { @NotNull private Boolean enabled = Boolean.TRUE; @NotEmpty private String defaultSubject; // getters / setters } If we now forget to set the enabled property in our application.properties file and leave the defaultSubject empty, we\u0026rsquo;ll get a BindValidationException on startup:\nmyapp.mail.default-subject= org.springframework.boot.context.properties.bind.validation.BindValidationException: Binding validation errors on myapp.mail - Field error in object \u0026#39;myapp.mail\u0026#39; on field \u0026#39;enabled\u0026#39;: rejected value [null]; ... - Field error in object \u0026#39;myapp.mail\u0026#39; on field \u0026#39;defaultSubject\u0026#39;: rejected value []; ... If we need a validation that\u0026rsquo;s not supported by the default bean validation annotations, we can create a custom bean validation annotation.\nAnd if our validation logic is too special for bean validation, we can implement it in a method annotated with @PostConstruct that throws an exception if the validation fails.\nComplex Property Types Most parameters we want to pass into our application are primitive strings or numbers. In some cases, though, we have a parameter that we\u0026rsquo;d like to bind to a field in our @ConfigurationProperties class that has a complex datatype like a List.\nLists and Sets Imagine we need to provide a list of SMTP servers to our mail module. 
We can simply add a List field to our MailModuleProperties class:\n@ConfigurationProperties(prefix = \u0026#34;myapp.mail\u0026#34;) class MailModuleProperties { private List\u0026lt;String\u0026gt; smtpServers; // getters / setters  } Spring Boot automatically fills this list if we use the array notation in our application.properties file:\nmyapp.mail.smtpServers[0]=server1 myapp.mail.smtpServers[1]=server2 YAML has built-in support for list types, so if we use an application.yml instead, the configuration file is more readable for us humans:\nmyapp: mail: smtp-servers: - server1 - server2 We can bind parameters to Set fields in the same way.\nDurations Spring Boot has built-in support for parsing durations from a configuration parameter:\n@ConfigurationProperties(prefix = \u0026#34;myapp.mail\u0026#34;) class MailModuleProperties { private Duration pauseBetweenMails; // getters / setters  } This duration can either be provided as a long to indicate milliseconds or in a textual, human-readable way that includes the unit (one of ns, us, ms, s, m, h, d):\nmyapp.mail.pause-between-mails=5s File Sizes In a very similar manner, we can provide configuration parameters that define a file size:\n@ConfigurationProperties(prefix = \u0026#34;myapp.mail\u0026#34;) class MailModuleProperties { private DataSize maxAttachmentSize; // getters / setters  } The DataSize type is provided by the Spring Framework itself. We can now provide a file size configuration parameter as a long to indicate the number of bytes or with a unit (one of B, KB, MB, GB, TB):\nmyapp.mail.max-attachment-size=1MB Custom Types In rare cases, we might want to parse a configuration parameter into a custom value object. 
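Such a custom value object could, for example, expose a static valueOf factory that turns the raw string into an instance. Here is a minimal, hypothetical Weight sketch that handles only a "kg" suffix and has no Spring dependency:

```java
// Hypothetical Weight value object. A static valueOf(String) factory is
// one of the hooks Spring Boot can use to bind a property value like
// "5kg" to a Weight field. This is only a sketch: it handles nothing
// but a "kg" suffix.
public class Weight {

    private final long kilograms;

    private Weight(long kilograms) {
        this.kilograms = kilograms;
    }

    // Parses strings like "5kg" into a Weight instance.
    public static Weight valueOf(String value) {
        if (!value.endsWith("kg")) {
            throw new IllegalArgumentException("unsupported weight format: " + value);
        }
        return new Weight(Long.parseLong(value.substring(0, value.length() - 2)));
    }

    public long getKilograms() {
        return kilograms;
    }
}
```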
Imagine that we want to provide the (hypothetical) maximum attachment weight for an email:\nmyapp.mail.max-attachment-weight=5kg We want to bind this property to a field of our custom type Weight:\n@ConfigurationProperties(prefix = \u0026#34;myapp.mail\u0026#34;) class MailModuleProperties { private Weight maxAttachmentWeight; // getters / setters } There are two lightweight options to make Spring Boot automatically parse the String ('5kg') into an object of type Weight:\n the Weight class provides a constructor that takes a single String ('5kg') as an argument, or the Weight class provides a static valueOf method that takes a single String as an argument and returns a Weight object.  If we cannot provide a constructor or a valueOf method, we\u0026rsquo;re stuck with the slightly more invasive option of creating a custom converter:\nclass WeightConverter implements Converter\u0026lt;String, Weight\u0026gt; { @Override public Weight convert(String source) { // create and return a Weight object from the String  } } Once we have created our converter, we have to make it known to Spring Boot:\n@Configuration class MailModuleConfiguration { @Bean @ConfigurationPropertiesBinding public WeightConverter weightConverter() { return new WeightConverter(); } } It\u0026rsquo;s important to add the @ConfigurationPropertiesBinding annotation to let Spring Boot know that this converter is needed during the binding of configuration properties.\nEmail Attachments with a Weight?  Obviously, emails cannot have \"real\" attachments with a weight. I'm quite aware of this. I had a hard time coming up with an example for a custom configuration type, though, since this is a rare case indeed.  Using the Spring Boot Configuration Processor for Auto-Completion Ever wanted auto-completion for any of Spring Boot\u0026rsquo;s built-in configuration parameters? 
Or your own configuration properties?\nSpring Boot provides a configuration processor that collects data from all @ConfigurationProperties annotations it finds in the classpath to create a JSON file with some metadata. IDEs can use this JSON file to provide features like auto-completion.\nAll we have to do is add a dependency on the configuration processor to our project (Gradle notation):\ndependencies { ... annotationProcessor \u0026#39;org.springframework.boot:spring-boot-configuration-processor\u0026#39; } When we build our project, the configuration processor now creates a JSON file that looks something like this:\n{ \u0026#34;groups\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;myapp.mail\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;io.reflectoring.configuration.mail.MailModuleProperties\u0026#34;, \u0026#34;sourceType\u0026#34;: \u0026#34;io.reflectoring.configuration.mail.MailModuleProperties\u0026#34; } ], \u0026#34;properties\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;myapp.mail.enabled\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;java.lang.Boolean\u0026#34;, \u0026#34;sourceType\u0026#34;: \u0026#34;io.reflectoring.configuration.mail.MailModuleProperties\u0026#34;, \u0026#34;defaultValue\u0026#34;: true }, { \u0026#34;name\u0026#34;: \u0026#34;myapp.mail.default-subject\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;java.lang.String\u0026#34;, \u0026#34;sourceType\u0026#34;: \u0026#34;io.reflectoring.configuration.mail.MailModuleProperties\u0026#34; } ], \u0026#34;hints\u0026#34;: [] } IntelliJ To get auto-completion in IntelliJ, we just install the Spring Assistant plugin. If we now hit CMD+Space in an application.properties or application.yml file, we get an auto-completion popup:\nEclipse I\u0026rsquo;d like to provide information about how to use the auto-completion feature for configuration properties in Eclipse, but I didn\u0026rsquo;t get it to work. If you have successfully done so, please let me know in the comments. 
I\u0026rsquo;d love to put that information here.\nMarking a Configuration Property as Deprecated A nice feature of the configuration processor is that it allows us to mark properties as deprecated:\n@ConfigurationProperties(prefix = \u0026#34;myapp.mail\u0026#34;) class MailModuleProperties { private String defaultSubject; @DeprecatedConfigurationProperty( reason = \u0026#34;not needed anymore\u0026#34;, replacement = \u0026#34;none\u0026#34;) public String getDefaultSubject(){ return this.defaultSubject; } // setter  } We can simply add the @DeprecatedConfigurationProperty annotation to a field of our @ConfigurationProperties class and the configuration processor will include deprecation information in the metadata:\n... { \u0026#34;name\u0026#34;: \u0026#34;myapp.mail.default-subject\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;java.lang.String\u0026#34;, \u0026#34;sourceType\u0026#34;: \u0026#34;io.reflectoring.configuration.mail.MailModuleProperties\u0026#34;, \u0026#34;deprecated\u0026#34;: true, \u0026#34;deprecation\u0026#34;: { \u0026#34;reason\u0026#34;: \u0026#34;not needed anymore\u0026#34;, \u0026#34;replacement\u0026#34;: \u0026#34;none\u0026#34; } } ... 
This information is then provided to us when typing away in the properties file (IntelliJ, in this case):\nConclusion Spring Boot\u0026rsquo;s @ConfigurationProperties annotation is a powerful tool to bind configuration parameters to type-safe fields in a Java bean.\nInstead of simply creating one configuration bean for our application, we can take advantage of this feature to create a separate configuration bean for each of our modules, giving us the flexibility to evolve each module separately not only in code, but also in configuration.\n","date":"March 18, 2019","image":"https://reflectoring.io/images/stock/0013-switchboard-1200x628-branded_hu4e75c8ecd0e5246b9132ae3e09f147a6_167298_650x0_resize_q90_box.jpg","permalink":"/spring-boot-configuration-properties/","title":"Configuring a Spring Boot Module with @ConfigurationProperties"},{"categories":["Spring Boot"],"contents":"When building a Spring Boot app, we sometimes want to only load beans or modules into the application context if some condition is met. Be it to disable some beans during tests or to react to a certain property in the runtime environment.\nSpring has introduced the @Conditional annotation that allows us to define custom conditions to apply to parts of our application context. Spring Boot builds on top of that and provides some pre-defined conditions so we don\u0026rsquo;t have to implement them ourselves.\nIn this tutorial, we\u0026rsquo;ll have a look at some use cases that explain why we would need conditionally loaded beans at all. Then, we\u0026rsquo;ll see how to apply conditions and which conditions Spring Boot offers. To round things up, we\u0026rsquo;ll also implement a custom condition.\n Example Code This article is accompanied by a working code example on GitHub. Why do we need Conditional Beans? A Spring application context contains an object graph that makes up all the beans that our application needs at runtime. 
Spring\u0026rsquo;s @Conditional annotation allows us to define conditions under which a certain bean is included into that object graph.\nWhy would we need to include or exclude beans under certain conditions?\nIn my experience, the most common use case is that certain beans don\u0026rsquo;t work in a test environment. They might require a connection to a remote system or an application server that is not available during tests. So, we want to modularize our tests to exclude or replace these beans during tests.\nAnother use case is that we want to enable or disable a certain cross-cutting concern. Imagine that we have built a module that configures security. During developer tests, we don\u0026rsquo;t want to type in our usernames and passwords every time, so we flip a switch and disable the whole security module for local tests.\nAlso, we might want to load certain beans only if some external resource is available without which they cannot work. For instance, we want to configure our Logback logger only if a logback.xml file has been found on the classpath.\nWe\u0026rsquo;ll see some more use cases in the discussion below.\nDeclaring Conditional Beans Anywhere we define a Spring bean, we can optionally add a condition. Only if this condition is satisfied will the bean be added to the application context. To declare a condition, we can use any of the @Conditional... annotations that are described below.\nBut first, let\u0026rsquo;s look at how to apply a condition to a certain Spring bean.\nConditional @Bean If we add a condition to a single @Bean definition, this bean is only loaded if the condition is met:\n@Configuration class ConditionalBeanConfiguration { @Bean @Conditional... // \u0026lt;--  ConditionalBean conditionalBean(){ return new ConditionalBean(); }; } Conditional @Configuration If we add a condition to a Spring @Configuration, all beans contained within this configuration will only be loaded if the condition is met:\n@Configuration @Conditional... 
// \u0026lt;-- class ConditionalConfiguration { @Bean Bean bean(){ ... }; } Conditional @Component Finally, we can add a condition to any bean declared with one of the stereotype annotations @Component, @Service, @Repository, or @Controller:\n@Component @Conditional... // \u0026lt;-- class ConditionalComponent { } Pre-Defined Conditions Spring Boot offers some pre-defined @ConditionalOn... annotations that we can use out of the box. Let\u0026rsquo;s have a look at each one in turn.\n@ConditionalOnProperty The @ConditionalOnProperty annotation is, in my experience, the most commonly used conditional annotation in Spring Boot projects. It allows us to load beans conditionally depending on a certain environment property:\n@Configuration @ConditionalOnProperty( value=\u0026#34;module.enabled\u0026#34;, havingValue = \u0026#34;true\u0026#34;, matchIfMissing = true) class CrossCuttingConcernModule { ... } The CrossCuttingConcernModule is only loaded if the module.enabled property has the value true. If the property is not set at all, it will still be loaded, because we have defined matchIfMissing as true. This way, we have created a module that is loaded by default until we decide otherwise.\nIn the same way we might create other modules for cross-cutting concerns like security or scheduling that we might want to disable in a certain (test) environment.\n@ConditionalOnExpression If we have a more complex condition based on multiple properties, we can use @ConditionalOnExpression:\n@Configuration @ConditionalOnExpression( \u0026#34;${module.enabled:true} and ${module.submodule.enabled:true}\u0026#34; ) class SubModule { ... } The SubModule is only loaded if both properties module.enabled and module.submodule.enabled have the value true. By appending :true to the properties we tell Spring to use true as a default value in case the properties have not been set. 
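The :true suffix follows Spring's ${property:default} placeholder convention. As a plain-Java sketch of that lookup logic (a deliberate simplification, not Spring's actual placeholder resolver):

```java
import java.util.Map;

public class PlaceholderSketch {

    // Simplified model of ${key:default} resolution: use the configured
    // value if present, otherwise fall back to the default after the colon.
    static String resolve(Map<String, String> properties, String key, String defaultValue) {
        return properties.getOrDefault(key, defaultValue);
    }

    public static void main(String[] args) {
        // Only module.enabled is set explicitly; module.submodule.enabled is not.
        Map<String, String> properties = Map.of("module.enabled", "false");

        // ${module.enabled:true} resolves to the configured value ...
        System.out.println(resolve(properties, "module.enabled", "true"));

        // ... while ${module.submodule.enabled:true} falls back to the default.
        System.out.println(resolve(properties, "module.submodule.enabled", "true"));
    }
}
```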
We can use the full extent of the Spring Expression Language.\nThis way we can, for instance, create sub modules that should be disabled if the parent module is disabled, but can also be disabled if the parent module is enabled.\n@ConditionalOnBean Sometimes, we might want to load a bean only if a certain other bean is available in the application context:\n@Configuration @ConditionalOnBean(OtherModule.class) class DependantModule { ... } The DependantModule is only loaded if there is a bean of class OtherModule in the application context. We could also define the bean name instead of the bean class.\nThis way, we can define dependencies between certain modules, for example. One module is only loaded if a certain bean of another module is available.\n@ConditionalOnMissingBean Similarly, we can use @ConditionalOnMissingBean if we want to load a bean only if a certain other bean is not in the application context:\n@Configuration class OnMissingBeanModule { @Bean @ConditionalOnMissingBean DataSource dataSource() { return new InMemoryDataSource(); } } In this example, we\u0026rsquo;re only injecting an in-memory datasource into the application context if there is not already a datasource available. This is very similar to what Spring Boot does internally to provide an in-memory database in a test context.\n@ConditionalOnResource If we want to load a bean depending on the fact that a certain resource is available on the class path, we can use @ConditionalOnResource:\n@Configuration @ConditionalOnResource(resources = \u0026#34;/logback.xml\u0026#34;) class LogbackModule { ... } The LogbackModule is only loaded if the logback configuration file was found on the classpath. This way, we might create similar modules that are only loaded if their respective configuration file has been found.\nOther Conditions The conditional annotations described above are the more common ones that we might use in any Spring Boot application. 
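Many of these conditions boil down to simple runtime checks. A resource condition like @ConditionalOnResource, for instance, essentially asks whether a path resolves on the classpath; here is a plain-Java sketch of that check (a simplification of what Spring actually evaluates):

```java
public class ResourceCheckSketch {

    // Simplified version of what a resource condition evaluates:
    // does the given path resolve to a resource on the classpath?
    static boolean resourcePresent(String path) {
        return ResourceCheckSketch.class.getResource(path) != null;
    }

    public static void main(String[] args) {
        // A resource that ships with the JDK itself is found ...
        System.out.println(resourcePresent("/java/lang/String.class"));
        // ... while a missing configuration file is not (unless one is on the classpath).
        System.out.println(resourcePresent("/logback.xml"));
    }
}
```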
Spring Boot provides even more conditional annotations. They are, however, not as common and some are more suited for framework development rather than application development (Spring Boot uses some of them heavily under the covers). So, let\u0026rsquo;s only have a brief look at them here.\n@ConditionalOnClass\nLoad a bean only if a certain class is on the classpath:\n@Configuration @ConditionalOnClass(name = \u0026#34;this.clazz.does.not.Exist\u0026#34;) class OnClassModule { ... } @ConditionalOnMissingClass\nLoad a bean only if a certain class is not on the classpath:\n@Configuration @ConditionalOnMissingClass(value = \u0026#34;this.clazz.does.not.Exist\u0026#34;) class OnMissingClassModule { ... } @ConditionalOnJndi\nLoad a bean only if a certain resource is available via JNDI:\n@Configuration @ConditionalOnJndi(\u0026#34;java:comp/env/foo\u0026#34;) class OnJndiModule { ... } @ConditionalOnJava\nLoad a bean only if running a certain version of Java:\n@Configuration @ConditionalOnJava(JavaVersion.EIGHT) class OnJavaModule { ... } @ConditionalOnSingleCandidate\nSimilar to @ConditionalOnBean, but will only load a bean if a single candidate for the given bean class has been determined. There probably isn\u0026rsquo;t a use case outside of auto-configurations:\n@Configuration @ConditionalOnSingleCandidate(DataSource.class) class OnSingleCandidateModule { ... } @ConditionalOnWebApplication\nLoad a bean only if we\u0026rsquo;re running inside a web application:\n@Configuration @ConditionalOnWebApplication class OnWebApplicationModule { ... } @ConditionalOnNotWebApplication\nLoad a bean only if we\u0026rsquo;re not running inside a web application:\n@Configuration @ConditionalOnNotWebApplication class OnNotWebApplicationModule { ... } @ConditionalOnCloudPlatform\nLoad a bean only if we\u0026rsquo;re running on a certain cloud platform:\n@Configuration @ConditionalOnCloudPlatform(CloudPlatform.CLOUD_FOUNDRY) class OnCloudPlatformModule { ... 
} Custom Conditions Aside from the conditional annotations, we can create our own and combine multiple conditions with logical operators.\nDefining a Custom Condition Imagine we have some Spring beans that talk to the operating system natively. These beans should only be loaded if we\u0026rsquo;re running the application on the respective operating system.\nLet\u0026rsquo;s implement a condition that loads beans only if we\u0026rsquo;re running the code on a unix machine. For this, we implement Spring\u0026rsquo;s Condition interface:\nclass OnUnixCondition implements Condition { @Override public boolean matches( ConditionContext context, AnnotatedTypeMetadata metadata) { return SystemUtils.IS_OS_LINUX; } } We simply use Apache Commons' SystemUtils class to determine if we\u0026rsquo;re running on a unix-like system. If needed, we could include more sophisticated logic that uses information about the current application context (ConditionContext) or about the annotated class (AnnotatedTypeMetadata).\nThe condition is now ready to be used in combination with Spring\u0026rsquo;s @Conditional annotation:\n@Bean @Conditional(OnUnixCondition.class) UnixBean unixBean() { return new UnixBean(); } Combining Conditions with OR If we want to combine multiple conditions into a single condition with the logical \u0026ldquo;OR\u0026rdquo; operator, we can extend AnyNestedCondition:\nclass OnWindowsOrUnixCondition extends AnyNestedCondition { OnWindowsOrUnixCondition() { super(ConfigurationPhase.REGISTER_BEAN); } @Conditional(OnWindowsCondition.class) static class OnWindows {} @Conditional(OnUnixCondition.class) static class OnUnix {} } Here, we have created a condition that is satisfied if the application runs on windows or unix.\nThe AnyNestedCondition parent class will evaluate the @Conditional annotations on the methods and combine them using the OR operator.\nWe can use this condition just like any other condition:\n@Bean @Conditional(OnWindowsOrUnixCondition.class) 
WindowsOrUnixBean windowsOrUnixBean() { return new WindowsOrUnixBean(); } Is your AnyNestedCondition or AllNestedConditions not working?  Check the ConfigurationPhase parameter passed into super(). If you want to apply your combined condition to @Configuration beans, use the value PARSE_CONFIGURATION. If you want to apply the condition to simple beans, use REGISTER_BEAN as shown in the example above. Spring Boot needs to make this distinction so it can apply the conditions at the right time during application context startup.  Combining Conditions with AND If we want to combine conditions with \u0026ldquo;AND\u0026rdquo; logic, we can simply use multiple @Conditional... annotations on a single bean. They will automatically be combined with the logical \u0026ldquo;AND\u0026rdquo; operator so that if at least one condition fails, the bean will not be loaded:\n@Bean @ConditionalOnUnix @Conditional(OnWindowsCondition.class) WindowsAndUnixBean windowsAndUnixBean() { return new WindowsAndUnixBean(); } This bean should never load, unless someone has created a Windows / Unix hybrid that I\u0026rsquo;m not aware of.\nNote that the @Conditional annotation cannot be used more than once on a single method or class. So, if we want to combine multiple annotations this way, we have to use custom @ConditionalOn... annotations, which do not have this restriction. Below, we\u0026rsquo;ll explore how to create the @ConditionalOnUnix annotation.\nAlternatively, if we want to combine conditions with AND into a single @Conditional annotation, we can extend Spring Boot\u0026rsquo;s AllNestedConditions class which works exactly the same as AnyNestedCondition described above.\nCombining Conditions with NOT Similar to AnyNestedCondition and AllNestedConditions, we can extend NoneNestedCondition to only load beans if NONE of the combined conditions match.\nDefining a Custom @ConditionalOn\u0026hellip; Annotation We can create a custom annotation for any condition. 
We simply need to meta-annotate this annotation with @Conditional:\n@Target({ ElementType.TYPE, ElementType.METHOD }) @Retention(RetentionPolicy.RUNTIME) @Documented @Conditional(OnLinuxCondition.class) public @interface ConditionalOnUnix {} Spring will evaluate this meta annotation when we annotate a bean with our new annotation:\n@Bean @ConditionalOnUnix LinuxBean linuxBean(){ return new LinuxBean(); } Conclusion With the @Conditional annotation and the possibility to create custom @Conditional... annotations, Spring already gives us a lot of power to control the content of our application context.\nSpring Boot builds on top of that by bringing some convenient @ConditionalOn... annotations to the table and by allowing us to combine conditions using AllNestedConditions, AnyNestedCondition or NoneNestedCondition. These tools allow us to modularize our production code as well as our tests.\nWith power comes responsibility, however, so we should take care not to litter our application context with conditions, lest we lose track of what is loaded when.\nThe code for this article is available on GitHub.\n","date":"March 7, 2019","image":"https://reflectoring.io/images/stock/0017-coffee-beans-1200x628-branded_huece543939443a9c461a0d4760d3503b7_299333_650x0_resize_q90_box.jpg","permalink":"/spring-boot-conditionals/","title":"Conditional Beans with Spring Boot"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you are interested in metrics by which to evaluate a codebase or even organizational behavior of a development team you want to know how to gather those metrics you have a certain codebase that you want to analyze  Overview Your Code as a Crime Scene by Adam Tornhill is a book that aims to apply criminal investigation techniques to a codebase in order to gain insights about the structure and quality of the code.\nTornhill has created a command-line tool that allows us to create certain reports from a Git repository. 
He uses this tool to create visualizations of patterns in a code base. Every step can be reproduced by the reader, if he or she wishes.\nThe book not only explains the visualized metrics but also goes into detail about how they can be interpreted.\nLikes The book describes a wealth of interesting code metrics and techniques to visualize those metrics in a way that allows us to get the most out of them.\nWhat\u0026rsquo;s different from other books about code metrics is that all metrics explained in the book are extracted from a Git repository, which has a lot more information than just the code itself.\nIn summary, I found the metrics explained in the book very interesting.\nThe book is written in a very conversational style, making it easy to read. It has rather short chapters with a lot of helpful visualizations, making it possible to read a chapter in a short amount of time.\nIf you want to execute the code examples on your own machine to create the visualizations yourself, you need a bit more time, though (I actually didn\u0026rsquo;t do it).\nKey Takeaways The book starts with finding hotspots of a certain metric (like complexity, for instance) and visualizing them in a 3-dimensional city map in order to get a sense of the metric\u0026rsquo;s distribution within your code base. Ironically, this is exactly what a student of mine did in his bachelor\u0026rsquo;s thesis based on coderadar, a tool I\u0026rsquo;ve started (but never completed) a couple years ago.\nThe book then describes how to extract data from a Git repository to create line diagrams that show the trend of a metric over time.\nAn interesting concept is the metric of temporal coupling, i.e. how often certain parts of the code are modified together. 
This can be used to find unintended couplings in the codebase.\nSimple, but still interesting, was the use of a word cloud of commit messages to get a sense of what\u0026rsquo;s happening in the codebase.\nIn the last part, the book goes into organizational metrics like the number of authors on a certain part of the code within a certain time frame. Assuming that a high number of authors means a high probability of defects, this metric can point out spots in the code that deserve attention.\nLater, the book also introduces the similar concept of code churn, meaning the degree of change of a certain part of the code. The more lines are added or deleted in a certain file, the higher its code churn.\nA Git repository also provides information about the knowledge distribution within a team. The book explains how to create a knowledge distribution map of a codebase, showing which authors know about which parts of the code. What\u0026rsquo;s even more interesting is to visualize knowledge loss, i.e. which parts of the code have not been touched by an active developer within a certain time frame.\nDislikes The only thing I didn\u0026rsquo;t really like about the book was that the analogies between criminal investigation and code analysis seemed a little far-fetched to me in many places. Here and there, Tornhill writes a couple of pages about criminal psychology and investigation techniques that, in my opinion, don\u0026rsquo;t have that much to do with the code metrics he discussed afterwards.\nFor me, the gems within the book are the code metrics, not the connection to criminal investigation. 
But I don\u0026rsquo;t like crime shows on TV much, either\u0026hellip; .\nConclusion Your Code as a Crime Scene introduces very interesting code metrics, so if you\u0026rsquo;re interested in measuring or visualizing code, this book is definitely worth spending your time on.\nThe book is very hands-on in that it provides step-by-step examples you can follow to create visualizations of your own codebase, so it\u0026rsquo;s even more worthwhile if you want to actively analyze a certain codebase.\n","date":"February 17, 2019","image":"https://reflectoring.io/images/stock/0014-handcuffs-1200x628-branded_huae3cc3247040192bd8d36200fb5209d6_187949_650x0_resize_q90_box.jpg","permalink":"/book-review-your-code-as-a-crime-scene/","title":"Book Review: Your Code as a Crime Scene"},{"categories":["Spring Boot"],"contents":"Aside from unit tests, integration tests play a vital role in producing quality software. A special kind of integration test deals with the integration between our code and the database.\nWith the @DataJpaTest annotation, Spring Boot provides a convenient way to set up an environment with an embedded database to test our database queries against.\nIn this tutorial, we\u0026rsquo;ll first discuss which types of queries are worthy of tests and then discuss different ways of creating a database schema and database state to test against.\n Example Code This article is accompanied by a working code example on GitHub. 
The \u0026ldquo;Testing with Spring Boot\u0026rdquo; Series This tutorial is part of a series:\n Unit Testing with Spring Boot Testing Spring MVC Web Controllers with @WebMvcTest Testing JPA Queries with Spring Boot and @DataJpaTest Integration Tests with @SpringBootTest  If you like learning from videos, make sure to check out Philip\u0026rsquo;s Testing Spring Boot Applications Masterclass (if you buy through this link, I get a cut).\nDependencies In this tutorial, aside from the usual Spring Boot dependencies, we\u0026rsquo;re using JUnit Jupiter as our testing framework and H2 as an in-memory database.\ndependencies { compile(\u0026#39;org.springframework.boot:spring-boot-starter-data-jpa\u0026#39;) compile(\u0026#39;org.springframework.boot:spring-boot-starter-web\u0026#39;) runtime(\u0026#39;com.h2database:h2\u0026#39;) testCompile(\u0026#39;org.springframework.boot:spring-boot-starter-test\u0026#39;) testCompile(\u0026#39;org.junit.jupiter:junit-jupiter-engine:5.2.0\u0026#39;) } What to Test? The first question to ask ourselves is what we need to test. Let\u0026rsquo;s consider a Spring Data repository responsible for UserEntity objects:\ninterface UserRepository extends CrudRepository\u0026lt;UserEntity, Long\u0026gt; { // query methods } We have different options to create queries. Let\u0026rsquo;s look at some of those in detail to determine if we should cover them with tests.\nInferred Queries The first option is to create an inferred query:\nUserEntity findByName(String name); We don\u0026rsquo;t need to tell Spring Data what to do, since it automatically infers the SQL query from the method name.\nWhat\u0026rsquo;s nice about this feature is that Spring Data also automatically checks if the query is valid at startup. 
If we renamed the method to findByFoo() and the UserEntity does not have a property foo, Spring Data will point that out to us with an exception:\norg.springframework.data.mapping.PropertyReferenceException: No property foo found for type UserEntity! So, as long as we have at least one test that tries to start up the Spring application context in our code base, we do not need to write an extra test for our inferred query.\nNote that this is not true for queries inferred from long method names like findByNameAndRegistrationDateBeforeAndEmailIsNotNull(). This method name is hard to grasp and easy to get wrong, so we should test if it really does what we intended.\nHaving said this, it\u0026rsquo;s good practice to rename such methods to a shorter, more meaningful name and add a @Query annotation to provide a custom JPQL query.\nCustom JPQL Queries with @Query If queries become more complex, it makes sense to provide a custom JPQL query:\n@Query(\u0026#34;select u from UserEntity u where u.name = :name\u0026#34;) UserEntity findByNameCustomQuery(@Param(\u0026#34;name\u0026#34;) String name); Similar to inferred queries, we get a validity check for those JPQL queries for free. Using Hibernate as our JPA provider, we\u0026rsquo;ll get a QuerySyntaxException on startup if it finds an invalid query:\norg.hibernate.hql.internal.ast.QuerySyntaxException: unexpected token: foo near line 1, column 64 [select u from ...] Custom queries, however, can get a lot more complicated than finding an entry by a single attribute. They might include joins with other tables or return complex DTOs instead of an entity, for instance.\nSo, should we write tests for custom queries? 
The unsatisfying answer is that we have to decide for ourselves if the query is complex enough to require a test.\nNative Queries with @Query Another way is to use a native query:\n@Query( value = \u0026#34;select * from user as u where u.name = :name\u0026#34;, nativeQuery = true) UserEntity findByNameNativeQuery(@Param(\u0026#34;name\u0026#34;) String name); Instead of specifying a JPQL query, which is an abstraction over SQL, we\u0026rsquo;re specifying an SQL query directly. This query may use a database-specific SQL dialect.\nIt\u0026rsquo;s important to note that neither Hibernate nor Spring Data validate native queries at startup. Since the query may contain database-specific SQL, there\u0026rsquo;s no way Spring Data or Hibernate can know what to check for.\nSo, native queries are prime candidates for integration tests. However, if they really use database-specific SQL, those tests might not work with the embedded in-memory database, so we would have to provide a real database in the background (for instance in a docker container that is set up on-demand in the continuous integration pipeline).\n@DataJpaTest in a Nutshell To test Spring Data JPA repositories, or any other JPA-related components for that matter, Spring Boot provides the @DataJpaTest annotation. We can just add it to our unit test and it will set up a Spring application context:\n@ExtendWith(SpringExtension.class) @DataJpaTest class UserEntityRepositoryTest { @Autowired private DataSource dataSource; @Autowired private JdbcTemplate jdbcTemplate; @Autowired private EntityManager entityManager; @Autowired private UserRepository userRepository; @Test void injectedComponentsAreNotNull(){ assertThat(dataSource).isNotNull(); assertThat(jdbcTemplate).isNotNull(); assertThat(entityManager).isNotNull(); assertThat(userRepository).isNotNull(); } } @ExtendWith  The code examples in this tutorial use the @ExtendWith annotation to tell JUnit 5 to enable Spring support. 
As of Spring Boot 2.1, we no longer need to load the SpringExtension because it's included as a meta annotation in the Spring Boot test annotations like @DataJpaTest, @WebMvcTest, and @SpringBootTest.  The application context created this way will not contain the whole context needed for our Spring Boot application, but instead only a \u0026ldquo;slice\u0026rdquo; of it containing the components needed to initialize any JPA-related components like our Spring Data repository.\nWe can, for instance, inject a DataSource, a JdbcTemplate, or an EntityManager into our test class if we need them. Also, we can inject any of the Spring Data repositories from our application. All of the above components will be automatically configured to point to an embedded, in-memory database instead of the \u0026ldquo;real\u0026rdquo; database we might have configured in application.properties or application.yml files.\nNote that by default the application context containing all these components, including the in-memory database, is shared between all test methods within all @DataJpaTest-annotated test classes.\nThis is why, by default, each test method runs in its own transaction, which is rolled back after the method has executed. This way, the database state stays pristine between tests and the tests stay independent of each other.\nCreating the Database Schema Before we can test any queries to the database, we need to create an SQL schema to work with. Let\u0026rsquo;s look at some different ways to do this.\nUsing Hibernate\u0026rsquo;s ddl-auto By default, @DataJpaTest will configure Hibernate to create the database schema for us automatically. 
The property responsible for this is spring.jpa.hibernate.ddl-auto, which Spring Boot sets to create-drop by default, meaning that the schema is created before running the tests and dropped after the tests have executed.\nSo, if we\u0026rsquo;re happy with Hibernate creating the schema for us, we don\u0026rsquo;t have to do anything.\nUsing schema.sql Spring Boot supports executing a custom schema.sql file when the application starts up.\nIf Spring finds a schema.sql file in the classpath, this will be executed against the datasource. This overrides the ddl-auto configuration of Hibernate discussed above.\nWe can control whether the schema.sql file should be executed with the property spring.datasource.initialization-mode. The default value is embedded, meaning it will only execute for an embedded database (i.e. in our tests). If we set it to always, it will always execute.\nThe following log output confirms that the file has been executed:\nExecuting SQL script from URL [file:.../out/production/resources/schema.sql] It makes sense to set Hibernate\u0026rsquo;s ddl-auto configuration to validate when using a script to initialize the schema, so that Hibernate checks if the created schema matches the entity classes on startup:\n@ExtendWith(SpringExtension.class) @DataJpaTest @TestPropertySource(properties = { \u0026#34;spring.jpa.hibernate.ddl-auto=validate\u0026#34; }) class SchemaSqlTest { ... } Using Flyway Flyway is a database migration tool that allows us to specify multiple SQL scripts to create a database schema. 
It keeps track of which of these scripts have already been executed on the target database, so that it executes only those that have not been executed before.\nTo activate Flyway, we just need to drop the dependency into our build.gradle file (similarly if we use Maven):\ncompile(\u0026#39;org.flywaydb:flyway-core\u0026#39;) Hibernate\u0026rsquo;s ddl-auto configuration will automatically back off if we have not specifically configured it, so that Flyway has precedence and will by default execute all SQL scripts it finds in the folder src/main/resources/db/migration against our in-memory test database.\nAgain, it makes sense to set ddl-auto to validate, to let Hibernate check if the schema generated by Flyway matches the expectations of our Hibernate entities:\n@ExtendWith(SpringExtension.class) @DataJpaTest @TestPropertySource(properties = { \u0026#34;spring.jpa.hibernate.ddl-auto=validate\u0026#34; }) class FlywayTest { ... } The Value of using Flyway in Tests  If we're using Flyway in production it's really nice if we can also use it in our JPA tests as described above. Only then do we know at test time that the Flyway scripts work as expected.  This only works, however, as long as the scripts contain SQL that is valid on both the production database and the in-memory database used in the tests (an H2 database in our example). If this is not the case, we must disable Flyway in our tests by setting the spring.flyway.enabled property to false and the spring.jpa.hibernate.ddl-auto property to create-drop to let Hibernate generate the schema.  In any case, let's make sure to set the ddl-auto property to validate in the production profile! It's our last line of defense against errors in our Flyway scripts!  Using Liquibase Liquibase is another database migration tool that works similarly to Flyway but supports other input formats besides SQL. 
We can provide YAML or XML files, for example, that define the database schema.\nWe activate it by simply adding the dependency:\ncompile(\u0026#39;org.liquibase:liquibase-core\u0026#39;) Liquibase will then automatically create the schema defined in src/main/resources/db/changelog/db.changelog-master.yaml by default.\nYet again, it makes sense to set ddl-auto to validate:\n@ExtendWith(SpringExtension.class) @DataJpaTest @TestPropertySource(properties = { \u0026#34;spring.jpa.hibernate.ddl-auto=validate\u0026#34; }) class LiquibaseTest { ... } The Value of using Liquibase in Tests  As Liquibase allows multiple input formats that act as an abstraction layer over SQL, the same scripts can be used across multiple databases, even if their SQL dialects differ. This makes it possible to use the same Liquibase scripts in our tests and in production.  The YAML format is very sensitive, though, and I recently had trouble maintaining a collection of big YAML files. This, and the fact that in spite of the abstraction we actually had to edit those files for different databases, ultimately led to a switch to Flyway.  Populating the Database Now that we have created a database schema for our tests, we can finally start the actual testing. In database query tests, we usually add some data to the database and then validate if our queries return the correct results.\nAgain, there are multiple ways of adding data to our in-memory database, so let\u0026rsquo;s discuss each of them.\nUsing data.sql Similar to schema.sql, we can use a data.sql file containing insert statements to populate our database. The same rules apply as above.\nMaintainability  A data.sql file forces us to put all our insert statements into a single place. Every single test will depend on this one script to set up the database state. This script will soon become very large and hard to maintain. And what if there are tests that require conflicting database states?  
This approach should therefore be considered with caution.  Inserting Entities Manually The easiest way to create a specific database state per test is to just save some entities in the test before running the query under test:\n@Test void whenSaved_thenFindsByName() { userRepository.save(new UserEntity( \u0026#34;Zaphod Beeblebrox\u0026#34;, \u0026#34;zaphod@galaxy.net\u0026#34;)); assertThat(userRepository.findByName(\u0026#34;Zaphod Beeblebrox\u0026#34;)).isNotNull(); } This is easy for simple entities like in the example above. But in real projects, those entities are usually a lot more complex to build and have relationships to other entities. Also, if we want to test a more complex query than findByName, chances are that we need to create more data than a single entity. This quickly becomes very tiresome.\nOne way to tame this complexity is to create factory methods, perhaps in combination with the Object Mother and Builder patterns.\nThe approach of \u0026ldquo;manually\u0026rdquo; programming the database population in Java code has a big advantage over the other approaches in that it\u0026rsquo;s refactoring-safe. Changes in the codebase lead to compile errors in our test code. In all other approaches, we have to run the tests to be notified about potential errors due to a refactoring.\nUsing Spring DBUnit DBUnit is a library that supports setting databases into a certain state. 
Spring DBUnit integrates DBUnit with Spring so that it automatically works with Spring\u0026rsquo;s transactions, among other things.\nTo use it, we need to add the dependencies to Spring DBUnit and DBUnit:\ncompile(\u0026#39;com.github.springtestdbunit:spring-test-dbunit:1.3.0\u0026#39;) compile(\u0026#39;org.dbunit:dbunit:2.6.0\u0026#39;) Then, for each test we can create a custom XML file containing the desired database state:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;dataset\u0026gt; \u0026lt;user id=\u0026#34;1\u0026#34; name=\u0026#34;Zaphod Beeblebrox\u0026#34; email=\u0026#34;zaphod@galaxy.net\u0026#34; /\u0026gt; \u0026lt;/dataset\u0026gt; By default, the XML file (let\u0026rsquo;s name it createUser.xml) is expected to lie in the classpath next to the test class.\nIn the test class, we need to add two TestExecutionListeners to enable DBUnit support. To set a certain database state we can then use @DatabaseSetup on a test method:\n@ExtendWith(SpringExtension.class) @DataJpaTest @TestExecutionListeners({ DependencyInjectionTestExecutionListener.class, TransactionDbUnitTestExecutionListener.class }) class SpringDbUnitTest { @Autowired private UserRepository userRepository; @Test @DatabaseSetup(\u0026#34;createUser.xml\u0026#34;) void whenInitializedByDbUnit_thenFindsByName() { UserEntity user = userRepository.findByName(\u0026#34;Zaphod Beeblebrox\u0026#34;); assertThat(user).isNotNull(); } } For testing queries that change the database state we could even use @ExpectedDatabase to define the state the database is expected to be in after the test.\nNote, however, that Spring DBUnit has not been maintained since 2016.\n@DatabaseSetup not working?  In my tests I had the problem that the @DatabaseSetup annotation was silently ignored. It turned out there was a ClassNotFoundException as some DBUnit class could not be found. This exception was swallowed, though.  
The reason was that I forgot to include the dependency to DBUnit, since I thought that Spring Test DBUnit included it transitively. So, if you have the same problem, check if you have included both dependencies.  Using @Sql A very similar approach is using Spring\u0026rsquo;s @Sql annotation. Instead of using XML to describe the database state, we\u0026rsquo;re using SQL directly:\n-- createUser.sql INSERT INTO USER (id, NAME, email) VALUES (1, \u0026#39;Zaphod Beeblebrox\u0026#39;, \u0026#39;zaphod@galaxy.net\u0026#39;); In our test, we can simply use the @Sql annotation to refer to the SQL file to populate the database:\n@ExtendWith(SpringExtension.class) @DataJpaTest class SqlTest { @Autowired private UserRepository userRepository; @Test @Sql(\u0026#34;createUser.sql\u0026#34;) void whenInitializedBySql_thenFindsByName() { UserEntity user = userRepository.findByName(\u0026#34;Zaphod Beeblebrox\u0026#34;); assertThat(user).isNotNull(); } } If we need more than one script, we can use @SqlGroup to combine them.\nConclusion To test database queries we need the means to create a schema and populate it with some data. Since tests should be independent of each other, it\u0026rsquo;s best to do this for each test separately.\nFor simple tests and simple database entities, it suffices to create the state manually by creating and saving JPA entities. For more complex scenarios, @DatabaseSetup and @Sql provide a way to externalize the database state in XML or SQL files.\nWhat experiences have you had with the different approaches? 
Let me know in the comments!\nIf you like learning from videos, make sure to check out Philip\u0026rsquo;s Testing Spring Boot Applications Masterclass (if you buy through this link, I get a cut).\n","date":"February 3, 2019","image":"https://reflectoring.io/images/stock/0019-magnifying-glass-1200x628-branded_hudd3c41ec99aefbb7f273ca91d0ef6792_109335_650x0_resize_q90_box.jpg","permalink":"/spring-boot-data-jpa-test/","title":"Testing JPA Queries with Spring Boot and @DataJpaTest"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you are starting with Java and want some advice for producing quality code you are teaching Java and want your students to learn best practices you are working with junior developers and want to get them up to speed  Overview In a nutshell, Java by Comparison by Simon Harrer, Jörg Lenhard, and Linus Dietz teaches best practices in the Java programming language. It\u0026rsquo;s aimed at Java beginners and intermediates. As the name suggests, the book compares code snippets that have room for improvement with revised versions that have certain best practices applied.\nAmong other things, the book covers best practices on\n basic language features, usage of comments, naming things, exception handling, test assertions, and working with streams.  Each chapter covers one comparison of code snippets. The starting code snippet is displayed on one page and the revised edition on the next page, so both are visible to the reader at the same time (at least in the print and PDF versions). The improvements between the starting code and the revised code are discussed in the text.\nThe last chapter deviates from this structure in that it explains aspects of software development that are important for \u0026ldquo;real life\u0026rdquo; software projects that cannot be explained alongside code examples. Among others, these topics include\n static code analysis, continuous integration, and logging.  
Likes The book is very strictly structured with every chapter having the same rigid pattern of an initial code snippet and a revised version of this code snippet along with a discussion of what has been improved. This appeals to my nerd\u0026rsquo;s sense of symmetry.\nThe chapters are each very short. There is no chapter that should take more than about 5 minutes to read. I like short chapters very much, since that tends to pull me through a book faster than long chapters would.\nI like the idea of direct comparison of code examples. In the print and PDF versions, you can see both the original and the improved code examples at a glance, which makes the changes much easier to grasp.\nThe best practices described in the book are directly applicable to every-day programming, so the contents will stick best if you read it while you are programming every day.\nThe last chapter about things like continuous integration, logging, and static code analysis is very valuable for beginners. In my experience, students fresh from college don\u0026rsquo;t know anything about such things. Just knowing the basic ideas explained in this book should help them get along better in job interviews.\nSuggestions for Improvement I missed a mention of two basic tools I have used in every-day Java development for a couple of years now.\nThe first is AssertJ, which is the de-facto standard library to create highly readable assertions. The chapter about assertions discusses the JUnit framework, but does not mention AssertJ. I think AssertJ might even have deserved its own chapter, comparing unreadable assertions with beautiful AssertJ assertions.\nSecond, in the chapter about static code analysis, I would have mentioned spotless alongside Google Java Format as a tool for enforcing the same code format in a team. 
It\u0026rsquo;s more flexible in that it does not restrict you to the Google Code Format, but this might just be my personal taste, since there is a point in not having much freedom in code style.\nIn the chapter about combining state with behavior, I would have expected a mention of DDD and rich domain models. Just as a pointer for further research, so that the reader can connect the dots.\nMy Key Takeaways Having more than 10 years of Java experience under my belt, I really did not learn very much from this book. As advertised, it\u0026rsquo;s a book for beginners. I would have profited greatly from the book, however, had it been available 8-10 years ago.\nI did learn two things I was not aware of, though.\nFirst, I learned that int and long literals in Java may contain underscores as thousands separators (e.g. 1_000_000). It\u0026rsquo;s not discussed anywhere in the book, but it\u0026rsquo;s used in enough of the code snippets to have made me google it.\nSecond, I was not aware of the chaining potential of Optional\u0026rsquo;s stream-like functional methods like orElseThrow() and get(). This had a welcome impact on my programming style.\nConclusion The book did not contain a single best practice I did not agree with, so I definitely recommend it to anyone starting off with Java. If you have more than a couple years of practice with Java, though, you should not expect to learn too much from this book.\nIf you are a college student learning Java or a seasoned programmer switching to Java, Java by Comparison is definitely worth its money.\nAs the authors themselves do in the preface of the book, I explicitly suggest reading the PDF or print version and not the eBook version. 
The eBook version is no fun since you cannot see more than one code snippet at a time and often have to scroll back-and-forth.\n","date":"January 28, 2019","image":"https://reflectoring.io/images/stock/0004-book-coffee-1200x628-branded_hud7b2f2f7fd5663f7856fb589e0dfd11d_129593_650x0_resize_q90_box.jpg","permalink":"/book-review-java-by-comparison/","title":"Book Review: Java by Comparison"},{"categories":["Spring Boot"],"contents":"In this second part of the series on testing with Spring Boot, we\u0026rsquo;re going to look at web controllers. First, we\u0026rsquo;re going to explore what a web controller actually does so that we can build tests that cover all of its responsibilities.\nThen, we\u0026rsquo;re going to find out how to cover each of those responsibilities in a test. Only with those responsibilities covered can we be sure that our controllers behave as expected in a production environment.\n Example Code This article is accompanied by a working code example on GitHub. The \u0026ldquo;Testing with Spring Boot\u0026rdquo; Series This tutorial is part of a series:\n Unit Testing with Spring Boot Testing Spring MVC Web Controllers with Spring Boot and @WebMvcTest Testing JPA Queries with Spring Boot and @DataJpaTest Integration Tests with @SpringBootTest  If you like learning from videos, make sure to check out Philip\u0026rsquo;s Testing Spring Boot Applications Masterclass (if you buy through this link, I get a cut).\nDependencies We\u0026rsquo;re going to use JUnit Jupiter (JUnit 5) as the testing framework, Mockito for mocking, AssertJ for creating assertions and Lombok to reduce boilerplate code:\ndependencies { compile(\u0026#39;org.springframework.boot:spring-boot-starter-web\u0026#39;) compileOnly(\u0026#39;org.projectlombok:lombok\u0026#39;) testCompile(\u0026#39;org.springframework.boot:spring-boot-starter-test\u0026#39;) testCompile \u0026#39;org.junit.jupiter:junit-jupiter-engine:5.2.0\u0026#39; 
testCompile(\u0026#39;org.mockito:mockito-junit-jupiter:2.23.0\u0026#39;) } AssertJ and Mockito automatically come with the dependency to spring-boot-starter-test.\nResponsibilities of a Web Controller Let\u0026rsquo;s start by looking at a typical REST controller:\n@RestController @RequiredArgsConstructor class RegisterRestController { private final RegisterUseCase registerUseCase; @PostMapping(\u0026#34;/forums/{forumId}/register\u0026#34;) UserResource register( @PathVariable(\u0026#34;forumId\u0026#34;) Long forumId, @Valid @RequestBody UserResource userResource, @RequestParam(\u0026#34;sendWelcomeMail\u0026#34;) boolean sendWelcomeMail) { User user = new User( userResource.getName(), userResource.getEmail()); Long userId = registerUseCase.registerUser(user, sendWelcomeMail); return new UserResource( userId, user.getName(), user.getEmail()); } } The controller method is annotated with @PostMapping to define the URL, HTTP method and content type it should listen to.\nIt takes input via parameters annotated with @PathVariable, @RequestBody, and @RequestParam which are automatically filled from the incoming HTTP request.\nParameters may be annotated with @Valid to indicate that Spring should perform bean validation on them.\nThe controller then works with those parameters, calling the business logic before returning a plain Java object, which is automatically mapped into JSON and written into the HTTP response body by default.\nThere\u0026rsquo;s a lot of Spring magic going on here. In summary, for each request, a controller usually does the following steps:\n   # Responsibility Description     1. Listen to HTTP Requests The controller should respond to certain URLs, HTTP methods and content types.   2. Deserialize Input The controller should parse the incoming HTTP request and create Java objects from variables in the URL, HTTP request parameters and the request body so that we can work with them in the code.   3. 
Validate Input The controller is the first line of defense against bad input, so it\u0026rsquo;s a place where we can validate the input.   4. Call the Business Logic Having parsed the input, the controller must transform the input into the model expected by the business logic and pass it on to the business logic.   5. Serialize the Output The controller takes the output of the business logic and serializes it into an HTTP response.   6. Translate Exceptions If an exception occurs somewhere on the way, the controller should translate it into a meaningful error message and HTTP status for the user.    A controller apparently has a lot to do!\nWe should take care not to add even more responsibilities like performing business logic. Otherwise, our controller tests will become fat and unmaintainable.\nHow are we going to write meaningful tests that cover all of those responsibilities?\nUnit or Integration Test? Do we write unit tests? Or integration tests? What\u0026rsquo;s the difference, anyways? Let\u0026rsquo;s discuss both approaches and decide on one.\nIn a unit test, we would test the controller in isolation. That means we would instantiate a controller object, mocking away the business logic, and then call the controller\u0026rsquo;s methods and verify the response.\nWould that work in our case? Let\u0026rsquo;s check which of the 6 responsibilities we have identified above we can cover in an isolated unit test:\n   # Responsibility Covered in a Unit Test?     1. Listen to HTTP Requests  No, because the unit test would not evaluate the @PostMapping annotation and similar annotations specifying the properties of an HTTP request.   2. Deserialize Input  No, because annotations like @RequestParam and @PathVariable would not be evaluated. Instead we would provide the input as Java objects, effectively skipping deserialization from an HTTP request.   3. Validate Input  Not when depending on bean validation, because the @Valid annotation would not be evaluated.   4. 
Call the Business Logic  Yes, because we can verify if the mocked business logic has been called with the expected arguments.   5. Serialize the Output  No, because we can only verify the Java version of the output, and not the HTTP response that would be generated.   6. Translate Exceptions  No. We could check if a certain exception was raised, but not that it was translated to a certain JSON response or HTTP status code.    In summary, a simple unit test will not cover the HTTP layer. So, we need to introduce Spring to our test to do the HTTP magic for us. Thus, we\u0026rsquo;re building an integration test that tests the integration between our controller code and the components Spring provides for HTTP support.\nAn integration test with Spring fires up a Spring application context that contains all the beans we need. This includes framework beans that are responsible for listening to certain URLs, serializing and deserializing to and from JSON and translating exceptions to HTTP. These beans will evaluate the annotations that would be ignored by a simple unit test.\nSo, how do we do it?\nVerifying Controller Responsibilities with @WebMvcTest Spring Boot provides the @WebMvcTest annotation to fire up an application context that contains only the beans needed for testing a web controller:\n@ExtendWith(SpringExtension.class) @WebMvcTest(controllers = RegisterRestController.class) class RegisterRestControllerTest { @Autowired private MockMvc mockMvc; @Autowired private ObjectMapper objectMapper; @MockBean private RegisterUseCase registerUseCase; @Test void whenValidInput_thenReturns200() throws Exception { mockMvc.perform(...); } } @ExtendWith  The code examples in this tutorial use the @ExtendWith annotation to tell JUnit 5 to enable Spring support. As of Spring Boot 2.1, we no longer need to load the SpringExtension because it's included as a meta annotation in the Spring Boot test annotations like @DataJpaTest, @WebMvcTest, and @SpringBootTest.  
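The test class skeleton above already wires everything we need: MockMvc to fire requests, an ObjectMapper for JSON, and the mocked RegisterUseCase. When the controller's response depends on what the use case returns, we can also stub the mock's behavior before performing the request. A minimal sketch using Mockito's BDD-style API (the stubbed user id 42L is made up for illustration; the perform() call is fleshed out in the following sections):

```java
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyBoolean;
import static org.mockito.BDDMockito.given;

@Test
void whenValidInput_thenReturns200() throws Exception {
    // stub the mocked business logic: registering any user "succeeds"
    // and returns the (made-up) user id 42
    given(registerUseCase.registerUser(any(User.class), anyBoolean()))
        .willReturn(42L);

    mockMvc.perform(...);
}
```

Without the stub, the mock returns Mockito's default value (null for a Long), which may or may not be good enough depending on what the controller does with the result.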
We can now @Autowire all the beans we need from the application context. Spring Boot automatically provides beans like an ObjectMapper to map to and from JSON and a MockMvc instance to simulate HTTP requests.\nWe use @MockBean to mock away the business logic, since we don\u0026rsquo;t want to test integration between controller and business logic, but between controller and the HTTP layer. @MockBean automatically replaces the bean of the same type in the application context with a Mockito mock.\nYou can read more about the @MockBean annotation in my article about mocking.\nUse @WebMvcTest with or without the controllers parameter?  By setting the controllers parameter to RegisterRestController.class in the example above, we're telling Spring Boot to restrict the application context created for this test to the given controller bean and some framework beans needed for Spring Web MVC. All other beans we might need have to be included separately or mocked away with @MockBean.  If we leave out the controllers parameter, Spring Boot will include all controllers in the application context. Thus, we need to include or mock away all beans any controller depends on. This makes for a much more complex test setup with more dependencies, but saves runtime since all controller tests will re-use the same application context.  I tend to restrict the controller tests to the narrowest application context possible in order to make the tests independent of beans that I don't even need in my test, even though Spring Boot has to create a new application context for each single test.  Let\u0026rsquo;s go through each of the responsibilities and see how we can use MockMvc to verify each of them in order to build the best integration test we can.\n1. Verifying HTTP Request Matching Verifying that a controller listens to a certain HTTP request is pretty straightforward. 
We simply call the perform() method of MockMvc and provide the URL we want to test:\nmockMvc.perform(post(\u0026#34;/forums/42/register\u0026#34;) .contentType(\u0026#34;application/json\u0026#34;)) .andExpect(status().isOk()); Aside from verifying that the controller responds to a certain URL, this test also verifies the correct HTTP method (POST in our case) and the correct request content type. The controller we have seen above would reject any requests with a different HTTP method or content type.\nNote that this test would still fail for now, since our controller expects some input parameters.\nMore options to match HTTP requests can be found in the Javadoc of MockHttpServletRequestBuilder.\n2. Verifying Input Deserialization To verify that the input is successfully deserialized into Java objects, we have to provide it in the test request. Input can be either the JSON content of the request body (@RequestBody), a variable within the URL path (@PathVariable), or an HTTP request parameter (@RequestParam):\n@Test void whenValidInput_thenReturns200() throws Exception { UserResource user = new UserResource(\u0026#34;Zaphod\u0026#34;, \u0026#34;zaphod@galaxy.net\u0026#34;); mockMvc.perform(post(\u0026#34;/forums/{forumId}/register\u0026#34;, 42L) .contentType(\u0026#34;application/json\u0026#34;) .param(\u0026#34;sendWelcomeMail\u0026#34;, \u0026#34;true\u0026#34;) .content(objectMapper.writeValueAsString(user))) .andExpect(status().isOk()); } We now provide the path variable forumId, the request parameter sendWelcomeMail and the request body that are expected by the controller. The request body is generated using the ObjectMapper provided by Spring Boot, serializing a UserResource object to a JSON string.\nIf the test is green, we now know that the controller\u0026rsquo;s register() method has received those parameters as Java objects and that they have been successfully parsed from the HTTP request.\n3. 
Verifying Input Validation Let\u0026rsquo;s say the UserResource uses the @NotNull annotation to deny null values:\n@Value public class UserResource { @NotNull private final String name; @NotNull private final String email; } Bean validation is triggered automatically when we add the @Valid annotation to a method parameter like we did with the userResource parameter in our controller. So, for the happy path (i.e. when the validation succeeds), the test we created in the previous section is enough.\nIf we want to test if the validation fails as expected, we need to add a test case in which we send an invalid UserResource JSON object to the controller. We then expect the controller to return HTTP status 400 (Bad Request):\n@Test void whenNullValue_thenReturns400() throws Exception { UserResource user = new UserResource(null, \u0026#34;zaphod@galaxy.net\u0026#34;); mockMvc.perform(post(\u0026#34;/forums/{forumId}/register\u0026#34;, 42L) ... .content(objectMapper.writeValueAsString(user))) .andExpect(status().isBadRequest()); } Depending on how important the validation is for the application, we might add a test case like this for each invalid value that is possible. This can quickly add up to a lot of test cases, though, so you should talk to your team about how you want to handle validation tests in your project.\n4. Verifying Business Logic Calls Next, we want to verify that the business logic is called as expected. 
In our case, the business logic is provided by the RegisterUseCase interface and expects a User object and a boolean as input:\ninterface RegisterUseCase { Long registerUser(User user, boolean sendWelcomeMail); } We expect the controller to transform the incoming UserResource object into a User and to pass this object into the registerUser() method.\nTo verify this, we can ask the RegisterUseCase mock, which has been injected into the application context with the @MockBean annotation:\n@Test void whenValidInput_thenMapsToBusinessModel() throws Exception { UserResource user = new UserResource(\u0026#34;Zaphod\u0026#34;, \u0026#34;zaphod@galaxy.net\u0026#34;); mockMvc.perform(...); ArgumentCaptor\u0026lt;User\u0026gt; userCaptor = ArgumentCaptor.forClass(User.class); verify(registerUseCase, times(1)).registerUser(userCaptor.capture(), eq(true)); assertThat(userCaptor.getValue().getName()).isEqualTo(\u0026#34;Zaphod\u0026#34;); assertThat(userCaptor.getValue().getEmail()).isEqualTo(\u0026#34;zaphod@galaxy.net\u0026#34;); } After the call to the controller has been performed, we use an ArgumentCaptor to capture the User object that was passed to the RegisterUseCase.registerUser() and assert that it contains the expected values.\nThe verify call checks that registerUser() has been called exactly once.\nNote that if we do a lot of assertions on User objects, we can create our own custom Mockito assertion methods for better readability.\n5. Verifying Output Serialization After the business logic has been called, we expect the controller to map the result into a JSON string and include it in the HTTP response. In our case, we expect the HTTP response body to contain a valid UserResource object in JSON form:\n@Test void whenValidInput_thenReturnsUserResource() throws Exception { MvcResult mvcResult = mockMvc.perform(...) ... 
.andReturn(); UserResource expectedResponseBody = ...; String actualResponseBody = mvcResult.getResponse().getContentAsString(); assertThat(actualResponseBody).isEqualToIgnoringWhitespace( objectMapper.writeValueAsString(expectedResponseBody)); } To do assertions on the response body, we need to store the result of the HTTP interaction in a variable of type MvcResult using the andReturn() method.\nWe can then read the JSON string from the response body and compare it to the expected string using isEqualToIgnoringWhitespace(). We can build the expected JSON string from a Java object using the ObjectMapper provided by Spring Boot.\nNote that we can make this much more readable by using a custom ResultMatcher, as described later.\n6. Verifying Exception Handling Usually, if an exception occurs, the controller should return a certain HTTP status: 400 if something is wrong with the request, 500 if an exception bubbles up, and so on.\nSpring takes care of most of these cases by default. However, if we have custom exception handling, we want to test it. Let\u0026rsquo;s say we want to return a structured JSON error response with a field name and error message for each field that was invalid in the request. 
We\u0026rsquo;d create a @ControllerAdvice like this:\n@ControllerAdvice class ControllerExceptionHandler { @ResponseStatus(HttpStatus.BAD_REQUEST) @ExceptionHandler(MethodArgumentNotValidException.class) @ResponseBody ErrorResult handleMethodArgumentNotValidException(MethodArgumentNotValidException e) { ErrorResult errorResult = new ErrorResult(); for (FieldError fieldError : e.getBindingResult().getFieldErrors()) { errorResult.getFieldErrors() .add(new FieldValidationError(fieldError.getField(), fieldError.getDefaultMessage())); } return errorResult; } @Getter @NoArgsConstructor static class ErrorResult { private final List\u0026lt;FieldValidationError\u0026gt; fieldErrors = new ArrayList\u0026lt;\u0026gt;(); ErrorResult(String field, String message) { this.fieldErrors.add(new FieldValidationError(field, message)); } } @Getter @AllArgsConstructor static class FieldValidationError { private String field; private String message; } } If bean validation fails, Spring throws a MethodArgumentNotValidException. We handle this exception by mapping Spring\u0026rsquo;s FieldError objects into our own ErrorResult data structure. The exception handler causes all controllers to return HTTP status 400 in this case and puts the ErrorResult object into the response body as a JSON string.\nTo verify that this actually happens, we expand on our earlier test for failing validations:\n\n@Test void whenNullValue_thenReturns400AndErrorResult() throws Exception { UserResource user = new UserResource(null, \u0026#34;zaphod@galaxy.net\u0026#34;); MvcResult mvcResult = mockMvc.perform(...) 
.contentType(\u0026#34;application/json\u0026#34;) .param(\u0026#34;sendWelcomeMail\u0026#34;, \u0026#34;true\u0026#34;) .content(objectMapper.writeValueAsString(user))) .andExpect(status().isBadRequest()) .andReturn(); ErrorResult expectedErrorResponse = new ErrorResult(\u0026#34;name\u0026#34;, \u0026#34;must not be null\u0026#34;); String actualResponseBody = mvcResult.getResponse().getContentAsString(); String expectedResponseBody = objectMapper.writeValueAsString(expectedErrorResponse); assertThat(actualResponseBody) .isEqualToIgnoringWhitespace(expectedResponseBody); } Again, we read the JSON string from the response body and compare it against an expected JSON string. Additionally, we check that the response status is 400.\nThis, too, can be implemented in a much more readable manner, as we\u0026rsquo;ll learn below.\nCreating Custom ResultMatchers Certain assertions are rather hard to write and, more importantly, hard to read. Especially when we want to compare the JSON string from the HTTP response to an expected value, it takes a lot of code, as we have seen in the last two examples.\nLuckily, we can create custom ResultMatchers that we can use within the fluent API of MockMvc. Let\u0026rsquo;s see how we can do this for our use cases.\nMatching JSON Output Wouldn\u0026rsquo;t it be nice to use the following code to verify if the HTTP response body contains a JSON representation of a certain Java object?\n@Test void whenValidInput_thenReturnsUserResource_withFluentApi() throws Exception { UserResource user = ...; UserResource expected = ...; mockMvc.perform(...) ... .andExpect(responseBody().containsObjectAsJson(expected, UserResource.class)); } No need to manually compare JSON strings anymore. And it\u0026rsquo;s much more readable. 
In fact, the code is so self-explanatory that I\u0026rsquo;m going to stop explaining here.\nTo be able to use the above code, we create a custom ResultMatcher:\npublic class ResponseBodyMatchers { private ObjectMapper objectMapper = new ObjectMapper(); public \u0026lt;T\u0026gt; ResultMatcher containsObjectAsJson( Object expectedObject, Class\u0026lt;T\u0026gt; targetClass) { return mvcResult -\u0026gt; { String json = mvcResult.getResponse().getContentAsString(); T actualObject = objectMapper.readValue(json, targetClass); assertThat(actualObject).isEqualToComparingFieldByField(expectedObject); }; } static ResponseBodyMatchers responseBody(){ return new ResponseBodyMatchers(); } } The static method responseBody() serves as the entry point for our fluent API. It returns the actual ResultMatcher that parses the JSON from the HTTP response body and compares it field by field with the expected object that is passed in.\nMatching Expected Validation Errors We can even go a step further to simplify our exception handling test. It took us 4 lines of code to verify that the JSON response contained a certain error message. We can do it in one line instead:\n@Test void whenNullValue_thenReturns400AndErrorResult_withFluentApi() throws Exception { UserResource user = new UserResource(null, \u0026#34;zaphod@galaxy.net\u0026#34;); mockMvc.perform(...) ... 
.content(objectMapper.writeValueAsString(user))) .andExpect(status().isBadRequest()) .andExpect(responseBody().containsError(\u0026#34;name\u0026#34;, \u0026#34;must not be null\u0026#34;)); } Again, the code is self-explanatory.\nTo enable this fluent API, we must add the method containsError() to our ResponseBodyMatchers class from above:\npublic class ResponseBodyMatchers { private ObjectMapper objectMapper = new ObjectMapper(); public ResultMatcher containsError( String expectedFieldName, String expectedMessage) { return mvcResult -\u0026gt; { String json = mvcResult.getResponse().getContentAsString(); ErrorResult errorResult = objectMapper.readValue(json, ErrorResult.class); List\u0026lt;FieldValidationError\u0026gt; fieldErrors = errorResult.getFieldErrors().stream() .filter(fieldError -\u0026gt; fieldError.getField().equals(expectedFieldName)) .filter(fieldError -\u0026gt; fieldError.getMessage().equals(expectedMessage)) .collect(Collectors.toList()); assertThat(fieldErrors) .withFailMessage(\u0026#34;expecting exactly 1 error message \u0026#34; + \u0026#34;with field name \u0026#39;%s\u0026#39; and message \u0026#39;%s\u0026#39;\u0026#34;, expectedFieldName, expectedMessage) .hasSize(1); }; } static ResponseBodyMatchers responseBody() { return new ResponseBodyMatchers(); } } All the ugly code is hidden within this helper class and we can happily write clean assertions in our integration tests.\nConclusion Web controllers have a lot of responsibilities. If we want to cover a web controller with meaningful tests, it\u0026rsquo;s not enough to just check if it returns the correct HTTP status.\nWith @WebMvcTest, Spring Boot provides everything we need to build web controller tests, but for the tests to be meaningful, we need to remember to cover all of the responsibilities. 
Otherwise, we may be in for ugly surprises at runtime.\nThe example code from this article is available on GitHub.\nIf you like learning from videos, make sure to check out Philip\u0026rsquo;s Testing Spring Boot Applications Masterclass (if you buy through this link, I get a cut).\n","date":"January 19, 2019","image":"https://reflectoring.io/images/stock/0021-controller-1200x628-branded_hu0a4dc1d7f6ebf67c50f4d65bdae8d75a_71759_650x0_resize_q90_box.jpg","permalink":"/spring-boot-web-controller-test/","title":"Testing MVC Web Controllers with Spring Boot and @WebMvcTest"},{"categories":["Spring Boot"],"contents":"Writing good unit tests can be considered an art that is hard to master. But the good news is that the mechanics supporting it are easy to learn.\nThis tutorial provides you with these mechanics and goes into the technical details that are necessary to write good unit tests with a focus on Spring Boot applications.\nWe\u0026rsquo;ll have a look at how to create Spring beans in a testable manner and then discuss usage of Mockito and AssertJ, both of which Spring Boot includes for testing by default.\nNote that this article only discusses unit tests. Integration tests, tests of the web layer and tests of the persistence layer will be discussed in upcoming articles of this series.\n Example Code This article is accompanied by a working code example on GitHub. The \u0026ldquo;Testing with Spring Boot\u0026rdquo; Series This tutorial is part of a series:\n Unit Testing with Spring Boot Testing Spring MVC Web Controllers with Spring Boot and @WebMvcTest Testing JPA Queries with Spring Boot and @DataJpaTest Integration Tests with @SpringBootTest  If you like learning from videos, make sure to check out Philip\u0026rsquo;s Testing Spring Boot Applications Masterclass (if you buy through this link, I get a cut).\nDependencies For the unit tests in this tutorial, we\u0026rsquo;ll use JUnit Jupiter (JUnit 5), Mockito, and AssertJ. 
We\u0026rsquo;ll also include Lombok to reduce a bit of boilerplate code:\ndependencies { compileOnly(\u0026#39;org.projectlombok:lombok\u0026#39;) testCompile(\u0026#39;org.springframework.boot:spring-boot-starter-test\u0026#39;) testCompile \u0026#39;org.junit.jupiter:junit-jupiter-engine:5.2.0\u0026#39; testCompile(\u0026#39;org.mockito:mockito-junit-jupiter:2.23.0\u0026#39;) } Mockito and AssertJ are automatically imported with the spring-boot-starter-test dependency, but we\u0026rsquo;ll have to include Lombok ourselves.\nDon\u0026rsquo;t Use Spring in Unit Tests If you have written tests with Spring or Spring Boot in the past, you\u0026rsquo;ll probably say that we don\u0026rsquo;t need Spring to write unit tests. Why is that?\nConsider the following \u0026ldquo;unit\u0026rdquo; test that tests a single method of the RegisterUseCase class:\n@ExtendWith(SpringExtension.class) @SpringBootTest class RegisterUseCaseTest { @Autowired private RegisterUseCase registerUseCase; @Test void savedUserHasRegistrationDate() { User user = new User(\u0026#34;zaphod\u0026#34;, \u0026#34;zaphod@mail.com\u0026#34;); User savedUser = registerUseCase.registerUser(user); assertThat(savedUser.getRegistrationDate()).isNotNull(); } } This test takes about 4.5 seconds to run on an empty Spring project on my computer.\nBut a good unit test only takes milliseconds. Otherwise, it hinders the \u0026ldquo;test / code / test\u0026rdquo; flow promoted by the idea of Test-Driven Development (TDD). But even when we\u0026rsquo;re not practicing TDD, waiting on a test that takes too long ruins our concentration.\nExecution of the test method above actually only takes milliseconds. The rest of the 4.5 seconds is due to the @SpringBootTest annotation telling Spring Boot to set up a whole Spring Boot application context.\nSo we have started the whole application only to autowire a RegisterUseCase instance into our test. 
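For comparison, the same behavior can be verified without Spring at all. The following is a dependency-free sketch: minimal stand-ins for User, UserRepository, and RegisterUseCase are inlined here (the real classes live in the example project), and a hand-written fake replaces the repository so that no mocking library is needed:

```java
import java.time.LocalDateTime;

// Minimal stand-ins for the article's domain types, inlined so the sketch is self-contained.
class User {
    private final String name;
    private final String email;
    private LocalDateTime registrationDate;

    User(String name, String email) {
        this.name = name;
        this.email = email;
    }

    void setRegistrationDate(LocalDateTime date) { this.registrationDate = date; }
    LocalDateTime getRegistrationDate() { return registrationDate; }
}

interface UserRepository {
    User save(User user);
}

// Constructor injection: the dependency is passed in, so no Spring
// context is needed to create an instance in a test.
class RegisterUseCase {
    private final UserRepository userRepository;

    RegisterUseCase(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    User registerUser(User user) {
        user.setRegistrationDate(LocalDateTime.now());
        return userRepository.save(user);
    }
}

class PlainUnitTestSketch {
    public static void main(String[] args) {
        // A hand-written fake that returns its argument, like Mockito's returnsFirstArg().
        UserRepository fakeRepository = user -> user;

        RegisterUseCase registerUseCase = new RegisterUseCase(fakeRepository);
        User savedUser = registerUseCase.registerUser(new User("zaphod", "zaphod@mail.com"));

        // The whole check runs in milliseconds, without any application context.
        System.out.println(savedUser.getRegistrationDate() != null); // prints "true"
    }
}
```

The class and method names here are illustrative; the point is only that a constructor-injected bean needs no framework to be instantiated in a test.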
It will take even longer once the application gets bigger and Spring has to load more and more beans into the application context.\nSo, why this article when we shouldn\u0026rsquo;t use Spring Boot in a unit test? To be honest, most of this tutorial is about writing unit tests without Spring Boot.\nCreating a Testable Spring Bean However, there are some things we can do to make our Spring beans more easily testable.\nField Injection is Evil Let\u0026rsquo;s start with a bad example. Consider the following class:\n@Service public class RegisterUseCase { @Autowired private UserRepository userRepository; public User registerUser(User user) { return userRepository.save(user); } } This class cannot be unit tested without Spring because it provides no way to pass in a UserRepository instance. Instead, we need to write the test in the way discussed in the previous section to let Spring create a UserRepository instance and inject it into the field annotated with @Autowired.\nThe lesson here is not to use field injection.\nProviding a Constructor Actually, let\u0026rsquo;s not use the @Autowired annotation at all:\n@Service public class RegisterUseCase { private final UserRepository userRepository; public RegisterUseCase(UserRepository userRepository) { this.userRepository = userRepository; } public User registerUser(User user) { return userRepository.save(user); } } This version allows constructor injection by providing a constructor through which we can pass in a UserRepository instance. In the unit test, we can now create such an instance (perhaps a mock instance as we\u0026rsquo;ll discuss later) and pass it into the constructor.\nSpring will automatically use this constructor to instantiate a RegisterUseCase object when creating the production application context. Note that prior to Spring 5, we need to add the @Autowired annotation to the constructor for Spring to find the constructor.\nAlso note that the UserRepository field is now final. 
This makes sense, since the field content won\u0026rsquo;t ever change during the lifetime of an application. It also helps to avoid programming errors, because the compiler will complain if we have forgotten to initialize the field.\nReducing Boilerplate Code Using Lombok\u0026rsquo;s @RequiredArgsConstructor annotation, we can let the constructor be automatically generated:\n@Service @RequiredArgsConstructor public class RegisterUseCase { private final UserRepository userRepository; public User registerUser(User user) { user.setRegistrationDate(LocalDateTime.now()); return userRepository.save(user); } } Now, we have a very concise class without boilerplate code that can be instantiated easily in a plain Java test case:\nclass RegisterUseCaseTest { private UserRepository userRepository = ...; private RegisterUseCase registerUseCase; @BeforeEach void initUseCase() { registerUseCase = new RegisterUseCase(userRepository); } @Test void savedUserHasRegistrationDate() { User user = new User(\u0026#34;zaphod\u0026#34;, \u0026#34;zaphod@mail.com\u0026#34;); User savedUser = registerUseCase.registerUser(user); assertThat(savedUser.getRegistrationDate()).isNotNull(); } } There\u0026rsquo;s still a piece missing, though, and that is how to mock away the UserRepository instance our class under test depends on, because we don\u0026rsquo;t want to rely on the real thing, which probably needs a connection to a database.\nUsing Mockito to Mock Dependencies The de-facto standard mocking library nowadays is Mockito. It provides at least two ways to create a mocked UserRepository to fill the blank in the previous code example.\nMocking Dependencies with Plain Mockito The first way is to just use Mockito programmatically:\nprivate UserRepository userRepository = Mockito.mock(UserRepository.class); This will create an object that looks like a UserRepository from the outside. 
By default, it will do nothing when a method is called and return null if the method has a return value.\nOur test would now fail with a NullPointerException at assertThat(savedUser.getRegistrationDate()).isNotNull() because userRepository.save(user) now returns null.\nSo, we have to tell Mockito to return something when userRepository.save() is called. We do this with the static when method:\n@Test void savedUserHasRegistrationDate() { User user = new User(\u0026#34;zaphod\u0026#34;, \u0026#34;zaphod@mail.com\u0026#34;); when(userRepository.save(any(User.class))).then(returnsFirstArg()); User savedUser = registerUseCase.registerUser(user); assertThat(savedUser.getRegistrationDate()).isNotNull(); } This will make userRepository.save() return the same user object that is passed into the method.\nMockito has a whole lot more features that allow for mocking, matching arguments and verifying method calls. For more information have a look at the reference documentation.\nMocking Dependencies with Mockito\u0026rsquo;s @Mock Annotation An alternative way of creating mock objects is Mockito\u0026rsquo;s @Mock annotation in combination with the MockitoExtension for JUnit Jupiter:\n@ExtendWith(MockitoExtension.class) class RegisterUseCaseTest { @Mock private UserRepository userRepository; private RegisterUseCase registerUseCase; @BeforeEach void initUseCase() { registerUseCase = new RegisterUseCase(userRepository); } @Test void savedUserHasRegistrationDate() { // ...  } } The @Mock annotation specifies the fields in which Mockito should inject mock objects. The MockitoExtension tells Mockito to evaluate those @Mock annotations because JUnit does not do this automatically.\nThe result is the same as if calling Mockito.mock() manually; it\u0026rsquo;s a matter of taste which way to use. 
Note, though, that by using MockitoExtension our tests are bound to the test framework.\nNote that instead of constructing a RegisterUseCase object manually, we can just as well use the @InjectMocks annotation on the registerUseCase field. Mockito will then create an instance for us, following a specified algorithm:\n@ExtendWith(MockitoExtension.class) class RegisterUseCaseTest { @Mock private UserRepository userRepository; @InjectMocks private RegisterUseCase registerUseCase; @Test void savedUserHasRegistrationDate() { // ...  } } Creating Readable Assertions with AssertJ Another library that comes automatically with the Spring Boot test support is AssertJ. We have already used it above to implement our assertion:\nassertThat(savedUser.getRegistrationDate()).isNotNull(); However, wouldn\u0026rsquo;t it be nice to make the assertion even more readable? Like this, for example:\nassertThat(savedUser).hasRegistrationDate(); There are many cases where small changes like this make the test so much better to understand. So, let\u0026rsquo;s create our own custom assertion in the test sources folder:\nclass UserAssert extends AbstractAssert\u0026lt;UserAssert, User\u0026gt; { UserAssert(User user) { super(user, UserAssert.class); } static UserAssert assertThat(User actual) { return new UserAssert(actual); } UserAssert hasRegistrationDate() { isNotNull(); if (actual.getRegistrationDate() == null) { failWithMessage( \u0026#34;Expected user to have a registration date, but it was null\u0026#34; ); } return this; } } Now, if we import the assertThat method from the new UserAssert class instead of from the AssertJ library, we can use the new, easier to read assertion.\nCreating a custom assertion like this may seem like a lot of work, but it\u0026rsquo;s actually done in a couple of minutes. I believe strongly that it\u0026rsquo;s worth investing these minutes to create readable test code, even if it\u0026rsquo;s only marginally more readable afterwards. 
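The custom-assertion pattern is independent of AssertJ itself. As a dependency-free sketch of the same idea (a minimal inlined User type stands in for the real one; a production version would extend AssertJ's AbstractAssert as shown above):

```java
import java.time.LocalDateTime;

// Minimal stand-in for the article's User type, so the sketch compiles on its own.
class User {
    private LocalDateTime registrationDate;
    void setRegistrationDate(LocalDateTime date) { this.registrationDate = date; }
    LocalDateTime getRegistrationDate() { return registrationDate; }
}

// Hand-rolled fluent assertion: hides the null checks behind a readable method name.
class UserAssert {
    private final User actual;

    private UserAssert(User actual) { this.actual = actual; }

    static UserAssert assertThat(User actual) { return new UserAssert(actual); }

    UserAssert hasRegistrationDate() {
        if (actual == null || actual.getRegistrationDate() == null) {
            throw new AssertionError(
                "Expected user to have a registration date, but it was null");
        }
        return this; // fluent: further assertions could be chained here
    }
}

class CustomAssertionSketch {
    public static void main(String[] args) {
        User user = new User();
        user.setRegistrationDate(LocalDateTime.now());

        // Reads like a sentence instead of a chain of getters and null checks:
        UserAssert.assertThat(user).hasRegistrationDate();
        System.out.println("assertion passed"); // prints "assertion passed"
    }
}
```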
We only write the test code once, after all, and others (including \u0026ldquo;future me\u0026rdquo;) have to read, understand and then manipulate the code many, many times during the lifetime of the software.\nIf it still seems like too much work, have a look at AssertJ\u0026rsquo;s Assertions Generator.\nConclusion There are reasons to start up a Spring application in a test, but for plain unit tests, it\u0026rsquo;s not necessary. It\u0026rsquo;s even harmful due to the longer turnaround times. Instead, we should build our Spring beans in a way that easily supports writing plain unit tests for them.\nThe Spring Boot Test Starter comes with Mockito and AssertJ as testing libraries.\nLet\u0026rsquo;s exploit those testing libraries to create expressive unit tests!\nThe code example in its final form is available on GitHub.\nIf you like learning from videos, make sure to check out Philip\u0026rsquo;s Testing Spring Boot Applications Masterclass (if you buy through this link, I get a cut).\n","date":"January 12, 2019","image":"https://reflectoring.io/images/stock/0020-black-box-1200x628-branded_hu58a7e8f58d4ad11497d3dd60e6b0f398_85411_650x0_resize_q90_box.jpg","permalink":"/unit-testing-spring-boot/","title":"Unit Testing with Spring Boot"},{"categories":["Meta"],"contents":"It\u0026rsquo;s easiest to review things if they are measured. So, in this post, I\u0026rsquo;ll try to measure as much of my year 2018 as I can in hindsight to draw a conclusion to this year and then make plans for the next.\nThe following will include statistics of my blog and my professional life as well as some more personal things about my year 2018. I hope it\u0026rsquo;s a mix that\u0026rsquo;s interesting to you :).\nThe Blog Let\u0026rsquo;s start with the obvious: some facts and figures about this blog.\nThis year, I wanted to seriously start producing content for reflectoring, but I actually produced fewer articles than in 2017. 
In 2018, I published 22 blog articles, including this one:\nHowever, the blog had more than 150.000 unique visitors in 2018, which is a lot more than the meager 11.000 of 2017 (the blog was only included in Google Search in September 2017, though).\nWith about 1.000 unique visitors per week, the blog has now twice the weekly visitors it had in the beginning of 2018:\nAs for the content, I experimented a bit with pretty generic software engineering tips and tricks, but the content most valuable to my readers seems to be hands-on tutorials that scratch a specific itch, especially in combination with Spring Boot, which is not unexpected since the usage of Spring Boot skyrocketed in the last couple of years.\nThus, by far my most successful article this year is an in-depth tutorial on all aspects of Bean Validation with Spring Boot:\nFor the next year, I\u0026rsquo;ll concentrate on high-quality Spring Boot tutorials that provide real hands-on value, starting with a series about Testing.\nEditorial Work One reason I was not as productive as I wanted with the blog is that I started editorial work for baeldung.com.\nI reviewed 95 baeldung articles with a total of 66.342 words in 2018. This is just about as much as a short novel!\nThe editorial work gives me ample opportunity to learn new stuff about the Java ecosystem, which is why I\u0026rsquo;d like to keep it up.\nIn 2019, I will drastically reduce the workload, though, to free up more time for my other endeavors.\nThe Book As the members of my mailing list may know, I have started writing a short eBook with hands-on advice on how to build software in a \u0026ldquo;Clean Architecture\u0026rdquo; style.\nI was underwhelmed with the online resources on this topic (and with the print resources, too, for that matter). 
All articles I found discuss \u0026ldquo;Clean Architecture\u0026rdquo; or \u0026ldquo;Hexagonal Architecture\u0026rdquo; on a very generic level without going into the details on how to actually implement such an architecture.\nThe goal of my book is to fill this gap with my interpretation of a hands-on \u0026ldquo;Clean Architecture\u0026rdquo;. Members of my mailing list will get early access for free within the next months, so sign up now if you haven\u0026rsquo;t done so yet!\nWork on the book has been slow in the two months since I started (I have no more than 2 chapters to show), but I\u0026rsquo;m going to dedicate the time I free up from other activities to it.\nTalks 2018 was the year in which I held the most talks at conferences so far. I held 6 talks at public software development conferences all over Germany and one at the internal summIT conference of my employer, adesso.\nHere\u0026rsquo;s the list of my talks in 2018. Feel free to get in touch if you\u0026rsquo;re interested in one of the topics:\n Best of REST, Softwerkskammer Ruhrgebiet Contracts Can Be Fun, 3-hour Workshop on Consumer-Driven Contracts at MicroExchange, Berlin Best of REST, JAX, Mainz Open Source @ Work, Java Forum Stuttgart Open Source @ Work, Herbstcampus, Nürnberg Contracts Can Be Fun, Full-Day Workshop on Consumer-Driven Contracts at API Conference, Berlin Clean Architecture, adesso summIT  I have already submitted a talk on the topic of a \u0026ldquo;Clean Architecture\u0026rdquo; to a couple of conferences in 2019, so I hope that 2019 will be just as active as 2018 in this regard.\nAlso, I already have two speaking engagements about the topic of hands-on software development and software erosion in two university classes where I will hopefully convey some hands-on software development experience to the students.\nArticles This year, I wrote only two pieces that were published aside from my blog:\n Architecture Decisions in a Software Development Team on simpleprogrammer.com 
Vertrag dich mit Microservices - Integration von Microservices mit Consumer-Driven Contracts testen, entwickler magazin 4.2018 (German Print Magazine)  I don\u0026rsquo;t have any plans on publishing aside from my blog (and my book) in 2019, yet, so I\u0026rsquo;ll be opportunistic in this regard.\nReading \u0026amp; Listening Now to the more personal things: which (audio) books have I read and listened to in 2018?\nAs of yet, I have never kept track of my reading, but I often wanted to know if and when I have read a certain book. So, as a new habit, I\u0026rsquo;ll list the books I have been reading.\nAs you\u0026rsquo;ll see, I\u0026rsquo;m a fantasy and sci-fi nerd and I like to read series so that I don\u0026rsquo;t have to learn a whole new universe each time I start a new book:\n Blackcollar: The Judas Solution by Timothy Zahn Blackcollar: The Backlash Mission by Timothy Zahn Blackcollar by Timothy Zahn The Slow Regard of Silent Things by Patrick Rothfuss The Wise Man\u0026rsquo;s Fear by Patrick Rothfuss  As for non-fiction, I have been reading these two books:\n Deep Work by Cal Newport Clean Architecture by Robert C. Martin  Both of these books have influenced the way I think about my work, so I can recommend them very much.\nSince my commute to work and back takes about 1.5 to 2 hours every work day, I have been a very active audio book listener this year. These are the books I listened to. No surprise, a lot of sci-fi and fantasy:\n Warbreaker by Brandon Sanderson QualityLand by Marc-Uwe Kling Skyward by Brandon Sanderson The Singularity Trap by Dennis E. Taylor Edgedancer by Brandon Sanderson Oathbringer by Brandon Sanderson Words of Radiance by Brandon Sanderson The Way of Kings by Brandon Sanderson It by Stephen King Enceladus by Brandon Q. Morris Artemis by Andy Weir Tuf Voyaging by George R. R. Martin Omni by Andreas Brandhorst Giants' Star by James P. Hogan The Gentle Giants of Ganymede by James P. Hogan Inherit the Stars by James P. 
Hogan  Wow, I wasn\u0026rsquo;t aware of how much time I seem to spend driving. I\u0026rsquo;ll try to squeeze in more home office days in 2019.\nVideo Games And here\u0026rsquo;s the list of video games I have played in 2018. I haven\u0026rsquo;t played as much as in the years before (mostly due to the other activities listed above), but I have managed to get some shallow video gaming into my schedule:\n Darkest Dungeon The Banner Saga 3 XCOM 2: War of the Chosen Fortnite (until I realized it swallowed too much of my time) Steamworld Dig 1 \u0026amp; 2 (together with my son) Into the Breach Faster than Light  No real blockbuster game this year (except perhaps for Fortnite), as I was afraid of spending too much time in the games.\nI will choose my games very carefully in 2019, so as not to let them eat up my time too much.\nConclusion In conclusion, 2018 was a fun year for me, especially with the many speaking opportunities.\nHowever, I definitely felt the drain on my time by all the different activities so that some activities, like nursing my blog, did not get the attention I wanted them to have.\nSo, my new year\u0026rsquo;s resolutions are as follows:\n I will measure how much time I spend with each activity and how productive I am in each activity to be able to prioritize consciously. It\u0026rsquo;s actually very much like setting up a monitoring infrastructure for a software product. Since I\u0026rsquo;m a great advocate of monitoring I wonder why I haven\u0026rsquo;t applied this concept to my professional life earlier\u0026hellip; . I will reduce my blogging to high-quality and in-depth Spring Boot tutorials for the time being. I will finish my eBook \u0026ldquo;Getting Your Hands Dirty on Clean Architecture\u0026rdquo; within the first half of 2019. I will schedule my time aggressively and schedule as many home office days as possible to gain more time on \u0026ldquo;deep work\u0026rdquo;.  That\u0026rsquo;s it for 2018. 
I wish a happy year to all of my readers and hope that you will meet your goals for 2019.\n","date":"December 31, 2018","image":"https://reflectoring.io/images/stock/0024-2018-1200x628-branded_hu1c806d92f13fc390fc2b6f0fc80b9aa3_223428_650x0_resize_q90_box.jpg","permalink":"/review-2018/","title":"My Personal Review of 2018"},{"categories":["programming"],"contents":"Consumer-driven contract (CDC) tests are a technique to test integration points between API providers and API consumers without the hassle of end-to-end tests (read it up in a recent blog post). A common use case for consumer-driven contract tests is testing interfaces between services in a microservice architecture.\nIn this article, we\u0026rsquo;re going to create a contract between a Node-based consumer and provider of asynchronous messages with Pact.\nWe\u0026rsquo;ll then create a consumer and a provider test verifying that both the consumer and provider work as defined by the contract.\n Example Code This article is accompanied by a working code example on GitHub. 
Setting Up a Node Project Let\u0026rsquo;s start by setting up a Node project from scratch that will later contain both the message consumer and the message provider.\nNote that in the real world, the consumer and producer will most likely be in completely different projects.\nTo set up the project, we create a package.json file with the following content:\n// package.json { \u0026#34;name\u0026#34;: \u0026#34;pact-node-messages\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;1.0.0\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;main\u0026#34;: \u0026#34;index.js\u0026#34;, \u0026#34;scripts\u0026#34;: { \u0026#34;test:pact:consumer\u0026#34;: \u0026#34;mocha src/consumer/*.spec.js --exit\u0026#34;, \u0026#34;test:pact:provider\u0026#34;: \u0026#34;mocha src/provider/*.spec.js --exit\u0026#34;, \u0026#34;publish:pact\u0026#34;: \u0026#34;node pact/publish.js\u0026#34; }, \u0026#34;author\u0026#34;: \u0026#34;Zaphod Beeblebrox\u0026#34;, \u0026#34;license\u0026#34;: \u0026#34;MIT\u0026#34;, \u0026#34;devDependencies\u0026#34;: { \u0026#34;@pact-foundation/pact\u0026#34;: \u0026#34;^7.0.3\u0026#34;, \u0026#34;mocha\u0026#34;: \u0026#34;^5.2.0\u0026#34; } } Noteworthy in the package.json file are the scripts and devDependencies sections.\nIn the devDependencies section, we pull in the following dependencies used only in tests:\n we use @pact-foundation/pact as the framework to facilitate our contract tests, both for the consumer and provider side we use mocha as the testing framework to drive the contract tests.  In the scripts section, we have created three scripts:\n with npm run test:pact:consumer, we tell mocha to run the consumer-side contract tests with npm run publish:pact, we can publish the contract file created by the consumer-side contract test with npm run test:pact:provider, we can then tell mocha to run the provider-side contract tests against the previously published contracts  Note the --exit in both test scripts. 
This is added to tell mocha to kill the process after having run all tests, instead of waiting for changes in the source files and then automatically re-running the tests. This is needed to make the tests runnable within a CI pipeline.\nDefining the Message Structure Since we want to exchange a message between a consumer and a provider, the next step is to define the message structure.\nAs an example to work with, we\u0026rsquo;ll use the \u0026ldquo;Hero\u0026rdquo; domain. The message provider wants to express that a new Hero has been created, so we create a class named HeroCreatedEvent that both the consumer and the provider can use to send and receive a message (the terms \u0026ldquo;event\u0026rdquo; and \u0026ldquo;message\u0026rdquo; are used interchangeably in the rest of this tutorial):\n// ./src/common/hero-created-event.js class HeroCreatedEvent { constructor(name, superpower, universe, id) { this.id = id; this.name = name; this.superpower = superpower; this.universe = universe; } static validateUniverse(event) { if (typeof event.universe !== \u0026#39;string\u0026#39;) { throw new Error(`Hero universe must be a string! Invalid value: ${event.universe}`) } } static validateSuperpower(event) { if (typeof event.superpower !== \u0026#39;string\u0026#39;) { throw new Error(`Hero superpower must be a string! Invalid value: ${event.superpower}`) } } static validateName(event) { if (typeof event.name !== \u0026#39;string\u0026#39;) { throw new Error(`Hero name must be a string! Invalid value: ${event.name}`); } } static validateId(event) { if (typeof event.id !== \u0026#39;number\u0026#39;) { throw new Error(`Hero id must be a number! Invalid value: ${event.id}`) } } } module.exports = HeroCreatedEvent; The class simply contains a couple of attributes and a method to validate each attribute. 
We\u0026rsquo;ll talk about why validation is important later.\nThere are probably a lot of other, not-so-verbose, ways of doing validation in JavaScript, but bear with me here :).\nImplementing the Message Consumer When doing consumer-driven contracts we start with the consumer side. So let\u0026rsquo;s see how to implement the consumer.\nMessage Handler Our message consumer should receive a HeroCreatedEvent, so we\u0026rsquo;re simply building an event handler with a function that takes an object and validates if it really is a HeroCreatedEvent:\n// ./src/consumer/hero-event-handler.js const HeroCreatedEvent = require(\u0026#39;../common/hero-created-event\u0026#39;); exports.HeroEventHandler = { handleHeroCreatedEvent: (message) =\u0026gt; { HeroCreatedEvent.validateId(message); HeroCreatedEvent.validateName(message); HeroCreatedEvent.validateSuperpower(message); HeroCreatedEvent.validateUniverse(message); // ... pass the event into domain logic  } }; Again, such an event handler can be implemented in a myriad of other ways; it\u0026rsquo;s just important that it takes an event as an argument and validates that it really has all attributes expected of such an event.\nThe handler should then forward the event to the domain logic that actually processes the event.\nThe handler should not implement that domain logic itself. Instead, in the context of the upcoming contract test, the domain logic should be mocked away, for example by using dependency injection.\nThis way, we don\u0026rsquo;t have to pull up a database and whatever other dependencies our consumer application needs to function properly.\nWhat About My Messaging Middleware? You might be wondering where the messaging middleware comes into play. We might use an on-premise messaging platform like Kafka or RabbitMQ or we could use a cloud provider like Amazon Kinesis.\nHowever, for our contract tests, the messaging middleware is irrelevant. 
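The dependency-injection idea mentioned for the handler could look roughly like this. makeHeroEventHandler is a hypothetical factory name, not the tutorial's actual code, and the validation is inlined so the sketch stands alone:

```javascript
// Hypothetical DI variant of hero-event-handler.js: the domain logic is
// injected, so a contract test can pass in a stub instead of pulling up
// a database or other real dependencies.
function makeHeroEventHandler(domainLogic) {
  return {
    handleHeroCreatedEvent: (message) => {
      // Validation inlined here; the real handler delegates to the
      // HeroCreatedEvent validators shown above.
      if (typeof message.id !== 'number') {
        throw new Error(`Hero id must be a number! Invalid value: ${message.id}`);
      }
      if (typeof message.name !== 'string') {
        throw new Error(`Hero name must be a string! Invalid value: ${message.name}`);
      }
      // Only a validated event reaches the domain logic.
      domainLogic.onHeroCreated(message);
    }
  };
}
```

In a contract test the injected domain logic can simply be a stub such as { onHeroCreated: () => {} }, while production wiring passes in the real implementation.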
We want to verify that provider and consumer speak the same language (i.e. use the same message structure). We don\u0026rsquo;t want to test connectivity to our messaging middleware.\nTo be able to test the message structure without the messaging middleware, we need a clean architecture for our message handler.\nIn production, there will be a message listener in front of our handler that actually connects to the middleware and forwards the plain message to the handler.\nThe handler in turn forwards the validated message to the domain logic, which we can mock away in the contract test.\nConsumer-Side Contract Test Let\u0026rsquo;s create the consumer-side contract test next:\n// ./src/consumer/hero-event-handler.spec.js const {MessageConsumerPact, Matchers, synchronousBodyHandler} = require(\u0026#39;@pact-foundation/pact\u0026#39;); const {HeroEventHandler} = require(\u0026#39;./hero-event-handler\u0026#39;); const path = require(\u0026#39;path\u0026#39;); describe(\u0026#34;message consumer\u0026#34;, () =\u0026gt; { const messagePact = new MessageConsumerPact({ consumer: \u0026#34;node-message-consumer\u0026#34;, provider: \u0026#34;node-message-provider\u0026#34;, dir: path.resolve(process.cwd(), \u0026#34;pacts\u0026#34;), pactfileWriteMode: \u0026#34;update\u0026#34;, logLevel: \u0026#34;info\u0026#34;, }); describe(\u0026#34;\u0026#39;hero created\u0026#39; message Handler\u0026#34;, () =\u0026gt; { it(\u0026#34;should accept a valid hero created message\u0026#34;, (done) =\u0026gt; { messagePact .expectsToReceive(\u0026#34;a hero created message\u0026#34;) .withContent({ id: Matchers.like(42), name: Matchers.like(\u0026#34;Superman\u0026#34;), superpower: Matchers.like(\u0026#34;flying\u0026#34;), universe: Matchers.term({generate: \u0026#34;DC\u0026#34;, matcher: \u0026#34;^(DC|Marvel)$\u0026#34;}) }) .withMetadata({ \u0026#34;content-type\u0026#34;: \u0026#34;application/json\u0026#34;, }) 
.verify(synchronousBodyHandler(HeroEventHandler.handleHeroCreatedEvent)) .then(() =\u0026gt; done(), (error) =\u0026gt; done(error)); }).timeout(5000); }); }); In the test, we create a MessageConsumerPact and provide some metadata for the contract:\n the consumer option defines the name of the consumer application the provider option defines the name of the provider application we\u0026rsquo;re receiving the message from with the dir option we can point to the directory where Pact should create the contract files (\u0026ldquo;pact files\u0026rdquo;) the pactfileWriteMode option defines if existing pact files should be updated or overwritten the logLevel option finally defines the granularity of Pact\u0026rsquo;s logging output.  We\u0026rsquo;re using the MessageConsumerPact object in the test to define a message interaction between the provider and consumer. In this interaction, we define the structure of the message, i.e. the attributes of a HeroCreatedEvent.\nThis is our contract definition and will be stored in a pact file later.\nNext, we\u0026rsquo;re passing our event handler into the verify function. Depending on whether our event handler returns synchronously or asynchronously (i.e. returns a Promise), we have to wrap it into a synchronousBodyHandler or an asynchronousBodyHandler.\nPact will now create a message from the contract we have defined above and pass it into the handler. Since the handler verifies incoming messages, the test will fail if the contract defines a different structure from the structure the handler expects.\nThis is why the validation in the handler is so important. 
If the validation step were missing, the test might be green even for messages not matching the domain logic\u0026rsquo;s expectations, leading to painful errors in production.\nWe can now run the test with the command npm run test:pact:consumer and it should pass and create a pact file in the ./pacts folder.\nPublishing the Contract Since the provider needs the contract for testing, we need to publish it. We can do so with a simple script:\n// ./pact/publish.js let publisher = require(\u0026#39;@pact-foundation/pact-node\u0026#39;); let path = require(\u0026#39;path\u0026#39;); let opts = { pactFilesOrDirs: [path.resolve(process.cwd(), \u0026#39;pacts\u0026#39;)], pactBroker: \u0026#39;BROKER_URL\u0026#39;, pactBrokerUsername: process.env.PACT_USERNAME, pactBrokerPassword: process.env.PACT_PASSWORD, consumerVersion: \u0026#39;2.0.0\u0026#39; }; publisher.publishPacts(opts).then( () =\u0026gt; console.log(\u0026#34;Pacts successfully published\u0026#34;)); When this script is called, it will send all pacts in the ./pacts folder to the specified Pact Broker. 
A Pact Broker serves as neutral ground between the consumer and provider that both can access from a CI pipeline.\nWe can now publish the pact created earlier with the command npm run publish:pact.\nImplementing the Message Provider Now that the Pact is published, we can implement and test the message provider.\nMessage Producer Similar to the message handler on the consumer side, the message producer has a very specific responsibility, namely being the single instance in the provider application that creates HeroCreatedEvents:\n// ./src/provider/hero-event-producer.js const HeroCreatedEvent = require(\u0026#39;../common/hero-created-event\u0026#39;); exports.CreateHeroEventProducer = { produceHeroCreatedEvent: () =\u0026gt; { return new Promise((resolve, reject) =\u0026gt; { resolve(new HeroCreatedEvent(\u0026#34;Superman\u0026#34;, \u0026#34;Flying\u0026#34;, \u0026#34;DC\u0026#34;, 42)); }); } }; I\u0026rsquo;ll stress it again to make the importance clear: the above event producer must be the single place in the whole provider application where events of type HeroCreatedEvent are created.\nThis way we\u0026rsquo;re making sure that in our provider test, we\u0026rsquo;re testing against the message structure that is actually used in the provider code base.\nAlso similar to the consumer side, the message producer needs no connection to the messaging middleware. 
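To make the single-responsibility point concrete, calling code would always obtain events through the producer rather than constructing them ad hoc. A sketch, with the producer inlined minimally and publishToMiddleware as a purely hypothetical stand-in for a real middleware client:

```javascript
// Minimal inline stand-in for CreateHeroEventProducer from above.
const CreateHeroEventProducer = {
  produceHeroCreatedEvent: () =>
    Promise.resolve({ name: 'Superman', superpower: 'Flying', universe: 'DC', id: 42 })
};

// Sketch of production wiring: the domain logic asks the producer for the
// event and hands it to the middleware client. Because every event flows
// through the producer, the contract test exercises the exact structure
// that production code publishes.
function announceNewHero(publishToMiddleware) {
  return CreateHeroEventProducer.produceHeroCreatedEvent()
    .then((event) => publishToMiddleware(event));
}
```

In the contract test, Pact takes the place of publishToMiddleware by calling the producer directly, which is why no middleware connection is needed.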
In production, the domain logic will call our producer to create an event and then pass it to the messaging middleware.\nIf you design the message producer to send the events to the messaging middleware directly, make sure to mock that dependency away in the upcoming contract test.\nProvider-Side Contract Test Let\u0026rsquo;s verify that our message producer implementation actually creates messages that satisfy the contract\u0026rsquo;s requirements.\nFor this, we create another test:\n// ./src/provider/hero-event-producer.spec.js const {MessageProviderPact} = require(\u0026#39;@pact-foundation/pact\u0026#39;); const {CreateHeroEventProducer} = require(\u0026#39;./hero-event-producer\u0026#39;); const path = require(\u0026#39;path\u0026#39;); describe(\u0026#34;message producer\u0026#34;, () =\u0026gt; { const messagePact = new MessageProviderPact({ messageProviders: { \u0026#34;a hero created message\u0026#34;: () =\u0026gt; CreateHeroEventProducer.produceHeroCreatedEvent(), }, log: path.resolve(process.cwd(), \u0026#34;logs\u0026#34;, \u0026#34;pact.log\u0026#34;), logLevel: \u0026#34;info\u0026#34;, provider: \u0026#34;node-message-provider\u0026#34;, pactBrokerUrl: \u0026#34;BROKER_URL\u0026#34;, pactBrokerUsername: process.env.PACT_USERNAME, pactBrokerPassword: process.env.PACT_PASSWORD }); describe(\u0026#34;\u0026#39;hero created\u0026#39; message producer\u0026#34;, () =\u0026gt; { it(\u0026#34;should create a valid hero created message\u0026#34;, (done) =\u0026gt; { messagePact .verify() .then(() =\u0026gt; done(), (error) =\u0026gt; done(error)); }).timeout(5000); }); }); First, we\u0026rsquo;re creating an instance of MessageProviderPact and again provide some metadata:\n in the messageProviders map, we define a message producer for each interaction of the contracts we\u0026rsquo;re testing; this is where we pass in our producer implementation the log option allows us to specify the path to a log file (definitely check this log file when running into 
errors!) the provider option allows us to define the name of our provider; Pact will verify the provider against all contracts from the Pact Broker that it finds with this provider name with the pactBroker* options we define the connection to the Pact Broker  Note that due to a bug or configuration error I was not able to successfully run the provider test against a pact broker (in fact, the test always succeeded, even if the message producer produced a message with an invalid structure). Instead, I use the pactUrls option to load the contract from a file until the issue is solved.\nIn the actual test, we\u0026rsquo;re simply calling the verify() function on the MessageProviderPact instance. Pact will then run through all contracts associated with the provider and call our message producer to create an event. Pact will then check that the structure of that event matches the structure defined in the contract.\nWe can now run the provider test with the command npm run test:pact:provider and it should succeed. 
If we change the event producer to return an invalid event, it should fail.\nConclusion In this tutorial, we have created a messaging consumer and provider based on Node and tested them against a contract created with Pact.\nWe learned that for those contract tests we don\u0026rsquo;t need a connection to the actual messaging middleware and that it\u0026rsquo;s important to validate incoming messages on the consumer side and to have a single point of responsibility for creating messages on the provider side.\nYou can access the code examples on my GitHub repo.\n","date":"November 14, 2018","image":"https://reflectoring.io/images/stock/0025-signature-1200x628-branded_hu40d5255a109b1d14ac3f4eab2daeb887_126452_650x0_resize_q90_box.jpg","permalink":"/pact-node-messaging/","title":"Implementing a Consumer-Driven Contract between a Node Message Consumer and a Node Message Producer"},{"categories":["programming"],"contents":"Consumer-driven contract (CDC) tests are a technique to test integration points between API providers and API consumers without the hassle of end-to-end tests (read it up in a recent blog post). A common use case for consumer-driven contract tests is testing interfaces between services in a microservice architecture.\nThis article explains the steps of setting up a GraphQL client (or \u0026ldquo;consumer\u0026rdquo;) using the Apollo framework. We\u0026rsquo;ll then create and publish a consumer-driven contract for the GraphQL interaction between the GraphQL client and the API provider and implement a contract test that validates that our consumer is working as expected by the contract. For this, we\u0026rsquo;re using the Node version of the Pact framework.\nThis tutorial builds upon a recent tutorial about creating a React consumer for a REST API, so you\u0026rsquo;ll find some links to that tutorial for more detailed explanations.\n Example Code This article is accompanied by a working code example on GitHub. 
Creating the Node App To set up a Node app, follow the instructions in the previous tutorial. There, we\u0026rsquo;re using the create-react-app tool to create a React client that already has Jest set up as a testing framework.\nHowever, since we\u0026rsquo;re not using React in this tutorial, you can also create a plain Node app. Then you have to set up a test framework manually, though.\nAdding Dependencies In our package.json, we need to declare some additional dependencies:\n{ \u0026#34;dependencies\u0026#34;: { \u0026#34;apollo-cache-inmemory\u0026#34;: \u0026#34;^1.3.9\u0026#34;, \u0026#34;apollo-client\u0026#34;: \u0026#34;^2.4.5\u0026#34;, \u0026#34;apollo-link-http\u0026#34;: \u0026#34;^1.5.5\u0026#34;, \u0026#34;graphql\u0026#34;: \u0026#34;^14.0.2\u0026#34;, \u0026#34;graphql-tag\u0026#34;: \u0026#34;^2.10.0\u0026#34;, \u0026#34;node-fetch\u0026#34;: \u0026#34;^2.2.1\u0026#34; } }  apollo-client provides Apollo\u0026rsquo;s GraphQL client implementation apollo-cache-inmemory contains Apollo\u0026rsquo;s implementation of an in-memory-cache that is used to cache GraphQL query results to reduce the number of requests to the server apollo-link-http allows us to use GraphQL over HTTP graphql and graphql-tag provide the means to work with GraphQL queries node-fetch implements the global fetch operation that is available in browsers, but not in a Node environment.  Don\u0026rsquo;t forget to run npm install after changing the dependencies.\nSetting Up Jest We\u0026rsquo;re using Jest as the testing framework for our contract tests.\nFollow the instructions in the previous tutorial to set up Jest. 
If you want the code of the previous tutorial and the code of this tutorial to exist in parallel, note these changes:\n copy the file pact/setup.js to pact/setup-graphql.js and use different consumer and provider names in package.json add a script test:pact:graphql referring to pact/setup-graphql.js and using --testMatch \\\u0026quot;**/*.test.graphql.pact.js\\\u0026quot; in order to only execute our GraphQL client tests  Now, we can run the pact tests with this command:\nnpm run test:pact:graphql We just don\u0026rsquo;t have a test to run, yet.\nThe Hero GraphQL Client Let\u0026rsquo;s implement a GraphQL client that we can test.\nWe\u0026rsquo;re going to create a client that allows us to query heroes from a GraphQL server.\nThe Hero Class A hero resource has an id, a name, a superpower and it belongs to a certain universe (e.g. \u0026ldquo;DC\u0026rdquo; or \u0026ldquo;Marvel\u0026rdquo;):\n// hero.js class Hero { constructor(name, superpower, universe, id) { this.name = name; this.superpower = superpower; this.universe = universe; this.id = id; } } export default Hero; Strictly speaking, we don\u0026rsquo;t need to declare a class for our hero objects, since we can just use plain JSON objects instead. 
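One caveat: the GraphQLHeroService shown in the next section calls Hero.validateName() and Hero.validateSuperpower(), which the Hero class above does not define. A minimal sketch of what these static validators might look like, assumed here rather than taken from the article:

```javascript
// Sketch: static validators that callers of Hero (like GraphQLHeroService)
// rely on. These are assumed; the article's Hero class does not show them.
class Hero {
  constructor(name, superpower, universe, id) {
    this.name = name;
    this.superpower = superpower;
    this.universe = universe;
    this.id = id;
  }
  static validateName(hero) {
    if (typeof hero.name !== 'string') {
      throw new Error(`Hero name must be a string! Invalid value: ${hero.name}`);
    }
  }
  static validateSuperpower(hero) {
    if (typeof hero.superpower !== 'string') {
      throw new Error(`Hero superpower must be a string! Invalid value: ${hero.superpower}`);
    }
  }
}
```

With validators like these, a malformed server response is rejected inside getHero instead of silently producing a half-initialized Hero.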
However, having a Java background, I couldn\u0026rsquo;t resist the urge to fake type safety ;).\nThe Hero GraphQL Client Service For loading a hero from the server via GraphQL, we\u0026rsquo;re creating the GraphQLHeroService class:\nimport {ApolloClient} from \u0026#34;apollo-client\u0026#34; import {InMemoryCache} from \u0026#34;apollo-cache-inmemory\u0026#34; import {HttpLink} from \u0026#34;apollo-link-http\u0026#34; import gql from \u0026#34;graphql-tag\u0026#34; import Hero from \u0026#34;hero\u0026#34;; class GraphQLHeroService { constructor(baseUrl, port, fetch) { this.client = new ApolloClient({ link: new HttpLink({ uri: `${baseUrl}:${port}/graphql`, fetch: fetch }), cache: new InMemoryCache() }); } getHero(heroId) { if (heroId == null) { throw new Error(\u0026#34;heroId must not be null!\u0026#34;); } return this.client.query({ query: gql` query GetHero($heroId: Int!) { hero(id: $heroId) { name superpower } } `, variables: { heroId: heroId } }).then((response) =\u0026gt; { return new Promise((resolve, reject) =\u0026gt; { try { const hero = new Hero(response.data.hero.name, response.data.hero.superpower, null, heroId); Hero.validateName(hero); Hero.validateSuperpower(hero); resolve(hero); } catch (error) { reject(error); } }) }); }; } export default GraphQLHeroService; First, we\u0026rsquo;re creating a new ApolloClient that is pointed to a certain URL and port.\nIn the constructor, we\u0026rsquo;re passing a fetch function. In a browser environment, this is a globally available function. However, we\u0026rsquo;re going to run our tests in a Node environment where this function is not available by default. 
So, to make our service compatible with both environments, we\u0026rsquo;re taking a fetch function as a parameter and passing it on to be used by the GraphQL client.\nIn the getHero function, we\u0026rsquo;re using gql to create a GraphQL query.\nImplementing a Contract Test In this test, we\u0026rsquo;re going to:\n create a contract between our GraphQL client and GraphQL provider verify that our GraphQL client works as defined in the contract.  The Test Template The test structure will look like this:\n// hero.service.test.graphql.pact.js import GraphQLHeroService from \u0026#39;./hero.service.graphql\u0026#39;; import * as Pact from \u0026#39;@pact-foundation/pact\u0026#39;; import fetch from \u0026#39;node-fetch\u0026#39;; describe(\u0026#39;HeroService GraphQL API\u0026#39;, () =\u0026gt; { const heroService = new GraphQLHeroService(\u0026#39;http://localhost\u0026#39;, global.port, fetch); describe(\u0026#39;getHero()\u0026#39;, () =\u0026gt; { beforeEach((done) =\u0026gt; { // ...  }); it(\u0026#39;sends a request according to contract\u0026#39;, (done) =\u0026gt; { // ...  
}); }); }); We see the usual describe() and it() functions popular in JavaScript testing frameworks.\nAlso, we create an instance of our GraphQLHeroService GraphQL client and tell it to please send its requests to localhost:8080.\nAdditionally, we\u0026rsquo;re importing the fetch function from node-fetch to pass it into our GraphQLHeroService to make it compatible within the Node environment.\nWe\u0026rsquo;ll fill in the beforeEach() and it() functions next.\nDefining the Contract Within the beforeEach function, we\u0026rsquo;re defining our contract:\n// hero.service.test.graphql.pact.js beforeEach((done) =\u0026gt; { const contentTypeJsonMatcher = Pact.Matchers.term({ matcher: \u0026#34;application\\\\/json; *charset=utf-8\u0026#34;, generate: \u0026#34;application/json; charset=utf-8\u0026#34; }); global.provider.addInteraction(new Pact.GraphQLInteraction() .uponReceiving(\u0026#39;a GetHero Query\u0026#39;) .withRequest({ path: \u0026#39;/graphql\u0026#39;, method: \u0026#39;POST\u0026#39;, }) .withOperation(\u0026#34;GetHero\u0026#34;) .withQuery(` query GetHero($heroId: Int!) { hero(id: $heroId) { name superpower __typename } }`) .withVariables({ heroId: 42 }) .willRespondWith({ status: 200, headers: { \u0026#39;Content-Type\u0026#39;: contentTypeJsonMatcher }, body: { data: { hero: { name: Pact.Matchers.somethingLike(\u0026#39;Superman\u0026#39;), superpower: Pact.Matchers.somethingLike(\u0026#39;Flying\u0026#39;), __typename: \u0026#39;Hero\u0026#39; } } } })) .then(() =\u0026gt; done()); }); By calling provider.addInteraction(), we\u0026rsquo;re passing a request / response pair to the pact mock server (which has been started by the jest-wrapper.js script we defined above).\nSince we want to create a GraphQL interaction, we\u0026rsquo;re using Pact\u0026rsquo;s GraphQLInteraction class to describe this interaction.\nThe differences from a standard REST interaction are the .withOperation(), .withQuery() and .withVariables() functions. 
These we can use to define the name of the GraphQL operation (if we have defined a name in the query), the GraphQL query itself and the variables used within the query.\nFor a discussion of the GraphQL syntax, refer to the GraphQL documentation.\nNote the __typename field in the query. We have not defined such a field in our Hero class. However, the Apollo GraphQL client adds this field by itself, so we need to include it in our contract.\nAlso note that whitespaces are not important in the GraphQL query. If the GraphQL client adds whitespaces and line breaks in a different manner, it doesn\u0026rsquo;t matter.\nVerifying the GraphQL Client Now, we want to make sure that our GraphQLHeroService works as expected by the contract. We do this in the actual test method it():\n// hero.service.test.graphql.pact.js it(\u0026#39;sends a request according to contract\u0026#39;, (done) =\u0026gt; { heroService.getHero(42) .then(hero =\u0026gt; { expect(hero.name).toEqual(\u0026#39;Superman\u0026#39;); }) .then(() =\u0026gt; { global.provider.verify() .then(() =\u0026gt; done(), error =\u0026gt; { done.fail(error) }) }); }); We\u0026rsquo;re calling our heroService to fetch a hero for us. Since the heroService is configured to send requests to the Pact mock provider, Pact can check if the request matches a certain request / response pair.\nIn our case, we have only defined a single request / response pair, so if the request does not match the request we have defined in our beforeEach() function above, we\u0026rsquo;ll get an error.\nIf the request matches, the Pact mock provider will return the response we have provided in the contract. 
To prove that, we assert that the hero\u0026rsquo;s name is the one we provided in the contract.\nBy calling provider.verify() we also make sure that the test fails if the heroService doesn\u0026rsquo;t send any request at all or sends a request that doesn\u0026rsquo;t match any of the registered interactions.\nWe can now run our test with npm run test:pact:graphql and it should be green. Also, it should have created a contract file in the pacts folder that can be published so that the provider can test against it, too.\nImproving Contract Quality with Validation Read this discussion in my previous tutorial.\nDebugging Read this discussion in my previous tutorial.\nPublishing the Contract Read this discussion in my previous tutorial.\nConclusion In this tutorial, we have successfully created a GraphQL client with Node and Apollo. We have also defined a contract for this client and verified that this client works as expected by the contract.\nThe contract can now be used to verify that a certain GraphQL provider works as expected.\nThe code for this tutorial can be found on GitHub.\n","date":"November 10, 2018","image":"https://reflectoring.io/images/stock/0025-signature-1200x628-branded_hu40d5255a109b1d14ac3f4eab2daeb887_126452_650x0_resize_q90_box.jpg","permalink":"/pact-graphql-consumer/","title":"Implementing a Consumer-Driven Contract for a GraphQL Consumer with Node and Apollo"},{"categories":["programming"],"contents":"Consumer-driven contract (CDC) tests are a technique to test integration points between API providers and API consumers without the hassle of end-to-end tests (read it up in a recent blog post). A common use case for consumer-driven contract tests is testing interfaces between services in a microservice architecture.\nThis article explains the steps of setting up a GraphQL client (or \u0026ldquo;consumer\u0026rdquo;) using the Apollo framework. 
We\u0026rsquo;ll then create and publish a consumer-driven contract for the GraphQL interaction between the GraphQL client and the API provider and implement a contract test that validates that our consumer is working as expected by the contract. For this, we\u0026rsquo;re using the Node version of the Pact framework.\nThis tutorial builds upon a recent tutorial about creating a React consumer for a REST API, so you\u0026rsquo;ll find some links to that tutorial for more detailed explanations.\n Example Code This article is accompanied by a working code example on GitHub. Creating the Node App To set up a Node app, follow the instructions in the previous tutorial. There, we\u0026rsquo;re using the create-react-app tool to create a React client that already has Jest set up as a testing framework.\nHowever, since we\u0026rsquo;re not using React in this tutorial, you can also create a plain Node app. Then you have to set up a test framework manually, though.\nAdding Dependencies In our package.json, we need to declare some additional dependencies:\n{ \u0026#34;dependencies\u0026#34;: { \u0026#34;apollo-cache-inmemory\u0026#34;: \u0026#34;^1.3.9\u0026#34;, \u0026#34;apollo-client\u0026#34;: \u0026#34;^2.4.5\u0026#34;, \u0026#34;apollo-link-http\u0026#34;: \u0026#34;^1.5.5\u0026#34;, \u0026#34;graphql\u0026#34;: \u0026#34;^14.0.2\u0026#34;, \u0026#34;graphql-tag\u0026#34;: \u0026#34;^2.10.0\u0026#34;, \u0026#34;node-fetch\u0026#34;: \u0026#34;^2.2.1\u0026#34; } }  apollo-client provides Apollo\u0026rsquo;s GraphQL client implementation apollo-cache-inmemory contains Apollo\u0026rsquo;s implementation of an in-memory-cache that is used to cache GraphQL query results to reduce the number of requests to the server apollo-link-http allows us to use GraphQL over HTTP graphql and graphql-tag provide the means to work with GraphQL queries node-fetch implements the global fetch operation that is available in browsers, but not in a Node environment.  
Don\u0026rsquo;t forget to run npm install after changing the dependencies.\nSetting Up Jest We\u0026rsquo;re using Jest as the testing framework for our contract tests.\nFollow the instructions in the previous tutorial to set up Jest. If you want the code of the previous tutorial and the code of this tutorial to exist in parallel, note these changes:\n copy the file pact/setup.js to pact/setup-graphql.js and use different consumer and provider names in package.json add a script test:pact:graphql referring to pact/setup-graphql.js and using --testMatch \\\u0026quot;**/*.test.graphql.pact.js\\\u0026quot; in order to only execute our GraphQL client tests  Now, we can run the pact tests with this command:\nnpm run test:pact:graphql We just don\u0026rsquo;t have a test to run, yet.\nThe Hero GraphQL Client Let\u0026rsquo;s implement a GraphQL client that we can test.\nWe\u0026rsquo;re going to create a client that allows us to query heroes from a GraphQL server.\nThe Hero Class A hero resource has an id, a name, a superpower and it belongs to a certain universe (e.g. \u0026ldquo;DC\u0026rdquo; or \u0026ldquo;Marvel\u0026rdquo;):\n// hero.js class Hero { constructor(name, superpower, universe, id) { this.name = name; this.superpower = superpower; this.universe = universe; this.id = id; } } export default Hero; Strictly speaking, we don\u0026rsquo;t need to declare a class for our hero objects, since we can just use plain JSON objects instead. 
However, having a Java background, I couldn\u0026rsquo;t resist the urge to fake type safety ;).\nThe Hero GraphQL Client Service For loading a hero from the server via GraphQL, we\u0026rsquo;re creating the GraphQLHeroService class:\nimport {ApolloClient} from \u0026#34;apollo-client\u0026#34; import {InMemoryCache} from \u0026#34;apollo-cache-inmemory\u0026#34; import {HttpLink} from \u0026#34;apollo-link-http\u0026#34; import gql from \u0026#34;graphql-tag\u0026#34; import Hero from \u0026#34;hero\u0026#34;; class GraphQLHeroService { constructor(baseUrl, port, fetch) { this.client = new ApolloClient({ link: new HttpLink({ uri: `${baseUrl}:${port}/graphql`, fetch: fetch }), cache: new InMemoryCache() }); } getHero(heroId) { if (heroId == null) { throw new Error(\u0026#34;heroId must not be null!\u0026#34;); } return this.client.query({ query: gql` query GetHero($heroId: Int!) { hero(id: $heroId) { name superpower } } `, variables: { heroId: heroId } }).then((response) =\u0026gt; { return new Promise((resolve, reject) =\u0026gt; { try { const hero = new Hero(response.data.hero.name, response.data.hero.superpower, null, heroId); Hero.validateName(hero); Hero.validateSuperpower(hero); resolve(hero); } catch (error) { reject(error); } }) }); }; } export default GraphQLHeroService; First, we\u0026rsquo;re creating a new ApolloClient that is pointed to a certain URL and port.\nIn the constructor, we\u0026rsquo;re passing a fetch function. In a browser environment, this is a globally available function. However, we\u0026rsquo;re going to run our tests in a Node environment where this function is not available by default. 
So, to make our service compatible with both environments, we\u0026rsquo;re taking a fetch function as a parameter and passing it on to be used by the GraphQL client.\nIn the getHero function, we\u0026rsquo;re using gql to create a GraphQL query.\nImplementing a Contract Test In this test, we\u0026rsquo;re going to:\n create a contract between our GraphQL client and GraphQL provider verify that our GraphQL client works as defined in the contract.  The Test Template The test structure will look like this:\n// hero.service.test.graphql.pact.js import GraphQLHeroService from \u0026#39;./hero.service.graphql\u0026#39;; import * as Pact from \u0026#39;@pact-foundation/pact\u0026#39;; import fetch from \u0026#39;node-fetch\u0026#39;; describe(\u0026#39;HeroService GraphQL API\u0026#39;, () =\u0026gt; { const heroService = new GraphQLHeroService(\u0026#39;http://localhost\u0026#39;, global.port, fetch); describe(\u0026#39;getHero()\u0026#39;, () =\u0026gt; { beforeEach((done) =\u0026gt; { // ...  }); it(\u0026#39;sends a request according to contract\u0026#39;, (done) =\u0026gt; { // ...  
}); }); }); We see the usual describe() and it() functions popular in JavaScript testing frameworks.\nAlso, we create an instance of our GraphQLHeroService GraphQL client and tell it to please send its requests to localhost:8080.\nAdditionally, we\u0026rsquo;re importing the fetch function from node-fetch to pass it into our GraphQLHeroService to make it compatible within the Node environment.\nWe\u0026rsquo;ll fill in the beforeEach() and it() functions next.\nDefining the Contract Within the beforeEach function, we\u0026rsquo;re defining our contract:\n// hero.service.test.graphql.pact.js beforeEach((done) =\u0026gt; { const contentTypeJsonMatcher = Pact.Matchers.term({ matcher: \u0026#34;application\\\\/json; *charset=utf-8\u0026#34;, generate: \u0026#34;application/json; charset=utf-8\u0026#34; }); global.provider.addInteraction(new Pact.GraphQLInteraction() .uponReceiving(\u0026#39;a GetHero Query\u0026#39;) .withRequest({ path: \u0026#39;/graphql\u0026#39;, method: \u0026#39;POST\u0026#39;, }) .withOperation(\u0026#34;GetHero\u0026#34;) .withQuery(` query GetHero($heroId: Int!) { hero(id: $heroId) { name superpower __typename } }`) .withVariables({ heroId: 42 }) .willRespondWith({ status: 200, headers: { \u0026#39;Content-Type\u0026#39;: contentTypeJsonMatcher }, body: { data: { hero: { name: Pact.Matchers.somethingLike(\u0026#39;Superman\u0026#39;), superpower: Pact.Matchers.somethingLike(\u0026#39;Flying\u0026#39;), __typename: \u0026#39;Hero\u0026#39; } } } })) .then(() =\u0026gt; done()); }); By calling provider.addInteraction(), we\u0026rsquo;re passing a request / response pair to the pact mock server (which has been started by the jest-wrapper.js script we defined above).\nSince we want to create a GraphQL interaction, we\u0026rsquo;re using Pact\u0026rsquo;s GraphQLInteraction class to describe this interaction.\nThe differences from a standard REST interaction are the .withOperation(), .withQuery() and .withVariables() functions. 
We can use these to define the name of the GraphQL operation (if we have defined a name in the query), the GraphQL query itself and the variables used within the query.\nFor a discussion of the GraphQL syntax, refer to the GraphQL documentation.\nNote the __typename field in the query. We have not defined such a field in our Hero class. However, the Apollo GraphQL client adds this field by itself, so we need to include it in our contract.\nAlso note that whitespace is not significant in the GraphQL query. If the GraphQL client adds whitespace and line breaks in a different manner, it doesn\u0026rsquo;t matter.\nVerifying the GraphQL Client Now, we want to make sure that our GraphQLHeroService works as expected by the contract. We do this in the actual test method it():\n// hero.service.test.graphql.pact.js it(\u0026#39;sends a request according to contract\u0026#39;, (done) =\u0026gt; { heroService.getHero(42) .then(hero =\u0026gt; { expect(hero.name).toEqual(\u0026#39;Superman\u0026#39;); }) .then(() =\u0026gt; { global.provider.verify() .then(() =\u0026gt; done(), error =\u0026gt; { done.fail(error) }) }); }); We\u0026rsquo;re calling our heroService to fetch a hero for us. Since the heroService is configured to send requests to the Pact mock provider, Pact can check if the request matches a certain request / response pair.\nIn our case, we have only defined a single request / response pair, so if the request does not match the request we have defined in our beforeEach() function above, we\u0026rsquo;ll get an error.\nIf the request matches, the Pact mock provider will return the response we have provided in the contract. 
To prove that, we assert that the hero\u0026rsquo;s name is the one we provided in the contract.\nBy calling provider.verify(), we also make sure that the test fails if the heroService doesn\u0026rsquo;t send any request at all or sends a request that does not match any of the registered interactions.\nWe can now run our test with npm run test:pact:graphql and it should be green. Also, it should have created a contract file in the pacts folder that can be published so that the provider can test against it, too.\nImproving Contract Quality with Validation Read this discussion in my previous tutorial.\nDebugging Read this discussion in my previous tutorial.\nPublishing the Contract Read this discussion in my previous tutorial.\nConclusion In this tutorial, we have successfully created a GraphQL client with Node and Apollo. We have also defined a contract for this client and verified that this client works as expected by the contract.\nThe contract can now be used to verify that a certain GraphQL provider works as expected.\nThe code for this tutorial can be found on GitHub.\n","date":"November 10, 2018","image":"https://reflectoring.io/images/stock/0025-signature-1200x628-branded_hu40d5255a109b1d14ac3f4eab2daeb887_126452_650x0_resize_q90_box.jpg","permalink":"/pact-node-graphql-consumer/","title":"Implementing a Consumer-Driven Contract for a GraphQL Consumer with Node and Apollo"},{"categories":["programming"],"contents":"Consumer-driven contract (CDC) tests are a technique to test integration points between API providers and API consumers without the hassle of end-to-end tests (read it up in a recent blog post). 
A common use case for consumer-driven contract tests is testing interfaces between services in a microservice architecture.\nIn this tutorial, we\u0026rsquo;re going to create a GraphQL API provider with Node and Express that implements the Heroes query from the contract created previously by a GraphQL consumer.\nThen, we\u0026rsquo;ll create a contract test with the JavaScript version of Pact that verifies that our provider works as specified in the contract.\nThis tutorial assumes you have a current version of Node installed.\n Example Code This article is accompanied by a working code example on GitHub. Creating an Express Server If you have already followed the previous tutorial about Node and Pact, you can re-use the Node Express server created there.\nOtherwise, follow the instructions in the previous tutorial to create an Express Server from scratch.\nAdding the Heroes GraphQL Endpoint Having a base Express project, we\u0026rsquo;re ready to implement a new GraphQL endpoint.\nThe Contract But first, let\u0026rsquo;s have a look at the contract against which we\u0026rsquo;re about to implement. The contract has been created by the consumer in this article:\n{ \u0026#34;consumer\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;graphql-hero-consumer\u0026#34; }, \u0026#34;provider\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;graphql-hero-provider\u0026#34; }, \u0026#34;interactions\u0026#34;: [ { \u0026#34;description\u0026#34;: \u0026#34;a GetHero Query\u0026#34;, \u0026#34;request\u0026#34;: { \u0026#34;method\u0026#34;: \u0026#34;POST\u0026#34;, \u0026#34;path\u0026#34;: \u0026#34;/graphql\u0026#34;, \u0026#34;headers\u0026#34;: { \u0026#34;content-type\u0026#34;: \u0026#34;application/json\u0026#34; }, \u0026#34;body\u0026#34;: { \u0026#34;operationName\u0026#34;: \u0026#34;GetHero\u0026#34;, \u0026#34;query\u0026#34;: \u0026#34;\\nquery GetHero($heroId: Int!) 
{\\nhero(id: $heroId) {\\nname\\nsuperpower\\n__typename\\n}\\n}\u0026#34;, \u0026#34;variables\u0026#34;: { \u0026#34;heroId\u0026#34;: 42 } }, \u0026#34;matchingRules\u0026#34;: { \u0026#34;$.body.query\u0026#34;: { \u0026#34;match\u0026#34;: \u0026#34;regex\u0026#34;, \u0026#34;regex\u0026#34;: \u0026#34;...\u0026#34; } } }, \u0026#34;response\u0026#34;: { \u0026#34;status\u0026#34;: 200, \u0026#34;headers\u0026#34;: { \u0026#34;Content-Type\u0026#34;: \u0026#34;application/json; charset=utf-8\u0026#34; }, \u0026#34;body\u0026#34;: { \u0026#34;data\u0026#34;: { \u0026#34;hero\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;Superman\u0026#34;, \u0026#34;superpower\u0026#34;: \u0026#34;Flying\u0026#34;, \u0026#34;__typename\u0026#34;: \u0026#34;Hero\u0026#34; } } }, \u0026#34;matchingRules\u0026#34;: { \u0026#34;$.headers.Content-Type\u0026#34;: { \u0026#34;match\u0026#34;: \u0026#34;regex\u0026#34;, \u0026#34;regex\u0026#34;: \u0026#34;application\\\\/json; *charset=utf-8\u0026#34; }, \u0026#34;$.body.data.hero.name\u0026#34;: { \u0026#34;match\u0026#34;: \u0026#34;type\u0026#34; }, \u0026#34;$.body.data.hero.superpower\u0026#34;: { \u0026#34;match\u0026#34;: \u0026#34;type\u0026#34; } } } } ], \u0026#34;metadata\u0026#34;: { \u0026#34;pactSpecification\u0026#34;: { \u0026#34;version\u0026#34;: \u0026#34;2.0.0\u0026#34; } } } The contract contains a single request / response pair, called an \u0026ldquo;interaction\u0026rdquo;. In this interaction, the consumer sends a POST request containing a GraphQL query to the /graphql HTTP endpoint. The expected response to that request has HTTP status 200 and contains a hero JSON object.\nIn the following, we assume that the contract has been published on a Pact Broker by the consumer. 
But it\u0026rsquo;s also possible to take the contract file from the consumer codebase and copy it into the provider code base (be careful, though: we\u0026rsquo;d be losing the single source of truth!).\nAdding an Express Route To implement the GraphQL endpoint on the provider side, we create a new route in our Express server:\n// ./routes/graphql.js const graphqlHTTP = require(\u0026#39;express-graphql\u0026#39;); const {buildSchema} = require(\u0026#34;graphql\u0026#34;); const heroesSchema = buildSchema(` type Query { hero(id: Int!): Hero } type Hero { id: Int! name: String! superpower: String! universe: String! } `); const getHero = function () { return { id: 42, name: \u0026#34;Superman\u0026#34;, superpower: \u0026#34;Flying\u0026#34;, universe: \u0026#34;DC\u0026#34; } }; const root = { hero: getHero }; const router = graphqlHTTP({ schema: heroesSchema, graphiql: true, rootValue: root }); module.exports = router; First, we\u0026rsquo;re defining a GraphQL schema for querying heroes. Note that this schema must provide the query we have seen in the consumer-driven contract above.\nFor a detailed discussion of GraphQL schemas, refer to the GraphQL documentation.\nNext, we\u0026rsquo;re providing a getHero() function that is responsible for finding a hero. In this example, we\u0026rsquo;re simply always returning the same object. In the real world, this function would load a hero from an external resource like a database, depending on an ID that\u0026rsquo;s passed in.\nIn the root object, we\u0026rsquo;re defining the GraphQL root. Since we\u0026rsquo;re only providing a GraphQL query for heroes, the only root is hero, which should resolve to a hero object, so we\u0026rsquo;re using the getHero function we have defined above.\nUsing the express-graphql module, we\u0026rsquo;re then creating a GraphQL HTTP resolver (I called it \u0026ldquo;router\u0026rdquo; in the style of simple REST endpoints). 
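The constant getHero above keeps the tutorial focused. As a hedged sketch (not part of the tutorial code), an ID-aware resolver could look like the following; with express-graphql, top-level resolver functions receive the query variables as their first argument, and the in-memory `heroesById` map is an assumption standing in for a real database:

```javascript
// Hypothetical variant of getHero: looks up a hero by the id variable
// from the GraphQL query instead of always returning the same object.
// The in-memory map stands in for a real database.
const heroesById = {
  42: { id: 42, name: "Superman", superpower: "Flying", universe: "DC" },
};

const getHero = function ({ id }) {
  // Return null when no hero matches; GraphQL renders this as a null field.
  return heroesById[id] || null;
};

module.exports = { getHero };
```

Plugged into the root object as before (`const root = { hero: getHero };`), a query with `heroId: 42` would resolve to the Superman object.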
We set the graphiql property to true in order to get access to a nice GraphQL query web interface.\nFinally, we have to make the new endpoint known to the express server by adding it in the app.js file:\n// ./app.js const graphqlRouter = require(\u0026#39;./routes/graphql\u0026#39;); app.use(\u0026#39;/graphql\u0026#39;, graphqlRouter); Testing the GraphQL Endpoint We now have a working /graphql endpoint. We can test it by running npm run start and type the URL http://localhost:3000/graphql into a browser.\nWe should see the graphiql interface that looks something like this:\nWe can play around and enter a query like shown in the screenshot to check if the server responds accordingly.\nSetting Up Pact Now, we want to verify that our GraphQL endpoint works as expected by the contract.\nSo, let\u0026rsquo;s create a contract test that does the following:\n start up our express server with the /graphql endpoint send a request against the endpoint with a hero query verify that the response matches the expectations expressed in the contract  Pact will do most of the work, but we need to set it up correctly.\nDependencies First, we add some dependencies to package.json:\n// ./package.json { \u0026#34;devDependencies\u0026#34;: { \u0026#34;@pact-foundation/pact\u0026#34;: \u0026#34;7.0.3\u0026#34;, \u0026#34;start-server-and-test\u0026#34;: \u0026#34;^1.7.5\u0026#34; } }  we use pact to interpret a given contract file and create a provider test for us we use start-server-and-test to allow us to start up the Express server and the provider test at once.  Creating a Provider-Side Contract Test The actual contract testing is done by Pact. 
We simply have to make sure that our GraphQL endpoint is up and running and ready to receive requests.\nLet\u0026rsquo;s create the script pact/provider_tests_graphql.js to configure Pact:\nconst { Verifier } = require(\u0026#39;@pact-foundation/pact\u0026#39;); const packageJson = require(\u0026#39;../package.json\u0026#39;); let opts = { providerBaseUrl: \u0026#39;http://localhost:3000\u0026#39;, provider: \u0026#39;graphql-hero-provider\u0026#39;, pactBrokerUrl: \u0026#39;https://adesso.pact.dius.com.au\u0026#39;, pactBrokerUsername: process.env.PACT_USERNAME, pactBrokerPassword: process.env.PACT_PASSWORD, publishVerificationResult: true, providerVersion: packageJson.version, }; new Verifier().verifyProvider(opts).then(function () { console.log(\u0026#34;Pacts successfully verified!\u0026#34;); }); To make the script runnable via Node, we add some scripts to package.json:\n// ./package.json { \u0026#34;scripts\u0026#34;: { \u0026#34;start\u0026#34;: \u0026#34;node ./bin/www.js\u0026#34;, \u0026#34;pact:providerTests:graphql\u0026#34;: \u0026#34;node ./pact/provider_tests_graphql.js\u0026#34;, \u0026#34;test:pact:graphql\u0026#34;: \u0026#34;start-server-and-test start http://localhost:3000 pact:providerTests:graphql\u0026#34; } } The scripts are explained in detail in my previous tutorial on creating a contract test for a Node REST provider.\nWe can now run the provider tests and they should be green:\nnpm run test:pact:graphql Conclusion In this tutorial we went through the steps to create an Express server with a GraphQL endpoint and enabled it to run provider contract tests against a Pact contract.\nYou can look at the example code from this tutorial in my github repo.\n","date":"November 10, 2018","image":"https://reflectoring.io/images/stock/0026-signature-1200x628-branded_hua6bf2a4b7ae34ab845137fd515e2ba8a_112398_650x0_resize_q90_box.jpg","permalink":"/pact-node-graphql-provider/","title":"Implementing a Consumer-Driven Contract for a GraphQL Provider with 
Node and Express"},{"categories":["programming"],"contents":"Consumer-driven contract (CDC) tests are a technique to test integration points between API providers and API consumers without the hassle of end-to-end tests (read it up in a recent blog post). A common use case for consumer-driven contract tests is testing interfaces between services in a microservice architecture.\nIn this tutorial, we\u0026rsquo;re going to create a REST provider with Node and Express that implements the Heroes endpoints from the contract created in this article.\nThen, we\u0026rsquo;ll create a contract test with the JavaScript version of Pact that verifies that our provider works as specified in the contract.\nThis tutorial assumes you have a current version of Node installed.\n Example Code This article is accompanied by a working code example on GitHub. Creating an Express Server Let\u0026rsquo;s start by creating an Express server from scratch.\nSince we don\u0026rsquo;t want to do this by hand, we\u0026rsquo;ll install the express-generator:\nnpm install -g express-generator Then, we simply call the generator to create a project template for us:\nexpress --no-view pact-node-provider We\u0026rsquo;re using the --no-view parameter since we\u0026rsquo;re only implementing REST endpoints and thus don\u0026rsquo;t need any templating engine.\nDon\u0026rsquo;t forget to call npm install in the created project folder now to install the dependencies.\nAdding the Heroes Endpoint Having a base Express project, we\u0026rsquo;re ready to implement a new REST endpoint.\nThe Contract But first, let\u0026rsquo;s have a look at the contract against which we\u0026rsquo;re about to implement. 
The contract has been created by the consumer in this article:\n{ \u0026#34;consumer\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;hero-consumer\u0026#34; }, \u0026#34;provider\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;hero-provider\u0026#34; }, \u0026#34;interactions\u0026#34;: [ { \u0026#34;description\u0026#34;: \u0026#34;a POST request to create a hero\u0026#34;, \u0026#34;providerState\u0026#34;: \u0026#34;provider allows hero creation\u0026#34;, \u0026#34;request\u0026#34;: { \u0026#34;method\u0026#34;: \u0026#34;POST\u0026#34;, \u0026#34;path\u0026#34;: \u0026#34;/heroes\u0026#34;, \u0026#34;headers\u0026#34;: { \u0026#34;Accept\u0026#34;: \u0026#34;application/json; charset=utf-8\u0026#34;, \u0026#34;Content-Type\u0026#34;: \u0026#34;application/json; charset=utf-8\u0026#34; }, \u0026#34;body\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;Superman\u0026#34;, \u0026#34;superpower\u0026#34;: \u0026#34;flying\u0026#34;, \u0026#34;universe\u0026#34;: \u0026#34;DC\u0026#34; }, \u0026#34;matchingRules\u0026#34;: { \u0026#34;$.headers.Accept\u0026#34;: { \u0026#34;match\u0026#34;: \u0026#34;regex\u0026#34;, \u0026#34;regex\u0026#34;: \u0026#34;application\\\\/json; *charset=utf-8\u0026#34; }, \u0026#34;$.headers.Content-Type\u0026#34;: { \u0026#34;match\u0026#34;: \u0026#34;regex\u0026#34;, \u0026#34;regex\u0026#34;: \u0026#34;application\\\\/json; *charset=utf-8\u0026#34; } } }, \u0026#34;response\u0026#34;: { \u0026#34;status\u0026#34;: 201, \u0026#34;headers\u0026#34;: { \u0026#34;Content-Type\u0026#34;: \u0026#34;application/json; charset=utf-8\u0026#34; }, \u0026#34;body\u0026#34;: { \u0026#34;id\u0026#34;: 42, \u0026#34;name\u0026#34;: \u0026#34;Superman\u0026#34;, \u0026#34;superpower\u0026#34;: \u0026#34;flying\u0026#34;, \u0026#34;universe\u0026#34;: \u0026#34;DC\u0026#34; }, \u0026#34;matchingRules\u0026#34;: { \u0026#34;$.headers.Content-Type\u0026#34;: { \u0026#34;match\u0026#34;: \u0026#34;regex\u0026#34;, \u0026#34;regex\u0026#34;: 
\u0026#34;application\\\\/json; *charset=utf-8\u0026#34; }, \u0026#34;$.body\u0026#34;: { \u0026#34;match\u0026#34;: \u0026#34;type\u0026#34; } } } } ], \u0026#34;metadata\u0026#34;: { \u0026#34;pactSpecification\u0026#34;: { \u0026#34;version\u0026#34;: \u0026#34;2.0.0\u0026#34; } } } The contract contains a single request / response pair, called an \u0026ldquo;interaction\u0026rdquo;. In this interaction, the consumer sends a POST request with a hero JSON object in the body and the provider is expected to return a response with HTTP status 201 which again contains the hero in the body, this time with an ID that was added by the server.\nIn the following, we assume that the contract has been published on a Pact Broker by the consumer. But it\u0026rsquo;s also possible to take the contract file from the consumer codebase and access it directly in the provider code base.\nAdding an Express Route We now want to implement the contract on the provider side.\nFor this, we create a new POST route that expects a hero JSON object as payload:\n// ./routes/heroes.js const express = require(\u0026#39;express\u0026#39;); const router = express.Router(); router.route(\u0026#39;/\u0026#39;) .post(function (req, res) { res.status(201); res.json({ id: 42, superpower: \u0026#39;flying\u0026#39;, name: \u0026#39;Superman\u0026#39;, universe: \u0026#39;DC\u0026#39; }); }); module.exports = router; I highly recommend adding some kind of validation to check the incoming request (e.g. check that the body contains all expected fields). I explained here why validation immensely improves the quality of our contract tests.\nNow we have to make the new route available to the Express application by adding it in app.js:\n// ./app.js const heroesRouter = require(\u0026#39;./routes/heroes\u0026#39;); app.use(\u0026#39;/heroes\u0026#39;, heroesRouter); We just implemented the provider side of the contract. 
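The validation recommended above could be sketched like this (a hedged example, not part of the tutorial code; the helper and middleware names are made up for illustration, and the field list follows the contract shown above):

```javascript
// Hypothetical validation for the POST /heroes route: checks that the
// request body contains all string fields the contract expects.
const REQUIRED_FIELDS = ["name", "superpower", "universe"];

function validateHero(body) {
  const missing = REQUIRED_FIELDS.filter(
    (field) => typeof body[field] !== "string" || body[field].length === 0
  );
  return { valid: missing.length === 0, missing };
}

// Wired up as Express middleware, it could reject invalid bodies with a 400
// before the route handler runs.
function heroValidation(req, res, next) {
  const { valid, missing } = validateHero(req.body || {});
  if (!valid) {
    res.status(400).json({ error: `missing fields: ${missing.join(", ")}` });
    return;
  }
  next();
}

module.exports = { validateHero, heroValidation };
```

With something like this in place, a request that violates the contract fails fast with a clear error instead of silently receiving a canned response it never matched.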
We can check if it works by calling npm run start and sending a POST request to http://localhost:3000/heroes with a REST client tool. Or, we can just type the URL into our browser. However, we\u0026rsquo;ll get HTTP status 405 then, because the browser sends a GET request and a POST request is expected.\nNow we have to prove that the provider actually works as expected by the contract.\nSetting Up Pact So, let\u0026rsquo;s set up Pact to implement a provider test that verifies our endpoint against the contract.\nThe provider test reads the interactions from a contract and, for each interaction, does the following:\n put the provider into a state that allows it to respond accordingly send the request to the provider validate that the response from the provider matches the response from the contract.  Pact does most of the work here; we just need to set it up correctly.\nDependencies First, we add some dependencies to package.json:\n// ./package.json { \u0026#34;devDependencies\u0026#34;: { \u0026#34;@pact-foundation/pact\u0026#34;: \u0026#34;7.0.3\u0026#34;, \u0026#34;start-server-and-test\u0026#34;: \u0026#34;^1.7.5\u0026#34; } }  we use pact to interpret a given contract file and create a provider test for us we use start-server-and-test to allow us to start up the Express server and the provider test at once.  Adding a Provider State Endpoint The first step of the provider test for each interaction is to put the provider into a certain state, called a \u0026ldquo;provider state\u0026rdquo; in Pact lingo.\nIn the contract above, the provider state for our single interaction is called \u0026ldquo;provider allows hero creation\u0026rdquo;.\nProvider states can be used by the provider to mock database queries, for example. 
When the provider is notified to go into the state \u0026ldquo;provider allows hero creation\u0026rdquo;, it knows which database queries are needed and can set up mocks that simulate the database accordingly.\nThus, we don\u0026rsquo;t need to spin up a database during the test. A major advantage of CDC tests is being able to execute them without spinning up a whole server farm with a database and other dependencies. Hence, we should make use of mocks that react to the provider states.\nYou can read more about provider states in the Pact docs.\nIn order to put the provider into a certain state, it needs a POST endpoint that accepts the consumer and state query parameters:\n// ./routes/provider_state.js const express = require(\u0026#39;express\u0026#39;); const router = express.Router(); router.route(\u0026#39;/\u0026#39;) .post(function (req, res) { const consumer = req.query[\u0026#39;consumer\u0026#39;]; const providerState = req.query[\u0026#39;state\u0026#39;]; // imagine we\u0026#39;re setting the server into a certain state  res.status(200); res.send(`changed to provider state \u0026#34;${providerState}\u0026#34; for consumer \u0026#34;${consumer}\u0026#34;`); }); module.exports = router; Note that the endpoint implementation above is just a dummy implementation. We don\u0026rsquo;t have any database access in our /heroes endpoint, hence we don\u0026rsquo;t need to mock anything.\nNext, we make the endpoint available to the Express app:\n// ./app.js const providerStateRouter = require(\u0026#39;./routes/provider_state\u0026#39;); if (process.env.PACT_MODE === \u0026#39;true\u0026#39;) { app.use(\u0026#39;/provider-state\u0026#39;, providerStateRouter); } We only activate the endpoint when the environment variable PACT_MODE is set to true, since we don\u0026rsquo;t want this endpoint in production.\nMake sure to set this environment variable when running the test later.\nProviding an endpoint that is only needed in tests is quite invasive. 
There\u0026rsquo;s a feature proposal that provides \u0026ldquo;state handlers\u0026rdquo; that can react to provider states within your provider test. This way, we can mock external dependencies depending on the provider state more cleanly within the test, instead of \u0026ldquo;polluting\u0026rdquo; our application with a dedicated endpoint. However, this feature has not made it into Pact, yet.\nCreating a Provider-Side Contract Test Now we create a script pact/provider_tests.js to use Pact to do the actual testing:\n// ./pact/provider_tests.js const { Verifier } = require(\u0026#39;@pact-foundation/pact\u0026#39;); const packageJson = require(\u0026#39;../package.json\u0026#39;); let opts = { providerBaseUrl: \u0026#39;http://localhost:3000\u0026#39;, pactBrokerUrl: \u0026#39;https://adesso.pact.dius.com.au\u0026#39;, pactBrokerUsername: process.env.PACT_USERNAME, pactBrokerPassword: process.env.PACT_PASSWORD, provider: \u0026#39;hero-provider\u0026#39;, publishVerificationResult: true, providerVersion: packageJson.version, providerStatesSetupUrl: \u0026#39;http://localhost:3000/provider-state\u0026#39; }; new Verifier().verifyProvider(opts).then(function () { console.log(\u0026#34;Pacts successfully verified!\u0026#34;); }); In the script we define some options and pass them to a Verifier instance that executes the three steps (provider state, send request, validate response).\nThe most important options are:\n pactBroker\u0026hellip;: coordinates to the pact broker instance where Pact can download the contracts. Username and password are read from environment variables since we don\u0026rsquo;t want to include them in code. provider: we tell pact to download only contracts for the provider we\u0026rsquo;re currently implementing, which in this case is hero-provider. providerBaseUrl: base url of the provider to which the requests are going to be sent. In our case, we\u0026rsquo;re starting the Express server locally on port 3000. 
providerStatesSetupUrl: the URL to change provider states. This refers to the endpoint we have created above. In our case, we could actually leave this option out, since our provider state endpoint doesn\u0026rsquo;t really do anything.  Instead of providing the coordinates to a Pact Broker, we could also provide a pactUrls option pointing directly to local pact files.\nA full description of the options can be found here.\nIf the script is run, it will load all contracts for the provider hero-provider from the specified Pact Broker and then call Pact\u0026rsquo;s Verifier. For each interaction defined in the loaded contracts, the Verifier will send a request to http://localhost:3000 and check if the response matches the expectations expressed in the contract.\nTo make the script runnable via Node, we add some scripts to package.json:\n// ./package.json { \u0026#34;scripts\u0026#34;: { \u0026#34;start\u0026#34;: \u0026#34;node ./bin/www.js\u0026#34;, \u0026#34;pact:providerTests\u0026#34;: \u0026#34;node ./pact/provider_tests.js\u0026#34;, \u0026#34;test:pact\u0026#34;: \u0026#34;start-server-and-test start http://localhost:3000 pact:providerTests\u0026#34; } } The start script has already been added by the Express generator.\nThe script pact:providerTests runs the provider_tests.js script from above. 
However, this will only work when the Express server is already running.\nSo we create a third script test:pact that uses the start-server-and-test tool we added to our dependencies earlier to start up the Express server first and then run the provider tests.\nWe tell the tool to run the start task first and run the server on localhost:3000 before running the pact:providerTests task.\nWe can now run the provider tests and they should be green:\nnpm run test:pact Conclusion In this tutorial we went through the steps to create an Express server from scratch and enabled it to run provider tests against a Pact contract.\nYou can look at the example code from this tutorial in my github repo.\n","date":"October 28, 2018","image":"https://reflectoring.io/images/stock/0026-signature-1200x628-branded_hua6bf2a4b7ae34ab845137fd515e2ba8a_112398_650x0_resize_q90_box.jpg","permalink":"/pact-node-provider/","title":"Implementing a Consumer-Driven Contract for a Node Express Server with Pact"},{"categories":["programming"],"contents":"Consumer-driven contract (CDC) tests are a technique to test integration points between API providers and API consumers without the hassle of end-to-end tests (read it up in a recent blog post). A common use case for consumer-driven contract tests is testing interfaces between services in a microservice architecture.\nThis article leads through the steps of setting up a fresh React app which calls a backend REST service using Axios. We\u0026rsquo;ll then see how to create and publish a consumer-driven contract for the REST interaction between the React consumer and the API provider and how to verify our REST client against that contract using the Jest testing framework.\nNote that this is not a tutorial on React, but rather on how to create a REST client with Axios and using Pact in combination with Jest to implement consumer-driven contracts. 
But since the create-react-app bootstrapper uses Jest as the default testing framework, this tutorial describes a way to implement CDC tests for your React app.\nThe core of this tutorial stems from an example in the pact-js repository.\n Example Code This article is accompanied by a working code example on GitHub. Creating a React App Let\u0026rsquo;s start by creating our React app. I\u0026rsquo;m assuming you have a current version of Node installed.\nFirst, we need to install the create-react-app bootstrapper, then we can use it to create a project template containing a minimal React app for us:\nnpm install -g create-react-app create-react-app pact-consumer You can choose whatever name you wish for your project instead of pact-consumer.\nAdding Dependencies At this stage, our package.json already declares some dependencies on React libraries.\nHowever, for the rest of this tutorial to work, we need additional dependencies:\n{ \u0026#34;dependencies\u0026#34;: { \u0026#34;axios\u0026#34;: \u0026#34;0.18.0\u0026#34; }, \u0026#34;devDependencies\u0026#34;: { \u0026#34;@pact-foundation/pact\u0026#34;: \u0026#34;7.0.1\u0026#34;, \u0026#34;@pact-foundation/pact-node\u0026#34;: \u0026#34;6.20.0\u0026#34;, \u0026#34;cross-env\u0026#34;: \u0026#34;^5.2.0\u0026#34; } }  axios provides REST client capabilities pact-node provides the pact mock server that receives and checks our REST client\u0026rsquo;s requests during the contract test pact is an easier-to-use wrapper around pact-node cross-env allows us to create npm command lines that are independent of the operating system  Don\u0026rsquo;t forget to run npm install after changing the contents of package.json.\nThe details on how to use the libraries follow in the sections below.\nSetting Up Jest We want to create a contract that defines how the REST client (consumer) and REST server (provider) interact with one another. 
The contract is created from within a unit test, so we have to set up the Jest testing framework to cooperate with Pact.\nInitializing the Pact Mock Provider Within the unit test for our REST client, we want to create a contract and verify that the REST client works as defined in the contract.\nFor the verification step, we let our REST client send a request to a local mock provider. We have to take some steps to set up this mock provider.\nThis is where the pact dependency from above comes into play. In a separate JavaScript file pact/setup.js, we configure the Pact mock provider:\n// ./pact/setup.js const path = require(\u0026#39;path\u0026#39;); const Pact = require(\u0026#39;@pact-foundation/pact\u0026#39;).Pact; global.port = 8080; global.provider = new Pact({ cors: true, port: global.port, log: path.resolve(process.cwd(), \u0026#39;logs\u0026#39;, \u0026#39;pact.log\u0026#39;), loglevel: \u0026#39;debug\u0026#39;, dir: path.resolve(process.cwd(), \u0026#39;pacts\u0026#39;), spec: 2, pactfileWriteMode: \u0026#39;update\u0026#39;, consumer: \u0026#39;hero-consumer\u0026#39;, provider: \u0026#39;hero-provider\u0026#39;, host: \u0026#39;127.0.0.1\u0026#39; }); Later, we\u0026rsquo;ll include this file in the test runs, so that it will be executed before any test is run and so that all tests can rely upon the fact that the mock provider is configured correctly.\nThe provider instance is made globally available for later access in the tests.\nWe configured some important things:\n the path where pact will put a log file (./logs/pact.log) the path where pact will put contract files (./pacts) the pactfileWriteMode (update, so that contract files are updated rather than created anew for each test) the consumer and provider names  Starting and Stopping the Pact Mock Provider Next, we have to tell Jest to start the mock provider before the tests start and to kill it after the tests are finished. 
We do this in the script pact/jest-wrapper.js:\n// ./pact/jest-wrapper.js beforeAll((done) =\u0026gt; { global.provider.setup().then(() =\u0026gt; done()); }); afterAll((done) =\u0026gt; { global.provider.finalize().then(() =\u0026gt; done()); }); The call to setup() will start up the mock provider with the configuration from above (i.e. we\u0026rsquo;ll have a real HTTP server running on localhost:8080 that behaves as defined in a certain contract).\nThe call to finalize() will trigger the mock provider to create contract files for all interactions it has received during the test run in the pacts folder.\nWe\u0026rsquo;ll include this script in the Jest config in the next step.\nCreating an NPM Task to Run the Pact Tests Now that we have created two scripts to tell Jest what to do, we have to make those scripts known to Jest.\nWe do this by creating a new NPM script command test:pact in package.json that executes our pact tests:\n// package.json { \u0026#34;scripts\u0026#34;: { \u0026#34;test:pact\u0026#34;: \u0026#34;cross-env CI=true react-scripts test --runInBand --setupFiles ./pact/setup.js --setupTestFrameworkScriptFile ./pact/jest-wrapper.js --testMatch \\\u0026#34;**/*.test.pact.js\\\u0026#34;\u0026#34; } } Note that the line breaks above actually make the JSON invalid and have only been added for better readability.\nThe options in detail:\n With cross-env CI=true, we tell Jest to run in CI mode, meaning the tests should only run once and not in watch mode (this is optional, but I had some problems with zombie processes in watch mode). --runInBand tells Jest to run the tests sequentially instead of in parallel. This is necessary for the Pact provider to be properly started and stopped. With --setupFiles, we make sure that Jest executes our setup.js from above before every test run. Similarly, with --setupTestFrameworkScriptFile, we make sure that Jest calls the beforeAll() and afterAll() functions from jest-wrapper.js before and after all tests. 
With --testMatch, we tell Jest to only execute tests that end with test.pact.js.  Now, we can run the tests:\nnpm run test:pact This will only execute the pact tests. It\u0026rsquo;s a good idea to run pact tests separately from other unit tests, since they have some special needs, as we can see in all the configuration above.\nThe Hero REST Client Up until now, it was all configuration. Let\u0026rsquo;s implement a REST client for which we\u0026rsquo;ll later create a consumer-driven contract.\nFor the sake of simplicity, the REST client only has a single operation which allows us to store a Hero resource on the server.\nThe Hero Class A hero resource has an id, a name, a superpower and it belongs to a certain universe (e.g. \u0026ldquo;DC\u0026rdquo; or \u0026ldquo;Marvel\u0026rdquo;):\n// hero.js class Hero { constructor(name, superpower, universe, id) { this.name = name; this.superpower = superpower; this.universe = universe; this.id = id; } } export default Hero; Strictly, we don\u0026rsquo;t need to declare a class for our hero objects, since we can just use plain JSON objects instead. 
However, having a Java background, I couldn\u0026rsquo;t resist the urge to fake type safety ;).\nThe Hero REST Client Service Our REST client simply provides a method to POST a hero object to the provider:\n// hero.service.js import Hero from \u0026#34;./hero\u0026#34;; const axios = require(\u0026#39;axios\u0026#39;); import adapter from \u0026#39;axios/lib/adapters/http\u0026#39;; class HeroService { constructor(baseUrl, port){ this.baseUrl = baseUrl; this.port = port; } createHero(hero) { return axios.request({ method: \u0026#39;POST\u0026#39;, url: `/heroes`, baseURL: `${this.baseUrl}:${this.port}`, headers: { \u0026#39;Accept\u0026#39;: \u0026#39;application/json; charset=utf-8\u0026#39;, \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json; charset=utf-8\u0026#39; }, data: hero, adapter }); }; } export default HeroService; We use Axios to submit a POST request to a certain URL and port containing a hero object as payload.\nNote that we pass in the Axios http adapter (via the adapter config property) to make sure that real HTTP requests are sent to the mock provider, even though Jest runs the tests in a browser-like environment.\nWe can use this service in our React components now and it should work.\nImplementing a Contract Test Next, let\u0026rsquo;s create a test for our REST client.\nWithin this test, we want to:\n define the contract between the REST client and the REST provider verify that our REST client works as defined in the contract.  The Test Template The test structure will look like this:\n// hero.service.test.pact.js import HeroService from \u0026#39;./hero.service\u0026#39;; import * as Pact from \u0026#39;@pact-foundation/pact\u0026#39;; import Hero from \u0026#39;./hero\u0026#39;; describe(\u0026#39;HeroService API\u0026#39;, () =\u0026gt; { const heroService = new HeroService(\u0026#39;http://localhost\u0026#39;, global.port); describe(\u0026#39;createHero()\u0026#39;, () =\u0026gt; { beforeEach((done) =\u0026gt; { // ...  
}); it(\u0026#39;sends a request according to contract\u0026#39;, (done) =\u0026gt; { // ...  }); }); }); We see the usual describe() and it() functions popular in JavaScript testing frameworks.\nAlso, we create an instance of our HeroService REST client and tell it to please send its requests to localhost:8080.\nWe\u0026rsquo;ll fill the beforeEach() and it() functions next.\nDefining the Contract Within beforeEach(), we\u0026rsquo;ll define our contract and make it known to the pact mock provider:\nbeforeEach((done) =\u0026gt; { const contentTypeJsonMatcher = Pact.Matchers.term({ matcher: \u0026#34;application\\\\/json; *charset=utf-8\u0026#34;, generate: \u0026#34;application/json; charset=utf-8\u0026#34; }); global.provider.addInteraction({ state: \u0026#39;provider allows hero creation\u0026#39;, uponReceiving: \u0026#39;a POST request to create a hero\u0026#39;, withRequest: { method: \u0026#39;POST\u0026#39;, path: \u0026#39;/heroes\u0026#39;, headers: { \u0026#39;Accept\u0026#39;: \u0026#39;application/json\u0026#39;, \u0026#39;Content-Type\u0026#39;: contentTypeJsonMatcher }, body: new Hero(\u0026#39;Superman\u0026#39;, \u0026#39;flying\u0026#39;, \u0026#39;DC\u0026#39;) }, willRespondWith: { status: 201, headers: { \u0026#39;Content-Type\u0026#39;: contentTypeJsonMatcher }, body: Pact.Matchers.somethingLike( new Hero(\u0026#39;Superman\u0026#39;, \u0026#39;flying\u0026#39;, \u0026#39;DC\u0026#39;, 42)) } }).then(() =\u0026gt; done()); }); A request / response pair is called an \u0026ldquo;interaction\u0026rdquo; in Pact lingo.\nBy calling provider.addInteraction(), we pass such a request / response pair to the mock provider.
If the mock provider afterwards receives a request that matches the request of that pair, it will respond with the response paired with that request.\nAlso, when calling provider.verify() (as we\u0026rsquo;ll do later), the provider will check if all requests that have been passed into addInteraction() earlier have been received and will fail if any are missing.\nThe JSON structure of an interaction is pretty self-explanatory. For a list of all options refer to the dsl implementation.\nNote, however, that we\u0026rsquo;re not expecting the response body to match exactly.\nInstead, we\u0026rsquo;re expecting the response body to contain a JSON object that looks like a Hero object by using Pact.Matchers.somethingLike(). This matcher will check that the body contains all fields of a hero and that each field has the correct type.\nWe\u0026rsquo;re using another matcher on the content type. This is a simple regex matcher that ignores the white space in application/json; charset=utf-8. This is necessary for the test to work with some servers that seem to forget this whitespace.\nThe matchers decouple our contract from the provider test because the provider does not have to return the exact object specified in the contract. In turn, this will make our tests much more stable through changes that might happen over time.\nVerifying the REST Client All we have left to do is to verify that our REST client works as the contract expects it to. 
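To build some intuition for what somethingLike() checks, here is a simplified, plain-JavaScript illustration of type-based matching. This is not Pact's implementation (pact-js performs the real matching internally); it only demonstrates the idea that fields are compared by presence and type, not by value:

```javascript
// Illustration only: a simplified sketch of the type-based matching idea
// behind Pact.Matchers.somethingLike(). Not taken from the pact-js codebase.
function matchesByType(expected, actual) {
  if (expected === null || typeof expected !== 'object') {
    // primitive leaf: only the type has to match, not the value
    return typeof actual === typeof expected;
  }
  if (actual === null || typeof actual !== 'object') {
    return false;
  }
  // every field of the expected object must be present in the actual
  // object and match recursively by type
  return Object.keys(expected).every(key =>
    key in actual && matchesByType(expected[key], actual[key]));
}

const expected = { id: 42, name: 'Superman', superpower: 'flying', universe: 'DC' };

// a response with different values but the same shape matches ...
console.log(matchesByType(expected,
  { id: 7, name: 'Batman', superpower: 'money', universe: 'DC' })); // true
// ... while a missing field or a wrongly typed field does not
console.log(matchesByType(expected,
  { id: 'not-a-number', name: 'Batman' })); // false
```

This is exactly why the matchers make the contract stable: the provider may return different example data over time, as long as the shape and types stay the same.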
We do this in the actual test method it():\nit(\u0026#39;sends a request according to contract\u0026#39;, (done) =\u0026gt; { heroService.createHero(new Hero(\u0026#39;Superman\u0026#39;, \u0026#39;flying\u0026#39;, \u0026#39;DC\u0026#39;)) .then(response =\u0026gt; { const hero = response.data; expect(hero.id).toEqual(42); }) .then(() =\u0026gt; { global.provider.verify() .then(() =\u0026gt; done(), error =\u0026gt; { done.fail(error) }) }); }); Here, we simply call our HeroService and pass it the Hero object we want to send to the server.\nSince the HeroService is configured to send the requests against the mock provider on localhost:8080, the mock provider will receive it and check if any previously registered interaction matches to this request.\nIf the mock provider finds a match, it returns the associated response. If not, it will return a HTTP 500 error and the test will fail.\nBy calling provider.verify() we also make sure that the test fails if the HeroService doesn\u0026rsquo;t send any request at all or a request that did not match any of the registered interactions.\nWe can now run our test with npm run test:pact and it should be green. Also, it should have created a contract file in the pacts folder that can be published so that the provider can test against it, too.\nImproving Contract Quality with Validation Once the test we created above is green, we have successfully proved that our HeroService sends valid Hero objects to the provider.\nHave we really?\nIf we give the createHero() method a closer look, we\u0026rsquo;ll see that it simply passes on the hero parameter it gets from outside:\n// hero.service.js class HeroService { createHero(hero) { return axios.request({ data: hero // ...  } ); }; } What happens if some client code passes an invalid hero object into the createHero() method? 
The REST provider will most certainly interpret it as a bad request and return HTTP error status 400.\nAlso, what if we have forgotten to add the Hero attribute capeColor into our contract but we\u0026rsquo;re happily using it in our consumer code base? The REST provider will certainly not include this attribute in its responses since it\u0026rsquo;s not part of the contract, which may lead to errors in the client application.\nThe test is green, but in production anything can still go wrong!\nThis is a problem we can solve by adding some validation logic to our HeroService:\nclass HeroService { createHero(hero) { this._validateHeroForCreation(hero); return axios.request({ // ...  }).then((response) =\u0026gt; { const hero = response.data; return new Promise((resolve, reject) =\u0026gt; { try { this._validateIncomingHero(hero); resolve(hero); } catch (error) { reject(error); } }); }); }; } Now, before we even submit the request, we pass the incoming hero object into _validateHeroForCreation() where it will be validated for the use case of creating a hero. Within this method we can include whatever validation logic we deem necessary and throw an error if the object is invalid.\nThis forces the client code using HeroService to send valid objects.\nOn the response side, we do the same by passing the response data into _validateIncomingHero() to validate the response object before returning it to the client code wrapped into a Promise.\nThis ensures that the test is red if the response we get from the mock provider during the test returns an object that does not satisfy our validations. 
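The bodies of the validation methods are not shown above. As a minimal sketch, the creation-side check might look like the following standalone function; the concrete rules (which fields are required, and that a hero to be created must not carry an id yet) are assumptions for illustration, not taken from the article:

```javascript
// Hypothetical sketch of a creation-side validation helper. The rules
// below (required fields; no id before creation, since the provider
// assigns it) are assumptions chosen for this example.
class HeroValidationError extends Error {}

function validateHeroForCreation(hero) {
  if (!hero) {
    throw new HeroValidationError('hero must not be null or undefined');
  }
  ['name', 'superpower', 'universe'].forEach(field => {
    if (!hero[field]) {
      throw new HeroValidationError(`hero.${field} is required`);
    }
  });
  if (hero.id !== undefined && hero.id !== null) {
    throw new HeroValidationError('hero.id must not be set on creation');
  }
}

validateHeroForCreation({ name: 'Superman', superpower: 'flying', universe: 'DC' }); // passes
```

In the service class this logic would live as the _validateHeroForCreation(hero) method; _validateIncomingHero() could apply the same field checks but additionally require that the response carries an id.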
In turn, this ensures that the contract is specified according to our validation rules and that the real REST provider will return valid objects, too, since it\u0026rsquo;s going to be verified against the contract.\nAdding validation to a provider-facing service class is not only good software design, but also plainly and simply necessary for creating high-quality contracts that help our software to behave as it\u0026rsquo;s expected to.\nDebugging As with a lot of other tests, it can be time-consuming to search for the cause of a test failure with Pact. Here are some hints that help along the way.\nIf a pact test fails, have a look at the log file Pact creates (logs/pact.log in the configuration used above).\nAlso, those async promises can be a pain. Make sure to call the done function at the proper places, otherwise this may lead to errors which are very hard to isolate.\nFinally, I sometimes had problems with zombie Node and Ruby processes on my Windows machine and had to kill them manually (the Ruby process is the Pact mock provider).\nPublishing the Contract Now that we have successfully created a contract and verified our consumer against it, we need to publish the contract so that the provider can do its own verification.\nFor this, Pact provides the Pact Broker, which is a web application that serves as a registry for pacts.\nTo publish the pact we created from the test above, we use yet another script:\n// ./pact/publish.js let publisher = require(\u0026#39;@pact-foundation/pact-node\u0026#39;); let path = require(\u0026#39;path\u0026#39;); let opts = { providerBaseUrl: \u0026#39;http://localhost:8080\u0026#39;, pactFilesOrDirs: [path.resolve(process.cwd(), \u0026#39;pacts\u0026#39;)], pactBroker: \u0026#39;https://adesso.pact.dius.com.au/\u0026#39;, pactBrokerUsername: process.env.PACT_USERNAME, pactBrokerPassword: process.env.PACT_PASSWORD, consumerVersion: \u0026#39;2.0.0\u0026#39; }; publisher.publishPacts(opts).then(() =\u0026gt;
console.log(\u0026#34;Pacts successfully published\u0026#34;)); Also, we add this script to our package.json:\n{ \u0026#34;scripts\u0026#34;: { \u0026#34;publish:pact\u0026#34;: \u0026#34;node pact/publish.js\u0026#34; } } After setting the environment variables PACT_USERNAME and PACT_PASSWORD (how to do this depends on your operating system), we can publish the pact with this command:\nnpm run publish:pact This task can be nicely integrated in a CI build so that the pact files on the broker always represent the current state of the consumer.\nConclusion In this tutorial, we used Pact to create a contract from within a Jest unit test and Axios to create a REST client that is tested against this contract.\nSince Jest is the default test framework for React apps (at least if you use the create-react-app bootstrapper), the described setup is well suited to implementing CDC for a REST-consuming React app.\nThe generated contract can now be used to create a REST provider, for example with Spring Boot or Node.\nThe code for this tutorial can be found on GitHub.\n","date":"October 25, 2018","image":"https://reflectoring.io/images/stock/0025-signature-1200x628-branded_hu40d5255a109b1d14ac3f4eab2daeb887_126452_650x0_resize_q90_box.jpg","permalink":"/pact-react-consumer/","title":"Implementing a Consumer-Driven Contract for a React App with Pact and Jest"},{"categories":["Java"],"contents":"A NoSuchMethodError occurs when we\u0026rsquo;re calling a method that does not exist at runtime.\nThe method must have existed at compile time, since otherwise the compiler would have refused to compile the class calling that method with an error: cannot find symbol.\nCommon Causes and Solutions Let\u0026rsquo;s discuss some common situations that cause a NoSuchMethodError.\nBreaking Change in a 3rd Party Library A potential root cause for a NoSuchMethodError is that one of the libraries we use in our project had a breaking change from one version to the next.
This breaking change removed a method from the code of that library.\nHowever, since our own code calling the method in question has been successfully compiled, the classpath must be different during compile time and runtime.\nAt compile time we use the correct version of the library while at runtime we somehow included a different version that does not provide the method in question. This indicates a problem in our build process.\nOverriding a 3rd Party Library Version Imagine we\u0026rsquo;re using a 3rd party library (A) as described above, but we\u0026rsquo;re not calling it directly. Rather, it\u0026rsquo;s a dependency of another 3rd party library (B) that we use (i.e. A is a transitive dependency of our project).\nIn this case, which is the most common cause for NoSuchMethodErrors in my experience, we probably have a version conflict in our build system. There probably is a third library (C) which also has a dependency on A, but on a different version.\nBuild systems like Gradle and Maven usually resolve a version conflict like this by simply choosing one of the versions, opening the door for a NoSuchMethodError.\nBreaking Change in Our Own Module The same can happen in multi-module builds, though this is less common. We have removed a certain method from the code in one module (A) and during runtime the code of another module (B) fails with a NoSuchMethodError.\nThis indicates an error in our build pipeline since module B obviously has not been compiled against the new version of module A.\nFixing a NoSuchMethodError There are a lot of different flavors of NoSuchMethodErrors, but they all boil down to the fact that the compile time classpath differs from the runtime classpath.\nThe following steps will help to pinpoint the problem:\nStep 1: Find Out Where the Class Comes From First, we need to find out where the class containing the method in question comes from.
We find this information in the error message of the NoSuchMethodError:\nException in thread \u0026#34;main\u0026#34; java.lang.NoSuchMethodError: io.reflectoring.nosuchmethod.Service.sayHello(Ljava/lang/String;)Ljava/lang/String; Now, we can search the web or within the IDE to find out which JAR file contains this class. In the case above, we can see that it\u0026rsquo;s the Service class from our own codebase and not a class from another library.\nIf we have trouble finding the JAR file of the class, we can add the Java option -verbose:class when running our application. This will cause Java to print out all classes and the JARs they have been loaded from:\n[Loaded io.reflectoring.nosuchmethod.Service from file: /C:/daten/workspaces/code-examples2/patterns/build/libs/java-1.0.jar] Step 2: Find Out Who Calls the Class Next, we want to find out where the method is being called. This information is available in the first element of the stack trace:\nException in thread \u0026#34;main\u0026#34; java.lang.NoSuchMethodError: io.reflectoring.nosuchmethod.Service.sayHello(Ljava/lang/String;)Ljava/lang/String; at io.reflectoring.nosuchmethod.ProvokeNoSuchMethodError.main(ProvokeNoSuchMethodError.java:7) Here, the class ProvokeNoSuchMethodError tries to call a method that does not exist at runtime.
We should now find out which library this file belongs to.\nStep 3: Check the Versions Now that we know where the NoSuchMethodError is provoked and what method is missing, we can act.\nWe should now list all of our project dependencies.\nIn Gradle, we can call:\n./gradlew dependencies \u0026gt; dependencies.txt If we\u0026rsquo;re using Maven, a similar result can be achieved with:\nmvn dependency:list \u0026gt; dependencies.txt In this file, we can search for the libraries that contain the class with the missing method and the class that tries to call this method.\nUsually we\u0026rsquo;ll find an output like this somewhere:\n\\--- org.springframework.retry:spring-retry:1.2.2.RELEASE | \\--- org.springframework:spring-core:4.3.13.RELEASE -\u0026gt; 5.0.8.RELEASE The above means that the spring-retry library depends on spring-core in version 4.3.13, but some other library also depends on spring-core in version 5.0.8 and overrules the dependency version.\nWe can now search our dependencies.txt file for 5.0.8.RELEASE to find out which library introduces the dependency to this version.\nFinally, we need to decide which of the two versions we actually need to satisfy both dependencies. Usually, this is the newer version since most frameworks are backwards compatible to some point. However, it can be the other way around or we might even not be able to resolve the conflict at all.\nAnd What About NoSuchMethodException? NoSuchMethodException is related to NoSuchMethodError, but occurs in another context.
While a NoSuchMethodError occurs when some JAR file has a different version at runtime than it had at compile time, a NoSuchMethodException occurs during reflection when we try to access a method that does not exist.\nThis can be easily provoked with the following code:\nString.class.getMethod(\u0026#34;foobar\u0026#34;); Here, we\u0026rsquo;re trying to access the method foobar() of class String, which does not exist.\nThe steps to find the cause of the exception and to fix it are pretty much the same as those for the NoSuchMethodError.\nConclusion This article went through some common causes of NoSuchMethodErrors and NoSuchMethodExceptions and walked through some steps that can help to fix them.\nWe need to find out where the error is caused and who causes it before we can compare versions and try to fix the problem.\n","date":"October 8, 2018","image":"https://reflectoring.io/images/stock/0011-exception-1200x628-branded_hu5c84ec643e645bced334d00cceee0833_119970_650x0_resize_q90_box.jpg","permalink":"/nosuchmethod/","title":"3 Steps to Fix NoSuchMethodErrors and NoSuchMethodExceptions"},{"categories":["Java"],"contents":"As discussed in my article about 100% Code Coverage*, a code coverage tool should provide the means not only to measure code coverage, but also to enforce it. This tutorial shows how to measure and enforce code coverage with JaCoCo and its Gradle plugin, but the concepts are also valid for the JaCoCo Maven plugin.\n Example Code This article is accompanied by a working code example on GitHub. Why JaCoCo? JaCoCo is currently the most actively maintained and sophisticated code coverage measurement tool for the Java ecosystem.\nThere\u0026rsquo;s also Cobertura, but at the time of this writing, the latest commit is from 10 months ago and the build pipeline is failing \u0026hellip; signs that the project is not actively maintained.\nHow Does It Work? JaCoCo measures code coverage by instrumenting the Java bytecode on-the-fly using a Java Agent.
This means that it modifies the class files to create hooks that count if a certain line of code or a certain branch has been executed during a test run.\nJaCoCo can be used standalone or integrated within a build tool. In this tutorial, we\u0026rsquo;re using JaCoCo from within a Gradle build.\nBasic Gradle Setup The basic setup is very straightforward. We simply have to apply the jacoco plugin within our build.gradle:\napply plugin: \u0026#39;jacoco\u0026#39; In this tutorial, we\u0026rsquo;re using JUnit 5 as our testing framework. With the current Gradle version, we still have to tell Gradle to use the new JUnit Platform for running tests:\ntest { useJUnitPlatform() } Creating a Binary Coverage Report Let\u0026rsquo;s run our Gradle build:\n./gradlew build\nJaCoCo now automatically creates a file build/jacoco/test.exec which contains the coverage statistics in binary form.\nThe destination for this file can be configured in the jacocoTestReports closure in build.gradle which is documented on the JaCoCo Gradle Plugin site.\nCreating an HTML Coverage Report Since the binary report is not readable for us, let\u0026rsquo;s create an HTML report:\n./gradlew build jacocoTestReport When calling the jacocoTestReport task, JaCoCo by default reads the binary report, transforms it into a human-readable HTML version, and puts the result into build/reports/jacoco/test/html/index.html.\nNote that the jacocoTestReport task simply does nothing when the test.exec file does not exist. So, we should always run the build or test task first.\nThe following log output is an indicator that we forgot to run the build or test task:\n\u0026gt; Task :tools:jacoco:jacocoTestReport SKIPPED We can let this task run automatically with every build by adding it as a finalizer for the test task in build.gradle:\ntest.finalizedBy jacocoTestReport Why put jacocoTestReport after test? The test report should be generated as soon as the tests have completed.
If we generate the report at a later time, for instance by using build.finalizedBy jacocoTestReport, other steps may fail in the meantime, stopping the build without having created the report. Thanks to Alexander Burchak for pointing this out in the comments.  Enforcing Code Coverage The JaCoCo Gradle Plugin allows us to define rules to enforce code coverage. If any of the defined rules fails, the verification will fail.\nWe can execute the verification by calling:\n./gradlew build jacocoTestCoverageVerification Note that by default, this task is not called by ./gradlew check. To include it, we can add the following to our build.gradle:\ncheck.dependsOn jacocoTestCoverageVerification Let\u0026rsquo;s look at how to define verification rules.\nGlobal Coverage Rule The following configuration will enforce that 100% of the lines are executed during tests:\njacocoTestCoverageVerification { violationRules { rule { limit { counter = \u0026#39;LINE\u0026#39; value = \u0026#39;COVEREDRATIO\u0026#39; minimum = 1.0 } } } } Instead of enforcing line coverage, we can also count other entities and hold them against our coverage threshold:\n LINE: counts the number of lines BRANCH: counts the number of execution branches CLASS: counts the number of classes INSTRUCTION: counts the number of code instructions METHOD: counts the number of methods  Also, we can measure these other metrics, aside from the covered ratio:\n COVEREDRATIO: ratio of covered items to total items (i.e.
percentage of total items that are covered) COVEREDCOUNT: absolute number of covered items MISSEDCOUNT: absolute number of items not covered MISSEDRATIO: ratio of items not covered TOTALCOUNT: total number of items  Excluding Classes and Methods Instead of defining a rule for the whole codebase, we can also define a local rule for just some classes.\nThe following rule enforces 100% line coverage on all classes except the excluded ones:\njacocoTestCoverageVerification { violationRules { rule { element = \u0026#39;CLASS\u0026#39; limit { counter = \u0026#39;LINE\u0026#39; value = \u0026#39;COVEREDRATIO\u0026#39; minimum = 1.0 } excludes = [ \u0026#39;io.reflectoring.coverage.part.PartlyCovered\u0026#39;, \u0026#39;io.reflectoring.coverage.ignored.*\u0026#39;, \u0026#39;io.reflectoring.coverage.part.NotCovered\u0026#39; ] } } } Excludes can either be defined on CLASS level like above, or on METHOD level.\nIf you want to exclude methods, you have to use their fully qualified signature in the excludes like this:\nio.reflectoring.coverage.part.PartlyCovered.partlyCovered(java.lang.String, boolean) Combining Rules We can combine a global rule with more specific rules:\nviolationRules { rule { element = \u0026#39;CLASS\u0026#39; limit { counter = \u0026#39;LINE\u0026#39; value = \u0026#39;COVEREDRATIO\u0026#39; minimum = 1.0 } excludes = [ \u0026#39;io.reflectoring.coverage.part.PartlyCovered\u0026#39;, \u0026#39;io.reflectoring.coverage.ignored.*\u0026#39;, \u0026#39;io.reflectoring.coverage.part.NotCovered\u0026#39; ] } rule { element = \u0026#39;CLASS\u0026#39; includes = [ \u0026#39;io.reflectoring.coverage.part.PartlyCovered\u0026#39; ] limit { counter = \u0026#39;LINE\u0026#39; value = \u0026#39;COVEREDRATIO\u0026#39; minimum = 0.8 } } } The above enforces 100% line coverage except for a few classes and redefines the minimum coverage for the class io.reflectoring.coverage.part.PartlyCovered to 80%.\nNote that if we want to define a lower threshold than the global 
threshold for a certain class, we have to exclude it from the global rule as we did above! Otherwise the global rule will fail if that class does not reach 100% coverage.\nExcluding Classes from the HTML Report The HTML report we created above still contains all classes, even though we have excluded some methods from our coverage rules. We might want to exclude the same classes and methods from the report that we have excluded from our rules.\nHere\u0026rsquo;s how we can exclude certain classes from the report:\njacocoTestReport { afterEvaluate { classDirectories = files(classDirectories.files.collect { fileTree(dir: it, exclude: [ \u0026#39;io/reflectoring/coverage/ignored/**\u0026#39;, \u0026#39;io/reflectoring/coverage/part/**\u0026#39; ]) }) } } However, this is a workaround at best. We\u0026rsquo;re excluding some classes from the classpath of the JaCoCo plugin so that these classes will not be instrumented at all. Also, we can only exclude classes and not methods.\nUsing a @Generated annotation as described in the next section is a much better solution.\nExcluding Classes and Methods From Rules and Reports If we want to exclude certain classes and methods completely from JaCoCo\u0026rsquo;s coverage inspection (i.e. from the rules and the coverage report), there is an easy method using a @Generated annotation.\nAs of version 0.8.2 JaCoCo completely ignores classes and methods annotated with @Generated. We can just create an annotation called Generated and add it to all the methods and classes we want to exclude.
They will be excluded from the report as well as from the rules we define.\nAt the time of this writing, the JaCoCo Gradle plugin still uses version 0.8.1, so I had to tell it to use the new version in order to make this feature work:\njacoco { toolVersion = \u0026#34;0.8.2\u0026#34; } Excluding Code Generated By Lombok A lot of projects use Lombok to get rid of a lot of boilerplate code like getters, setters, or builders.\nLombok reads certain annotations like @Data and @Builder and generates boilerplate methods based on them. This means that the generated code will show up in JaCoCo\u0026rsquo;s coverage reports and will be evaluated in the rules we defined.\nLuckily, JaCoCo honors Lombok\u0026rsquo;s @Generated annotation by ignoring methods annotated with it. We simply have to tell Lombok to add this annotation by creating a file lombok.config in the main folder of our project with the following content:\nlombok.addLombokGeneratedAnnotation = true Missing Features In my article about 100% Code Coverage I propose to always enforce 100% code coverage while excluding certain classes and methods that don\u0026rsquo;t need tests. To exclude those classes and methods from both the rules and the report, the easiest way would be to annotate them with @Generated.\nHowever, this can be a dangerous game. 
If someone just annotates everything with @Generated, we have 100% enforced code coverage but not a single line of code is actually covered!\nThus, I would very much like to create a coverage report that does not honor the @Generated annotation in order to know the real code coverage.\nAlso, I would like to be able to use a custom annotation with a different name than @Generated to exclude classes and methods, because our code is not really generated.\nConclusion This tutorial has shown the main features of the JaCoCo Gradle Plugin, allowing us to measure and enforce code coverage.\nYou can have a look at the example code in my GitHub repository.\n","date":"October 5, 2018","image":"https://reflectoring.io/images/stock/0027-cover-1200x628-branded_hud0e8018d4bb3bffe77108325dc949a45_281256_650x0_resize_q90_box.jpg","permalink":"/jacoco/","title":"Definitive Guide to the JaCoCo Gradle Plugin"},{"categories":["Software Craft"],"contents":"In the recent past, I stumbled a few times over the definition of the words \u0026ldquo;upstream\u0026rdquo; and \u0026ldquo;downstream\u0026rdquo; in various software development contexts. Each time, I had to look up what it meant.
Reason enough to write about it to make it stick.\nUpstream and Downstream in a Production Process Let\u0026rsquo;s start with a simple production process, even though it has nothing to do with software development, so we can build on that to define upstream and downstream in software development.\nIn the above example, we have three steps:\n collecting parts assembling the parts painting the assembly  A production process is very similar to a river, so it\u0026rsquo;s easy to grasp that as the process goes from one step to the next, we\u0026rsquo;re moving downstream.\nWe can deduce the following rules:\n Dependency Rule: each item depends on all the items upstream from its viewpoint Value Rule: moving downstream, each step adds more value to the product  Now, let\u0026rsquo;s try to apply these rules to different software development contexts.\nUpstream and Downstream Software Dependencies Most software components have dependencies on other components. So what\u0026rsquo;s an upstream dependency and a downstream dependency?\nConsider this figure:\nComponent C depends on component B which in turn depends on component A.\nApplying the Dependency Rule, we can safely say that component A is upstream from component B which is upstream from component C (even though the arrows point in the other direction).\nApplying the Value Rule here is a little more abstract, but we can say that component C holds the most value since it \u0026ldquo;imports\u0026rdquo; all the features of components B and A and adds its own value to those features, making it the downstream component.\nUpstream and Downstream Open Source Projects Another context where the words \u0026ldquo;upstream\u0026rdquo; and \u0026ldquo;downstream\u0026rdquo; are used a lot is in open source development.
It\u0026rsquo;s actually very similar to the component dependencies discussed above.\nConsider the projects A and B, where A is an original project and B is a fork of A:\nThis is a rather common development style in open source projects: we create a fork of a project, fix a bug or add a feature in that fork and then submit a patch to the original project.\nIn this context, the Dependency Rule makes project A the upstream project since it can very well live without project B but project B (the fork) wouldn\u0026rsquo;t even exist without project A (the original project).\nThe Value Rule applies as well: since project B adds a new feature or bugfix, it has added value to the original project A.\nSo, each time we contribute a patch to an open source project we can say that we have sent a patch upstream.\nUpstream and Downstream (Micro-)Services In systems consisting of microservices (or just plain old distributed services for the old-fashioned), there\u0026rsquo;s also talk about upstream and downstream services.\nUnsurprisingly, both the Dependency Rules and the Value Rule also apply to this context.\nService B is the upstream service since service A depends on it. 
And service A is the downstream service since it adds to the value of service B.\nNote that the \u0026ldquo;stream\u0026rdquo; defining what is upstream and what is downstream in this case is not the stream of data coming into the system through service A but rather the stream of data from the heart of the system down to the user-facing services.\nThe closer a service is to the user (or any other end-consumer), the farther downstream it is.\nConclusion In any context where the concept of \u0026ldquo;upstream\u0026rdquo; and \u0026ldquo;downstream\u0026rdquo; is used, we can apply two simple rules to find out which item is upstream or downstream of another.\nIf an item adds value to another or depends on it in any other way, it\u0026rsquo;s most certainly downstream.\n","date":"September 27, 2018","image":"https://reflectoring.io/images/stock/0028-stream-1200x628-branded_hu11001180a5e52edcf84edd732cef6975_235946_650x0_resize_q90_box.jpg","permalink":"/upstream-downstream/","title":"What is Upstream and Downstream in Software Development?"},{"categories":["Spring Boot"],"contents":"Among other things, testing an interface between two systems with\n(consumer-driven) contract tests is faster and more stable than doing so with end-to-end tests. This tutorial shows how to create a contract between a message producer and a message consumer using the Pact framework and how to test the producer and consumer against this contract.\nThe Scenario As an example to work with, let\u0026rsquo;s say we have a user service that sends a message to a message broker each time a new user has been created. 
The message contains a UUID and a user object.\nIn Java code, the message looks like this:\n@Data public class UserCreatedMessage { @NotNull private String messageUuid; @NotNull private User user; } @Data public class User { @NotNull private long id; @NotNull private String name; } In order to reduce boilerplate code, we use Lombok\u0026rsquo;s @Data annotation to automatically generate getters and setters for us.\nJava objects of type UserCreatedMessage are mapped into JSON strings before we send them to the message broker. We use Jackson\u0026rsquo;s ObjectMapper to do the mapping from Java objects to JSON strings and back, since it\u0026rsquo;s included in Spring Boot projects by default.\nNote the @NotNull annotations on the fields. They are part of the standard Java Bean Validation annotations we\u0026rsquo;ll be using to validate message objects later on.\nConsumer and Producer Architecture Before diving into the consumer and producer tests, let\u0026rsquo;s have a look at the architecture. Having a clean architecture is important since we don\u0026rsquo;t want to test the whole conglomerate of classes, but only those classes that are responsible for consuming and producing messages.\nThe figure below shows the data flow through our consumer and provider code base.\n In the domain logic on the producer side, something happens that triggers a message. The message is passed as a Java object to the MessageProducer class which transforms it into a JSON string. The JSON string is passed on to the MessagePublisher, whose single responsibility is to send it to the message broker. On the consumer side, the MessageListener class receives the message as a string from the broker. The string message is passed to the MessageConsumer, which transforms it back into a Java object. The Java object is passed into the domain logic on the consumer side to be processed.  
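To make the single-responsibility split in the steps above concrete, here is a minimal, dependency-free sketch of the producer-side seam. The names MessageProducer and MessagePublisher follow the article; everything else is an assumption for illustration, in particular the hand-rolled String.format call that stands in for Jackson's ObjectMapper and the in-memory publisher that stands in for the broker:

```java
import java.util.ArrayList;
import java.util.List;

// Single responsibility: hand a ready-made string message to the broker.
interface MessagePublisher {
    void publishMessage(String message, String routingKey);
}

// Stand-in for the real broker-backed publisher, so the sketch runs anywhere.
class InMemoryPublisher implements MessagePublisher {
    final List<String> published = new ArrayList<>();
    @Override
    public void publishMessage(String message, String routingKey) {
        published.add(routingKey + ": " + message);
    }
}

class MessageProducer {
    private final MessagePublisher publisher;
    MessageProducer(MessagePublisher publisher) { this.publisher = publisher; }

    // Single responsibility: turn domain data into a JSON string.
    // (Assumed simplification: String.format instead of Jackson's ObjectMapper.)
    void produceUserCreatedMessage(String uuid, long userId, String userName) {
        String json = String.format(
            "{\"messageUuid\":\"%s\",\"user\":{\"id\":%d,\"name\":\"%s\"}}",
            uuid, userId, userName);
        publisher.publishMessage(json, "user.created");
    }
}

public class ProducerSeamSketch {
    public static void main(String[] args) {
        InMemoryPublisher publisher = new InMemoryPublisher();
        new MessageProducer(publisher)
            .produceUserCreatedMessage("uuid-1", 42L, "Zaphod");
        System.out.println(publisher.published.get(0));
    }
}
```

Because the seam is an interface, the contract test later only has to capture what crosses it, never the broker itself.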
In the contract between consumer and producer, we want to define the structure of the exchanged JSON message. So, to verify the contract, we actually only need to check that\n MessageProducer correctly transforms Java objects into JSON strings MessageConsumer correctly transforms JSON strings into Java objects.  Since we\u0026rsquo;re testing the MessageProducer and MessageConsumer classes in isolation, we don\u0026rsquo;t care what message broker we\u0026rsquo;re using. We\u0026rsquo;re just verifying that these two classes speak the same (JSON) language and can be sure that the contract between producer and consumer is met.\nTesting the Message Consumer Since we\u0026rsquo;re doing consumer-driven contract testing, we\u0026rsquo;re starting with the consumer side. You can find the code for the consumer in my github repo.\nOur MessageConsumer class looks like this:\npublic class MessageConsumer { private ObjectMapper objectMapper; public MessageConsumer(ObjectMapper objectMapper) { this.objectMapper = objectMapper; } public void consumeStringMessage(String messageString) throws IOException { UserCreatedMessage message = objectMapper.readValue(messageString, UserCreatedMessage.class); Validator validator = Validation.buildDefaultValidatorFactory().getValidator(); Set\u0026lt;ConstraintViolation\u0026lt;UserCreatedMessage\u0026gt;\u0026gt; violations = validator.validate(message); if(!violations.isEmpty()){ throw new ConstraintViolationException(violations); } // pass message into business use case  } } It takes a string message as input, interprets it as JSON and transforms it into a UserCreatedMessage object with the help of ObjectMapper.\nTo check if all fields are valid, we use a Java Bean Validator. In our case, the validator will check if all fields are set since we used the @NotNull annotation on all fields in the message class.\nIf the validation fails, we throw an exception. 
This is important since we need some kind of signal if the incoming string message is invalid.\nIf everything looks good, we pass the message object into the business logic.\nTo test the consumer, we create a unit test similar as we would for a plain REST consumer test:\n@RunWith(SpringRunner.class) @SpringBootTest public class MessageConsumerTest { @Rule public MessagePactProviderRule mockProvider = new MessagePactProviderRule(this); private byte[] currentMessage; @Autowired private MessageConsumer messageConsumer; @Pact(provider = \u0026#34;userservice\u0026#34;, consumer = \u0026#34;userclient\u0026#34;) public MessagePact userCreatedMessagePact(MessagePactBuilder builder) { PactDslJsonBody body = new PactDslJsonBody(); body.stringType(\u0026#34;messageUuid\u0026#34;); body.object(\u0026#34;user\u0026#34;) .numberType(\u0026#34;id\u0026#34;, 42L) .stringType(\u0026#34;name\u0026#34;, \u0026#34;Zaphod Beeblebrox\u0026#34;) .closeObject(); return builder .expectsToReceive(\u0026#34;a user created message\u0026#34;) .withContent(body) .toPact(); } @Test @PactVerification(\u0026#34;userCreatedMessagePact\u0026#34;) public void verifyCreatePersonPact() throws IOException { messageConsumer.consumeStringMessage(new String(this.currentMessage)); } /** * This method is called by the Pact framework. */ public void setMessage(byte[] message) { this.currentMessage = message; } } We use @SpringBootTest so we can let Spring create a MessageConsumer and @Autowire it into our test. We could do without Spring and just create the MessageConsumer ourselves, though.\nThe MessageProviderRule takes care of starting up a mock provider that accepts a message and validates if it matches the contract.\nThe contract itself is defined in the method annotated with @Pact. 
The method annotated with @PactVerification verifies that our MessageConsumer can read the message.\nFor the verification, we simply pass the string message provided by Pact into the consumer and if there is no exception, we assume that the consumer can handle the message. This is why it\u0026rsquo;s important that the MessageConsumer class does all the JSON parsing and validation.\nTesting the Message Producer Let\u0026rsquo;s look at the producer side. You can find the producer source code in my github repo.\nThe MessageProducer class looks something like this:\nclass MessageProducer { private ObjectMapper objectMapper; private MessagePublisher messagePublisher; MessageProducer( ObjectMapper objectMapper, MessagePublisher messagePublisher) { this.objectMapper = objectMapper; this.messagePublisher = messagePublisher; } void produceUserCreatedMessage(UserCreatedMessage message) throws IOException { String stringMessage = objectMapper.writeValueAsString(message); messagePublisher.publishMessage(stringMessage, \u0026#34;user.created\u0026#34;); } } The central part is the method produceUserCreatedMessage(). 
It takes a UserCreatedMessage object, transforms it into a JSON string, and then passes that string to the MessagePublisher who will send it to the message broker.\nThe Java-to-JSON mapping is done with an ObjectMapper instance.\nThe test for the MessageProducer class looks like this:\n@RunWith(PactRunner.class) @Provider(\u0026#34;userservice\u0026#34;) @PactFolder(\u0026#34;../pact-message-consumer/target/pacts\u0026#34;) public class UserCreatedMessageProviderTest { @TestTarget public final Target target = new AmqpTarget(Collections.singletonList(\u0026#34;io.reflectoring\u0026#34;)); private MessagePublisher publisher = Mockito.mock(MessagePublisher.class); private MessageProducer messageProvider = new MessageProducer(new ObjectMapper(), publisher); @PactVerifyProvider(\u0026#34;a user created message\u0026#34;) public String verifyUserCreatedMessage() throws IOException { // given  doNothing() .when(publisher) .publishMessage(any(String.class), eq(\u0026#34;user.created\u0026#34;)); // when  UserCreatedMessage message = UserCreatedMessage.builder() .messageUuid(UUID.randomUUID().toString()) .user(User.builder() .id(42L) .name(\u0026#34;Zaphod Beeblebrox\u0026#34;) .build()) .build(); messageProvider.produceUserCreatedMessage(message); // then  ArgumentCaptor\u0026lt;String\u0026gt; messageCapture = ArgumentCaptor.forClass(String.class); verify(publisher, times(1)) .publishMessage(messageCapture.capture(), eq(\u0026#34;user.created\u0026#34;)); return messageCapture.getValue(); } } With the @PactFolder and @Provider annotation, we tell Pact to load the contracts for the provider named userservice from a certain folder. The contract must have been created earlier by the consumer.\nFor each interaction in those contracts, we need a method annotated with @PactVerifyProvider, in our case only one. 
In this method, we use Mockito to mock all dependencies of our MessageProducer away and then pass to it an object of type UserCreatedMessage.\nThe MessageProducer will dutifully transform that message object into a JSON string and pass that string to the mocked MessagePublisher. We capture the JSON string that is passed to the MessagePublisher and return it.\nPact will automatically send the produced string message to the Target field annotated with @TestTarget (in this case an instance of AmqpTarget) where it will be checked against the contract.\nClasspath Issues I couldn\u0026rsquo;t quite get the AmqpTarget class to work due to classpath issues. Hence, I created a subclass that overrides some of the reflection magic. Have a look at the code if you run into the same problem.\nConclusion Due to a clean architecture with our components having single responsibilities, we can reduce the contract test between a message producer and a message consumer to verifying that the mapping between Java objects and JSON strings works as expected.\nWe don\u0026rsquo;t have to deal with the actual or even a simulated message broker to verify that message consumer and message provider speak the same language.\n","date":"September 13, 2018","image":"https://reflectoring.io/images/stock/0029-contract-1200x628-branded_hu7a19ccad5c11568ad8f2270ae968f76d_151831_650x0_resize_q90_box.jpg","permalink":"/cdc-pact-messages/","title":"Testing a Spring Message Producer and Consumer against a Contract with Pact"},{"categories":["Software Craft"],"contents":"Most software that does more than a \u0026ldquo;hello world\u0026rdquo; needs to be configured in some way or another in order to function in a certain environment. This article explains why this configuration must not be part of the software itself, and explores some ways on how to externalize configuration parameters.\nWhat Do We Need Configuration For? 
Looking under the hood of a software project, we\u0026rsquo;ll find configuration parameters all over the place. A typical web application might need to be configured with the following parameters that may have different values for different runtime environments:\n a URL, username and password of the database to use as persistent storage a URL, username and password of the mail server to use for sending email a flag whether to disable authentication for easier testing during development the locale to use for date formats the number of seconds that web responses should be kept in the browser cache the logging level to decide which log messages to log and which not \u0026hellip;  There\u0026rsquo;s literally no end to potential configuration parameters.\nA mid-sized enterprise application might have hundreds of such configuration parameters.\nSetting one of those parameters to a wrong value may lead to startup errors of the application. Or worse, the application starts up, happily serving users, and we only notice a day later that no emails have been sent and that we\u0026rsquo;ve thus lost a lot of profit\u0026hellip;\nSo how do we handle those configuration parameters?\nThe Road to Hell: Internal Configuration Let\u0026rsquo;s say we have two runtime environments: the production environment and a development environment used for testing.\nIn the naive approach, we have a magic build process that takes our code and our configuration parameters for the production and development environments and creates a deployment artifact for each environment as shown in the figure below.\nSince the artifacts have the configuration baked into them, they must each be deployed to the specific runtime environment they are configured for.\nThe configuration parameters are inside of the deployment artifact, which is why I call this internal configuration.\nSo what\u0026rsquo;s wrong with this approach?\nFirst off, this approach doesn\u0026rsquo;t scale. 
Each time we\u0026rsquo;re changing a configuration parameter we have to re-build and re-deploy an artifact. Each time we have to wait for the build to finish before we can test the change.\nAlso, since we have to create a separate artifact for each runtime environment, we have to modify and test the build process each time we want to support a new runtime environment.\nAnother major drawback is that we\u0026rsquo;re testing one artifact in the development environment and then deploying another artifact to the production environment. Who can say what bugs are hidden in the untested production artifact?\nBasically, it all boils down to this approach being a violation of the Single Responsibility Principle. This principle says that a unit of code should have as few reasons to change as possible.\nIf we transfer this principle to our deployment artifact, we see that our deployment artifact simply has too many reasons to change. Any configuration parameter is such a reason. A change in any parameter inevitably leads to a new artifact.\nInternal configuration comes in different flavors. It may simply be a configuration file within the deployment artifact.\nEven more evil is a build process that changes compiled code (or even worse: source code) during the build, depending on the target environment.\nA clear indicator for internal configuration is when the build process takes a parameter that specifies a certain runtime environment.\nExternal Configuration to the Rescue We can do better and gain a lot of flexibility by externalizing our configuration as depicted below.\nOur build process no longer needs to know about the runtime environments, since we\u0026rsquo;re deploying the same artifact in all environments.\nWithin each environment lives a configuration that is valid for this environment only. 
This configuration is passed into the application at startup.\nThis approach negates all drawbacks of internal configuration discussed above.\nOnce we have tested the artifact in the development environment, we know that it will work in the production environment since we\u0026rsquo;re deploying the same artifact.\nAlso, we don\u0026rsquo;t have to change the build process anymore when we want to support a new environment.\nWe have successfully reduced the responsibilities of our deployment artifact since it doesn\u0026rsquo;t need to change for each and every configuration parameter anymore.\nLet\u0026rsquo;s dive into a couple of ways we can externalize our configuration parameters.\nFixed-Location Configuration Files The easiest way to migrate from an internal configuration file to external configuration is by simply removing the file from the deployment artifact and making it available in the file system of the target environment.\nWe can put the file in a fixed location that is the same in all environments, for example, \u0026ldquo;/etc/myapp.conf\u0026rdquo;.\nIn our code, we can load the file from this location and read the configuration parameters from it. If the file doesn\u0026rsquo;t exist, we should make sure that the application doesn\u0026rsquo;t start at all in order to keep chaos contained.\nCommand-Line Parameters Another simple approach is to pass command-line parameters into our application. For every configuration parameter we have, we expect a certain command-line parameter.\nThis approach is more flexible than the configuration file approach since we\u0026rsquo;re no longer expecting a file to be available in a certain fixed location. But a command may grow rather long with a lot of configuration parameters.\nEnvironment Variables A common approach to getting rid of long command-line parameter lists is to move the parameters into environment variables provided by the operating system.\nAll operating systems support environment variables. 
They can be set to a certain value by an easy command:\n for Unix systems using the Bourne shell: export myparameter=myvalue  for Unix systems using the Korn shell: myparameter=myvalue export myparameter  for Windows systems: SET myparameter=myvalue   All major programming languages provide a way to access these environment variables from source code.\nUsing environment variables, we can create a start script for our application that starts the application only after all environment variables have been properly set. This script lives in each target environment with different variable values.\nConfiguration Servers If we want to scale our application horizontally (i.e. add more running instances to distribute load), we probably want to configure all instances the same.\nUsing environment variables would mean that we have to distribute the same start script to all instances.\nA change in a single configuration parameter would result in a change to the start script on all instances.\nThis pain can be reduced by using a configuration server. The server knows all configuration parameters for all environments and provides an API to access those parameters.\nAt startup, the application calls the configuration server and loads all configuration parameters it needs. We might even want to re-load configuration parameters at an interval to consider changes to the parameters during runtime since the configuration server makes it easy to change parameters at a single source.\nCombine and Conquer Each technology stack provides features that support external configuration. A very good example is Spring Boot which allows a lot of different configuration sources, loads them in a sensible priority and even allows us to bind them to fields in Java objects.\nSuch a combination of configuration sources makes it possible to define defaults in one source (i.e. a configuration file) that can be overridden by another source (i.e. the command-line). 
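Such a precedence chain can be sketched without any framework. In the hypothetical example below, a single parameter with the made-up key "mail.host" has a built-in default, can be overridden by a properties file, and is overridden in turn by a made-up MAIL_HOST environment variable; the later sources win:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Map;
import java.util.Properties;

public class ConfigResolutionSketch {

    // Resolve one parameter from three sources with increasing priority:
    // built-in default < properties file < environment variable.
    static String resolveMailHost(Reader configFile, Map<String, String> env)
            throws IOException {
        // 1. built-in default
        String value = "localhost";
        // 2. a properties file overrides the default, if one exists
        if (configFile != null) {
            Properties props = new Properties();
            props.load(configFile);
            value = props.getProperty("mail.host", value);
        }
        // 3. an environment variable overrides everything
        return env.getOrDefault("MAIL_HOST", value);
    }

    public static void main(String[] args) throws IOException {
        // A real application would pass new FileReader("/etc/myapp.conf")
        // and System.getenv(); both are faked here for demonstration.
        String fromFile = resolveMailHost(
            new StringReader("mail.host=mail.internal"), Map.of());
        String fromEnv = resolveMailHost(
            new StringReader("mail.host=mail.internal"),
            Map.of("MAIL_HOST", "mail.example.com"));
        System.out.println(fromFile + " / " + fromEnv);
    }
}
```

Frameworks like Spring Boot implement exactly this kind of layered resolution for us, just with many more sources.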
This gives us all the flexibility we could wish for in configuring our application.\nConclusion All configuration parameters should be held outside of our deployment artifacts to avoid multiple builds, long turnaround times and quality issues.\nConfiguration parameters can be externalized by using configuration files, command-line parameters or environment variables.\n","date":"September 9, 2018","image":"https://reflectoring.io/images/stock/0013-switchboard-1200x628-branded_hu4e75c8ecd0e5246b9132ae3e09f147a6_167298_650x0_resize_q90_box.jpg","permalink":"/externalize-configuration/","title":"Build Once, Run Anywhere: Externalize Your Configuration"},{"categories":["Software Craft"],"contents":"Yeah, I know. Everyone says aiming for 100% code coverage is bullshit.\nAnd the way test coverage is usually defined, I fully agree!\nAs the asterisk in the title suggests, there\u0026rsquo;s a little fine print here that will introduce a new definition of \u0026ldquo;code coverage\u0026rdquo; (or \u0026ldquo;test coverage\u0026rdquo;) to help explain why aiming at 100% code coverage might be the right thing to do.\nThe Problem with 100% Actual Code Coverage Let\u0026rsquo;s stick for a minute with the usual definition of code coverage and call it \u0026ldquo;actual code coverage\u0026rdquo;:\n Actual code coverage is the percentage of lines of code that are executed during an automated test run.\n Actually, we can replace \u0026ldquo;lines of code\u0026rdquo; with \u0026ldquo;conditional branches\u0026rdquo;, \u0026ldquo;files\u0026rdquo;, \u0026ldquo;classes\u0026rdquo;, or whatever we want to take as a basis for counting.\nWhy is it bullshit to aim for 100% actual code coverage?\nBecause 100% code coverage does not mean that there are no more bugs in the code. And because people would write useless tests to reach that 100%. And for a lot more reasons I\u0026rsquo;m not going to discuss here.\nSo, back to the question: why would I advocate enforcing 100% code coverage? 
Let\u0026rsquo;s discuss some criminal psychology!\nBroken Windows: Cracks in Your Code Coverage In 1969, Philip Zimbardo, an American psychologist, conducted an experiment where he put an unguarded car on a street in a New York problem area and another one in a better neighborhood in California.\nAfter a short time, a passerby broke a window of the New York car. Rapidly after the window had been broken, the car was vandalized completely.\nThe car in California wasn\u0026rsquo;t damaged for a couple days, so he smashed a window himself. The same effect occurred as with the New York car: the car was rapidly vandalized as soon as a broken window was visible.\nThis experiment shows that once some sign of disorder is visible, people tend to give in to this disorder, independent of their surroundings.\nLet\u0026rsquo;s transfer this to our code coverage discussion: as soon as our code coverage shows cracks, chances are that developers will not care much if they introduce untested code that further lowers the code coverage of the code base.\nI have actually observed this on myself.\nHere\u0026rsquo;s where we could argue that we need to enforce 100% actual code coverage so as not to let things slip due to the Broken Windows Theory.\nBut as stated above: aiming for 100% actual code coverage is bullshit.\nSo how can we avoid the Broken Windows effect without aiming at 100% actual code coverage?\nAvoid Broken Windows by Excluding False Positives The answer is to remove false positives from the actual code coverage, so 100% code coverage becomes a worthwhile and meaningful target.\nWhat\u0026rsquo;s a false positive?\n A false positive is a line of code that is not required to be covered with a test and is not executed during an automated test run.\n If code is not covered by a test, but shows up in a coverage report as \u0026ldquo;not being covered\u0026rdquo;, it\u0026rsquo;s a false positive.\nAgain, we can replace \u0026ldquo;lines of code\u0026rdquo; with 
\u0026ldquo;conditional branches\u0026rdquo; or another basis for counting.\nA false positive in this sense might be:\n a trivial getter or setter method a facade method that solely acts as a trivial forwarding mechanism a class or method for which automated tests are considered too costly (not as good a reason as the others) \u0026hellip;  If we have a way to exclude false positives from our actual code coverage, we have a new coverage metric, which I will call \u0026ldquo;cleaned code coverage\u0026rdquo; for lack of a better term:\n Cleaned code coverage is the percentage of lines of code that are required to be covered by a test and that are executed during an automated test run.\n What have we gained by applying the cleaned code coverage metric instead of the actual code coverage?\nEnforce Cleaned Code Coverage to Keep Coverage High Granted, 100% cleaned code coverage still doesn\u0026rsquo;t mean that there are no bugs in the code. But 100% cleaned code coverage has a lot more meaning than 100% actual code coverage, because we no longer have to interpret what it really means.\nThus, we can aim at 100% and reduce the Broken Windows effect.\nWe can even enforce 100% by setting up a test coverage tool to break the build if we don\u0026rsquo;t have 100% cleaned code coverage (provided the tool supports excluding false positives).\nThis way, each time a developer introduces new code that is not covered with tests, a breaking build will make her aware of missing tests.\nUsually, test coverage tools create a report of which lines of code have not been covered by tests. The developer can just look at this report and will directly see what part of her newly introduced code is not covered with tests. 
If the report worked with actual code coverage, this information would be drowned in false positives!\nLooking at the report, the developer can then decide whether she should add a test to cover the code or if she should mark it as a false positive to be excluded from actual code coverage.\nExcluding lines of code from the cleaned code coverage thus becomes a conscious decision. This decision is an obstacle that we don\u0026rsquo;t have when just looking at the actual code coverage and deciding that a reduction in code coverage is OK.\nMonitor Actual Code Coverage to Find Untested Code But, as good as our intentions are, it still may happen that due to criminal intent or external pressure we excluded a little too much code from our code coverage metric.\nIn the extreme, we may have 100% cleaned code coverage and 0% actual code coverage (when we defined all code as false positives).\nThat\u0026rsquo;s why the actual code coverage should still be monitored.\nCleaned code coverage should be used for automated build breaking to get the developer\u0026rsquo;s attention and reduce the Broken Windows effect.\nActual code coverage should still be regularly inspected to identify pockets of code that are not tested but perhaps should be.\nTooling Let\u0026rsquo;s define our requirements for a code coverage tool to support the practice discussed in this article.\nThe code coverage tool must:\n allow us to define exclusions / false positives create a report about cleaned code coverage (i.e. taking the exclusions into account) create a report about actual code coverage (i.e. 
disregarding the exclusions) allow us to break a build at \u0026lt;100% cleaned code coverage  JaCoCo is a tool that supports all of the above bullet points except creating a coverage report about the actual code coverage when we have defined exclusions.\nIf you know of a tool that supports all of the above features, let me know in the comments!\nConclusion Naively aiming at 100% code coverage is bullshit.\nHowever, if we allow excluding code that doesn\u0026rsquo;t need to be tested from the coverage metric, aiming at 100% becomes much more meaningful and it becomes easier to keep a high test coverage due to psychological effects.\nWhat\u0026rsquo;s your take on 100% code coverage?\n","date":"September 1, 2018","image":"https://reflectoring.io/images/stock/0027-cover-1200x628-branded_hud0e8018d4bb3bffe77108325dc949a45_281256_650x0_resize_q90_box.jpg","permalink":"/percent-test-coverage/","title":"Why You Should Enforce 100% Code Coverage*"},{"categories":["Software Craft","Java"],"contents":"To test our business code we always need some kind of test data. This tutorial explains how to do just that with the Object Mother pattern and why we should combine it with a Fluent Builder to create test data factories that are fun to work with.\n Example Code This article is accompanied by a working code example on GitHub. What do we Need a Test Data Factory For? 
Let\u0026rsquo;s imagine that we want to create some tests around Invoice objects that are structured as shown in the figure below.\nAn Invoice has a target Address and zero or more InvoiceItems, each containing the amount and price of a certain product that is billed with the invoice.\nNow, we want to test our invoice handling business logic with a couple of test cases:\n a test verifying that invoices with an abroad invoice address are sent to an invoicing service specialized on foreign invoicing a test verifying that a missing house number in an invoice address leads to a validation error a test verifying that an invoice with a negative total price is forwarded to a refund service  For each of these test cases, we obviously need an Invoice object in a certain state:\n an invoice with an address in another country, an invoice with an address with a missing house number, and an invoice with a negative total price.  How are we going to create these Invoice instances?\nOf course, we can go ahead and create the needed Invoice instance locally in each test case. But, alas, creating an Invoice requires creating some InvoiceItems and an Address, too \u0026hellip; that seems like a lot of boilerplate code.\nApply the Object Mother Pattern to Reduce Duplication The example classes used in this article are rather simple. In the real world, classes like Invoice, InvoiceItem or Address can easily contain 20 or more fields each.\nDo we really want to have code that initializes such complex object graphs in multiple places of our test code base?\nBad test code structure hinders the development of new features just as much as bad production code, as Robert C. 
Martin\u0026rsquo;s Clean Architecture has once more brought to my attention (link points to ebooks.com; read my book review).\nSo, let\u0026rsquo;s try to keep test code duplication to a minimum by applying the Object Mother pattern.\nThe Object Mother pattern is essentially a special case of the Factory pattern used for creating test objects. It provides one or more factory methods that each create an object in a specific, meaningful configuration.\nIn a test, we can call one of those factory methods and work with the object created for us. If the pre-defined object returned by the Object Mother doesn\u0026rsquo;t fully meet our test requirements, we can go ahead and change some fields of that object locally so that it meets the requirements of our test.\nIn our example, the Object Mother might provide these factory methods for pre-defined Invoice objects:\n InvoiceMother.complete(): creates a complete and valid Invoice object including sensibly configured InvoiceItems and a valid Address InvoiceMother.refund(): creates a complete and valid Invoice object with a negative total price  For our three test cases, we can then use these factory methods:\n To create an Invoice with an abroad address, we call InvoiceMother.complete() and change the country field of the address locally To create an Invoice with a missing house number, we call InvoiceMother.complete() and remove the house number from the address locally To create an Invoice with a negative total price, we simply call InvoiceMother.refund()  The goal of the Object Mother pattern is not to provide a factory method for every single test requirement we might have but instead to provide ways to create a few functionally meaningful versions of an object that can be easily adapted within a concrete test.\nEven with that goal in mind, over time, an Object Mother might degrade to the code equivalent of a termite queen, birthing new objects for each and every use case we might have. 
In every test case, we\u0026rsquo;d have a dependency on our Object Mother to create objects just right for the requirements at hand.\nEach time we change one of our test cases, we would also have to change the factory method in our Object Mother. This violates the Single Responsibility Principle since the Object Mother must be changed for a lot of different reasons.\nWe stated above that we want to keep our test code base clean, so how can we reduce the risk of violating the Single Responsibility Principle?\nIntroduce the Fluent Builder Pattern to Promote the Single Responsibility Principle That\u0026rsquo;s where the Builder pattern comes into play.\nA Builder is an object with methods that allow us to define the parameters for creating a certain object. It also provides a factory method that creates an object from these parameters.\nInstead of returning readily initialized objects, the factory methods of our Object Mother now return Builder objects that can be further modified by the client to meet the requirements of the specific use case.\nThe code for creating an Invoice with a modified address might look like this:\nInvoice.InvoiceBuilder invoiceBuilder = InvoiceMother.complete(); Address.AddressBuilder addressBuilder = AddressMother.abroad(); invoiceBuilder.address(addressBuilder.build()); Invoice invoice = invoiceBuilder.build(); So far, we haven\u0026rsquo;t really gained anything over the pure Object Mother approach described in the previous section. Our InvoiceMother now simply returns instances of InvoiceBuilder instead of directly returning Invoice objects.\nLet\u0026rsquo;s introduce a fluent interface to our Builder. 
A fluent interface is a programming style that allows us to chain multiple method calls in a single statement and is perfectly suited for the Builder pattern.\nThe code from above can now be changed to make use of this fluent interface:\nInvoice invoice = InvoiceMother.complete() .address(AddressMother.abroad() .build()) .build(); But why should this reduce the chance of violating the Single Responsibility Principle in an Object Mother class?\nWith a fluent API and an IDE that supports code completion, we can let the API guide us in creating the object we need.\nHaving this power at our fingertips, we\u0026rsquo;ll more likely configure the specific Invoice we need in our test code and we\u0026rsquo;ll less likely create a new factory method in our Object Mother that is probably only relevant for our current test.\nThus, combining the Object Mother pattern with a fluent Builder reduces the potential for violating the Single Responsibility Principle by making it easier to do the right thing.\nMay a Factory Method Call Another Factory Method? When creating an Object Mother (or actually any other kind of factory), a question that often arises is: \u0026ldquo;May I call another factory method from the factory method I\u0026rsquo;m currently coding?\u0026rdquo;.\nMy answer to this question is a typical \u0026ldquo;yes, but\u0026hellip;\u0026rdquo;.\nOf course, we may take advantage of other existing Object Mothers. For instance, in the code of InvoiceMother, we may happily call AddressMother and InvoiceItemMother:\nclass InvoiceMother { static Invoice.InvoiceBuilder complete() { return Invoice.Builder() .id(42L) .address(AddressMother.complete() .build()) .items(Collections.singletonList( InvoiceItemMother.complete() .build())); } } But the same rules apply as in our client test code. 
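As a side note, the fluent InvoiceBuilder that such factory methods return can be sketched as follows. This is a simplified, assumed shape with hypothetical fields, not the article's actual builder:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, trimmed-down versions of the article's classes.
class Address {
    final String country;
    Address(String country) { this.country = country; }
}

class Invoice {
    final Address address;
    final List<Double> itemPrices;
    Invoice(Address address, List<Double> itemPrices) {
        this.address = address;
        this.itemPrices = itemPrices;
    }

    // Fluent builder: each setter returns the builder itself so calls can be chained.
    static class InvoiceBuilder {
        private Address address = new Address("Germany");
        private List<Double> itemPrices = new ArrayList<>();

        InvoiceBuilder address(Address address) {
            this.address = address;
            return this; // returning 'this' enables the fluent chaining
        }

        InvoiceBuilder item(double price) {
            this.itemPrices.add(price);
            return this;
        }

        Invoice build() {
            return new Invoice(address, itemPrices);
        }
    }
}
```

With this in place, new Invoice.InvoiceBuilder().address(new Address("France")).item(9.99).build() reads as a single fluent statement; the Object Mother's job is merely to hand out a pre-populated builder.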
We don\u0026rsquo;t want to add responsibilities to our factory method that don\u0026rsquo;t belong there.\nSo, before creating a custom factory method in an Object Mother we want to call from the factory method we\u0026rsquo;re currently coding, let\u0026rsquo;s think about whether we should rather use one of the pre-defined factory methods and customize the returned builder via fluent API to suit our requirements.\nConclusion The Object Mother pattern by itself is a big help in quickly getting pre-defined objects to use in tests.\nBy returning Builders with a fluent API instead of directly returning object instances, we add a lot of flexibility to our test data generation, which makes creating new test objects for any given requirement a breeze. It supports the Single Responsibility Principle by making it easy to adjust created objects locally.\nFurther Reading  Clean Architecture by Robert C. Martin, chapter 28 about the quality of test code (link points to ebooks.com) Martin Fowler on Object Mother Object Mother at java-design-patterns.com TestDataBuilder at wiki.c2.com   Example Code This article is accompanied by a working code example on GitHub. ","date":"August 31, 2018","image":"https://reflectoring.io/images/stock/0030-builder-1200x628-branded_hu64e9d0e0a46a5c80a2ce7e3a12f4f455_125268_650x0_resize_q90_box.jpg","permalink":"/objectmother-fluent-builder/","title":"Combining Object Mother and Fluent Builder for the Ultimate Test Data Factory"},{"categories":["Book Notes"],"contents":"TL;DR: Read this Book, when\u0026hellip;  you are interested in building maintainable software you need arguments to persuade your stakeholders to take some more time for creating quality software you want some inspiration on building applications in a different way than the default \u0026ldquo;3-layer architecture\u0026rdquo;  Overview As the name suggests, Clean Architecture - A Craftsman\u0026rsquo;s Guide to Software Structure and Design by Robert C. 
Martin (\u0026ldquo;Uncle Bob\u0026rdquo;) takes a step back from the details of programming and discusses the bigger picture.\nThe \u0026ldquo;prequels\u0026rdquo; Clean Code and The Clean Coder have been must-reads for software engineers, so I expected a lot from this new book.\nThe book has about 320 pages in 34 chapters, not counting the appendix. Few chapters are longer than 10 pages.\nIn my opinion, the book starts and ends weakly with a strong middle part.\nPart I of the book motivates the topic by defining architecture as the means to facilitate the development, deployment, operation, and maintenance of a software system and not as the means to make the software work. Thus, good architecture will minimize the lifetime cost of a software system.\nParts II-IV cover the three programming paradigms (structured programming, functional programming, and object-oriented programming) and a recap of the SOLID principles that Martin himself introduced in one of his papers back when I was just graduating from school. Also, Martin discusses some Component Principles around cohesion and coupling.\nPart V is titled \u0026ldquo;Architecture\u0026rdquo; and brings the most value. The main topics of this part are boundaries and dependencies. Most of the chapters in this part discuss how to structure your code in order to adhere to the SOLID principles and keep your software flexible.\nPart VI discusses frameworks as mere details to the overall architecture and argues that they should be treated as such by holding them at arm\u0026rsquo;s length from your precious domain code.\nFinally, in part VII, titled \u0026ldquo;Architecture Archaeology\u0026rdquo;, Martin describes some of the software projects he\u0026rsquo;s been working on over the last 45 years and the lessons he learned from them. I admit that I only read the first couple of pages. 
I found the pictures of room-filling computers with giant disk storage alongside a dry explanation of the software projects around those computers rather boring.\nIn Martin\u0026rsquo;s defense: this last part is in the appendix, so he probably expected people to skip it.\nMy Key Takeaways For me, often being in the role of a software architect, the book reminded me again of some architecture principles I hadn\u0026rsquo;t consciously thought about in a while, sparking some new ideas.\nOne key thing I\u0026rsquo;m taking away is the statement that we should defer decisions on certain details (like which database to use) to the latest possible moment. This way, we don\u0026rsquo;t have to consider these details in our everyday programming routine, saving time and money in the long run.\nI realized I\u0026rsquo;m doing this quite often, but not often enough, and will do so much more consciously from now on.\nThe next thing I\u0026rsquo;m taking away is the dependency inversion principle. I\u0026rsquo;ve known it, but reading again that we can point our dependencies in the direction we would like them to point will make me think again the next time I\u0026rsquo;m introducing some kind of dependency into my code.\nComing with the dependency inversion is the plugin architecture or \u0026ldquo;Clean Architecture\u0026rdquo; Martin promotes. The domain code sits in the center of the architecture, surrounded by details that implement plugins for the domain code. All dependencies point inward, against the control flow.\nAnother thing that provoked some thoughts is that tests should be considered as part of the system and should be just as easily maintainable, otherwise they hinder the development just as badly designed production code would. Thus, tests should not depend on volatile things like a GUI; rather, they should use an API specifically designed for tests.\nFinally, my understanding of the word \u0026ldquo;firmware\u0026rdquo; broadened. 
I always thought of it as software embedded on devices. It is that, but Martin argues that the software we\u0026rsquo;re writing every day also \u0026ldquo;degrades\u0026rdquo; to firmware (= software that\u0026rsquo;s hard to change) if we don\u0026rsquo;t take care of its architecture.\nDislikes Though I liked the book and read through it about a chapter a day, there are some things I didn\u0026rsquo;t like.\nMartin describes a Clean Architecture and provides diagrams to highlight certain boundary-techniques. But I\u0026rsquo;m missing a hands-on, real-life example that applies this architecture. Chapter 33 attempts to do that with a case study on a video sales application, but the example feels rather artificial.\nWhat\u0026rsquo;s more, in the case study chapter, he promotes boundaries between the layers of the application without discussing functional boundaries (slices), which is a popular method to organize code. This also seems to contradict his own \u0026ldquo;Screaming Architecture\u0026rdquo; metaphor from chapter 21, which says that an architecture should \u0026ldquo;scream out\u0026rdquo; which use cases it supports.\n\u0026ldquo;The Missing Chapter\u0026rdquo; (chapter 34) by Simon Brown has also disappointed me. It promises a solution for using the package structure to promote the architecture that\u0026rsquo;s different from the usual \u0026ldquo;package by layer\u0026rdquo; and \u0026ldquo;package by feature\u0026rdquo; approaches. However, that solution amounts to \u0026ldquo;put everything that belongs together into the same package\u0026rdquo;. 
Yes, this allows us to use package-private visibility, but at the cost of code organization.\nConclusion In conclusion, I recommend reading this book.\nEspecially software engineers that want to take a step toward more senior development roles will find value in it.\nFor more experienced software engineers, most of the content will not be new, but it will probably spark some ideas for current and upcoming software projects, as it has for me.\n","date":"August 25, 2018","image":"https://reflectoring.io/images/covers/clean-architecture-teaser_hua2ce2f450d2d3fc625b450a1e3af5692_315961_650x0_resize_q90_box.jpg","permalink":"/book-review-clean-architecture/","title":"Book Review: Clean Architecture by Robert C. Martin"},{"categories":["Spring Boot"],"contents":"Consumer-driven contract tests are a technique to test integration points between API providers and API consumers without the hassle of end-to-end tests (read it up in a recent blog post). A common use case for consumer-driven contract tests is testing interfaces between services in a microservice architecture. In the Java ecosystem, Spring Boot is a widely used technology for implementing microservices. Spring Cloud Contract is a framework that facilitates consumer-driven contract tests. So let\u0026rsquo;s have a look at how to test a REST API provided by a Spring Boot application against a contract previously defined by the API consumer using Spring Cloud Contract.\n Example Code This article is accompanied by a working code example on GitHub. In this Article Instead of testing API consumer and provider in an end-to-end manner, with consumer-driven contract tests we split up the test of our API into two parts:\n a consumer test testing against a mock provider and a provider test testing against a mock consumer  This article focuses on the provider side. 
A consumer of our API has created a contract in advance and we want to verify that the REST API provided by our Spring Boot Service matches the expectations of that contract.\nIn this article we will:\n have a look at the API contract created in advance by an API consumer create a Spring MVC controller providing the desired REST API set up Spring Cloud Contract to automatically generate JUnit tests that verify the controller against the contract  The Contract In Spring Cloud Contract contracts are defined with a DSL in a Groovy file. The contract we\u0026rsquo;re using in this article looks like this:\npackage userservice import org.springframework.cloud.contract.spec.Contract Contract.make { description(\u0026#34;When a POST request with a User is made, the created user\u0026#39;s ID is returned\u0026#34;) request { method \u0026#39;POST\u0026#39; url \u0026#39;/user-service/users\u0026#39; body( firstName: \u0026#34;Arthur\u0026#34;, lastName: \u0026#34;Dent\u0026#34; ) headers { contentType(applicationJson()) } } response { status 201 body( id: 42 ) headers { contentType(applicationJson()) } } } Each contract defines a single request / response pair. 
The contract above defines an API provided by user-service that consists of a POST request to the URL /user-service/users containing some user data in the body and an expected response to that request returning HTTP code 201 and the newly created user\u0026rsquo;s database id as body.\nFor later usage, the contract file is expected to be filed under src/test/resources/contracts/userservice/shouldSaveUser.groovy.\nThe Spring Controller A Spring controller that obeys the above contract is easily created:\n@RestController public class UserController { private UserRepository userRepository; @Autowired public UserController(UserRepository userRepository) { this.userRepository = userRepository; } @PostMapping(path = \u0026#34;/user-service/users\u0026#34;) public ResponseEntity\u0026lt;IdObject\u0026gt; createUser(@RequestBody @Valid User user) { User savedUser = this.userRepository.save(user); return ResponseEntity .status(201) .body(new IdObject(savedUser.getId())); } } IdObject is a simple bean that has the single field id.\nThe Provider Test Next, let\u0026rsquo;s set up Spring Cloud Contract to verify that the above controller really obeys the contract. We\u0026rsquo;re going to use Gradle as build tool (but Maven is supported as well).\nTest Base To verify an API provider (the Spring controller in our case), Spring Cloud Contract automatically generates JUnit tests from a given contract. 
In order to give these automatically generated tests a working context, we need to create a base test class which is subclassed by all generated tests:\n@RunWith(SpringRunner.class) @SpringBootTest(classes = DemoApplication.class) public abstract class UserServiceBase { @Autowired WebApplicationContext webApplicationContext; @MockBean private UserRepository userRepository; @Before public void setup() { User savedUser = new User(); savedUser.setFirstName(\u0026#34;Arthur\u0026#34;); savedUser.setLastName(\u0026#34;Dent\u0026#34;); savedUser.setId(42L); when(userRepository.save(any(User.class))).thenReturn(savedUser); RestAssuredMockMvc.webAppContextSetup(webApplicationContext); } } In this base class, we\u0026rsquo;re setting up a Spring Boot application with @SpringBootTest and are mocking away the UserRepository so that it always returns the user specified in the contract. Then, we set up RestAssured so that the generated tests can simply use RestAssured to send requests against our controller.\nNote that the contract DSL allows us to specify matchers instead of static content, so that the user name defined in our contract does not have to be \u0026ldquo;Arthur Dent\u0026rdquo; but may for example be any String.\nSetting up the build.gradle Spring Cloud Contract provides a Gradle plugin that takes care of generating the tests for us:\napply plugin: \u0026#39;spring-cloud-contract\u0026#39; The plugin needs the following dependencies within the buildscript scope:\nbuildscript { repositories { // ...  
} dependencies { classpath \u0026#34;org.springframework.boot:spring-boot-gradle-plugin:2.0.4.RELEASE\u0026#34; classpath \u0026#34;org.springframework.cloud:spring-cloud-contract-gradle-plugin:2.0.1.RELEASE\u0026#34; } } In the contracts closure, we define some configuration for the plugin:\ncontracts { baseClassMappings { baseClassMapping(\u0026#34;.*userservice.*\u0026#34;, \u0026#34;io.reflectoring.UserServiceBase\u0026#34;) } } The mapping we defined above tells Spring Cloud Contract that the tests generated for any contracts it finds in src/test/resources/contracts that contain \u0026ldquo;userservice\u0026rdquo; in their path are to be subclassed from our test base class UserServiceBase. We could define more mappings if different tests require different setups (i.e. different base classes).\nIn order for the automatically generated tests to work, we need to include some further dependencies in the testCompile scope:\ndependencies { // ...  testCompile(\u0026#39;org.codehaus.groovy:groovy-all:2.4.6\u0026#39;) testCompile(\u0026#34;org.springframework.cloud:spring-cloud-starter-contract-verifier:2.0.1.RELEASE\u0026#34;) testCompile(\u0026#34;org.springframework.cloud:spring-cloud-contract-spec:2.0.1.RELEASE\u0026#34;) testCompile(\u0026#34;org.springframework.boot:spring-boot-starter-test:2.0.4.RELEASE\u0026#34;) } The Generated Test Once we call ./gradlew generateContractTests, the Spring Cloud Contract Gradle plugin will now generate a JUnit test in the folder build/generated-test-sources:\npublic class UserserviceTest extends UserServiceBase { @Test public void validate_shouldSaveUser() throws Exception { // given:  MockMvcRequestSpecification request = given() .header(\u0026#34;Content-Type\u0026#34;, \u0026#34;application/json\u0026#34;) .body(\u0026#34;{\\\u0026#34;firstName\\\u0026#34;:\\\u0026#34;Arthur\\\u0026#34;,\\\u0026#34;lastName\\\u0026#34;:\\\u0026#34;Dent\\\u0026#34;}\u0026#34;); // when:  ResponseOptions response = given().spec(request) 
.post(\u0026#34;/user-service/users\u0026#34;); // then:  assertThat(response.statusCode()).isEqualTo(201); assertThat(response.header(\u0026#34;Content-Type\u0026#34;)).matches(\u0026#34;application/json.*\u0026#34;); // and:  DocumentContext parsedJson = JsonPath.parse(response.getBody().asString()); assertThatJson(parsedJson).field(\u0026#34;[\u0026#39;id\u0026#39;]\u0026#34;).isEqualTo(42); } } As you can see, the generated test sends the request specified in the contract and validates that the controller returns the response expected from the contract.\nThe Gradle task generateContractTests is automatically included within the build task so that a normal build will generate and then run the tests.\nBonus: Generating Tests from a Pact Above, we used a contract defined with the Spring Cloud Contract DSL. However, Spring Cloud Contract currently only supports JVM languages and you might want to verify a contract generated by a non-JVM consumer like an Angular application. In this case, you may want to use Pact on the consumer side since Pact supports other languages as well. You can read up on how to create a contract with Pact from an Angular client in this article.\nSpring Cloud Contract Pact Support Luckily, Spring Cloud Contract supports the Pact contract format as well. To automatically generate tests from a pact file, you need to put the pact file (which is a JSON file) into the folder src/test/contracts and add these dependencies to your build.gradle:\nbuildscript { repositories { // ...  } dependencies { // other dependencies ...  
classpath \u0026#34;org.springframework.cloud:spring-cloud-contract-spec-pact:1.2.5.RELEASE\u0026#34; classpath \u0026#39;au.com.dius:pact-jvm-model:2.4.18\u0026#39; } } Spring Cloud Contract then automatically picks up the pact file and generates tests for it just like for the \u0026ldquo;normal\u0026rdquo; contract files.\nConclusion In this article, we set up a Gradle build using Spring Cloud Contract to auto-generate tests that verify that a Spring REST controller obeys a certain contract. Details about Spring Cloud Contract can be looked up in the reference manual. Also, check the github repo containing the example code to this article.\n","date":"August 16, 2018","image":"https://reflectoring.io/images/stock/0025-signature-1200x628-branded_hu40d5255a109b1d14ac3f4eab2daeb887_126452_650x0_resize_q90_box.jpg","permalink":"/consumer-driven-contract-provider-spring-cloud-contract/","title":"Testing a Spring Boot REST API against a Contract with Spring Cloud Contract"},{"categories":["Software Craft"],"contents":"Logging to files and analyzing them by hand is not the way to go anymore. This article explains the reasons why a log server is the way to go for collecting and analyzing log data.\nA Motivating Story Imagine we have successfully released our application, which is happily serving real users. But there\u0026rsquo;s this bug that prevents users from finishing an important use case under certain conditions. If we only knew which conditions lead to the bug \u0026hellip; but that information should be available in the logs, right?\nDay 1:\nOk, let\u0026rsquo;s write an email to the ops people to request the logs from today! Wait, they\u0026rsquo;re not allowed to send us just any logs due to privacy concerns. We need to specify some filter for the data we need. 
Ok then, let\u0026rsquo;s filter the logs by date (we only need today\u0026rsquo;s logs) and by component (we suspect the bug in a certain component).\nDay 2:\nThe ops people get clearance for the log request and send us a log excerpt. But, damn, the log excerpt doesn\u0026rsquo;t help us. The component we suspected wasn\u0026rsquo;t responsible for the bug after all. Let\u0026rsquo;s widen the search to another component and try again\u0026hellip; .\nDay n: We finally found the information we needed to fix the bug after playing email ping pong with the ops people for n days.\nNo need to say that this kind of turnaround makes fixing elusive bugs an unproductive task that no one really wants to be involved in.\nLet\u0026rsquo;s have a look at some reasons why a log server would help us in this situation.\nReason #1: Centralization The main reason for a log server is that the log data is being centralized with the log server as a single point of entry. All other reasons mentioned in this article depend on the log data being centrally available.\nIn a distributed environment, every service simply sends its log events to the log server where they are aggregated and made available for log analysis. No need for ops people to semi-automatically gather log files from across all services.\nLog aggregation, filtering, searching, monitoring and alerting are done at a single place. A straightforward implementation of the DRY principle.\nEven for a monolithic application, centralized log data is a great benefit.\nOne might argue that log data in a monolithic application is already centralized, since we have only this one application. 
But it\u0026rsquo;s very likely that we have multiple instances of the application running for scalability and availability so that the same arguments apply as for distributed environments.\nReason #2: Searchability Searching through log files is no fun.\nReally, it sucks.\nEven when we have mastered awk, sed and grep to filter and transform log data into a form that is more helpful for the task at hand.\nA main feature of log servers is to provide search capabilities across the collected log data. To trace a bug reported by a user, we can simply type in the correlation id that was shown on the user\u0026rsquo;s screen and voilà, we will probably see an error message in the log that allows us to analyze the bug (provided, of course, that we have implemented a correlation id mechanism).\nOK, we can do this with grep just as well, provided we can grep across distributed log files.\nBut how about this: we want to see all log events across all threads in all services that were involved in processing a certain asynchronous message to trace that message through the distributed system.\nThis is easy for a log server, since\n it has access to the log events of all services\n it can index and efficiently search structured data appended to the log events, such as a trace id.  
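Such a trace or correlation id is typically attached to every log event through a mapped diagnostic context. As a rough illustration of the mechanism, here is a hand-rolled, pure-JDK stand-in for what SLF4J's MDC does under the hood; this is a sketch, not the API of any real logging framework:

```java
import java.util.HashMap;
import java.util.Map;

// A hand-rolled stand-in for a mapped diagnostic context: per-thread
// key/value pairs that get appended to every log line on that thread.
class LogContext {
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) { CONTEXT.get().put(key, value); }
    static String get(String key) { return CONTEXT.get().getOrDefault(key, "-"); }

    // Every formatted log line carries the current trace id, so a log
    // server can index it and return all events belonging to one request.
    static String format(String message) {
        return "traceId=" + get("traceId") + " | " + message;
    }
}
```

A request filter would call LogContext.put("traceId", ...) once at the edge of the service; every subsequent log statement on that thread then carries the id automatically.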
With clever use of such structured log data for providing context information, we can make the data flow through our application easily visible.\nReason #3: Accessibility Every developer should have access to the logs.\nThis should be a fundamental right for software developers.\nLooking through the logs regularly makes our relationship with the application much more intimate, and we learn to read her little aches and pains and get better at soothing them.\nA log server is way more easily accessible than logging onto a host via SSH and grepping the log files because:\n it\u0026rsquo;s just plain easier to fire up a browser and log in to a log server than logging onto a host with SSH\n in today\u0026rsquo;s containerized world we might not even know the address of the SSH host we\u0026rsquo;re looking for\n not every developer has enough unix skills to use grep and consorts efficiently to sift through log files (shame on them!).  Now, our organization\u0026rsquo;s privacy agent might get a heart attack when confronted with our request to grant production log access to all developers. And he or she has a point.\nEspecially in the EU, data privacy is a big thing and after all we don\u0026rsquo;t want to break our users' trust.\nThe solution to this is to separate log data that contains personal data from technical log data. 
The technical log data should be available on the log server for analysis and bug fixing, while the personal log data may be stored somewhere more private.\nA separation of log data like this may take some planning in our security architecture and careful code reviews, but it\u0026rsquo;s worth the effort when it means that we can access at least part of the production logs.\nReason #4: Monitoring \u0026amp; Alerting Especially in the early age of an application, right after going to production, we want to monitor it like we would monitor an infant in the next room with a baby monitor.\nPart of that monitoring is to check the log files for certain kinds of messages.\nA log server usually provides functionality to automatically filter and visualize certain log messages on a dashboard. So, if we get anxious and want to know if the baby\u0026rsquo;s still breathing, we can have a look at the dashboard and be at ease.\nGoing further, some log events are urgent enough that they should trigger an alarm. This is less like a baby monitor and more like a heart rate alarm in a hospital\u0026rsquo;s intensive care unit.\nThis is another feature provided by most log servers.\nAgain, we have a central place where we can monitor our application\u0026rsquo;s health and define rules for alerting. All without having to handle log files in any form.\nReason #5: Minimal Effort The most frequent excuse for not doing something, when we know we should, is: \u0026ldquo;it costs too much\u0026rdquo;.\nIn the best case, we have planned the setup of a log server into the project backlog from day 1 (who doesn\u0026rsquo;t use a log server in a new project these days anyway?). Then, we can insist on setting up the log server by pointing to the backlog.\nIf a log server hasn\u0026rsquo;t been planned into the budget, we have to pitch it to the people responsible for the project budget. 
We usually don\u0026rsquo;t bother with that, because we know the answer will be \u0026ldquo;no\u0026rdquo;.\nBut will it really?\nSetting up a log server is really nothing special. We can have one running on our local machine in minutes.\nYes, it has to be set up in all our test and production environments. But with today\u0026rsquo;s container technology this shouldn\u0026rsquo;t be much of a pain.\nConclusion Using a log server should be a default for the development and operation of most server applications.\nIt\u0026rsquo;s not hard to set up and brings a lot of advantages. If you\u0026rsquo;re having trouble convincing the right people to be allowed to use one, try to apply the above reasons in your argumentation.\nDo you know other reasons why we should (or perhaps should not) use a log server? Let me know in the comments!\n","date":"August 13, 2018","image":"https://reflectoring.io/images/stock/0032-dashboard-1200x628-branded_hu32014b78b20b83682c90e2a7c4ea87ba_153646_650x0_resize_q90_box.jpg","permalink":"/log-server/","title":"5 Good Reasons to Use a Log Server"},{"categories":["Java"],"contents":"In a previous Tip, I proposed to use a human-readable logging format so that we can quickly scan a log to find the information we need. This article shows how to implement this logging format with the Logback and Descriptive Logger libraries.\n Example Code This article is accompanied by a working code example on GitHub. The Target Logging Format The Logging format we want to achieve looks something like this:\n2018-07-29 | 21:10:29.178 | thread-1 | INFO | com.example.MyService | 000425 | Service started in 3434 ms. 2018-07-29 | 21:10:29.178 | main | WARN | some.external.Configuration | | Parameter \u0026#39;foo\u0026#39; is missing. Using default value \u0026#39;bar\u0026#39;! 2018-07-29 | 21:10:29.178 | scheduler | ERROR | com.example.jobs.ScheduledJob | 000972 | Scheduled job cancelled due to NullPointerException! ... Stacktrace ... 
We have distinct columns so we can quickly scan the log messages for the information we need. The columns contain the following information:\n the date the time the name of the thread the level of the log message the name of the logger the unique ID of the log message for quick reference of the log message in the code (log messages from third party libraries won\u0026rsquo;t have an ID, since we can\u0026rsquo;t control it) the message itself potentially a stacktrace.  Let\u0026rsquo;s look at how we can configure our application to create log messages that look like this.\nAdding a Unique ID to each Log Message First, we need to collect all the information contained in the log messages. Every piece of information except the unique ID is pretty much default so we don\u0026rsquo;t have to do anything to get it.\nBut in order to add a unique ID to each log message, we have to provide such an ID. For this, we use the Descriptive Logger library, a small wrapper on top of SLF4J that I created.\nWe need to add the following dependency to our build:\ndependencies { compile(\u0026#34;io.reflectoring:descriptive-logger:1.0\u0026#34;) } Descriptive Logger is a library that allows us to descriptively define log messages with the help of annotations.\nFor each associated set of log messages we create an interface annotated with @DescriptiveLogger:\n@DescriptiveLogger public interface MyLogger { @LogMessage(level=Level.DEBUG, message=\u0026#34;This is a DEBUG message.\u0026#34;, id=14556) void logDebugMessage(); @LogMessage(level=Level.INFO, message=\u0026#34;This is an INFO message.\u0026#34;, id=5456) void logInfoMessage(); @LogMessage(level=Level.ERROR, message=\u0026#34;This is an ERROR message with a very long ID.\u0026#34;, id=1548654) void logMessageWithLongId(); } Each method annotated with @LogMessage defines a log message. Here\u0026rsquo;s where we can also define the unique ID for each message by setting the id field. 
This ID will be added to the Mapped Diagnostic Context (MDC) which we can later use when we\u0026rsquo;re defining our Logging Pattern for Logback.\nIn our application code we let the LoggerFactory create an implementation of the above interface and simply call the log methods to output the log messages:\npublic class LoggingFormatTest { private MyLogger logger = LoggerFactory.getLogger(MyLogger.class, LoggingFormatTest.class); @Test public void testLogPattern(){ Thread.currentThread().setName(\u0026#34;very-long-thread-name\u0026#34;); logger.logDebugMessage(); Thread.currentThread().setName(\u0026#34;short\u0026#34;); logger.logInfoMessage(); logger.logMessageWithLongId(); } } In between the messages we change the thread name to test the log output with thread names of different lengths.\nConfiguring the Logging Format with Logback Now that we can create log output with all the information we need, we can configure logback with the desired logging format. The configuration is located in the file logback.xml:\n\u0026lt;configuration\u0026gt; \u0026lt;appender name=\u0026#34;CONSOLE\u0026#34; class=\u0026#34;ch.qos.logback.core.ConsoleAppender\u0026#34;\u0026gt; \u0026lt;encoder\u0026gt; \u0026lt;pattern\u0026gt;%d{yyyy-MM-dd} | %d{HH:mm:ss.SSS} | %thread | %5p | %logger{25} | %12(ID: %8mdc{id}) | %m%n\u0026lt;/pattern\u0026gt; \u0026lt;charset\u0026gt;utf8\u0026lt;/charset\u0026gt; \u0026lt;/encoder\u0026gt; \u0026lt;/appender\u0026gt; \u0026lt;root level=\u0026#34;DEBUG\u0026#34;\u0026gt; \u0026lt;appender-ref ref=\u0026#34;CONSOLE\u0026#34;/\u0026gt; \u0026lt;/root\u0026gt; \u0026lt;/configuration\u0026gt; Within the \u0026lt;pattern\u0026gt; xml tag, we define the logging format. 
The formats that have been used here can be looked up in the Logback documentation.\nHowever, if we try out this logging format, it will not be formatted properly:\n2018-08-03 | 22:04:29.119 | main | DEBUG | o.s.a.f.JdkDynamicAopProxy | ID: | Creating JDK dynamic proxy: target source is EmptyTargetSource: no target class, static 2018-08-03 | 22:04:29.133 | very-long-thread-name | DEBUG | i.r.l.LoggingFormatTest | ID: 14556 | This is a DEBUG message. 2018-08-03 | 22:04:29.133 | short | INFO | i.r.l.LoggingFormatTest | ID: 5456 | This is an INFO message. 2018-08-03 | 22:04:29.133 | short | ERROR | i.r.l.LoggingFormatTest | ID: 1548654 | This is an ERROR message with a very long ID. The thread and logger name columns don\u0026rsquo;t have the same width in each line.\nTo fix this, we could try to use Logback\u0026rsquo;s padding feature, which allows us to pad a column with spaces up to a certain number by adding %\u0026lt;number\u0026gt; before the format in question. This way, we could try %20thread instead of just %thread to pad the thread name to 20 characters.\nIf the thread name is longer than these 20 characters, though, the column will overflow.\nSo, we need some way to truncate the thread and logger names to a defined maximum of characters.\nTruncating Thread and Logger Names Luckily, Logback provides an option to truncate fields.\nIf we change the patterns for thread and logger to %-20.20thread and %-25.25logger{25}, Logback will pad the values with spaces if they are shorter than 20 or 25 characters and truncate them from the start if they are longer than 20 or 25 characters.\nThe final pattern looks like this:\n\u0026lt;pattern\u0026gt;%d{yyyy-MM-dd} | %d{HH:mm:ss.SSS} | %-20.20thread | %5p | %-25.25logger{25} | %12(ID: %8mdc{id}) | %m%n\u0026lt;/pattern\u0026gt; Now, if we run our logging code again, we have the output we wanted, without any overflowing columns:\n2018-08-11 | 21:31:20.436 | main | DEBUG | .s.a.f.JdkDynamicAopProxy | ID: | Creating JDK 
dynamic proxy: target source is EmptyTargetSource: no target class, static 2018-08-11 | 21:31:20.450 | ery-long-thread-name | DEBUG | i.r.l.LoggingFormatTest | ID: 14556 | This is a DEBUG message. 2018-08-11 | 21:31:20.450 | short | INFO | i.r.l.LoggingFormatTest | ID: 5456 | This is an INFO message. 2018-08-11 | 21:31:20.450 | short | ERROR | i.r.l.LoggingFormatTest | ID: 1548654 | This is an ERROR message with a very long ID. Actually, the ID column may still overflow if we provide a very high ID number for a log message. However, an ID should never be truncated, and since we\u0026rsquo;re controlling those IDs we can restrict them to a maximum number so that the column does not overflow.\nHave we Lost Information by Truncating? One might argue that we mustn\u0026rsquo;t truncate the logger or thread name since we\u0026rsquo;re losing information. But have we really lost information?\nHow often do we need the full name of a logger or a thread? These cases are very rare, I would say. Most of the time, it\u0026rsquo;s enough to see the last 20 or so characters to know enough to act upon it.\nEven if truncated, the information isn\u0026rsquo;t really lost. It\u0026rsquo;s still contained in the log events!\nIf we\u0026rsquo;re logging to a log server, the information will still be there. It has just been removed from the string representation of the log message.\nWe might configure the above logging format for local development only. 
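As a side note, the pad-and-truncate semantics of a pattern like %-20.20thread can be illustrated in plain Java. This is only a sketch of the behavior described above, not Logback's actual implementation:

```java
public class ColumnFormat {

    // Mimics a Logback conversion word like "%-20.20thread":
    // pad with spaces on the right up to the minimum width,
    // truncate from the START if the value exceeds the maximum width.
    static String column(String value, int minWidth, int maxWidth) {
        if (value.length() > maxWidth) {
            // keep the tail of the value, like Logback's ".20" truncation
            value = value.substring(value.length() - maxWidth);
        }
        // "%-20s" left-justifies and pads with spaces to the minimum width
        return String.format("%-" + minWidth + "s", value);
    }

    public static void main(String[] args) {
        System.out.println("|" + column("short", 20, 20) + "|");
        System.out.println("|" + column("very-long-thread-name", 20, 20) + "|");
    }
}
```

Running this pads "short" with spaces and turns "very-long-thread-name" into "ery-long-thread-name", exactly as in the truncated log output shown above.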
Here, a human-readable logging format is most valuable, since we\u0026rsquo;re probably logging to a file or console and not to a log server like we\u0026rsquo;re doing in production.\nConclusion Logback has to be tweaked a bit to provide a column-based logging format that allows for quick scanning, but it can be done with a little customization.\nUsing Descriptive Logger, we can easily add a unique ID to each log message for quick reference into the code.\nThe code used in this article is available on github.\n","date":"August 11, 2018","image":"https://reflectoring.io/images/stock/0031-matrix-1200x628-branded_hufb3c207f9151b804bbf7fe86cefe5814_184798_650x0_resize_q90_box.jpg","permalink":"/logging-format-logback/","title":"How to Configure a Human-Readable Logging Format with Logback and Descriptive Logger"},{"categories":["Spring Boot"],"contents":"Consumer-driven contract tests are a technique to test integration points between API providers and API consumers without the hassle of end-to-end tests (read it up in a recent blog post). A common use case for consumer-driven contract tests is testing interfaces between services in a microservice architecture. In the Java ecosystem, Spring Boot is a widely used technology for implementing microservices. Pact is a framework that facilitates consumer-driven contract tests. So let\u0026rsquo;s have a look at how to test a REST API provided by a Spring Boot application against a contract previously defined by the API consumer.\n Example Code This article is accompanied by a working code example on GitHub. In this Article Instead of testing API consumer and provider in an end-to-end manner, with consumer-driven contract tests we split up the test of our API into two parts:\n a consumer test testing against a mock provider and a provider test testing against a mock consumer  This article focuses on the provider side. 
A consumer of our API has created a contract in advance and we want to verify that the REST API provided by our Spring Boot service matches the expectations of that contract.\nIn this article we will:\n have a look at the API contract created in advance by an API consumer create a Spring MVC controller providing the desired REST API verify the controller against the contract within a JUnit test modify our test to load the contract file from a Pact Broker  For an overview of the big picture of consumer-driven contract testing, have a look at this article.\nThe Pact Since we\u0026rsquo;re using the Pact framework as facilitator for our consumer-driven contract tests, contracts are called \u0026ldquo;pacts\u0026rdquo;. We\u0026rsquo;ll use the following pact that was created by an Angular consumer in another article:\n{ \u0026#34;consumer\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;ui\u0026#34; }, \u0026#34;provider\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;userservice\u0026#34; }, \u0026#34;interactions\u0026#34;: [ { \u0026#34;description\u0026#34;: \u0026#34;a request to POST a person\u0026#34;, \u0026#34;providerState\u0026#34;: \u0026#34;provider accepts a new person\u0026#34;, \u0026#34;request\u0026#34;: { \u0026#34;method\u0026#34;: \u0026#34;POST\u0026#34;, \u0026#34;path\u0026#34;: \u0026#34;/user-service/users\u0026#34;, \u0026#34;headers\u0026#34;: { \u0026#34;Content-Type\u0026#34;: \u0026#34;application/json\u0026#34; }, \u0026#34;body\u0026#34;: { \u0026#34;firstName\u0026#34;: \u0026#34;Arthur\u0026#34;, \u0026#34;lastName\u0026#34;: \u0026#34;Dent\u0026#34; } }, \u0026#34;response\u0026#34;: { \u0026#34;status\u0026#34;: 201, \u0026#34;headers\u0026#34;: { \u0026#34;Content-Type\u0026#34;: \u0026#34;application/json\u0026#34; }, \u0026#34;body\u0026#34;: { \u0026#34;id\u0026#34;: 42 }, \u0026#34;matchingRules\u0026#34;: { \u0026#34;$.body\u0026#34;: { \u0026#34;match\u0026#34;: \u0026#34;type\u0026#34; } } } } ], 
\u0026#34;metadata\u0026#34;: { \u0026#34;pactSpecification\u0026#34;: { \u0026#34;version\u0026#34;: \u0026#34;2.0.0\u0026#34; } } } As you can see, the pact contains a single POST request to /user-service/users with a user object as payload and an associated response that is expected to have the status code 201 and should contain the ID of the created user. A request / response pair like this is called an interaction.\nThe Spring Controller It\u0026rsquo;s pretty easy to create a Spring controller that should obey that contract:\n@RestController public class UserController { private UserRepository userRepository; @Autowired public UserController(UserRepository userRepository) { this.userRepository = userRepository; } @PostMapping(path = \u0026#34;/user-service/users\u0026#34;) public ResponseEntity\u0026lt;IdObject\u0026gt; createUser(@RequestBody @Valid User user) { User savedUser = this.userRepository.save(user); return ResponseEntity .status(201) .body(new IdObject(savedUser.getId())); } } IdObject is a simple bean that has the single field id. The UserRepository is a standard Spring Data repository that saves and loads User objects to and from a database.\nThe Provider Test The controller works, we can test it by manually sending requests against it using Postman, for example. But now, we want to verify that it actually obeys the contract specified above. 
This verification should be done in every build, so doing this in a JUnit test seems a natural fit.\nPact Dependencies To create that JUnit test, we need to add the following dependencies to our project:\ndependencies { testCompile(\u0026#34;au.com.dius:pact-jvm-provider-junit5_2.12:3.5.20\u0026#34;) // Spring Boot dependencies omitted } This will transitively pull the JUnit 5 dependency as well.\nSet up the JUnit Test Next, we create a JUnit test that:\n starts up our Spring Boot application that provides the REST API (our contract provider) starts up a mock consumer that sends all requests from our pact to that API fails if the response does not match the response from the pact  @ExtendWith(SpringExtension.class) @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.DEFINED_PORT, properties = \u0026#34;server.port=8080\u0026#34;) @Provider(\u0026#34;userservice\u0026#34;) @PactFolder(\u0026#34;../pact-angular/pacts\u0026#34;) public class UserControllerProviderTest { @MockBean private UserRepository userRepository; @BeforeEach void setupTestTarget(PactVerificationContext context) { context.setTarget(new HttpTestTarget(\u0026#34;localhost\u0026#34;, 8080, \u0026#34;/\u0026#34;)); } @TestTemplate @ExtendWith(PactVerificationInvocationContextProvider.class) void pactVerificationTestTemplate(PactVerificationContext context) { context.verifyInteraction(); } @State({\u0026#34;provider accepts a new person\u0026#34;}) public void toCreatePersonState() { User user = new User(); user.setId(42L); user.setFirstName(\u0026#34;Arthur\u0026#34;); user.setLastName(\u0026#34;Dent\u0026#34;); when(userRepository.findById(eq(42L))).thenReturn(Optional.of(user)); when(userRepository.save(any(User.class))).thenReturn(user); } } The test uses the standard SpringExtension together with @SpringBootTest to start up our Spring Boot application. 
We\u0026rsquo;re configuring it to start on a fixed port 8080.\nWith @PactFolder we tell Pact where to look for pact files that serve as the base for our contract test. Note that there are other options for loading pact files such as the @PactBroker annotation.\nThe annotation @Provider(\u0026quot;userservice\u0026quot;) tells Pact that we\u0026rsquo;re testing the provider called \u0026ldquo;userservice\u0026rdquo;. Pact will automatically filter the interactions from the loaded pact files so that only those interactions with this provider are being tested.\nSince Pact creates a mock consumer for us that \u0026ldquo;replays\u0026rdquo; all requests from the pact files, it needs to know where to send those requests. In the @BeforeEach annotated method, we define the target for those requests by calling PactVerificationContext#setTarget(). This should target the Spring Boot application we started with @SpringBootTest, so the ports must match.\n@MockBean is another standard annotation from Spring Boot that - in our case - replaces the real UserRepository with a Mockito mock. We do this so that we do not have to initialize the database and any other dependencies our controller may have. With our consumer-driven contract test, we want to test that consumer and provider can talk to each other - we do not want to test the business logic behind the API. That\u0026rsquo;s what unit tests are for.\nNext, we create a method annotated with @State that puts our Spring Boot application into a defined state that is suitable to respond to the mock consumer\u0026rsquo;s requests. In our case, the pact file defines a single providerState named provider accepts a new person. 
In this method, we set up our mock repository so that it returns a suitable User object that fits the object expected in the contract.\nFinally, we make use of JUnit 5\u0026rsquo;s @TestTemplate feature in combination with PactVerificationInvocationContextProvider that allows Pact to dynamically create one test for each interaction found in the pact files. For each interaction from the pact file, context.verifyInteraction() will be called. This will automatically call the correct @State method, fire the request defined in the interaction, and verify the result against the pact.\nThe test should output something like this in the log:\nVerifying a pact between ui and userservice Given provider accepts a new person a request to POST a person returns a response which has status code 201 (OK) includes headers \u0026#34;Content-Type\u0026#34; with value \u0026#34;application/json\u0026#34; (OK) has a matching body (OK) Load the Contract from a Pact Broker Consumer-driven contracts lose their value if you have multiple versions of the same contract file in the consumer and provider codebase. We need a single source of truth for the contract files.\nFor this reason, the Pact team has developed a web application called Pact Broker which serves as a repository for pact files.\nOur test from above can be modified to load the pact file directly from a Pact Broker instead of a local folder by using the @PactBroker annotation instead of the @PactFolder annotation:\n@PactBroker(host = \u0026#34;host\u0026#34;, port = \u0026#34;80\u0026#34;, protocol = \u0026#34;https\u0026#34;, authentication = @PactBrokerAuth(username = \u0026#34;username\u0026#34;, password = \u0026#34;password\u0026#34;)) public class UserControllerProviderTest { ... } Conclusion In this article, we created a JUnit test that verified a REST API against a contract previously created by a consumer of that API. 
This test can now run in every CI build and we can sleep well knowing that consumer and provider still speak the same language.\n","date":"August 11, 2018","image":"https://reflectoring.io/images/stock/0026-signature-1200x628-branded_hua6bf2a4b7ae34ab845137fd515e2ba8a_112398_650x0_resize_q90_box.jpg","permalink":"/consumer-driven-contract-provider-pact-spring/","title":"Testing a Spring Boot REST API against a Consumer-Driven Contract with Pact"},{"categories":["Spring Boot"],"contents":"Consumer-driven contract tests are a technique to test integration points between API providers and API consumers without the hassle of end-to-end tests (read it up in a recent blog post). A common use case for consumer-driven contract tests is testing interfaces between services in a microservice architecture. In the Java ecosystem, Feign in combination with Spring Boot is a popular stack for creating API clients in a distributed architecture. Pact is a polyglot framework that facilitates consumer-driven contract tests. So let\u0026rsquo;s have a look at how to create a contract with Feign and Pact and test a Feign client against that contract.\n Example Code This article is accompanied by a working code example on GitHub. In this Article Instead of testing API consumer and provider in an end-to-end manner, with consumer-driven contract tests we split up the test of our API into two parts:\n a consumer test testing against a mock provider and a provider test testing against a mock consumer  This article focuses on the consumer side.\nIn this article we will:\n define an API contract with the Pact DSL create a client against that API with Feign verify the client against the contract within an integration test publish the contract to a Pact Broker  Define the Contract Unsurprisingly, a contract is called a \u0026ldquo;pact\u0026rdquo; within the Pact framework. In order to create a pact we need to include the pact library:\ndependencies { ... 
testCompile(\u0026#34;au.com.dius:pact-jvm-consumer-junit5_2.12:3.5.20\u0026#34;) } The pact-jvm-consumer-junit5_2.12 library is part of pact-jvm, a collection of libraries facilitating consumer-driven-contracts for various frameworks on the JVM.\nAs the name suggests, we\u0026rsquo;re generating a contract from a JUnit5 unit test.\nLet\u0026rsquo;s create a test class called UserServiceConsumerTest that is going to create a pact for us:\n@ExtendWith(PactConsumerTestExt.class) public class UserServiceConsumerTest { @Pact(provider = \u0026#34;userservice\u0026#34;, consumer = \u0026#34;userclient\u0026#34;) public RequestResponsePact createPersonPact(PactDslWithProvider builder) { // @formatter:off  return builder .given(\u0026#34;provider accepts a new person\u0026#34;) .uponReceiving(\u0026#34;a request to POST a person\u0026#34;) .path(\u0026#34;/user-service/users\u0026#34;) .method(\u0026#34;POST\u0026#34;) .willRespondWith() .status(201) .matchHeader(\u0026#34;Content-Type\u0026#34;, \u0026#34;application/json\u0026#34;) .body(new PactDslJsonBody() .integerType(\u0026#34;id\u0026#34;, 42)) .toPact(); // @formatter:on  } } This method defines a single interaction between a consumer and a provider, called a \u0026ldquo;fragment\u0026rdquo; of a pact. A test class can contain multiple such fragments which together make up a complete pact.\nThe fragment we\u0026rsquo;re defining here covers the use case of creating a new User resource.\nThe @Pact annotation tells Pact that we want to define a pact fragment. It contains the names of the consumer and the provider to uniquely identify the contract partners.\nWithin the method, we make use of the Pact DSL to create the contract. In the first two lines we describe the state the provider should be in to be able to answer this interaction (\u0026ldquo;given\u0026rdquo;) and the request the consumer sends (\u0026ldquo;uponReceiving\u0026rdquo;).\nNext, we define what the request should look like. 
In this example, we define a URI and the HTTP method POST.\nHaving defined the request, we go on to define the expected response to this request. Here, we expect HTTP status 201, the content type application/json and a JSON response body containing the id of the newly created User resource.\nNote that the test will not run yet, since we have not defined any @Test methods yet. We will do that in the section Verify the Client against the Contract.\nTip: don\u0026rsquo;t use dashes (\u0026quot;-\u0026quot;) in the names of providers and consumers because Pact will create pact files with the name \u0026ldquo;consumername-providername.json\u0026rdquo; so that a dash within either the consumer or provider name will make it less readable.\nCreate a Client against the API Before we can verify a client, we have to create it first.\nWe choose Feign as the technology to create a client against the API defined in the contract.\nWe need to add the Feign dependency to the Gradle build:\ndependencies { compile(\u0026#34;org.springframework.cloud:spring-cloud-starter-openfeign\u0026#34;) // ... other dependencies } Note that we\u0026rsquo;re not specifying a version number here, since we\u0026rsquo;re using Spring\u0026rsquo;s dependency management plugin. 
You can see the whole source of the build.gradle file in the GitHub repo.\nNext, we create the actual client and the data classes used in the API:\n@FeignClient(name = \u0026#34;userservice\u0026#34;) public interface UserClient { @RequestMapping(method = RequestMethod.POST, path = \u0026#34;/user-service/users\u0026#34;) IdObject createUser(@RequestBody User user); } public class User { private Long id; private String firstName; private String lastName; // getters / setters / constructors omitted } public class IdObject { private Long id; // getters / setters / constructors omitted } The @FeignClient annotation tells Spring Boot to create an implementation of the UserClient interface that should run against the host that is configured under the name userservice. The @RequestMapping and @RequestBody annotations specify the details of the POST request and the corresponding response defined in the contract.\nFor the Feign client to work, we need to add the @EnableFeignClients and @RibbonClient annotations to our application class and provide a configuration for Ribbon, the load balancing solution from the Netflix stack:\n@SpringBootApplication @EnableFeignClients @RibbonClient(name = \u0026#34;userservice\u0026#34;, configuration = RibbonConfiguration.class) public class ConsumerApplication { ... 
} public class RibbonConfiguration { @Bean public IRule ribbonRule(IClientConfig config) { return new RandomRule(); } } Verify the Client against the Contract Let\u0026rsquo;s go back to our JUnit test class UserServiceConsumerTest and extend it so that it verifies that the Feign client we just created actually works as defined in the contract:\n@ExtendWith(PactConsumerTestExt.class) @ExtendWith(SpringExtension.class) @PactTestFor(providerName = \u0026#34;userservice\u0026#34;, port = \u0026#34;8888\u0026#34;) @SpringBootTest({ // overriding provider address  \u0026#34;userservice.ribbon.listOfServers: localhost:8888\u0026#34; }) public class UserServiceConsumerTest { @Autowired private UserClient userClient; @Pact(provider = \u0026#34;userservice\u0026#34;, consumer = \u0026#34;userclient\u0026#34;) public RequestResponsePact createPersonPact(PactDslWithProvider builder) { ... // see code above  } @Test @PactTestFor(pactMethod = \u0026#34;createPersonPact\u0026#34;) public void verifyCreatePersonPact() { User user = new User(); user.setFirstName(\u0026#34;Zaphod\u0026#34;); user.setLastName(\u0026#34;Beeblebrox\u0026#34;); IdObject id = userClient.createUser(user); assertThat(id.getId()).isEqualTo(42); } } We start off by using the standard @SpringBootTest annotation together with the SpringExtension for JUnit 5. Important to note is that we configure the Ribbon loadbalancer so that our client sends its requests against localhost:8888.\nWith the PactConsumerTestExt together with the @PactTestFor annotation, we tell pact to start a mock API provider on localhost:8888. This mock provider will return responses according to all pact fragments from the @Pact methods within the test class.\nThe actual verification of our Feign client is implemented in the method verifyCreatePersonPact(). 
The @PactTestFor annotation defines which pact fragment we want to test (the pactMethod property must be the name of a method annotated with @Pact within the test class).\nHere, we create a User object, pass it to our Feign client and assert that the result contains the user ID we entered as an example into our pact fragment earlier.\nIf the request the client sends to the mock provider looks as defined in the pact, the corresponding response will be returned and the test will pass. If the client does something differently, the test will fail, meaning that we do not meet the contract.\nOnce the test has passed, a pact file with the name userclient-userservice.json will be created in the target/pacts folder.\nPublish the Contract to a Pact Broker The pact file created from our test now has to be made available to the provider side so that the provider can also test against the contract.\nPact provides a Gradle plugin that we can use for this purpose. Let\u0026rsquo;s include this plugin into our Gradle build:\nplugins { id \u0026#34;au.com.dius.pact\u0026#34; version \u0026#34;3.5.20\u0026#34; } pact { publish { pactDirectory = \u0026#39;target/pacts\u0026#39; pactBrokerUrl = \u0026#39;URL\u0026#39; pactBrokerUsername = \u0026#39;USERNAME\u0026#39; pactBrokerPassword = \u0026#39;PASSWORD\u0026#39; } } We can now run ./gradlew pactPublish to publish all pacts generated from our tests to the specified Pact Broker. The API provider can get the pact from there to validate his own code against the contract.\nWe can integrate this task into a CI build to automate publishing of the pacts.\nConclusion This article gave a quick tour of the consumer-side workflow of Pact. We created a contract and verified our Feign client against this contract from a JUnit test class. 
Then we published the pact to a Pact Broker that is accessible by our API provider so that he can test against the contract as well.\n","date":"August 10, 2018","image":"https://reflectoring.io/images/stock/0026-signature-1200x628-branded_hua6bf2a4b7ae34ab845137fd515e2ba8a_112398_650x0_resize_q90_box.jpg","permalink":"/consumer-driven-contract-feign-pact/","title":"Creating a Consumer-Driven Contract with Feign and Pact"},{"categories":["Software Craft"],"contents":"Have you ever had a situation where you stared at an error message in a log and wondered \u0026ldquo;how the hell is this supposed to help me?\u0026rdquo;. You probably have. And you probably cursed whoever added that particular log message to the code base. This tip provides some rules about which contextual information a log message should contain in which situation.\nA Motivating Story I once tried to deploy a Java application to one of the big cloud providers. The application was supposed to connect to a SQL database which was in the cloud as well.\nAfter a lot of uploading the application, tweaking some configuration and uploading it again, I finally got the application deployed\u0026hellip;only to find out that it couldn\u0026rsquo;t create the connection to the cloud database.\nThe root cause information I got from the log was this:\nCaused by: java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_171] at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_171] You can probably imagine my frustration. I had to guess what to change and then start the \u0026ldquo;upload and tweak\u0026rdquo; cycle once again.\nThe log didn\u0026rsquo;t answer any of my questions:\n the connection to what URL timed out? how long is the configured timeout? which component is responsible for setting the timeout? which configuration parameter can be adjusted to modify the timeout?  
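For illustration, a connect-timeout message that would have answered those questions could be assembled like this. The URL, timeout value, and parameter name below are purely hypothetical; the point is the shape of a context-rich message:

```java
public class TimeoutMessages {

    // Builds a timeout log message that names the target URL, the timeout
    // value, and the configuration parameter controlling it.
    // All names and values here are made up for illustration.
    static String connectTimeoutMessage(String url, long timeoutMillis, String configParam) {
        return String.format(
            "Connection to '%s' timed out after %dms (configured via parameter '%s').",
            url, timeoutMillis, configParam);
    }

    public static void main(String[] args) {
        System.out.println(connectTimeoutMessage(
            "jdbc:mysql://db.example.com:3306/app", 5000, "db.connect.timeout"));
    }
}
```

A message like this tells us at a glance what failed, how long we waited, and which knob to turn.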
Adding one or more of the context information snippets above would have helped me a lot in finding out how to fix the problem.\nIn order to minimize frustration with our own code, let\u0026rsquo;s have a look at some more typical cases where some contextual information can be of great help.\n\u0026ldquo;Not Found\u0026rdquo; Errors There are always cases where something is requested by a client but our application cannot serve it because it\u0026rsquo;s not there.\nIf these \u0026ldquo;not found\u0026rdquo; errors are being logged, the log should contain:\n the parameters for the search query (often, this is the ID of an entity) the type of whatever was not found.  Bad Example:\nUser not found. Good Examples:\nUser with ID \u0026#39;42\u0026#39; was not found. No Contract found for client \u0026#39;42\u0026#39;. Exceptions Logging Exceptions is a whole topic in itself, but we can identify some contextual information that should be contained when logging an exception:\n the data constellation that led to the exception the root exception (if any) that led to the exception.  Bad Example:\nRegistration failed. Good Examples:\nRegistration failed because the user name \u0026#39;superman42\u0026#39; is already taken. Registration failed due to database error: (stacktrace of root exception) Validation Errors Input data from a user or an external system should usually be validated so that our application can safely work with it.\nIf such a validation fails, it helps tremendously to add the following context information to the log output:\n the use case during which validation failed the name of the field whose validation failed the reason why validation failed the value of the field that was responsible for the validation failure.  Also, we should make sure to include all validation errors in the log and not only the first error.\nUsually, the information above should not only be logged but included in the response so that the client can directly see what went wrong. 
But even then, clients will ask questions, so we can be prepared by having set up good logging.\nBad Examples:\nValidation failed. Validation failed for field \u0026#34;name\u0026#34;. Validation failed: field must not be null. Good Examples:\nRegistration failed: field \u0026#34;name\u0026#34; must not be null. Registration failed due to the following reasons: \u0026#34;age\u0026#34; must be a number; \u0026#34;name\u0026#34; must not be null. Status Changes When an entity from our application moves from one state into another, the following information can help in a log message:\n the id of the affected entity the type of the affected entity the previous state of the entity the new state of the entity.  Bad Examples:\nStatus changed. Status changed to \u0026#34;PROCESSED\u0026#34;. Good Example:\nStatus of Job \u0026#34;42\u0026#34; changed from \u0026#34;IN_PROGRESS\u0026#34; to \u0026#34;PROCESSED\u0026#34;. Configuration Parameters On application startup it can be of great help to print out the current configuration of the application, including:\n the name of each configuration parameter the value of each configuration parameter the default fallback value if the parameter was not explicitly set.  When a configuration parameter changes during the lifetime of the application it should be logged just like a status change.\nBad Examples:\nParameter \u0026#34;waitTime\u0026#34; has not been set. Parameter \u0026#34;timeout\u0026#34; has been set. Good Examples:\nParameter \u0026#34;waitTime\u0026#34; falls back to default value \u0026#34;5\u0026#34;. Parameter \u0026#34;timeout\u0026#34; set to \u0026#34;10\u0026#34;. Method Tracing When we\u0026rsquo;re tracing method invocations it should be self-evident to provide some contextual information:\n the fully-qualified name of the method or job that is being traced the duration of the execution time, preferably in human-readable form (i.e. 
\u0026ldquo;3m 5s 354ms\u0026rdquo; instead of \u0026ldquo;185354ms\u0026rdquo;) the values of method parameters (only if they have an impact on execution time).  Note that if the trace log is automatically processed to gather statistics about execution times, it\u0026rsquo;s obviously better to log the execution time in milliseconds instead of human-readable form.\nBad Example:\nTook 543ms to finish. (Yes, I actually stumble over log messages like that in production code from time to time\u0026hellip; .)\nGood Examples:\nMethod \u0026#34;FooBar.myMethod()\u0026#34; took \u0026#34;1s 345ms\u0026#34; to finish. FooBar.myMethod() processed \u0026#34;432\u0026#34; records in \u0026#34;1s 345ms\u0026#34;. Batch Jobs Batch jobs usually process a number of records in one way or another. Adding the following contextual information to the log can help when analyzing them:\n the start time of the job the end time of the job the duration of the job the number of records that have been touched by the job the number of records that have NOT been touched by the job and the reason why (i.e. because they don\u0026rsquo;t match the filter defined by the batch job) the type of records that are being processed the processing status of the job (waiting / in progress / processed) the success status of the job (success / failure)  Bad Examples:\nBatch Job finished. Batch Job \u0026#34;SendNewsletter\u0026#34; finished in 5123ms. Good Examples:\nBatch Job \u0026#34;SendNewsletter\u0026#34; sent \u0026#34;3456\u0026#34; mails in \u0026#34;5s 123ms\u0026#34;. 324 mails were not sent due to an invalid mail address. What If the Contextual Information Is Not Passed into My Code? 
There are times when we\u0026rsquo;re actually thinking about adding contextual information to a log message but the information we would like to add is not available to us because it has not been passed into the method we\u0026rsquo;re currently working on.\nWhen the calling code is our responsibility, it\u0026rsquo;s easy to fix, since we can just change the calling code to pass on the information we want to add to the log.\nOne might argue that adding method parameters only to provide information in log messages pollutes the code. Yes, it does. But the benefit is worth it!\nEven if the calling code is outside of our own code base, we can do something: talk to the team / project that owns the code. Perhaps they will change their code accordingly.\nConclusion Adding some helpful contextual information to log messages is usually not a lot of effort, but it may even pay off to change some method signatures to pass in contextual data just for logging.\nNote that providing context information as structured data instead of just text makes it even easier for us to find the information we\u0026rsquo;re looking for.\nHave you encountered cases that are not listed in this article? I would like to hear about them and add them here!\n","date":"August 5, 2018","image":"https://reflectoring.io/images/stock/0031-matrix-1200x628-branded_hufb3c207f9151b804bbf7fe86cefe5814_184798_650x0_resize_q90_box.jpg","permalink":"/logging-context/","title":"Tip: Provide Contextual Information in Log Messages"},{"categories":["Software Craft"],"contents":"Application logs are all about finding the right information in the least amount of time. Automated log servers may help us in finding and filtering log messages. But in the end it\u0026rsquo;s us - the humans - who have to be able to interpret a log message. 
This article discusses how the format of a log message helps us and what such a format should look like.\nWhy Log Messages should be Human-Readable In our digital age, one might argue that something like acting on a message in an application log should be automated.\nBut have you ever seen a system that automatically acts on ERROR messages (other than just re-starting a service or alerting us humans that something is wrong, that is)?\nIt\u0026rsquo;s nice when log messages are machine-readable so they can be automatically parsed and processed and refined. But the ultimate goal of every automation around log messages is to prepare the data in a way that makes it easy for us to understand it.\nSo, why not format the log messages in a way that makes it easy for us in the first place?\nWhat makes a Log Message Human-Readable? A log message is human-readable in the definition of this article if the contained information can be grasped completely at a glance. We don\u0026rsquo;t want to look at a log message and first have to figure out what information it actually contains.\nLet\u0026rsquo;s consider this log excerpt:\n2018-07-29 21:10:29.178 thread-1 INFO com.example.MyService Service started in 3434 ms. 2018-07-29 21:10:29.178 main WARN some.external.Configuration parameter \u0026#39;foo\u0026#39; is missing. Using default value \u0026#39;bar\u0026#39;! 2018-07-29 21:10:29.178 scheduler ERROR com.example.jobs.ScheduledJob Scheduled job cancelled due to NullPointerException! Now, we humans are extraordinarily good at recognizing patterns. That ability is maybe even the only thing that distinguishes us from machines. After all, we have to prove our humanness by solving captchas every other day.\nBut what patterns do we see in the above log excerpt? 
We quickly grasp that each line starts with a date followed by what is probably a thread name, the logging level and then our pattern recognition fails us.\nOnly on second or third glance do we see the pattern in the rest of each message. But wouldn\u0026rsquo;t it be nice to grasp a log message at first glance?\nHere\u0026rsquo;s another example with the same content, only formatted differently:\n2018-07-29 | 21:10:29.178 | thread-1 | INFO | com.example.MyService | Service started in 3434 ms. 2018-07-29 | 21:10:29.178 | main | WARN | some.external.Configuration | Parameter \u0026#39;foo\u0026#39; is missing. Using default value \u0026#39;bar\u0026#39;! 2018-07-29 | 21:10:29.178 | scheduler | ERROR | com.example.jobs.ScheduledJob | Scheduled job cancelled due to NullPointerException! ... Stacktrace ... We can clearly distinguish the different information blocks at a glance and know in which column to look for the information we\u0026rsquo;re currently searching for.\nThat\u0026rsquo;s pattern recognition on steroids. And it even makes the log easier to process for our machines.\nWhich Information to Include Let\u0026rsquo;s keep in mind that we want to grasp a log message at first glance, so any single log message should not contain too much information.\nHere\u0026rsquo;s a list of the things that should definitely be included in any proper log message:\n  Date \u0026amp; Time should always be included in any log message. We need it to correlate it with other events.\n  If we\u0026rsquo;re building a multi-threaded application (which most of us probably do), the thread name should be included, because it allows us to quickly deduce information (e.g. \u0026ldquo;it happened in the scheduler thread, so it cannot have been triggered by an incoming user request\u0026rdquo;).\n  The logging level must be included. 
It\u0026rsquo;s simply needed to quickly sort messages into different buckets by urgency, helping us to quickly filter the data.\n  There should be some information available that tells us where the log message comes from. This is usually referred to as the \u0026ldquo;name\u0026rdquo; of a logger.\n  An even quicker way to find the code responsible for a certain log message is to include a message ID that is unique to each type of message. When we encounter such an ID in a log, we can just do a full text search for this ID in the code base and be sure that it\u0026rsquo;s the right spot.\n  Then there is the message itself, which must be included. It contains the actual information whereas the other information is simply meta-data that helps us in sorting and filtering.\n  Finally, if the log message is an error, it should contain a stack trace to help us find where the error occurred.\n  Including any more information should be well thought-out, because it hinders our ability to quickly grasp it.\nThere\u0026rsquo;s always the option to add more information to a log message that is not directly visible in its text representation (in the Java world, the mechanism used for this is called \u0026ldquo;Mapped Diagnostic Context\u0026rdquo;). This additional information may be visible at second glance in the search result of a log server, for example, but that\u0026rsquo;s a topic for another article.\nA Human-Readable Logging Format With the information above, the final log format I propose is this:\n2018-07-29 | 21:10:29.178 | thread-1 | INFO | com.example.MyService | 000425 | Service started in 3434 ms. 2018-07-29 | 21:10:29.178 | main | WARN | some.external.Configuration | | Parameter \u0026#39;foo\u0026#39; is missing. Using default value \u0026#39;bar\u0026#39;! 2018-07-29 | 21:10:29.178 | scheduler | ERROR | com.example.jobs.ScheduledJob | 000972 | Scheduled job cancelled due to NullPointerException! ... Stacktrace ...  
Each column is separated by a distinct character so it actually looks like a table. It includes a unique message id for quick reference within the code (000425 and 000972). Since third party libraries usually don\u0026rsquo;t define a message id, we still include the logger name (e.g. some.external.Configuration) to be able to correlate the log message with the code of that library.  What about Log Servers? When using a log server, a log is no longer a text file, but a stream of searchable log events each containing structured data rather than text. It might seem then that the textual structure of a log message isn\u0026rsquo;t as important anymore.\nHowever, it\u0026rsquo;s still good practice to provide a well-structured textual representation of log messages. After all, when developing locally, we usually don\u0026rsquo;t send our logs to a log server but to a local text file.\nConclusion Since we\u0026rsquo;re primed for pattern recognition, we should provide clear patterns within our log messages. 
This, and the fact that we only include the most important information, allows us to quickly grasp a message and save a lot of time analyzing logs.\nThe argument for a clearly-structured text representation of log messages loses a little weight when using a log server, but it\u0026rsquo;s still a good idea to provide a structured logging format for those cases where the logs are still being written in a file (for example local development).\nHow to Implement this Tip The following articles describe ways to implement this tip with Spring Boot:\n How to Configure a Human-Readable Logging Format with Logback and Descriptive Logger How to Configure Environment-Specific Logging Behavior with Spring Boot  ","date":"July 30, 2018","image":"https://reflectoring.io/images/stock/0008-glasses-1200x628-branded_huba44b69235be263f935d157bc3a681a4_67599_650x0_resize_q90_box.jpg","permalink":"/logging-format/","title":"Tip: Use a Human-Readable Logging Format"},{"categories":["Software Craft"],"contents":"When searching for a bug, or just trying to get a feel for an application, it helps a lot if we know which information we can expect to find in the logs. But we will only know what to expect if we have followed a convention while programming the log statements. This article describes the set of logging conventions I have found useful while programming Java applications.\nLogging Levels In this article, we will have a look at the most commonly used logging levels. By accident, these are exactly the logging levels that SLF4J provides - the de-facto standard logging framework in the java world: ERROR, WARN, INFO, DEBUG and TRACE.\nNote that other logging frameworks have even more logging levels like FATAL or FINER, but the less logging levels we use, the easier it is to follow some conventions to use them consistently. Hence, we\u0026rsquo;ll stick with the de-facto default logging levels.\nWhy Logging Levels should be used consistently Logging levels exist for a reason. 
They allow us to put a log message into one of several buckets, sorted by urgency. This in turn allows us to filter the messages of a production log by the level of urgency.\nSince urgent messages in a production log often mean that something is wrong and we\u0026rsquo;re currently losing money because of that, this filter should be very important to us!\nNow, imagine we have found the instruction manual for building one of those big Lego figures and a truckload of mixed Lego bricks in the attic. For each step in the manual we would have to sift through the bricks to find the ones we need. How much easier would it be if they were sorted into actual buckets by color?\nThe same is true for log messages. If we mix urgent log messages with informational log messages in the same bucket, we\u0026rsquo;re not going to find the messages we\u0026rsquo;re looking for because they\u0026rsquo;re drowned in others.\nLet\u0026rsquo;s have a look at when to put the log messages into which bucket.\nERROR The ERROR level should only be used when the application really is in trouble. Users are being affected without having a way to work around the issue.\nSomeone must be alerted to fix it immediately, even if it\u0026rsquo;s in the middle of the night. There must be some kind of alerting in place for ERROR log events in the production environment.\nOften, the only use for the ERROR level within a certain application is when a valuable business use case cannot be completed due to technical issues or a bug.\nTake care not to use this logging level too generously because that would add too much noise to the logs and reduce the significance of a single ERROR event. 
You wouldn\u0026rsquo;t want to be woken in the middle of the night due to something that could have waited until the next morning, would you?\nWARN The WARN level should be used when something bad happened, but the application still has the chance to heal itself or the issue can wait a day or two to be fixed.\nLike ERROR events, WARN events should be attended to by a dev or ops person, so there must be some kind of alerting in place for the production environment.\nA concrete example for a WARN message is when a system fails to connect to an external resource but will try again automatically. It might ultimately result in an ERROR log message when the retry mechanism also fails.\nThe WARN level is the level that should be active in production systems by default, so that only WARN and ERROR messages are being reported, thus saving storage capacity and improving performance.\nIf storage and performance are not a problem and our log server provides good search capabilities, we can actually report even INFO and DEBUG events and just filter them out when we\u0026rsquo;re only interested in the important stuff.\nINFO The INFO level should be used to document state changes in the application or some entity within the application.\nThis information can be helpful during development and sometimes even in production to track what is actually happening in the system.\nConcrete examples for using the INFO level are:\n the application has started with configuration parameter x having the value y a new entity (e.g. a user) has been created or changed its state the state of a certain business process (e.g. an order) has changed from \u0026ldquo;open\u0026rdquo; to \u0026ldquo;processed\u0026rdquo; a regularly scheduled batch job has finished and processed z items.  
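As a sketch, the state-change example above could look like this in code. It uses java.util.logging to keep the snippet dependency-free; with SLF4J the call would look the same. The class, entity names and message wording are illustrative, not taken from a real code base:

```java
import java.util.logging.Logger;

// Illustrative sketch: logging an entity state change on INFO level.
public class OrderService {

    private static final Logger log = Logger.getLogger(OrderService.class.getName());

    // Builds a context-rich INFO message: entity type, id, old state, new state.
    static String statusChangeMessage(String type, String id, String from, String to) {
        return String.format("Status of %s \"%s\" changed from \"%s\" to \"%s\".", type, id, from, to);
    }

    void process(String orderId) {
        // ... business logic that moves the order from "open" to "processed" ...
        log.info(statusChangeMessage("Order", orderId, "open", "processed"));
    }

    public static void main(String[] args) {
        new OrderService().process("42");
    }
}
```

Building the message in a small helper like this also keeps the wording consistent across all places that log the same kind of state change.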
DEBUG It\u0026rsquo;s harder to define what information to log on DEBUG level than defining it for the other levels.\nIn a nutshell, we want to log on DEBUG level any information that helps us identify what went wrong.\nConcrete examples for using the DEBUG level are:\n error messages when an incoming HTTP request was malformed, resulting in a 4xx HTTP status variable values in business logic.  The DEBUG level may be used more generously than the above levels, but the code should not be littered with DEBUG statements as it reduces readability and pollutes the log.\nTRACE Compared to DEBUG, it\u0026rsquo;s pretty easy to define what to log on TRACE. As the name suggests, we want to log all information that helps us to trace the processing of an incoming request through our application.\nThis includes:\n start or end of a method, possibly including the processing duration URLs of the endpoints of our application that have been called start and end of the processing of an incoming request or scheduled job.  Alert \u0026amp; Adapt Even with a convention in place, in a team of several developers we probably won\u0026rsquo;t get the logging level for all messages right the first time.\nThere will be ERROR messages that should be WARN messages because nothing is broken yet. 
And there will be errors hidden in INFO messages, giving us a false sense of security.\nThat\u0026rsquo;s why there must be some kind of alerting on the WARN and ERROR levels and someone responsible to act on it.\nEven in a pre-production environment, we want to know what is being reported on WARN and ERROR in order to be able to fix things before they go into production.\nConclusion The above conventions provide a first step towards searchable and understandable log data that allows us to quickly find the information we need in a situation where each second might cost us a lot of money.\nTo keep our conventions sharp, we should set up an alerting on WARN and ERROR messages on a test environment and act on them by either adapting our conventions or changing the level of a message.\n","date":"July 28, 2018","image":"https://reflectoring.io/images/stock/0033-colored-chalk-1200x628-branded_hu2cd38b1d9646be2974b91a3eb1a3d55d_170992_650x0_resize_q90_box.jpg","permalink":"/logging-levels/","title":"Tip: Use Logging Levels Consistently"},{"categories":["Software Craft"],"contents":"Skipping a CI build is like purposefully not brushing your teeth every morning and evening. You know it should not be skipped and you feel guilty when you do it anyway. However, there are some cases when you have only changed some supplementary files (like documentation) that have no impact whatsoever on your build pipeline and you don\u0026rsquo;t want to wait for a long-running build. Here are two ways to skip a CI build in this case.\nUsing the Commit Message [skip ci] The easiest way to skip a CI build is to add [skip ci] or [ci skip] to your commit message. 
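For example, in a throwaway repository (the commit itself is real; whether the build is actually skipped depends on the CI provider honoring the marker):

```shell
# Create a throwaway repo and commit with the skip marker in the message.
git init -q skip-ci-demo
cd skip-ci-demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Update docs [skip ci]"
# The subject line now carries the marker that CI providers look for:
git log -1 --format=%s
```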
Many CI providers support this:\n Travis CI GitLab BitBucket CircleCI  This solution has two major drawbacks, though.\nFirstly, it pollutes the git commit messages with meta information that is only relevant to the CI system and brings no value to the commit history.\nSecondly, the above commit message will cause the CI system to ignore the push / pull request completely, i.e. it will not even register in your CI history. This will cause any hooks you have installed to your build pipeline not to run.\nYou\u0026rsquo;re screwed, for example, if you have protected the branches of your GitHub repository, so that pull requests can only be merged when the CI build for the pull request has successfully run. This will never happen using [skip ci], so you can never merge the pull request\u0026hellip; .\nUsing a Git Diff in the CI Build So, we actually want the CI build to start - only to immediately exit when only non-code changes were made.\nI\u0026rsquo;ve found a nice script for Travis CI on GitHub and modified it a little:\nif ! git diff --name-only $TRAVIS_COMMIT_RANGE | grep -qvE \u0026#39;(.md$)\u0026#39; then echo \u0026#34;Only docs were updated, not running the CI.\u0026#34; exit fi This script creates a diff of all commits within the $TRAVIS_COMMIT_RANGE (i.e. within the current push or pull request) and exits the build if it only includes markdown files (*.md). We could modify the regular expression to include more than just markdown files. 
It can be included in a build, like I did in my code examples repository.\nHowever, of the big CI providers, only Travis CI currently supports the necessary \u0026ldquo;commit range\u0026rdquo; environment variable:\n GitLab doesn\u0026rsquo;t support a commit range variable, but it\u0026rsquo;s requested as a feature BitBucket doesn\u0026rsquo;t support a commit range variable, but it\u0026rsquo;s requested as a feature CircleCI doesn\u0026rsquo;t support a commit range variable, but there is a workaround  A Word of Caution Use wisely and with caution. The build should never be skipped when a file changed that has any impact on the build. So, if your markdown files are part of your build, don\u0026rsquo;t skip the build.\n","date":"June 11, 2018","image":"https://reflectoring.io/images/stock/0018-cogs-1200x628-branded_huddc0bdf9d6d0f4fdfef3c3a64a742934_149789_650x0_resize_q90_box.jpg","permalink":"/skip-ci-build/","title":"Skipping a CI Build for non-code changes"},{"categories":["Spring Boot"],"contents":"Well-behaved software consists of highly cohesive modules that are loosely coupled to other modules. Each module takes care of everything from user input in the web layer down to writing into and reading from the database.\nThis article presents a way to structure a Spring Boot application in vertical modules and discusses how to test the layers within one such module in isolation from other modules using the testing features provided by Spring Boot.\n Example Code This article is accompanied by a working code example on GitHub. Code Structure Before we can test modules and layers, we need to create them. So, let\u0026rsquo;s have a look at how the code is structured. If you want to view the code while reading, have a look at the github repository with the example code.\nThe application resides in the package io.reflectoring and consists of three vertical modules:\n The booking module is the main module. 
It provides functionality to book a flight for a certain customer and depends on the other modules. The customer module is all about managing customer data. The flight module is all about managing available flights.  Each module has its own sub-package. Within each module we have the following layers:\n The web layer contains our Spring Web MVC Controllers, resource classes and any configuration necessary to enable web access to the module. The business layer contains the business logic and workflows that make up the functionality of the module. The data layer contains our JPA entities and Spring Data repositories.  Again, each layer has its own sub-package.\nApplicationContext Structure Now that we have a clear-cut package structure, let\u0026rsquo;s look at how we structure the Spring ApplicationContext in order to represent our modules:\nIt all starts with a Spring Boot Application class:\npackage io.reflectoring; @SpringBootApplication public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } } The @SpringBootApplication annotation already takes care of loading all our classes into the ApplicationContext.\nHowever, we want our modules to be separately runnable and testable. So we create a custom configuration class annotated with @Configuration for each module to load only the slice of the application context that this module needs.\nThe BookingConfiguration imports the other two configurations since it depends on them. It also enables a @ComponentScan for Spring beans within the module package. 
It also creates an instance of BookingService to be added to the application context:\npackage io.reflectoring.booking; @Configuration @Import({CustomerConfiguration.class, FlightConfiguration.class}) @ComponentScan public class BookingConfiguration { @Bean public BookingService bookingService( BookingRepository bookingRepository, CustomerRepository customerRepository, FlightService flightService) { return new BookingService(bookingRepository, customerRepository, flightService); } } Aside from @Import and @ComponentScan, Spring Boot also offers other features for creating and loading modules.\nThe CustomerConfiguration looks similar, but it has no dependencies on other configurations. Also, it doesn\u0026rsquo;t provide any custom beans, since all beans are expected to be loaded via @ComponentScan:\npackage io.reflectoring.customer; @Configuration @ComponentScan public class CustomerConfiguration {} Let\u0026rsquo;s assume that the Flight module contains some scheduled tasks, so we enable Spring Boot\u0026rsquo;s scheduling support:\npackage io.reflectoring.flight; @Configuration @EnableScheduling @ComponentScan public class FlightConfiguration { @Bean public FlightService flightService(){ return new FlightService(); } } Note that we don\u0026rsquo;t add annotations like @EnableScheduling at application level but instead at module level to keep responsibilities sharp and to avoid any side-effects during testing.\nTesting Modules in Isolation Now that we have defined some \u0026ldquo;vertical\u0026rdquo; modules within our Spring Boot application, we want to be able to test them in isolation.\nIf we\u0026rsquo;re doing integration tests in the customer module, we don\u0026rsquo;t want them to fail because some bean in the booking module has an error. 
So, how do we load only the part of the application context that is relevant for a certain module?\nWe could use Spring\u0026rsquo;s standard @ContextConfiguration support to load only one of our module configurations above, but this way we won\u0026rsquo;t have support for Spring Boot\u0026rsquo;s test annotations like @SpringBootTest, @WebMvcTest, and @DataJpaTest, which conveniently set up an application context for integration tests.\nBy default, the test annotations mentioned above create an application context for the first @SpringBootConfiguration annotation they find from the current package upwards, which is usually the main application class, since the @SpringBootApplication annotation includes a @SpringBootConfiguration.\nSo, to narrow down the application context to a single module, we can create a test configuration for each of our modules within the test sources:\npackage io.reflectoring.booking; @SpringBootConfiguration @EnableAutoConfiguration class BookingTestConfiguration extends BookingConfiguration {} package io.reflectoring.customer; @SpringBootConfiguration @EnableAutoConfiguration class CustomerTestConfiguration extends CustomerConfiguration {} package io.reflectoring.flight; @SpringBootConfiguration @EnableAutoConfiguration class FlightTestConfiguration extends FlightConfiguration {} Each test configuration is annotated with @SpringBootConfiguration to make it discoverable by @SpringBootTest and its companions and extends the \u0026ldquo;real\u0026rdquo; configuration class to inherit its contributions to the application context. Also, each configuration is additionally annotated with @EnableAutoConfiguration to enable Spring Boot\u0026rsquo;s auto-configuration magic.\nWhy not use @SpringBootConfiguration in production code?  We could just add @SpringBootConfiguration and @EnableAutoConfiguration to our module configurations in the production code and it would still work.  
But the API docs state that we should not use more than one @SpringBootConfiguration in a single application and this one is usually inherited from the @SpringBootApplication annotation.  So, in order not to make our code incompatible with future Spring Boot versions, we take a slight detour and duplicate the module configurations in the test sources, adding the @SpringBootConfiguration annotation where it cannot hurt.  If we now create a @SpringBootTest in the customer package, for instance, only the customer module is loaded by default.\nLet\u0026rsquo;s create some integration tests to prove our test setup.\nTesting a Module\u0026rsquo;s Data Layer with @DataJpaTest Our data layer mainly contains our JPA entities and Spring Data repositories. Our testing efforts in this layer concentrate on testing the interaction between our repositories and the underlying database.\nSpring Boot provides the @DataJpaTest annotation to set up a stripped-down application context with only the beans needed for JPA, Hibernate and an embedded database.\nLet\u0026rsquo;s create a test for the data layer of our customer module:\npackage io.reflectoring.customer.data; @DataJpaTest class CustomerModuleDataLayerTests { @Autowired private CustomerRepository customerRepository; @Autowired(required = false) private BookingRepository bookingRepository; @Test void onlyCustomerRepositoryIsLoaded() { assertThat(customerRepository).isNotNull(); assertThat(bookingRepository).isNull(); } } @DataJpaTest goes up the package structure until it finds a class annotated with @SpringBootConfiguration. It finds our CustomerTestConfiguration and then adds all Spring Data repositories within that package and all sub-packages to the application context, so that we can just autowire them and run tests against them.\nThe test shows that only the CustomerRepository is loaded. The BookingRepository is in another module and not picked up in the application context. 
An error in a query within the BookingRepository will no longer cause this test to fail. We have effectively decoupled our modules in our tests.\nMy article about the @DataJpaTest annotation goes into deeper detail about which queries to test, and how to set up and populate a database schema for tests.\nTesting a Module\u0026rsquo;s Web Layer with @WebMvcTest Similar to @DataJpaTest, @WebMvcTest sets up an application context with everything we need for testing a Spring MVC controller:\npackage io.reflectoring.customer.web; @WebMvcTest class CustomerModuleWebLayerTests { @Autowired private CustomerController customerController; @Autowired(required = false) private BookingController bookingController; @Test void onlyCustomerControllerIsLoaded() { assertThat(customerController).isNotNull(); assertThat(bookingController).isNull(); } } Similar to @DataJpaTest, @WebMvcTest goes up the package structure to the first @SpringBootConfiguration it finds and uses it as the root for the application context.\nIt again finds our CustomerTestConfiguration and adds all web-related beans from the customer module. 
Web controllers from other modules are not loaded.\nIf you want to read up on details about what to test in a web layer and how to test it, have a look at my article about testing Spring MVC web controllers.\nTesting a whole Module using @SpringBootTest Instead of only creating an application context for a certain layer of one of our modules, we can create an application context for a whole module with @SpringBootTest:\npackage io.reflectoring.customer; @SpringBootTest class CustomerModuleTest { @Autowired(required = false) private BookingController bookingController; @Autowired(required = false) private BookingService bookingService; @Autowired(required = false) private BookingRepository bookingRepository; @Autowired private CustomerController customerController; @Autowired private CustomerService customerService; @Autowired private CustomerRepository customerRepository; @Test void onlyCustomerModuleIsLoaded() { assertThat(customerController).isNotNull(); assertThat(customerService).isNotNull(); assertThat(customerRepository).isNotNull(); assertThat(bookingController).isNull(); assertThat(bookingService).isNull(); assertThat(bookingRepository).isNull(); } } Again, only the beans of our customer module are loaded, this time spanning from the web layer all the way to the data layer. 
We can now happily autowire any beans from the customer module and create integration tests between them.\nWe can use @MockBean to mock beans from other modules that might be needed.\nIf you want to find out more about integration tests with Spring Boot, read my article about the @SpringBootTest annotation.\nTesting ApplicationContext Startup Even though we have now successfully modularized our Spring Boot application and our tests, we want to know if the application context still works as a whole.\nSo, a must-have test for each Spring Boot application is wiring up the whole ApplicationContext, spanning all modules, to check if all dependencies between the beans are satisfied.\nThis test actually is already included in the default sources if you create your Spring Boot application via Spring Initializr:\npackage io.reflectoring; @ExtendWith(SpringExtension.class) @SpringBootTest class ApplicationTests { @Test void applicationContextLoads() { } } As long as this test is in the base package of our application, it will not find any of our module configurations and instead load the application context for the main application class annotated with @SpringBootApplication.\nIf the application context cannot be started due to any configuration error or conflict between our modules, the test will fail.\nConclusion Using @Configuration classes in the production sources paired with @SpringBootConfiguration classes in the test sources, we can create modules within a Spring Boot application that are testable in isolation.\nYou can find the source code for this article on github.\nUpdate History  03-01-2019: Refactored the article in order to make it compatible with Spring Boot API docs stating that we should have only one @SpringBootConfiguration per application. Also removed testing basics and instead linked to other articles.  
","date":"May 27, 2018","image":"https://reflectoring.io/images/stock/0034-layers-1200x628-branded_hub6558a87dc13e82db207cec5ec3c5de9_214458_650x0_resize_q90_box.jpg","permalink":"/testing-verticals-and-layers-spring-boot/","title":"Structuring and Testing Modules and Layers with Spring Boot"},{"categories":["Software Craft"],"contents":"A microservice architecture is all about communication. How should services communicate in any given business scenario? Should they call each other synchronously? Or should they communicate via asynchronous messaging? As always, this is not a black-or-white decision. This article discusses some prominent communication patterns.\nSynchronous Calls Probably the easiest communication pattern to implement is simply calling another service synchronously, usually via REST.\nService 1 calls Service 2 and waits until Service 2 is done processing the request and returns a response. Service 1 can then process Service 2\u0026rsquo;s response in the same transaction that triggered the communication.\nThis pattern is easy to grasp since we are doing it all the time in any web application out there. It\u0026rsquo;s also well supported by technology. Netflix has open-sourced support for synchronous communication in the form of the Feign and Hystrix libraries.\nLet\u0026rsquo;s discuss some pros and cons.\nTimeouts What if Service 2 takes very long to process Service 1\u0026rsquo;s request and Service 1 is tired of waiting? Service 1 will then probably have some sort of timeout exception and roll back the current transaction. However, Service 2 doesn\u0026rsquo;t know that Service 1 rolled back the transaction and might process the request after all, perhaps resulting in inconsistent data between the two services.\nStrong Coupling Naturally, synchronous communication creates a strong coupling between the services. Service 1 cannot work without Service 2 being available. 
To mitigate this, we have to work around communication failures by implementing retry and/or fallback mechanisms. Luckily, we have Hystrix, enabling us to do exactly this. However, retries and fallbacks only go so far and might not cover all business requirements.\nEasy to Implement Hey, it\u0026rsquo;s synchronous communication! We\u0026rsquo;ve all done it before. And thus we can do it again easily. Let\u0026rsquo;s just get the latest version of our favorite HTTP client library and implement it. It\u0026rsquo;s easy as pie (as long as we don\u0026rsquo;t have to think about retries and fallbacks, that is).\nSimple Messaging Asynchronous messaging is the next option we take a look at.\nService 1 fires a message to a message broker and forgets about it. Service 2 subscribes to a topic and is fed with all messages belonging to that topic. The services don\u0026rsquo;t need to know each other at all; they just need to know that there are messages of a certain type with a certain payload.\nLet\u0026rsquo;s discuss messaging.\nAutomatic Retry Depending on the message broker, we get a retry mechanism for free. If Service 2 is currently not available, the message broker will try to deliver the message again until Service 2 finally gets it. \u0026ldquo;Guaranteed Delivery\u0026rdquo; is the magic keyword.\nLoose Coupling Along the same lines, messaging makes the services loosely coupled since Service 2 doesn\u0026rsquo;t need to be available at the time Service 1 sends the message.\nMessage Broker must not fail Using a message broker, we just introduced a piece of central infrastructure that is needed by all services that want to communicate asynchronously. If it fails, hell will break loose (and all services cease functioning).\nPipeline contains Schema It\u0026rsquo;s worth noting that messages (even if they are JSON) define a certain schema within the message broker. 
If the format of a message changes (and the change is not backwards compatible), then all messages of that type must have been processed by all subscribers before the new service versions can be deployed.\nThis contradicts independent deployments, one of the main goals of microservices. This can be mitigated by only allowing backwards compatible changes to message formats (which may not always be possible).\nTwo-Phase Commit Another caveat is that we usually send messages as part of our business logic and the business logic is usually bound to a database transaction. If the database transaction rolls back, a message may have already been sent to the message broker.\nThis can be addressed by implementing two-phase commit between the database transaction and the message broker. However, two-phase commit may not be supported by the database or the message broker and even if it is, it\u0026rsquo;s often a pain to get working and even more so to test reliably.\nTransactional Messaging We can modify the simple messaging scenario from above to gain some benefits.\nInstead of sending a message directly to the message broker, we now store it in the service\u0026rsquo;s database first. Same on the receiving side: here the message gets stored into the receiver service\u0026rsquo;s database before it is processed.\nNo Need for Two-Phase Commit Since we\u0026rsquo;re writing the message to a local database table we can use the same transaction that our business logic uses. If the business logic fails, the transaction is rolled back and so is our message. We cannot accidentally send messages any more when our local transaction has been rolled back.\nMessage Broker may Fail Since we\u0026rsquo;re storing our messages in the local database on the sending and the receiving side, the message broker may fail anytime and the system will magically heal itself once it\u0026rsquo;s back online. 
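The sending side of this transactional messaging approach can be sketched roughly as follows. This is only an illustration, not a real persistence layer: an in-memory list stands in for the message table, and the MessageBroker interface and all names are made up. The key idea is that the polling job leaves undelivered messages in the table and simply retries them on its next run:

```java
import java.util.ArrayList;
import java.util.List;

public class OutboxRelay {

  // The broker may be unreachable; trySend reports whether delivery worked.
  interface MessageBroker {
    boolean trySend(String payload);
  }

  static class OutboxMessage {
    final String payload;
    boolean processed = false;
    OutboxMessage(String payload) { this.payload = payload; }
  }

  // Stands in for the message table in the service's own database.
  final List<OutboxMessage> outboxTable = new ArrayList<>();

  // Called from within the business transaction: it only writes to the
  // local table, so a rollback of the transaction also takes the message back.
  void saveForSending(String payload) {
    outboxTable.add(new OutboxMessage(payload));
  }

  // The polling job: hands unprocessed messages to the broker. A message
  // that could not be delivered stays unprocessed and is retried on the
  // next poll - this is how the system "heals" after a broker outage.
  int relay(MessageBroker broker) {
    int sent = 0;
    for (OutboxMessage message : outboxTable) {
      if (!message.processed && broker.trySend(message.payload)) {
        message.processed = true;
        sent++;
      }
    }
    return sent;
  }

  public static void main(String[] args) {
    OutboxRelay outbox = new OutboxRelay();
    outbox.saveForSending("orderShipped:4711");
    System.out.println("broker down, sent: " + outbox.relay(payload -> false));
    System.out.println("broker up, sent: " + outbox.relay(payload -> true));
  }
}
```

In a real system the table lives in the same database as the business data, and the broker may deliver a message more than once, so receivers should process messages idempotently.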
We can just send the messages again from our message database table.\nComplex Setup The above perks aren\u0026rsquo;t free, of course. The setup is quite complex, since we need to store the messages in the database of the sending and receiving services. Also, we need to implement jobs on both sides that poll the database, looking for unprocessed messages and then process them by\n sending them to the message broker (on the sending side) or calling the business logic that processes the message (on the receiving side)  Zero-Payload Events The last scenario is similar to the messaging example, but we\u0026rsquo;re not sending whole messages (i.e. big JSON objects) but instead only a pointer to the payload.\nIn this case, the message is more like an event. It signals that something happened, for example that \u0026ldquo;the order with ID 4711 has been shipped\u0026rdquo;. Thus, the message itself only contains the type of the event (\u0026ldquo;orderShipped\u0026rdquo;) and the ID of the order (4711). If Service 2 is interested in the \u0026ldquo;orderShipped\u0026rdquo; event, it can then synchronously call Service 1 and ask for the order data.\nDumb Pipe This scenario takes most of the message structure out of the message broker, making it a dumber pipe (as is desirable in a microservice architecture). We don\u0026rsquo;t have to think that much about maintaining backwards compatibility within the message structure anymore, since we have almost no message structure. Note, however, that the little message structure we have left should still change in a backwards compatible fashion between two releases.\nCombinable with Transactional Messaging Combining the zero-payload approach with the transactional messaging approach from above, we gain all the benefits of not needing two-phase commit and of having a retry mechanism even when the message broker fails. 
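A minimal sketch of such a zero-payload event and its consumer-side handling might look like this. All types here are hypothetical, and the synchronous call back to Service 1 is reduced to a simple callback interface (in practice this would be an HTTP client such as Feign):

```java
public class ZeroPayloadEvent {

  final String eventType; // e.g. "orderShipped"
  final long orderId;     // the pointer to the payload, e.g. 4711

  ZeroPayloadEvent(String eventType, long orderId) {
    this.eventType = eventType;
    this.orderId = orderId;
  }

  // Hypothetical synchronous client back to Service 1 that owns the data.
  interface OrderServiceClient {
    String fetchOrder(long orderId);
  }

  // Service 2's handler: only if it cares about the event type does it
  // call back to Service 1 to fetch the actual order data.
  static String handle(ZeroPayloadEvent event, OrderServiceClient orderService) {
    if ("orderShipped".equals(event.eventType)) {
      return orderService.fetchOrder(event.orderId);
    }
    return null; // not interested in this event type
  }

  public static void main(String[] args) {
    ZeroPayloadEvent event = new ZeroPayloadEvent("orderShipped", 4711L);
    System.out.println(handle(event, orderId -> "order data for " + orderId));
  }
}
```

Since the event carries only a type and an ID, there is hardly any message schema left to keep backwards compatible, which is exactly the "dumb pipe" property described above.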
This adds even more complexity to the solution though, since we now also have to implement synchronous calls between the services to get the event payloads.\nWhen to use which Approach? As mentioned in the introduction, there is no black-and-white decision between the communication patterns described above. However, let\u0026rsquo;s try to find some indications on when we might use which approach.\nWe might want to use Synchronous Calls if:\n we want to query some data, because a query is not changing any state so we don\u0026rsquo;t have to worry about distributed transactions and data consistency across service boundaries the call is allowed to fail and we don\u0026rsquo;t need a sophisticated retry mechanism  We might want to use Simple Messaging if:\n we want to send state-changing commands the operation must be performed eventually, even if it fails the first couple times we don\u0026rsquo;t care about potentially complex message structure  We might want to use Transactional Messaging if:\n we want to send state-changing commands only when the local database transaction has been successful two-phase commit is not an option we don\u0026rsquo;t trust the message broker (actually, better look for one you trust)  We might want to use Zero Payload Events if:\n we want to send state-changing commands we would otherwise have a very complex message structure that is hard to maintain in a backwards-compatible way  ","date":"May 14, 2018","image":"https://reflectoring.io/images/stock/0035-switchboard-1200x628-branded_hu8b558f13f0313494c9155ce4fc356d65_235224_650x0_resize_q90_box.jpg","permalink":"/microservice-communication-patterns/","title":"Microservice Communication Patterns"},{"categories":["Spring Boot"],"contents":"Every software project comes to a point where the code should be broken up into modules. These may be modules within a single code base or modules that each live in their own code base. 
This article explains some Spring Boot features that help to split up your Spring Boot application into several modules.\n Example Code This article is accompanied by a working code example on GitHub. What\u0026rsquo;s a Module in Spring Boot? A module in the sense of this article is a set of Spring components loaded into the application context.\nA module can be a business module, providing some business services to the application or a technical module that provides cross-cutting concerns to several other modules or to the whole of the application.\nThe modules discussed in this article are part of the same monolithic codebase. To better enforce module boundaries, we could split up that monolithic codebase into multiple build modules with Maven or Gradle, if we so wish.\nOptions for Creating Modules The base for a Spring Module is a @Configuration-annotated class along the lines of Spring\u0026rsquo;s Java configuration feature.\nThere are several ways to define what beans should be loaded by such a configuration class.\n@ComponentScan The easiest way to create a module is using the @ComponentScan annotation on a configuration class:\n@Configuration @ComponentScan(basePackages = \u0026#34;io.reflectoring.booking\u0026#34;) public class BookingModuleConfiguration { } If this configuration class is picked up by one of the importing mechanisms (explained later), it will look through all classes in the package io.reflectoring.booking and load an instance of each class that is annotated with one of Spring\u0026rsquo;s stereotype annotations into the application context.\nThis way is fine as long as you always want to load all classes of a package and its sub-packages into the application context. 
If you need more control on what to load, read on.\n@Bean Definitions Spring\u0026rsquo;s Java configuration feature also brings the @Bean annotation for creating beans that are loaded into the application context:\n@Configuration public class BookingModuleConfiguration { @Bean public BookingService bookingService(){ return new BookingService(); } // potentially more @Bean definitions ...  } When this configuration class is imported, a BookingService instance will be created and inserted into the application context.\nUsing this way to create a module gives a clearer picture of what beans are actually loaded, since you have a single place to look at - in contrast to using @ComponentScan where you have to look at the stereotype annotations of all classes in the package to see what\u0026rsquo;s going on.\n@ConditionalOn... Annotations If you need even more fine-grained control over which components should be loaded into the application context, you can make use of Spring Boot\u0026rsquo;s @ConditionalOn... annotations:\n@Configuration @ConditionalOnProperty(name = \u0026#34;io.reflectoring.security.enabled\u0026#34;, havingValue = \u0026#34;true\u0026#34;, matchIfMissing = true) public class SecurityModuleConfiguration { // @Bean definitions ... } Setting the property io.reflectoring.security.enabled to false will now disable this module completely.\nThere are other @ConditionalOn... annotations you can use to define conditions for loading a module. These include a condition depending on the version of the JVM and the existence of a certain class in the classpath or a certain bean in the application context.\nIf you ever asked yourself how Spring Boot magically loads exactly the beans your application needs into the application context, this is how. Spring Boot itself makes heavy use of the @ConditionalOn... 
annotations.\nOptions for Importing Modules Having created a module, we need to import it into the application.\n@Import The most straightforward way is to use the @Import annotation:\n@SpringBootApplication @Import(BookingModuleConfiguration.class) public class ModularApplication { // ... } This will import the BookingModuleConfiguration class and all beans that come with it - no matter whether they are declared by @ComponentScan or @Bean annotations.\n@Enable... Annotations Spring Boot brings a set of annotations that each import a certain module by themselves. An example is @EnableScheduling, which imports all beans necessary for the scheduling subsystem and its @Scheduled annotation to work.\nWe can make use of this ourselves by defining our own @EnableBookingModule annotation:\n@Retention(RetentionPolicy.RUNTIME) @Target({ElementType.TYPE}) @Documented @Import(BookingModuleConfiguration.class) @Configuration public @interface EnableBookingModule { } The annotation is used like this:\n@SpringBootApplication @EnableBookingModule public class ModularApplication { // ... } The @EnableBookingModule annotation is actually just a wrapper around an @Import annotation that imports our BookingModuleConfiguration as before. 
However, if we have a module consisting of more than one configuration, this is a convenient and expressive way to aggregate these configurations into a single module.\nAuto-Configuration If we want to load a module automatically instead of hard-wiring the import into the source code, we can make use of Spring Boot\u0026rsquo;s auto-configuration feature.\nTo enable a module for auto-configuration, put the file META-INF/spring.factories into the classpath:\norg.springframework.boot.autoconfigure.EnableAutoConfiguration=\\ io.reflectoring.security.SecurityModuleConfiguration This would import the SecurityModuleConfiguration class and all its beans into the application context.\nAn auto-configuration is especially handy if we\u0026rsquo;re building a cross-cutting concern to be used in many Spring Boot applications. In this case, we can even build a separate starter module around the configuration.\nConfiguring a Module With the @ConfigurationProperties annotation, Spring Boot provides first-class support for binding external configuration parameters to a Spring bean in a type-safe manner.\nWhen to use which Import Strategy? This article presented the major options for creating and importing modules in a Spring Boot application. But when should we use which of those options?\nUse @Import for Business Modules For modules that contain business logic - like the BookingModuleConfiguration from the code snippets above - a static import with the @Import annotation should suffice in most cases. 
It usually does not make sense to not load a business module, so we do not need any control over the conditions under which it is loaded.\nNote that even if a module is always loaded, it still has a right to exist as a module, since being a module enables it to live in its own package or even its own JAR file.\nUse Auto-Configuration for Technical Modules Technical modules, on the other hand - like the SecurityModuleConfiguration from above - usually provide some cross-cutting concerns like logging, exception handling, authorization or monitoring features which the application can very well live without.\nEspecially during development, these features may not be desired at all, so we want to have a way to disable them.\nAlso, we do not want to import each technical module statically with @Import, since they should not really have any impact on our code.\nSo, the best option for importing technical modules is the auto-configuration feature. The modules are loaded silently in the background and we can influence them outside of the code with properties.\n","date":"February 6, 2018","image":"https://reflectoring.io/images/stock/0037-rubics-cube-1200x628-branded_hu03a5df656ff1515247429fdb6b332135_160363_650x0_resize_q90_box.jpg","permalink":"/spring-boot-modules/","title":"Modularizing a Spring Boot Application"},{"categories":["Software Craft"],"contents":"In previous articles, I discussed how to publish snapshots to oss.jfrog.org and how to publish releases to Bintray using Gradle as a build tool. While this is very helpful already, you can get better exposure for your release by publishing it to the JCenter and/or Maven Central repositories because those are widely known and supported by build tools. This article explains how to publish a release from your Bintray repository to JCenter and Maven Central.\n Example Code This article is accompanied by a working code example on GitHub. JCenter vs. 
Maven Central Before we go into the details of publishing to JCenter and Maven Central, let\u0026rsquo;s discuss the difference between the two. Both are publicly available Maven repositories that host releases of open source libraries.\nMaven Central is operated by Sonatype, the company behind the Nexus software that is widely used to host Maven repositories (and hosts Maven Central itself).\nJCenter is operated by JFrog, the company that created Bintray and Artifactory (which is used to host JCenter). JCenter is younger than Maven Central, giving it an edge in terms of user experience and simple workflows because its developers had more time to learn.\nSince JCenter is a mirror of Maven Central that contains everything Maven Central contains plus some extras, you could simply include JCenter in your build tools and get access to all releases you could wish for. However, Maven Central is still more widely known and supported out-of-the-box in more build tools, so you might want to publish your release in both repositories.\nIn the following, we will discuss the steps necessary to synchronize a repository on Bintray with JCenter and Maven Central so that all releases to that repository are automatically published to both public repositories. If you haven\u0026rsquo;t uploaded your release to Bintray yet, read this article, which explains the necessary steps.\nPublish to JCenter Syncing a Bintray repository to JCenter is easy as pie. Simply go to your package in the Bintray UI and click the button \u0026ldquo;Add to JCenter\u0026rdquo;. In the dialog you can also check the checkbox \u0026ldquo;host my snapshot artifacts on oss.jfrog.org\u0026rdquo; to be able to publish snapshots (more on snapshots here).\nSubmit the form and wait until you get a response. This may take a working day or so, since the approval is a manual process. Then, you\u0026rsquo;ll find a response in your inbox on Bintray and you\u0026rsquo;re ready to publish to JCenter. 
Every time you publish an artifact to Bintray, it will automatically be mirrored to JCenter without anything else to do.\nTo publish manually, click the \u0026ldquo;Publish\u0026rdquo; link shown below after you uploaded some files.\nTo publish automatically from a Gradle build, add the publish flag to the bintray configuration:\nbintray { ... pkg { ... } publish = true } Publish to Maven Central Syncing with Maven Central requires a little more effort. Here\u0026rsquo;s what to do:\nSet up a Sonatype Account Maven Central is hosted on a Nexus instance that requires a login to publish releases. Thus, you need to register and request the group name you want to publish your artifacts under. This guide explains the necessary steps. There is a manual process involved on Sonatype\u0026rsquo;s side so be patient :).\nLink Your Bintray Account with Your Sonatype Account Next, you can add your Sonatype credentials to your Bintray Account under \u0026ldquo;Edit Profile -\u0026gt; Accounts\u0026rdquo;.\nIf you\u0026rsquo;re not comfortable with trusting your Sonatype credentials to Bintray, you can also enter the credentials each time you want to sync your repository in the step \u0026ldquo;Sync with Maven Central\u0026rdquo;.\nNext, we need to sign our artifacts, since that is a requirement for all artifacts published on Maven Central.\nSign with Bintray\u0026rsquo;s Key The easy way to sign your artifacts is to let Bintray do the work. Simply check \u0026ldquo;GPG sign uploaded files using Bintray\u0026rsquo;s public/private key pair.\u0026rdquo; in the settings of your Bintray repository. 
Done.\nSign with your own Key If you want to sign your artifacts with your own key, you first need to create a GPG key pair and add the public and private keys to your Bintray profile under \u0026ldquo;Edit Profile -\u0026gt; GPG Signing\u0026rdquo;.\nAdditionally, we need to add the gpg closure to the Bintray gradle plugin so that when gradle publishes artifacts to Bintray, they are automatically signed with the private key associated to your Bintray profile:\nbintray { ... pkg { ... version { ... gpg { sign = true } } } publish = true } For a full example have a look at my diffparser project.\nNote that the key pair you upload to your Bintray profile should be a special key pair for exactly the purpose of publishing your artifacts through Bintray. You\u0026rsquo;re giving away your private key, after all, so you don\u0026rsquo;t want it to be a key that is also used for something else.\nAgain, if you don\u0026rsquo;t feel comfortable with providing a private key to Bintray, you can use a Gradle plugin like the Signing Plugin to create the signatures from the Gradle build on your machine or your CI server (however, then you still have to provide the private key to the CI server, which probably is not much better\u0026hellip;).\nSync with Maven Central Once the above steps are taken, navigate to the package you want to publish in the Bintray UI. Open the \u0026ldquo;Maven Central\u0026rdquo; tab and click on \u0026ldquo;Sync\u0026rdquo;. You may have to wait a couple minutes and then the Bintray UI shows if the syncing was successful. Note that you have to hit this button manually each time you want to release a new version to Maven Central.\nConclusion This article discussed the steps necessary to sync a Bintray package to JCenter and Maven Central to get the best exposure for your open source releases. 
Syncing to JCenter is easier than syncing to Maven Central, but to get even more exposure, it might still be worth it to take the steps to also publish to Maven Central.\n","date":"January 24, 2018","image":"https://reflectoring.io/images/stock/0038-package-1200x628-branded_hu7e104c3cc9032be3d32f9334823f6efc_80797_650x0_resize_q90_box.jpg","permalink":"/bintray-jcenter-maven-central/","title":"Publishing Open Source Releases to JCenter and Maven Central"},{"categories":["Spring Boot"],"contents":"Consumer-driven contract tests are a technique to test integration points between API providers and API consumers without the hassle of end-to-end tests (read it up in a recent blog post). A common use case for consumer-driven contract tests is testing interfaces between services in a microservice architecture. In the Java ecosystem, Spring Boot is a widely used technology for implementing microservices. Spring Cloud Contract is a framework that facilitates consumer-driven contract tests. So let\u0026rsquo;s have a look at how to verify a Spring Boot REST client against a contract with Spring Cloud Contract.\n Example Code This article is accompanied by a working code example on GitHub. 
In this Article Instead of testing API consumer and provider in an end-to-end manner, with consumer-driven contract tests we split up the test of our API into two parts:\n a consumer test testing against a mock provider and a provider test testing against a mock consumer  This article focuses on the consumer side.\nIn this article we will:\n define an API contract with Spring Cloud Contract\u0026rsquo;s DSL create a client against that API with Feign publish the contract to the API provider generate a provider stub against which we can verify our consumer code verify the consumer against the stub locally verify the consumer against the stub online  Define the Contract With Spring Cloud Contract, contracts are defined with a Groovy DSL:\npackage userservice import org.springframework.cloud.contract.spec.Contract Contract.make { description(\u0026#34;When a POST request with a User is made, the created user\u0026#39;s ID is returned\u0026#34;) request { method \u0026#39;POST\u0026#39; url \u0026#39;/user-service/users\u0026#39; body( firstName: \u0026#34;Arthur\u0026#34;, lastName: \u0026#34;Dent\u0026#34; ) headers { contentType(applicationJson()) } } response { status 201 body( id: 42 ) headers { contentType(applicationJson()) } } } The above contract defines an HTTP POST request to /user-service/users with a user object as body that is supposed to save that user to the database and should be answered with HTTP status 201 and the id of the newly created user.\nWe\u0026rsquo;ll store the contract in a file called shouldSaveUser.groovy for later usage.\nThe details of the DSL can be looked up in the Spring Cloud Contract Reference.\nCreate a Client against the API We choose Feign as the technology to create a client against the API defined in the contract.\nWe need to add the Feign dependency to the Gradle build:\ndependencies { compile(\u0026#34;org.springframework.cloud:spring-cloud-starter-openfeign:2.0.1.RELEASE\u0026#34;) // ... 
other dependencies } Next, we create the actual client and the data classes used in the API:\n@FeignClient(name = \u0026#34;userservice\u0026#34;) public interface UserClient { @RequestMapping(method = RequestMethod.POST, path = \u0026#34;/user-service/users\u0026#34;) IdObject createUser(@RequestBody User user); } public class User { private Long id; private String firstName; private String lastName; // getters / setters / constructors omitted } public class IdObject { private long id; // getters / setters / constructors omitted } The @FeignClient annotation tells Spring Boot to create an implementation of the UserClient interface that should run against the host that is configured under the name userservice. The @RequestMapping and @RequestBody annotations specify the details of the POST request and the corresponding response defined in the contract.\nPublish the Contract to the Provider The next thing we - as the API consumer - want to do is to verify that our client code works exactly as the contract specifies. For this verification, Spring Cloud Contract provides a Stub Runner that takes a contract as input and provides a runtime stub against which we can run our consumer code.\nThat stub is created via the Spring Cloud Contract Gradle plugin on the provider side. Thus, we need to make the contract available to the provider.\nSo, we simply clone the provider codebase and put the contract into the file src/test/resources/contracts/userservice/shouldSaveUser.groovy in the provider codebase and push it as a pull request for the provider team to take up.\nNote that although we\u0026rsquo;re still acting as the consumer of the API, in this step and the next, we\u0026rsquo;re editing the provider\u0026rsquo;s codebase!\nGenerate a Provider Stub Next, we want to generate the stub against which we can verify our consumer code. For this, the Spring Cloud Contract Verifier Gradle plugin has to be set up in the provider build. 
You can read up on this setup in this article about the provider side.\nIn addition to the setup from the article above, in order to publish the stub into a Maven repository, we need to add the maven-publish plugin to the build.gradle:\napply plugin: \u0026#39;maven-publish\u0026#39; We want to control the groupId, version and artifactId of the stub so that we can later use these coordinates to load the stub from the Maven repository. For this, we add some information to build.gradle:\ngroup = \u0026#39;io.reflectoring\u0026#39; version = \u0026#39;1.0.0\u0026#39; The artifactId can be set up in settings.gradle (unless you\u0026rsquo;re OK with it being the name of the project directory, which is the default):\nrootProject.name = \u0026#39;user-service\u0026#39; Then, we run ./gradlew publishToMavenLocal, which should create and publish the artifact io.reflectoring:user-service:1.0.0-stubs to the local Maven repository on our machine. If you\u0026rsquo;re interested in what this artifact looks like, look into the file build/libs/user-service-1.0.0-stubs.jar. Basically, it contains a JSON representation of the contract that can be used as input for a stub that can act as the API provider.\nVerify the Consumer Code Locally After the trip to the provider\u0026rsquo;s code base, let\u0026rsquo;s get back to our own code base (i.e. the consumer code base). Now that we have the stub in our local Maven repository, we can use the Stub Runner to verify that our consumer code works as the contract expects.\nFor this, we need to add the Stub Runner as a dependency to the Gradle build:\ndependencies { testCompile(\u0026#34;org.springframework.cloud:spring-cloud-starter-contract-stub-runner:2.0.1.RELEASE\u0026#34;) // ... 
other dependencies } With the Stub Runner in place, we create an integration test for our consumer code:\n@RunWith(SpringRunner.class) @SpringBootTest @AutoConfigureStubRunner( ids = \u0026#34;io.reflectoring:user-service:+:stubs:6565\u0026#34;, stubsMode = StubRunnerProperties.StubsMode.LOCAL) public class UserClientTest { @Autowired private UserClient userClient; @Test public void createUserCompliesToContract() { User user = new User(); user.setFirstName(\u0026#34;Arthur\u0026#34;); user.setLastName(\u0026#34;Dent\u0026#34;); IdObject id = userClient.createUser(user); assertThat(id.getId()).isEqualTo(42L); } } With the @AutoConfigureStubRunner annotation we tell the Stub Runner to load the Maven artifact with\n the groupId io.reflectoring, the artifactId user-service, of the newest version (+) and with the stubs qualifier  from a Maven repository, extract the contract from it and pass it into the Stub Runner who then acts as the API provider on port 6565.\nThe stubsMode is set to LOCAL meaning that the artifact should be resolved against the local Maven repository on our machine for now. And since we have published the stub to our local Maven repository, it should resolve just fine.\nWhen running the test, you may run into the following exception:\ncom.netflix.client.ClientException: Load balancer does not have available server for client: userservice This is because we need to tell the Stub Runner which Maven artifact it is supposed to be used as a stub for which service. 
Since our Feign client runs against the service named userservice and our artifact has the artifactId user-service (with \u0026ldquo;-\u0026rdquo;), we need to add the following config to our application.yml:\nstubrunner: idsToServiceIds: user-service: userservice Verify the Consumer Code Online Verifying the consumer code against a stub in our local Maven repository is all well and good, but once we push the consumer code to the CI, the build will fail because the stub is not available in an online Maven repository.\nThus, we have to wait until the provider team is finished with implementing the contract and the provider code is pushed to the CI. The provider build pipeline should be configured to automatically publish the stub to an online Maven repository like a Nexus or Artifactory installation.\nOnce the provider build has passed the CI build pipeline, we can adapt our test and set the stubsMode to REMOTE so that the stub will be loaded from our Nexus or Artifactory server:\n@AutoConfigureStubRunner( ids = \u0026#34;io.reflectoring:user-service:+:stubs:6565\u0026#34;, stubsMode = StubRunnerProperties.StubsMode.REMOTE) public class UserClientTest { //... } In order for the Stub Runner to find the online Maven repository, we need to tell it where to look in the application.yml:\nstubrunner: repositoryRoot: http://path.to.repo/repo-name Now, we can push the consumer code and be certain that the consumer and provider are compatible with each other.\nConclusion This article gave a quick tour of the consumer-side workflow of Spring Cloud Contract. We created a Feign client and verified it against a provider stub created from a contract. The workflow requires good communication between the consumer and provider teams, but that is the nature of integration tests. 
Once the workflow is understood by all team members, it lets us sleep well at night since it protects us from syntactical API issues between consumer and provider.\n","date":"January 18, 2018","image":"https://reflectoring.io/images/stock/0025-signature-1200x628-branded_hu40d5255a109b1d14ac3f4eab2daeb887_126452_650x0_resize_q90_box.jpg","permalink":"/consumer-driven-contract-consumer-spring-cloud-contract/","title":"Testing a Spring Boot REST API Consumer against a Contract with Spring Cloud Contract"},{"categories":["Spring Boot"],"contents":"A few months ago I was asked to find a solution for starting and stopping a Spring Boot application under Windows automatically together with the computer this application was running on. After doing some research I found a nicely fitting open source solution in WinSW.\nAs you can read on the GitHub page of WinSW, it \u0026ldquo;is an executable binary, which can be used to wrap and manage a custom process as a Windows service\u0026rdquo;. This Windows service can be used to automatically start/stop your application on computer startup/shutdown. After downloading the binary (you can find it here) you have to perform the following simple steps to install your own custom Windows service.\nStep 1: Name the Service First you take the downloaded winsw-2.1.2-bin.exe file and rename it to the name of your service. In this example I will call this MyCustomService.exe.\nStep 2: Configure the Service Next, you have to create a new MyCustomService.xml file and place it right next to the executable (it is mandatory that the file name is the same). This XML file holds all the configuration for your custom Windows service. 
It could look like the following example:\n\u0026lt;service\u0026gt; \u0026lt;id\u0026gt;MyCustomService\u0026lt;/id\u0026gt; \u0026lt;!-- must be unique --\u0026gt; \u0026lt;name\u0026gt;MyCustomService\u0026lt;/name\u0026gt; \u0026lt;description\u0026gt;This service runs my custom service.\u0026lt;/description\u0026gt; \u0026lt;executable\u0026gt;java\u0026lt;/executable\u0026gt; \u0026lt;arguments\u0026gt;-jar \u0026#34;%BASE%\\myCustomService.jar\u0026#34;\u0026lt;/arguments\u0026gt; \u0026lt;logpath\u0026gt;%BASE%\\log\u0026lt;/logpath\u0026gt; \u0026lt;log mode=\u0026#34;roll-by-time\u0026#34;\u0026gt; \u0026lt;pattern\u0026gt;yyyyMMdd\u0026lt;/pattern\u0026gt; \u0026lt;download from=\u0026#34;http://www.example.de/spring-application/myCustomService.jar\u0026#34; to=\u0026#34;%BASE%\\myCustomService.jar\u0026#34; auth=\u0026#34;basic\u0026#34; unsecureAuth=\u0026#34;true\u0026#34; user=\u0026#34;aUser\u0026#34; password=\u0026#34;aPassw0rd\u0026#34;/\u0026gt; \u0026lt;/log\u0026gt; \u0026lt;/service\u0026gt; This configuration basically tells the Windows service to:\n Download the jar file from the given URL and place it in the current folder Execute the downloaded jar by running the command java -jar myCustomService.jar Save all logs into the log folder (for more details about logging click here)  Step 3: Install the Service To finally install the service as a Windows service, open your command line in the current folder and execute MyCustomService.exe install. After the installation you can directly test your service by executing MyCustomService.exe test. Now you can manage this service like any other default Windows service. To make it start automatically, navigate to your Windows services, select the newly installed service and set the Startup type to Automatic.\nConclusion As seen in this short example, WinSW can be used not only for executing Java programs automatically on Windows startup but also for updating your programs automatically. 
In case you need to update this jar file on multiple Windows clients this can be a pretty neat feature, because you only have to replace the jar hosted on http://www.example.de/spring-application/myCustomService.jar and restart the computers.\n","date":"January 14, 2018","image":"https://reflectoring.io/images/stock/0039-start-1200x628-branded_hu0e786b71aef533dc2d1f5d8371554774_82130_650x0_resize_q90_box.jpg","permalink":"/autostart-with-winsw/","title":"Autostart for your Spring Boot Application"},{"categories":["Java"],"contents":"\u0026ldquo;Release early, release often\u0026rdquo;. This philosophy should be a goal for every software project. Users can only give quality feedback when they have early access to a software release. And they can only give feedback to new features and fixes if they have access to the latest version. Releasing often is a major pain when the release process is not automated. This article is a guide to a fully automated release chain that is able to publish snapshots and releases from a Github repository using Gradle, Bintray and Travis CI.\n Example Code This article is accompanied by a working code example on GitHub. The Release Chain The following image shows the release chain we\u0026rsquo;re going to build.\nIn a simplified Git Flow fashion, we have two branches in our Git repository:\nThe master branch contains the current state of work. 
Here, all features and bugfixes currently being developed come together.\nThe release branch contains only those versions of the codebase that are to be released.\nAdditionally, there may be optional feature branches in which some features are developed in isolation.\nHere\u0026rsquo;s what we\u0026rsquo;re going to automate:\nEach time someone pushes a commit to the master branch (or merges a feature branch into master), a snapshot will be published by our CI pipeline so that users can test the current state of work at any time.\nEach time someone pushes a commit to the release branch, a stable release will be published by our CI pipeline so that users can work with the stable version.\nNaturally, a snapshot or release will only be published if all tests have run successfully.\nPrerequisites To create an automated release chain as described in this article, we need to create a Bintray account and set up a Gradle build as described in my previous articles:\n Publishing Open Source Releases with Gradle Publishing Open Source Snapshots with Gradle  Once the build.gradle file is set up as described in those articles, we\u0026rsquo;re ready to configure Travis CI to do the publishing work for us automatically.\nConfigure Travis CI To enable Travis CI, we need to create an account on https://about.travis-ci.com and link it to our Github account.\nActivate Once logged into the Travis account, we activate Travis CI for the repository we want to publish snapshots and releases for:\nSet Environment Variables In the settings of the repository on Travis CI, we now set the environment variables BINTRAY_KEY and BINTRAY_USER to our Bintray credentials:\nThe .travis.yml File Next, we need to put a file called .travis.yml into the codebase and push it to Github. 
This file contains all configuration for the CI build.\nLet\u0026rsquo;s look at the contents of this file.\nBasic Setup language: java install: true sudo: false addons: apt: packages: - oracle-java8-installer before_install: - chmod +x gradlew With the language property, we tell Travis that it\u0026rsquo;s a Java project.\ninstall: true tells Travis that we want to take care of running the Gradle build ourselves (otherwise Travis runs gradlew assemble before each build stage).\nWe tell Travis to install the oracle-java8-installer that in turn takes care of installing the most current Java 8 JDK.\nThe last line makes the gradlew file executable so that Travis can run it.\nDeclare Build Stages In the next section of .travis.yml, we\u0026rsquo;re making use of Travis CI\u0026rsquo;s build stages feature to divide our build into several steps.\nstages: - name: build - name: snapshot if: branch = master - name: release if: branch = release The build stage is going to run the gradle build and check if everything compiles and all tests are running.\nThe snapshot stage is responsible for publishing a snapshot release and thus should only run on the master branch.\nThe release stage is responsible for publishing a stable release and thus should only run on the release branch.\nDefine Build Jobs The last thing left to do now is to configure the actual jobs that should run within the build stages we declared above:\njobs: include: - stage: build script: ./gradlew build - stage: snapshot script: ./gradlew artifactoryPublish -x test -Dsnapshot=true -Dbintray.user=$BINTRAY_USER -Dbintray.key=$BINTRAY_KEY -Dbuild.number=$TRAVIS_BUILD_NUMBER - stage: release script: ./gradlew bintrayUpload -x test -Dbintray.user=$BINTRAY_USER -Dbintray.key=$BINTRAY_KEY -Dbuild.number=$TRAVIS_BUILD_NUMBER In the build stage we\u0026rsquo;re simply running our Gradle build. 
If this stage fails, the other stages will not be started at all.\nIn the snapshot stage, we\u0026rsquo;re running the artifactoryPublish task that takes care of publishing the current build as a snapshot to oss.jfrog.org. The details of the Gradle configuration are explained here. We pass on the environment variables BINTRAY_USER, BINTRAY_KEY and TRAVIS_BUILD_NUMBER, so that the Gradle script can make use of them.\nIn the release stage, we\u0026rsquo;re running the bintrayUpload task that takes care of publishing a stable release to Bintray, again passing in the necessary environment variables. The details of the Gradle configuration are explained here.\nWhat now? And that\u0026rsquo;s it. All in all this is a pretty straightforward way to publish open source Java projects with Gradle, Bintray and Travis CI.\nYou can tailor the process to your project as needed. Especially in projects maintaining multiple versions at the same time you might have to move toward a more complex branching strategy more like the original Git Flow. In this case, you would have to add more branches from which snapshots and releases should be published to the Travis configuration.\n","date":"December 29, 2017","image":"https://reflectoring.io/images/stock/0038-package-1200x628-branded_hu7e104c3cc9032be3d32f9334823f6efc_80797_650x0_resize_q90_box.jpg","permalink":"/fully-automated-open-source-release-chain/","title":"A Fully Automated Open Source Release Chain with Gradle and Travis CI"},{"categories":["Java"],"contents":"One of the most fulfilling things in developing an open source project is getting feedback from the users of your project. To give feedback, the users need to have something to play around with. So, to get the most up-to-date feedback possible, you might want to give your users access to the current (unstable) development version of your project - often called a \u0026ldquo;snapshot\u0026rdquo;. 
This article shows how to publish snapshots of your Java projects to oss.jfrog.org and how your users can access those snapshots from their own projects.\n Example Code This article is accompanied by a working code example on GitHub. oss.jfrog.org vs. Bintray Before we start, a couple of words on oss.jfrog.org. It\u0026rsquo;s the place we\u0026rsquo;re going to publish our snapshots to: an instance of Artifactory, an artifact repository application by JFrog. If you know Nexus, it\u0026rsquo;s similar, allowing you to automatically deploy and serve artifacts of different types. In my opinion, however, Artifactory is easier to handle and integrate into your development cycle.\nSo what distinguishes oss.jfrog.org from Bintray, which is another product of JFrog? As said above, oss.jfrog.org is an installation of Artifactory, which is an application you can also buy and install on-premise to set up your own local artifact repository. Also, oss.jfrog.org is obviously intended for hosting open source software only.\nBintray, on the other hand, is a \u0026ldquo;cloud service\u0026rdquo; which offers high-volume delivery of files, using CDNs and stuff like that. Thus, Bintray is more focused on delivering content, while oss.jfrog.org is more focused on providing support during the development of a project. The difference between Artifactory and Bintray is also explained in this Stackoverflow answer.\nWith the focus of oss.jfrog.org and Bintray clear, we choose oss.jfrog.org to host our snapshots and Bintray - with its automatic sync to the JCenter and Maven Central repositories - to host our stable releases.\nSet up a Bintray Repository To be able to publish snapshots to oss.jfrog.org, you need to set up a repository on Bintray first. 
To do that, follow the steps from another article in this series:\n Create a Bintray Account Create a Repository Obtain your API Key  Activate your Snapshot Repository Having set up a Bintray account, you now need to create a repository on oss.jfrog.org where you want to put your snapshots. You can do this by clicking on \u0026ldquo;add to JCenter\u0026rdquo; on the homepage of your Bintray package (see image below) and then providing a group id under which you want to publish your snapshots.\nIf you have already added your repository to JCenter, you can still activate the snapshot repository by clicking \u0026ldquo;stage snapshots on oss.jfrog.org\u0026rdquo; (see image below).\nIt takes from a couple of hours up to a day or so for the JFrog people to check your request and activate your snapshot repository. You can check if it\u0026rsquo;s available by browsing the Artifact Repository on oss.jfrog.org. If there is an entry within oss-snapshot-local with the namespace you requested, you\u0026rsquo;re good to go.\nSet up your build.gradle Now that the target repository for our snapshots is available, you can go on to create a script that deploys your snapshots there.\nIn order to create the desired artifacts, follow these steps from another article:\n Set up your build.gradle Build Sources and Javadoc Artifacts Define what to publish  Then, add the artifactory plugin like so:\nplugins { id \u0026#34;com.jfrog.artifactory\u0026#34; version \u0026#34;4.5.4\u0026#34; } If you want to create snapshots, you will probably want to have a version number like 1.0.1-SNAPSHOT. And you don\u0026rsquo;t really want to manually remove and add the -SNAPSHOT part each time you make a release. So, we allow passing in a system property called snapshot. If it has the value true, Gradle automatically adds the snapshot suffix:\nversion = \u0026#39;1.0.1\u0026#39; + (Boolean.valueOf(System.getProperty(\u0026#34;snapshot\u0026#34;)) ? 
\u0026#34;-SNAPSHOT\u0026#34; : \u0026#34;\u0026#34;) Next, we add the information for publishing to oss.jfrog.org.\nartifactory { contextUrl = \u0026#39;http://oss.jfrog.org\u0026#39; publish { repository { repoKey = \u0026#39;oss-snapshot-local\u0026#39; username = System.getProperty(\u0026#39;bintray.user\u0026#39;) password = System.getProperty(\u0026#39;bintray.key\u0026#39;) } defaults { publications(\u0026#39;mavenPublication\u0026#39;) publishArtifacts = true publishPom = true } } resolve { repoKey = \u0026#39;jcenter\u0026#39; } clientConfig.info.setBuildNumber(System.getProperty(\u0026#39;build.number\u0026#39;)) } Important to note here is the repoKey which should contain oss-snapshot-local. The username is your bintray username and the password is your bintray API key. To define what to publish, we reference the mavenPublication defined earlier in the step Define what to publish. In the clientConfig section, we add a build number, which is read from a system property. This makes it easy for CI systems to later provide that build number to our script.\nPublish a Snapshot Once everything is set up, you can publish a snapshot with the following Gradle command:\n./gradlew artifactoryPublish -Dsnapshot=true -Dbintray.user=$BINTRAY_USER -Dbintray.key=$BINTRAY_KEY -Dbuild.number=$BUILD_NUMBER where $BINTRAY_USER, $BINTRAY_KEY and $BUILD_NUMBER are replaced by their respective values. You should get an output like this:\n:artifactoryPublish Deploying artifact: http://oss.jfrog.org/oss-snapshot-local/.../...-1.0.1-SNAPSHOT-javadoc.jar Deploying artifact: http://oss.jfrog.org/oss-snapshot-local/.../...-1.0.1-SNAPSHOT-sources.jar Deploying artifact: http://oss.jfrog.org/oss-snapshot-local/.../...-1.0.1-SNAPSHOT.jar Deploying artifact: http://oss.jfrog.org/oss-snapshot-local/.../...-1.0.1-SNAPSHOT.pom Deploying build descriptor to: http://oss.jfrog.org/api/build Build successfully deployed. 
Browse it in Artifactory under http://oss.jfrog.org/webapp/builds/.../$BUILD_NUMBER Access a Snapshot You can now tell the users of your project that they can access the latest snapshot version like this:\nrepositories { maven { url \u0026#39;https://oss.jfrog.org/artifactory/oss-snapshot-local\u0026#39; } } dependencies { compile(\u0026#39;group.id:myAwesomeLib:1.0.1-SNAPSHOT\u0026#39;) } Also, you can access a specific snapshot version like this:\nrepositories { maven { url \u0026#39;https://oss.jfrog.org/artifactory/oss-snapshot-local\u0026#39; } } dependencies { compile(\u0026#39;group.id:myAwesomeLib:1.0.1-20171220.200812-2\u0026#39;) } You can find out which specific versions are available by browsing the artifacts on oss.jfrog.org.\nWhat next? There comes a time when a version is complete and you want to release the real thing. Then, you might want to follow the guide to publishing stable releases to bintray. When this is all set up, you might want to have a CI tool create snapshots and releases automatically, which is covered in this blog post.\n","date":"December 20, 2017","image":"https://reflectoring.io/images/stock/0038-package-1200x628-branded_hu7e104c3cc9032be3d32f9334823f6efc_80797_650x0_resize_q90_box.jpg","permalink":"/publish-snapshots-with-gradle/","title":"Publishing Open Source Snapshots with Gradle"},{"categories":["programming"],"contents":"Consumer-driven contract tests are a technique to test integration points between API providers and API consumers without the hassle of end-to-end tests (read it up in a recent blog post). A common use case for consumer-driven contract tests is testing interfaces between services in a microservice architecture. However, another interesting use case is testing interfaces between the user client and those services. 
With Angular being a widely adopted user client framework and Pact being a polyglot contract framework that allows consumer and provider to be written in different languages, this article takes a look at how to create a contract from an Angular client that consumes a REST API.\n Example Code This article is accompanied by a working code example on GitHub. The Big Picture The big picture of Consumer-Driven Contract tests is shown in the figure below.\nInstead of testing consumer and provider in an end-to-end manner, which requires a complex server environment, we split the test of our API into two parts: a consumer test and a provider test. Each of these tests runs against a mock of the interface counterpart instead of against the real thing, in order to reduce complexity and gain some other advantages.\nThe consumer mock and provider mock both have access to a contract that specifies a set of valid request / response pairs (also called \u0026ldquo;interactions\u0026rdquo;) so that they are able to verify the requests and responses of the real consumer and provider.\nIn this Article This article focuses on the consumer side. Our consumer is an Angular application that accesses some remote REST API. The provider of this API is of no concern to us yet, since the API contract is created from the consumer side (hence \u0026ldquo;consumer-driven\u0026rdquo;). Stay tuned for an upcoming blog post that tests a Spring Boot API provider against the contract we\u0026rsquo;re creating here.\nWhat we will do in this article:\n create an Angular service accessing a REST API create a contract for that REST API from an Angular test verify that the Angular service obeys the contract publish the contract on a Pact Broker so it can later be accessed by the API provider  A prerequisite for this article is an Angular app skeleton created with Angular CLI. 
If you don\u0026rsquo;t want to create one yourself, clone the code repository.\nThe API Consumer: UserService The API we want to create a contract for is an API to create a user resource. The consumer of this API is an Angular service called UserService living in the file user.service.ts:\n@Injectable() export class UserService { private BASE_URL = \u0026#39;/user-service/users\u0026#39;; constructor(private httpClient: HttpClient) { } create(resource: User): Observable\u0026lt;number\u0026gt; { return this.httpClient .post(this.BASE_URL, resource) .map(data =\u0026gt; data[\u0026#39;id\u0026#39;]); } } UserService uses the Angular HttpClient to send a POST request containing a User JSON object to the URI /user-service/users. The response is expected to contain an id field containing the ID of the newly created user.\nPact Dependencies In order to get Pact up and running in our Angular tests, we need to include the following dependencies as devDependencies in the package.json file:\n\u0026#34;devDependencies\u0026#34;: { ... \u0026#34;@pact-foundation/pact-node\u0026#34;: \u0026#34;6.5.0\u0026#34;, \u0026#34;@pact-foundation/karma-pact\u0026#34;: \u0026#34;2.1.3\u0026#34;, \u0026#34;@pact-foundation/pact-web\u0026#34;: \u0026#34;5.3.0\u0026#34; } pact-node is a wrapper around the original Ruby implementation of Pact that, among other things, allows us to run a mock provider and create contract files - or \u0026ldquo;pacts\u0026rdquo;, as they are called when using Pact - from Javascript code.\nkarma-pact is a plugin for the Karma test runner framework that launches a mock provider via pact-node before running the actual tests.\npact-web (also called PactJS) is a Javascript library that provides an API to define contract fragments by listing request / response pairs (\u0026ldquo;interactions\u0026rdquo;) and sending them to a pact-node mock server. 
This enables us to implement consumer-driven contract tests from our Angular tests.\nConfigure Karma Before starting into our test, we need to configure Karma to start up a mock provider each time we start a test run. For this, we add the following lines to karma.conf.js:\nmodule.exports = function (config) { config.set({ // ... other configurations  pact: [{ cors: true, port: 1234, consumer: \u0026#34;ui\u0026#34;, provider: \u0026#34;userservice\u0026#34;, dir: \u0026#34;pacts\u0026#34;, spec: 2 }], proxies: { \u0026#39;/user-service/\u0026#39;: \u0026#39;http://127.0.0.1:1234/user-service/\u0026#39; } }); }; Basically, we only tell the karma-pact plugin some information like on which port to start the mock server. Additionally, I found that it\u0026rsquo;s necessary to add the proxies configuration. In the case above, we tell Karma to redirect all requests coming from within our tests and pointing to a URL starting with /user-service/ to port 1234, which is our mock provider. This way, we can be sure that the requests our UserService sends during the test will be received by the mock provider.\nSet up the Pact Test Now, we\u0026rsquo;re ready to set up a test that defines a contract and verifies our UserService against this contract. We name the file user.service.pact.spec.ts to make clear that it\u0026rsquo;s a Pact test. 
You can find the whole file in the demo repository.\nTo start off, we need to import the usual suspects from the Angular test framework, as well as our own files and the Pact files:\nimport {TestBed} from \u0026#39;@angular/core/testing\u0026#39;; import {HttpClientModule} from \u0026#39;@angular/common/http\u0026#39;; import {UserService} from \u0026#39;./user.service\u0026#39;; import {User} from \u0026#39;./user\u0026#39;; import {PactWeb, Matchers} from \u0026#39;@pact-foundation/pact-web\u0026#39;; Next, in the beforeAll() function, we create a provider object that can then be used by all test cases defined in the test file.\nbeforeAll(function (done) { provider = new PactWeb({ consumer: \u0026#39;ui\u0026#39;, provider: \u0026#39;userservice\u0026#39;, port: 1234, host: \u0026#39;127.0.0.1\u0026#39;, }); // required for slower CI environments  setTimeout(done, 2000); // Required if run with `singleRun: false`  provider.removeInteractions(); }); The provider object connects to the mock server we configured in karma.conf.js so take care that consumer, provider and port are the same as in the Karma config. Via this provider object, we can later add interactions (i.e. request/response pairs that define the API contract) to the mock server. To make sure that no interactions from a previous test run linger in the mock server, we call removeInteractions().\nFinally, in the afterAll() function we call provider.finalize(), which tells the mock server to write all currently available interactions into a contract file.\nafterAll(function (done) { provider.finalize() .then(function () { done(); }, function (err) { done.fail(err); }); }); Create a Pact Now to the test. 
The following code shows how to add an interaction to a contract and then verify that the requests our UserService sends are valid according to this contract.\ndescribe(\u0026#39;create()\u0026#39;, () =\u0026gt; { const expectedUser: User = { firstName: \u0026#39;Arthur\u0026#39;, lastName: \u0026#39;Dent\u0026#39; }; const createdUserId = 42; beforeAll((done) =\u0026gt; { provider.addInteraction({ state: `provider accepts a new person`, uponReceiving: \u0026#39;a request to POST a person\u0026#39;, withRequest: { method: \u0026#39;POST\u0026#39;, path: \u0026#39;/user-service/users\u0026#39;, body: expectedUser, headers: { \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39; } }, willRespondWith: { status: 201, body: Matchers.somethingLike({ id: createdUserId }), headers: { \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39; } } }).then(done, error =\u0026gt; done.fail(error)); }); it(\u0026#39;should create a Person\u0026#39;, (done) =\u0026gt; { const userService: UserService = TestBed.get(UserService); userService.create(expectedUser).subscribe(response =\u0026gt; { expect(response).toEqual(createdUserId); done(); }, error =\u0026gt; { done.fail(error); }); }); }); By calling provider.addInteraction() we send a request / response pair to the mock server. This request / response pair is then considered to be part of the API contract. Since the UserService is the consumer of that API, we\u0026rsquo;re creating a real consumer-driven contract here.\nIn the test (within the it() function), we then call userService.create() to send a real request to the mock server. The mock server checks this request against all interactions it has received before. If it finds an interaction with that request, it returns the response associated with it. If it does not find a matching interaction, the test fails. 
Thus, if the test passes, we have verified that UserService follows the rules of the contract fragment we created above.\nThe Pact After provider.finalize() has been called, i.e. when all tests have finished, the mock server creates a pact file from all interactions that it has been fed during the test run. A pact file is simply a JSON structure that contains the request / response pairs and some metadata.\n{ \u0026#34;consumer\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;ui\u0026#34; }, \u0026#34;provider\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;userservice\u0026#34; }, \u0026#34;interactions\u0026#34;: [ { \u0026#34;description\u0026#34;: \u0026#34;a request to POST a person\u0026#34;, \u0026#34;providerState\u0026#34;: \u0026#34;provider accepts a new person\u0026#34;, \u0026#34;request\u0026#34;: { \u0026#34;method\u0026#34;: \u0026#34;POST\u0026#34;, \u0026#34;path\u0026#34;: \u0026#34;/user-service/users\u0026#34;, \u0026#34;headers\u0026#34;: { \u0026#34;Content-Type\u0026#34;: \u0026#34;application/json\u0026#34; }, \u0026#34;body\u0026#34;: { \u0026#34;firstName\u0026#34;: \u0026#34;Arthur\u0026#34;, \u0026#34;lastName\u0026#34;: \u0026#34;Dent\u0026#34; } }, \u0026#34;response\u0026#34;: { \u0026#34;status\u0026#34;: 201, \u0026#34;headers\u0026#34;: { \u0026#34;Content-Type\u0026#34;: \u0026#34;application/json\u0026#34; }, \u0026#34;body\u0026#34;: { \u0026#34;id\u0026#34;: 42 }, \u0026#34;matchingRules\u0026#34;: { \u0026#34;$.body\u0026#34;: { \u0026#34;match\u0026#34;: \u0026#34;type\u0026#34; } } } } ], \u0026#34;metadata\u0026#34;: { \u0026#34;pactSpecification\u0026#34;: { \u0026#34;version\u0026#34;: \u0026#34;2.0.0\u0026#34; } } } Bonus: Publish the Pact on a Pact Broker We just created a pact from an Angular test that tests an API consumer! But what about the provider of that API? The developers of that API provider will need the pact in order to build the correct API.\nThus, we should publish the pact somehow. 
For this, you can set up a Pact Broker, which acts as a repository for pacts. Here\u0026rsquo;s a script that publishes all pact files within a folder to a Pact Broker.\nlet projectFolder = __dirname; let pact = require(\u0026#39;@pact-foundation/pact-node\u0026#39;); let project = require(\u0026#39;./package.json\u0026#39;); let options = { pactFilesOrDirs: [projectFolder + \u0026#39;/pacts\u0026#39;], pactBroker: \u0026#39;https://your.pact.broker.url\u0026#39;, consumerVersion: project.version, tags: [\u0026#39;latest\u0026#39;], pactBrokerUsername: \u0026#39;YOUR_PACT_BROKER_USER\u0026#39;, pactBrokerPassword: \u0026#39;YOUR_PACT_BROKER_PASS\u0026#39; }; pact.publishPacts(options).then(function () { console.log(\u0026#34;Pacts successfully published!\u0026#34;); }); You can integrate this script into your npm build by adding it to the scripts section of your package.json:\n\u0026#34;scripts\u0026#34;: { ... \u0026#34;publish-pacts\u0026#34;: \u0026#34;node publish-pacts.js\u0026#34; } The script can then be executed by running npm run publish-pacts either from your machine or from your CI build to publish the pacts every time the tests run successfully.\nWrap Up In this article, we created an API contract and verified that our Angular service (i.e. the API consumer) abides by that contract, all from within an Angular test. This article has not covered the provider side yet. 
In an upcoming blog post, we\u0026rsquo;ll have a look at how to create an API provider with Spring Boot and how to test that provider against the contract we just created.\n","date":"December 10, 2017","image":"https://reflectoring.io/images/stock/0029-contract-1200x628-branded_hu7a19ccad5c11568ad8f2270ae968f76d_151831_650x0_resize_q90_box.jpg","permalink":"/consumer-driven-contracts-with-angular-and-pact/","title":"Creating a Consumer-Driven Contract with Angular and Pact"},{"categories":["Java"],"contents":"When working on an open source Java project, you always come to the point where you want to share your work with the developer community (at least that should be the goal). In the Java world this is usually done by publishing your artifacts to a publicly accessible Maven repository. This article gives a step-by-step guide on how to publish your artifacts to your own Maven Repository on Bintray.\n Example Code This article is accompanied by a working code example on GitHub. Bintray vs. Maven Central You might be asking why you should publish your artifacts to a custom repository and not to Maven Central, because Maven Central is THE Maven repository that is used by default in most Maven and Gradle builds and thus is much more accessible. The reason for this is that you can play around with your publishing routine in your own repository first and THEN publish it to Maven Central from there (or JCenter, for that matter, which is another well-known Maven repository). 
Publishing from your own Bintray repository to Maven Central is supported by Bintray, but will be covered in a follow-up article.\nAnother reason for uploading to Bintray and not to Maven Central is that you still have control over your files even after uploading and publishing your files whereas in Maven Central you lose all control after publishing (however, you should be careful with editing already-published files!).\nCreate a Bintray Account To publish artifacts on Bintray, you naturally need an account there. I\u0026rsquo;m not going to describe how to do that since if you\u0026rsquo;re reading this article you should possess the skills to sign up on a website by yourself :).\nCreate a Repository Next, you need to create a repository. A repository on Bintray is actually just a smart file host. When creating the repository, make sure that you select the type \u0026ldquo;Maven\u0026rdquo; so Bintray knows that it\u0026rsquo;s supposed to handle the artifacts we\u0026rsquo;re going to upload as Maven artifacts.\nObtain your API key When signed in on Bintray, go to the \u0026ldquo;edit profile\u0026rdquo; page and click on \u0026ldquo;API Key\u0026rdquo; in the menu. 
You will be shown your API key which we need later in the Gradle scripts to automatically upload your artifacts.\nSet up your build.gradle In your build.gradle set up some basics:\nplugins { id \u0026#34;com.jfrog.bintray\u0026#34; version \u0026#34;1.7.3\u0026#34; id \u0026#34;maven-publish\u0026#34; id \u0026#34;java\u0026#34; } buildscript { repositories { mavenLocal() mavenCentral() jcenter() } } repositories { mavenLocal() mavenCentral() jcenter() } version = \u0026#39;1.0.0\u0026#39; The important parts are the bintray plugin and the maven-publish plugin.\nThe two repositories closures simply list the Maven repositories to be searched for our project\u0026rsquo;s dependencies and have nothing to do with publishing our artifacts.\nBuild Sources and Javadoc Artifacts When publishing an open source projects, you will want to publish a JAR containing the sources and another JAR containing the javadoc together with your normal JAR. This helps developers using your project since IDEs support downloading those JARs and displaying the sources directly in the editor. Also, providing sources and javadoc is a requirement for publishing on Maven Central, so we can as well do it now.\nAdd the following lines to your build.gradle:\ntask sourcesJar(type: Jar, dependsOn: classes) { classifier = \u0026#39;sources\u0026#39; from sourceSets.main.allSource } javadoc.failOnError = false task javadocJar(type: Jar, dependsOn: javadoc) { classifier = \u0026#39;javadoc\u0026#39; from javadoc.destinationDir } artifacts { archives sourcesJar archives javadocJar } A note on javadoc.failOnError = false: by default, the javadoc task will fail on things like empty paragraphs (\u0026lt;/p\u0026gt;) which can be very annoying. All IDEs and tools support them, but the javadoc generator still fails. 
Feel free to keep this check and fix all your Javadoc \u0026ldquo;errors\u0026rdquo;, if you feel masochistic today, though :).\nDefine what to publish Next, we want to define what artifacts we actually want to publish and provide some metadata on them.\ndef pomConfig = { licenses { license { name \u0026#34;The Apache Software License, Version 2.0\u0026#34; url \u0026#34;http://www.apache.org/licenses/LICENSE-2.0.txt\u0026#34; distribution \u0026#34;repo\u0026#34; } } developers { developer { id \u0026#34;thombergs\u0026#34; name \u0026#34;Tom Hombergs\u0026#34; email \u0026#34;tom.hombergs@gmail.com\u0026#34; } } scm { url \u0026#34;https://github.com/thombergs/myAwesomeLib\u0026#34; } } publishing { publications { mavenPublication(MavenPublication) { from components.java artifact sourcesJar { classifier \u0026#34;sources\u0026#34; } artifact javadocJar { classifier \u0026#34;javadoc\u0026#34; } groupId \u0026#39;io.reflectoring\u0026#39; artifactId \u0026#39;myAwesomeLib\u0026#39; version \u0026#39;1.0.0\u0026#39; pom.withXml { def root = asNode() root.appendNode(\u0026#39;description\u0026#39;, \u0026#39;An AWESOME lib. Really!\u0026#39;) root.appendNode(\u0026#39;name\u0026#39;, \u0026#39;My Awesome Lib\u0026#39;) root.appendNode(\u0026#39;url\u0026#39;, \u0026#39;https://github.com/thombergs/myAwesomeLib\u0026#39;) root.children().last() + pomConfig } } } } In the pomConfig variable, we simply provide some metadata that is put into the pom.xml when publishing. The interesting part is the publishing closure which is provided by the maven-publish plugin we applied before. Here, we define a publication called mavenPublication (choose your own name if you wish). This publication should contain the default JAR file (components.java) as well as the sources and the javadoc JARs. Also, we provide the Maven coordinates and add the information from pomConfig above.\nProvide Bintray-specific Information Finally, the part where the action is. 
Add the following to your build.gradle to enable the publishing to Bintray:\nbintray { user = System.getProperty(\u0026#39;bintray.user\u0026#39;) key = System.getProperty(\u0026#39;bintray.key\u0026#39;) publications = [\u0026#39;mavenPublication\u0026#39;] pkg { repo = \u0026#39;myAwesomeLib\u0026#39; name = \u0026#39;myAwesomeLib\u0026#39; userOrg = \u0026#39;reflectoring\u0026#39; licenses = [\u0026#39;Apache-2.0\u0026#39;] vcsUrl = \u0026#39;https://github.com/thombergs/my-awesome-lib.git\u0026#39; version { name = \u0026#39;1.0.0\u0026#39; desc = \u0026#39;1.0.0\u0026#39; released = new Date() } } } The user and key are read from system properties so that you don\u0026rsquo;t have to add them in your script for everyone to read. You can later pass those properties via command line.\nIn the next line, we reference the mavenPublication we defined earlier, thus giving the bintray plugin (almost) all the information it needs to publish our artifacts.\nIn the pkg closure, we define some additional information for the Bintray \u0026ldquo;package\u0026rdquo;. A package in Bintray is actually nothing more than a \u0026ldquo;folder\u0026rdquo; within your repository which you can use to structure your artifacts. For example, if you have a multi-module build and want to publish a couple of them into the same repository, you could create a package for each of them.\nUpload! You can run the build and upload the artifacts on Bintray by running\n./gradlew bintrayUpload -Dbintray.user=\u0026lt;YOUR_USER_NAME\u0026gt; -Dbintray.key=\u0026lt;YOUR_API_KEY\u0026gt; Publish! The files have now been uploaded to Bintray, but by default they have not been published to the Maven repository yet. You can do this manually for each new version on the Bintray site. 
Going to the site, you should see a notice like this:\nClick on publish and your files should be published for real and be publicly accessible.\nAlternatively, you can set up the bintray plugin to publish the files automatically after uploading, by setting publish = true. For a complete list of the plugin options have a look at the plugin DSL.\nAccess your Artifacts from a Gradle Build Once the artifacts are published for real you can add them as dependencies in a Gradle build. You just need to add your Bintray Maven repository to the repositories. In the case of the example above, the following would have to be added:\nrepositories { maven { url \u0026#34;https://dl.bintray.com/thombergs/myAwesomeLib\u0026#34; } } dependencies { compile \u0026#34;io.reflectoring:myAwesomeLib:1.0.0\u0026#34; } You can view the URL of your own repository on the Bintray site by clicking the button \u0026ldquo;Set Me Up!\u0026rdquo;.\nWhat next? Now you can tell everyone how to access your personal Maven repository to use your library. However, some people are sceptical about including custom Maven repositories in their builds. Also, there\u0026rsquo;s probably a whole lot of companies out there which have a proxy that simply does not allow any Maven repository to be accessed.\nSo, as a next step, you might want to publish your artifacts to the well-known JCenter or Maven Central repositories. 
And to have it automated, you may want to integrate the publishing step into a CI tool (for example, to publish snapshots with every CI build).\n","date":"December 4, 2017","image":"https://reflectoring.io/images/stock/0038-package-1200x628-branded_hu7e104c3cc9032be3d32f9334823f6efc_80797_650x0_resize_q90_box.jpg","permalink":"/guide-publishing-to-bintray-with-gradle/","title":"Publishing Open Source Releases with Gradle"},{"categories":["Software Craft"],"contents":"In a distributed system, testing the successful integration between\ndistributed services is essential for ensuring that the services won\u0026rsquo;t fail in production just because they\u0026rsquo;re not speaking the same language. This article discusses three approaches to implementing integration tests between distributed services and shows the advantages of Consumer-Driven Contract tests.\nStrategies for Integration Testing This article compares three testing strategies that can be used to implement integration tests and then describes how those strategies deal with some common testing issues.\nBefore we go into the details of those testing strategies, I want to define the meaning of \u0026ldquo;integration test\u0026rdquo; in the context of this article:\n An integration test is a test between an API provider and an API consumer that asserts that the provider returns expected responses for a set of pre-defined requests by the consumer. The set of pre-defined requests and expected responses is called a contract.\n Thus, with an integration test, we want to assert that consumer and provider are speaking the same language and that they understand each other syntactically by checking that they both follow the rules of a mutual contract.\nEnd-to-End Tests The most natural approach to testing interfaces between provider and consumer is end-to-end testing. 
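As an aside, the contract from the definition above can be written down very concretely: in tools like Pact it is simply a JSON document listing the pre-defined requests and expected responses. A simplified, Pact-style sketch (all service and field names here are illustrative, not taken from a real contract):

```json
{
  "consumer": { "name": "order-ui" },
  "provider": { "name": "order-service" },
  "interactions": [
    {
      "description": "a request for a single order",
      "request": {
        "method": "GET",
        "path": "/orders/42"
      },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": { "id": 42, "status": "SHIPPED" }
      }
    }
  ]
}
```

Each testing strategy below differs mainly in *where* such request/response pairs are checked: against a full runtime environment, against mocks, or against consumer-defined expectations.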
In end-to-end tests (E2E tests), real servers or containers are set up so that provider and consumer (and all other services they require as dependencies, such as databases) are available in a production-like runtime environment.\nTo execute our integration tests, the consumer is usually triggered by providing certain input in the user interface. The consumer then calls the provider and the test asserts (again in the UI) if the results meet the expectations defined in the contract.\nMocking In a mock test, we no longer set up a whole runtime environment, but run isolated tests between the consumer and a mock provider and between a mock consumer and the real provider.\nWe now have two sets of tests instead of one. The first set of tests is between the consumer and a provider mock. The consumer service is started up and triggered so that it sends some requests to a provider mock. The provider mock checks if the requests are listed in the contract and reports an error otherwise.\nIn the second set of tests a mock consumer is given the requests from the contract and simply sends them against the provider service. The mock consumer then checks if the provider's responses meet the expectations defined in the contract.\nConsumer-Driven Contract Tests Consumer-Driven Contract tests (CDC tests) are a specialization of mock tests as described above. They work just like mock tests with the specialty that the interface contract is driven by the consumer and not, as one would expect naturally, by the provider. This provides some interesting advantages we will come to later in this article.\nComparing the Integration Testing Strategies Let\u0026rsquo;s have a look at certain issues in testing and check how the testing strategies deal with them.\nIsolation We\u0026rsquo;ve all been taught that isolation in tests is a good thing. This is highlighted in the famous testing pyramid (see image below). 
The base of the pyramid consists of isolated tests (you can also call them unit tests, if you like). Thus, your test suite should consist of a high number of isolated tests followed by few integration tests and even fewer end-to-end tests. The reason for this is simple: isolated tests are easy to execute and their results are easy to interpret, thus we should rely on them as long as it\u0026rsquo;s possible.\nE2E tests obviously aren\u0026rsquo;t isolated tests since each test potentially calls a whole lot of services. While I wouldn\u0026rsquo;t call mock tests and CDC tests \u0026ldquo;isolated\u0026rdquo;, they are definitely more isolated than E2E tests since each test only tests a single service: either the provider or the consumer.\nUsing mock tests instead of E2E tests for testing interfaces between distributed services moves those tests from the top of the testing pyramid at least one level down, so the point for isolation definitely goes to mock tests and CDC tests.\nTesting Data Semantics The correct semantics of data exchanged over an interface are, naturally, important for the data to be processed correctly. However, mock tests usually only check the syntax of the data, e.g. if a credit card number is syntactically correct but not if a credit card with that number actually exists.\nThe semantics of data can best be tested with E2E tests, since here we have a full runtime environment including the business logic that can check if the credit card actually exists.\nReducing the test focus from semantics to syntax should be a conscious choice you make when implementing mock tests. 
You are no longer testing the business logic but you are concentrating your test efforts on the potentially fragile interface structure (while covering your business logic with a separate set of isolated tests, I hope).\nComplexity For an E2E runtime environment, you have to deploy containers running your services, their databases and any other dependencies they might have, each in a specified version (see image below).\nNowadays, tools like Docker and Kubernetes make this a lot easier than it was when services were hosted on bare metal. However, you have to implement an automated process that executes this deployment when the tests are to be run. You do not have this kind of complexity with mock tests.\nTest Data Setup Test data is always an issue when implementing tests of any sort. In E2E tests, test data is especially troublesome since you have potentially many services each with their own database (see image in the previous section). To set up a test environment for those E2E tests you have to provide each of those databases with test data that match the expectations of your tests.\nIf cross-references between databases exist, the data in each database has to match the data in the other databases to enable valid testing scenarios across multiple services. Beyond that, you have to implement a potentially complex automation to fire up your databases in a defined state.\nIn mock tests, on the other hand, you can define the data to be sent / returned directly in the consumer and provider mocks without having to set up any database at all.\nFeedback Time Another important issue in testing is the time it takes from starting your tests until you get the results and can act on them by fixing a bug or modifying a test. The shorter this feedback time, the more productive you can be.\nDue to their integrative nature, E2E tests usually have a rather long feedback time. One cause for this is the time it takes to set up a complete E2E runtime environment. 
The other cause is that once you have set up that environment you probably won\u0026rsquo;t just run a single test but rather a complete suite of tests, which tends to take some time.\nMock tests have a much shorter feedback cycle, since you can run them any time, especially from a developer machine and get feedback rather quickly (not as quickly as for usual unit tests, but quicker than for E2E tests for sure).\nStability Due to the complexity, potentially erroneous test data and a whole lot of other potential factors, E2E tests may fail. If an E2E test fails, it does not necessarily mean that you found a bug in the code or in the test. It may mean that the runtime environment was badly configured and a service could not be reached or that a certain service was deployed in the wrong version or any other reason. That means that E2E tests are inherently less stable than tests that are better isolated like mock tests.\nUnstable tests lead to dangerous mindsets like \u0026ldquo;A couple tests failed, but 90% successful tests are OK, so let\u0026rsquo;s deploy to production.\u0026rdquo;\nAlso, when setting up an E2E runtime environment, some of the deployed services may be developed by another team or even completely outside of your organization. You probably don\u0026rsquo;t have a lot of influence on those services. If one of those services fails, it may be a cause for a failing test and adds to the potential instability.\nReveal Unused Interfaces Usually, an API is defined by the API provider. Consumers then may choose which operations of the API they want to use and which not. Thus, the provider does not really know which operations of its API are used by which consumer. This may lead to a situation where an operation of the API is not used by any consumer.\nWe obviously want to find out which operations of an API are not used so that we can throw away unneeded code cluttering our codebase. 
Running E2E tests or even plain mock tests, however, you cannot easily find out which operations of an API are not used.\nWhen using CDC tests, on the other hand, if a consumer decides that a certain API operation is no longer needed, it removes that operation from the consumer tests and thus from the contract. This leads to a failing provider test and you will automatically be notified by your CI when an API operation is no longer needed and you can act accordingly.\nWell-Fittedness A very similar issue is the issue of well-fittedness of the API operations for a certain consumer. If a provider dictates an API contract, it may not fit certain use cases of certain consumers optimally. If the consumer defines the contract, it may be defined to fit its use case better.\nWith E2E tests and plain provider-dictated mock tests, the consumer has no real say in matters of well-fittedness. Only consumer-driven contracts allow the consumer to match the API to its needs.\nUnknown Consumers Some APIs are public or semi-public and thus developed for an unknown group of consumers. In a setting like this, CDC tests obviously don\u0026rsquo;t work, since unknown consumers cannot define a contract.\nSimple mock tests still work though. Instead of two sets of tests (one for testing the provider and one for testing each consumer) you only have one set of tests for testing the provider, since there are no known consumers to test. You just create a mock consumer that represents all the unknown consumers out there to test the provider.\n\u0026ldquo;Real\u0026rdquo; E2E tests are also not possible with unknown consumers since you cannot test end to end without a consumer. 
However, you could argue that it\u0026rsquo;s still an E2E test in the context of your application if you set up your provider in an E2E runtime environment and hit it with mocked requests from your contract.\nFeature Overview Here\u0026rsquo;s an overview table of the features of the different testing strategies discussed above.\n   Feature | E2E Tests | Mock Tests | CDC Tests\nIsolation | ✗ | ✓ | ✓\nComplexity | ✗ | ✓ | ✓\nTest Data Setup | ✗ | ✓ | ✓\nTesting Data Semantics | ✓ | ✗ | ✗\nFeedback Time | ✗ | ✓ | ✓\nStability | ✗ | ✓ | ✓\nReveal Unused Interfaces | ✗ | ✗ | ✓\nWell-Fittedness | ✗ | ✗ | ✓\nUnknown Consumers | (✓) | ✓ | ✗\n   As you can see, there are good reasons to implement Consumer-Driven Contract tests to test interfaces between services in a distributed system like a microservice architecture. If you are interested in implementing CDC tests, have a look at the Pact framework or at Spring Cloud Contract. For an example on how to use Pact, have a look at this blog post.\n","date":"November 29, 2017","image":"https://reflectoring.io/images/stock/0029-contract-1200x628-branded_hu7a19ccad5c11568ad8f2270ae968f76d_151831_650x0_resize_q90_box.jpg","permalink":"/7-reasons-for-consumer-driven-contracts/","title":"7 Reasons to Choose Consumer-Driven Contract Tests Over End-to-End Tests"},{"categories":["Java"],"contents":"Writing Gradle build tasks is often easy and straightforward, but as soon as you start to write more generic tasks for multiple modules or projects it can get a little tricky.\nWhy Lazy Evaluation? Recently I wrote a task to configure a Docker build for different Java modules. Some of them are packaged as JAR and some as WAR artifacts. Now this configuration was not that complicated, but I really hate duplicating stuff. So I wondered how to write a generic configuration and let each module override some parts of this config? 
That\u0026rsquo;s where lazy property evaluation comes in very handy.\nLazy Evaluation of String Properties Let\u0026rsquo;s check this simple project configuration, which logs the evaluated properties to the console using the built-in Gradle Logger.\nallprojects { version = \u0026#39;1.0.0\u0026#39; ext { artifactExt = \u0026#34;jar\u0026#34; dockerArtifact = \u0026#34;${name}-${version}.${artifactExt}\u0026#34; } } subprojects { task printArtifactName { doLast { logger.lifecycle \u0026#34;Artifact ${dockerArtifact}\u0026#34; } } } project(\u0026#39;A\u0026#39;) { // using default configuration } project(\u0026#39;B\u0026#39;) { artifactExt = \u0026#39;war\u0026#39; } The above code should do exactly what we want:\n./gradlew printArtifactName :A:printArtifactName Artifact A-1.0.0.jar :B:printArtifactName Artifact B-1.0.0.jar Wait, didn\u0026rsquo;t we override the default artifactExt property within module B? Gradle seems to ignore the overridden property!\nLet\u0026rsquo;s modify the example task to get a deeper insight:\ntask printArtifactName { doLast { logger.lifecycle dockerArtifact logger.lifecycle artifactExt } } ./gradlew printArtifactName :A:printArtifactName Artifact A-1.0.0.jar Extension jar :B:printArtifactName Artifact B-1.0.0.jar Extension war Looks like the property artifactExt gets overridden correctly. The problem is caused by the evaluation time of the property dockerArtifact. Within Gradle\u0026rsquo;s configuration phase dockerArtifact gets evaluated directly, but at that time artifactExt is defined with its default value jar. Later when configuring project B, dockerArtifact is already set and overriding artifactExt does not affect the value of dockerArtifact anymore. 
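This eager-vs-lazy distinction is not Gradle-specific. As a rough plain-Java analogy (illustrative only, not Gradle API): a String built eagerly is fixed at construction time, while a Supplier re-evaluates its expression on every call, so it sees later overrides:

```java
import java.util.function.Supplier;

public class LazyEvalDemo {
  public static void main(String[] args) {
    // stands in for the overridable artifactExt property
    String[] ext = {"jar"};

    // eager: evaluated once, right now -- later changes are ignored
    String eager = "B-1.0.0." + ext[0];

    // lazy: the expression runs each time the supplier is invoked
    Supplier<String> lazy = () -> "B-1.0.0." + ext[0];

    // "override" the property afterwards, like project B does
    ext[0] = "war";

    System.out.println(eager);      // B-1.0.0.jar
    System.out.println(lazy.get()); // B-1.0.0.war
  }
}
```

The eager value keeps the stale extension, while the lazy one picks up the override.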
So we have to tell Gradle to evaluate the property artifactExt at execution time.\nWe can do that by turning the property into a Closure like this:\ndockerArtifact = \u0026#34;${name}-${version}.${-\u0026gt; artifactExt}\u0026#34; Now Gradle evaluates name and version properties eagerly but artifactExt gets evaluated lazily each time dockerArtifact is used. Running the modified code again gives us the expected result:\n./gradlew printArtifactName :A:printArtifactName Artifact A-1.0.0.jar Extension jar :B:printArtifactName Artifact B-1.0.0.war Extension war This simple hack can come in quite handy, but can only be used within Groovy Strings, as it uses Groovy\u0026rsquo;s built-in lazy String evaluation. Note that Groovy Strings are those Strings wrapped in double quotes, whereas regular Java Strings are wrapped in single quotes.\nLazy Evaluation of non-String Properties Using Closures you can also use lazy evaluation for other property types, as shown below.\nLet\u0026rsquo;s define another property called maxMemory as a Closure.\nallprojects { version = \u0026#39;1.0.0\u0026#39; ext { artifactExt = \u0026#34;jar\u0026#34; dockerArtifact = \u0026#34;${name}-${version}.${-\u0026gt; artifactExt}\u0026#34; minMemory = 128 // use a Closure for maxMemory calculation  maxMemory = { minMemory * 2 } } } subprojects { task printArtifactName { doLast { logger.lifecycle \u0026#34;Artifact ${dockerArtifact}\u0026#34; logger.lifecycle \u0026#34;Extension ${artifactExt}\u0026#34; logger.lifecycle \u0026#34;Min Mem ${minMemory}\u0026#34; // running maxMemory Closure by invoking it  logger.lifecycle \u0026#34;Max Mem ${maxMemory()}\u0026#34; } } } project(\u0026#39;B\u0026#39;) { artifactExt = \u0026#39;war\u0026#39; minMemory = 512 } As you can see the real difference to lazy String evaluation is how the closure gets invoked at execution time. 
We invoke the Closure by adding parenthesis to the property name.\nRunning the modified code again gives us the expected result:\n./gradlew printArtifactName :A:printArtifactName Artifact A-1.0.0.jar Extension jar Min Mem 128 Max Mem 256 :B:printArtifactName Artifact B-1.0.0.war Extension war Min Mem 512 Max Mem 1024 As you can see lazy evaluation of properties is really simple and allows more complex configurations without the need of duplicating code.\n","date":"November 14, 2017","image":"https://reflectoring.io/images/stock/0040-hammock-1200x628-branded_hu7878b055f15b0e055987ba9c73b2f1f9_195463_650x0_resize_q90_box.jpg","permalink":"/gradle-lazy-property-evaluation/","title":"Lazy Evaluation of Gradle Properties"},{"categories":["Java"],"contents":"Sometimes, a test should only be run under certain conditions. One such case are integration tests which depend on a certain external system. We don\u0026rsquo;t want our builds to fail if that system has an outage, so we just want to skip the tests that need a connection to it. This article shows how you can skip tests in JUnit 4 and JUnit 5 depending on certain conditions.\n Example Code This article is accompanied by a working code example on GitHub. Assumptions Both JUnit 4 and JUnit 5 support the concept of assumptions. Before each test, a set of assumptions can be made. If one of these assumptions is not met, the test should be skipped.\nIn our example, we make the assumption that a connection to a certain external system can be established.\nTo check if a connection can be established, we create the helper class ConnectionChecker:\npublic class ConnectionChecker { private String uri; public ConnectionChecker(String uri){ this.uri = uri; } public boolean connect() { ... 
// try to connect to the uri  } } Our ConnectionChecker has a single public method connect() which sends an HTTP GET request to a given URI and returns true if the server responded with an HTTP response with a status code in the range of 200-299 meaning that the response was successfully processed.\nAssumptions for a single Test Method (JUnit 4 and JUnit 5) Skipping a single test method based on an assumption works the same in JUnit 4 and JUnit 5:\npublic class ConnectionCheckingTest { private ConnectionChecker connectionChecker = new ConnectionChecker(\u0026#34;http://my.integration.system\u0026#34;); @Test public void testOnlyWhenConnected() { assumeTrue(connectionChecker.connect()); ... // your test steps  } } The lines below assumeTrue() will only be called if a connection to the integration system could successfully be established.\nMost of the time, though, we want all methods in a test class to be skipped depending on an assumption. This is done differently in JUnit 4 and JUnit 5.\nAssumptions for all Test Methods with JUnit 4 In JUnit 4, we have to implement a TestRule like this:\npublic class AssumingConnection implements TestRule { private ConnectionChecker checker; public AssumingConnection(ConnectionChecker checker) { this.checker = checker; } @Override public Statement apply(Statement base, Description description) { return new Statement() { @Override public void evaluate() throws Throwable { if (!checker.connect()) { throw new AssumptionViolatedException(\u0026#34;Could not connect. 
Skipping test!\u0026#34;); } else { base.evaluate(); } } }; } } We use our ConnectionChecker to check the connection and throw an AssumptionViolatedException if the connection could not be established.\nWe then have to include this rule in our JUnit test class like this:\npublic class ConnectionCheckingJunit4Test { @ClassRule public static AssumingConnection assumingConnection = new AssumingConnection(new ConnectionChecker(\u0026#34;http://my.integration.system\u0026#34;)); @Test public void testOnlyWhenConnected() { ... } } Assumptions for all Test Methods with JUnit 5 In JUnit 5, the same can be achieved a little more elegantly with the extension system and annotations. First, we define an annotation that should mark tests that should be skipped if a certain URI cannot be reached:\n@Retention(RetentionPolicy.RUNTIME) @ExtendWith(AssumeConnectionCondition.class) public @interface AssumeConnection { String uri(); } In this annotation we hook into the JUnit 5 extension mechanism by using @ExtendWith and pointing to an extension class. In this extension class, we read the URI from the annotation and call our ConnectionChecker to either continue with the test or skip it:\npublic class AssumeConnectionCondition implements ExecutionCondition { @Override public ConditionEvaluationResult evaluateExecutionCondition(ExtensionContext context) { Optional\u0026lt;AssumeConnection\u0026gt; annotation = findAnnotation(context.getElement(), AssumeConnection.class); if (annotation.isPresent()) { String uri = annotation.get().uri(); ConnectionChecker checker = new ConnectionChecker(uri); if (!checker.connect()) { return ConditionEvaluationResult.disabled(String.format(\u0026#34;Could not connect to \u0026#39;%s\u0026#39;. Skipping test!\u0026#34;, uri)); } else { return ConditionEvaluationResult.enabled(String.format(\u0026#34;Successfully connected to \u0026#39;%s\u0026#39;. 
Continuing test!\u0026#34;, uri)); } } return ConditionEvaluationResult.enabled(\u0026#34;No AssumeConnection annotation found. Continuing test.\u0026#34;); } } We can now use the annotation in our tests either on class level or on method level to skip tests conditionally:\n@AssumeConnection(uri = \u0026#34;http://my.integration.system\u0026#34;) public class ConnectionCheckingJunit5Test { @Test public void testOnlyWhenConnected() { ... } } Conclusion Both JUnit 4 and JUnit 5 support the concept of assumptions to conditionally enable or disable tests. However, it\u0026rsquo;s definitely worthwhile to have a look at JUnit 5 and its extension system since it allows a very declarative way (not only) to create conditionally running tests.\n","date":"October 10, 2017","image":"https://reflectoring.io/images/stock/0019-magnifying-glass-1200x628-branded_hudd3c41ec99aefbb7f273ca91d0ef6792_109335_650x0_resize_q90_box.jpg","permalink":"/conditional-junit4-junit5-tests/","title":"Assumptions and Conditional Test Execution with JUnit 4 and 5"},{"categories":["Java"],"contents":"Object mapping is a necessary and often unloved evil in software development projects. To communicate between layers of your application, you have to create and test mappers between a multitude of types, which can be a very cumbersome task, depending on the mapper library that is used. This article introduces reMap, yet another Java object mapper that has a unique focus on robustness and minimal testing overhead.\nSpecifying a Mapper Rather than creating a mapper via XML or annotations as in some other mapping libraries, with reMap you create a mapper by writing a few good old lines of code. 
The following mapper maps all fields from a Customer object to a Person object.\nMapper\u0026lt;Customer, Person\u0026gt; mapper = Mapping .from(Customer.class) .to(Person.class) .mapper(); However, the above mapper specification expects Customer and Person to have exactly the same fields with the same names and the same types. Otherwise, calling mapper() will throw an exception.\nHere, we already come across a main philosophy of reMap:\n In your specification of a mapper, all fields that are different in the source and destination classes have to be specified.\n Identical fields in the source and destination classes are automatically mapped and thus specified implicitly. Different fields have to be specified explicitly as described in the following sections. The reasoning behind this is simply robustness as discussed in more detail below.\nOnce you have a mapper instance, you can map a Customer object into a Person object by simply calling the map() method:\nCustomer customer = ... Person person = mapper.map(customer); Omitting fields Say Customer has the field address and Person does not. Vice versa, Person has a field birthDate that is missing in Customer.\nIn order to create a valid mapper for this scenario, you need to tell reMap to omit those fields:\nMapper\u0026lt;Customer, Person\u0026gt; mapper = Mapping .from(Customer.class) .to(Person.class) .omitInSource(Customer::getAddress) .omitInDestination(Person::getBirthDate) .mapper(); Note that instead of referencing fields with Strings containing the field names, you use references of the corresponding getter methods instead. This makes the mapping code very readable and refactoring-safe.\nAlso note that this feature comes at the \u0026ldquo;cost\u0026rdquo; that mapped classes have to follow the Java Bean conventions, i.e. they must have a default constructor and a getter and setter for all fields.\nWhy do I have to specify fields that should be omitted? Why doesn\u0026rsquo;t reMap just skip those fields? 
The simple reason for this is robustness again. I don\u0026rsquo;t want to let a library outside of my control decide which fields to map and which not. I want to explicitly specify what to map from here to there. Only then can I be sure that things are mapped according to my expectations at runtime.\nMapping fields with different names Source and target objects often have fields that have the same meaning but a different name. By using the reassign specification, we can tell reMap to map one field into another field of the same type. In this example, Customer has a field familyName that is mapped to the name field in Person. Both fields are of the same type String.\nMapper\u0026lt;Customer, Person\u0026gt; mapper = Mapping .from(Customer.class) .to(Person.class) .reassign(Customer::getFamilyName) .to(Person::getName) .mapper(); Mapping fields with different types What if I need to convert a field to another type? Say Customer has a field registrationDate of type Calendar that should be mapped to the field regDate of type Date in Person?\nprivate Mapper\u0026lt;Customer, Person\u0026gt; createMapper(){ return Mapping .from(Customer.class) .to(Person.class) .replace(Customer::getRegistrationDate, Person::regDate) .with(calendarToDate()) .mapper(); } private Transform\u0026lt;Date, Calendar\u0026gt; calendarToDate() { return source -\u0026gt; { if(source == null){ return null; } return source.getTime(); }; } By implementing a Transform function that converts one type to another, we can use the replace specification to convert a field value.\nNested Mapping Another often-required feature of a mapper is nested mapping. Let\u0026rsquo;s say our Customer class has a field of type CustomerAddress and our Person class has a field of type PersonAddress. First, we create a mapper to map CustomerAddress to PersonAddress. 
Then we tell our Customer-to-Person mapper to use this address mapper when it comes across fields of type CustomerAddress by calling useMapper():\nMapper\u0026lt;CustomerAddress, PersonAddress\u0026gt; addressMapper = Mapping .from(CustomerAddress.class) .to(PersonAddress.class) .mapper(); Mapper\u0026lt;Customer, Person\u0026gt; mapper = Mapping .from(Customer.class) .to(Person.class) .useMapper(addressMapper) .mapper(); Key Philosophies reMap has some more features that can best be looked up in the project\u0026rsquo;s documentation. However, I would like to point out some \u0026ldquo;meta-features\u0026rdquo; that make up the philosophy behind the development of reMap.\nRobustness A main goal of reMap is to create robust mappers. That means that a mapper must be refactoring-safe. A mapper must not break if a field name changes. This is why getter method references are used to specify fields instead of simple Strings.\nA nice effect of this is that the compiler already checks most of your mapping specification. It won\u0026rsquo;t allow you to specify a reassign() for fields of a different type, for example. Another nice effect is that the compiler will tell you if you broke a mapper by changing the type of a field.\nBut a mapper can be broken even if the compiler has nothing to complain about. For example, you might have overlooked a field when specifying the mapper. This is why each mapper is validated at the earliest possible moment during runtime, which is when calling the mapper() factory method.\nTesting This leads us to testing. A major goal of reMap is to reduce testing effort to a minimum. Mapping is a tedious task, so we don\u0026rsquo;t want to add another tedious task by creating unit tests that manually check if each field was mapped correctly. 
Due to the rather brainless nature of this work, those unit tests are very error prone (in my experience, at least).\nSince all validation of a mapper is done by the compiler and the mapper() factory method, all you have to do to test a mapper is to create an instance of the mapper using the mapper() method. If this produces an exception (for example when you overlooked a field or a type conversion) the test will fail.\nIf you want to create a fixture for regression testing, reMap supports asserting a mapper by creating an AssertMapping like this:\nAssertMapping.of(mapper) .expectOmitInSource(Customer::getAddress) .expectOmitInDestination(Person::getBirthDate) // ... other expectations  .ensure(); Calling ensure() will throw an AssertionError if the AssertMapping does not match the specification of the mapper. Having a unit test with such an assertion in place, you will notice if the specification of the mapper does not match your expectations. This also allows test-driven development of a mapper.\nNote that if you created a custom Transform function as described above you should include an explicit test for this transformation in your test suite, since it cannot be validated automatically by reMap.\nPerformance Performance was actually not a goal at all when developing reMap. Robustness and minimal test effort were valued much higher. However, reMap seems to be faster than some other popular mappers like Dozer and ModelMapper. The following performance test results were created on my local machine with a testing framework created by Frank Rahn for his mapper comparison blog post (beware of German language!).\n   Mapper Average Mapping Time (ms)     JMapper 0,01248   ByHand 0,01665   MapStruct 0,21591   Orika 0,37756   Selma 0,44576   reMap 2,56231   ModelMapper 4,71332   Dozer 6,12523    Summary reMap is yet another object mapper for Java but has a different philosophy from most of the other mappers out there. 
It values robustness above all else and minimal testing overhead a strong second. reMap is not the fastest mapper but plays in the same league as some of the other popular mappers performance-wise.\nreMap is still very young and probably not feature-complete, so we\u0026rsquo;d love to hear your feedback, work out any bugs you might find and discuss any features you might miss. Simply drop us an issue on GitHub.\n","date":"October 1, 2017","image":"https://reflectoring.io/images/stock/0041-adapter-1200x628-branded_hudbdb52a7685a8d0e28c5b58dcc10fabe_81226_650x0_resize_q90_box.jpg","permalink":"/autotmatic-refactoring-safe-java-mapping/","title":"Robust Java Object Mapping With Minimal Testing Overhead Using reMap"},{"categories":["Spring Boot"],"contents":"In a microservice environment or any other distributed system you may come upon the requirement to exchange events between services. This article shows how to implement a messaging solution with RabbitMQ.\n Example Code This article is accompanied by a working code example on GitHub. Event Messaging Requirements Before jumping into the solution, let\u0026rsquo;s define some requirements that an eventing mechanism in a distributed system should fulfill. We\u0026rsquo;ll use the following diagram to derive those requirements.\n The event producing service must not call the event consuming services directly in order to preserve loose coupling. The event producing service must be able to send events of different types (e.g. \u0026ldquo;customer.created\u0026rdquo; or \u0026ldquo;customer.deleted\u0026rdquo;). The event consuming services must be able to receive only events of types they are interested in (e.g. \u0026ldquo;*.deleted\u0026rdquo;, which means all events where an object was deleted, or \u0026ldquo;customer.*\u0026rdquo;, which means all events concerning a customer). In our distributed system we have several service clusters (e.g. a cluster of \u0026ldquo;order service\u0026rdquo; instances and a cluster of \u0026ldquo;archive service\u0026rdquo; instances). 
Each event must be processed by at most one instance per service cluster.  Messaging Concepts The eventing solution presented in this article makes use of some messaging concepts that are described in the following sections.\nProducer A producer is simply a piece of software that sends a message to a message broker, for example, a customer service in a system of microservices that wants to tell other services that a new customer was created. It does so by sending the event customer.created, which contains the newly created customer's ID as a payload.\nConsumer A consumer is a piece of software that receives messages from a message broker and processes those messages. In our example, this might be an order service that needs the addresses of all customers to create orders for those customers. It would process the customer.created event by reading the ID from the event and calling the customer service to load the corresponding customer's address.\nQueue A queue is a first-in-first-out message store. The messages are put into a queue by a producer and read from it by a consumer. Once a message is read, it is consumed and removed from the queue. A message can thus be processed only once.\nExchange An exchange is a concept that is part of the AMQP protocol. Basically, it acts as an intermediary between the producer and a queue. Instead of sending messages directly to a queue, a producer can send them to an exchange. The exchange then sends those messages to one or more queues following a specified set of rules. Thus, the producer does not need to know the queues that eventually receive those messages.\nBinding A binding connects a queue to an exchange. The exchange forwards all messages it receives to the queues it is bound to. A binding can contain a routing key that specifies which events should be forwarded. For example, a binding might contain the routing key customer.* meaning that all events whose type starts with customer. 
will be routed to the specified queue.\nAn Event Messaging Concept with AMQP Using the concepts above, we can create an eventing solution with RabbitMQ. The solution is depicted in the figure below.\nEach service cluster gets its own queue. This is necessary since not all events are relevant to each service cluster. An order service may be interested in all customer events (customer.*), whereas an archiving service may be interested in all events where an object has been deleted (*.deleted). If we had only one queue for all events, that queue would sooner or later overflow since it might contain events that no consumer is interested in.\nEach consuming service cluster binds its queue to the central exchange with a routing key that specifies which events it is interested in. Only those events are then routed into the queue. The events are then consumed by exactly one of the service instances connected to that queue.\nThe event producing services only need to know the central exchange and send all events to that exchange. Since the consuming services take care of the binding and routing, we have a real, loosely coupled eventing mechanism.\nImplementing Event Messaging with Spring Boot and RabbitMQ The eventing concept described above can be implemented with Spring Boot and RabbitMQ. The implementation is pretty straightforward. If you don\u0026rsquo;t feel like reading and would rather delve into code, you will find a link to a GitHub repository with a working example at the end of this article.\nIncluding the Spring Boot AMQP Starter Spring Boot offers a starter for Messaging with AMQP that integrates the Spring AMQP project with Spring Boot. The AMQP Starter currently only supports RabbitMQ as the underlying message broker, which is fine for us. 
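The routing keys used above follow AMQP topic-matching rules: a binding pattern is a list of dot-separated words in which `*` matches exactly one word (RabbitMQ additionally supports `#` for zero or more words, not shown here). As a framework-free illustration of the `*` rule, the matching can be sketched in a few lines of plain Java (the class name is made up for this example):

```java
import java.util.regex.Pattern;

public class TopicMatcher {

    // Translates a binding pattern into a regex: a literal word matches
    // itself, '*' matches exactly one dot-separated word. (RabbitMQ also
    // supports '#' for zero or more words, omitted here for brevity.)
    static boolean matches(String bindingPattern, String routingKey) {
        StringBuilder regex = new StringBuilder();
        String[] words = bindingPattern.split("\\.");
        for (int i = 0; i < words.length; i++) {
            if (i > 0) {
                regex.append("\\.");
            }
            if (words[i].equals("*")) {
                regex.append("[^.]+");   // exactly one word, no dots
            } else {
                regex.append(Pattern.quote(words[i]));
            }
        }
        return routingKey.matches(regex.toString());
    }

    public static void main(String[] args) {
        System.out.println(matches("customer.*", "customer.created"));    // true
        System.out.println(matches("customer.*", "customer.created.eu")); // false: '*' is one word only
        System.out.println(matches("*.deleted", "customer.deleted"));     // true
        System.out.println(matches("*.deleted", "customer.created"));     // false
    }
}
```

This is only meant to make the binding semantics tangible; the actual matching is, of course, done by the RabbitMQ broker, not by application code.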
To use the starter, include the following dependency into your project (Gradle notation):\ncompile(\u0026#39;org.springframework.boot:spring-boot-starter-amqp\u0026#39;) The starter contains an auto configuration which is automatically activated.\nConnecting to RabbitMQ In order to connect to a RabbitMQ server, the Spring AMQP starter reads the following properties, which you can specify as environment variables, for example in your application.properties. The following settings are the default connection settings once you have installed RabbitMQ locally.\nspring.rabbitmq.host=localhost spring.rabbitmq.port=5672 spring.rabbitmq.username=guest spring.rabbitmq.password=guest Configuring an Event Producer Creating an event producer is pretty straightforward. We make use of the RabbitTemplate provided by the AMQP starter and call the method convertAndSend() to send an event. The event in the code example only contains a String. If the message should contain a complex object, you can make use of message converters.\nThe RabbitTemplate automatically uses the connection settings provided in the application.properties earlier.\npublic class CustomerService { private final RabbitTemplate rabbitTemplate; private final Exchange exchange; public CustomerService(RabbitTemplate rabbitTemplate, Exchange exchange) { this.rabbitTemplate = rabbitTemplate; this.exchange = exchange; } public void createCustomer() { // ... do some database stuff  String routingKey = \u0026#34;customer.created\u0026#34;; String message = \u0026#34;customer created\u0026#34;; rabbitTemplate.convertAndSend(exchange.getName(), routingKey, message); } } Note that the call to RabbitTemplate needs the name of the exchange to which the event should be sent. To wire our application against a specific exchange, we simply create a Spring Bean of type TopicExchange and choose a name for that exchange (in case of the example code below, the exchange is called eventExchange). 
The application will automatically connect to RabbitMQ and create an exchange with this name, if it doesn\u0026rsquo;t exist yet. We use a so-called \u0026ldquo;topic exchange\u0026rdquo; here, since it allows us to specify a routing key (a \u0026ldquo;topic\u0026rdquo;) when sending a message to it.\nThe RabbitTemplate passed into the CustomerService is provided to the Spring application context by the AMQP starter.\n@Configuration public class EventProducerConfiguration { @Bean public Exchange eventExchange() { return new TopicExchange(\u0026#34;eventExchange\u0026#34;); } @Bean public CustomerService customerService(RabbitTemplate rabbitTemplate, Exchange eventExchange) { return new CustomerService(rabbitTemplate, eventExchange); } } Configuring an Event Consumer First off, the event consumer itself is a simple Java class. Again, to process more complex objects than simple strings, you can use Spring AMQP\u0026rsquo;s message converters. We use the @RabbitListener annotation on a method to mark it as an event receiver.\npublic class EventConsumer { private Logger logger = LoggerFactory.getLogger(EventConsumer.class); @RabbitListener(queues=\u0026#34;orderServiceQueue\u0026#34;) public void receive(String message) { logger.info(\u0026#34;Received message \u0026#39;{}\u0026#39;\u0026#34;, message); } } We now need to declare a queue and bind it to the same exchange used in the event producer.\nFirst, we define the same Exchange as we did in the event producer configuration. Then, we define a Queue with a unique name. This is the queue for our service cluster. 
To connect the two, we then create a Binding with the routing key customer.*, specifying that we are only interested in customer events.\nAs with the exchange before, a Queue and a Binding will be automatically created on the RabbitMQ server if they do not exist yet.\n@Configuration public class EventConsumerConfiguration { @Bean public Exchange eventExchange() { return new TopicExchange(\u0026#34;eventExchange\u0026#34;); } @Bean public Queue queue() { return new Queue(\u0026#34;orderServiceQueue\u0026#34;); } @Bean public Binding binding(Queue queue, Exchange eventExchange) { return BindingBuilder .bind(queue) .to(eventExchange) .with(\u0026#34;customer.*\u0026#34;); } @Bean public EventConsumer eventReceiver() { return new EventConsumer(); } } Wrap-Up With the concepts of exchanges, bindings and queues, AMQP provides everything we need to create an event mechanism for a distributed system. Spring AMQP and its integration into Spring Boot via the AMQP Starter provide a very convenient programming model to connect to such an event broker.\n","date":"September 16, 2017","image":"https://reflectoring.io/images/stock/0035-switchboard-1200x628-branded_hu8b558f13f0313494c9155ce4fc356d65_235224_650x0_resize_q90_box.jpg","permalink":"/event-messaging-with-spring-boot-and-rabbitmq/","title":"Event Messaging for Microservices with Spring Boot and RabbitMQ"},{"categories":["Java"],"contents":"Sometimes you want to add code snippets to your Javadoc comments, especially when developing an API of some kind. But how do you mark the code snippet so that it will be rendered correctly in the final Javadoc HTML, especially when special characters like '\u0026lt;', '\u0026gt;' and '@' are involved? Since there are multiple options to do this - each with different results - this blog post gives an overview of these options and a guideline on when to use which.\n\u0026lt;pre\u0026gt;, \u0026lt;code\u0026gt;, {@code}, what? Javadoc supports three different features for code markup. 
These are the HTML tags \u0026lt;pre\u0026gt; and \u0026lt;code\u0026gt; and the Javadoc tag {@code}. Sounds great, but each time I want to include a code snippet into a Javadoc comment, I\u0026rsquo;m wondering which of the three to use and what the difference between them actually is\u0026hellip;\nTo assemble a definitive guide on when to use which of the markup features, I took a look at how they behave by answering the following questions for each of them:\n   Question Rationale     Are indentations and line breaks displayed correctly in the rendered Javadoc? For multi-line code snippets, indentations and line breaks are essential, so they must not get lost when rendering the Javadoc.   Are '\u0026lt;' and '\u0026gt;' displayed correctly in the rendered Javadoc? '\u0026lt;' and '\u0026gt;' should not be evaluated as part of an HTML tag but instead be displayed literally. This is especially important for code snippets containing HTML or XML code or Java code containing generics.   Is '@' displayed correctly in the rendered Javadoc? '@' should not be evaluated as part of a Javadoc tag but instead be displayed literally. This is important for Java code containing annotations.   Can special characters like the ones above be escaped using HTML number codes like \u0026amp;#60;, \u0026amp;#62; and \u0026amp;#64; (which evaluate to '\u0026lt;', '\u0026gt;' and '@')? If the special characters cannot be displayed literally, they should at least be escapable via HTML codes.    \u0026lt;pre\u0026gt; \u0026lt;pre\u0026gt; is the default HTML tag for preformatted text. This means that HTML renderers by default know that the code within the tag should be displayed literally. Thus, line breaks and indentation are supported. However, since we\u0026rsquo;re in a Javadoc environment, '@' is evaluated as a Javadoc tag, and since we\u0026rsquo;re also in an HTML environment, '\u0026lt;' and '\u0026gt;' are evaluated as HTML tags. 
So none of these characters will be displayed correctly in the rendered Javadoc HTML so they have to be escaped.\n/** * \u0026lt;pre\u0026gt; * public class JavadocTest { * // indentation and line breaks are kept * * \u0026amp;#64;SuppressWarnings * public List\u0026amp;#60;String\u0026amp;#62; generics(){ * // \u0026#39;@\u0026#39;, \u0026#39;\u0026lt;\u0026#39; and \u0026#39;\u0026gt;\u0026#39; have to be escaped with HTML codes * // when used in annotations or generics * } * } * \u0026lt;/pre\u0026gt; */ public class PreTest {} renders to \u0026hellip;\npublic class JavadocTest { // indentation and line breaks are kept @SuppressWarnings public List\u0026lt;String\u0026gt; generics(){ // '@', '\u0026lt;' and '\u0026gt;' have to be escaped with HTML codes // when used in annotations or generics } } \u0026lt;code\u0026gt; Within a \u0026lt;code\u0026gt; tag, not even the indentation and line breaks are kept and our special characters still have to be escaped.\n/** * Using \u0026amp;#60;code\u0026amp;#62;, indentation and line breaks are lost. * \u0026#39;@\u0026#39;, \u0026#39;\u0026lt;\u0026#39; and \u0026#39;\u0026gt;\u0026#39; have to be escaped with HTML codes. * * An annotation \u0026lt;code\u0026gt;\u0026amp;#64;Foo\u0026lt;/code\u0026gt;; and a generic List\u0026amp;#60;String\u0026amp;#62;. */ public class CodeHtmlTagTest {} renders to \u0026hellip;\nUsing \u0026lt;code\u0026gt;, indentation and line breaks are lost. \u0026#39;@\u0026#39;, \u0026#39;\u0026lt;\u0026#39; and \u0026#39;\u0026gt;\u0026#39; have to be escaped with HTML codes. An annotation @Foo; and a generic List\u0026lt;String\u0026gt;. {@code} {@code} is a Javadoc tag that came with Java 5. A code snippet embedded within {@code} will display our special characters correctly so they don\u0026rsquo;t need to be manually escaped. However, indentation and line breaks will be lost. 
This can be rectified by using {@code} together with \u0026lt;pre\u0026gt;, though (see next section).\n/** * Using {@code @code} alone, indentation will be lost, but you don\u0026#39;t have to * escape special characters: * * {@code An annotation \u0026lt;code\u0026gt;@Foo\u0026lt;/code\u0026gt;; and a generic List\u0026lt;String\u0026gt;}. */ public class CodeJavadocTagTest {} renders to \u0026hellip;\nUsing @code alone, indentation will be lost, but you don\u0026#39;t have to escape special characters: An annotation \u0026lt;code\u0026gt;@Foo\u0026lt;/code\u0026gt;; and a generic List\u0026lt;String\u0026gt;. \u0026lt;pre\u0026gt; + {@code} Combining \u0026lt;pre\u0026gt; and {@code}, indentations and line breaks are kept and '\u0026lt;' and '\u0026gt;' don\u0026rsquo;t have to be escaped. However, against all expectations the '@' character is now evaluated as a Javadoc tag. What\u0026rsquo;s worse: it cannot even be escaped using the HTML number code, since the HTML number code would be literalized by {@code}.\n/** * \u0026lt;pre\u0026gt;{@code * public class JavadocTest { * // indentation and line breaks are kept * * @literal @SuppressWarnings * public List\u0026lt;String\u0026gt; generics(){ * // \u0026#39;\u0026lt;\u0026#39; and \u0026#39;\u0026gt;\u0026#39; are displayed correctly * // \u0026#39;@\u0026#39; CANNOT be escaped with HTML code, though! * } * } * }\u0026lt;/pre\u0026gt; */ public class PreTest {} renders to \u0026hellip;\npublic class JavadocTest { // indentation and line breaks are kept \u0026amp;#64;SuppressWarnings public List\u0026lt;String\u0026gt; generics(){ // \u0026#39;\u0026lt;\u0026#39; and \u0026#39;\u0026gt;\u0026#39; are displayed correctly // \u0026#39;@\u0026#39; CANNOT be escaped with HTML code, though! } } Note that you actually CAN escape an '@' using @literal @ within the {@code} block. 
However, this way always renders an unwanted whitespace before the '@' character, which is why I don\u0026rsquo;t discuss that option any further.\nCode Markup Features at a Glance The following table summarizes the different Javadoc code markup features.\n    Feature \u0026lt;pre\u0026gt;\u0026hellip;\u0026lt;/pre\u0026gt; \u0026lt;code\u0026gt;\u0026hellip;\u0026lt;/code\u0026gt; {@code \u0026hellip;} \u0026lt;pre\u0026gt;{@code \u0026hellip;}\u0026lt;/pre\u0026gt;     keep indentation \u0026amp; line breaks yes no no yes   display '\u0026lt;' \u0026amp; '\u0026gt;' correctly no no yes yes   display '@' correctly no no yes no   escape special characters via HTML number codes yes yes no need to escape no ('@' cannot be escaped)     When to use which? Looking at the table above, sadly, there is no single best option. Which option to use depends on the content of the code snippet you want to embed in your Javadoc. The following guidelines can be derived for different situations:\n   Situation Code Markup Feature  Rationale     Inline code snippet {@code ... } With {@code ...}, you don\u0026rsquo;t need to escape special characters. For inline snippets, it doesn\u0026rsquo;t matter that line breaks are lost.   Multi-line Java code snippets \u0026lt;pre\u0026gt;...\u0026lt;/pre\u0026gt; For multi-line snippets you need line breaks. So only \u0026lt;pre\u0026gt;...\u0026lt;/pre\u0026gt; and \u0026lt;pre\u0026gt;{@code ...}\u0026lt;/pre\u0026gt; are options. However, only \u0026lt;pre\u0026gt;...\u0026lt;/pre\u0026gt; allows the use of '@' (escaped using HTML number codes), which you need for Java code containing annotations.   Multi-line HTML / XML code snippets \u0026lt;pre\u0026gt;{@code ... }\u0026lt;/pre\u0026gt; In HTML or XML code you probably need '\u0026lt;' and '\u0026gt;' more often than '@', so it doesn\u0026rsquo;t matter that '@' cannot be displayed. If you need an '@', you have to fall back on \u0026lt;pre\u0026gt; and HTML number codes.    
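The escaping rules for a plain \u0026lt;pre\u0026gt; block are mechanical enough to automate. Here is a small, hypothetical helper (not part of the article or any tool it mentions) that rewrites a snippet using the HTML number codes discussed above:

```java
public class JavadocEscaper {

    // Replaces the three characters that Javadoc/HTML would otherwise
    // interpret ('<', '>', '@') with their HTML number codes, so the
    // result can be pasted into a <pre> block of a Javadoc comment.
    static String escapeForPre(String snippet) {
        StringBuilder out = new StringBuilder();
        for (char c : snippet.toCharArray()) {
            switch (c) {
                case '<': out.append("&#60;"); break;
                case '>': out.append("&#62;"); break;
                case '@': out.append("&#64;"); break;
                default:  out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapeForPre("@SuppressWarnings public List<String> generics()"));
        // -> &#64;SuppressWarnings public List&#60;String&#62; generics()
    }
}
```

Since the helper leaves indentation and line breaks untouched, its output works for the multi-line Java snippet case recommended in the guidelines table.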
","date":"August 14, 2017","image":"https://reflectoring.io/images/stock/0031-matrix-1200x628-branded_hufb3c207f9151b804bbf7fe86cefe5814_184798_650x0_resize_q90_box.jpg","permalink":"/howto-format-code-snippets-in-javadoc/","title":"A Guide to Formatting Code Snippets in Javadoc"},{"categories":["Spring Boot"],"contents":"When thinking about integration testing in a distributed system, you quickly come across the concept of consumer-driven contracts. This blog post gives a short introduction into this concept and a concrete implementation example using the technologies Pact, Spring Boot, Feign and Spring Data REST.\nDeprecated The contents of this article are deprecated. Instead, please read the articles about Creating a Consumer-Driven Contract with Feign and Pact and Testing a Spring Boot REST API against a Consumer-Driven Contract with Pact\n Integration Test Hell Each service in a distributed system potentially communicates with a set of other services within or even beyond that system. This communication hopefully takes place through well-defined APIs that are stable between releases.\nTo validate that the communication between a consumer and a provider of an API still works as intended after some code changes were made, the common reflex is to setup integration tests. So, for each combination of an API provider and consumer, we write one or more integration tests. For the integration tests to run automatically, we then have to deploy the provider service to an integration environment and then run the consumer application against its API. As if that is not challenging enough, the provider service may have some runtime dependencies that also have to be deployed, which have their own dependencies and soon you have the entire distributed system deployed for your integration tests.\nThis may be fine if your release schedule only contains a couple releases per year. But if you want to release each service often and independently (i.e. 
you want to practice continuous delivery) this integration testing strategy does not suffice.\nTo enable continuous delivery, we have to decouple the integration tests from an actual runtime test environment. This is where consumer-driven contracts come into play.\nConsumer-Driven Contracts The idea behind consumer-driven contracts is to define a contract between each consumer/provider pair and then test the consumer and provider against that contract independently to verify that they abide by the contract. This way, each \u0026ldquo;integration test\u0026rdquo; can run separately and without a full-blown runtime test environment.\nThe contract is the responsibility of the consumer, hence the name \u0026ldquo;consumer-driven\u0026rdquo;. For example, the consumer defines a set of requests with expected responses within a contract. This way, the provider knows exactly which API calls are actually used out there in the wild, and unused APIs can safely be removed from the code base.\nOf course, the contract is created by the consumer in agreement with the provider so that it cannot define API calls the provider doesn\u0026rsquo;t want to support.\nThe process of consumer-driven contracts looks like this:\n The API consumer creates and maintains a contract (in agreement with the provider). The API consumer verifies that it successfully runs against the contract. The API consumer publishes the contract. The API provider verifies that it successfully runs against the contract.  In the following sections, I will show how to implement these steps with Pact, Spring Boot, an API consumer implemented with Feign and an API provider implemented with Spring Data REST.\nPact Pact is a collection of frameworks that support the idea of consumer-driven contracts. The core of Pact is a specification that provides guidelines for implementations in different languages. Implementations are available for a number of different languages and frameworks. 
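Before diving into the Pact tooling, the core mechanic of a contract test can be sketched in plain Java with no Pact API involved (all names below are made up for illustration): the consumer proves it can handle the canned response, and the provider proves that replaying the recorded request yields exactly that response.

```java
import java.util.function.Function;
import java.util.function.Predicate;

public class ContractSketch {

    // A contract, reduced to its essence: a recorded request
    // paired with the response the consumer expects.
    static final class Contract {
        final String request;
        final String expectedResponse;
        Contract(String request, String expectedResponse) {
            this.request = request;
            this.expectedResponse = expectedResponse;
        }
    }

    // Consumer-side verification: feed the canned response to the
    // client's parsing logic, without any real provider running.
    static boolean consumerVerifies(Contract c, Predicate<String> clientCanParse) {
        return clientCanParse.test(c.expectedResponse);
    }

    // Provider-side verification: replay the recorded request against
    // the provider and compare the actual response to the expectation.
    static boolean providerVerifies(Contract c, Function<String, String> provider) {
        return c.expectedResponse.equals(provider.apply(c.request));
    }

    public static void main(String[] args) {
        Contract pact = new Contract("GET /addresses/", "[\"address1\",\"address2\"]");
        // stub standing in for the real provider service
        Function<String, String> provider =
                req -> req.equals("GET /addresses/") ? "[\"address1\",\"address2\"]" : "404";
        System.out.println(consumerVerifies(pact, resp -> resp.startsWith("["))); // true
        System.out.println(providerVerifies(pact, provider));                     // true
    }
}
```

Both checks run in isolation, which is exactly what decouples them from a shared test environment; Pact adds the missing pieces (HTTP stubbing, pact files, replay) on top of this idea.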
In this blog post we will focus on the Pact integrations with JUnit 4 (pact-jvm-consumer-junit_2.11 and pact-jvm-provider-junit_2.11).\nAside from Java, it is noteworthy that Pact also integrates with JavaScript. So, for example, when developing a distributed system with Java backend services and Angular frontends, Pact supports contract testing between your frontends and backends as well as between backend services who call each other.\nObviously, instead of calling it a \u0026ldquo;contract\u0026rdquo;, Pact uses the word \u0026ldquo;pact\u0026rdquo; to define an agreement between an API consumer and provider. \u0026ldquo;Pact\u0026rdquo; and \u0026ldquo;contract\u0026rdquo; are used synonymously from here on.\nCreating and Verifying a pact on the Consumer Side Let\u0026rsquo;s create an API client with Feign, create a pact and verify the client against that pact.\nThe Feign Client Our API consumer is a Feign client that reads a collection of addresses from a REST API provided by the customer service. The following code snippet is the whole client. More details about how to create a Feign client against a Spring Data REST API can be read in this blog post.\n@FeignClient(value = \u0026#34;addresses\u0026#34;, path = \u0026#34;/addresses\u0026#34;) public interface AddressClient { @RequestMapping(method = RequestMethod.GET, path = \u0026#34;/\u0026#34;) Resources\u0026lt;Address\u0026gt; getAddresses(); } The Pact-Verifying Unit Test Now, we want to create a pact using this client and validate that the client works correctly against this pact. 
This is the Unit test that does just that:\n@RunWith(SpringRunner.class) @SpringBootTest(properties = { // overriding provider address  \u0026#34;addresses.ribbon.listOfServers: localhost:8888\u0026#34; }) public class ConsumerPactVerificationTest { @Rule public PactProviderRuleMk2 stubProvider = new PactProviderRuleMk2(\u0026#34;customerServiceProvider\u0026#34;, \u0026#34;localhost\u0026#34;, 8888, this); @Autowired private AddressClient addressClient; @Pact(state = \u0026#34;a collection of 2 addresses\u0026#34;, provider = \u0026#34;customerServiceProvider\u0026#34;, consumer = \u0026#34;addressClient\u0026#34;) public RequestResponsePact createAddressCollectionResourcePact(PactDslWithProvider builder) { return builder .given(\u0026#34;a collection of 2 addresses\u0026#34;) .uponReceiving(\u0026#34;a request to the address collection resource\u0026#34;) .path(\u0026#34;/addresses/\u0026#34;) .method(\u0026#34;GET\u0026#34;) .willRespondWith() .status(200) .body(\u0026#34;...\u0026#34;, \u0026#34;application/hal+json\u0026#34;) .toPact(); } @Test @PactVerification(fragment = \u0026#34;createAddressCollectionResourcePact\u0026#34;) public void verifyAddressCollectionPact() { Resources\u0026lt;Address\u0026gt; addresses = addressClient.getAddresses(); assertThat(addresses).hasSize(2); } } We add the @SpringBootTest annotation to the test class so that a Spring Boot application context - and thus our AddressClient - is created. You could create the AddressClient by hand instead of bootstrapping the whole Spring Boot application, but then you would not test the client that is created by Spring Boot in production.\nThe PactProviderRuleMk2 is included as a JUnit @Rule. This rule is responsible for evaluating the @Pact and @PactVerification annotations on the methods of the test class.\nThe method createAddressCollectionResourcePact() is annotated with @Pact and returns a RequestResponsePact. This pact defines the structure and content of a request/response pair. 
When the unit test is executed, a JSON representation of this pact is automatically generated into the file target/pacts/addressClient-customerServiceProvider.json.\nFinally, the method verifyAddressCollectionPact() is annotated with @PactVerification, which tells Pact that in this method we want to verify that our client works against the pact defined in the method createAddressCollectionResourcePact(). For this to work, Pact starts a stub HTTP server on port 8888 which responds to the request defined in the pact with the response defined in the pact. When our AddressClient successfully parses the response we know that it interacts according to the pact.\nPublishing a Pact Now that we created a pact, it needs to be published so that the API provider can verify that it, too, interacts according to the pact.\nIn the simplest case, the pact file is created into a folder by the consumer and then read in from that same folder in a unit test on the provider side. That obviously only works when the code of both consumer and provider lies next to each other, which may not be desired due to several reasons.\nThus, we have to take measures to publish the pact file to some location the provider can access. This can be a network share, a simple web server or the more sophisticated Pact Broker. Pact Broker is a repository server for pacts and provides an API that allows publication and consumption of pact files.\nI haven\u0026rsquo;t tried out any of those publication measures yet, so I can\u0026rsquo;t go into more detail. More information on different pact publication strategies can be found here.\nVerifying a Spring Data REST Provider against a Pact Assuming our consumer has created a pact, successfully verified against it and then published the pact, we now have to verify that our provider also works according to the pact.\nIn our case, the provider is a Spring Data REST application that exposes a Spring Data repository via REST. 
So, we need some kind of test that replays the request defined in the pact against the provider API and verifies that it returns the correct response. The following code implements such a test with JUnit:\n@RunWith(PactRunner.class) @Provider(\u0026#34;customerServiceProvider\u0026#34;) @PactFolder(\u0026#34;../pact-feign-consumer/target/pacts\u0026#34;) public class ProviderPactVerificationTest { @ClassRule public static SpringBootStarter appStarter = SpringBootStarter.builder() .withApplicationClass(DemoApplication.class) .withArgument(\u0026#34;--spring.config.location=classpath:/application-pact.properties\u0026#34;) .withDatabaseState(\u0026#34;address-collection\u0026#34;, \u0026#34;/initial-schema.sql\u0026#34;, \u0026#34;/address-collection.sql\u0026#34;) .build(); @State(\u0026#34;a collection of 2 addresses\u0026#34;) public void toAddressCollectionState() { DatabaseStateHolder.setCurrentDatabaseState(\u0026#34;address-collection\u0026#34;); } @TestTarget public final Target target = new HttpTarget(8080); } PactRunner allows Pact to create the mock replay client. Also, we specify the name of the API provider via @Provider. This is needed by Pact to find the correct pact file in the @PactFolder we specified. In this case the pact files are located in the consumer code base, which lies next to the provider code base.\nThe method annotated with @State must be implemented to signal to the provider which state in the pact is currently tested, so it can return the correct data. In our case, we switch the database backing the provider into a state that contains the correct data.\n@TestTarget defines against which target the replay client should run. In our case against an HTTP server on port 8080.\nThe classes SpringBootStarter and DatabaseStateHolder are classes I created myself that start up the Spring Boot application with the provider API and allow me to change the state of the underlying database by executing a set of SQL scripts. 
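Since `DatabaseStateHolder` is one of the author's unpublished helper classes, here is a hypothetical reconstruction of what such a state holder could look like. The names and behavior are guessed from the test above; the real class executes the SQL scripts against the database, whereas this sketch only records which state is active:

```java
import java.util.List;
import java.util.Map;

public class DatabaseStateHolder {

    // Maps a named database state to the SQL scripts that produce it.
    private static Map<String, List<String>> scriptsByState = Map.of();
    private static String currentState;

    static void configure(Map<String, List<String>> states) {
        scriptsByState = states;
    }

    // In the real helper, each script of the requested state would be
    // executed against the database at this point.
    static void setCurrentDatabaseState(String state) {
        if (!scriptsByState.containsKey(state)) {
            throw new IllegalArgumentException("unknown database state: " + state);
        }
        currentState = state;
    }

    static String currentState() {
        return currentState;
    }
}
```

The important design point is that each `@State` method maps a pact state name to a deterministic database setup, so provider verification is repeatable.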
Note that if you\u0026rsquo;re implementing your own Spring MVC Controllers you can use the pact-jvm-provider-spring module instead of these custom classes. This module supports using MockMvc and thus you don\u0026rsquo;t need to bootstrap the whole Spring Boot application in the test. However, in our case Spring Data REST provides the MVC Controllers and there is no integration between Spring Data REST and Pact (yet?).\nWhen the unit test is executed, Pact will now execute the requests defined in the pact files and verify the responses against the pact. In the log output, you should see something like this:\nVerifying a pact between addressClient and customerServiceProvider Given a collection of 2 addresses a request to the address collection resource returns a response which has status code 200 (OK) includes headers \u0026#34;Content-Type\u0026#34; with value \u0026#34;application/hal+json\u0026#34; (OK) has a matching body (OK) ","date":"August 9, 2017","image":"https://reflectoring.io/images/stock/0029-contract-1200x628-branded_hu7a19ccad5c11568ad8f2270ae968f76d_151831_650x0_resize_q90_box.jpg","permalink":"/consumer-driven-contracts-with-pact-feign-spring-data-rest/","title":"Consumer-Driven Contracts with Pact, Feign and Spring Data REST"},{"categories":["Spring Boot"],"contents":"With Spring Data REST you can rapidly create a REST API that exposes your Spring Data repositories and thus provides CRUD support and more. However, in serious API development, you also want to have an automatically generated and up-to-date API documentation.\n Example Code This article is accompanied by a working code example on GitHub. Swagger provides a specification for documenting REST APIs. And with Springfox we have a tool that serves as a bridge between Spring applications and Swagger by creating a Swagger documentation for certain Spring beans and annotations.\nSpringfox also recently added a feature that creates a Swagger documentation for a Spring Data REST API. 
This feature is still incubating, but I nevertheless played around with it a little to evaluate whether it\u0026rsquo;s ready to use in real projects. Because if it is, the combination of Spring Data REST and Springfox would allow rapid development of a well-documented REST API.\nNote that as of now (version 2.7.0), the Springfox integration for Spring Data REST is still in incubation and has some serious bugs and missing features (see here and here, for example). Thus, the descriptions and code examples below are based on the current 2.7.1-SNAPSHOT version in which this is remedied considerably.\nEnabling Springfox in a Spring Boot / Spring Data REST application In order to enable Springfox to create a Swagger documentation for our Spring Data REST API, you have to take the following steps.\nAdd Springfox dependencies Add the following dependencies to your application (gradle notation):\ncompile(\u0026#39;io.springfox:springfox-swagger2:2.7.0\u0026#39;) compile(\u0026#39;io.springfox:springfox-data-rest:2.7.0\u0026#39;) compile(\u0026#39;io.springfox:springfox-swagger-ui:2.7.0\u0026#39;)  springfox-swagger2 contains the core features of Springfox that allow creation of an API documentation with Swagger 2. springfox-data-rest contains the integration that automatically creates a Swagger documentation for Spring Data REST repositories. springfox-swagger-ui contains the Swagger UI that displays the Swagger documentation at http://localhost:8080/swagger-ui.html.  Configure the Application class The Spring Boot application class has to be configured as follows:\n@SpringBootApplication @EnableSwagger2 @Import(SpringDataRestConfiguration.class) public class DemoApplication { public static void main(String[] args) { SpringApplication.run(DemoApplication.class, args); } }  The @EnableSwagger2 annotation enables Swagger 2 support by registering certain beans into the Spring application context. 
The @Import annotation imports additional classes into the Spring application context that are needed to automatically create a Swagger documentation from our Spring Data REST repositories.  Create a Docket bean You can optionally create a Spring bean of type Docket. This will be picked up by Springfox to configure some of the swagger documentation output.\n\n@Configuration public class SpringfoxConfiguration { @Bean public Docket docket() { return new Docket(DocumentationType.SWAGGER_2) .tags(...) .apiInfo(...) ... } } Annotate your Spring Data repositories Also optionally, you can annotate the Spring Data repositories exposed by Spring Data REST using the @Api, @ApiOperation and @ApiParam annotations. More details below.\nThe Output In the end, you should be able to view the Swagger documentation of your Spring Data REST API by accessing http://localhost:8080/swagger-ui.html in your browser. The result should look something like the image below.\n![Swagger UI]({{ base }}/assets/img/posts/spring-data-rest-springfox.png)\nCustomizing the Output The numbers on the image above show some places where things in the generated API documentation can be customized. The following sections describe some customizations that I deemed important. You can probably customize more than I have found out so feel free to add a comment if you found something I missed!\nGeneral API Information (1) Information like the title, description, licence and more can be configured by creating a Docket bean as in the code snippet above and using its setters to change the settings you want.\nRepository Description (2) The description for a repository can be changed by creating a tag named exactly like the default API name (\u0026ldquo;Address Entity\u0026rdquo; in the example), providing a description to this Tag in the Docket object and connecting the repository with that Tag using the @Api annotation. 
I have found no way to change the name of the repository itself so far.\n@Configuration public class SpringfoxConfiguration { @Bean public Docket docket() { return new Docket(DocumentationType.SWAGGER_2) .tags(new Tag(\u0026#34;Address Entity\u0026#34;, \u0026#34;Repository for Address entities\u0026#34;)); } } @Api(tags = \u0026#34;Address Entity\u0026#34;) @RepositoryRestResource(path = \u0026#34;addresses\u0026#34;) public interface AddressRepository extends CrudRepository\u0026lt;Address, Long\u0026gt; { // methods omitted } Operation Description (3) The description of a single API operation can be modified by the @ApiOperation annotation like so:\npublic interface AddressRepository extends PagingAndSortingRepository\u0026lt;Address, Long\u0026gt; { @ApiOperation(\u0026#34;find all Addresses that are associated with a given Customer\u0026#34;) Page\u0026lt;Address\u0026gt; findByCustomerId(@Param(\u0026#34;customerId\u0026#34;) Long customerId, Pageable pageable); } Input Parameters (4) The names and descriptions of input parameters can be configured using the @ApiParam annotation. Note that as of Springfox 2.7.1 the parameter names are also read from the @Param annotation provided by Spring Data.\npublic interface AddressRepository extends PagingAndSortingRepository\u0026lt;Address, Long\u0026gt; { Page\u0026lt;Address\u0026gt; findByCustomerId(@Param(\u0026#34;customerId\u0026#34;) @ApiParam(value=\u0026#34;ID of the customer\u0026#34;) Long customerId, Pageable pageable); } Responses (5) The different response statuses and their payloads can be tuned using the @ApiResponses and @ApiResponse annotations:\npublic interface AddressRepository extends PagingAndSortingRepository\u0026lt;Address, Long\u0026gt; { @Override @ApiResponses({@ApiResponse(code=201, message=\u0026#34;Created\u0026#34;, response=Address.class)}) Address save(Address address); } Conclusion Spring Data REST allows you to produce fast results when creating a database-driven REST API. 
Springfox allows you to quickly produce automated documentation for that API. However, the API docs generated by Springfox do not match the actual API in every detail. Some manual fine-tuning with annotations is necessary, as described in the customization section above.\nOne such example is that the JSON of example requests and responses is not rendered correctly in every case, since Spring Data REST uses the HAL format and Springfox only supports it in a few cases. With manual work involved, it will be hard to keep the API documentation up-to-date for every detail.\nMy conclusion is that the combination of Spring Data REST and Springfox is a good starting point to quickly produce a REST API whose documentation is good enough for most use cases, especially when the API is developed and used in a closed group of developers. For a public API, details matter a little more and it may be frustrating to keep the Swagger annotations and Springfox configuration up-to-date for every detail.\n","date":"August 2, 2017","image":"https://reflectoring.io/images/stock/0042-fox-1200x628-branded_hudd58d18346e2fd372cf868cdf9e60a20_120586_650x0_resize_q90_box.jpg","permalink":"/documenting-spring-data-rest-api-with-springfox/","title":"Documenting a Spring Data REST API with Springfox and Swagger"},{"categories":["Spring Boot"],"contents":"Spring Data REST is a framework that automatically exposes a REST API for Spring Data repositories, thus potentially saving a lot of manual programming work. Feign is a framework that allows easy creation of REST clients and is well integrated into the Spring Cloud ecosystem. Together, both frameworks seem to be a natural fit, especially in a microservice environment.\n Example Code This article is accompanied by a working code example on GitHub. However, they don\u0026rsquo;t play along by default. 
This blog post shows what has to be done in order to be able to access a Spring Data REST API with a Spring Boot Feign client.\nThe Symptom: Serialization Issues When accessing a Spring Data REST API with a Feign client you may trip over serialization issues like this one:\nCan not deserialize instance of java.util.ArrayList out of START_OBJECT token This error occurs when Feign tries to deserialize a JSON object provided by a Spring Data REST server. The cause for this is simply that Spring Data REST by default creates JSON in a Hypermedia format called HAL and Feign by default does not know how to parse it. The response Spring Data REST creates for a GET request to a collection resource like http://localhost:8080/addresses may look something like this:\n{ \u0026#34;_embedded\u0026#34; : { \u0026#34;addresses\u0026#34; : [ { \u0026#34;street\u0026#34; : \u0026#34;Elm Street\u0026#34;, \u0026#34;_links\u0026#34; : {...} } }, { \u0026#34;street\u0026#34; : \u0026#34;High Street\u0026#34;, \u0026#34;_links\u0026#34; : {...} } ] }, \u0026#34;_links\u0026#34; : { \u0026#34;self\u0026#34; : { \u0026#34;href\u0026#34; : \u0026#34;http://localhost:8080/addresses/\u0026#34; }, \u0026#34;profile\u0026#34; : { \u0026#34;href\u0026#34; : \u0026#34;http://localhost:8080/profile/addresses\u0026#34; } } } The deserialization issue comes from the fact that Feign by default expects a simple array of address objects and instead gets a JSON object.\nThe Solution: Help Feign understand Hypermedia To enable Feign to understand the HAL JSON format, we have to take the following steps.\nAdd Dependency to Spring HATEOAS Spring Data REST uses Spring HATEOAS to generate the HAL format on the server side. Spring HATEOAS can just as well be used on the client side to deserialize the HAL-formatted JSON. 
Thus, simply add the following dependency to your client (Gradle notation):\ncompile(\u0026#39;org.springframework.boot:spring-boot-starter-hateoas\u0026#39;) Enable Spring Boot\u0026rsquo;s Hypermedia Support Next, we have to tell our Spring Boot client application to configure its JSON parsers to use Spring HATEOAS. This can be done by simply annotating your Application class with the @EnableHypermediaSupport annotation:\n@EnableHypermediaSupport(type = EnableHypermediaSupport.HypermediaType.HAL) @SpringBootApplication @EnableFeignClients public class DemoApplication { public static void main(String[] args) { SpringApplication.run(DemoApplication.class, args); } } Use Resource and Resources instead of your Domain objects Feign will still not be able to map HAL-formatted JSON into your domain objects. That\u0026rsquo;s because your domain objects most likely don\u0026rsquo;t contain properties like _embedded or _links that are part of that JSON. To make these properties known to a JSON parser, Spring HATEOAS provides the two generic classes Resource\u0026lt;?\u0026gt; and Resources\u0026lt;?\u0026gt;.\nSo, in your Feign client, instead of returning domain objects like Address or List\u0026lt;Address\u0026gt;, return Resource\u0026lt;Address\u0026gt; or Resources\u0026lt;Address\u0026gt;:\n@FeignClient(value = \u0026#34;addresses\u0026#34;, path = \u0026#34;/addresses\u0026#34;) public interface AddressClient { @RequestMapping(method = RequestMethod.GET, path = \u0026#34;/\u0026#34;) Resources\u0026lt;Address\u0026gt; getAddresses(); @RequestMapping(method = RequestMethod.GET, path = \u0026#34;/{id}\u0026#34;) Resource\u0026lt;Address\u0026gt; getAddress(@PathVariable(\u0026#34;id\u0026#34;) long id); } Feign will then be able to successfully parse the HAL-formatted JSON into the Resource or Resources objects.\nAccessing and Manipulating Associations between Entities with Feign Once Feign is configured to play along with Spring Data REST, simple CRUD operations are just a 
matter of creating the correct methods annotated with @RequestMapping.\nHowever, there is still the question of how to access and create associations between entities with Feign, since managing associations with Spring Data Rest is not self-explanatory (see this blog post).\nThe answer to that is actually also just a matter of creating the correct @RequestMapping. Assuming that Address has a @ManyToOne relationship to Customer, creating an association to an (existing) Customer can be implemented with a PUT request of Content-Type text/uri-list to the association resource /addresses/{addressId}/customer as shown below. The other way around, reading the Customer associated with an Address can be done with a GET request to the endpoint /addresses/{addressId}/customer.\n@FeignClient(value = \u0026#34;addresses\u0026#34;, path = \u0026#34;/addresses\u0026#34;) public interface AddressClient { @RequestMapping(method = RequestMethod.PUT, consumes = \u0026#34;text/uri-list\u0026#34;, path=\u0026#34;/{addressId}/customer\u0026#34;) Resource\u0026lt;Address\u0026gt; associateWithCustomer(@PathVariable(\u0026#34;addressId\u0026#34;) long addressId, @RequestBody String customerUri); @RequestMapping(method = RequestMethod.GET, path=\u0026#34;/{addressId}/customer\u0026#34;) Resource\u0026lt;Customer\u0026gt; getCustomer(@PathVariable(\u0026#34;addressId\u0026#34;) long addressId); } ","date":"July 31, 2017","image":"https://reflectoring.io/images/stock/0036-notebooks-1200x628-branded_huf4115935f6abd8868b7cc652cfae8e97_224633_650x0_resize_q90_box.jpg","permalink":"/accessing-spring-data-rest-with-feign/","title":"Accessing a Spring Data REST API with Feign"},{"categories":["Spring Boot"],"contents":"Spring Data Rest allows you to rapidly create a REST API to manipulate and query a database by exposing Spring Data repositories via its @RepositoryRestResource annotation.\n Example Code This article is accompanied by a working code example on GitHub. 
Managing associations between entities with Spring Data Rest isn\u0026rsquo;t quite self-explanatory. That\u0026rsquo;s why in this post I\u0026rsquo;m writing up what I learned about managing associations of different types with Spring Data Rest.\nThe Domain Model For the sake of example, we will use a simple domain model composed of Customer and Address entities. A Customer may have one or more Addresses. Each Address may or may not have one Customer. This relationship can be modelled in different variants with JPA\u0026rsquo;s @ManyToOne and @OneToMany annotations. For each of those variants we will explore how to associate Addresses and Customers with Spring Data Rest.\nBefore associating two entities, Spring Data Rest assumes that both entities already exist. So for the next sections, we assume that we already have created at least one Address and Customer entity. When working with Spring Data Rest, this implies that a Spring Data repository must exist for both entities.\n\nAssociating entities from a unidirectional @ManyToOne relationship The easiest variant is also the cleanest and most maintainable. Address has a Customer field annotated with @ManyToOne. A Customer on the other hand doesn\u0026rsquo;t know anything about his Addresses.\n@Entity public class Address { @Id @GeneratedValue private Long id; @Column private String street; @ManyToOne private Customer customer; // getters, setters omitted } @Entity public class Customer { @Id @GeneratedValue private long id; @Column private String name; // getters, setters omitted } The following request will associate the Customer with ID 1 with the Address with ID 1:\nPUT /addresses/1/customer HTTP/1.1 Content-Type: text/uri-list Host: localhost:8080 Content-Length: 33 http://localhost:8080/customers/1 We send a PUT request to the association resource between an Address and a Customer. Note that the Content-Type is text/uri-list so valid payload must be a list of URIs. 
We provide the URI to the customer resource with ID 1 to create the association in the database. The response for this request will be an HTTP status 204 (No Content).\nAssociating entities from a unidirectional @OneToMany relationship Coming from the other end of the relationship, we have a Customer that has a list of Addresses and the Addresses don\u0026rsquo;t know about the Customers they are associated with.\n@Entity public class Address { @Id @GeneratedValue private Long id; @Column private String street; // getters, setters omitted } @Entity public class Customer { @Id @GeneratedValue private long id; @Column private String name; @OneToMany(cascade=CascadeType.ALL) private List\u0026lt;Address\u0026gt; addresses; // getters, setters omitted } Again, a PUT request to the association resource will create an association between a customer and one or more addresses. The following request associates two Addresses with the Customer with ID 1:\n\nPUT customers/1/addresses HTTP/1.1 Content-Type: text/uri-list Host: localhost:8080 Content-Length: 67 http://localhost:8080/addresses/1 http://localhost:8080/addresses/2 Note that a PUT request will remove all associations that may have been created before so that only those associations remain that were specified in the URI list. 
A POST request, on the other hand, will add the associations specified in the URI list to those that already exist.\nAssociating entities in a bidirectional @OneToMany/@ManyToOne relationship When both sides of the association know each other, we have a bidirectional association, which looks like this in JPA:\n@Entity public class Address { @Id @GeneratedValue private Long id; @Column private String street; @ManyToOne private Customer customer; // getters, setters omitted  } @Entity public class Customer { @Id @GeneratedValue private long id; @Column private String name; @OneToMany(cascade=CascadeType.ALL, mappedBy=\u0026#34;customer\u0026#34;) private List\u0026lt;Address\u0026gt; addresses; // getters, setters omitted } From the address-side (i.e. the @ManyToOne-side) of the relationship, this will work as above.\nFrom the customer-side, however, a PUT request like the one above that contains one or more links to an Address will not work. The association will not be stored in the database. That\u0026rsquo;s because Spring Data Rest simply puts a list of Addresses into the Customer object and tells Hibernate to store it. Hibernate, however, only stores the associations in a bidirectional relationship if all Addresses also know the Customer they belong to (also see this post on Stackoverflow). Thus, we need to add this information manually, for example with the following method on the Customer entity:\n@PrePersist @PreUpdate public void updateAddressAssociation(){ for(Address address : this.addresses){ address.setCustomer(this); } } Even then, it does not behave as in the unidirectional @OneToMany case. A PUT request will not delete all previously stored associations and a POST request will do nothing at all.\nWrap Up The thing to learn from this is not to use bidirectional associations in JPA. They are hard to handle with and without Spring Data Rest. 
Stick with unidirectional associations and make explicit repository calls for each use case you are implementing instead of counting on the supposed ease-of-use of a bidirectional association.\n","date":"July 30, 2017","image":"https://reflectoring.io/images/stock/0001-network-1200x628-branded_hu72d229b68bf9f2a167eb763930d4c7d5_172647_650x0_resize_q90_box.jpg","permalink":"/relations-with-spring-data-rest/","title":"Handling Associations Between Entities with Spring Data REST"},{"categories":["Spring Boot"],"contents":"Today, I stumbled (once again) over LocalDate in a Spring Boot application. LocalDate came with Java 8 and is part of the new standard API in Java for working with dates. However, if you want to effectively use LocalDate over Date in a Spring Boot application, you need to take some extra care, since not all tools support LocalDate by default, yet.\nSerializing LocalDate with Jackson Spring Boot includes the popular Jackson library as JSON (de-)serializer. By default, Jackson serializes a LocalDate object to something like this:\n{ \u0026#34;year\u0026#34;: 2017, \u0026#34;month\u0026#34;: \u0026#34;AUGUST\u0026#34;, \u0026#34;era\u0026#34;: \u0026#34;CE\u0026#34;, \u0026#34;dayOfMonth\u0026#34;: 1, \u0026#34;dayOfWeek\u0026#34;: \u0026#34;TUESDAY\u0026#34;, \u0026#34;dayOfYear\u0026#34;: 213, \u0026#34;leapYear\u0026#34;: false, \u0026#34;monthValue\u0026#34;: 8, \u0026#34;chronology\u0026#34;: { \u0026#34;id\u0026#34;:\u0026#34;ISO\u0026#34;, \u0026#34;calendarType\u0026#34;:\u0026#34;iso8601\u0026#34; } } That\u0026rsquo;s a very verbose representation of a date in JSON, wouldn\u0026rsquo;t you say? We\u0026rsquo;re only really interested in the year, month and day of month in this case, so that\u0026rsquo;s exactly what should be contained in the JSON.\nThe Jackson JavaTimeModule To configure Jackson to map a LocalDate into a String like 1982-06-23, you need to activate the JavaTimeModule. 
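Note that the target representation 1982-06-23 is simply the ISO-8601 date format, which java.time already produces natively. A quick stdlib check (no Jackson involved) illustrates the format we want Jackson to emit:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class LocalDateIsoDemo {
    public static void main(String[] args) {
        LocalDate date = LocalDate.of(1982, 6, 23);
        // toString() and ISO_LOCAL_DATE both yield the ISO-8601 form
        // that Jackson should produce once the JavaTimeModule is active
        System.out.println(date.format(DateTimeFormatter.ISO_LOCAL_DATE));
        System.out.println(date.toString());
    }
}
```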
You can register the module with a Jackson ObjectMapper instance like this:\nObjectMapper mapper = new ObjectMapper(); mapper.registerModule(new JavaTimeModule()); mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS); The module teaches the ObjectMapper how to work with LocalDates, and disabling the WRITE_DATES_AS_TIMESTAMPS feature tells the mapper to represent a date as an ISO-8601 String instead of a numeric timestamp in JSON.\nThe JavaTimeModule is not included in Jackson by default, so you have to include it as a dependency (gradle notation):\ncompile \u0026#34;com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.8.6\u0026#34; Mapping LocalDate in a Spring Boot application When using Spring Boot, an ObjectMapper instance is already provided by default (see the reference docs on how to customize it in detail).\nHowever, you still need to add the dependency to jackson-datatype-jsr310 to your project. The JavaTimeModule is then activated by default. The only thing left to do is to set the following property in your application.yml (or application.properties):\nspring: jackson: serialization: WRITE_DATES_AS_TIMESTAMPS: false ","date":"July 25, 2017","image":"https://reflectoring.io/images/stock/0043-calendar-1200x628-branded_hu4de637414a60e632f344e01d7e13a994_98685_650x0_resize_q90_box.jpg","permalink":"/configuring-localdate-serialization-spring-boot/","title":"Serializing LocalDate to JSON in Spring Boot"},{"categories":["Node"],"contents":"As a developer I am lazy. I don\u0026rsquo;t build everything by myself because others have done it already. 
So, when I come upon a problem someone has already solved and that someone put that solution into some library, I simply pull that library into my own - I declare a dependency on that library.\nThis post describes an important caveat when declaring \u0026ldquo;soft\u0026rdquo; dependencies using NPM and how to lock these dependencies to avoid problems.\npackage.json In the JavaScript world, NPM is the de-facto standard package manager which takes care of pulling my dependencies from the web into my own application. Those dependencies are declared in a file called package.json and look like this (example from an Angular app):\n\u0026#34;dependencies\u0026#34;: { \u0026#34;@angular/animations\u0026#34;: \u0026#34;~4.2.4\u0026#34;, \u0026#34;@angular/common\u0026#34;: \u0026#34;^4.0.0\u0026#34;, ... } Unstable Dependencies In the package.json you can declare a dependency using certain matchers:\n \u0026quot;4.2.4\u0026quot; matches exactly version 4.2.4 \u0026quot;~4.2.4\u0026quot; matches the latest 4.2.x version \u0026quot;^4.2.4\u0026quot; matches the latest 4.x.x version \u0026quot;latest\u0026quot; matches the very latest version \u0026quot;\u0026gt;4.2.4\u0026quot; / \u0026quot;\u0026lt;=4.2.4\u0026quot; matches the latest version greater than / less than or equal to 4.2.4 * matches any version.  Matchers like ~ and ^ provide a mechanism to declare a dependency on a range of versions instead of a specific version. This can be very dangerous, since the maintainer of your dependency might update to a version that no longer works with your application. 
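To make the ~ and ^ semantics above concrete, here is a small, hypothetical Java helper. This only illustrates the matching rules for exact, tilde and caret ranges; it is not NPM's actual implementation and it ignores pre-release tags, the other matchers and NPM's special handling of 0.x versions:

```java
public class SemverRangeDemo {

    // Parse "major.minor.patch" into an int array
    // (hypothetical helper, ignores pre-release/build metadata)
    static int[] parse(String v) {
        String[] parts = v.split("\\.");
        return new int[] {
            Integer.parseInt(parts[0]),
            Integer.parseInt(parts[1]),
            Integer.parseInt(parts[2])
        };
    }

    // Illustrates the matcher semantics for exact, ~ and ^ ranges
    static boolean matches(String range, String version) {
        if (!range.startsWith("~") && !range.startsWith("^")) {
            return range.equals(version); // "4.2.4" matches exactly 4.2.4
        }
        int[] base = parse(range.substring(1));
        int[] v = parse(version);
        // version must be at least the base version
        boolean atLeast = v[0] != base[0] ? v[0] > base[0]
                : v[1] != base[1] ? v[1] > base[1]
                : v[2] >= base[2];
        if (range.startsWith("~")) {
            // ~4.2.4 allows patch-level updates only: 4.2.x
            return atLeast && v[0] == base[0] && v[1] == base[1];
        }
        // ^4.2.4 allows minor and patch updates: 4.x.x
        return atLeast && v[0] == base[0];
    }

    public static void main(String[] args) {
        System.out.println(matches("~4.2.4", "4.2.9")); // patch update: ok
        System.out.println(matches("~4.2.4", "4.3.0")); // minor update: no
        System.out.println(matches("^4.2.4", "4.3.0")); // minor update: ok
        System.out.println(matches("^4.2.4", "5.0.0")); // major update: no
    }
}
```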
The next time you build your app, it might fail - and the reasons for that failure will be very hard to find.\nStable Dependencies with package-lock.json Each time I create a javascript app whose dependencies are managed by NPM, the first thing I\u0026rsquo;m doing is to remove all matchers in package.json and define the exact versions of the dependencies I\u0026rsquo;m using.\nSadly, that alone does not solve the \u0026ldquo;unstable dependencies\u0026rdquo; problem. My dependencies can have their own dependencies. And those may have used one of those matchers to match a version range instead of a specific version. Thus, even though I declared explicit versions for my direct dependencies, versions of my transitive dependencies might change from one build to another.\nTo lock even the versions of my transitive dependencies to a specific version, NPM has introduced package locks with version 5.\nWhen calling npm install, npm automatically generates a file called package-lock.json which contains all dependencies with the specific versions that were resolved at the time of the call. Future calls of npm run build will then use those specific versions instead of resolving any version ranges.\nSimply check-in package-lock.json into version control and you will have stable builds.\nNot Working? NPM doesn\u0026rsquo;t generate a package-lock.json? Or the versions in package-lock.json are not honored when calling npm run build? 
Make sure that your NPM version is 5 or above and if it isn\u0026rsquo;t, call npm install npm@latest (you may also provide a specific version to npm install, if you prefer :)).\n","date":"July 24, 2017","image":"https://reflectoring.io/images/stock/0044-lock-1200x628-branded_hufda82673b597e36c6f6f4e174d972b96_267480_650x0_resize_q90_box.jpg","permalink":"/locking-dependencies-with-npm/","title":"Locking transitive Dependencies with NPM"},{"categories":["Software Craft"],"contents":"I recently had the opportunity to propose the architecture for a new large-scale software project. While it\u0026rsquo;s fun being paid for finding the technologies that best fit the customer\u0026rsquo;s requirements (of which only a fraction is available at the time, of course), it may also be daunting having to come up with a solution that is future-proof and scales with expectations.\nTo ease this task in the future, I compiled a list of concerns you should think about when setting up a new software project. For most of those concerns in the list below, I offer one or more technologies that are a possible solution to that concern.\nNote that this list is not complete and weighs heavily towards Java technologies, so you should use this list with the caution that the task of selecting the best architecture for your customer deserves.\nSome of the concerns are inspired by arc42 which provides a template that I often use as a basis for documenting a software architecture.\nI hope this helps any readers who are setting up a new Java-based software architecture.\nArchitecture Style Which should be the basic architecture style of the application? There are of course more styles than are listed here. However, monoliths and micro-services seem to be the most discussed architecture styles these days.\n   Architecture style Notes     Monolithic A monolithic architecture contains all of its functionality in a single deployment unit. 
Might not support a flexible release cycle but doesn\u0026rsquo;t need potentially fragile distributed communication.   (Micro-) Services Multiple, smaller deployment units that make use of distributed communication to implement the application\u0026rsquo;s functionality. May be more flexible for creating smaller and faster releases and scales with multiple teams, but comes at the cost of distributed communication problems.    Back-End Aspects What things should you think about that concern the back-end of the application you want to build?\n   Aspect Notes     Logging Use SLF4J with either Logback or Log4J2 underneath. Not really much to think about, nowadays. You should however think about using a central log server (we will come to that later).   Application Server Where should the software be hosted? A distributed architecture may work well with Spring Boot while a monolithic architecture might better be served from a full-fledged application server like Wildfly. Choice of application server is often predetermined for you, since corporate operations like to define a default server for all applications they have to run.   Job Execution Almost every medium-to-large sized application will need to execute scheduled jobs like cleaning up a database or batch-importing third-party data. Spring offers basic job scheduling features. For more sophisticated needs, you may want to use Quartz, which integrates into a Spring application nicely as well.   Database Refactoring You should think about how to update the structure of your relational database between two versions of your software. In small projects, manual execution of SQL scripts may be acceptable; in medium-to-large projects you may want to use a database refactoring framework like Flyway or Liquibase (see my previous blog post). 
If you are using a schemaless database you don\u0026rsquo;t really need a database refactoring framework (you should still think about which changes you can make to your data in order to stay backwards-compatible, though).   API Technology Especially when building a distributed architecture, you need to think about how your deployment units communicate with each other. They may communicate asynchronously via a messaging middleware like Kafka or synchronously, for example via REST using Spring MVC and Feign.   API Documentation The internal and external APIs you create must be documented in some form. For REST APIs you may use Swagger\u0026rsquo;s heavy-weight annotations or use Spring Rest Docs for a more flexible (but more manual) approach (see my previous blog post). When using no framework at all, document your APIs by hand using a markup format like Markdown or Asciidoctor.   Measuring Metrics Are there any metrics like throughput that should be measured while the application is running? Use a metric framework like Dropwizard Metrics (see my previous blog post) or the Prometheus Java Client.   Authentication How will users of the application prove that they are who they claim to be? Will users be asked to provide username and password or are there additional credentials to check? With a client-side single page app, you need to issue some kind of token like in OAuth or JWT (also see this blog post about OpenID). In other web apps, a session id cookie may be enough.   Authorization Once authenticated, how will the application check what the user is allowed to do and what is prohibited? On the server side, Spring Security is a framework that supports implementation of different authorization mechanisms.   Database Technology Does the application need a structured, schema-based database? Use a relational database. Is it storing document-based structures? Use MongoDB. Key-Value Pairs? Redis. Graphs? Neo4J.   
Persistence Layer When using a relational database, Hibernate is the de-facto default technology to map your objects into the database. You may want to use Spring Data JPA on top of Hibernate for easy creation of repository classes. Spring Data JPA also supports many NoSQL databases like Neo4J or MongoDB. However, there are alternative database-accessing technologies like iBatis and jOOQ.    Frontend Aspects What concerns are there to think about that affect the frontend architecture?\n   Aspect Notes     Frontend Technology Is the application required to be hosted centrally as a web application or should it be a fat client? If a web application, will it be a client-side single page app (use Angular) or a server-side web framework (I would propose using Apache Wicket or Thymeleaf / Spring MVC over frameworks like JSF or Vaadin, unless you have a very good reason). If a fat client, are the requirements in favor of a Swing or JavaFX-based client or something completely different like Electron?   Client-side Database Do the clients need to store data? In a web application you can use Local Storage and IndexedDB. In a fat client you can use some small-footprint database like Derby.   Peripheral Devices Do the clients need access to some kind of peripheral devices like card readers, authentication dongles or any hardware that does external measurements of some sort? In a fat client you may access those devices directly; in a web application you may have to provide a small client app which accesses the devices and makes their data available via an HTTP server on localhost which can be integrated into the web app within the browser.   Design Framework How will the client app be laid out and designed? In an HTML-based client, you may want to use a framework like Bootstrap. For fat clients, the available technologies may differ drastically.   Measuring Metrics Are there any events (errors, client version, \u0026hellip;) that the client should report to a central server? 
How will those events be communicated to the server?   Offline Mode Are the clients required to work offline? Which use cases should be available offline and which not? How will client side data be synchronized with the server once the client is online?    Operational Aspects What you definitely should discuss with the operations team before proposing your architecture to anyone.\n   Aspect Notes     Servers Will the application be hosted on real hardware or on virtual machines? Docker is a popular choice for virtualization nowadays.   Network Infrastructure How is the network set up? Are there any communication obstacles between different parts of the application or between the application and third party applications?   Load Balancing How will the load on the application be balanced between multiple instances of the software? Is there a hardware load balancer? Does it have to support sticky sessions? Does the app need a reverse proxy that routes requests to different deployment units of the application (you may want to use Zuul)?   Monitoring How is the health of the server instances monitored and alerted on (Icinga may be a fitting tool)? Who will be alerted? Should there be a central dashboard where all kinds of metrics like throughput etc. are measured (Prometheus + Grafana may be the tools of choice)?   Service Registry When building a (Micro-)Service Architecture, you may need a central registry for your services so that they find each other. Eureka and its integration in Spring Boot may be a tool to look into.   Central Log Server Especially in a distributed architecture with many deployment units, but also in a monolithic application (which also should have at least two instances running), a central log server may make bug hunting easier. The Elastic Stack (Elasticsearch, Logstash, Kibana) is popular, but heavy to set up. Graylog 2 is an alternative.   Database Operations What are the requirements towards the database? 
Does it need to support hot failover and / or load balancing between database instances? Does it need online backup? Oracle RAC is a common (but expensive) choice here, but other databases support similar requirements.    Development Aspects Things that the whole development team has to deal with every day. Definitely discuss these points with the development team before starting development.\n   Aspect Notes     IDE What\u0026rsquo;s the policy on using IDEs? Is each developer allowed to use his/her IDE of choice? Making a specific IDE mandatory may reduce costs for providing several parallel solutions while letting each developer use his favorite IDE may reduce training costs. I\u0026rsquo;m a follower of IntelliJ and would try to convert all Eclipsians when starting a new project ;).   Build Tool Which tool will do the building? Both Maven and Gradle are popular choices, though I would choose Gradle for its customizability in the form of Groovy code.   Unit Testing Which parts of the code should be unit tested? Which frameworks will be used for this? JUnit 4 and Mockito are a reasonable starting point (note that JUnit 5 is currently on the way).   End-to-End Tests Which parts of the code should be tested with automated end-to-end tests? Selenium is a popular choice to remote control a browser. When working on a single page application with Angular and angular-cli, Protractor is set up by default. Have a look at this blog post for a proposal on how to create end-to-end tests with Selenium while still having access to the database internals.   Version Control Where will the source code be hosted? Git is quickly becoming the de-facto standard, but Subversion has a better learning curve.   Coding Conventions How are classes and variables named? Is code and Javadoc written in English or another language? 
I would propose to choose an existing code formatter and a Checkstyle rule set and include it as a build breaker into the build process to make sure that only code that adheres to your conventions is committed to the code base.   Code Quality How will you measure code quality? Are the coding conventions enough or will you run additional metrics on the code? How will those metrics be made visible? You may want to set up a central code quality server like SonarQube for all to access.   Code Reviews Will you perform code reviews during development (I highly recommend this)? How will those code reviews be supported by software? There are code review tools like Review Board. Some version control tools like GitLab support workflows in which each user works on his own branch until he is ready to merge his changes. A merge request is a perfect opportunity for code reviews.   Continuous Integration How will the build process be executed on a regular basis? There are cloud providers like CircleCI or Travis or you may install a local Jenkins server.   Continuous Deployment Are there automatic tasks that deploy your application to a development, staging or production environment? How will these tasks be executed?   Logging Guidelines Which information should be logged when? You should provide a guideline for developers to help them include the most valuable information in the log files.   Documentation Which parts of the application should be documented how? What information should be documented in a wiki like Confluence and what should be put into Word documents? If there is a chance, use a markup format like Markdown or AsciiDoc instead of Word.    
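To make the build-breaker idea from the Coding Conventions row concrete: with Gradle (proposed above as the build tool), such a configuration could look like the following sketch. The tool version and rule set path are assumptions, not recommendations:

```groovy
// build.gradle: fail the build on Checkstyle violations
apply plugin: 'checkstyle'

checkstyle {
    toolVersion = '7.8.1'                                  // assumed Checkstyle version
    configFile = file('config/checkstyle/checkstyle.xml')  // assumed rule set location
    ignoreFailures = false                                 // violations break the build
    maxWarnings = 0                                        // warnings break it, too
}
```

Running gradle check will then refuse to pass as soon as a rule is violated, so non-conforming code never reaches the main branch unnoticed.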
","date":"May 27, 2017","image":"https://reflectoring.io/images/stock/0045-checklist-1200x628-branded_hu9e774932f96798e633ac569f63dda92c_116442_650x0_resize_q90_box.jpg","permalink":"/checklist-architecture-setup/","title":"A Checklist for setting up a Java-based Software Architecture"},{"categories":["Java"],"contents":"In a previous blog post I discussed the term \u0026ldquo;database refactoring\u0026rdquo; and some concepts that allow database refactoring to be supported by tools with the result of having a database schema that is versioned just like your software is. In this post I would like to discuss Flyway and Liquibase - both popular Java-based tools that support database refactoring. The goal of this post is to find out which tool is better suited in which scenario.\nFlyway Flyway\u0026rsquo;s concept centers around six different commands to provide support for automated database refactoring and versioning. These commands can be executed from the command line, from a build process (e.g. with Maven or Gradle) or directly from Java code, using the API. When executing a command you have to provide the database connection parameters (url, username, password) of the target database that you want to refactor.\nThe main command is named migrate and does exactly what database refactoring is all about: it looks in a specified folder full of SQL scripts (each with a version number in the file name) and checks which of these scripts have already been applied to the target database. It then executes those scripts that have not yet been applied. In case of inconsistencies, e.g. when a script that has already been applied has been changed in the meantime, Flyway aborts processing with an error message.\nA unique feature of Flyway is that you can provide migration scripts not only in SQL format but also as Java code. This way, you can implement complex and dynamic database migrations. 
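The version bookkeeping behind migrate can be illustrated with a small self-contained sketch. This is not Flyway's code; among other simplifications, real Flyway compares version numbers numerically, while this sketch sorts them lexicographically:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class PendingMigrations {

    // Returns the scripts that still have to be applied, ordered by version.
    // Aborts if an already-applied version is missing from the script folder,
    // similar to the inconsistency check described above.
    static List<String> pending(Set<String> appliedVersions, Map<String, String> scriptsByVersion) {
        for (String version : appliedVersions) {
            if (!scriptsByVersion.containsKey(version)) {
                throw new IllegalStateException("Applied migration " + version + " is missing");
            }
        }
        return scriptsByVersion.keySet().stream()
                .filter(version -> !appliedVersions.contains(version))
                .sorted()
                .map(scriptsByVersion::get)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, String> scripts = new TreeMap<>();
        scripts.put("V1", "V1__create_customer.sql");
        scripts.put("V2", "V2__add_address_column.sql");
        scripts.put("V3", "V3__create_index.sql");
        // V1 is recorded as applied in the schema history table, so V2 and V3 are due
        System.out.println(pending(Set.of("V1"), scripts));
    }
}
```

The applied versions stand in for Flyway's schema history table; on a real run, each executed script would be appended to that table so that the next migrate finds nothing left to do.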
Java-based migrations should be used with caution, however, since dynamic database migrations are hard to debug if anything goes wrong.\nThe central migrate command is supplemented by a set of additional commands that make the database refactoring life a little easier.\nThe info command shows all currently available migration scripts from the specified folder and lists which scripts have already been applied and which are still due to be applied on the target database.\nTo check if the migration scripts that were applied to the target database have been changed in the meantime, you can run the validate command. We want to know if a script in the script folder has been changed since being applied to the target database, because this may mean that the script has been applied to different databases in different versions, which is a source of trouble.\nIf you decide that your scripts should be applied in spite of a failing validate command, you can run the repair command. This command resets the database table used by Flyway to store which scripts have been applied (this table is called SCHEMA_VERSION by default).\nLast but not least, the clean command empties the target schema completely (should only be used on test databases, obviously).\nLiquibase Liquibase follows a different concept to implement database refactoring. While Flyway supports migration scripts in SQL and Java format only, Liquibase abstracts away from SQL completely and thus decouples database refactoring from the underlying database technology.\nInstead of SQL scripts, Liquibase supports migration scripts in XML, YAML and JSON format. In these scripts you define the changes to a database on an abstract level. For each change, Liquibase supports a corresponding element in XML, YAML and JSON. 
A change that creates a new database table in YAML format looks like this, for example:\ncreateTable:\n  tableName: Customer\n  columns:\n    - column:\n        name: name\n        type: varchar(255)\n    - column:\n        name: address\n        type: varchar(255)\nChanges like \u0026ldquo;add column\u0026rdquo;, \u0026ldquo;create index\u0026rdquo; or \u0026ldquo;alter table\u0026rdquo; and many others are available in a similar fashion.\nWhen executed, Liquibase automatically applies all scripts that have not yet been applied and stores the metadata for all applied scripts in a special database table - very similar to Flyway. Also very similar to Flyway, Liquibase can be called via command line, build tools or directly via its Java API.\nWhen to use which Tool? Both Flyway and Liquibase support all features that you need for professional database refactoring and versioning, so you will always know which version of the database schema you are dealing with and if it matches the version of your software. Both tools integrate with Maven and Gradle build scripts and with the Spring Boot ecosystem so that you can fully automate database refactoring.\nFlyway uses SQL to define database changes, and thus you can tailor your SQL scripts to work well with the underlying database technology like Oracle or PostgreSQL. With Liquibase on the other hand, you can introduce an abstraction layer by using XML, YAML or JSON to define your database changes. Thus, Liquibase is better suited to be used in a software product that is installed in different environments with different underlying database technologies. If you want to have full control over your SQL, however, Flyway is the tool of choice since you can change the database with fully tailored SQL or even Java code.\nThe catch is that both tools are mainly maintained by a single person and not by a large team. This may have a negative impact on the future development of both tools, but doesn\u0026rsquo;t have to. 
At the time of this writing, activity in Flyway\u0026rsquo;s GitHub repository is higher than in the Liquibase repository, however.\n","date":"May 14, 2017","image":"https://reflectoring.io/images/stock/0046-rack-1200x628-branded_hu38983fac43ab7b5246a0712a5f744c11_252723_650x0_resize_q90_box.jpg","permalink":"/database-refactoring-flyway-vs-liquibase/","title":"Tool-based Database Refactoring: Flyway vs. Liquibase"},{"categories":["Spring Boot"],"contents":"In my previous blog posts about creating monitoring metrics with Dropwizard Metrics and exposing them for the Prometheus monitoring application we already have gained a little insight into why monitoring is important and how to implement it.\nHowever, we have not looked into monitoring specific and meaningful metrics yet. For one such metric, the error rate, I would like to go into a little detail in this blog post. The error rate is important for any kind of application that processes requests of some sort. Some applications, like GitHub, even publicly display their error rate to show that they are able to handle the load created by the users (have a look at the \u0026lsquo;Exception Percentage\u0026rsquo; on their status page).\nThe error rate is a good indicator for the health of a system since the occurrence of errors most certainly indicates something is wrong. But what exactly is the definition of error rate and how can we measure it in a Spring Boot application?\nDefinitions of \u0026ldquo;Error Rate\u0026rdquo; For the definition of our application\u0026rsquo;s error rate we can borrow from Wikipedia\u0026rsquo;s definition of bit error rate:\n The bit error rate (BER) is the number of bit errors per time unit.\n Although our application sends and receives bits, the bit error rate is a little too low-level for us. 
Transferring that definition to the application level, however, we come up with something like this:\n The application error rate is the number of requests that result in an error per time unit.\n It may also be interesting to measure errors in percentage instead of time units, so for the sake of this blog post, we add another definition:\n The application error percentage is the number of requests that result in an error compared to the total number of requests.\n For our Spring Boot application \u0026ldquo;resulting in an error\u0026rdquo; means that some kind of internal error was caused that prevented the request from being processed successfully (i.e. HTTP status 5xx).\nCounting Errors Using Spring MVC, counting errors in an application is as easy as creating a central exception handler using the @ControllerAdvice annotation:\n@ControllerAdvice public class ControllerExceptionHandler { private static final Logger logger = LoggerFactory.getLogger(ControllerExceptionHandler.class); private MetricRegistry metricRegistry; @Autowired public ControllerExceptionHandler(MetricRegistry metricRegistry){ this.metricRegistry = metricRegistry; } @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR) @ExceptionHandler(Exception.class) @ResponseBody public String handleInternalError(Exception e) { countHttpStatus(HttpStatus.INTERNAL_SERVER_ERROR); logger.error(\u0026#34;Returned HTTP Status 500 due to the following exception:\u0026#34;, e); return \u0026#34;Internal Server Error\u0026#34;; } private void countHttpStatus(HttpStatus status){ Meter meter = metricRegistry.meter(String.format(\u0026#34;http.status.%d\u0026#34;, status.value())); meter.mark(); } } In this example, we\u0026rsquo;re catching all Exceptions that are not caught by any other exception handler and increment a Dropwizard meter called http.status.500 (refer to my previous blog post to learn how to use Dropwizard Metrics).\nCounting Total Requests In order to calculate the error percentage, we also want to count the total number of HTTP requests processed by our application. 
One way to do this is by implementing a WebMvcConfigurerAdapter and registering it within our ApplicationContext like this:\n@Configuration public class RequestCountMonitoringConfiguration extends WebMvcConfigurerAdapter { private Meter requestMeter; @Autowired public RequestCountMonitoringConfiguration(MetricRegistry metricRegistry) { this.requestMeter = metricRegistry.meter(\u0026#34;http.requests\u0026#34;); } @Override public void addInterceptors(InterceptorRegistry registry) { registry.addInterceptor(new HandlerInterceptorAdapter() { @Override public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception { requestMeter.mark(); } }); } } This will intercept all incoming requests and increment a Meter called http.requests after the request has been processed, regardless of an exception being thrown or not.\nMonitoring the Error Rate with Prometheus If we translate the Dropwizard metrics into the Prometheus data format (see my previous blog post), we will see the following metrics when typing \u0026ldquo;/prometheus\u0026rdquo; into the browser:\nhttp_requests_total 13.0 http_status_500_total 4.0 Now, we have a prometheus metric called http_status_500_total that counts unexpected errors within our application and a metric called http_requests_total that counts the total number of processed requests.\nSetting up Prometheus Once Prometheus is set up we can play around with these metrics using Prometheus' querying language.\nTo set up Prometheus, simply install it and edit the file prometheus.yml to add your application\u0026rsquo;s URL to targets and add metrics_path: '/prometheus' if your application\u0026rsquo;s prometheus metrics are exposed via the /prometheus endpoint. 
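Putting those instructions together, a minimal prometheus.yml for scraping our application might look like this (the job name and target address are assumptions for a locally running instance):

```yaml
scrape_configs:
  - job_name: 'spring-boot-app'        # arbitrary name for this group of targets
    metrics_path: '/prometheus'        # Prometheus' default would be /metrics
    static_configs:
      - targets: ['localhost:8080']    # host:port of the Spring Boot application
```

Prometheus will then poll this endpoint at its configured scrape interval and store the results in its time-series database.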
Once started, you can access the Prometheus web interface via localhost:9090 by default.\nQuerying Metrics in Prometheus' Web Interface In the web interface, you can now provide a query and press the \u0026ldquo;execute\u0026rdquo; button to show a graph of the metrics you queried.\nTo get the average rate of errors per second within the last minute, we can use the rate() function like this:\nrate(http_status_500_total [1m]) Likewise we can query the average rate of total requests per second:\nrate(http_requests_total [1m]) And finally, we can relate both metrics by calculating the percentage of erroneously processed requests within the last minute\nrate(http_status_500_total [1m]) / rate(http_requests_total [1m]) The result of the last query looked something like this in the Prometheus web interface, once I manually created some successful requests and some errors:\n![Error Percentage]({{ base }}/assets/img/posts/error_percentage.png)\nWrap-Up By simply counting all requests and counting those requests that return an HTTP status 500 (internal server error) and exposing those counters via Dropwizard Metrics we can set up a monitoring with Prometheus that alerts us when the application starts creating errors for some reason. Though pretty easy to calculate, the error rate is a very meaningful indicator of our application\u0026rsquo;s health at any time and should be present in every monitoring setup.\n","date":"May 7, 2017","image":"https://reflectoring.io/images/stock/0032-dashboard-1200x628-branded_hu32014b78b20b83682c90e2a7c4ea87ba_153646_650x0_resize_q90_box.jpg","permalink":"/monitoring-error-rate-spring-boot/","title":"Monitoring the Error Rate of a Spring Boot Web Application"},{"categories":["Spring Boot"],"contents":"Monitoring is an important quality requirement for applications that claim to be production-ready. In a previous blog post I discussed how to expose metrics of your Spring Boot application with the help of the Dropwizard Metrics library. 
This blog post shows how to expose metrics in a format that Prometheus understands.\nWhy Prometheus? Prometheus represents the newest generation of monitoring tools. It contains a time-series database that promises efficient storage of monitoring metrics and provides a query language for sophisticated queries of those metrics. Prometheus promises to be better suited to modern, dynamically changing microservice architectures than other monitoring tools.\nAn apparent drawback of Prometheus is that it does not provide a dashboard UI where you can define several metrics you want to monitor and see their current and historical values. Prometheus developers argue that there are tools that already do that pretty well. Grafana is such a tool that provides a datasource for Prometheus data off-the-shelf. However, Prometheus does provide a simple UI you can use to do ad-hoc queries to your monitoring metrics.\nThat being said, Prometheus was on my list of tools to check out, so that\u0026rsquo;s the main reason I\u0026rsquo;m having a look at how to provide monitoring data in the correct format :).\nPrometheus Data Format Prometheus can scrape a set of endpoints for monitoring metrics. Each server node in your system must provide such an endpoint that returns the node\u0026rsquo;s metrics in a text-based data format that Prometheus understands. At the time of this writing, the current version of that format is 0.0.4. Prometheus takes care of regularly collecting the monitoring metrics from all configured nodes and storing them in the time-series database for later querying.\nThe data format looks pretty simple on first look. A simple counter can be expressed like this:\n# HELP counter_name A human-readable help text for the metric # TYPE counter_name counter counter_name 42 On second look, however, the data format is a lot more expressive and complex. 
The following snippet exposes a summary metric that defines the duration of certain requests in certain quantiles (a quantile of 0.99 meaning that 99% of the requests took less than the value and the other 1% took more):\n# HELP summary_metric A human-readable help text for the metric # TYPE summary_metric summary summary_metric{quantile=\u0026#34;0.5\u0026#34;,} 5.0 summary_metric{quantile=\u0026#34;0.75\u0026#34;,} 6.0 summary_metric{quantile=\u0026#34;0.95\u0026#34;,} 7.0 summary_metric{quantile=\u0026#34;0.98\u0026#34;,} 8.0 summary_metric{quantile=\u0026#34;0.99\u0026#34;,} 9.0 summary_metric{quantile=\u0026#34;0.999\u0026#34;,} 10.0 summary_metric_count 42 The key-value pairs within the parentheses are called \u0026lsquo;labels\u0026rsquo; in Prometheus-speak. You can define any labels you would later like to query, the label quantile being a special label used for the summary metric type.\nFurther details of the Prometheus data format can be looked up at the Prometheus website.\nProducing the Prometheus Data Format with Spring Boot If you read my previous blog post, you know how to expose metrics in a Spring Boot application using Dropwizard metrics and the Spring Boot Actuator plugin. The data format exposed by Spring Boot Actuator is a simple JSON format, however, that cannot be scraped by Prometheus. 
Thus, we need to transform our metrics into the Prometheus format.\nPrometheus Dependencies First off, we need to add the following dependencies to our Spring Boot application (Gradle notation):\ncompile \u0026#34;io.prometheus:simpleclient_spring_boot:0.0.21\u0026#34; compile \u0026#34;io.prometheus:simpleclient_hotspot:0.0.21\u0026#34; compile \u0026#34;io.prometheus:simpleclient_dropwizard:0.0.21\u0026#34; Configuring the Prometheus Endpoint @Configuration @EnablePrometheusEndpoint public class PrometheusConfiguration { private MetricRegistry dropwizardMetricRegistry; @Autowired public PrometheusConfiguration(MetricRegistry dropwizardMetricRegistry) { this.dropwizardMetricRegistry = dropwizardMetricRegistry; } @PostConstruct public void registerPrometheusCollectors() { CollectorRegistry.defaultRegistry.clear(); new StandardExports().register(); new MemoryPoolsExports().register(); new DropwizardExports(dropwizardMetricRegistry).register(); ... // more metric exports  } } The simpleclient_spring_boot library provides the @EnablePrometheusEndpoint annotation which we add to a class that is also annotated with Spring\u0026rsquo;s @Configuration annotation so that it is picked up in a Spring component scan. By default, this creates an HTTP endpoint accessible via /prometheus that exposes all registered metrics in the Prometheus data format.\nIn a @PostConstruct method we register all metrics that we want to have exposed via the Prometheus endpoint. The StandardExports and MemoryPoolsExports classes are both provided by the simpleclient_hotspot library and expose metrics concerning the server\u0026rsquo;s memory. 
The DropwizardExports class is provided by the simpleclient_dropwizard library and registers all metrics in the specified Dropwizard MetricRegistry object to the new Prometheus endpoint and takes care of translating them into the correct format.\nNote that the call to CollectorRegistry.defaultRegistry.clear() is a workaround for unit tests failing due to \u0026lsquo;metric already registered\u0026rsquo; errors. This error occurs since defaultRegistry is static and the Spring context is fired up multiple times during unit testing. I would have wished that a CollectorRegistry simply ignored the fact that a metric is already registered\u0026hellip; .\nFor a list of all available libraries that provide or translate metrics for Java applications, have a look at the GitHub repo. They are not as well documented as I would have hoped, but they mostly contain only a few classes so that a look under the hood should help in most cases.\nAfter firing up your application, the metrics should be available in Prometheus format at http://localhost:8080/prometheus.\n","date":"May 1, 2017","image":"https://reflectoring.io/images/stock/0032-dashboard-1200x628-branded_hu32014b78b20b83682c90e2a7c4ea87ba_153646_650x0_resize_q90_box.jpg","permalink":"/monitoring-spring-boot-with-prometheus/","title":"Exposing Metrics of a Spring Boot Application for Prometheus"},{"categories":["Spring Boot"],"contents":"How do we know if an application we just put into production is working as it should? How do we know that the application can cope with the number of users and is not slowing down to a crawl? And how do we know how many more users the application can handle before having to scale up and put another instance into the cluster? The answer to these questions is transparency. 
A good application is transparent in that it exposes several metrics about its health and current status that can be interpreted manually as well as automatically.\nThis post explains how to create metrics in a Java application with the Dropwizard metrics library and how to expose them with Spring Boot.\nWhat Metrics to Measure? Usual monitoring setups measure metrics like CPU, RAM and hard drive usage. These metrics measure the resources available to our application. These metrics can usually be read from the application server or operating system so that we don\u0026rsquo;t have to do anything specific within our application to make them available.\nThese resource metrics are very important. If the monitoring setup raises an alarm because some resource is almost depleted, we can take action to mitigate that problem (i.e. adding another hard drive or putting another server into the load balancing cluster).\nHowever, there are metrics which are just as important that can only be created within our application: the number of payment transactions or the average duration of a search in a shop application, for example. These metrics give insight to the actual business value of our application and make capacity planning possible when held against the resource metrics.\nCreating Metrics with Dropwizard Luckily, there are tools for creating such metrics, so we don\u0026rsquo;t have to do it on our own. Dropwizard metrics is such a tool, which makes it very easy to create metrics within our Java application.\nInjecting the MetricsRegistry First off, you will need a MetricRegistry object at which to register the metrics you want to measure. In a Spring Boot application, you simply have to add a dependency to the Dropwizard metrics library. 
Spring Boot will automatically create a MetricRegistry object for you which you can inject like this:\n@Service public class ImportantBusinessService { private MetricRegistry metricRegistry; @Autowired public ImportantBusinessService(MetricRegistry metricRegistry){ this.metricRegistry = metricRegistry; } } Measuring Throughput If you want to create a throughput metric or a \u0026ldquo;rate\u0026rdquo;, simply create a Meter and update it within your business transaction:\n@Service public class ImportantBusinessService { private Meter paymentsMeter; @Autowired public ImportantBusinessService(MetricRegistry metricRegistry){ this.paymentsMeter = metricRegistry.meter(\u0026#34;payments\u0026#34;); } public void pay(){ ... // do business  paymentsMeter.mark(); } } This way, each time a payment transaction is finished, Dropwizard will update the following metrics (also see the Dropwizard manual) :\n a counter telling how many payments have been made since server start the mean rate of transactions per second since server start moving average rates of transactions per second within the last minute, the last 5 minutes and the last 15 minutes.  The moving average rates are actually exponentially weighted so that the most recent transactions are taken into account more heavily. This is done so that trend changes can be noticed earlier, since they can mean that something is just now happening to our application (a DDoS attack, for example).\nMeasuring Duration Dropwizard also allows measuring the duration of our transactions. This is done with a Timer:\n@Service public class ImportantBusinessService { private Timer paymentsTimer; @Autowired public ImportantBusinessService(MetricRegistry metricRegistry){ this.paymentsTimer = metricRegistry.timer(\u0026#34;payments\u0026#34;); } public void pay(){ Timer.Context timer = paymentsTimer.time(); try { ... 
// do business  } finally { timer.stop(); } } } A Timer creates the following metrics for us:\n the min, max, mean and median duration of transactions the standard deviation of the duration of transactions the 75th, 95th, 98th, 99th and 99.9th percentile of the transaction duration  The 99th percentile means that 99% of the measured transactions were faster than this value and 1% was slower. Additionally, a Timer also creates all metrics of a Meter.\nExposing Metrics via Spring Boot Actuator Having measured the metrics, we still need to expose them, so that some monitoring tool can pick them up. Using Spring Boot, you can simply add a dependency to the Actuator Plugin. By default Actuator will create a REST endpoint on /metrics which lists several metrics already, including some resource metrics as well as counts on different page hits.\nSpring Boot has support for Dropwizard by default, so that all metrics created with Dropwizard will automatically be exposed on that endpoint. Calling the endpoint results in a JSON structure like the following:\n{ \u0026#34;classes\u0026#34;: 13387, \u0026#34;classes.loaded\u0026#34;: 13387, \u0026#34;classes.unloaded\u0026#34;: 0, \u0026#34;datasource.primary.active\u0026#34;: 0, \u0026#34;datasource.primary.usage\u0026#34;: 0.0, \u0026#34;gc.ps_marksweep.count\u0026#34;: 4, \u0026#34;gc.ps_marksweep.time\u0026#34;: 498, \u0026#34;gc.ps_scavenge.count\u0026#34;: 17, \u0026#34;gc.ps_scavenge.time\u0026#34;: 305, \u0026#34;heap\u0026#34;: 1860608, \u0026#34;heap.committed\u0026#34;: 876544, \u0026#34;heap.init\u0026#34;: 131072, \u0026#34;heap.used\u0026#34;: 232289, \u0026#34;httpsessions.active\u0026#34;: 0, \u0026#34;httpsessions.max\u0026#34;: -1, \u0026#34;instance.uptime\u0026#34;: 3104, \u0026#34;mem\u0026#34;: 988191, \u0026#34;mem.free\u0026#34;: 644254, \u0026#34;nonheap\u0026#34;: 0, \u0026#34;nonheap.committed\u0026#34;: 115008, \u0026#34;nonheap.init\u0026#34;: 2496, \u0026#34;nonheap.used\u0026#34;: 111648, 
\u0026#34;processors\u0026#34;: 8, \u0026#34;systemload.average\u0026#34;: -1.0, \u0026#34;threads\u0026#34;: 19, \u0026#34;threads.daemon\u0026#34;: 16, \u0026#34;threads.peak\u0026#34;: 20, \u0026#34;threads.totalStarted\u0026#34;: 25, \u0026#34;uptime\u0026#34;: 20126, \u0026#34;payments.count\u0026#34;: 0, \u0026#34;payments.fifteenMinuteRate\u0026#34;: 0.0, \u0026#34;payments.fiveMinuteRate\u0026#34;: 0.0, \u0026#34;payments.meanRate\u0026#34;: 0.0, \u0026#34;payments.oneMinuteRate\u0026#34;: 0.0, \u0026#34;payments.snapshot.75thPercentile\u0026#34;: 0, \u0026#34;payments.snapshot.95thPercentile\u0026#34;: 0, \u0026#34;payments.snapshot.98thPercentile\u0026#34;: 0, \u0026#34;payments.snapshot.999thPercentile\u0026#34;: 0, \u0026#34;payments.snapshot.99thPercentile\u0026#34;: 0, \u0026#34;payments.snapshot.max\u0026#34;: 0, \u0026#34;payments.snapshot.mean\u0026#34;: 0, \u0026#34;payments.snapshot.median\u0026#34;: 0, \u0026#34;payments.snapshot.min\u0026#34;: 0, \u0026#34;payments.snapshot.stdDev\u0026#34;: 0 } Wrap-Up When implementing a web application, think of the business metrics you want to measure and add a Dropwizard Meter or Timer to create those metrics. It\u0026rsquo;s a few lines of code that provide a huge amount of insight into an application running in production. Spring Boot offers first class support for Dropwizard metrics by automatically exposing them via the \u0026lsquo;/metrics\u0026rsquo; endpoint to be picked up by a monitoring tool.\n","date":"April 21, 2017","image":"https://reflectoring.io/images/stock/0047-transparent-1200x628-branded_hu3d6aec174c17bf72613228059ab0e64e_127557_650x0_resize_q90_box.jpg","permalink":"/transparency-with-spring-boot/","title":"Exposing Metrics of a Spring Boot Application using Dropwizard"},{"categories":["Software Craft"],"contents":"You may have already heard about OpenID Connect as the new standard for single sign-on and identity provision on the internet. 
If not, I am sure that you have at least already used it by clicking on any of these \u0026ldquo;Log In With Google\u0026rdquo; buttons. But what is OpenID Connect and why would you want to use it for your own applications? In this post I want to give a simple answer to these questions.\nMotivation When thinking about authentication (Who is the user?) and authorization (What is the user allowed to do?) for your application, the first approach might be to store your users in a local database and create a custom login that checks against this database. This works fine for small apps in small environments. For big enterprises though, it quickly becomes hard to manage all the users if you have myriad different applications with numerous different roles. The solution to this problem could be to centralize the identification by creating a single service that is only responsible for authentication and authorization, called Identity Provider (ID Provider).\nIntroducing OpenID Connect OpenID Connect (OIDC) is a standard for creating such an ID Provider (and more). It basically adds an authentication layer to OAuth 2.0 (an authorization framework). Technically speaking, OIDC specifies a RESTful HTTP API that uses the JSON Web Token (JWT) standard. In the following I will try to explain these and more technical terms concerning OIDC with the help of a simple example.\nA Simple Example Imagine the following situation: You have created a cool new app but don\u0026rsquo;t want to have anything to do with storing information about all the users of your app (maybe because you don\u0026rsquo;t know how to do this absolutely securely). On the other hand, you don\u0026rsquo;t want to give access to everyone out there. Therefore, you want to give your users the option to log into your app with their google accounts. 
Within this example you already have all the three important roles in the OIDC world:\n Your app is the Client (denoted as Relying Party in OIDC) \u0026ldquo;Google\u0026rdquo; is the Authorization Server (denoted as OpenID Provider in OIDC) Your users are the Resource Owners.  In OIDC there are different ways (flows) to authenticate a user. The Authorization code flow is the most commonly used one and works like this:\nA user opens your app for the first time. As he isn\u0026rsquo;t already logged in, your app redirects him to google. There the user logs in with his google credentials. Google authenticates him and creates a one-time, short-lived, temporary code. The user gets redirected back to your app with this code attached. Your app extracts the code and makes a background REST invocation to google for an Identity Token (ID token), Access Token and Refresh Token. The most important one is the ID token, targeted to identify the user within the client. After your client validates this ID token (to make sure that it hasn\u0026rsquo;t been changed during transport), the user is successfully logged in. Moreover, your client can use the access token to ask the OpenID Provider (the userinfo endpoint to be exact) for additional information about your user. Because the access token usually expires after a few minutes the refresh token can be used to obtain a new access token.\nA quite interesting fact here is that OpenID Connect doesn\u0026rsquo;t specify how the user gets authenticated by the OpenID Provider. That means that it doesn\u0026rsquo;t necessarily need to be a username and password, but could also be for example a code, that is sent to the users email address or anything else you can imagine. 
In addition, these mechanisms can be changed easily depending on the degree of security you require and without the need to change any of the secured applications.\nJSON Web Token (JWT) As seen in the example, the app finally gets three different tokens: the ID token, access token and refresh token. Because your application hasn\u0026rsquo;t saved any information about your user, it has to extract them from the ID token or request additional details from the OpenID Provider with the help of the access token. As mentioned earlier the ID token is indeed a JWT (pronounced like the English word \u0026ldquo;jot\u0026rdquo;).\nA JWT is basically a JSON-based, cryptographically signed, base-64 encoded and URL-safe string. It is separated by dots into three different parts. In this post I don\u0026rsquo;t want to get into details of JWT, but if you are interested I would suggest reading this blog post or this great talk.\n header.payload.signature\n The interesting information is within the payload (decoded example of a payload):\n{ \u0026#34;exp\u0026#34;: 1491392499, \u0026#34;iat\u0026#34;: 1491392199, \u0026#34;sub\u0026#34;: \u0026#34;41f97c0d-66c7-47c0-9f06-13e48332e2cc\u0026#34;, \u0026#34;iss\u0026#34;: \u0026#34;http://localhost/auth/realms/demo\u0026#34;, \u0026#34;aud\u0026#34;: \u0026#34;demoClient\u0026#34;, \u0026#34;typ\u0026#34;: \u0026#34;ID\u0026#34;, \u0026#34;azp\u0026#34;: \u0026#34;demoClient\u0026#34;, \u0026#34;session_state\u0026#34;: \u0026#34;802f8a0d-a329-4b7f-9d1e-ba518a481ba2\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;max mustermann\u0026#34;, \u0026#34;family_name\u0026#34;: \u0026#34;mustermann\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;max@test.com\u0026#34; } The cool thing about ID tokens as JWTs is that you don\u0026rsquo;t need to save sessions within your application. 
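The header.payload.signature structure above can be taken apart with nothing more than the JDK's Base64 support. A minimal sketch, assuming a hypothetical token whose payload carries just two of the claims shown (the claim values here are made up; a real validation would of course also check the signature):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPayloadDemo {
    public static void main(String[] args) {
        // build a hypothetical token: header.payload.signature, each part Base64URL-encoded
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header = enc.encodeToString(
                "{\"alg\":\"RS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString(
                "{\"sub\":\"41f97c0d\",\"name\":\"max mustermann\"}".getBytes(StandardCharsets.UTF_8));
        String jwt = header + "." + payload + ".dummy-signature";

        // reading the claims back is just splitting on the dots and decoding the middle part
        String[] parts = jwt.split("\\.");
        String decodedPayload = new String(
                Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
        System.out.println(decodedPayload);
    }
}
```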
Instead you just need to make sure that the JWT hasn\u0026rsquo;t been changed, by validating it.\nSummary As you have seen in this short example, your app could be secured without storing any user information yourself. Instead you delegated this concern to Google. Centralizing user management to one OpenID Provider can make things a lot easier. By the way, you are not limited to use any big provider like Google. Consider using Keycloak as your own custom OpenID Provider.\nThere are a lot more advantages of using OpenID Connect that I haven\u0026rsquo;t mentioned in this short blog post. Easy realization of Single-Sign-On or minimizing password security risks are just two of them.\n","date":"April 19, 2017","image":"https://reflectoring.io/images/stock/0048-passport-1200x628-branded_hu3fd808d352e17c66c741a1147f79b860_283084_650x0_resize_q90_box.jpg","permalink":"/openid-connect/","title":"OpenID Connect"},{"categories":["Java"],"contents":"Often you come across the requirement to validate integrity and authenticity of data that was sent digitally. Digital signatures are the solution to this requirement. So what do you need to sign the data? First, you need an asymmetric key pair. It consists of a private key, that only the signer can access, and a public key or even better, a certificate. The public key or the certificate is available for everyone.\nPlain Java Signature The simple way to produce a signature in Java looks like this:\nSignature ecdsaSignature = Signature.getInstance(\u0026#34;SHA256withECDSA\u0026#34;); ecdsaSignature.initSign(eccPrivateKey); ecdsaSignature.update(dataToSign); byte[] signature = ecdsaSignature.sign(); Using this code you get a raw signature. It means that a hash value of the data was calculated and this hash value was encrypted with the private key. So to check if the data was manipulated, you just have to calculate the hash value of the data to be checked, decrypt the signature and to compare the results. 
This is called signature verification and looks like this:\nSignature ecdsaSignature = Signature.getInstance(\u0026#34;SHA256withECDSA\u0026#34;); ecdsaSignature.initVerify(certificate); ecdsaSignature.update(dataToVerify); boolean isValid = ecdsaSignature.verify(rawSignature); What are the advantages of doing it this way? The signature is small, the code is short and clear. It can be used if you have a requirement to keep the signature simple and quick. What are the disadvantages of this approach? First, the verifier has to know which certificate he or she should use to verify the signature. Second, the verifier has to know what signature algorithm he or she has to use to verify the signature. Third, the signer and the verifier themselves have to bind the data to the signature. This means that this kind of signature is best used within a single system.\nCryptographic Message Syntax (CMS) To avoid these disadvantages it is helpful to use a standard signature format. The standard is Cryptographic Message Syntax (CMS) defined in RFC5652. CMS describes several standards of cryptographic data, but we are interested in the Signed-data format here. The signed data in this format contains a lot of information that can help you to verify the signature. So how can you create such a data structure?\nWith JCE (Java Cryptography Extension), Java provides an interface for cryptographic operations. It\u0026rsquo;s best practice to use this interface for cryptographic operations. Implementations of JCE are called JCE providers. Your JDK already has a JCE provider named SUN.\nHowever, JCE does not provide an interface for the Cryptographic Message Syntax. That is why you have to use a different cryptographic library. BouncyCastle is a good choice. It is a JCE provider and has a lot of additional cryptographic functionality on a high level of abstraction. 
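To see the plain JCE part of this in action end-to-end, here is a self-contained sketch of the raw sign/verify cycle from the beginning of the article, using a freshly generated EC key pair from the JDK's built-in providers. Verification is done against the public key directly, since no certificate is available in this minimal setup:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class RawSignatureDemo {
    public static void main(String[] args) throws Exception {
        // generate an asymmetric EC key pair (served by the JDK's SunEC provider)
        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("EC");
        keyGen.initialize(256);
        KeyPair keyPair = keyGen.generateKeyPair();

        byte[] dataToSign = "Hello world!".getBytes(StandardCharsets.UTF_8);

        // sign: hash the data with SHA-256, then encrypt the hash with the private key
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(dataToSign);
        byte[] rawSignature = signer.sign();

        // verify: against the public key here; with a certificate at hand,
        // initVerify(certificate) would be used as shown in the article
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(dataToSign);
        System.out.println("valid: " + verifier.verify(rawSignature));
    }
}
```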
The code to create a signature with CMS and BouncyCastle can look like this (JavaDoc of BouncyCastle):\nList certList = new ArrayList(); CMSTypedData msg = new CMSProcessableByteArray(\u0026#34;Hello world!\u0026#34;.getBytes()); certList.add(signCert); Store certs = new JcaCertStore(certList); CMSSignedDataGenerator gen = new CMSSignedDataGenerator(); ContentSigner sha256Signer = new JcaContentSignerBuilder(\u0026#34;SHA256withECDSA\u0026#34;).build(signKP.getPrivate()); gen.addSignerInfoGenerator( new JcaSignerInfoGeneratorBuilder( new JcaDigestCalculatorProviderBuilder().build()) .build(sha256Signer, signCert)); gen.addCertificates(certs); CMSSignedData sigData = gen.generate(msg, false); Note that you can define whether the signed data itself should be put into the CMS container alongside the signature or not. In other words, you can choose to create either an attached or a detached signature. The CMS container contains the following:\n the signature, the certificate that can be used for verification, the signature algorithm, and possibly the signed data itself.  It is also possible to create several signatures for the data and put them in the same container. That means several signers can sign the data and send all their signatures in the same container. 
The code to verify a CMSSignedData (again JavaDoc of BouncyCastle):\nStore certStore = cmsSignedData.getCertificates(); SignerInformationStore signers = cmsSignedData.getSignerInfos(); Collection c = signers.getSigners(); Iterator it = c.iterator(); while (it.hasNext()){ SignerInformation signer = (SignerInformation)it.next(); Collection certCollection = certStore.getMatches(signer.getSID()); Iterator certIt = certCollection.iterator(); X509CertificateHolder cert = (X509CertificateHolder)certIt.next(); if (signer.verify(new JcaSimpleSignerInfoVerifierBuilder().build(cert))) { // successfully verified  } } Light Weight If you want to use the whole functionality of a JCE implementation you have to install the \u0026ldquo;unlimited strength jurisdiction policy files\u0026rdquo; for the JVM. If you don\u0026rsquo;t, you\u0026rsquo;ll get something like this\njava.lang.SecurityException: Unsupported keysize or algorithm parameters or java.security.InvalidKeyException: Illegal key size The reason for this exception is the restriction of the export of cryptographic technologies from the United States until 2000. These restrictions limited the key length. Unfortunately, the JDK still does not have unrestricted implementation after the default installation, and that\u0026rsquo;s why you have to install the unrestricted policy files additionally.\nAs you guess it is not a big problem to get and to install the unrestricted policy files for your JVM. But what if you want to distribute your application? It can be pretty difficult for some users to solve this problem. The BouncyCastle library has again a solution. It provides a light weight version of cryptographic operations. It means, that these operations don\u0026rsquo;t use any JCE provider. That\u0026rsquo;s why it is not necessary to install unrestricted policy files. Maybe you already saw that some classes of the BouncyCastle begin with Jce (Java Cryptography Extension) or with Jca(Java Cryptography Architecture). 
These classes use JCE provider. The light weight classes begin with Bc and as said above don\u0026rsquo;t use a JCE provider. The code for signing with light weight version would look like this:\nX509Certificate certificate = ...; X509CertificateHolder x509CertificateHolder = new X509CertificateHolder(certificate.getEncoded()); String certAlgorithm = certificate.getPublicKey().getAlgorithm(); CMSTypedData message = new CMSProcessableByteArray(dataToSign); AlgorithmIdentifier sigAlgId = new DefaultSignatureAlgorithmIdentifierFinder().find(\u0026#34;SHA256WithECDSA\u0026#34;); AlgorithmIdentifier digAlgId = new DefaultDigestAlgorithmIdentifierFinder().find(sigAlgId); AsymmetricKeyParameter privateKeyParameter = PrivateKeyFactory.createKey( softCert.getPrivateKey().getEncoded()); ContentSigner signer = new BcECDSAContentSignerBuilder(sigAlgId, digAlgId).build(privateKeyParameter); SignerInfoGeneratorBuilder signerInfoGeneratorBuilder = new SignerInfoGeneratorBuilder(new BcDigestCalculatorProvider()); SignerInfoGenerator infoGenerator = signerInfoGeneratorBuilder.build(signer, x509CertificateHolder); CMSSignedDataGenerator dataGenerator = new CMSSignedDataGenerator(); dataGenerator.addSignerInfoGenerator(infoGenerator); dataGenerator.addCertificate(x509CertificateHolder); CMSSignedData signedData = dataGenerator.generate(message, true); You get the same CMS container without installing any patches. 
You can verify the data with this code:\nCollection\u0026lt;SignerInformation\u0026gt; signers = cmsSignedData.getSignerInfos().getSigners(); List\u0026lt;SignerInformation\u0026gt; signerList = new ArrayList\u0026lt;\u0026gt;(signers); SignerInformation signerFromCMS = signerList.get(0); SignerId sid = signerFromCMS.getSID(); Store store = cmsSignedData.getCertificates(); Collection\u0026lt;X509CertificateHolder\u0026gt; certificateCollection = store.getMatches(sid); ArrayList\u0026lt;X509CertificateHolder\u0026gt; x509CertificateHolders = new ArrayList\u0026lt;\u0026gt;(certificateCollection); // we use the first certificate X509CertificateHolder x509CertificateHolder = x509CertificateHolders.get(0); BcECSignerInfoVerifierBuilder verifierBuilder = new BcECSignerInfoVerifierBuilder( new BcDigestCalculatorProvider()); SignerInformationVerifier verifier = verifierBuilder.build(x509CertificateHolder); boolean result = signerFromCMS.verify(verifier); Conclusion There are two ways to create a signature and to verify it. The first is to create a raw signature. This way is very short and clear, but it does not provide enough information about the signing process. The second way is to create a CMS container and is a little more complicated, but provides powerful tools to work with signatures. If you don\u0026rsquo;t want to use any JCE provider, you can use the light weight version of cryptographic operations provided by BouncyCastle.\n","date":"April 14, 2017","image":"https://reflectoring.io/images/stock/0025-signature-1200x628-branded_hu40d5255a109b1d14ac3f4eab2daeb887_126452_650x0_resize_q90_box.jpg","permalink":"/How%20to%20sign/","title":"Digital Signature in Java"},{"categories":["programming"],"contents":"For the coderadar project, I\u0026rsquo;m currently searching for a way to create a persistent model of a git commit history that contains the relationships between all commits and the files that were touched within these commits. 
And since coderadar is a code quality server, the model should also be able to express the history of (code quality) metrics on all files throughout all commits.\nMy first reflex was to model this with JPA entities in a relational database and build some really complex HQL queries to access the data. Turns out it works, but the relational schema required me to create a join table between the table representing commits and the table representing the files that were touched within the commits. For each commit, this table contained one entry for each file that exists at the time of the commit, even if the file was not modified within the commit, so that I could run queries on it! A test with a git repository containing about 7 500 files and a couple thousand commits resulted in 70 million entries in that join table! There has to be a different solution that does not waste millions of bytes in join tables. Hence, I had a look at Neo4j.\nWhy Neo4j? Neo4j is a graph database. A graph allows modelling of entities (nodes) and their relationships to each other. A git commit history is also a graph of commits with parent/child relationships. Throw in relationships between files and commits and relationships between files and code quality metrics and we have the model I\u0026rsquo;m looking for.\nAlso, Neo4j has pretty good support with Spring Data, which is used in the coderadar project. In addition, the learning curve is not as steep as I initially thought, having only worked with relational databases before. The Getting Started Guide is quite helpful and I was able to learn the basics of Neo4j within just one afternoon.\nThe Graph It turns out that I found modelling a graph database is much more fun than modelling a relational database, since you just draw nodes and edges and you have a model which can then be easily transferred into code using Spring Data Neo4j and Neo4j\u0026rsquo;s Object Graph Mapper (OGM). 
The following graph shows the model I came up with after some drawing with pen \u0026amp; paper.\n![Coderadar Graph]({{ base }}/assets/img/posts/coderadar-graph.png)\nCommit A commit node represents a commit in a git history. Every commit is a CHILD OF one or more other commits (except the first commit, which has no parent) and TOUCHES one or more files and file snapshots. A commit has a timestamp and a sha1-hash which serves as identifier.\nFile A file node represents a file during all of its life within the git repository. A file comes into existence when it is ADDED in a commit and can be MODIFIED or RENAMED over several following commits and finally it can be DELETED in a final commit. Thus, each file node is connected to one or more commit nodes via a relationship that specifies the type of change the file experienced within that commit. If a file is not modified within a certain commit, there will be no relationship between the two nodes.\nThe other way around, a commit TOUCHES certain files. This relationship is optional from a model point of view, since we already have the relationship in the other direction. However, bi-directional relationships make working with the Object Graph Mapper easier in some cases. For example, if you want to store a node, OGM automatically also stores the nodes that are connected to that node by outgoing relationships. I can now simply store a Commit node and all TOUCHED file nodes will be saved, too, all within a single call to the database.\nA file node has a single fileId as attribute, which just serves as an identifier for the file.\nFileSnapshot While a file node represents a file over its whole lifetime, a file snapshot node represents a file at the point in time of a certain commit only. This is necessary, since a file can be renamed during its lifetime, and we need some way to identify a file by its name. 
So, a file snapshot node has a path attribute that contains the file\u0026rsquo;s path at the point in time of a certain commit.\nMetric A metric node represents a certain code quality metric (like cyclomatic complexity). It has a metricId attribute which serves as identifier for the metrics. A metric node can have relationships with multiple file snapshot nodes, which represent that the file snapshot has been MEASURED with this metric. This relationship has the attribute value which specifies the value of the metric in the file snapshot.\nQuerying the Graph So, what\u0026rsquo;s the big thing? Couldn\u0026rsquo;t we have done the same with a relational database? Admittedly, you can model database tables very similar to the graph in the image above. However, note that we only have to connect commit nodes with files and file snapshots of files that were touched in that commit and not with ALL files that exist at the time of the commit, thus saving a quadratic amount of storage space.\nAlso, querying the graph is much easier with Neo4j\u0026rsquo;s query language Cypher than it is with SQL (or HQL for that matter). Take this query, which recursively loads all files that were TOUCHED in any commit previous to a specified commit but have not yet been DELETED.\nMATCH (child:Commit {name:\u0026lt;COMMIT_NAME\u0026gt;})-[:IS_CHILD_OF*]-\u0026gt;(parent:Commit), (parent)-[:TOUCHED]-\u0026gt;(file:File) WHERE NOT ((file)-[:DELETED_IN_COMMIT]-\u0026gt;()) RETURN file Try to find an equivalent in SQL for this query that does not rely on having a fully filled join table between commits and files! I guess you could create such a query if the database supports hierarchical queries, but those queries would be much harder to create and much less readable.\nWrap-Up My first experience with Neo4j has been quite refreshing. 
Knowing a lot about JPA and relational databases I had to open up to the graph concept but I was quickly convinced of the expressive nature of a graph database and the elegance in which I could create and query a graph database for my use case.\nI\u0026rsquo;m going to continue building a graph database for coderadar to evaluate performance and maintainability and may report the current state again in a later post. If you want to have a first-hand look at code, you may want to look at the coderadar-graph module of coderadar. It contains some unit tests that show how to work with Spring Data together with Neo4j.\n","date":"April 14, 2017","image":"https://reflectoring.io/images/stock/0050-git-1200x628-branded_hue893d837883783866d1e88c8e713ed74_236340_650x0_resize_q90_box.jpg","permalink":"/git-neo4j/","title":"Modeling Git Commits with Neo4j"},{"categories":["Software Craft"],"contents":"The term \u0026ldquo;refactoring\u0026rdquo; is well defined in software development. It is usually used to describe a restructuring of source code ranging from simply renaming a variable to completely re-thinking whole components or applications.\nHowever, the term \u0026ldquo;refactoring\u0026rdquo; is rarely used when talking about restructuring database schemas. But databases are a very important part in most (web) applications developed today. And the structure of a database changes almost as often as the code itself. Thus, a refactoring of the database structure should be done just as careful as refactoring the source code.\n2nd Class Database Refactoring Usually, when a change in the source code requires a change in the database schema we create an SQL script that makes that change (e.g. adding or removing a table or a column in a table). 
In the best case, that script is put into version control next to the source code of the application.\nWhen releasing a new version of the application, we now have to remember to run that script on the target database (and all other scripts that have accumulated in the meantime). Commonly, this is a manual step during the release and thus is prone to error.\nWhile source code is a first class citizen, SQL scripts are often neglected. All because we want to create features with business value instead of handling SQL scripts.\nDatabase Refactoring Done Right Yes, we want to create business value. But why not make our lives easier by automating database refactoring with a tool? Everything that runs automatically prevents errors and saves time in the long run which we can use to develop features for the business. So, what does automated database refactoring look like?\nFirst, we have to collect changes to the database schema as described above. Depending on which tool we use, these changes are described in SQL, XML, JSON or YAML, for example. These scripts are numbered and put into version control, just as described above.\nThe difference to the naive approach of manual database refactoring is that we use a tool to apply the scripts to a target database. When we want to update a target database, we simply run a command and the tool executes all scripts for us. A big plus here is that the tool knows which scripts have already run on the target database and only executes those scripts that have not yet been run. Usually, the tool uses a separate database table to store that information.\nAnother feature of a database refactoring tool is an integrity check on the scripts. If a script has already been run on a target database, then changed and then the tool is run again, it will fail with an error message. This prevents having diverging database schema versions on different target databases. 
For this integrity check, the database refactoring tools usually store a hash value of the scripts in a special database table.\nTools A database refactoring tool does nasty manual labor for us and reduces the risk of having diverging database schemas on different server environments. Pretty good arguments.\nTwo commonly used database refactoring tools are Liquibase and Flyway. Both are written in Java, but need only minimal Java knowledge to be run and thus qualify for non-Java projects. As usual, each has a set of advantages and disadvantages which will be discussed in a future blog post.\n","date":"April 1, 2017","image":"https://reflectoring.io/images/stock/0046-rack-1200x628-branded_hu38983fac43ab7b5246a0712a5f744c11_252723_650x0_resize_q90_box.jpg","permalink":"/tool-based-database-refactoring/","title":"Tool-based Database Refactoring"},{"categories":["Java"],"contents":"From time to time we need a randomly generated number in Java. In this case we normally use java.util.Random, which provides a stream of pseudo-randomly generated numbers. But there are some use cases in which the direct usage may cause some unexpected problems.\nThis is the ordinary way to generate a number:\n// Random Random random = new Random(); random.nextInt();//nextDouble(), nextBoolean(), nextFloat(), ... Alternatively, we can use the Math class:\n// Math Math.random(); The Math class simply holds an instance of Random for generating numbers.\n// Math public static double random() { Random rnd = randomNumberGenerator; if (rnd == null) rnd = initRNG(); // return a new Random Instance  return rnd.nextDouble(); } According to the Javadoc, the usage of java.util.Random is thread safe. But the concurrent use of the same Random instance across different threads may cause contention and consequently poor performance. The reason for this is the usage of so-called seeds for the generation of random numbers. 
A seed is a simple number which provides the basis for the generation of new random numbers. This happens within the method next() which is used within Random:\n// Random protected int next(int bits) { long oldseed, nextseed; AtomicLong seed = this.seed; do { oldseed = seed.get(); nextseed = (oldseed * multiplier + addend) \u0026amp; mask; } while (!seed.compareAndSet(oldseed, nextseed)); return (int)(nextseed \u0026gt;\u0026gt;\u0026gt; (48 - bits)); } First, the old seed and a new one are stored over two auxiliary variables. The principle by which the new seed is created is not important at this point. To save the new seed, the compareAndSet() method is called. This replaces the old seed with the next new seed, but only under the condition that the old seed corresponds to the seed currently set. If the value was manipulated in the meantime by a concurrent thread, the method returns false, which means that the old value did not match the expected value. This is done within a loop until the value matches the expected one. And this is the point which could cause poor performance and contention.\nThus, the more threads are actively generating new random numbers with the same instance of Random, the higher the probability that the above mentioned case occurs. For programs that generate many (very many) random numbers, this procedure is not recommended. In this case you should use ThreadLocalRandom instead, which was added to Java in version 1.7.\nThreadLocalRandom extends Random and adds the option to restrict its use to the respective thread instance. For this purpose, an instance of ThreadLocalRandom is held in an internal map for the respective thread and returned by calling current().\nThreadLocalRandom.current().nextInt() Conclusion The pitfall described above does not mean that it\u0026rsquo;s forbidden to share a Random instance between several threads. 
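To make the ThreadLocalRandom recommendation concrete, here is a small sketch that draws numbers from several threads at once, each thread using its own thread-local generator so no seed is shared (thread and iteration counts are arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class ThreadLocalRandomDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder sum = new LongAdder();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> {
                // current() returns the generator bound to this thread,
                // so there is no compareAndSet loop fighting over one seed
                for (int j = 0; j < 1_000_000; j++) {
                    sum.add(ThreadLocalRandom.current().nextInt(10));
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("drew 4000000 numbers, sum = " + sum.sum());
    }
}
```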
There is no problem with turning one or two extra rounds in a loop, but if you generate a huge amount of random numbers in different threads, just bear the above mentioned solution in mind. This could save you some debug time :)\n","date":"February 16, 2017","image":"https://reflectoring.io/images/stock/0049-dice-1200x628-branded_huc726402b703cb3f3253f5b5929a348f5_110706_650x0_resize_q90_box.jpg","permalink":"/how-to-random/","title":"A Random Pitfall"},{"categories":["Spring Boot"],"contents":"The Spring Boot gradle plugin provides the bootRun task that allows a developer to start the application in a \u0026ldquo;developer mode\u0026rdquo; without first building a JAR file and then starting this JAR file. Thus, it\u0026rsquo;s a quick way to test the latest changes you made to the codebase.\nSadly, most applications cannot be started or would not work correctly without specifying a couple of configuration parameters. Spring Boot supports such parameters with its application.properties file. The parameters in this file are automatically read when the application is started from a JAR and passed to the application.\nThe bootRun task also allows defining such properties. The common way of doing this is like this in the build.gradle file:\nbootRun { jvmArgs = [ \u0026#34;-DmyApp.myParam1=value1\u0026#34;, \u0026#34;-DmyApp.myParam2=value2\u0026#34; ] } However, if you are working on the codebase together with other developers, each developer may want to test different use cases and needs different configuration values. She would have to edit the build.gradle each time. And each time she checks in changes to the codebase, she has to check if the build.gradle file should really be checked in, which is not what we want.\nThe solution to this problem is a specific properties file for each developer\u0026rsquo;s local environment that is not checked into the VCS. Let\u0026rsquo;s call it local.application.properties. 
In this file, put your application\u0026rsquo;s configuration parameters just as you would in a real application.properties file.\nTo make the bootRun task load these properties, add the following snippet to your build.gradle:\ndef Properties localBootRunProperties() { Properties p = new Properties(); p.load(new FileInputStream( file(project.projectDir).absolutePath + \u0026#34;/local.application.properties\u0026#34;)) return p; } Then, in your bootRun task, fill the systemProperties attribute as follows:\nbootRun { doFirst { bootRun.systemProperties = localBootRunProperties() } } The call to localBootRunProperties() is put into the doFirst closure so that it gets executed only when the task itself is executed. Otherwise, all other tasks would fail with a FileNotFoundException if the properties file is not found, instead of only the bootRun task.\nFurther Reading  Spring Boot Gradle Plugin  ","date":"January 25, 2017","image":"https://reflectoring.io/images/stock/0013-switchboard-1200x628-branded_hu4e75c8ecd0e5246b9132ae3e09f147a6_167298_650x0_resize_q90_box.jpg","permalink":"/externalizing-properties-gradle-bootrun/","title":"Loading External Application Properties in the Gradle bootRun Task"},{"categories":["Software Craft"],"contents":"If you are new to git and/or GitHub, it\u0026rsquo;s easy to get overwhelmed by the different workflow models you can use to contribute code to a repository. At least, I was overwhelmed and it took some time for me to open up to new workflows and to get over the things I learned using good old SVN.\nThis post explains the basic fork and pull workflow model that is used on a lot of GitHub repositories. For each step in the workflow, I will list the necessary git commands and describe them briefly.
Thus, this post is aimed at git beginners who have so far hesitated to contribute on GitHub.\nFork \u0026amp; Pull Thinking about it, \u0026ldquo;Fork \u0026amp; Pull\u0026rdquo; is a pretty concise name for this workflow.\n Create a personal fork of the repository you want to contribute to Edit the fork to make the changes you want to contribute Create a pull request from the fork to propose your changes to the repository owner for merging  For the sake of simplicity, we can consider a fork to be a personal copy of the repository that can be edited by you even when you cannot edit the original repository. Creating a fork on GitHub is as easy as clicking the \u0026ldquo;fork\u0026rdquo; button on the repository page.\nThe fork will then appear in the list of your repositories on GitHub where you can clone it to your local machine and edit it. Once you are done editing, you push your commits back to the fork on GitHub.\nLastly, you submit a request to the owner of the original repository to pull your changes into the original repository - a pull request. This can be done by simply clicking the pull request button on the GitHub page of your fork. The owner of the original repository will then be notified of your changes and may merge them.
In the best case (when there are no merge conflicts), he can do this by simply clicking the \u0026ldquo;merge\u0026rdquo; button.\nGit Commands for a Simple Workflow The following steps are enough for creating a pull request if you don\u0026rsquo;t need to work on multiple pull requests to the same repository at once.\n  Create a Fork\nSimply click on the \u0026ldquo;fork\u0026rdquo; button of the repository page on GitHub.\n  Clone your Fork\nThe standard clone command creates a local git repository from your remote fork on GitHub.\n  git clone https://github.com/USERNAME/REPOSITORY.git  Modify the Code\nIn your local clone, modify the code and commit the changes using the git commit command.\n  Push your Changes\nIn your workspace, use the git push command to upload your changes to your remote fork on GitHub.\n  Create a Pull Request\nOn the GitHub page of your remote fork, click the \u0026ldquo;pull request\u0026rdquo; button. Wait for the owner to merge or comment on your changes and be proud when it is merged :). If the owner suggests some changes before merging, you can simply push these changes into your fork by repeating steps #3 and #4 and the pull request is updated automatically.\n  Additional Git Commands The commands listed above are enough for a simple pull request. In some cases, however, you need to know a couple more commands.\nUpdating your Fork Other developers don\u0026rsquo;t sleep while you are coding. Thus, it may happen that while you are editing your fork (step #3) other changes are made to the original repository.
To fetch these changes into your fork, use these commands in your fork workspace:\n# add the original repository as remote repository called \u0026#34;upstream\u0026#34; git remote add upstream https://github.com/OWNER/REPOSITORY.git # fetch all changes from the upstream repository git fetch upstream # switch to the master branch of your fork git checkout master # merge changes from the upstream repository into your fork git merge upstream/master Working on multiple Pull Requests at once If you are working on multiple features, you will want to keep them isolated from each other. Thus, you need to create a separate pull request for each feature. A pull request is always bound to a branch of a git repository, so you have to create a separate branch for each feature.\n# change to the master branch so the master serves as source branch for the # next command git checkout master # create and switch to a new branch for your feature git checkout -b my-feature-branch # upload the branch and all committed changes within it to the remote fork git push --set-upstream origin my-feature-branch Create a branch like this for each feature you are working on. To switch between branches, simply use the command git checkout BRANCHNAME. To create a pull request from a branch, go to the GitHub page of that branch and click the \u0026ldquo;pull request\u0026rdquo; button. GitHub automatically creates a pull request from the selected branch.\nUpdating a Feature Branch You may want to pull changes made to the original repository into a local feature branch. As described in Updating your Fork above, merge the upstream repository into your master branch.
Then rebase your feature branch from the updated master branch:\n# switch to your feature branch git checkout my-feature-branch # commit all changes in your feature-branch git commit -m MESSAGE # update your feature branch from the master branch git rebase master Conclusion The steps and commands described above should provide enough information to start using pull requests. Of course, there are more sophisticated workflows and git commands out there, but starting small reduces the fear of doing something wrong ;). So, start contributing pull requests to your favorite GitHub project today!\nFurther Reading  GitHub Standard Fork \u0026amp; Pull Request Workflow About collaborative development models About Pull Requests  ","date":"January 2, 2017","image":"https://reflectoring.io/images/stock/0050-git-1200x628-branded_hue893d837883783866d1e88c8e713ed74_236340_650x0_resize_q90_box.jpg","permalink":"/github-fork-and-pull/","title":"Github's Fork \u0026 Pull Workflow for Git Beginners"},{"categories":["Spring Boot"],"contents":"The first impression counts. When you\u0026rsquo;re developing an API of any kind, chances are that the first impression is gained from a look into the API docs. If that first impression fails to convince, developers will go on looking for another API they can use instead.\nWhy not Swagger? Looking for a tool to document a RESTful API, the first tool you probably come across is Swagger. Among other things, Swagger provides tooling for a lot of different programming languages and frameworks and allows automated creation of API documentation and even of a web frontend that can interact with your API. Also, Swagger is well established as a tool supporting the development of RESTful APIs.\nBut at least if you\u0026rsquo;re familiar with Java, there\u0026rsquo;s a compelling reason to use Spring Rest Docs instead of or at least additionally to Swagger: Spring Rest Docs integrates directly into your integration tests.
Tests will fail if you forget to document a field that you have just added to your API or if you removed a field that is still part of your API docs. This way, your documentation is always up-to-date with your implementation.\nThis article explains the basics of Spring Rest Docs along the lines of some code examples. If you want to see it in action, you may want to check out the coderadar project on GitHub.\nSnippet-Generating Integration Tests The following code snippet shows a simple integration test of a Spring MVC controller that exposes a REST API to create a project resource.\n@Test public void createProjectSuccessfully() throws Exception { ProjectResource projectResource = ... mvc().perform(post(\u0026#34;/projects\u0026#34;) .content(toJson(projectResource)) .contentType(MediaType.APPLICATION_JSON)) .andExpect(status().isOk()) .andDo(document(\u0026#34;projects/create\u0026#34;)); } Let\u0026rsquo;s have a look at the details: mvc() is a helper method that creates a MockMvc object that we use to submit a POST request to the URL /projects. The result of the request is passed into the document() method to automatically create documentation for the request. The document() method is statically imported from the class MockMvcRestDocumentation to keep the code readable.\nThe MockMvc object returned by the method mvc() is initialized with a JUnitRestDocumentation object, as shown in the next code snippet.
This way, the MockMvc object is instrumented to create Asciidoctor snippets into the folder build/generated-snippets.\n@Rule public JUnitRestDocumentation restDocumentation = new JUnitRestDocumentation(\u0026#34;build/generated-snippets\u0026#34;); protected MockMvc mvc() { return MockMvcBuilders.webAppContextSetup(applicationContext) .apply(MockMvcRestDocumentation.documentationConfiguration(this.restDocumentation)) .build(); } When the test is executed, Spring Rest Docs will now generate snippets into the snippets folder that contain an example request and an example response. The following snippets would be generated into the folder build/generated-snippets/projects/create.\nhttp-request.adoc:\n[source,http,options=\u0026#34;nowrap\u0026#34;] ---- POST /projects HTTP/1.1 Content-Type: application/json Host: localhost:8080 Content-Length: 129 { \u0026#34;name\u0026#34; : \u0026#34;name\u0026#34;, \u0026#34;vcsType\u0026#34; : \u0026#34;GIT\u0026#34;, \u0026#34;vcsUrl\u0026#34; : \u0026#34;http://valid.url\u0026#34;, \u0026#34;vcsUser\u0026#34; : \u0026#34;user\u0026#34;, \u0026#34;vcsPassword\u0026#34; : \u0026#34;pass\u0026#34; } ---- http-response.adoc:\n[source,http,options=\u0026#34;nowrap\u0026#34;] ---- HTTP/1.1 201 Created Content-Type: application/hal+json;charset=UTF-8 Content-Length: 485 { \u0026#34;name\u0026#34; : \u0026#34;name\u0026#34;, \u0026#34;vcsType\u0026#34; : \u0026#34;GIT\u0026#34;, \u0026#34;vcsUrl\u0026#34; : \u0026#34;http://valid.url\u0026#34;, \u0026#34;vcsUser\u0026#34; : \u0026#34;user\u0026#34;, \u0026#34;vcsPassword\u0026#34; : \u0026#34;pass\u0026#34;, \u0026#34;_links\u0026#34; : { \u0026#34;self\u0026#34; : { \u0026#34;href\u0026#34; : \u0026#34;http://localhost:8080/projects/1\u0026#34; }, \u0026#34;files\u0026#34; : { \u0026#34;href\u0026#34; : \u0026#34;http://localhost:8080/projects/1/files\u0026#34; }, \u0026#34;analyzers\u0026#34; : { \u0026#34;href\u0026#34; : \u0026#34;http://localhost:8080/projects/1/analyzers\u0026#34; 
}, \u0026#34;strategy\u0026#34; : { \u0026#34;href\u0026#34; : \u0026#34;http://localhost:8080/projects/1/strategy\u0026#34; } } } ---- These examples already go a long way to documenting your REST API. Examples are the best way for developers to get to know your API. The snippets automatically generated from your test don\u0026rsquo;t help when they rot in your snippets folder, though, so we have to expose them by including them in a central documentation of some sort.\nCreating API Docs with Asciidoctor With the snippets at hand, we can now create our API documentation. The snippets are in Asciidoctor format by default. Asciidoctor is a markup language similar to Markdown, but much more powerful. You can now simply create an Asciidoctor document with your favorite text editor. That document will provide the stage for including the snippets. An example document would look like this:\n= My REST API v{version}, Tom Hombergs, {date} :doctype: book :icons: font :source-highlighter: highlightjs :highlightjs-theme: github :toc: left :toclevels: 3 :sectlinks: :sectnums: [introduction] == Introduction ... some warm introductory words... . == Creating a Project === Example Request include::{snippets}/projects/create/http-request.adoc[] === Example Response include::{snippets}/projects/create/http-response.adoc[] The document above includes the example HTTP request and response snippets that are generated by the integration test above. While it could be fleshed out with a little more text, the documentation above is already worth its weight in gold (imagine each byte weighing a pound or so\u0026hellip;). Even if you change your implementation, you will not have to touch your documentation, since the example snippets will be generated fresh with each build and thus be up-to-date at all times!
You still have to include the generation of your snippets into your build though, which we will have a look at in the next section.\nIntegrating Documentation into your Build The integration tests should run with each build. Thus, our documentation snippets are generated with each build. The missing step now is to generate human-readable documentation from your asciidoctor document.\nThis can be done using the Asciidoctor Gradle Plugin when you\u0026rsquo;re using Gradle as your build tool or the Asciidoctor Maven Plugin when you\u0026rsquo;re using Maven. The following examples are based on Gradle.\nIn your build.gradle, you will first have to define a dependency on the plugin:\nbuildscript { repositories { jcenter() maven { url \u0026#34;https://plugins.gradle.org/m2/\u0026#34; } } dependencies { classpath \u0026#34;org.asciidoctor:asciidoctor-gradle-plugin:1.5.3\u0026#34; } } Next, you create a task that calls the plugin to parse your asciidoctor document and transform it into a human-readable HTML document. Note that in the following example, the asciidoctor document must be located in the folder src/main/asciidoc and that the resulting HTML document is created at build/docs/html5/\u0026lt;name_of_your_asciidoc\u0026gt;.html.\next { snippetsDir = file(\u0026#34;build/generated-snippets\u0026#34;) } asciidoctor { attributes \u0026#34;snippets\u0026#34;: snippetsDir, \u0026#34;version\u0026#34;: version, \u0026#34;date\u0026#34;: new SimpleDateFormat(\u0026#34;yyyy-MM-dd\u0026#34;).format(new Date()), \u0026#34;stylesheet\u0026#34;: \u0026#34;themes/riak.css\u0026#34; inputs.dir snippetsDir dependsOn test sourceDir \u0026#34;src/main/asciidoc\u0026#34; outputDir \u0026#34;build/docs\u0026#34; } Next, we include the asciidoctor task to be run when we execute the build task, so that it is automatically run with each build.\nbuild.dependsOn asciidoctor Wrap-Up Done! We just created automated documentation that is updated with each run of our build.
Let\u0026rsquo;s sum up a few facts:\n Documentation of REST endpoints that are covered with a documenting integration test is automatically updated with each build and thus stays up-to-date with your implementation Documentation of new REST endpoints is only added once you have created a documenting integration test for the endpoint You should have 100% test coverage of REST endpoints and thus 100% of your REST endpoints documented (this does not necessarily mean 100% line coverage!) You have to do a little manual documentation to create the frame that includes the automatically generated snippets You have your documentation right within your IDE and thus always at hand to change it if necessary  There\u0026rsquo;s more you can do with Spring Rest Docs, which will be covered in future posts:\n document the fields of a request or response document field type constraints document hypermedia (HATEOAS) links \u0026hellip;  If you want to see these features in a live example, have a look at the coderadar REST API or at the coderadar sources on GitHub. If you want to dive deeper into the features of Spring Rest Docs, have a look at the good reference documentation.\nAny questions? Drop a comment!\n","date":"December 19, 2016","image":"https://reflectoring.io/images/stock/0016-pen-1200x628-branded_hu01476d2ce863620c75f8f9d54074a6bf_114085_650x0_resize_q90_box.jpg","permalink":"/spring-restdocs/","title":"Documenting your REST API with Spring Rest Docs"},{"categories":["Java"],"contents":"A common use case for build tools like Ant, Maven or Gradle is to retrieve the current revision number of the project sources in the Version Control System (VCS), in many cases Subversion (SVN). This revision number is then used in the file names of the build artifacts, for example. As mature build tools, Ant and Maven provide plugins to access the current revision number of the SVN working copy. But how about Gradle?
Having recently moved from Ant to Gradle in a ~500.000 LOC Java project, I can say that Gradle offers a lot of well-thought-out features that make life easier. However, getting the Subversion revision number of a project workspace is not one of them. It\u0026rsquo;s remarkably easy to do it yourself, though, as shown in the code snippet below.\nimport org.tmatesoft.svn.core.wc.* buildscript { repositories { mavenCentral() } dependencies { classpath group: \u0026#39;org.tmatesoft.svnkit\u0026#39;, name: \u0026#39;svnkit\u0026#39;, version: \u0026#39;1.7.11\u0026#39; } } def getSvnRevision(){ ISVNOptions options = SVNWCUtil.createDefaultOptions(true); SVNClientManager clientManager = SVNClientManager.newInstance(options); SVNStatusClient statusClient = clientManager.getStatusClient(); SVNStatus status = statusClient.doStatus(projectDir, false); SVNRevision revision = status.getRevision(); return revision.getNumber(); } allprojects { version = \u0026#39;1.2.3.\u0026#39; + getSvnRevision() } Using the buildscript closure, you can define dependencies that are only available in your build script (i.e. these dependencies do not spill into the dependencies of your project). This way, you can add the dependency on tmatesoft\u0026rsquo;s SVNKit to your build. SVNKit provides a Java API to Subversion functionality.\nBy defining a function (named getSvnRevision() in the snippet above), you can then simply use SVNKit to retrieve the current SVN revision number from your working copy. This function can then be called anywhere in your Gradle build script. In the case of the snippet above, I used it to append the revision number to a standard major/minor/bugfix versioning pattern.
This complete version number can then be used in Gradle subprojects.\n","date":"November 26, 2016","image":"https://reflectoring.io/images/stock/0050-git-1200x628-branded_hue893d837883783866d1e88c8e713ed74_236340_650x0_resize_q90_box.jpg","permalink":"/getting-svn-revision-in-gradle/","title":"Getting the current Subversion Revision Number in Gradle"},{"categories":["Spring Boot"],"contents":"When following a \u0026ldquo;code first\u0026rdquo; approach in API development, we start by writing code, and then we generate the API specification from the code, which then becomes the documentation.\n\u0026ldquo;Code first\u0026rdquo; is not the only way to develop an API. \u0026ldquo;API first\u0026rdquo; is another option where we do exactly the opposite. First, we write the specification, and then we generate code from that specification and implement against it.\nLet\u0026rsquo;s discuss the benefits of using this approach and how to implement it with Springdoc and Spring Boot.\n Example Code This article is accompanied by a working code example on GitHub. When to Choose the \u0026ldquo;Code First\u0026rdquo; Approach When we need to go to production fast, or prototype something, \u0026ldquo;code first\u0026rdquo; may be a good approach. Then we can generate our documentation from the API we have already programmed.\nAnother benefit of code first is the fact that the documentation will be generated from the actual code, which means that we don\u0026rsquo;t have to manually keep the documentation in sync with our code. The documentation is more likely to match the behavior of the code and is always up-to-date.\nExample Application In this article, we\u0026rsquo;ll be using Spring Boot together with springdoc-openapi.\nAll the annotations that we will be using are from Swagger.
Springdoc wraps Swagger and offers us a single dependency which we can use to create our API documentation.\nGetting Started To get started we only need to add the Springdoc dependency (Gradle notation):\nimplementation \u0026#39;org.springdoc:springdoc-openapi-ui:1.3.3\u0026#39; First, let\u0026rsquo;s define the path of our documentation. We define it in the application.yml of our Spring Boot project:\nspringdoc: api-docs: path: /reflectoring-openapi Springdoc will now add the endpoint /reflectoring-openapi to our application where it will beautifully display our endpoints. For more configuration properties please check the official documentation.\nDefining General API Information Next, let\u0026rsquo;s define some information about our API:\n@OpenAPIDefinition( info = @Info( title = \u0026#34;Code-First Approach (reflectoring.io)\u0026#34;, description = \u0026#34;\u0026#34; + \u0026#34;Lorem ipsum dolor ...\u0026#34;, contact = @Contact( name = \u0026#34;Reflectoring\u0026#34;, url = \u0026#34;https://reflectoring.io\u0026#34;, email = \u0026#34;petros.stergioulas94@gmail.com\u0026#34; ), license = @License( name = \u0026#34;MIT Licence\u0026#34;, url = \u0026#34;https://github.com/thombergs/code-examples/blob/master/LICENSE\u0026#34;)), servers = @Server(url = \u0026#34;http://localhost:8080\u0026#34;) ) class OpenAPIConfiguration { } Note that we don\u0026rsquo;t need to define the class above as a Spring bean. Springdoc will just use reflection to obtain the information it needs.\nNow, if we start the Spring Boot application and navigate to http://localhost:8080/swagger-ui/index.html?configUrl=/reflectoring-openapi/swagger-config, we should see the information we defined above:\nDefining the REST API Next, let\u0026rsquo;s add some REST endpoints. 
We\u0026rsquo;ll be building a TODO API with CRUD operations.\n@RequestMapping(\u0026#34;/api/todos\u0026#34;) @Tag(name = \u0026#34;Todo API\u0026#34;, description = \u0026#34;euismod in pellentesque ...\u0026#34;) interface TodoApi { @GetMapping @ResponseStatus(code = HttpStatus.OK) List\u0026lt;Todo\u0026gt; findAll(); @GetMapping(\u0026#34;/{id}\u0026#34;) @ResponseStatus(code = HttpStatus.OK) Todo findById(@PathVariable String id); @PostMapping @ResponseStatus(code = HttpStatus.CREATED) Todo save(@RequestBody Todo todo); @PutMapping(\u0026#34;/{id}\u0026#34;) @ResponseStatus(code = HttpStatus.OK) Todo update(@PathVariable String id, @RequestBody Todo todo); @DeleteMapping(\u0026#34;/{id}\u0026#34;) @ResponseStatus(code = HttpStatus.NO_CONTENT) void delete(@PathVariable String id); } With the @Tag annotation, we add some additional information to the API.\nNow, we have to implement this interface and annotate our controller with @RestController. This will let Springdoc know that this is a controller and that it should produce documentation for it:\n@RestController class TodoController implements TodoApi { // method implementations } Let\u0026rsquo;s start the application again and take a look at the Swagger UI. It should look something like this:\nSpringdoc did its magic and created documentation for our API!\nLet\u0026rsquo;s dive a little more into Springdoc by defining a security scheme.\nDefining a Security Scheme To define a security scheme for our application, we just need to add the @SecurityScheme annotation in one of our classes:\n// other annotations omitted @SecurityScheme( name = \u0026#34;api\u0026#34;, scheme = \u0026#34;basic\u0026#34;, type = SecuritySchemeType.HTTP, in = SecuritySchemeIn.HEADER) class OpenAPIConfiguration { } The above @SecurityScheme will be referred to as api and will do basic authentication via HTTP.
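As a side note, HTTP basic authentication simply sends user:password base64-encoded in the Authorization header. A quick sketch of what the client sends (the credentials are made up):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {

    // Build the value of the Authorization header for HTTP basic auth:
    // "Basic " followed by base64(user + ":" + password).
    static String headerValue(String user, String password) {
        String credentials = user + ":" + password;
        byte[] bytes = credentials.getBytes(StandardCharsets.UTF_8);
        return "Basic " + Base64.getEncoder().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(headerValue("user", "pass")); // Basic dXNlcjpwYXNz
    }
}
```

This is the header the Swagger UI constructs for you after you fill in the authentication dialog.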
We add this annotation in the OpenAPIConfiguration class.\nLet\u0026rsquo;s see what this annotation produced for us:\nOur documentation now also has an \u0026ldquo;Authorize\u0026rdquo; button! If we press this button, we will get a dialog where we can authenticate:\nTo define that an API endpoint uses the above security scheme, we have to annotate it with the @SecurityRequirement annotation.\nNow, the TodoApi looks like this:\n@RequestMapping(\u0026#34;/api/todos\u0026#34;) @Tag(name = \u0026#34;Todo API\u0026#34;, description = \u0026#34;euismod in pellentesque ...\u0026#34;) @SecurityRequirement(name = \u0026#34;api\u0026#34;) interface TodoApi { // other methods omitted } Now, the Swagger UI will show a lock on each of our endpoints to mark them as \u0026ldquo;secured\u0026rdquo;:\nActually, the endpoints are not secured yet. If we try to request the /api/todos resource, for example, we will still be able to receive the data without authentication:\nWe have to implement the actual security ourselves. See the code in the repository for the full implementation with Spring Security.\nAfter securing the application, we can see that we receive a 401 status code if we try to access any resource under /api/todos.\nAfter authenticating, we can access the resource again:\nCaveats When Using Code First The Code First approach is really easy to use and can get you to a well-documented REST API pretty fast.\nSometimes, however, it might give us the sense that our documentation is up-to-date when it is actually not. That\u0026rsquo;s because annotations can be added or removed accidentally. Unlike code, they\u0026rsquo;re not executed during unit tests, so the documentation behaves more like Javadoc than code in terms of outdatedness.\nA solution to that problem is Spring REST docs, which creates documentation based on tests.\nIf a test fails, it means that the documentation won\u0026rsquo;t be created.
That way, our REST API documentation becomes part of the actual code and its lifecycle.\nConclusion As we saw in this article, the \u0026ldquo;code first\u0026rdquo; approach with Springdoc is all about speed. First, we build our API in code, then we generate the specification/documentation via annotations. Springdoc elevates Swagger and helps us create our OpenAPI Specification.\nIf you want to have a deeper look, browse the code on GitHub.\n","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0016-pen-1200x628-branded_hu01476d2ce863620c75f8f9d54074a6bf_114085_650x0_resize_q90_box.jpg","permalink":"/spring-boot-springdoc/","title":"'Code First' API Documentation with Springdoc and Spring Boot"},{"categories":null,"contents":"Hi,\nI\u0026rsquo;m Tom, and I run the reflectoring blog.\nI\u0026rsquo;m a software developer, consultant, architect, coach \u0026hellip; whatever the role, I\u0026rsquo;m focused on making things simple.\n The mission of this blog is to provide comprehensive and easy-to-read learning experiences that generate \u0026ldquo;aha\u0026rdquo; moments when you need to solve a specific problem.\n With this blog, my team of authors and I produce:\n deep-dive tutorials about Spring Boot hands-on tutorials about Java hands-on tutorials about Node.js hands-on tutorials about AWS opinion pieces on practices of the Software Craft the occasional book notes of a (non-fiction) book I\u0026rsquo;ve read. \u0026hellip; and more.  
If you\u0026rsquo;re interested in working with me or have any feedback about my writing, don\u0026rsquo;t hesitate to contact me.\n","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0016-pen-1200x628-branded_hu01476d2ce863620c75f8f9d54074a6bf_114085_650x0_resize_q90_box.jpg","permalink":"/about/","title":"About Reflectoring"},{"categories":null,"contents":"Looking to put your brand or product in front of more than 150,000 Java developers per month?\nRead on, this page details the different sponsorship options.\nSponsorship Opportunities Here\u0026rsquo;s a quick overview of the options for sponsoring reflectoring:\n   Sponsorship opportunity Price Details     Exclusive Ad Placement $325 / week Details   Sponsored Blog Post $750 Details   Sponsored Newsletter $150 Details   Sponsored Newsletter with personal endorsement $300 Details    Get in touch if you have any questions or want to discuss a sponsoring opportunity.\nWhat Customers Say  Tom and the Reflectoring brand represent a fantastic opportunity for any organization such as ours that is seeking to build authentic engagement with a developer audience. This is primarily the case because he writes so clearly from a \u0026ldquo;for developers by developers\u0026rdquo; perspective. The use cases and the value streams are optimized for real-world consumption and even implementation. In addition to this practical, hands-on approach with individual posts, the content offered across the Reflectoring site and in its email newsletter is positive, encouraging and insightful.\n Matt Hines - Director of Product Marketing at Logz.io\nThe reflectoring Audience The audience of the reflectoring blog consists of mainly professional JVM developers who come to the site through search engines to solve a problem. About 20% of the audience is located in the U.S. 
followed by 15% in India, and 5% in Germany.\nHere are some stats (numbers from November, 2021):\n   Metric Value     Monthly Active Users 180,000   Monthly Sessions \u0026gt; 200,000   Newsletter Subscribers 5,000   Average Newsletter Open Rate 42%   Average Newsletter Click Rate 9%    Exclusive Ad Placement You get to be the exclusive advertiser on reflectoring for a time. Exclusive meaning the only external advertiser, as I will still show ads for my own products.\nYou can place the following ads:\n a skyscraper ad at the top of the right sidebar (any size up to 260 x 600 pixels) an in-content banner ad about half-way through the page (any size up to 720 x 300 pixels)  These ads will appear on all content pages across reflectoring.\nYou provide the images for the ads and a URL where they should link. Or, you just provide a logo, a text message, and a URL, and I will create the ads from that.\nI\u0026rsquo;m happy to discuss other options as well.\nSponsored Blog Post I will write a technical blog post about your product or framework and make sure that the article is search engine optimized for it to get the most traffic possible.\nThe article solves a specific problem that users would be searching for via search engines in order to provide value to my developer audience. Some example ideas:\n \u0026ldquo;Complete Guide to Authentication and Authorization with \u0026lt;Product X\u0026gt; and Spring Boot\u0026rdquo; (if your product is an auth provider) \u0026ldquo;Implement Logging with \u0026lt;Product Y\u0026gt; and Java\u0026rdquo; (if your product is a logging provider) \u0026ldquo;Testing with \u0026lt;Product Z\u0026gt; in Kotlin\u0026rdquo; (if your product is a testing tool)  The finished article will be a well-structured, learning-friendly deep dive into the topic and have something between 1,500 and 2,000 words. 
It will include code examples from a working example application that I will write myself and host on GitHub.\nI\u0026rsquo;m happy to discuss ideas with you so we can choose a topic that we think will have the most impact.\nYou can provide me with URLs to link to in the text and I will place them where they make the most sense to the reader.\nAlso, I will include a link and an intro to the article in my weekly newsletter.\nSponsored Newsletter You leave a message to the audience of my weekly newsletter.\nThe newsletter usually contains a link to the latest article on the page, together with a short inspirational text from me (see examples here).\nYou provide a logo and a message of up to about 60 words, plus a URL to link to, and it will show up as the first thing in the newsletter.\nI can also write the message myself, including an authentic endorsement for your product, making it much more effective. To stay authentic, I only write endorsements for products which I have used myself and which have proven their value to me.\nGet in Touch The above are rough guidelines, so I\u0026rsquo;m happy to discuss options. Fill out the form below or reach out to tom@reflectoring.io to get in touch!\nI want to Sponsor Reflectoring!  Your name *  Email Address *  Website  Your message *   Send Now    ","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0016-pen-1200x628-branded_hu01476d2ce863620c75f8f9d54074a6bf_114085_650x0_resize_q90_box.jpg","permalink":"/advertise/","title":"Advertise on Reflectoring"},{"categories":null,"contents":"Successful Review All payments are subject to a successful review and the publishing of the article in question. I will not publish low-quality articles. In cases where the quality is not what we expect, we will always give feedback on how to improve the quality to finally publish them. 
In rare cases, when the text quality is poor, we will not review an article at all.\nThe writing guidelines define the basis for judging the quality of an article during review.\nPayment Tiers Topics are categorized in tiers according to their value.\n Tier 1 topics are the most valuable topics for any number of reasons. They usually bring more value to the readers and are expected to have more readers. Tier 2 topics are standard topics that are perhaps not so novel, or not so valuable as tier 1 topics.  You can see the tier of a topic in the labels of the corresponding Trello card.\nPayment Metric The payment will be determined by the number of words in the article. While the number of words is a poor metric for the quality of an article, it’s usually a good metric for the effort that went into creating the article.\nQuality is assured during the review. Quantity defines the payment.\nUnnecessarily long sentences, phrases, and code will be removed during review and will not improve payment.\nTo determine the number of words, the published article is pasted into WordCounter (including code examples).\nPayment Rates    Tier per 100 words 500 words article (example) 1000 words article (example) 2000 words article (example)     Tier 1 $6 $30 $60 $120   Tier 2 $4.50 $22.50 $45 $90    The payment rates are specified in US-Dollars.\nPayment via Wise I prefer Wise as the service for paying authors. Its fees for international money transfers are very transparent and cheap compared to PayPal, which has very expensive exchange fees.\nPlease set up a Wise account using this link if you don\u0026rsquo;t have an account already and set up an account that can receive any currency (affiliate link). 
Under Manage \u0026gt; Account \u0026gt; Receiving by email or phone, enable the setting that allows your Wise account to be found by email:\nMake sure that you connect a bank account (not just a UPI account).\nThen, all you need to do after an article has been published is to let me know the email address you used for your Wise account and I can transfer the money.\nPayment via PayPal If Wise is not an option for you, we can use PayPal. PayPal has some hidden fees, however, which might mean that it takes some of the money before it arrives in your account.\nAfter an article is published, let me know your PayPal email address and I\u0026rsquo;ll transfer the money.\nI want to Write!  Full name *  Email Address *  Tell me a bit about you *   Send Now    ","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0016-pen-1200x628-branded_hu01476d2ce863620c75f8f9d54074a6bf_114085_650x0_resize_q90_box.jpg","permalink":"/contribute/author-payment/","title":"Author Payment"},{"categories":null,"contents":"This document explains the workflow of writing articles for reflectoring to make our cooperation as productive as possible for both of us.\nIf you have any questions at all, don’t hesitate to reach out via email to tom@reflectoring.io.\nOverview The Workflow in a Nutshell The workflow is pretty straightforward:\n You pick a topic You create an outline An editor reviews the outline You write the article An editor reviews the article I publish the article I process payment  These steps will be described in more detail in the rest of this document.\nTracking the Work We use Trello as the tool of choice to track work on the reflectoring blog. You will have gotten an invite email with a link to the Trello board.\nCommunication Communication is key to avoid misunderstandings and unnecessary work. All communication concerning an article should take place on the Trello card of that article. 
If you need feedback or have any questions while working on an article, add a comment to the Trello card and mention the user @Reflectoring Editors. Your editor will get back to you. Please be patient, because the editors don\u0026rsquo;t look into their messages every day.\nPlease rather communicate too much than too little!\nTimeframe Expectations You can write an article in a day or in a month, I don’t care much (longer than a month is tedious to follow up, though, so have a good reason). I do expect you to give me a due date, though, so I know when to expect a result.\nIt’s not a big deal if something happened and you don’t make that date, but expect me to ask when that date is past. Please fill the “Due Date” field in the Trello card of your article and keep it up-to-date.\n0 - You Bring a Topic (Optional) Submit the Topic If you have a topic that you would like to write about and that you think is a good fit for the reflectoring blog, simply create your own Trello card in the “Propose your topic” column.\nRequest a Review of the Topic Mention the editors on the card (@Reflectoring Editors) so they can review the topic and decide the next steps.\n1 - You Pick a Topic Browse Available Topics Pick a topic from the column “Topics ready to pick”.\nChange the Status to OUTLINING Once you have found a topic or created your own, add yourself to the card, set a due date, and move it to the “Outlining” column.\n2 - You Create an Outline Research an Outline If you’re very familiar with the topic, you may know off the top of your head what to write. Sometimes, though, you need to research a topic more deeply to create an outline of the article.\nThis research may require you to create a code example to try out some things.\nWrite the Outline With your research in mind, think of the structure of your article and the sub-headings it will have. Will it have only one level of headings? Or will it need a deeper structure? What is the order of things to discuss? 
Where will you provide code examples? Is there a certain point or warning you want to make that should be displayed in an info or warning box?\nPrepare an outline with section headings and a few bullet-points with the content you plan for each section.\nThe outline is just an orientation and a means of checking that we have the same understanding of the topic. It’s better to discuss changes to an outline than changes to an already written article - it avoids unnecessary work. If, while writing, you find that you need to deviate from the outline, do it.\nPost the outline in the description of the topic card (Trello supports Markdown!) and mention the editors (@Reflectoring Editors) in a comment to let your editor know to review the outline.\n3 - An Editor Reviews the Outline Timeframe Expect feedback on the outline within a day or two. Nudge your editor by adding @Reflectoring Editors in a comment on the Trello card if you still don’t have feedback after 3 or so days (the editors don\u0026rsquo;t look every day, so please be patient).\nAddress Comments If there are any remarks on the Outline that need addressing, the Editor will add them in a comment to the Trello card so you can address them and submit it for review again by mentioning me.\nOnce the outline has been reviewed, your editor will move it to the WRITING column and you can start working on the article.\n4 - You Write the Article Writing Guidelines Once the outline has been reviewed, write the article in Markdown. Please read through the Writing Guidelines to create high-quality content.\nSet a Due Date Please set a due date to the card you’re working on so your editor knows when to expect a draft for review. You can change the due date at any time, this is just a tool for editors to keep up with their authors. 2 days and 5 days after the due date, you will get an automatic reminder to update the due date. 
10 days after the due date, the card will be marked as \u0026ldquo;inactive\u0026rdquo;.\n(Optional) Local Preview If you want to preview the article in the real layout and design, follow the instructions on the reflectoring GitHub repository to set up the blog on your local machine.\nSubmit the Article via Pull Request When you have the text ready for review, create a pull request. The blog is just like a software project, so if you have worked with GitHub before, this should be familiar to you. If you haven’t created pull requests before, follow the instructions on the reflectoring GitHub repository.\nMake sure to activate the checkbox “Allow edits from maintainers”!\nOnline Preview Once you have created a pull request, an online preview will be generated automatically. Please check the article in the online preview before requesting a review.\nTo access the online preview, click on “Show all checks” in the detail view of your pull request, and then on “Details” next to the netlify check.\nSubmit the Code Examples via Pull Request Most articles will require some code examples to prove the ideas or discussion in the text. As a general rule, all code examples must be included in the code-examples GitHub repository. Create a pull request that contains the code examples. You can find instructions on the GitHub page of the repository.\nSubmit the Topic for Review Mention the editors @Reflectoring Editors in a comment on the Trello card to let me know that the topic is ready for review.\n5 - Editor Reviews the Article Timeframe I’m usually pretty quick in giving feedback to submitted articles. Give me a day or two to respond. In the response you’ll either get the feedback directly or a timeframe in which to expect it.\nNudge your editor by adding @Reflectoring Editors in a comment on the Trello card if you didn’t get feedback within a week.\nMinor Changes I will make minor changes to the article during the review. 
This includes fixing typos or increasing the readability of certain phrases.\nMajor Changes If major changes are necessary, I will return the Trello card to the WRITING column and assign it back to you to address the requested changes. I will add a comment to the Trello card explaining my request.\nNote that every article will require at least one round of changes from your side.\nI reserve the right to decline articles completely if during a review it becomes clear to me that too much work is necessary to make it high-quality content.\n6 - I Publish the Article Merging the Pull Request When the review is finished and all changes have been addressed, I will publish the article by merging the pull request.\n7 - I Process Payment Provide your Paypal Account Be sure to provide the email address of your Paypal account so I can process payments for finished articles.\nIf Paypal doesn\u0026rsquo;t work for you, feel free to suggest an alternative payment channel (that does not incur horrendous transfer fees).\nReceiving Payment At least once a week, I will process payments for articles that have been published.\nI want to Write!  Full name *  Email Address *  Tell me a bit about you *   Send Now    ","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0016-pen-1200x628-branded_hu01476d2ce863620c75f8f9d54074a6bf_114085_650x0_resize_q90_box.jpg","permalink":"/contribute/author-workflow/","title":"Author Workflow"},{"categories":null,"contents":"I\u0026rsquo;m looking for authors to contribute to the reflectoring blog!\nInterested to write about Spring and Java, Kotlin, Node.js, or the software craft in general? To become a better writer? To earn a little money on the side? Then read on!\nNo need to be an expert writer! 
I\u0026rsquo;ll help you get your piece over the finish line!\nI want to write!\nWhat the Authors Say  \u0026ldquo;Reflectoring gives me the opportunity to research, develop, and write about topics that interest me, as well as to connect and network with other professionals that have the same interests as me.  Also, as a beginner writer myself, writing for reflectoring helps me develop my writing skills. Tom\u0026rsquo;s proof-reading is really valuable in order to write better articles.\u0026rdquo;\n - Petros Stergioulas\n \u0026ldquo;I like the structured process of writing here - from creating an outline, submitting the draft for review, and finally getting it published.  The reviews before publishing add valuable feedback from a reader\u0026rsquo;s perspective. Above that, the use of Markdown formats, Trello boards, and GitHub gives a great experience not very different from what we are used to as developers.\u0026rdquo;\n - Pratik Das\nI want to write!\nWhy Should I Write at All? There\u0026rsquo;s plenty of reasons for a software developer to write. Writing regularly has certainly created some contacts and opened a couple of doors for me.\nAnd there\u0026rsquo;s nothing more satisfying than seeing the co-workers in the office reading your blog to solve the problem they\u0026rsquo;re currently having :).\nWhy Should I Write for reflectoring? I will review all contributed articles and help you create the best content you can. You get full credit for the article with your own author photo, blurb, and social media links under the article.\nYou also get some money for the effort (see below).\nWhat Should I Write About? There\u0026rsquo;s a big backlog of potential topics that you can choose from. You can also bring your own topic of interest. 
The topic should fit into the reflectoring blog, though, so it should have something to do with Spring, Java, Kotlin, AWS, Node.js or software development best practices.\nThe content must be exclusive to reflectoring.\nAre There Any Writing Guidelines? Yes. I\u0026rsquo;ve created a set of guidelines that will help you find your voice even if you\u0026rsquo;re not an experienced writer.\nHow Do I Write? You\u0026rsquo;ll write in Markdown and create a pull request to a GitHub repository. It\u0026rsquo;s just like coding, only instead of code, you write text.\nWhat Should I Bring? You should probably have a couple of years of Java and Spring experience under your belt.\nYour written English doesn\u0026rsquo;t need to be perfect, but good enough so that it doesn\u0026rsquo;t have to be completely re-written to be polished into a text that could be a native speaker\u0026rsquo;s.\nOther than that, just have fun sharing your experience!\nHow Will I Be Paid? You will be paid by the number of words in the published article. A 1000-word article (including code snippets) about a high-potential topic will be rated at USD 60, for instance. A 1000-word article about a lower-potential topic will be rated at USD 45.\nI reserve the right not to publish (and not to pay) articles that are too much work to get to the quality standard I expect. In this case, the content is yours to do with whatever you want.\nPayment will be processed with PayPal.\nDo You Accept Guest Posts? The articles created as per the above information are paid, contributed articles that I influence topic-wise.\n\u0026ldquo;Guest posts\u0026rdquo; are posts from external sources with the goal of creating reach for a certain product (e.g. for a software product, a framework, or even for the author).\nI may consider such posts for publication on reflectoring, but I will still review them, they must be exclusive, and they will not be paid. 
Above all, they must provide information that is worthwhile for my readers.\nWhy Are You Looking For Authors? I have created this big backlog of topics that I believe are interesting to my readers but I don\u0026rsquo;t have enough time to write it all myself (I do have a day job and a family I like to spend some time with, after all!).\nSo, to get the stuff out there before I die - to grow the reflectoring blog faster than I could alone - I\u0026rsquo;m looking for help.\nI\u0026rsquo;m In - What Do I Do? Have a read through these documents to check if writing on reflectoring excites you:\n Writing Guide Author Workflow Author Payment  If you want to try it out, fill out the form below explaining your motivation, a little background on yourself and, if applicable, some links to stuff you have written so far.\nI want to Write!  Full name *  Email Address *  Tell me a bit about you *   Send Now    ","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0016-pen-1200x628-branded_hu01476d2ce863620c75f8f9d54074a6bf_114085_650x0_resize_q90_box.jpg","permalink":"/contribute/become-an-author/","title":"Become an Author"},{"categories":null,"contents":"You need an experienced software engineer for a mentoring or pairing session? To bounce off some ideas? To validate an architecture? Or to just talk through an idea?\nI can help you! You can book me remotely for an hour at a time - once, or regularly. Below is an incomplete list of things that I can help with. If what you\u0026rsquo;re looking for is not listed, feel free to reach out anyways to discuss if I can help you.\nTake advantage of me as a wildcard to get quick results without binding to a contract!\nNot sure if I am the right person? 
Have a look at my LinkedIn profile or shoot me an e-mail with your questions.\nMy current rate is $110 / hour.\nGet in touch to discuss ideas and questions or book me directly via Calendly and I\u0026rsquo;ll get in touch with you.\nOne-on-One Coaching You would like to have a mentor for yourself or a member of your team, but you currently don\u0026rsquo;t have access to a senior engineer who could fill that role?\nI\u0026rsquo;m happy to set up a regular 1:1 session to talk through things like career growth, developing hard and soft skills, or improving impact as a developer.\nTeam Coaching Your team is missing a senior engineer or agile coach and you would like to have a guide for the team?\nI\u0026rsquo;ve served in both low- and high-performing teams and know a thing or two about how to increase the effectiveness of an engineering team. Book me for a bi-weekly team introspection to evaluate and increase your team\u0026rsquo;s impact.\nPair Programming If you\u0026rsquo;re looking for very hands-on coaching, I offer myself as a pairing partner for you or your team to do pair or mob programming.\nI will ask a lot of dumb questions to keep you from making the mistakes that I have made already.\nCode Reviews You have the feeling that your codebase is not as maintainable as it could be, but can\u0026rsquo;t quite point out the cause?\nBook me to conduct a code review on your codebase. I will ask a lot of questions and point out potential improvements. I\u0026rsquo;m also happy to pair up to implement those improvements afterwards or act as an accountability partner in regular meetings to improve follow-through.\nArchitecture Reviews You\u0026rsquo;re starting a new project and want the architecture validated before putting a whole team of engineers on it? Or you are already working on a project and would like to evaluate the architecture for potential risks and improvements?\nBook me to conduct an architecture review. 
I will start by creating a context diagram to understand your system and then walk you through the aspects of an architecture that are most important to you.\nTechnical Trainings You want your development team to learn about a certain topic but don\u0026rsquo;t have a trainer at hand?\nI have experience in learning and teaching as a developer and can prepare a topic in a way that a development team can effectively learn the material.\nWe\u0026rsquo;ll spread the training across multiple, short sessions to avoid tiring and boring day-long meetings. We\u0026rsquo;ll get our hands dirty on actual code to make the learning as effective as possible.\nDepending on how familiar I am with the topic myself, I will bill some hours to prepare the training sessions.\nContact The above are rough ideas of what I can help you with, so I\u0026rsquo;m happy to discuss options. Submit the form below or reach out to tom@reflectoring.io to get in touch!\nI want to Book You!  Your name *  Email Address *  How can I help you? *   Send Now    ","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0016-pen-1200x628-branded_hu01476d2ce863620c75f8f9d54074a6bf_114085_650x0_resize_q90_box.jpg","permalink":"/book-me/","title":"Book me"},{"categories":null,"contents":" A good software architecture should keep the cost of development low over the complete lifetime of an application.\n Ever wondered about how to actually implement a “Clean Architecture” or a “Hexagonal Architecture”? There’s a lot of noise around these keywords, but you can find very little hands-on material on this topic.\nThis book fills a void by providing a hands-on approach to the Hexagonal architecture style from the concepts behind it down to actual code.\nGet the ebook          more than 150 reviews on Amazon and Goodreads for the first edition.  
ebook available as .pdf and .epub on Leanpub and Gumroad support me directly by paying more than the minimum price get updated versions immediately and for free save $10 if you subscribe to the Simplify! newsletter  Get it on Gumroad Get it on Leanpub    Get the Print Book          rated 4.4 stars on Amazon.  ebook and paperback  Get it on Amazon    All About Hexagonal Architecture  Learn the concepts behind \u0026ldquo;Clean Architecture\u0026rdquo; and \u0026ldquo;Hexagonal Architecture\u0026rdquo;. Explore a hands-on approach of implementing a Hexagonal architecture with example code on GitHub. Develop your domain code independent of database or web concerns.  Get a Grip on Your Layers  Learn about potential problems of the common layered architecture style. Free your domain layer of oppressive dependencies using dependency inversion. Structure your code in an architecturally expressive way. Use different methods to enforce architectural boundaries. Learn the consequences of shortcuts and when to accept them. \u0026hellip; and more.  What Readers Say  Tom Hombergs has done a terrific job in explaining clean architecture - from concepts to code. Really wish more technical books would be as clear as that one!\n Gernot Starke - Fellow at INNOQ, Founder of arc42, Author of Software Architecture Books, Coach, and Consultant\n Love your book. One of the most practical books on hexagonal architecture I have seen/read so far.\n Marten Deinum - Spring Framework Contributor and Author of \u0026ldquo;Spring 5 Recipes\u0026rdquo; and \u0026ldquo;Spring Boot 2 Recipes\u0026rdquo;\n A book taken right out of the machine room of software development. 
Tom talks straight from his experience and guides you through the day-to-day trade-offs necessary to deliver clean architecture.\n Sebastian Kempken - Software Architect at Adcubum\n Thank you for the great book, it helped me gain significant insight into how one would go about implementing hexagonal and DDD in a modern Spring project.\n Spyros Vallianos - Java Developer at Konnekt-able\n After reading it I had one of these \u0026lsquo;aha\u0026rsquo; moments when things finally click in your brain.\n Manos Tzagkarakis - Java Developer at Datawise\nTable of Contents  Maintainability What\u0026rsquo;s Wrong with Layers? Inverting Dependencies Organizing Code Implementing a Use Case Implementing a Web Adapter Implementing a Persistence Adapter Testing Architecture Elements Mapping Between Boundaries Assembling the Application Taking Shortcuts Consciously Enforcing Architecture Boundaries Managing Multiple Bounded Contexts A Component-Based Approach to Software Architecture Deciding on an Architecture Style  Questions? Comments? Drop me an email.\n","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0016-pen-1200x628-branded_hu01476d2ce863620c75f8f9d54074a6bf_114085_650x0_resize_q90_box.jpg","permalink":"/book/","title":"Get Your Hands Dirty on Clean Architecture (2nd edition)"},{"categories":["Spring Boot"],"contents":"In the good old days, we implemented web applications with a server-side web framework. The browser sends a request to the server, the server processes the request and answers with HTML, the browser renders that HTML.\nNowadays, every application frontend seems to be a single page application (SPA) that loads a bunch of Javascript at the start and then uses Javascript-based templating to render the frontend.\nWhat if we combine the two? 
This article shows a way of combining Vue.js components with a Thymeleaf-based server-side web application.\nI\u0026rsquo;m using this method in blogtrack.io, a blog tracking service going into beta soon, and I\u0026rsquo;m very happy with it.\n Example Code This article is accompanied by a working code example on GitHub. The Problems of SPAs While SPAs allow for building more interactive, desktop-like applications, they also introduce new problems:\n we need a mechanism to load only the Javascript resources we need on a certain page, we might need to render part of the page on the server so that the user doesn\u0026rsquo;t see a blank page (time to first content), we have to handle page refreshes and the back-button, we have to handle analytics ourselves because analytics providers usually only count when a page is loaded, \u0026hellip; and a whole bunch of other problems I don\u0026rsquo;t pretend to understand.  Solutions to many of these problems exist, but they add new problems (like the \u0026ldquo;time to interactive\u0026rdquo; metric) and complexity to the SPA frameworks, making them harder to use and understand. This leads to SPA fatigue.\nBut building applications with only old-school server-side web frameworks is not a solution, either. We want modern, interactive frontends, for which we need Javascript.\nSo, what if we use a server-side web framework to render HTML that includes some Javascript components here and there, to add this interactivity?\nReusable Javascript Components The goal is to create narrowly scoped, potentially re-usable Javascript components that we can place into the HTML rendered by our server-side web framework using \u0026lt;script\u0026gt; tags.\nHowever, we don\u0026rsquo;t want to simply hack some untested Javascript that adds some jQuery here and there (it\u0026rsquo;s not the 90s anymore!) 
but take advantage of the rich feature set that today\u0026rsquo;s SPA frameworks bring to the table.\nWe want:\n to preview the Javascript components without starting the server-side application, to write and run tests for these Javascript components, to include selected Javascript components in a server-rendered HTML page without loading all of them, to minify the Javascript, and to integrate the build of the Javascript components with the build of the server-side application.  Let\u0026rsquo;s see how we can achieve this by using client-side Vue.js components in HTML pages generated with the server-side templating engine Thymeleaf.\nThe Sample Project For this article, imagine we\u0026rsquo;re building a dashboard application that displays some charts. We want to integrate the Chart.js library to create those charts. But instead of just adding hand-rolled, untested Javascript to our server-side HTML templates, we want to wrap those charts in components built with Vue.js.\nWe\u0026rsquo;re using server-generated HTML to render the layout and all the static and dynamic content that doesn\u0026rsquo;t require Javascript and only use Vue.js components for the interactive Javascript components.\nIn our project directory, we create a folder for the server-side Spring Boot application and another for the client-side Javascript components:\nthymeleaf-vue ├── server └── client Let\u0026rsquo;s fill these folders with life!\nSetting up the Server-Side Web Application with Spring Boot \u0026amp; Thymeleaf We start by building a Spring Boot application that serves a page generated with the Thymeleaf templating engine.\nWe can let Spring Boot Initializr generate a ZIP file for us and extract the contents into the server folder (actually, we need to move the Gradle files back into the main folder - see the example project on GitHub for the final folder structure).\nNext, we create the page template src/main/resources/templates/hello-vue.html:\n\u0026lt;html\u0026gt; 
\u0026lt;body\u0026gt; \u0026lt;h1 th:text=\u0026#34;${title}\u0026#34;\u0026gt;This title will be replaced\u0026lt;/h1\u0026gt; \u0026lt;p\u0026gt; Here comes a Vue component!\u0026lt;/p\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; This is just a simple \u0026ldquo;Hello World\u0026rdquo;-style page that displays a title that is defined by the backend. We\u0026rsquo;re going to add a Vue.js component to it later.\nAlso, we add a controller that serves this page:\n@Controller class HelloVueController { @GetMapping(\u0026#34;/\u0026#34;) ModelAndView showHelloPage() { Map\u0026lt;String, Object\u0026gt; model = new HashMap\u0026lt;\u0026gt;(); model.put(\u0026#34;title\u0026#34;, \u0026#34;Hello Vue!\u0026#34;); return new ModelAndView(\u0026#34;hello-vue.html\u0026#34;, model); } } If we start the application with ./gradlew bootRun and go to http://localhost:8080/, we should see this page:\nWe now have a working server-side web application driven by a Thymeleaf template. Time to create some Javascript components.\nBuilding a Javascript Chart Component with Vue.js For the client-side Javascript components, we\u0026rsquo;ll use Vue.js, which is a framework we can use to create SPAs, but which specifically supports exporting components to be consumed outside of a SPA.\nWe\u0026rsquo;ll need Node.js installed on our machine to support the Vue development environment.\nWhen Node is installed, we can install the Vue CLI:\nnpm install -g @vue/cli This brings us the vue command, which we use to create our Vue project. From the parent folder of our project (thymeleaf-vue), we run\nvue create client to create the client subfolder and fill it with a default Vue application. We end up with a file structure like this:\nthymeleaf-vue ├── server └── client ├── src | ├── assets | └── components └── package.json I omitted some files for clarity.\nNow, we want to create a Vue component that displays a chart. 
Let\u0026rsquo;s say the chart shall take 7 numbers as input, one for each day of the week, and display them in a bar chart.\nNote that the chart is just an example. We can create any simple or complex client-side Javascript component with or without Vue.js and use it in a server-side template.\nFirst, we add the dependency on chart.js to our package.json file:\nnpm install --save chart.js Next, we create our WeekChart component as a single file component:\n\u0026lt;template\u0026gt; \u0026lt;div class=\u0026#34;chart-container\u0026#34;\u0026gt; \u0026lt;canvas ref=\u0026#34;chart\u0026#34;\u0026gt;\u0026lt;/canvas\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/template\u0026gt; \u0026lt;script\u0026gt; import Chart from \u0026#34;chart.js\u0026#34;; export default { name: \u0026#34;WeekChart\u0026#34;, props: { chartData: { type: Array, required: true, }, }, mounted: function() { const config = { type: \u0026#34;bar\u0026#34;, data: { labels: [ \u0026#34;Monday\u0026#34;, \u0026#34;Tuesday\u0026#34;, \u0026#34;Wednesday\u0026#34;, \u0026#34;Thursday\u0026#34;, \u0026#34;Friday\u0026#34;, \u0026#34;Saturday\u0026#34;, \u0026#34;Sunday\u0026#34;], datasets: [ { data: this.chartData }, ] }, }; new Chart(this.$refs.chart, config); } }; \u0026lt;/script\u0026gt; \u0026lt;style scoped\u0026gt; .chart-container { position: relative; height: 100%; width: 100%; } \u0026lt;/style\u0026gt; This component bundles the HTML markup, some Javascript, and some CSS into a self-sufficient UI component. Note that we\u0026rsquo;re importing the Chart object from the chart.js library. 
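To see how the weekday labels and the chartData values line up, the configuration object the component assembles in its mounted hook can be sketched as a plain function. buildWeekChartConfig is a hypothetical helper for illustration only, not part of the component above:

```javascript
// Hypothetical helper mirroring the config the component builds in `mounted`:
// seven fixed weekday labels, each bound to one value from `chartData`.
function buildWeekChartConfig(chartData) {
  if (!Array.isArray(chartData) || chartData.length !== 7) {
    throw new Error("chartData must hold exactly 7 values, one per weekday");
  }
  return {
    type: "bar",
    data: {
      labels: [
        "Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday",
      ],
      // Chart.js expects the values inside a dataset object.
      datasets: [{ data: chartData }],
    },
  };
}
```

In the component, the result of such a function is what gets passed to `new Chart(this.$refs.chart, config)` together with the canvas element.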
The component has a single input parameter (or \u0026ldquo;prop\u0026rdquo; in JS lingo) called chartData, which takes an array of values - one value for each day of the week.\nWithin the mounted function, we create a chart configuration according to the chart.js docs, pass the chartData input parameter into this config, and finally bind this config to the \u0026lt;canvas\u0026gt; element in the template section via the ref=chart attribute.\nIn package.json, we change the build script so that it builds our component as a library instead of a SPA:\n{ ... \u0026#34;scripts\u0026#34;: { ... \u0026#34;build\u0026#34;: \u0026#34;vue-cli-service build --target lib --dest dist/WeekChart --name WeekChart src/components/WeekChart.vue\u0026#34;, }, ... } If we run npm run build now, the Vue CLI will create several different versions of our WeekChart component in the dist folder. The one we\u0026rsquo;re interested in is WeekChart.umd.min.js, which is a self-sufficient Javascript file containing all dependencies (except for Vue itself) that we can include in any HTML page.\nPreviewing the Vue Component with Storybook Now that we\u0026rsquo;ve built a chart component, we want to see if it works without having to embed it into our application. 
For this, we\u0026rsquo;ll use Storybook.\nIntegrating Storybook with Vue is surprisingly simple, we merely have to execute this command in our client folder:\nnpx -p @storybook/cli sb init --type vue This adds a storybook script and all required dependencies to our package.json and creates a folder stories, which now contains some sample \u0026ldquo;stories\u0026rdquo;.\nWe\u0026rsquo;ll add a story to the storybook by creating the file stories/WeekChart.stories.js with this content:\nimport WeekChart from \u0026#39;../src/components/WeekChart.vue\u0026#39;; export default { title: \u0026#39;WeekChart\u0026#39;, component: WeekChart, }; export const DefaultState = () =\u0026gt; ({ components: { chart: WeekChart }, template: `\u0026lt;chart v-bind:chartData=\u0026#34;[1,2,3,4,5,6,7]\u0026#34; /\u0026gt;` }); This file creates an instance of our WeekChart component with the name DefaultState and exports it so that Storybook can pick it up and include it in its GUI.\nRunning npm run storybook will start a local web server and serve the stories in a nice UI when we open http://localhost:6006 in a browser:\nWe know that our bar chart component is working now. Storybook is nice to use during development to make sure that our changes have the desired effect. But if we do a refactoring to any of our components and forget to check it in Storybook, we may still break our components. So, let\u0026rsquo;s add an automated test that runs during the build.\nAdding a Unit Test for the Vue Component We want to create tests for each of our Vue components that run during the CI build to make sure that errors in a component will break the build. For this, we rely on Jest, a popular Javascript testing framework that integrates well with Vue.js.\nTo set up Jest in our project, we add the following entries to our package.json file:\n{ ... \u0026#34;scripts\u0026#34;: { ... 
\u0026#34;test\u0026#34;: \u0026#34;vue-cli-service test:unit\u0026#34; }, \u0026#34;devDependencies\u0026#34;: { ... \u0026#34;@vue/cli-plugin-unit-jest\u0026#34;: \u0026#34;^4.4.0\u0026#34;, \u0026#34;@vue/test-utils\u0026#34;: \u0026#34;^1.0.3\u0026#34; } } This adds the dependencies needed to work with Jest in Vue, and it adds a script to execute the tests. Don\u0026rsquo;t forget to run npm install after modifying the dependencies in package.json.\nAlso, we create the file jest.config.js to configure Jest to work with *.vue files:\nmodule.exports = { preset: \u0026#34;@vue/cli-plugin-unit-jest\u0026#34;, collectCoverage: true, collectCoverageFrom: [\u0026#34;src/**/*.{js,vue}\u0026#34;, \u0026#34;!**/node_modules/**\u0026#34;], coverageReporters: [\u0026#34;html\u0026#34;, \u0026#34;text-summary\u0026#34;] }; Next, we create a test for our WeekChart component in src/tests/unit/WeekChart.spec.js:\nimport { shallowMount } from \u0026#34;@vue/test-utils\u0026#34;; import WeekChart from \u0026#34;../../components/WeekChart.vue\u0026#34;; describe(\u0026#34;WeekChart\u0026#34;, () =\u0026gt; { it(\u0026#34;renders without error\u0026#34;, () =\u0026gt; { const wrapper = shallowMount(WeekChart, { propsData: { chartData: [1, 2, 3, 4, 5, 6, 7], }, }); const chart = wrapper.findComponent({ name: \u0026#34;WeekChart\u0026#34; }); expect(chart.exists()).toBe(true); }); }); We can run the test with npm run test.\nThe test will pass, but it will show some error output on the console:\nError: Not implemented: HTMLCanvasElement.prototype.getContext (without installing the canvas npm package) This is because our chart component relies on a canvas element, which is not supported in the Jest runtime environment. But we want the test to fail in this case! So, we configure the Jest runtime to throw an error when it encounters this error log. 
For this, we create the file jest/console-error-to-exception.setup.js:\nimport { format } from \u0026#34;util\u0026#34;; beforeEach(() =\u0026gt; { const { error } = global.console; global.console.error = (...args) =\u0026gt; { for (let i = 0; i \u0026lt; args.length; i += 1) { const arg = args[i]; // add patterns here that should fail a test  if (typeof arg === \u0026#34;string\u0026#34; \u0026amp;\u0026amp; (arg.includes(\u0026#34;Vue warn\u0026#34;) || arg.includes(\u0026#34;Not implemented\u0026#34;))) { throw new Error(format(...args)); } } error(...args); }; }); This will intercept calls to console.error() and re-throw them as an error if they match a certain pattern. The patterns include the \u0026ldquo;not implemented\u0026rdquo; error we encountered before and Vue warnings.\nWe now need to tell Jest to run this code before every test by adding the file to jest.config.js:\nmodule.exports = { ... setupFilesAfterEnv: [ \u0026#34;./jest/console-error-to-exception.setup.js\u0026#34; ] }; If we run the test again, it will now fail with the same error message as above. Here\u0026rsquo;s the source where I got this idea.\nTo fix the underlying problem of the unavailable canvas element, we add a mock canvas to our development dependencies in package.json:\nnpm install --save-dev jest-canvas-mock Also, we add another Jest setup file in /jest/mock-canvas.setup.js with a single import statement:\nimport \u0026#39;jest-canvas-mock\u0026#39;; and add this file to jest.config.js to be executed for all tests:\nmodule.exports = { ... setupFilesAfterEnv: [ ... 
\u0026#34;./jest/mock-canvas.setup.js\u0026#34; ] }; Now, the tests will have access to a mock Canvas element and the test will be green.\nThe test will now tell us when we broke something.\nIntegrating the Vue Build into the Spring Boot Build We have a Spring Boot application that\u0026rsquo;s being built with a Gradle process (you can probably also do it with Maven, but I\u0026rsquo;m a Gradle fanboy) and a Vue component that is built with NPM. We want to include our Vue component in the Spring Boot application so it can serve the Javascript together with the HTML. How do we do that?\nThe solution I went for is to wrap the Javascript build within Gradle. When the Gradle build starts, it triggers the NPM build, creating ready-for-use Javascript files that we can include in our HTML pages. All we need to do then is to copy those Javascript files to a location where they will be picked up when the Spring Boot application is packaged.\nThe first step is to make our client folder a module in the Gradle build. 
For this, we create a file build.gradle in this folder:\nplugins { id \u0026#34;com.github.node-gradle.node\u0026#34; version \u0026#34;2.2.4\u0026#34; } apply plugin: \u0026#39;java\u0026#39; task npmBuild(type: NpmTask) { inputs.dir(\u0026#34;src\u0026#34;) outputs.dir(\u0026#34;dist\u0026#34;) args = [\u0026#39;run\u0026#39;, \u0026#39;build\u0026#39;] } task npmClean(type: NpmTask) { args = [\u0026#39;run\u0026#39;, \u0026#39;clean\u0026#39;] } jar { into \u0026#39;/static\u0026#39;, { from \u0026#39;dist\u0026#39; include \u0026#39;**/*.umd.min.js\u0026#39; } } jar.dependsOn(\u0026#39;npmBuild\u0026#39;) clean.dependsOn(\u0026#39;npmClean\u0026#39;) We include the Gradle Node Plugin which enables us to call NPM tasks from within our Gradle build.\nWe also apply the Java plugin, which allows us to create a JAR file as an output of the build.\nWe create the tasks npmBuild and npmClean which call npm run build and npm run clean, respectively.\nThen, we configure the jar task so that the resulting JAR file will contain a folder static with all files from the dist folder. Finally, with dependsOn, we configure that the npmBuild task will run before the jar task, because the npmBuild task will create the files that the jar task needs.\nThe static folder has a special meaning in Spring Boot: its content will be served by the web server, so that it can be accessed from the browser. 
This is important in our case, since we want the browser to load the Javascript files with our Vue components.\nSince with the server and the client folders we now have a multi-module Gradle build, we need to create a settings.gradle file in the parent directory that lists all the modules:\nrootProject.name = \u0026#39;thymeleaf-vue\u0026#39; include \u0026#39;client\u0026#39; include \u0026#39;server\u0026#39; And finally, in the build.gradle file of the server module, we need to add the dependency to the client project:\ndependencies { implementation project(\u0026#39;:client\u0026#39;) ... } Using the Vue Component in a Thymeleaf Template If we build the project now with ./gradlew build, we get a Spring Boot application that carries the file WeekChart.umd.min.js in its belly. That means we can use it in our Thymeleaf template hello-vue.html that we have created at the start of this article:\n\u0026lt;html\u0026gt; \u0026lt;body\u0026gt; \u0026lt;h1 th:text=\u0026#34;${title}\u0026#34;\u0026gt;This title will be replaced\u0026lt;/h1\u0026gt; \u0026lt;p\u0026gt; Here comes a Vue component!\u0026lt;/p\u0026gt; \u0026lt;div id=\u0026#34;chart\u0026#34;\u0026gt; \u0026lt;chart th:v-bind:chart-data=\u0026#34;${chartData}\u0026#34;\u0026gt;\u0026lt;/chart\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;script src=\u0026#34;https://unpkg.com/vue\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script th:src=\u0026#34;@{/WeekChart/WeekChart.umd.min.js}\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script\u0026gt; (function() { new Vue({ components: { chart: WeekChart } }).$mount(\u0026#39;#chart\u0026#39;) })(); \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; We\u0026rsquo;ve added a \u0026lt;div\u0026gt; with the id chart that contains an instance of our WeekChart component.\nWe want to provide the data to the chart from the server, so we add a th: (for \u0026ldquo;thymeleaf\u0026rdquo;) in front of the attribute v-bind:chart-data that is 
expected by Vue to pass an array prop into the component. This will let Thymeleaf know that we want this attribute populated with the value of the chartData variable.\nAlso, we added \u0026lt;script\u0026gt; tags to load Vue.js and our chart component (which will be served out of the JAR file of the client module). And another \u0026lt;script\u0026gt; tag to instantiate the Vue component and bind it to the \u0026lt;chart\u0026gt; tag within the chart div.\nFinally, we need to modify our server-side controller so that it populates the chartData variable:\n@Controller class HelloVueController { @GetMapping(\u0026#34;/\u0026#34;) ModelAndView showHelloPage() { Map\u0026lt;String, Object\u0026gt; model = new HashMap\u0026lt;\u0026gt;(); model.put(\u0026#34;title\u0026#34;, \u0026#34;Hello Vue!\u0026#34;); model.put(\u0026#34;chartData\u0026#34;, Arrays.asList(7,6,5,4,3,2,1)); return new ModelAndView(\u0026#34;hello-vue.html\u0026#34;, model); } } Running ./gradlew bootRun and opening http://localhost:8080/ in a browser will now proudly show our Vue chart component on the page, populated with data from the server.\nConclusion We have created a Spring Boot application with the server-side template engine Thymeleaf and a Javascript component library that provides a Javascript component built with NPM and Vue. 
The result is a hybrid application that allows the server-side template engine to create static HTML pages while including Javascript components that allow more interactivity.\nWe have established a proper development environment for both the server-side Java part and the client-side Javascript part.\nThere\u0026rsquo;s certainly more tweaking necessary to get this integration of Vue.js and Thymeleaf customized to a specific project (sharing CSS between client and server, bundling Javascript components together or not, \u0026hellip;) but this article has laid the foundation.\nI\u0026rsquo;m using this method in my service at blogtrack.io and might report about its evolution in the future.\nA working example including all the bits and pieces that this article glossed over is available on GitHub.\n","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0018-cogs-1200x628-branded_huddc0bdf9d6d0f4fdfef3c3a64a742934_149789_650x0_resize_q90_box.jpg","permalink":"/reusable-vue-components-in-thymeleaf/","title":"Marrying Vue.js and Thymeleaf: Embedding Javascript Components in Server-Side Templates"},{"categories":null,"contents":"Last modified: July 14, 2019\nIntroduction Tom Hombergs (\u0026ldquo;us\u0026rdquo;, \u0026ldquo;we\u0026rdquo;, or \u0026ldquo;our\u0026rdquo;) operates the https://reflectoring.io website (hereinafter referred to as the \u0026ldquo;Service\u0026rdquo;).\nThis page informs you of our policies regarding the collection, use and disclosure of personal data when you use our Service and the choices you have associated with that data.\nWe use your data to provide and improve the Service. 
By using the Service, you agree to the collection and use of information in accordance with this policy.\nDefinitions   Service: Service is the https://reflectoring.io website operated by Tom Hombergs.\n  Personal Data: Personal Data means data about a living individual who can be identified from those data (or from those and other information either in our possession or likely to come into our possession).\n  Usage Data: Usage Data is data collected automatically either generated by the use of the Service or from the Service infrastructure itself (for example, the duration of a page visit).\n  Cookies: Cookies are small files stored on your device (computer or mobile device).\n  Data Controller: Data Controller means the natural or legal person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal information are, or are to be, processed.\nFor the purpose of this Privacy Policy, we are a Data Controller of your Personal Data.\n  Data Processors (or Service Providers): Data Processor (or Service Provider) means any natural or legal person who processes the data on behalf of the Data Controller.\nWe may use the services of various Service Providers in order to process your data more effectively.\n  Data Subject (or User): Data Subject is any living individual who is using our Service and is the subject of Personal Data.\n  Information Collection and Use We collect several different types of information for various purposes to provide and improve our Service to you.\nTypes of Data Collected Personal Data While using our Service, we may ask you to provide us with certain personally identifiable information that can be used to contact or identify you (\u0026ldquo;Personal Data\u0026rdquo;). 
Personally identifiable information may include, but is not limited to:\n Name Email address Cookies and Usage Data  We may use your Personal Data to contact you with newsletters, marketing or promotional materials and other information that may be of interest to you. You may opt out of receiving any, or all, of these communications from us by following the unsubscribe link or the instructions provided in any email we send.\nUsage Data We may also collect information on how the Service is accessed and used (\u0026ldquo;Usage Data\u0026rdquo;). This Usage Data may include information such as your computer\u0026rsquo;s Internet Protocol address (e.g. IP address), browser type, browser version, the pages of our Service that you visit, the time and date of your visit, the time spent on those pages, unique device identifiers and other diagnostic data.\nTracking \u0026amp; Cookies Data We use cookies and similar tracking technologies to track the activity on our Service and we hold certain information.\nCookies are files with a small amount of data which may include an anonymous unique identifier. Cookies are sent to your browser from a website and stored on your device. Other tracking technologies are also used such as beacons, tags and scripts to collect and track information and to improve and analyse our Service.\nYou can instruct your browser to refuse all cookies or to indicate when a cookie is being sent. However, if you do not accept cookies, you may not be able to use some portions of our Service.\nExamples of Cookies we use:\n Session Cookies. We use Session Cookies to operate our Service. Preference Cookies. We use Preference Cookies to remember your preferences and various settings. Security Cookies. We use Security Cookies for security purposes. Advertising Cookies. Advertising Cookies are used to serve you with advertisements that may be relevant to you and your interests.  
Use of Data reflectoring uses the collected data for various purposes:\n To provide and maintain our Service To notify you about changes to our Service To allow you to participate in interactive features of our Service when you choose to do so To provide customer support To gather analysis or valuable information so that we can improve our Service To monitor the usage of our Service To detect, prevent and address technical issues To provide you with news, special offers and general information about other goods, services and events which we offer that are similar to those that you have already purchased or enquired about unless you have opted not to receive such information  Legal Basis for Processing Personal Data under the General Data Protection Regulation (GDPR) If you are from the European Economic Area (EEA), reflectoring legal basis for collecting and using the personal information described in this Privacy Policy depends on the Personal Data we collect and the specific context in which we collect it.\nreflectoring may process your Personal Data because:\n We need to perform a contract with you You have given us permission to do so The processing is in our legitimate interests and it is not overridden by your rights For payment processing purposes To comply with the law  Retention of Data reflectoring will retain your Personal Data only for as long as is necessary for the purposes set out in this Privacy Policy. We will retain and use your Personal Data to the extent necessary to comply with our legal obligations (for example, if we are required to retain your data to comply with applicable laws), resolve disputes and enforce our legal agreements and policies.\nreflectoring will also retain Usage Data for internal analysis purposes. 
Usage Data is generally retained for a shorter period of time, except when this data is used to strengthen the security or to improve the functionality of our Service, or we are legally obligated to retain this data for longer periods.\nTransfer of Data Your information, including Personal Data, may be transferred to — and maintained on — computers located outside of your state, province, country or other governmental jurisdiction where the data protection laws may differ from those of your jurisdiction.\nYour consent to this Privacy Policy followed by your submission of such information represents your agreement to that transfer.\nreflectoring will take all the steps reasonably necessary to ensure that your data is treated securely and in accordance with this Privacy Policy and no transfer of your Personal Data will take place to an organisation or a country unless there are adequate controls in place including the security of your data and other personal information.\nDisclosure of Data Business Transaction If reflectoring is involved in a merger, acquisition or asset sale, your Personal Data may be transferred. We will provide notice before your Personal Data is transferred and becomes subject to a different Privacy Policy.\nDisclosure for Law Enforcement Under certain circumstances, reflectoring may be required to disclose your Personal Data if required to do so by law or in response to valid requests by public authorities (e.g. 
a court or a government agency).\nLegal Requirements reflectoring may disclose your Personal Data in the good faith belief that such action is necessary to:\n To comply with a legal obligation To protect and defend the rights or property of reflectoring To prevent or investigate possible wrongdoing in connection with the Service To protect the personal safety of users of the Service or the public To protect against legal liability  Security of Data The security of your data is important to us but remember that no method of transmission over the Internet or method of electronic storage is 100% secure. While we strive to use commercially acceptable means to protect your Personal Data, we cannot guarantee its absolute security.\nYour Data Protection Rights under the General Data Protection Regulation (GDPR) If you are a resident of the European Economic Area (EEA), you have certain data protection rights. reflectoring aims to take reasonable steps to allow you to correct, amend, delete or limit the use of your Personal Data.\nIf you wish to be informed about what Personal Data we hold about you and if you want it to be removed from our systems, please contact us.\nIn certain circumstances, you have the following data protection rights:\n  The right to access, update or delete the information we have on you. Whenever made possible, you can access, update or request deletion of your Personal Data directly within your account settings section. If you are unable to perform these actions yourself, please contact us to assist you.\n  The right of rectification. You have the right to have your information rectified if that information is inaccurate or incomplete.\n  The right to object. You have the right to object to our processing of your Personal Data.\n  The right of restriction. You have the right to request that we restrict the processing of your personal information.\n  The right to data portability. 
You have the right to be provided with a copy of the information we have on you in a structured, machine-readable and commonly used format.\n  The right to withdraw consent. You also have the right to withdraw your consent at any time where reflectoring relied on your consent to process your personal information.\n  Please note that we may ask you to verify your identity before responding to such requests.\nYou have the right to complain to a Data Protection Authority about our collection and use of your Personal Data. For more information, please contact your local data protection authority in the European Economic Area (EEA).\nService Providers We may employ third party companies and individuals to facilitate our Service (\u0026ldquo;Service Providers\u0026rdquo;), provide the Service on our behalf, perform Service-related services or assist us in analysing how our Service is used.\nThese third parties have access to your Personal Data only to perform these tasks on our behalf and are obligated not to disclose or use it for any other purpose.\nMailerLite reflectoring uses MailerLite as its email service provider.\nMailerLite collects contact information, distributes emails, and tracks actions you take that assist us in measuring the performance of the website and emails. Upon subscription, MailerLite also tracks the pages you visit on the website.\nOur emails may contain tracking pixels. This pixel is embedded in emails and allows us to analyze the success of our emails. Because of these tracking pixels, we may see if and when you open an email and which links within the email you click.\nThis behavior is not passed to third parties. All data submitted at the time of subscription to our emails is stored on MailerLite’s servers. 
You may access MailerLite’s privacy policy here.\nAt any time, you may be removed from our newsletter list by clicking on the unsubscribe button provided in each email.\n\nAnalytics We may use third-party Service Providers to monitor and analyse the use of our Service.\nGoogle Analytics Google Analytics is a web analytics service offered by Google that tracks and reports website traffic. Google uses the data collected to track and monitor the use of our Service. This data is shared with other Google services. Google may use the collected data to contextualise and personalise the ads of its own advertising network.\nYou can opt-out of having made your activity on the Service available to Google Analytics by installing the Google Analytics opt-out browser add-on. The add-on prevents the Google Analytics JavaScript (ga.js, analytics.js and dc.js) from sharing information with Google Analytics about visits activity.\nFor more information on the privacy practices of Google, please visit the Google Privacy \u0026amp; Terms web page.\nAdvertising We may use third-party Service Providers to show advertisements to you to help support and maintain our Service.\nGoogle AdSense \u0026amp; DoubleClick Cookie Google, as a third party vendor, uses cookies to serve ads on our Service. Google\u0026rsquo;s use of the DoubleClick cookie enables it and its partners to serve ads to our users based on their visit to our Service or other websites on the Internet.\nYou may opt out of the use of the DoubleClick Cookie for interest-based advertising by visiting the Google Ads Settings web page.\nPayments We may provide paid products and/or services within the Service. In that case, we use third-party services for payment processing (e.g. payment processors).\nWe will not store or collect your payment card details. That information is provided directly to our third-party payment processors whose use of your personal information is governed by their Privacy Policy. 
These payment processors adhere to the standards set by PCI-DSS as managed by the PCI Security Standards Council, which is a joint effort of brands like Visa, MasterCard, American Express and Discover. PCI-DSS requirements help ensure the secure handling of payment information.\nThe payment processor we work with is PayPal. Their Privacy Policy can be viewed here.\nLinks to Other Sites Our Service may contain links to other sites that are not operated by us. If you click a third party link, you will be directed to that third party\u0026rsquo;s site. We strongly advise you to review the Privacy Policy of every site you visit.\nWe have no control over and assume no responsibility for the content, privacy policies or practices of any third party sites or services.\nChildren\u0026rsquo;s Privacy Our Service does not address anyone under the age of 18 (\u0026ldquo;Children\u0026rdquo;).\nWe do not knowingly collect personally identifiable information from anyone under the age of 18. If you are a parent or guardian and you are aware that your Child has provided us with Personal Data, please contact us. If we become aware that we have collected Personal Data from children without verification of parental consent, we take steps to remove that information from our servers.\nChanges to This Privacy Policy We may update our Privacy Policy from time to time. We will notify you of any changes by posting the new Privacy Policy on this page.\nWe will let you know via email and/or a prominent notice on our Service, prior to the change becoming effective and update the \u0026ldquo;effective date\u0026rdquo; at the top of this Privacy Policy.\nYou are advised to review this Privacy Policy periodically for any changes. 
Changes to this Privacy Policy are effective when they are posted on this page.\nContact Us If you have any questions about this Privacy Policy, please contact us at tom@reflectoring.io.\n","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0018-cogs-1200x628-branded_huddc0bdf9d6d0f4fdfef3c3a64a742934_149789_650x0_resize_q90_box.jpg","permalink":"/privacy/","title":"Privacy Policy"},{"categories":["Software Craft"],"contents":"Believing Roy Fielding, who first coined the REST acronym, you may call your API a REST API only if you make use of hypertext. But what is hypermedia? This article explains what hypermedia means for creating an API, what benefits it brings and which drawbacks you might encounter when using it.\nThe REST Maturity Model Before starting with Hypermedia, let\u0026rsquo;s have a look at the REST Maturity Model conceived by Leonard Richardson:\nAt the bottom of the maturity pyramid we find the \u0026ldquo;Swamp of POX (Plain old XML)\u0026rdquo;. This means sending XML fragments back and forth that contain a command for executing one of several procedures as well as some payload data to a single URL. Basically, this is RPC (Remote Procedure Call) like it\u0026rsquo;s done in typical SOAP webservice.\nLevel 1 breaks up the \u0026ldquo;single URL\u0026rdquo; part by providing separate URLs for separate \u0026ldquo;things\u0026rdquo;. These \u0026ldquo;things\u0026rdquo; are called resources in REST-lingo. Separating concerns into different URLs is just a matter of following the software engineering principle of, well, Separation of concerns.\nLevel 2 then builds on top of these resource URLs and provides meaningful, standardized \u0026ldquo;operations\u0026rdquo; on those resources. These operations are the HTTP Verbs, most commonly GET, POST, PUT and DELETE. This way, we have a set of verbs we can combine with a resource URL to modify the resource. 
Common practice is to use\n GET to load a resource POST to create a new resource PUT to modify an existing resource and DELETE to remove a resource.  Level 3 adds hypermedia to the mix, consisting of hyperlinks between resources, with each link representing a relation between the connected resources. The concepts behind level 3 are a little more academic than the other levels, and are a little harder to grasp. Read on to follow my attempt to grasp them.\nHypermedia I will use the term \u0026ldquo;Hypermedia\u0026rdquo; as a shorthand for \u0026ldquo;Hypermedia As The Engine Of Application State\u0026rdquo; (HATEOAS), which is one of the ugliest acronyms I\u0026rsquo;ve ever seen. Basically, it means that a REST API provides hyperlinks with each response that link to other related resources.\nLet\u0026rsquo;s try this concept on a simple book store example:\nIn the diagram above, each node is a URL within our API and each edge is a link relating one URL with another.\nHypermedia means that those links will be returned together with the actual response payload. Let\u0026rsquo;s walk through the diagram.\nCalling the root of the API (/), we will get a response that has no actual payload but a single link to /books with the relation list. This response might look like this when using the HAL format:\n{ \u0026#34;_links\u0026#34;: { \u0026#34;list\u0026#34;: { \u0026#34;href\u0026#34;: \u0026#34;/books\u0026#34; } } } Following this link, we can call /books to get a list of books, which might look like this:\n{ \u0026#34;_embedded\u0026#34;: { \u0026#34;booksList\u0026#34;: [ { \u0026#34;title\u0026#34;: \u0026#34;Hitchhiker\u0026#39;s Guide to the Galaxy\u0026#34;, \u0026#34;author\u0026#34;: \u0026#34;Douglas Adams\u0026#34;, \u0026#34;_links\u0026#34;: { \u0026#34;self\u0026#34;: { \u0026#34;href\u0026#34;: \u0026#34;/books/42\u0026#34; } } }, // more books ...  
] } } The self link on each book points us to the corresponding book resource.\nThe book resource response then might contain a relation add linking to /cartItems, allowing us to add the book to the shopping cart.\nAll in all, just by following the links in the API\u0026rsquo;s responses we can browse books, add them to our shopping cart or remove them from it, and finally order the contents of our shopping cart.\nWhy Hypermedia Makes Sense Having understood the basics of hypermedia in REST APIs, let\u0026rsquo;s discuss the pros.\nThe main argument for going the extra mile to level 3 and create a hypermedia-driven API is that it helps to decouple the consumer from the provider. This brings some advantages \u0026hellip;\nRefactoring Resource URLs The consumer does not need to know all the URLs to the API\u0026rsquo;s endpoints, because it can navigate the API over the hyperlinks provided in the responses. This gives the provider the freedom to refactor the endpoint URLs at will without consulting the consumers at all.\nChanging Client Behavior without Changing Code Another decoupling feature is that the consumer can change its behavior depending on which links are provided by the server. In the book store example above, the server may decide not to include the order link in its responses until the shopping cart has reached a minimum value. The client knows that and only displays a checkout button once it gets the order link from the server.\nExplorable API Obviously, by following the hyperlinks, our REST API is explorable by a client. However, it\u0026rsquo;s also explorable by a human. 
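Returning to the order link from the section on changing client behavior: a client that evaluates HAL links instead of hard-coding URLs can be sketched in a few lines of plain Javascript. The relation names follow the book store example above; the helper functions are illustrative and not part of any framework:

```javascript
// Returns the href for a given relation, or null if the server
// did not include that link in the HAL response.
function findLink(halResponse, relation) {
  const links = halResponse._links || {};
  return links[relation] ? links[relation].href : null;
}

// The client decides what to render based on the links it received,
// not on business rules duplicated from the server.
function renderCartActions(cartResponse) {
  const actions = [];
  if (findLink(cartResponse, "order")) {
    actions.push("checkout-button");
  }
  if (findLink(cartResponse, "remove")) {
    actions.push("remove-button");
  }
  return actions;
}

// Cart below the minimum value: the server omits the "order" link.
const smallCart = {
  _links: {
    self: { href: "/cart" },
    remove: { href: "/cartItems/42" }
  }
};

// Cart above the minimum value: the server includes the "order" link.
const fullCart = {
  _links: {
    self: { href: "/cart" },
    remove: { href: "/cartItems/42" },
    order: { href: "/orders" }
  }
};

console.log(renderCartActions(smallCart)); // ["remove-button"]
console.log(renderCartActions(fullCart));  // ["checkout-button", "remove-button"]
```

Because the client only reacts to the links it receives, the server can change the minimum cart value, or the URLs behind the relations, without any change to this client code.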
A developer that knows the root URL can simply follow the links to get a feel for the API.\nUsing a tool like HAL Browser, the API can even be browsed and experimented with comfortably.\nWhy You Might Not Want to Use Hypermedia While decoupling client and server is a very worthwhile goal, when implementing a hypermedia API you may stumble over a few things.\nClient Must Evaluate Hyperlinks First of all, the decoupling aspect of the hyperlinks gets lost if the client chooses not to evaluate the hyperlinks but instead uses hard-coded URLs to access the API.\nIn this case, all the effort that has gone into crafting a semantically powerful hypermedia API was in vain, since the client does not take advantage of it and we lose all decoupling advantages.\nIn a public-facing API, there will most probably be clients that will use hard-coded URLs and NOT evaluate the hyperlinks. So, as the API provider, we lose the advantage of hyperlinks because we still cannot refactor URLs independently without irritating our users.\nClient Must Understand Relations If the client chooses to evaluate the hyperlinks, it obviously needs to understand the relations to make sense of them. In the example above, the client needs to know what the remove relation means in order to present the user with a UI that lets them remove an item from the shopping cart.\nHaving to understand the relations in itself is a good thing, but building a client that acts on those relations alone is harder than building a client that is more tightly coupled to the server, which is probably one of the main reasons that most APIs don\u0026rsquo;t go to level 3.\nServer Must Describe Application Completely with Relations Similarly, describing the whole application state with relations and hyperlinks is a burden on the server side - at least initially. 
Designing and building a hypermedia API is simply more effort than building a level 2 API.\nTo be fair, once the API is stable it\u0026rsquo;s probably less effort to maintain a hypermedia API than a level 2 API, thanks to the decoupling features. But building a hypermedia API is a long-term investment few managers are willing to make.\nNo Standard Hypermedia Representation Another reason why hypermedia is not yet widely adopted is the lack of a standard format. There\u0026rsquo;s RFC 5988, specifying a syntax for links contained in HTTP headers. Then there\u0026rsquo;s HAL, JSON Hyper-Schema and other formats, each specifying a syntax for links within JSON resources.\nEach of these formats has a different view on which information should be included in hyperlinks. This raises uncertainty amongst developers and makes development of general-purpose hypermedia frameworks harder for both the client side and the server side.\nClient Must Still Have Domain Knowledge Hypermedia is not a silver bullet for decoupling client from server. The client still needs to know the structure of the resources it loads from the server and posts back to the server. This structure contains a large part of the domain knowledge, so the decoupling is far from complete.\nBigger Response Payload As you can see in the JSON examples above, hypermedia APIs tend to have bigger response payloads than a level 2 API. The links and relations need to be transferred from server to client somehow, after all. Thus, an API implemented with hypermedia will probably need more bandwidth than the same API without.\nHow Should I Implement My New Shiny API? In my opinion, there\u0026rsquo;s no golden way of creating a REST API.\nIf, in your project, the advantages of hypermedia outweigh the disadvantages - mainly effort in careful design and implementation - then go for hypermedia and be one of the few who can claim to have built a glorious level 3 REST API :).\nIf you\u0026rsquo;re not sure, go for level 2. 
Especially if the client is not under your control, this may be the wiser choice and save some implementation effort. However, be aware that you may not call your API a \u0026ldquo;REST API\u0026rdquo; then \u0026hellip; .\n","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0036-notebooks-1200x628-branded_huf4115935f6abd8868b7cc652cfae8e97_224633_650x0_resize_q90_box.jpg","permalink":"/rest-hypermedia/","title":"REST with Hypermedia - Hot or Not?"},{"categories":null,"contents":"","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0036-notebooks-1200x628-branded_huf4115935f6abd8868b7cc652cfae8e97_224633_650x0_resize_q90_box.jpg","permalink":"/search/","title":"Search Result"},{"categories":null,"contents":"What You\u0026rsquo;ll Get You’ll get a regular 5-minutes-to-read email with inspiration on how to grow as a software engineer (and as a person) by simplifying your habits and processes.\nThis includes:\n how to be more productive as a software engineer, how to grow personally and professionally, how to be a confident software engineer, \u0026hellip; and much more.  Have a look at previous editions of the newsletter.\nWall of Love Check out what the subscribers have to say:\nOnce again, your newsletter is short and to the point and let\u0026rsquo;s me think about a detail of my behavior / job. Thanks!\n  Because weekly it has given me some points of view and inspirational ideas to organize myself and to be a better IT professional. Thanks a lot Tom Best Regards, Thiago from Brazil, São Paulo State, São Paulo City\n  It makes so much sense. I can relate to all the problems you discussed. I will make sure I follow the suggestions. Thanks for sharing these.\n  I\u0026rsquo;m working through ways to become a more productive developer. I love the emails related to productivity and organization tips. 
Thank you!\n  It was good advice\n  clear, concise, informative\u0026hellip;..\n  Well written and informative; I appreciate the diversity of topics, both tech and non-tech\n  Very insightful! I love it\n  The insight(s) reflect both experience \u0026amp; pragmatism. Being that I am both of these too, to some extent, it always makes the read enjoyable and I find patterns of behaviour/practice that I need to stop or need to start developing. Thanks.\n  wise words, Tom. Thank you.\n  Short and informative article on a topic that always looks like something I\u0026rsquo;ve seen at some point in the past.\n  Hi Tom, I’ve been feeling unproductive and inefficient recently. Your article addresses so many key issues that I’m dealing with. Thanks for sharing and I appreciate that you continue to write/share knowledge even though you have so much going on. I will give ‘Shape’ a try :)\n  Thank you for your advice about growth in the developer career!\n  You point out things related to personal growth and not just software engineering\n  I have just joined as a SDE and really find the description of good work and great work meaningful. Thanks!\n  The suggestions regarding prioritization and how to create a plan along this line is insightful.\n  Thank you. Its really useful.\n  Thanks a lot for this piece of advice!\n  Relevant as I want to understand how to get promoted within the org\n  Your inspirational nuggets are helping me start my career better.\n  Inspiring content ! Thanks for your work.\n  Hi Tom, I read every Inspirational Nugget from you. Thanks a lot for your valuable insights via email.\n  More or less my own thoughts, but synthesized and ordered. And I needed to see others share the same vision as me. If only I could make myself to actually follow and employ this great wisdom\u0026hellip;\n  Doesn\u0026rsquo;t feel like useless email feeds cluttering the inbox. Someone took the effort to write this valuable piece of advice and I enjoyed it. 
Thanks,\n  It\u0026rsquo;s very inspirational to get these nuggets time to time. It help me to become a better software engineer.\n  Always looking forward to your content :)\n  Useful and always new content\n  I really like the \u0026ldquo;Inspirational Nugget of the Week\u0026rdquo; section. Is there a place on your website where I can see them all? I would love to share them among my coworkers. Thank you very much!\n  Just what I needed to hear today. Thanks Tom!\n  I enjoy all of these emails. Not only do I like the voice they\u0026rsquo;re written in (maybe because I too write software), but the points they make are easy to relate to, grasp and apply. In fact, I have an external Notes page where I add/keep the body of each of these for future use. Thanks a lot.\n  Great tips on productivity.\n  Speaks to me\n  Helped me remember something I knew I should be doing but had forgotten.\n  I love your newsletters. Amazing tips! Thanks :D\n  I love reading your emails about \u0026ldquo;your thinking\u0026rdquo;. It gives something to think about which in most time I not thinking about.\n   Previous Editions Check out some previous editions of the newsletter. This list is updated with a time lag so that the content is exclusive for subscribers for a time.\nDecember 2021  Take the Time to Understand Give Your Projects a Heartbeat  November 2021  Make it Simple Be Scrappy, Not Crappy Shape Tomorrow Enjoy the Process Start with Why  October 2021  Happiness Comes From Solving Problems Good Work vs. Great Work Ask Why Don\u0026rsquo;t Take Orders, Make Suggestions!  September 2021  What Does \u0026ldquo;Done\u0026rdquo; Mean? Maintain a Maintenance Schedule Integrate Work into Your Life Don\u0026rsquo;t Read Emails Before Lunch  August 2021  Sharpen the Axe Make it Easy! Set Your Fears Increase Your Surface Area for Luck  July 2021  Be Resourceful Design Your Day Write! Step Back From Your Work  June 2021  Spend Your Time Intentionally Take Structured Notes Eat That Frog! 
Pomodoros for Health and Focus Reflect Your Day  May 2021  Take Control Have a Personal Retrospective Left-Shift Quality Record Your Achievements  April 2021  Know Your Goals Keep \u0026ldquo;People Lists\u0026rdquo; Schedule Everything! Reduce Anxiety  March 2021  Do many things at once (but not in parallel) Intentional Reading - a Developer Superpower Pick a Daily Highlight to Focus On Creating and Analyzing Heap Dumps Meaningful Commit Messages  February 2021  Painless Code Formatting with EditorConfig Getting Started with AWS S3 and Spring Boot Handling Cookies with Spring and the Servlet API The Open-Closed Principle Explained  January 2021  Getting Started with GraphQL Git Merge vs. Git Rebase Getting Started with AWS CloudFormation Handling Exceptions with Spring Boot  December 2020  Implementing a Circuit Breaker with Resilience4J Using Elasticsearch with Spring Boot From Zero to Production with Spring Boot and AWS Make Time For What Matters Mock Modules with Spring Boot Get Your Thinking Juices Flowing  November 2020  Dynamic Queries with Spring Data Specifications Managing Multiple JDKs with SDKMAN! 12 Factor Apps with Spring Boot From Zero to Production with Spring Boot and AWS Find your Essential Intents and Focus on them!  
","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0036-notebooks-1200x628-branded_huf4115935f6abd8868b7cc652cfae8e97_224633_650x0_resize_q90_box.jpg","permalink":"/simplify/","title":"Simplify!"},{"categories":null,"contents":"You have successfully subscribed to the Newsletter.\nYou will receive an email with your welcome gifts shortly :).\nCheers, Tom\n","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0036-notebooks-1200x628-branded_huf4115935f6abd8868b7cc652cfae8e97_224633_650x0_resize_q90_box.jpg","permalink":"/subscribed/","title":"Thanks for signing up!"},{"categories":null,"contents":"Reflectoring Mission Statement The reflectoring blog aims to provide software developers with a comprehensive but easy-to-read learning experience that generates “aha” moments when they need to solve a specific problem.\n An article solves at least one specific problem: it explains how to solve a reader’s current question and provides working code examples for them to learn from. An article is comprehensive: it explains a certain topic, framework feature, or solution from top-to-bottom, potentially answering questions the reader doesn’t even know about yet. An article is easy to read: it is structured logically, uses conversational language with simple sentences and paragraphs and without fancy words. An article generates “aha” moments: it explains the “why” a certain solution works and in which cases the solution may not be the best one.  
General Guidelines Example Articles For your orientation, here’s a list of some of the most successful articles on reflectoring (success = high number of readers and positive reader feedback):\n https://reflectoring.io/spring-boot-test/ https://reflectoring.io/bean-validation-with-spring-boot/ https://reflectoring.io/unit-testing-spring-boot/ https://reflectoring.io/spring-boot-conditionals/ https://reflectoring.io/spring-boot-data-jpa-test/  These articles have in common that they explain a certain Spring Boot feature in-depth, answering a specific question the reader probably googled for, and more questions the reader didn’t even have yet.\nIt’s important to note that these articles don’t just reiterate the content of Spring Boot’s reference manual, but instead explain the features in simple words with code examples and sections that explain why we should or should not do it in a certain way.\nArticle Categories The main categories of the reflectoring blog contain tutorials about the Java programming language in general and the Spring Boot framework in particular. Also of interest are articles about software development and architecture best practices in a category called Software Craft.\nAny topic that is valuable to software engineers is interesting, though, so don\u0026rsquo;t hesitate to propose topics that don\u0026rsquo;t fit into these categories.\nArticle Length There is no hard-and-fast rule for how long an article should be. Given the goal of comprehensiveness, however, good reflectoring articles tend to be between 1000 and 2000 words (including code examples).\nYou can use WordCounter to count the words of your article.\nLanguage Guidelines Use Simple Language Online readers don’t want to spend much time understanding a topic. 
It’s important to keep the text simple and not use words that are ambiguous or difficult to understand.\nSome examples:\n   Don\u0026rsquo;t write this Write this instead     “utilize” “use”   “In order to” “to”    Copy your text into Grammarly, which provides some great simplification suggestions even on the free tier.\nKeep Sentences and Paragraphs Short Instead of one long sentence, use two short ones. Sub-clauses often make the text harder to read and introduce ambiguity.\nInstead of a wall of text, split the text into logical paragraphs of ideally no more than 4 lines. Important statements can sometimes even be a paragraph in their own right, even if it’s a one-liner.\nCreate a Conversation with the Reader Texts are more engaging if they read like a conversation between you and the reader. This means that you can use the pronouns “I” and “You”, as we would when speaking to someone.\nWhen explaining how to do something, however, use “we” rather than “you”, as too much “do this” and “do that” can quickly sound condescending (I’m aware that I’m doing it in this document :)).\n   Don\u0026rsquo;t write this Write this instead     “Add an annotation to class X to do Y” “We add an annotation to class X to do Y”   “The next step is to do X” “Let’s do X next”    Be Inclusive We don’t want our texts to offend anyone, so make sure to use inclusive language. Don’t assume the gender of people you use in examples. Use plural instead. Where plural isn’t applicable, use it anyway (it’s called the “singular they”).\n   Don\u0026rsquo;t write this Write this instead     “These guys\u0026hellip;” “These developers\u0026hellip;”   “By doing this, we make life easier for the developer. He will thank you for it.” “By doing this, we make life easier for the developers. They will thank you for it.”  or, using the “singular they”:  “By doing this, we make life easier for the developer. They will thank you for it.”    Make It Personal Include something of yourself in the text. 
If you have had an experience that connects to the topic at hand, share it.\nAdd a sentence in parentheses to comment on something (I sometimes share my thoughts in parentheses like this).\nAdd a bit of dry humor if the situation allows it. We’re not writing a doctoral thesis that no one really understands.\nUse Active, Not Passive Use active voice instead of passive voice wherever possible. This makes the text less convoluted and thus easier to read.\n   Don\u0026rsquo;t write this Write this instead     “This can be done by…” “We can do this by\u0026hellip;”   “This code will be executed by method XYZ.” “Method XYZ executes this code.”    Conventions Be Consistent Be consistent about spelling.\nCheck the spelling of frameworks, libraries, and products so that it matches their brand name.\nSome words can be spelled in different variations. Stick to one of them throughout the text.\nIntroduce the Article Start the article with a sentence or two about what to expect in the article. Don’t drop the readers into cold water. Give them a chance to drop out of reading right then and there if the topic is not interesting for them.\nTry to make the introduction compelling, though. Ask open questions that the article will answer to spark curiosity.\nConclude the Article Conclude the article with a … wait for it … conclusion. Summarize the key takeaways from the article in a sentence or two. Add a joke to the end if you can think of one so that the reader is rewarded for reading to the end.\nUse Title Case in Headers Use title case in headings. Check your headings on titlecase.com for the correct capitalization.\nHighlight Important Key Facts in Bold Internet readers usually don’t read an article from start to end, but they scan it. Help the “scanner”-type readers by making the main ideas of the articles bold. Don’t make single words bold, because they have too little context for scanning. 
Also, don’t make whole paragraphs bold, because it won’t help the “scanner” to find the interesting bits.\nInstead, highlight sentences and half-sentences in bold that carry a main idea and make sense without reading the rest of the text.\nLinks Link to sources you used while researching the article. These can be reference manuals or any other websites.\nMake the link part of a natural sentence instead of adding a word just for the link.\n   Don\u0026rsquo;t write this Write this instead     “You can find the reference manual here.” “You can find more information in the reference manual.”    Quality Review Your Text After a Day When you’re done writing, leave the text alone for a couple of hours or a day. Then, read through it with a fresh mind and fix all those complicated phrases and typos. You’ll be amazed at what issues you find after having your mind do something else for a while.\nCheck Your Text with Grammarly After your own review, please paste it into the free tier of Grammarly and apply all suggestions. They usually make sense. If a suggestion doesn’t make sense, don’t apply it.\nAdd Cross-Links to Other reflectoring Articles Do a quick Google search restricted to “site:reflectoring.io” to find out if there are other articles about a similar topic. If yes, think of a way to naturally link to them within the article to create cross-links.\nWorking with Code Examples Prove Your Claims with Code Examples Almost every reflectoring article should be accompanied by a working code example. The code examples are collected in the code-examples GitHub repository. Add your code to an existing module if it fits, or create a new module if there is no fitting module.\nThe code examples in the article should always be copied from the real code in the code-examples repository so that we can be sure they are correct.\nCode modules can use Maven or Gradle as a build tool. 
Make sure to include the Maven Wrapper or Gradle Wrapper so that they can run anywhere.\nIntroduce a Code Example, Then Explain Introduce a code example with a sentence like “Let’s take a look at the Foo class:” (note the colon “:”). Then paste the code example.\nBelow the code example, explain it step by step.\nDon’t start to explain code before the code example. The reader hasn’t had a chance to see the code, yet!\nExplain Code Bottom-Up If the code in one code example depends on the code of another code example, start with the independent one and only add the dependent code example afterward. This is the natural order of things and the reader’s mind can keep up.\nKeep Code Examples Small If a code example contains boilerplate code like getters and setters or methods that are irrelevant to the discussion, remove the irrelevant code from the example and replace it with a comment like “// other methods omitted” or “// …”.\nMake the code examples as small and understandable as possible.\nDon’t Modify Code Examples Manually Except for omitting code as explained above, don’t modify code manually. If you changed something in the real code, copy and paste it into the article instead of modifying the code example in the article manually. Errors will creep in otherwise.\nUse Package-Private Visibility Don’t use the public modifier as the default. Use package-private visibility where possible. This keeps the code examples more focused on the important things. Also, it’s a good practice for dependency hygiene and we want to teach good practices :).\nFormat Code When you copy the code example into the article, reduce the indentation to 2 spaces to make it more compact and to reduce the chance for scrolling. You can do this by searching and replacing the existing indentation.\nLink to the Code Examples Repository Link to the code example in the repositories at the start and the end of the article. 
Link directly to the module that contains your example, for instance, https://github.com/thombergs/code-examples/tree/master/logging.\nAt the start of the article, use this include after the introductory paragraph:\n{{% github \u0026#34;url_to_example_module\u0026#34; %}} This will include a heading with a little default text and a link.\nAt the end of the article, remind the readers that they can look up the code on GitHub like this.\nElements You Can Use GitHub Link After the intro paragraph, you should include a link to the example code on GitHub (if there is any example code):\n{{% github \u0026#34;url_to_example_module\u0026#34; %}} This will be rendered into an H2 with a link to the GitHub project like this:\n Example Code This article is accompanied by a working code example on GitHub. Tables You can use standard Markdown tables:\n| Header 1 | Header 2 | |----------|----------| | One | Two | | Three | Four | They will be rendered into nice-looking tables like this:\n   Header 1 Header 2     One Two   Three Four    Images Instead of standard Markdown images, please use this shortcode for any image you want to include:\n{{% image src=\u0026quot;images/posts/your-article/your-image.png\u0026quot; alt=\u0026quot;A meaningful description of the image.\u0026quot; %}} Asides For information that doesn\u0026rsquo;t fit into the text, but rather into an \u0026ldquo;aside\u0026rdquo;, you can use information boxes like below.\nA danger box Use this markup to create a \u0026ldquo;danger\u0026rdquo; box:\n{{% danger title=\u0026quot;Title of your danger box\u0026quot; %}} Any markdown content {{% /danger %}}  A warning box Use this markup to create a \u0026ldquo;warning\u0026rdquo; box:\n{{% warning title=\u0026quot;Title of your warning box\u0026quot; %}} Any markdown content {{% /warning %}}  An info box Use this markup to create an \u0026ldquo;info\u0026rdquo; box:\n{{% info title=\u0026quot;Title of your info box\u0026quot; %}} Any markdown content {{% /info %}}  I 
want to Write!  Full name *  Email Address *  Tell me a bit about you *   Send Now    ","date":"January 1, 1","image":"https://reflectoring.io/images/stock/0016-pen-1200x628-branded_hu01476d2ce863620c75f8f9d54074a6bf_114085_650x0_resize_q90_box.jpg","permalink":"/contribute/writing-guide/","title":"Writing Guide"}]