1. Abstract
This book contains course notes covering Enterprise Computing with Java. This comprehensive course explores core application aspects for developing, configuring, securing, deploying, and testing a Java-based service using a layered set of modern frameworks and libraries that can be used to develop full services and microservices to be deployed within a container. The emphasis of this course is on the center of the application (e.g., Spring, Spring Boot, Spring Data, and Spring Security) and will lay the foundation for other aspects (e.g., API, SQL and NoSQL data tiers, distributed services) covered in related courses.
Students will learn through lecture, examples, and hands-on experience in building multi-tier enterprise services using a configurable set of server-side technologies.
Students will learn to:
- Implement flexibly configured components and integrate them into different applications using inversion of control, injection, and numerous configuration and auto-configuration techniques
- Implement unit and integration tests to demonstrate and verify the capabilities of their applications using JUnit
- Implement basic API access to service logic using modern RESTful approaches that include JSON and XML
- Implement basic data access tiers to relational and NoSQL databases using the Spring Data framework
- Implement security mechanisms to control access to deployed applications using the Spring Security framework
Using modern development tools, students will design and implement several significant programming projects using the above-mentioned technologies and deploy them to an environment that they will manage.
The course is continually updated and currently based on Java 17, Spring 5.x, and Spring Boot 2.x.
Enterprise Computing with Java (605.784.8VL) Course Syllabus DRAFT
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
2. Course Description
2.1. Meeting Times/Location
- Wednesdays, 4:30-7:10pm EST
- via Zoom Meeting ID: 944 0484 0738
2.2. Course Goal
The goal of this course is to master the design and development challenges of a single application instance to be deployed in an enterprise-ready Java application framework. This course provides the bedrock for materializing broader architectural solutions within the body of a single instance.
2.3. Description
This comprehensive course explores core application aspects for developing, configuring, securing, deploying, and testing a Java-based service using a layered set of modern frameworks and libraries that can be used to develop full services and microservices to be deployed within a container. The emphasis of this course is on the center of the application (e.g., Spring, Spring Boot, Spring Data, and Spring Security) and will lay the foundation for other aspects (e.g., API, SQL and NoSQL data tiers, distributed services) covered in related courses.
Students will learn through lecture, examples, and hands-on experience in building multi-tier enterprise services using a configurable set of server-side technologies.
Students will learn to:
- Implement flexibly configured components and integrate them into different applications using inversion of control, injection, and numerous configuration and auto-configuration techniques
- Implement unit and integration tests to demonstrate and verify the capabilities of their applications using JUnit
- Implement basic API access to service logic using modern RESTful approaches that include JSON and XML
- Implement basic data access tiers to relational and NoSQL (Mongo) databases using the Spring Data framework
- Implement security mechanisms to control access to deployed applications using the Spring Security framework
Using modern development tools, students will design and implement several significant programming projects using the above-mentioned technologies and deploy them to an environment that they will manage.
The course is continually updated and currently based on Java 17, Spring 5.x, and Spring Boot 2.x.
2.4. Student Background
- Prerequisite: 605.481 Distributed Development on the World Wide Web or equivalent
- Strong Java programming skills are assumed
- Familiarity with Maven and IDEs is helpful
- Familiarity with Docker (as a user) can be helpful in setting up a local development environment quickly
2.5. Student Commitment
- Students should be prepared to spend between 6 and 10 hours a week outside of class. Time spent can be made efficient by proactively keeping up with class topics and actively collaborating with the instructor and other students in the course.
2.6. Course Text(s)
The course uses no mandatory text. The course comes with many examples, course notes for each topic, and references to other free Internet resources.
2.7. Required Software
Students are required to establish a local development environment.
- Software you will need to load onto your local development environment:
  - Git Client
  - Java JDK 17
  - Maven 3 (>= 3.6.3)
  - IDE (IntelliJ IDEA Community Edition or Pro, or Eclipse/STS)
    - The instructor will be using IntelliJ IDEA CE in class, but Eclipse/STS is also a good IDE option. It is best to use what you are already comfortable using.
  - JHU VPN (Open Pulse Secure) — workarounds available
- Software you will ideally load onto your local development environment:
  - Docker
    - Docker can be used to automate software installation and setup and implement deployment and integration testing techniques. Several pre-defined images, ready to launch, will be made available in class.
  - curl or something similar
  - Postman API Client or something similar
- Software you will need to install if you do not have Docker:
  - MongoDB
- High visibility software you will use that will get downloaded and automatically used through Maven:
  - JUnit
  - SLF4J/Logback
  - a relational database (H2 Database Engine) and JPA persistence provider (Hibernate)
  - application framework (Spring Boot 2.x, Spring 5.x)
2.8. Course Structure
The course materials consist of a large set of examples that you will download, build, and work with locally. The course also provides a set of detailed course notes for each lecture and an associated assignment active at all times during the semester. Topics and assignments have been grouped into application development, service/API tier, data tier, and async processing. Each group consists of multiple topics that span multiple weeks.
The examples are available in a Gitlab public repository. The course notes are available in HTML and PDF format for download. All content or links to content is published on the course public website. To help you locate and focus on current content and not be overwhelmed with the entire semester, examples and links to content are activated as the semester progresses. A list of "What is new" and "Student TODOs" is published weekly before class to help you keep up to date and locate relevant material.
2.9. Grading
- 100 >= A >= 90 > B >= 80 > C >= 70 > F

Assessment                       | % of Semester Grade
---------------------------------|----------------------------------
Class/Newsgroup Participation    | 10% (9pm EST, Wed weekly cut-off)
Assignment 0: Application Build  | 5% (##)
Assignment 1: Application Config | 20%
Assignment 2: Web API            | 15%
Assignment 3: Security           | 15%
Assignment 4: Deployment         | 10%
Assignment 5: Database           | 25%

Do not host your course assignments in a public Internet repository. If using an Internet repository, only the instructor should have access.
- Assignments will be done individually and most are graded 100 through 0, based on posted project grading criteria.
- (##) Assignment 0 will be graded on a done (100)/not-done (0) basis and must be turned in on time in order to qualify for a REDO. The intent of this requirement is to promote early activity with development and early exchange of questions/answers and artifacts between students and instructor.
- Class/newsgroup participation will be based on instructor judgment of whether the student has made a contribution to either the classroom or the newsgroup on a consistent weekly basis. A newsgroup contribution may be a well-formed technical observation/lesson learned, a well-formed question that leads to a well-formed follow-up from another student, or a well-formed answer/follow-up to another student's question. Well-formed submissions are those that clearly summarize the topic in the subject and clearly describe the objective, environment, and conditions in the body. The instructor will be the judge of whether a newsgroup contribution meets the minimum requirements for the week. The intent of this requirement is to promote active and open collaboration between class members.
  - Weekly cut-off for newsgroup contributions is each Wed @9pm EST
2.10. Grading Policy
- Late assignments will be deducted 10 pts/week late, starting after the due date/time, with one exception: a student may submit a single project up to 4 days late without receiving approval and still receive complete credit. Students taking advantage of this "free first pass" should still submit an e-mail to the instructor and grader(s) notifying them of their intent.
- Class attendance is strongly recommended, but not mandatory. The student is responsible for obtaining any written or oral information covered during their absence. Each session will be recorded — barring technical error — and a link to the recording will be posted on Canvas.
2.11. Academic Integrity
Collaboration on ideas and approaches is strongly encouraged. You may use partial solutions provided by others as a part of your project submission. However, the bulk usage of another student's implementation or project will result in a 0 for the project. There is a difference between sharing ideas/code snippets and turning in someone else's work as your own. When in doubt, document your sources.
Do not host your course assignments in a public Internet repository.
2.12. Instructor Availability
I am available at least 20 minutes before class, during breaks, and most times after class for extra discussion. I monitor/respond to e-mails and the newsgroup discussions and hold ad-hoc office hours via Zoom in the evening and early morning hours.
2.13. Communication Policy
I provide detailed answers to assignment and technical questions through the course newsgroup. You can get individual, non-technical questions answered via email, but please direct all technical and assignment questions to the newsgroup. If you have a question or make a discovery — it is likely pertinent to most of the class, and you are likely just the first to identify it.
- Newsgroup: [Canvas Course Discussions]
- Instructor Email: jim.stafford@jhu.edu
I typically respond to all e-mails and newsgroup posts in the evening and early morning hours. Rarely will a response take longer than 24 hours. It is very common for me to ask for a copy of your broken project so that I can provide more analysis and precise feedback. This is commonly transmitted either as an e-mail attachment, a link to a branch in a private repository, or an early submission in Canvas.
2.14. Office Hours
Students needing further assistance are welcome to schedule a web meeting using Zoom Conferencing. Most conference times will be between 8 and 10pm EST on weekdays and 6am to 5pm EST on weekends.
3. Course Assignments
3.1. General Requirements
- Assignments must be submitted to Canvas with source code in a standard archive file. "target" directories with binaries are not needed and add unnecessary size.
- All assignments must be submitted with a README that points out how the project meets the assignment requirements.
- All assignments must be written to build and run in the grader's environment in a portable manner using Maven 3. This will be clearly spelled out during the course and you may submit partial assignments early to get build portability feedback (not early content grading feedback).
- Test cases must be written using JUnit 5 and run within the Maven surefire and failsafe environments.
- The course repository will have an assignment-support and assignment-starter set of modules.
  - The assignment-support modules are to be referenced as a dependency and not cloned into student submissions.
  - The assignment-starter modules are skeletal examples of work to be performed in the submitted assignment.
-
3.2. Submission Guidelines
You should test your application prior to submission by
-
Verify that your project does not require a pre-populated database. All setup must come through automated test setup.
This will make sure you are not depending on any residue schema or data in your database.
-
Run maven clean and archive your project from the root without pre-build target directory files.
This will help assure you are only submitting source files and are including all necessary source files within the scope of the assignment.
-
Move your localRepository (or set your settings.xml#localRepository value to a new location — do not delete your primary localRepository)
This will hide any old module SNAPSHOTs that are no longer built by the source (e.g., GAV was changed in source but not sibling dependency).
-
Explode the archive in a new location and run mvn clean install from the root of your project.
This will make sure you do not have dependencies on older versions of your modules or manually installed artifacts. This, of course, will download all project dependencies and help verify that the project will build in other environments. This will also simulate what the grader will see when they initially work with your project.
-
Make sure the README documents all information required to demonstrate or navigate your application and point out issues that would be important for the evaluator to know (e.g., "the instructor said…")
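A minimal command-line sketch of this checklist (the module name, archive name, and target paths are hypothetical):
$ cd my-assignment                    # root of your project source tree (hypothetical name)
$ mvn clean                           # remove previously built target/ directories
$ cd ..
$ zip -r my-assignment.zip my-assignment          # archive source only, no target/ binaries
$ unzip -q my-assignment.zip -d /tmp/submission   # explode the archive in a new location
$ cd /tmp/submission/my-assignment
$ mvn clean install -Dmaven.repo.local=/tmp/test-repo   # build against a fresh localRepository
The maven.repo.local system property is a one-time alternative to editing the settings.xml#localRepository value; both hide stale SNAPSHOT artifacts from the build.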
4. Syllabus
#  | Date  | Lectures                     | Assignments/Notes
---|-------|------------------------------|-------------------
   | Aug31 |                              |
   | Sep07 |                              |
   | Sep14 | Logging notes                |
4  | Sep21 | Testing notes                |
   | Sep28 |                              |
   | Oct05 |                              |
   | Oct12 |                              |
   | Oct19 |                              |
   | Oct26 | AOP and Method Proxies notes |
   | Nov02 |                              |
   | Nov09 |                              |
   | Nov16 |                              |
   | Nov23 | Thanksgiving                 | no class
   | Nov30 |                              |
   | Dec07 | Heroku Database Deployments notes, Validation notes |
Development Environment Setup
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
5. Introduction
Participation in this course requires a local development environment. Since competence using Java is a prerequisite to taking the course, much of the content here is likely already installed in your environment.
Software versions do not have to be the latest and greatest; my JDK/Maven environment was close to two years old when authoring this guide. For the most part, the firmest requirement is that the JDK must be version 17 — or at least your source code needs to stick to Java 17 features to be portable to grading environments.
You must manually download and install some of the software locally (e.g., IDE). Some have options (e.g., Docker/Mongo or Mongo). The remaining set will download automatically and run within Maven. Some software is needed day 1. Others can wait.
Rather than repeat detailed software installation procedures for the various environments, I will list each one, describe its purpose in the class, and direct you to one or more options to obtain. Please make use of the course newsgroup if you run into trouble or have questions.
5.1. Goals
The student will:
- set up required tooling in the local development environment and be ready to work with the course examples
5.2. Objectives
At the conclusion of these instructions, the student will have:
- installed Java JDK 17
- installed Maven 3
- installed a Git Client and checked out the course examples repository
- installed a Java IDE (IntelliJ IDEA Community Edition or Eclipse/STS)
- installed a Web API Client tool
- optionally installed Docker
- conditionally installed Mongo
6. Software Setup
6.1. Java JDK (immediately)
You will need a JDK 17 compiler and its accompanying JRE environment immediately in class. Everything we do will revolve around a JVM.
- For Mac and Unix-like platforms, SDKMan is a good source for many of the modern JDK images. You can also use brew or your package manager (e.g., yum, apt).
> brew search jdk | grep 17
openjdk@17
$ sdk list java | egrep 'ms|open' | grep 17
Microsoft | | 17.0.3 | ms | | 17.0.3-ms
# apt-cache search openjdk-17 | egrep 'jdk |jre '
openjdk-17-jdk - OpenJDK Development Kit (JDK)
openjdk-17-jre - OpenJDK Java runtime, using Hotspot JIT
- For Windows Users - Microsoft has JDK images available for direct download. These are the same downloads that SDKMan uses when using the ms option.
Windows x64 zip microsoft-jdk-17.0.3-windows-x64.zip sha256 / sig
Windows x64 msi microsoft-jdk-17.0.3-windows-x64.msi sha256
After installing and placing the bin directory in your PATH, you should be able to execute the following commands and output a version 17.x of the JRE and compiler.
$ java --version
openjdk 17.0.3 2022-04-19
OpenJDK Runtime Environment Temurin-17.0.3+7 (build 17.0.3+7)
OpenJDK 64-Bit Server VM Temurin-17.0.3+7 (build 17.0.3+7, mixed mode, sharing)
$ javac --version
javac 17.0.3
6.2. Git Client (immediately)
You will need a Git client immediately in class. Note that most IDEs have a built-in/internal Git client capability, so the command line client shown here may not be absolutely necessary. If you choose to use your built-in IDE Git client, just translate any command-line instructions to GUI commands.
Download and install a Git Client.
- All platforms - Git-SCM
I have git installed through brew on MacOS.
$ git --version
git version 2.36.0
Check out the course baseline.
$ git clone https://gitlab.com/ejava-javaee/ejava-springboot.git
...
$ ls | sort
app build common coursedocs env intro pom.xml ...
Each week you will want to update your copy of the examples as I update and release changes.
$ git checkout master   # switches to master branch
$ git pull              # merges in changes from origin
Updating Changes to Modified Directory
If you have modified the source tree, you can save your changes to a new branch using the following:
$ git status                        # show me which files I changed
$ git diff                          # show me what the changes were
$ git checkout -b new-branch        # create new branch
$ git commit -am "saving my stuff"  # commit my changes to new branch
$ git checkout master               # switch back to course baseline
$ git pull |
Saving Modifications to an Existing Branch
If you have made modifications to the source tree in the wrong branch, you can save your changes in an existing branch using the following:
$ git stash                         # save my changes in a temporary area
$ git checkout existing-branch      # go to existing branch
$ git stash pop                     # replay my saved changes onto the existing branch
$ git commit -am "saving my stuff"  # commit my changes to existing branch
$ git checkout master               # switch back to course baseline
$ git pull |
6.3. Maven 3 (immediately)
You will need Maven immediately in class. We use Maven to create repeatable and portable builds in class. This software build system is rivaled by Gradle. However, everything presented in this course is based on Maven and there is no feasible way to make that optional.
Download and install Maven 3.
- All platforms - Apache Maven Project
I have Maven installed through brew on MacOS. Anything fairly recent should be good.
Place the $MAVEN_HOME/bin directory in your $PATH so that the mvn command can be found.
$ mvn --version
Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)
Maven home: /usr/local/Cellar/maven/3.8.6/libexec
Java version: 17.0.3, vendor: Eclipse Adoptium, runtime: .../.sdkman/candidates/java/17.0.3-tem
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "12.4", arch: "x86_64", family: "mac"
Set up any custom settings in $HOME/.m2/settings.xml. This is an area where you and I can define environment-specific values referenced by the build.
<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
<!--
<localRepository>somewhere_else</localRepository>
-->
<offline>false</offline>
<mirrors>
<!-- uncomment when JHU unavailable
<mirror>
<id>ejava-dependencies</id>
<mirrorOf>ejava-nexus</mirrorOf>
<url>file://${user.home}/.m2/repository/</url>
</mirror>
--> (1) (2)
</mirrors>
<profiles>
</profiles>
<activeProfiles>
<!--
<activeProfile>aProfile</activeProfile>
-->
</activeProfiles>
</settings>
1 | make sure your ejava-springboot repository:main branch is up to date and installed (i.e., mvn clean install -f ./build; mvn clean install) prior to using the local mirror |
2 | the URL in the mirror must be consistent with the localRepository value. The value shown here assumes the default, $HOME/.m2/repository. |
Attempt to build the source tree. Report any issues to the course newsgroup.
$ pwd
.../ejava-springboot
$ mvn install -f build
...
[INFO] ----------------------------------------
[INFO] BUILD SUCCESS
[INFO] ----------------------------------------
$ mvn clean install
...
[INFO] ----------------------------------------
[INFO] BUILD SUCCESS
[INFO] ----------------------------------------
6.4. Java IDE (immediately)
You will realistically need a Java IDE very early in class.
If you are a die-hard vi, emacs, or text editor user — you can do a lot with your current toolset and Maven.
However, when it comes to code refactoring, inspecting framework API classes, and debugging, there is no substitute for a good IDE.
I have used Eclipse/STS for many years and some students in previous semesters have used Eclipse installations from a previous Java-development course.
They are free and work well.
I will actively be using IntelliJ IDEA Community Edition.
The community edition is free and contains most of the needed support.
The professional edition is available for 1 year to anyone supplying a .edu e-mail.
It is up to you what you use. Using something familiar is always the best choice.
Download and install an IDE for Java development.
- IntelliJ IDEA Community Edition
  - All platforms - Jetbrains IntelliJ
- Eclipse/STS
  - All platforms - Spring.io /tools
Load and attempt to run the examples in app/app-build/java-app-example.
6.5. Web API Client tool (not immediately)
Within the first month of the course, it will be helpful for you to have a web API client that can issue POST, PUT, and DELETE commands in addition to GET commands over HTTP and (one-way TLS) HTTPS. This will not be necessary until a few weeks into the semester.
Some options include:
- curl - command line tool popular in Unix environments and likely available for Windows. All of my Web API call examples are done using curl.
- Postman API Client - a UI-based tool for issuing and viewing web requests/responses. I personally do not like how "enterprisey" Postman has become. It used to simply be a browser plugin tool. However, the free version works and seems to only require a sign-up login.
$ curl -v -X GET https://ep.jhu.edu/
<!DOCTYPE html>
<html class="no-js" lang="en">
<head>
...
<title>Johns Hopkins Engineering | Part-Time & Online Graduate Education</title>
...
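For orientation, the following shows the general shape of a POST call with a JSON payload that you will eventually issue against your own application. The URL and payload here are hypothetical:
$ curl -v -X POST http://localhost:8080/api/quotes \
    -H "Content-Type: application/json" \
    -d '{"author":"anonymous","text":"an example quote"}'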
6.6. Optionally Install Docker (not immediately)
It seems everything in this world has become containerized — and for good reason. Once the initial investment of installing Docker has been tackled, software deployments, installations, and executions become very portable and easy to achieve.
I am still a bit tentative in requiring Docker for the class. I will make it optional for the students who cannot install. I will leverage Docker more heavily if I get a sense that all students have access. Let me know where you stand on this optional install.
Optionally download and install Docker. Docker can serve three purposes in class:
- automates example database and JMS resource setup
- provides a popular example deployment packaging
- provides an integration test platform option
Without a Docker installation, you will:
- need to manually install MongoDB
- be limited to conceptual coverage of deployment and testing options in class
All platforms - Docker.com
- Also install - docker-compose
docker-compose is now installed with Docker for Docker Desktop installations. It may not be necessary to do a separate installation of docker-compose.
$ docker -v
Docker version 20.10.7, build f0df350
$ docker-compose -v
docker-compose version 1.29.2, build 5becea4c
6.6.1. docker-compose Test Drive
With the course baseline checked out, you should be able to perform the following. Your results for the first execution will also include the download of images.
$ docker-compose -p ejava up -d mongodb postgres (1)(2)
Creating ejava_postgres_1 ... done
Creating ejava_mongodb_1 ... done
1 | -p option sets the project name to a well-known value (directory name is default) |
2 | up starts services and -d runs them all in the background |
$ docker-compose -p ejava stop mongodb postgres (1)
Stopping ejava_mongodb_1 ... done
Stopping ejava_postgres_1 ... done
$ docker-compose -p ejava rm -f mongodb postgres (2)
Going to remove ejava_mongodb_1, ejava_postgres_1
Removing ejava_mongodb_1 ... done
Removing ejava_postgres_1 ... done
1 | stop pauses the running container |
2 | rm removes state assigned to the stopped container. -f does not request confirmation. |
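For orientation only, a compose file providing the two services above might look like the following minimal sketch. The image versions, ports, and credentials here are illustrative and not the course-supplied file:
version: "3.8"
services:
  mongodb:
    image: mongo:4.4
    ports:
      - "27017:27017"             # expose the default Mongo port to localhost
  postgres:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: secret   # the postgres image requires a password
    ports:
      - "5432:5432"               # expose the default Postgres port to localhost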
6.7. MongoDB (later)
You will need MongoDB in the later 1/3 of the course. It is somewhat easy to install locally, but it is a mindless snap — configured exactly the way we need it — if we use Docker. Note that you will eventually need an Internet accessible MongoDB instance later in the course, so feel free to activate a free Atlas account at any time.
If you have not and will not be installing Docker, you will need to install and setup a local instance of Mongo.
- All platforms - MongoDB
6.8. Heroku Account (later)
Mid-way through the course you will hit a very exciting point where you will begin deploying your assignments to the Internet for all to see.
We will be leveraging the Heroku Internet hosting platform. Heroku supports deploying Spring Boot executable JARs as well as Docker images. You will need an account and download their "toolbelt" set of commands for uploading, configuring, and managing deployments.
Create an account and download the Heroku toolbelt.
- All platforms - Heroku
6.9. Mongo Atlas Account (later)
In the last 1/3 of the course, when deploying an application based on RDBMS and MongoDB, you will need access to an Internet accessible RDBMS and MongoDB instance. You will be able to provision a free RDBMS database directly from Heroku. You will be able to provision a free, Internet accessible MongoDB instance via Mongo Atlas.
Create an account and provision an Internet accessible MongoDB.
All platforms - Mongo Atlas
Introduction to Enterprise Java Frameworks
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
7. Introduction
7.1. Goals
The student will learn:
- constructs and styles for implementing code reuse
- what is a framework
- what has enabled frameworks
- a historical look at Java frameworks
7.2. Objectives
At the conclusion of this lecture, the student will be able to:
- identify the key difference between a library and framework
- identify the purpose for a framework in solving an application solution
- identify the key concepts that enable a framework
- identify specific constructs that have enabled the advance of frameworks
- identify key Java frameworks that have evolved over the years
8. Code Reuse
Code reuse is the use of existing software to create new software. [1]
We leverage code reuse to help solve either repetitive or complex tasks so that we are not repeating ourselves, we reduce errors, and we achieve more complex goals.
8.1. Code Reuse Trade-offs
On the positive side, we do this because we have confidence that we can delegate a portion of our job to code that has been proven to work. We should not need to re-test what we are using.
On the negative side, reuse can add dependencies bringing additional size, complexity, and risk to our solution. If all you need is a spoon — do you need to bring the entire kitchen?
8.2. Code Reuse Constructs
Code reuse can be performed using several structural techniques (a short sketch of the interface technique follows this list):
- Method Call: We can wrap functional logic within a method within our own code base. We can make calls to this method from the places that require that task performed.
- Classes: We can capture state and functional abstractions in a set of classes. This adds some modularity to related reusable method calls.
- Interfaces: Abstract interfaces can be defined as placeholders for things needed but supplied elsewhere. This could be because of different options provided or details being supplied elsewhere.
- Modules: Reusable constructs can be packaged into separate physical modules so that they can be flexibly used or not used by our application.
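The following minimal sketch (hypothetical names, not course code) shows the interface construct referenced above: the reusable code depends only on an abstraction, and the details are supplied elsewhere.
// reusable code declares what it needs through an abstract interface
interface Validator {
    boolean isValid(String input);
}

// ...and performs its task against the abstraction, unaware of the details
class Processor {
    private final Validator validator;  // supplied elsewhere

    Processor(Validator validator) {
        this.validator = validator;
    }

    String process(String input) {
        if (!validator.isValid(input)) {
            throw new IllegalArgumentException("invalid input");
        }
        return input.trim().toUpperCase();  // the reusable task
    }
}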
8.3. Code Reuse Styles
There are two basic styles of code reuse, and they primarily have to do with control.
Figure 1. Library/Framework/Code Relationship [2]
It's not always a one-or-the-other style. Libraries can have mini frameworks within them. Even the JSON/XML parser example can be a mini-framework of customizations and extensions.
9. Frameworks
9.1. Framework Informal Description
A successful software framework is a body of code that has been developed from the skeletons of successful and unsuccessful solutions of the past and present within a common domain of challenge. A framework is a generalization of solutions that provides for key abstractions, opportunity for specialization, and supplies default behavior to make the on-ramp easier and also appropriate for simpler solutions.
-
"We have done this before. This is what we need and this is how we do it."
A framework is much bigger than a pattern instantiation. A pattern is commonly at the level of specific object interactions. We typically have created or commanded something at the completion of a pattern — but we have a long way to go in order to complete our overall solution goal.
-
Pattern Completion: "that is not enough — we are not done"
-
Framework Completion: "I would pay (or get paid) for that!"
A successful framework is more than many patterns grouped together. Many patterns together is just a sea of calls — like a large city street at rush hour. There is a pattern of when people stop and go, make turns, speed up, or yield to let someone into traffic. Individual tasks are accomplished, but even if you could step back a bit — there is little to be understood by all the interactions.
-
"Where is everyone going?"
A framework normally has a complex purpose. We have typically accomplished something of significance or difficulty once we have harnessed a framework to perform a specific goal. Users of frameworks are commonly not alone. Similar accomplishments are achieved by others with similar challenges but varying requirements.
-
"This has gotten many to their target. You just need to supply …"
Well designed and popular frameworks can operate at different scales — not just one-size-fits-all all-of-the-time. This could be for different sized environments or simply for developers to have a workbench to learn with, demonstrate, or develop components for specific areas.
-
"Why does the map have to be actual size?"
9.2. Framework Characteristics
The following distinguishing features for a framework are listed on Wikipedia. [3] I will use them to structure some further explanations.
- Inversion of Control (IoC): Unlike a procedural algorithm where our concrete code makes library calls to external components, a framework calls our code to do detailed things at certain points. All the complex but reusable logic has been abstracted into the framework.
  - "Don't call us. We'll call you." is a very common phrase to describe inversion of control.
- Default Behavior: Users of the framework do not have to supply everything. One or more selectable defaults try to do the common, right thing.
  - Remember — the framework developers have solved this before and have harvested the key abstractions and processing from the skeletal remains of previous solutions.
- Extensibility: To solve the concrete case, users of the framework must be able to provide specializations that are specific to their problem domain.
  - Framework developers — understanding the problem domain — have pre-identified which abstractions will need to be specialized by users. If they get that wrong, it is a sign of a bad framework.
- Non-modifiable Framework Code: A framework has a tangible structure: well-known abstractions that perform well-defined responsibilities. That tangible aspect is visible in each of the concrete solutions and is what makes the product of a framework immediately understandable to other users of the framework.
  - "This is very familiar."
10. Framework Enablers
10.1. Dependency Injection
A process to enable Inversion of Control (IoC), whereby objects define their dependencies [4] and the manager (the "Container") assembles and connects the objects according to definitions.
The "manager" can be your setup code ("POJO" setup) or in realistic cases a "container" (see later definition)
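A minimal sketch of the idea, using hypothetical classes: the object declares its dependency, and the manager (here, plain "POJO setup" code standing in for the container) assembles and connects the objects.
// the bean declares its dependency; it never constructs or looks it up itself
interface Greeter {
    String salutation();
}

class GreetingService {
    private final Greeter greeter;

    GreetingService(Greeter greeter) {  // the dependency is injected
        this.greeter = greeter;
    }

    String greet(String name) {
        return greeter.salutation() + ", " + name;
    }
}

public class PojoSetup {
    public static void main(String[] args) {
        // our code plays the role of the container: choose, assemble, connect
        Greeter greeter = () -> "Hello";
        GreetingService service = new GreetingService(greeter);
        System.out.println(service.greet("world"));
    }
}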
10.2. POJO
A Plain Old Java Object (POJO) is what the name says it is. It is nothing more than an instantiated Java class.
A POJO normally will address the main purpose of the object and can be missing details or dependencies that give it complete functionality. Those details or dependencies are normally for specialization and extensibility that is considered outside of the main purpose of the object.
- Example: a POJO may assume inputs are valid but does not know validation rules.
10.3. Component
A component is a fully assembled set of code (one or more POJOs) that can perform its duties for its clients. A component will normally have a well-defined interface and a well-defined set of functions it can perform.
A component can have zero or more dependencies on other components, but there should be no further mandatory assembly once your client code gains access to it.
10.4. Bean
A generalized term that tends to refer to an object in the range of a POJO to a component that encapsulates something. A supplied "bean" takes care of aspects that we do not need to have knowledge of.
In Spring, the objects that form the backbone of your application and that are managed by the Spring IoC container are called beans. A bean is an object that is instantiated, assembled, and managed by a Spring IoC container. Otherwise, a bean is simply one of many objects in your application. Beans, and the dependencies among them, are reflected in the configuration metadata used by a container. [4]
Introduction to the Spring IoC Container and Beans
You will find that I commonly use the term "component" in the lecture notes — to be a bean that is fully assembled and managed by the container.
10.5. Container
A container is the assembler and manager of components.
Docker and Spring are two popular containers that work at two different levels but share the same core responsibility.
10.5.1. Docker Container Definition
- Docker supplies a container that assembles and packages software so that it can be generically executed on remote platforms.
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. [5]
Use containers to Build Share and Run your applications
10.5.2. Spring Container Definition
- Spring supplies a container that assembles and packages software to run within a JVM.
(The container) is responsible for instantiating, configuring, and assembling the beans. The container gets its instructions on what objects to instantiate, configure, and assemble by reading configuration metadata. The configuration metadata is represented in XML, Java annotations, or Java code. It lets you express the objects that compose your application and the rich interdependencies between those objects. [6]
Container Overview
10.6. Interpose
Containers do more than just configure and assemble simple POJOs. Containers can apply layers of functionality onto beans when wrapping them into components (a proxy-based sketch follows this list). Examples:
- Perform validation
- Enforce security constraints
- Manage transactions for backend resources
- Perform a method in a separate thread
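One common mechanism behind interpose is a method proxy wrapped around the bean. The following is a hedged sketch using the JDK's java.lang.reflect.Proxy; the interface and the security check are hypothetical:
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

interface AccountService {
    void withdraw(double amount);
}

public class InterposeExample {
    public static void main(String[] args) {
        AccountService bean = amount -> System.out.println("withdrew " + amount);

        // layer applied by the "container" that runs before every bean call
        InvocationHandler security = (proxy, method, methodArgs) -> {
            System.out.println("checking caller permissions...");  // interposed behavior
            return method.invoke(bean, methodArgs);                // delegate to the bean
        };

        AccountService component = (AccountService) Proxy.newProxyInstance(
                AccountService.class.getClassLoader(),
                new Class<?>[]{AccountService.class},
                security);

        component.withdraw(100.0);  // the interposed check runs, then the POJO logic
    }
}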
11. Language Impact on Frameworks
As stated earlier, frameworks provide a template of behavior — allowing for configuration and specialization. Over the years, the ability to configure and to specialize has gone through significant changes with language support.
11.1. XML Configurations
Prior to Java 5, the primary way to identify components was with an XML file. The XML file would identify a bean class provided by the framework user. The bean class would either implement an interface or comply with JavaBean getter/setter conventions.
11.1.1. Inheritance
Early JavaEE EJB defined a set of interfaces that represented things like stateless and stateful sessions and persistent entity classes. End-users would implement the interface to supply specializations for the framework. These interfaces had many callbacks that were commonly not needed but had to be tediously implemented with noop return statements — which produced some code bloat.
11.1.2. Java Reflection
Early Spring bean definitions used some interface implementation, but more heavily leveraged compliance to JavaBean setter/getter behavior and Java reflection. Bean classes listed in the XML were scanned for methods that started with "set" or "get" (or anything else specially configured) and would form a call to them using Java reflection. This eliminated much of the need for strict interfaces and noop boilerplate return code.
11.2. Annotations
By the time Java 5 and annotations arrived in late 2004, the Java framework worlds were drowning in XML. During that early time, everything was required to be explicitly defined. There were no defaults.
Although changes did not seem immediate, the JavaEE frameworks like EJB 3.0/JPA 1.0 provided a substantial example for the framework communities in 2006. They introduced "sane" defaults and a primary (XML) and secondary (annotation) override system to give full choice and override of how to configure. Many things just worked right out of the box and only required a minor set of annotations to customize.
Spring went a step further and created a Java Configuration capability to be a 100% replacement for the old XML configurations. XML files were replaced by Java classes. XML bean definitions were replaced by annotated factory methods. Bean construction and injection was replaced by instantiation and setter calls within the factory methods.
Both JavaEE and Spring supported class level annotations for components that were very simple to instantiate and followed standard injection rules.
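A representative sketch of that Java configuration style (the bean classes are hypothetical; @Configuration and @Bean are the actual Spring annotations):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

class Greeter {
    String salutation() { return "Hello"; }
}

class GreetingService {
    private final Greeter greeter;
    GreetingService(Greeter greeter) { this.greeter = greeter; }
}

@Configuration                  // the Java class replaces the old XML file
class AppConfig {
    @Bean                       // the annotated factory method replaces an XML <bean> definition
    Greeter greeter() {
        return new Greeter();
    }

    @Bean                       // construction and injection are plain Java calls
    GreetingService greetingService(Greeter greeter) {
        return new GreetingService(greeter);
    }
}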
11.3. Lambdas
Java 8 brought in lambdas and functional processing, which from a strictly syntactical viewpoint is primarily a shorthand for writing an implementation to an interface (or abstract class) with only one abstract method.
You will find many instances in modern libraries where a call will accept a lambda function to implement core business functionality within the scope of the called method. Although — as stated — this is primarily syntactical sugar, it has made method definitions so simple that many more calls take optional lambdas to provide convenient extensions.
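A small illustration of the shorthand: both declarations below implement the single abstract method of Comparator, the second in lambda form.
import java.util.Comparator;
import java.util.List;

public class LambdaExample {
    public static void main(String[] args) {
        // pre-Java 8: anonymous class implementing the one abstract method
        Comparator<String> byLength = new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        };

        // Java 8: the same implementation expressed as a lambda
        Comparator<String> byLengthLambda = (a, b) -> Integer.compare(a.length(), b.length());

        System.out.println(List.of("bb", "a", "ccc").stream().sorted(byLength).toList());
        System.out.println(List.of("bb", "a", "ccc").stream().sorted(byLengthLambda).toList());
    }
}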
12. Key Frameworks
In this section I am going to list a limited set of key Java framework highlights. In following the primarily Java path for enterprise frameworks, you will see a remarkable change over the years.
12.1. CGI Scripts
The Common Gateway Interface (CGI) was the cornerstone web framework when Java started coming onto the scene. [7] CGI was created in 1993 and, for the most part, was a framework for accepting HTTP calls, serving up static content and calling scripts to return dynamic content results. [8]
The important parts to remember are that CGI was 100% stateless relative to backend resources, and that each dynamic script called was a new, heavyweight operating system process with a new connection to the database. Java programs were shoehorned into this framework as scripts.
12.2. JavaEE
Jakarta EE, formerly the Java Platform, Enterprise Edition (JavaEE) and Java 2 Platform, Enterprise Edition (J2EE), is a framework that extends the Java Platform, Standard Edition (Java SE) with end-to-end web-to-database functionality and more. [9] Focusing only on the web and database portions here, JakartaEE provided a means to invoke dynamic scripts — written in Java — within a process thread using cached database connections.
The initial versions of Jakarta EE aimed big. Everything was a large problem and nothing could be done simply. It was viewed as being overly complex for most users. Spring was formed initially as a means to make J2EE simpler and ended up soon being an independent framework of its own.
J2EE was first released in 1999 and guided by Sun Microsystems. The Servlet portion was likely the most successful part of the early releases. The Enterprise Java Beans (EJB) portion was not realistically usable until JavaEE 5 / post 2006. By then, frameworks like Spring had taken hold of the target community.
In 2010, Oracle purchased Sun Microsystems and with it control of both JavaSE and JavaEE, which seemed to progress but on a slow path. By JavaEE 8 in 2017, the framework had become very Spring-like with its POJO-based design. In 2017, Oracle transferred ownership of JavaEE to the Eclipse Foundation, where it became Jakarta EE. The framework seems to have paused for a while for naming changes and compatibility releases. [9]
12.3. Spring
Spring 1.0 was released in 2004 and was an offshoot of a book written by Rod Johnson "Expert One-on-One J2EE Design and Development" that was originally meant to explain how to be successful with J2EE. [10]
In a nutshell, Rod Johnson and the other designers of Spring thought that rather than starting with a large architecture like J2EE, one should start with a simple bean and scale up from there without boundaries. Small Spring applications were quickly achieved and gave birth to other frameworks like the Hibernate persistence framework (first released in 2003) which significantly influenced the EJB3/JPA standard. [11]
12.4. Jakarta Persistence API (JPA)
The Jakarta Persistence API (JPA), formerly the Java Persistence API, was developed as a part of the JavaEE community and provided a framework definition for persisting objects in a relational database. JPA fully replaced the original EJB Entity Beans standards of earlier releases. It has an API, provider, and user extensions. [12] The main drivers of JPA were EclipseLink (formerly TopLink from Oracle) and Hibernate.
Frameworks should be based on the skeletons of successful implementations
Early EJB Entity Bean standards (< 3) were not thought to have been based on successful implementations. The persistence framework failed to deliver, was modified with each major release, and was eventually replaced by something that formed from industry successes. |
JPA has been a wildly productive API. It provides simple API access and many extension points for DB/SQL-aware developers to supply more efficient implementations. JPA’s primary downside is likely that it allows Java developers to develop persistent objects without thinking of database concerns first. One could hardly blame that on the framework.
12.5. Spring Data
Spring Data is a data access framework centered around a core data object and its primary key — which is very synergistic with Domain-Driven Design (DDD) Aggregate and Repository concepts. [13]
- Persistence models like JPA allow relationships to be defined to infinity and beyond.
- In DDD the persisted object has a firm boundary and only IDs are allowed to be expressed when crossing those boundaries.
- These DDD boundary concepts are very consistent with the development of microservices — where large transactional monoliths are broken down into eventually consistent smaller services.
By limiting the scope of the data object relationships, Spring has been able to automatically define an extensive CRUD (Create, Read, Update, and Delete), query, and extension framework for persisted objects on multiple storage mechanisms.
We will be working with Spring Data JPA and Spring Data Mongo in this class. With the bounding DDD concepts, the two frameworks have an amazing amount of API synergy between them.
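The resulting programming model is compact. Below is a hedged sketch of a Spring Data JPA repository; the Song entity and finder method are hypothetical, while JpaRepository and name-derived queries are real framework features.
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

@Entity
class Song {              // a hypothetical aggregate root with a firm boundary
    @Id
    private Long id;      // only the ID crosses the boundary to other aggregates
    private String artist;
}

interface SongRepository extends JpaRepository<Song, Long> {  // <entity, primary key type>
    List<Song> findByArtist(String artist);  // CRUD is inherited; this query is derived from the method name
}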
12.6. Spring Boot
Spring Boot was first released in 2014. Rather than take the "build anything you want, any way you want" approach in Spring, Spring Boot provides a framework for providing an opinionated view of how to build applications. [14]
- By adding a dependency, a default implementation is added with "sane" defaults.
- By setting a few properties, defaults are customized to your desired settings.
- By defining a few beans, you can override the default implementations with local choices.
There is no external container in Spring Boot. Everything gets boiled down to an executable JAR and launched by a simple Java main (and a lot of other intelligent code).
Our focus will be on Spring Boot, Spring, and lower-level Spring and external frameworks.
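A minimal sketch of that launch point (the class name is hypothetical; the annotation and launcher are the actual Spring Boot API):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication  // enables component scanning and the auto-configured "sane" defaults
public class ExampleApp {
    public static void main(String[] args) {
        SpringApplication.run(ExampleApp.class, args);  // the simple Java main that launches everything
    }
}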
13. Summary
In this module we:
- identified the key differences between a library and framework
- identified the purpose for a framework in solving an application solution
- identified the key concepts that enable a framework
- identified specific constructs that have enabled the advance of frameworks
- identified key Java frameworks that have evolved over the years
Pure Java Main Application
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
14. Introduction
This material provides an introduction to building a bare bones Java application using a single, simple Java class, packaging that in a Java ARchive (JAR), and executing it two ways:
- as a class in the classpath
- as the Main-Class of a JAR
14.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
- create source code for an executable Java class
- add that Java class to a Maven module
- build the module using a Maven pom.xml
- execute the application using a classpath
- configure the application as an executable JAR
- execute an application packaged as an executable JAR
15. Simple Java Class with a Main
Our simple Java application starts with a public class with a static main() method that optionally accepts command-line arguments from the caller:
package info.ejava.examples.app.build.javamain;
import java.util.List;
public class SimpleMainApp { (1)
public static final void main(String...args) { (2) (3)
System.out.println("Hello " + List.of(args));
}
}
1 | public class |
2 | implements a static main() method |
3 | optionally accepts arguments |
16. Project Source Tree
This class is placed within a module source tree in the
src/main/java
directory below a set of additional directories (info/ejava/examples/app/build/javamain
)
that match the Java package name of the class (info.ejava.examples.app.build.javamain
)
|-- pom.xml (1)
`-- src
|-- main (2)
| |-- java
| | `-- info
| | `-- ejava
| | `-- examples
| | `-- app
| | `-- build
| | `-- javamain
| | `-- SimpleMainApp.java
| `-- resources (3)
`-- test (4)
|-- java
`-- resources
1 | pom.xml will define our project artifact and how to build it |
2 | src/main will contain the pre-built, source form of our artifacts that will be part of our primary JAR output for the module |
3 | src/main/resources is commonly used for property files or other resource files read in during the program execution |
4 | src/test will contain the pre-built, source form of our test artifacts. These will not be part of the primary JAR output for the module |
17. Building the Java Archive (JAR) with Maven
In setting up the build within Maven, I am going to limit the focus to just compiling our simple Java class and packaging that into a standard Java JAR.
17.1. Add Core pom.xml Document
Add the core document with required GAV information (groupId, artifactId, version) to the pom.xml file at the root of the module tree. Packaging is also required but will have a default of jar if not supplied.
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>info.ejava.examples.app</groupId> (1)
<artifactId>java-app-example</artifactId> (2)
<version>6.0.1-SNAPSHOT</version> (3)
<packaging>jar</packaging> (4)
</project>
1 | groupId |
2 | artifactId |
3 | version |
4 | packaging |
Module directory should be the same name/spelling as artifactId to align with default directory naming patterns used by plugins. |
Packaging is optional in this case; the default is jar. |
17.2. Add Optional Elements to pom.xml
- name
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>info.ejava.examples.app</groupId>
<artifactId>java-app-example</artifactId>
<version>6.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>App::Build::Java Main Example</name> (1)
</project>
1 | name appears in Maven build output but not required |
17.3. Define Plugin Versions
Define plugin versions so the module can be deterministically built in multiple environments:
- Each version of Maven has a set of default plugins and plugin versions
- Each plugin version may or may not have a set of defaults (e.g., not Java 17) that are compatible with our module
<properties>
<java.target.version>17</java.target.version>
<maven-compiler-plugin.version>3.10.1</maven-compiler-plugin.version>
<maven-jar-plugin.version>3.2.2</maven-jar-plugin.version>
</properties>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven-compiler-plugin.version}</version>
<configuration>
<release>${java.target.version}</release>
</configuration>
</plugin>
</plugins>
</pluginManagement>
The jar packaging will automatically activate the maven-compiler-plugin and maven-jar-plugin. Our definition above identifies the version of the plugin to be used (if used) and any desired configuration of the plugin(s).
17.4. pluginManagement vs. plugins
- Use pluginManagement to define a plugin if it is activated in the module build
  - useful to promote consistency in multi-module builds
  - commonly seen in parent modules
- Use plugins to declare that a plugin be active in the module build
  - ideally only used by child modules
  - our child module indirectly activated several plugins by using the jar packaging type
18. Build the Module
Maven modules are commonly built with the following commands/phases:
- clean removes previously built artifacts
- package creates primary artifact(s) (e.g., JAR)
  - processes main and test resources
  - compiles main and test classes
  - runs unit tests
  - builds the archive
$ mvn clean package
[INFO] Scanning for projects...
[INFO]
[INFO] --------------< info.ejava.examples.app:java-app-example >--------------
[INFO] Building App::Build::Java App Example 6.0.1-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:3.2.0:clean (default-clean) @ java-app-example ---
[INFO] Deleting .../java-app-example/target
[INFO]
...
[INFO] --- maven-resources-plugin:3.2.0:resources (default-resources) @ java-app-example ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Using 'UTF-8' encoding to copy filtered properties files.
[INFO] Copying 0 resource
[INFO]
...
[INFO] --- maven-compiler-plugin:3.10.1:compile (default-compile) @ java-app-example ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to .../java-app-example/target/classes
[INFO]
[INFO] --- maven-resources-plugin:3.2.0:testResources (default-testResources) @ java-app-example ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Using 'UTF-8' encoding to copy filtered properties files.
[INFO] Copying 0 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.10.1:testCompile (default-testCompile) @ java-app-example ---
[INFO] Changes detected - recompiling the module!
[INFO]
[INFO] --- maven-surefire-plugin:3.0.0-M7:test (default-test) @ java-app-example ---
[INFO]
[INFO] --- maven-jar-plugin:3.2.2:jar (default-jar) @ java-app-example ---
[INFO] Building jar: .../java-app-example/target/java-app-example-6.0.1-SNAPSHOT.jar
[INFO]
...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.428 s
19. Project Build Tree
The produced build tree from mvn clean package contains the following key artifacts (and more):
|-- pom.xml
|-- src
`-- target
|-- classes (1)
| `-- info
| `-- ejava
| `-- examples
| `-- app
| `-- build
| `-- javamain
| `-- SimpleMainApp.class
...
|-- java-app-example-6.0.1-SNAPSHOT.jar (2)
...
`-- test-classes (3)
1 | target/classes for built artifacts from src/main |
2 | primary artifact(s) (e.g., Java Archive (JAR)) |
3 | target/test-classes for built artifacts from src/test |
20. Resulting Java Archive (JAR)
Maven adds a few extra files to the META-INF directory that we can ignore. The key files we want to focus on are:
- SimpleMainApp.class is the compiled version of our application
- META-INF/MANIFEST.MF (https://docs.oracle.com/javase/tutorial/deployment/jar/manifestindex.html) contains properties relevant to the archive
$ jar tf target/java-app-example-*-SNAPSHOT.jar | egrep -v "/$" | sort
META-INF/MANIFEST.MF
META-INF/maven/info.ejava.examples.app/java-app-example/pom.properties
META-INF/maven/info.ejava.examples.app/java-app-example/pom.xml
info/ejava/examples/app/build/javamain/SimpleMainApp.class
21. Execute the Application
The application is executed by
- invoking the java command
- adding the JAR file (and any other dependencies) to the classpath
- specifying the fully qualified class name of the class that contains our main() method
$ java -cp target/java-app-example-*-SNAPSHOT.jar info.ejava.examples.app.build.javamain.SimpleMainApp
Output:
Hello []
$ java -cp target/java-app-example-*-SNAPSHOT.jar info.ejava.examples.app.build.javamain.SimpleMainApp arg1 arg2 "arg3 and 4"
Output:
Hello [arg1, arg2, arg3 and 4]
- example passed three (3) arguments separated by spaces
- third argument (arg3 and 4) used quotes around the entire string to escape the spaces and have them included in the single parameter
22. Configure Application as an Executable JAR
Executing a specific Java class within a classpath is conceptually simple. However, it requires the caller to know more than necessary when there may be only a single entry point. In the following sections we will assign a default Main-Class using the MANIFEST.MF properties.
22.1. Add Main-Class property to MANIFEST.MF
$ unzip -qc target/java-app-example-*-SNAPSHOT.jar META-INF/MANIFEST.MF
Manifest-Version: 1.0
Created-By: Maven JAR Plugin 3.2.2
Build-Jdk-Spec: 17
Main-Class: info.ejava.examples.app.build.javamain.SimpleMainApp
22.2. Automate Additions to MANIFEST.MF using Maven
One way to surgically add that property is thru the maven-jar-plugin
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>${maven-jar-plugin.version}</version>
<configuration>
<archive>
<manifest>
<mainClass>info.ejava.examples.app.build.javamain.SimpleMainApp</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
This is a very specific plugin configuration that would only apply to a specific child module.
Therefore, we would place this in the plugins section of that child module rather than in a shared parent. |
23. Execute the JAR versus just adding to classpath
The executable JAR is executed by
- invoking the java command
- adding the -jar option
- adding the JAR file (and any other dependencies) to the classpath
$ java -jar target/java-app-example-*-SNAPSHOT.jar
Output:
Hello []
$ java -jar target/java-app-example-*-SNAPSHOT.jar one two "three and four"
Output:
Hello [one, two, three and four]
- example passed three (3) arguments separated by spaces
- third argument (three and four) used quotes around the entire string to escape the spaces and have them included in the single parameter
24. Configure pom.xml to Test
At this point we are ready to create an automated execution of our JAR as a part of the build. We have to do that after the package phase and will leverage the integration-test Maven phase.
<build>
...
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId> (1)
<executions>
<execution>
<id>execute-jar</id>
<phase>integration-test</phase> (4)
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<java fork="true" classname="info.ejava.examples.app.build.javamain.SimpleMainApp"> (2)
<classpath>
<pathelement path="${project.build.directory}/${project.build.finalName}.jar"/>
</classpath>
<arg value="Ant-supplied java -cp"/>
<arg value="Command Line"/>
<arg value="args"/>
</java>
<java fork="true"
jar="${project.build.directory}/${project.build.finalName}.jar"> (3)
<arg value="Ant-supplied java -jar"/>
<arg value="Command Line"/>
<arg value="args"/>
</java>
</tasks>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
1 | Using the maven-ant-run plugin to execute Ant task |
2 | Using the java Ant task to execute shell java -cp command line |
3 | Using the java Ant task to execute shell java -jar command line |
4 | Running the plugin during the integration-test phase |
24.1. Execute JAR as part of the build
$ mvn clean verify
[INFO] Scanning for projects...
[INFO]
[INFO] -------------< info.ejava.examples.app:java-app-example >--------------
...
[INFO] --- maven-jar-plugin:3.2.2:jar (default-jar) @ java-app-example -(1)
[INFO] Building jar: .../java-app-example/target/java-app-example-6.0.1-SNAPSHOT.jar
[INFO]
...
[INFO] --- maven-antrun-plugin:3.1.0:run (execute-jar) @ java-app-example ---
[INFO] Executing tasks (2)
[INFO] [java] Hello [Ant-supplied java -cp, Command Line, args]
[INFO] [java] Hello [Ant-supplied java -jar, Command Line, args]
[INFO] Executed tasks
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
1 | Our plugin is executing |
2 | Our application was executed and the results displayed |
25. Summary
-
The JVM will execute the static
main()
method of the class specified in the java command -
The class must be in the JVM classpath
-
Maven can be used to build a JAR with classes
-
A JAR can be the subject of a java execution
-
The Java
META-INF/MANIFEST.MF
Main-Class
property within the target JAR can express the class with the main()
method to execute -
The maven-jar-plugin can be used to add properties to the
META-INF/MANIFEST.MF
file -
A Maven build can be configured to execute a JAR
Simple Spring Boot Application
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
26. Introduction
This material makes the transition from creating and executing a simple Java main application to a Spring Boot application.
26.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
extend the standard Maven
jar
module packaging type to include core Spring Boot dependencies -
construct a basic Spring Boot application
-
build and execute an executable Spring Boot JAR
-
define a simple Spring component and inject that into the Spring Boot application
27. Spring Boot Maven Dependencies
Spring Boot provides a spring-boot-starter-parent
(gradle source,
pom.xml) pom that can be used as a parent pom for our Spring Boot modules.
[15]
This defines version information for dependencies and plugins for building Spring Boot artifacts — along with an opinionated view of how the module should be built.
spring-boot-starter-parent
inherits from a spring-boot-dependencies
(gradle source,
pom.xml)
pom that provides a definition of artifact versions without an opinionated view of how the module is built.
This pom can be imported by modules that already inherit from a local Maven parent — which would be common.
This is the approach we will demonstrate here. We will also demonstrate how the build constructs are commonly spread across parent and local poms.
Spring Boot has converted over to gradle and posts a pom version of the gradle artifact to Maven central repository as a part of their build process. |
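For contrast, a minimal sketch of the inheritance approach for modules that have no local parent to extend (the version shown is illustrative):
<!-- child pom.xml: inheriting the opinionated Spring Boot parent directly -->
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.7.0</version>
    <relativePath/> <!-- resolve the parent from the repository, not the filesystem -->
</parent>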
28. Parent POM
We are likely to create multiple Spring Boot modules and would be well-advised to begin by creating a local parent pom construct to house the common passive definitions. By passive definitions (versus active declarations), I mean definitions for the child poms to use if needed versus mandated declarations for each child module. For example, a parent pom may define the JDBC driver to use when needed but not all child modules will need a JDBC driver nor a database for that matter. In that case, we do not want the parent pom to actively declare a dependency. We just want the parent to passively define the dependency that the child can optionally choose to actively declare. This construct promotes consistency among all of the modules.
"Root"/parent poms should define dependencies and plugins for consistent re-use among child poms and use dependencyManagement and pluginManagement elements to do so. |
"Child"/concrete/leaf poms declare dependencies and plugins to be used when building that module and try to keep dependencies to a minimum. |
"Prototype" poms are a blend of root and child pom concepts. They are a nearly-concrete, parent pom that can be extended by child poms but actively declare a select set of dependencies and plugins to allow child poms to be as terse as possible. |
28.1. Define Version for Spring Boot artifacts
Define the version for Spring Boot artifacts to use. I am using a technique below of defining the value in a property so that it is easy to locate and change as well as re-use elsewhere if necessary.
# Place this declaration in an inherited parent pom
<properties>
<springboot.version>2.7.0</springboot.version> (1)
</properties>
1 | default value has been declared in imported ejava-build-bom |
Property values can be overruled at build time by supplying a system property on the command line "-D(name)=(value)" |
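For example, a hypothetical override of the Spring Boot version for a single build:
$ mvn clean package -Dspringboot.version=2.7.1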
28.2. Import springboot-dependencies-plugin
Import springboot-dependencies-plugin
. This will define dependencyManagement
for us for many artifacts that are relevant to our Spring Boot development.
# Place this declaration in an inherited parent pom
<dependencyManagement> (1)
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-dependencies</artifactId>
<version>${springboot.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
1 | import is within examples-root for class examples, which is a grandparent of this example |
29. Local Child/Leaf Module POM
The local child module pom.xml is where the module is physically built. Although Maven modules can have multiple levels of inheritance (each level being a child of its parent), the child module I am referring to here is the leaf module where the artifacts are actually built. Everything defined above it is primarily used as a common definition (thru dependencyManagement and pluginManagement) to simplify the child pom.xml and to promote consistency among sibling modules. It is the job of the leaf module to activate the definitions that are appropriate for the type of module being built.
29.2. Declare dependency on artifacts used
Realize the parent definition of the spring-boot-starter
dependency by declaring
it within the child dependencies section.
For where we are in this introduction, only the above dependency will be necessary.
The imported spring-boot-dependencies
will take care of declaring the version#
# Place this declaration in the child/leaf pom building the JAR archive
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
<!--version --> (1)
</dependency>
</dependencies>
1 | parent has defined (using import in this case) the version for all children to consistently use |
The figure below shows the parent poms being the source of the passive dependency definitions and the child being the source of the active dependency declarations.
-
the parent is responsible for defining the version# for dependencies used
-
the child is responsible for declaring what dependencies are needed and adopts the parent version definition
An upgrade to a future dependency version should not require a change of a child module declaration if this pattern is followed.
30. Simple Spring Boot Application Java Class
With the necessary dependencies added to our build classpath, we now have enough to begin defining a simple Spring Boot Application.
package info.ejava.examples.app.build.springboot;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication (3)
public class SpringBootApp {
public static final void main(String...args) { (1)
System.out.println("Running SpringApplication");
SpringApplication.run(SpringBootApp.class, args); (2)
System.out.println("Done SpringApplication");
}
}
1 | Define a class with a static main() method |
2 | Initiate Spring application bootstrap by invoking SpringApplication.run()
and passing a) application class and b) args passed into main() |
3 | Annotate the class with @SpringBootApplication |
Startup can, of course, be customized (e.g., changing the printed banner, registering event listeners) |
30.1. Module Source Tree
The source tree will look similar to our previous Java main example.
|-- pom.xml
`-- src
|-- main
| |-- java
| | `-- info
| | `-- ejava
| | `-- examples
| | `-- app
| | `-- build
| | `-- springboot
| | `-- SpringBootApp.java
| `-- resources
`-- test
|-- java
`-- resources
31. Spring Boot Executable JAR
At this point we could likely execute the Spring Boot Application within the IDE. Instead, let's go back to the pom and construct a JAR file to be able to execute the application from the command line.
31.1. Building the Spring Boot Executable JAR
We saw earlier how we could build a standard executable JAR using the maven-jar-plugin
.
However, there were some limitations to that approach — especially the fact that a standard Java JAR cannot house dependencies to form a self-contained classpath and Spring Boot will need additional JARs to complete the application bootstrap.
Spring Boot uses a custom executable JAR format that can be built with the aid of the
spring-boot-maven-plugin.
Let's extend our pom.xml file to enhance the standard JAR to be a Spring Boot executable JAR.
31.1.1. Declare spring-boot-maven-plugin
The following snippet shows the configuration for a spring-boot-maven-plugin
that defines a default execution to build the Spring Boot executable JAR for all child modules that declare using it.
In addition to building the Spring Boot executable JAR, we are setting up a standard in the parent for all children to have their follow-on JAR classified separately as a bootexec
.
classifier
is a core Maven construct and is meant to label sibling artifacts to the original Java JAR for the module.
Other types of classifiers
are source
, schema
, javadoc
, etc.
bootexec
is a value we made up.
By default, the repackage
goal would have replaced the Java JAR with the Spring Boot executable JAR.
That would have left an ambiguous JAR artifact in the repository — we would not easily know its JAR type.
This will help eliminate dependency errors during the semester when we layer N+1
assignments on top of layer N
.
Only standard Java JARs can be used in classpath dependencies.
<properties>
<spring-boot.classifier>bootexec</spring-boot.classifier>
</properties>
...
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<classifier>${spring-boot.classifier}</classifier> (4)
</configuration>
<executions>
<execution>
<id>build-app</id> (1)
<phase>package</phase> (2)
<goals>
<goal>repackage</goal> (3)
</goals>
</execution>
</executions>
</plugin>
...
</plugins>
</build>
1 | id used to describe execution and required when having more than one |
2 | phase identifies the Maven lifecycle phase in which this plugin goal runs |
3 | repackage identifies the goal to execute within the spring-boot-maven-plugin |
4 | adds a -bootexec to the executable JAR’s name |
We can do much more with the spring-boot-maven-plugin
on a per-module basis (e.g., run the application from within Maven).
We are just starting at construction at this point.
31.1.2. Build the JAR
$ mvn clean package
[INFO] Scanning for projects...
...
[INFO] --- maven-jar-plugin:3.2.2:jar (default-jar) @ springboot-app-example ---
[INFO] Building jar: .../target/springboot-app-example-6.0.1-SNAPSHOT.jar (1)
[INFO]
[INFO] --- spring-boot-maven-plugin:2.7.0:repackage (build-app) @ springboot-app-example ---
[INFO] Attaching repackaged archive .../target/springboot-app-example-6.0.1-SNAPSHOT-bootexec.jar with classifier bootexec (2)
1 | standard Java JAR is built by the maven-jar-plugin |
2 | standard Java JAR is augmented by the spring-boot-maven-plugin |
31.2. Java MANIFEST.MF properties
The spring-boot-maven-plugin
augmented the standard JAR by adding a few properties to the MANIFEST.MF file
$ unzip -qc target/springboot-app-example-6.0.1-SNAPSHOT-bootexec.jar META-INF/MANIFEST.MF
Manifest-Version: 1.0
Created-By: Maven JAR Plugin 3.2.2
Build-Jdk-Spec: 17
Main-Class: org.springframework.boot.loader.JarLauncher (1)
Start-Class: info.ejava.examples.app.build.springboot.SpringBootApp (2)
Spring-Boot-Version: 2.7.0
Spring-Boot-Classes: BOOT-INF/classes/
Spring-Boot-Lib: BOOT-INF/lib/
Spring-Boot-Classpath-Index: BOOT-INF/classpath.idx
Spring-Boot-Layers-Index: BOOT-INF/layers.idx
1 | Main-Class was set to a Spring Boot launcher |
2 | Start-Class was set to the class we defined with @SpringBootApplication |
31.3. JAR size
Notice that the size of the Spring Boot executable JAR is significantly larger than the original standard JAR.
$ ls -lh target/*jar* | grep -v sources | cut -d\ -f9-99
8.4M Aug 28 15:19 target/springboot-app-example-6.0.1-SNAPSHOT-bootexec.jar (2)
4.1K Aug 28 15:19 target/springboot-app-example-6.0.1-SNAPSHOT.jar (1)
1 | The original Java JAR with Spring Boot annotations was 4.1KB |
2 | The Spring Boot JAR is 8.4MB |
31.4. JAR Contents
Unlike WARs, a standard Java JAR does not provide a standard way to embed dependency JARs. A common approach to embedding dependencies within a single JAR is a "shaded" JAR, where all dependency JARs are unwound and repackaged as a single "uber" JAR (see the sketch after this list).
-
positives
-
works
-
follows standard Java JAR constructs
-
-
negatives
-
obscures contents of the application
-
problem if multiple source JARs use files with same path/name
-
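For reference, a sketch of the shaded approach using the maven-shade-plugin; the version and main class are illustrative and this mechanism is not used by these examples:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.4</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- writes Main-Class into the uber JAR's manifest -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>info.ejava.examples.app.build.javamain.SimpleMainApp</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>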
Spring Boot instead creates a custom WAR-like structure
BOOT-INF/classes/info/ejava/examples/app/build/springboot/AppCommand.class
BOOT-INF/classes/info/ejava/examples/app/build/springboot/SpringBootApp.class (3)
BOOT-INF/lib/javax.annotation-api-1.3.2.jar (2)
...
BOOT-INF/lib/spring-boot-2.7.0.jar
BOOT-INF/lib/spring-context-5.3.20.jar
BOOT-INF/lib/spring-beans-5.3.20.jar
BOOT-INF/lib/spring-core-5.3.20.jar
...
META-INF/MANIFEST.MF
META-INF/maven/info.ejava.examples.app/springboot-app-example/pom.properties
META-INF/maven/info.ejava.examples.app/springboot-app-example/pom.xml
org/springframework/boot/loader/ExecutableArchiveLauncher.class (1)
org/springframework/boot/loader/JarLauncher.class
...
org/springframework/boot/loader/util/SystemPropertyUtils.class
1 | Spring Boot loader classes hosted at the root / |
2 | Dependency JARs hosted in /BOOT-INF/lib |
3 | Local application classes hosted in /BOOT-INF/classes |
Spring Boot can also use a standard WAR structure — to be deployed to a web server.
|
31.5. Execute Command Line
springboot-app-example$ java -jar target/springboot-app-example-6.0.1-SNAPSHOT-bootexec.jar (1)
Running SpringApplication (2)
. ____ _ __ _ _ (3)
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.7.0)
2019-12-04 09:01:03.014 INFO 1287 --- [main] i.e.e.a.build.springboot.SpringBootApp: \
Starting SpringBootApp on Jamess-MBP with PID 1287 (.../springboot-app-example/target/springboot-app-example-6.0.1-SNAPSHOT.jar \
started by jim in .../springboot-app-example)
2019-12-04 09:01:03.017 INFO 1287 --- [main] i.e.e.a.build.springboot.SpringBootApp: \
No active profile set, falling back to default profiles: default
2019-12-04 09:01:03.416 INFO 1287 --- [main] i.e.e.a.build.springboot.SpringBootApp: \
Started SpringBootApp in 0.745 seconds (JVM running for 1.13)
Done SpringApplication (4)
1 | Execute the JAR using the java -jar command |
2 | Main executes and passes control to SpringApplication |
3 | Spring Boot bootstrap is started |
4 | SpringApplication terminates and returns control to our main() |
32. Add a Component to Output Message and Args
We have a lot of capability embedded in our current Spring Boot executable JAR that is there to bootstrap the application by looking around for components to activate. Let's explore this capability with a simple class that will take over the responsibility for outputting a message with the arguments to the program.
We want this class found by Spring’s application startup processing, so we will:
// AppCommand.java
package info.ejava.examples.app.build.springboot; (2)
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import java.util.List;
@Component (1)
public class AppCommand implements CommandLineRunner {
public void run(String... args) throws Exception {
System.out.println("Component code says Hello " + List.of(args));
}
}
1 | Add a @Component annotation on the class |
2 | Place the class in a Java package configured to be scanned |
32.1. @Component Annotation
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
@Component
public class AppCommand implements CommandLineRunner {
Classes can be configured to have their instances managed by Spring. Class annotations
can be used to express the purpose of a class and to trigger Spring into managing them
in specific ways. The most generic form of component annotation is @Component
.
Others will include @Repository
, @Controller
, etc. Classes directly annotated with
a @Component
(or other such annotation) indicate that Spring can instantiate instances
of this class with no additional assistance from a @Bean
factory.
32.2. Interface: CommandLineRunner
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
@Component
public class AppCommand implements CommandLineRunner {
public void run(String... args) throws Exception {
}
}
-
Components implementing CommandLineRunner interface get called after application initialization
-
Program arguments are passed to the
run()
method -
Can be used to perform one-time initialization at start-up
-
Alternative Interface: ApplicationRunner
-
Components implementing ApplicationRunner are also called after application initialization
-
Program arguments passed to its
run()
method are wrapped in the ApplicationArguments convenience class (see the sketch after this list)
-
Component startup can be ordered with the @Order annotation. |
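A minimal sketch of the ApplicationRunner alternative; the class name is hypothetical and the package is assumed from this example:
package info.ejava.examples.app.build.springboot;

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

@Component
public class AppRunnerCommand implements ApplicationRunner {
    @Override
    public void run(ApplicationArguments args) throws Exception {
        // --name=value arguments are parsed into options; bare values are non-option args
        System.out.println("options=" + args.getOptionNames());
        System.out.println("non-options=" + args.getNonOptionArgs());
    }
}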
32.3. @ComponentScan Tree
By default, the @SpringBootApplication
annotation configures Spring to look at
and below the Java package of our SpringBootApp class. I chose to place this
component class in the same Java package as the application class.
@SpringBootApplication
// @ComponentScan
// @SpringBootConfiguration
// @EnableAutoConfiguration
public class SpringBootApp {
}
src/main/java
`-- info
    `-- ejava
        `-- examples
            `-- app
                `-- build
                    `-- springboot
                        |-- AppCommand.java
                        `-- SpringBootApp.java
33. Running the Spring Boot Application
$ java -jar target/springboot-app-example-6.0.1-SNAPSHOT-bootexec.jar
Running SpringApplication (1)
. ____ _ __ _ _ (2)
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.7.0)
2019-09-06 15:56:45.666 INFO 11480 --- [ main] i.e.s.examples.app.SpringBootApp
: Starting SpringBootApp on Jamess-MacBook-Pro.local with PID 11480 (.../target/springboot-app-example-6.0.1-SNAPSHOT.jar ...)
2019-09-06 15:56:45.668 INFO 11480 --- [ main] i.e.s.examples.app.SpringBootApp
: No active profile set, falling back to default profiles: default
2019-09-06 15:56:46.146 INFO 11480 --- [ main] i.e.s.examples.app.SpringBootApp
: Started SpringBootApp in 5.791 seconds (JVM running for 6.161) (3)
Component code says Hello [] (4) (5)
Done SpringApplication (6)
1 | Our SpringBootApp.main() is called and logs Running SpringApplication |
2 | SpringApplication.run() is called to execute the Spring Boot application |
3 | Our AppCommand component is found within the classpath at or under the package declaring @SpringBootApplication |
4 | The AppCommand component run() method is called and it prints out a message |
5 | The Spring Boot application terminates |
6 | Our SpringBootApp.main() logs Done SpringApplication and exits |
33.1. Implementation Note
I added print statements directly in the Spring Boot Application’s main() method
to help illustrate when calls were made. This output could have been packaged into listener
callbacks to leave the main() method implementation free — except to register the callbacks.
If you happen to need more complex behavior to fire before the Spring context begins initialization,
then look to add
listeners
of the SpringApplication instead. |
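As a sketch of that listener-based alternative, the same messages could be registered against events published by Spring Boot (ApplicationReadyEvent is one of several):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationListener;

@SpringBootApplication
public class SpringBootApp {
    public static void main(String... args) {
        SpringApplication app = new SpringApplication(SpringBootApp.class);
        // called once the application is started and ready -- keeps main() free
        app.addListeners((ApplicationListener<ApplicationReadyEvent>) event ->
                System.out.println("Done SpringApplication"));
        app.run(args);
    }
}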
34. Configure pom.xml to Test
At this point we are again ready to set up an automated execution of our JAR as a part of the build.
We can do that by adding a separate goal execution of the spring-boot-maven-plugin
.
<build>
...
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<executions>
<execution>
<id>run-application</id> (1)
<phase>integration-test</phase>
<goals>
<goal>run</goal>
</goals>
<configuration> (2)
<arguments>Maven,plugin-supplied,args</arguments>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
1 | new execution of the run goal to be performed during the Maven integration-test phase |
2 | command line arguments passed to main |
34.1. Execute JAR as part of the build
$ mvn clean verify
[INFO] Scanning for projects...
...
[INFO] --- spring-boot-maven-plugin:2.7.0:run (run-application) @ springboot-app-example ---
[INFO] Attaching agents: [] (1)
Running SpringApplication
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.7.0)
2022-07-02 14:11:46.110 INFO 48432 --- [ main] i.e.e.a.build.springboot.SpringBootApp : Starting SpringBootApp using Java 17.0.3 on Jamess-MacBook-Pro.local with PID 48432 (.../springboot-app-example/target/classes started by jim in .../springboot-app-example)
2022-07-02 14:11:46.112 INFO 48432 --- [ main] i.e.e.a.build.springboot.SpringBootApp : No active profile set, falling back to 1 default profile: "default"
2022-07-02 14:11:46.463 INFO 48432 --- [ main] i.e.e.a.build.springboot.SpringBootApp : Started SpringBootApp in 0.611 seconds (JVM running for 0.87)
Component code says Hello [Maven, plugin-supplied, args] (2)
Done SpringApplication
1 | Our plugin is executing |
2 | Our application was executed and the results displayed |
35. Summary
As a part of this material, the student has learned how to:
-
Add Spring Boot constructs and artifact dependencies to the Maven POM
-
Define Application class with a main() method
-
Annotate the application class with @SpringBootApplication (and optionally use lower-level annotations)
-
Place the application class in a Java package that is at or above the Java packages with beans that will make-up the core of your application
-
Add component classes that are core to your application to your Maven module
-
Typically define components in a Java package that is at or below the Java package for the
SpringBootApplication
-
Annotate components with @Component (or other special-purpose annotations used by Spring)
-
Execute application like a normal executable JAR
Home Sales Assignment 0
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
The following makes up "Assignment 0". It is intended to get you started developing right away, communicating questions/answers, and turning something in with some of the basics.
As with most assignments, a set of starter projects is available in assignment-starter/homesales-starter
.
It is expected that you can implement the complete assignment on your own.
However, the Maven poms and the portions unrelated to the assignment focus are commonly provided for reference to keep the focus on each assignment part.
Your submission should not be a direct edit/hand-in of the starters.
Your submission should — at a minimum:
-
use your own Maven groupIds
-
change the "starter" portion of the provided groupId to a name unique to you
Change: <groupId>info.ejava-student.starter.assignments.projectName</groupId>
To: <groupId>info.ejava-student.[your-value].assignments.projectName</groupId>
-
-
use your own Maven descriptive name
-
change the "Starter" portion of the provided name to a name unique to you
Change: <name>Starter::Assignments::ProjectName</name>
To: <name>[Your Value]::Assignments::ProjectName</name>
-
-
use your own Java package names
-
change the "starter" portion of the provided package name to a name unique to you
Change: package info.ejava_student.starter.assignment1.app.build;
To: package info.ejava_student.[your_value].assignment1.app.build;
-
-
extend either
spring-boot-starter-parent
or ejava-build-parent
The following diagram depicts the 3 modules (parent, javaapp, and bootapp) you will turn in. You will inherit or depend on external artifacts that will be supplied via Maven.
36. Part A: Build Pure Java Application JAR
36.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of building a module containing a pure Java application. You will:
-
create source code for an executable Java class
-
add that Java class to a Maven module
-
build the module using a Maven pom.xml
-
execute the application using a classpath
-
configure the application as an executable JAR
-
execute an application packaged as an executable JAR
36.2. Overview
In this portion of the assignment you are going to implement a JAR with a Java main class and execute it.
36.3. Requirements
-
Create a Maven project that will host a Java program
-
Supply a single Java class with a main() method that will print a single "Sales has started" message to stdout
-
Compile the Java class
-
Archive the Java class into a JAR
-
Execute the Java class using the JAR as a classpath
-
Register the Java class as the
Main-Class
in theMETA-INF/MANIFEST.MF
file of the JAR -
Execute the JAR to launch the Java class
-
Turn in a source tree with a complete Maven module that will build and execute a demonstration of the pure Java main application.
36.4. Grading
Your solution will be evaluated on:
-
create source code for an executable Java class
-
whether the Java class includes a non-root Java package
-
the assignment of a unique Java package for your work
-
whether you have successfully provided a main method that prints a startup message
-
-
add that Java class to a Maven module
-
the assignment of a unique groupId relative to your work
-
whether it follows standard, basic Maven src/main directory structure
-
-
build the module using a Maven pom.xml
-
whether the module builds from the command line
-
-
execute the application using a classpath
-
if the Java main class executes using a
java -cp
approach -
if the demonstration of execution is performed as part of the Maven build
-
-
execute an application packaged as an executable JAR
-
if the java main class executes using a
java -jar
approach -
if the demonstration of execution is performed as part of the Maven build
-
36.5. Additional Details
-
The root maven pom can extend either
spring-boot-starter-parent
or ejava-build-parent
. Add a <relativePath/>
tag to the parent reference to indicate an orphan project if doing so. -
When inheriting or depending on
ejava
class modules, include a JHU repository reference in your root pom.xml.
<repositories>
    <repository>
        <id>ejava-nexus</id>
        <url>https://pika.jhuep.com/nexus/repository/ejava-snapshots</url>
    </repository>
</repositories>
-
The Maven build shall automate the demonstration of the two execution styles. You can use the
maven-antrun-plugin
or any other Maven plugin to implement this. -
A quick start project is available in
assignment-starter/homesales-starter/assignment0-homesales-javaapp
Modify Maven groupId and Java package if used.
37. Part B: Build Spring Boot Executable JAR
37.1. Purpose
In this portion of the assignment you will demonstrate your knowledge of building a simple Spring Boot Application. You will:
-
construct a basic Spring Boot application
-
define a simple Spring component and inject that into the Spring Boot application
-
build and execute an executable Spring Boot JAR
37.2. Overview
In this portion of the assignment, you are going to implement a Spring Boot executable JAR with a Spring Boot application and execute it.
37.3. Requirements
-
Create a Maven project to host a Spring Boot Application
-
Supply a single Java class with a main() method that bootstraps the Spring Boot Application
-
Supply a
@Component
that will be loaded and invoked when the application starts-
have that
@Component
print a single "Sales has started" message to stdout
-
-
Compile the Java class
-
Archive the Java class
-
Convert the JAR into an executable Spring Boot Application JAR
-
Execute the JAR and Spring Boot Application
-
Turn in a source tree with a complete Maven module that will build and execute a demonstration of the Spring Boot application
37.4. Grading
Your solution will be evaluated on:
-
extend the standard Maven jar module packaging type to include core Spring Boot dependencies
-
whether you have added a dependency on
spring-boot-starter
to bring in required dependencies
-
-
construct a basic Spring Boot application
-
whether you have defined a proper
@SpringBootApplication
-
-
define a simple Spring component and inject that into the Spring Boot application
-
whether you have successfully injected a
@Component
that prints a startup message
-
-
build and execute an executable Spring Boot JAR
-
whether you have configured the Spring Boot plugin to build an executable JAR
-
if the demonstration of execution is performed as part of the Maven build
-
37.5. Additional Details
-
The root maven pom can extend either
spring-boot-starter-parent
or ejava-build-parent
. Add a <relativePath/>
tag to the parent reference to indicate an orphan project if doing so. -
When inheriting or depending on
ejava
class modules, include a JHU repository reference in your root pom.xml.
<repositories>
    <repository>
        <id>ejava-nexus</id>
        <url>https://pika.jhuep.com/nexus/repository/ejava-snapshots</url>
    </repository>
</repositories>
-
The Maven build shall automate the demonstration of the application using the
spring-boot-maven-plugin
. There is no need for the maven-antrun-plugin
in this portion of the assignment. -
A quick start project is available in
assignment-starter/homesales-starter/assignment0-homesales-bootapp
. Modify Maven groupId and Java package if used.
Bean Factory and Dependency Injection
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
38. Introduction
This material provides an introduction to configuring an application using a factory method. This is the most basic use of separation between the interface used by the application and the decision of what the implementation will be.
The configuration choice shown will be part of the application but as you will see later, configurations can be deeply nested — far away from the details known to the application writer.
38.1. Goals
The student will learn:
-
to decouple an application through the separation of interface and implementation
-
to configure an application using dependency injection and factory methods of a configuration class
38.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
implement a service interface and implementation component
-
package a service within a Maven module separate from the application module
-
implement a Maven module dependency to make the component class available to the application module
-
use a
@Bean
factory method of a@Configuration
class to instantiate a Spring-managed component
39. Hello Service
To get started, we are going to create a sample Hello
service. We are going
to implement an interface and a single implementation right off the bat.
They will be housed in two separate modules
-
hello-service-api
-
hello-service-stdout
We will start out by creating two separate module directories.
39.1. Hello Service API
The Hello Service API module will contain a single interface and pom.xml.
hello-service-api/
|-- pom.xml
`-- src
    `-- main
        `-- java
            `-- info
                `-- ejava
                    `-- examples
                        `-- app
                            `-- hello
                                `-- Hello.java (1)
1 | Service interface |
39.2. Hello Service StdOut
The Hello Service StdOut module will contain a single implementation class and pom.xml.
hello-service-stdout/
|-- pom.xml
`-- src
    `-- main
        `-- java
            `-- info
                `-- ejava
                    `-- examples
                        `-- app
                            `-- hello
                                `-- stdout
                                    `-- StdOutHello.java (1)
1 | Service implementation |
39.3. Hello Service API pom.xml
We will be building a normal Java JAR with no direct dependencies on Spring Boot or Spring.
#pom.xml
...
<groupId>info.ejava.examples.app</groupId>
<version>6.0.1-SNAPSHOT</version>
<artifactId>hello-service-api</artifactId>
<packaging>jar</packaging>
...
39.4. Hello Service StdOut pom.xml
The implementation will be similar to the interface’s pom.xml except it requires a dependency on the interface module.
#pom.xml
...
<groupId>info.ejava.examples.app</groupId>
<version>6.0.1-SNAPSHOT</version>
<artifactId>hello-service-stdout</artifactId>
<packaging>jar</packaging>
<dependencies>
<dependency>
<groupId>${project.groupId}</groupId> (1)
<artifactId>hello-service-api</artifactId>
<version>${project.version}</version> (1)
</dependency>
</dependencies>
...
1 | Dependency references leveraging ${project} variables module shares with dependency |
Since we are using the same source tree, we can leverage ${project} variables.
This will not be the case when declaring dependencies on external modules.
|
39.5. Hello Service Interface
The interface is quite simple, just pass in the String name for what you want the service to say hello to.
package info.ejava.examples.app.hello;
public interface Hello {
void sayHello(String name);
}
The service instance will be responsible for
-
the greeting
-
the implementation — how we say hello
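The implementation class itself is not shown in this excerpt; a minimal sketch, consistent with how StdOutHello is constructed and used later in this example, might look like:
package info.ejava.examples.app.hello.stdout;

import info.ejava.examples.app.hello.Hello;

public class StdOutHello implements Hello {
    private final String greeting; // supplied at construction; the implementation decides "how"

    public StdOutHello(String greeting) {
        this.greeting = greeting;
    }

    @Override
    public void sayHello(String name) {
        System.out.println(greeting + " " + name);
    }
}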
39.7. Hello Service Modules Complete
We are now done implementing our sample service interface and implementation — just build and install to make available to the application we will work on next.
39.8. Hello Service API Maven Build
$ mvn clean install -f hello-service-api
[INFO] Scanning for projects...
[INFO]
[INFO] -------------< info.ejava.examples.app:hello-service-api >--------------
[INFO] Building App::Config::Hello Service API 6.0.1-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ hello-service-api ---
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ hello-service-api ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory .../app-config/hello-service-api/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ hello-service-api ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to .../app-config/hello-service-api/target/classes
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ hello-service-api ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory .../app-config/hello-service-api/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ hello-service-api ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ hello-service-api ---
[INFO] No tests to run.
[INFO]
[INFO] --- maven-jar-plugin:3.1.2:jar (default-jar) @ hello-service-api ---
[INFO] Building jar: .../app-config/hello-service-api/target/hello-service-api-6.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- maven-install-plugin:3.0.0-M1:install (default-install) @ hello-service-api ---
[INFO] Installing .../app-config/hello-service-api/target/hello-service-api-6.0.1-SNAPSHOT.jar to .../.m2/repository/info/ejava/examples/app/hello-service-api/6.0.1-SNAPSHOT/hello-service-api-6.0.1-SNAPSHOT.jar
[INFO] Installing .../app-config/hello-service-api/pom.xml to .../.m2/repository/info/ejava/examples/app/hello-service-api/6.0.1-SNAPSHOT/hello-service-api-6.0.1-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.070 s
39.9. Hello Service StdOut Maven Build
$ mvn clean install -f hello-service-stdout
[INFO] Scanning for projects...
[INFO]
[INFO] ------------< info.ejava.examples.app:hello-service-stdout >------------
[INFO] Building App::Config::Hello Service StdOut 6.0.1-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ hello-service-stdout ---
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ hello-service-stdout ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory .../app-config/hello-service-stdout/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ hello-service-stdout ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to .../app-config/hello-service-stdout/target/classes
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ hello-service-stdout ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory .../app-config/hello-service-stdout/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ hello-service-stdout ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ hello-service-stdout ---
[INFO] No tests to run.
[INFO]
[INFO] --- maven-jar-plugin:3.1.2:jar (default-jar) @ hello-service-stdout ---
[INFO] Building jar: .../app-config/hello-service-stdout/target/hello-service-stdout-6.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- maven-install-plugin:3.0.0-M1:install (default-install) @ hello-service-stdout ---
[INFO] Installing .../app-config/hello-service-stdout/target/hello-service-stdout-6.0.1-SNAPSHOT.jar to .../.m2/repository/info/ejava/examples/app/hello-service-stdout/6.0.1-SNAPSHOT/hello-service-stdout-6.0.1-SNAPSHOT.jar
[INFO] Installing .../app-config/hello-service-stdout/pom.xml to .../.m2/repository/info/ejava/examples/app/hello-service-stdout/6.0.1-SNAPSHOT/hello-service-stdout-6.0.1-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.658 s
40. Application Module
We now move on to developing our application within its own module containing two (2) classes similar to earlier examples.
|-- pom.xml
`-- src
    `-- main
        `-- java
            `-- info
                `-- ejava
                    `-- examples
                        `-- app
                            `-- config
                                `-- beanfactory
                                    |-- AppCommand.java (2)
                                    `-- SelfConfiguredApp.java (1)
1 | Class with Java main() that starts Spring |
2 | Class containing our first component that will be the focus of our injection |
40.1. Application Maven Dependency
We make the Hello Service visible to our application by adding a dependency
on the hello-service-api
and hello-service-stdout
artifacts. Since the
implementation already declares a compile dependency on the interface, we
can get away with declaring a direct dependency on just the implementation.
<groupId>info.ejava.examples.app</groupId>
<artifactId>appconfig-beanfactory-example</artifactId>
<name>App::Config::Bean Factory Example</name>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>hello-service-stdout</artifactId> (1)
<version>${project.version}</version>
</dependency>
</dependencies>
1 | Dependency on implementation creates dependency on both implementation and interface |
In this case, the module we are depending upon is in the
same groupId and shares the same version. For simplicity of
reference and versioning, I used the ${project} variables to
reference it. That will not always be the case.
|
40.2. Viewing Dependencies
You can verify the dependencies exist using the tree
goal of the dependency
plugin.
$ mvn dependency:tree -f hello-service-stdout
...
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hello-service-stdout ---
[INFO] info.ejava.examples.app:hello-service-stdout:jar:6.0.1-SNAPSHOT
[INFO] \- info.ejava.examples.app:hello-service-api:jar:6.0.1-SNAPSHOT:compile
40.3. Application Java Dependency
Next we add a reference to the Hello interface and define how we can get it injected. In this case we are using constructor injection, where the instance is supplied to the class through a parameter to the constructor.
The component class now has a non-default
constructor to allow the Hello implementation to be injected and the
Java attribute is defined as final to help assure that the value
is assigned during construction.
|
package info.ejava.examples.app.config.beanfactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import info.ejava.examples.app.hello.Hello;
@Component
public class AppCommand implements CommandLineRunner {
private final Hello greeter; (1)
public AppCommand(Hello greeter) { (2)
this.greeter = greeter;
}
public void run(String... args) throws Exception {
greeter.sayHello("World");
}
}
1 | Add a reference to the Hello interface. Java attribute defined as final
to help assure that the value is assigned during construction. |
2 | Using constructor injection, where the instance is supplied to the class through a parameter to the constructor |
41. Dependency Injection
Our AppCommand
class has been defined only with the interface to Hello
and not a
specific implementation.
This Separation of Concerns helps improve modularity, testability, reuse, and many other desirable features of an application. The interaction between the two classes is defined by an interface.
But how does our client class (AppCommand
) get an instance of the implementation (StdOutHello
)?
-
If the client class directly instantiates the implementation — it is coupled to that specific implementation.
public AppCommand() {
this.greeter = new StdOutHello("World");
}
-
If the client class procedurally delegates to a factory — it runs the risk of violating Separation of Concerns by adding complex initialization code to its primary business purpose
public AppCommand() {
this.greeter = BeanFactory.makeGreeter();
}
Traditional procedural code normally makes calls to libraries in order to perform a specific purpose. If we instead remove the instantiation logic and decisions from the client and place them elsewhere, we can keep the client more focused on its intended purpose. With this inversion of control (IoC), the application code becomes part of a framework that calls into it when it is time to do something, versus the other way around. In this case, the framework is for application assembly.
Most frameworks, including Spring, implement dependency injection through a form of IoC.
42. Spring Dependency Injection
We defined the dependency using the Hello
interface and have three primary ways
to have dependencies injected into an instance.
import org.springframework.beans.factory.annotation.Autowired;
public class AppCommand implements CommandLineRunner {
//@Autowired -- FIELD injection (3)
private Hello greeter;
@Autowired //-- Constructor injection (1)
public AppCommand(Hello greeter) {
this.greeter = greeter;
}
//@Autowired -- PROPERTY injection (2)
public void setGreeter(Hello hello) {
this.greeter = hello;
}
1 | constructor injection - injected values required prior to instance being created |
2 | field injection - value injected directly into attribute |
3 | setter or property injection - setter() called with value |
42.1. @Autowired Annotation
The @Autowired(required=…)
annotation
-
may be applied to fields, methods, constructors
-
@Autowired(required=true)
- default value forrequired
attribute-
successful injection mandatory when applied to a property
-
specific constructor use required when applied to a constructor
-
only a single constructor per class may have this annotation
-
-
-
@Autowired(required=false)
-
injected bean not required to exist when applied to a property
-
specific constructor an option for container to use
-
multiple constructors may have this annotation applied
-
container will determine best based on number of matches
-
-
single constructor has an implied
@Autowired(required=false)
- making annotation optional
-
There are more details to learn about injection and the lifecycle of a bean. However, know that we are using constructor injection at this point in time since the dependency is required for the instance to be valid.
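As a small sketch of the required=false case, using a hypothetical AuditService type that may or may not be defined as a bean:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

@Component
public class AppCommand implements CommandLineRunner {
    // may remain null if no AuditService bean exists; AuditService is hypothetical
    private AuditService auditService;

    @Autowired(required = false)
    public void setAuditService(AuditService auditService) {
        this.auditService = auditService;
    }
    //...
}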
42.2. Dependency Injection Flow
In our example:
-
Spring will detect the AppCommand component and look for ways to instantiate it
-
The only constructor requires a Hello instance
-
Spring will then look for a way to instantiate an instance of Hello
43. Bean Missing
When we go to run the application, we get the following error
$ mvn clean package
...
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 0 of constructor in AppCommand required a bean of type 'Hello' that could not be found.
Action:
Consider defining a bean of type 'Hello' in your configuration.
The problem is that the container has no knowledge of any beans that can satisfy the
only available constructor. The StdOutHello
class is not defined in a way that
allows Spring to use it.
43.1. Bean Missing Error Solution(s)
We can solve this in at least two (2) ways.
-
Add @Component to the StdOutHello class. This will trigger Spring to directly instantiate the class.
@Component
public class StdOutHello implements Hello {
-
problem: It may be one of many implementations of Hello
-
-
Define what is needed using a
@Bean
factory method of a @Configuration
class. This will trigger Spring to call a method that is in charge of instantiating an object of the type identified in the method return signature.
@Configuration
public class AConfigurationClass {
    @Bean
    public Hello hello() {
        return new StdOutHello("...");
    }
}
44. @Configuration classes
@Configuration
classes are classes that Spring expects to have one or more @Bean
factory
methods. If you remember back, our Spring Boot application class was annotated with
@SpringBootApplication
@SpringBootApplication (1)
//==> wraps @SpringBootConfiguration (2)
// ==> wraps @Configuration
public class SelfConfiguredApp {
public static final void main(String...args) {
SpringApplication.run(SelfConfiguredApp.class, args);
}
//...
}
1 | @SpringBootApplication is a wrapper around a few annotations including
@SpringBootConfiguration |
2 | @SpringBootConfiguration is an alternative annotation to using @Configuration
with the caveat that there be only one @SpringBootConfiguration per application |
Therefore, we have the option to use our Spring Boot application class to host the configuration and
the @Bean
factory.
45. @Bean Factory Method
There is more to @Bean
factory methods than we will cover here, but at its
simplest and most functional level — this is a method the container will call
when the container determines it needs a bean of a certain type and locates
a @Bean
annotated method with a return type of the required type.
Adding a @Bean
factory method to our Spring Boot application class will
result in the following in our Java class.
@SpringBootApplication (4) (5)
public class SelfConfiguredApp {
public static final void main(String...args) {
SpringApplication.run(SelfConfiguredApp.class, args);
}
@Bean (1)
public Hello hello() { (2)
return new StdOutHello("Application @Bean says Hey"); (3)
}
}
1 | method annotated with @Bean implementation |
2 | method returns Hello type required by container |
3 | method returns a fully instantiated instance. |
4 | method hosted within class with @Configuration annotation |
5 | @SpringBootConfiguration annotation included the capability defined for
@Configuration |
Anything missing to create the instance can be declared as an input parameter to the method; the container will resolve it in the same manner and pass it in as a parameter. |
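For example, a sketch of a factory method that declares a parameter; Greeting is a hypothetical helper type introduced only for this illustration:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import info.ejava.examples.app.hello.Hello;
import info.ejava.examples.app.hello.stdout.StdOutHello;

@Configuration
public class GreetingConfiguration {
    // hypothetical value holder, not part of the example source
    public static class Greeting {
        private final String text;
        public Greeting(String text) { this.text = text; }
        public String getText() { return text; }
    }

    @Bean
    public Greeting greeting() {            // produces the dependency first
        return new Greeting("Application @Bean says Hey");
    }

    @Bean
    public Hello hello(Greeting greeting) { // container resolves the parameter
        return new StdOutHello(greeting.getText());
    }
}
The container first calls greeting() to satisfy the parameter and then calls hello() with the result.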
46. @Bean Factory Used
With the @Bean
factory method in place, everything comes together at runtime to produce the following:
$ java -jar target/appconfig-beanfactory-example-*-SNAPSHOT-bootexec.jar
...
Application @Bean says Hey World
-
the container
-
obtained an instance of a
Hello
bean -
passed that bean to the
AppCommand
class' constructor to instantiate that@Component
-
-
the
@Bean
factory method-
chose the implementation of the
Hello
service (StdOutHello
) -
chose the greeting to be used ("Application @Bean says Hey")
return new StdOutHello("Application @Bean says Hey");
-
-
the AppCommand CommandLineRunner determined who to say hello to ("World")
greeter.sayHello("World");
48. Summary
In this module we
-
decoupled part of our application into three Maven modules (app, iface, and impl1)
-
decoupled the implementation details (
StdOutHello
) of a service from the caller (AppCommand
) of that service -
injected the implementation of the service into a component using constructor injection
-
defined a
@Bean
factory method to make the determination of what to inject -
showed an alternative using XML-based configuration and
@ImportResource
In future modules we will look at more detailed aspects of bean lifecycle and @Bean factory methods. Right now we are focused on following a path to explore decoupling the application even further.
Value Injection
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
49. Introduction
One of the things you may have noticed was the hard-coded string in the AppCommand class in the previous example.
public void run(String... args) throws Exception {
greeter.sayHello("World");
}
Let's say we don’t want the value hard-coded or passed in as a command-line argument. Let's go down a path that uses standard Spring value injection to inject a value from a property file.
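Spring Boot, for example, automatically loads src/main/resources/application.properties from the classpath; a file like the following (the value shown is illustrative) could supply such a property:
# src/main/resources/application.properties
app.audience=Property File World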
49.1. Goals
The student will learn:
-
how to configure an application using properties
-
how to use different forms of injection
49.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
implement value injection into a Spring Bean attribute using
-
field injection
-
constructor injection
-
-
inject a specific value at runtime using a command line parameter
-
define a default value for the attribute
-
define property values for attributes of different type
50. @Value Annotation
To inject a value from a property source, we can add the Spring
@Value
annotation to the component property.
package info.ejava.examples.app.config.valueinject;
import org.springframework.beans.factory.annotation.Value;
...
@Component
public class AppCommand implements CommandLineRunner {
private final Hello greeter;
@Value("${app.audience}") (2)
private String audience; (1)
public AppCommand(Hello greeter) {
this.greeter = greeter;
}
public void run(String... args) throws Exception {
greeter.sayHello(audience);
}
}
1 | defining target of value as a FIELD |
2 | using FIELD injection to directly inject into the field |
There are no specific requirements for property names but there
are some common conventions followed using (prefix).(property)
to scope the property within a context.
-
app.audience
-
logging.file.name
-
spring.application.name
50.1. Value Not Found
However, if the property is not defined anywhere, the following ugly error will appear.
2019-09-22 20:16:24.286 WARN 38915 --- [main] s.c.a.AnnotationConfigApplicationContext :
Exception encountered during context initialization - cancelling refresh attempt:
org.springframework.beans.factory.BeanCreationException: Error creating bean with
name 'appCommand': Injection of autowired dependencies failed; nested exception
is java.lang.IllegalArgumentException: Could not resolve placeholder
'app.audience' in value "${app.audience}"
50.2. Value Property Provided by Command Line
We can try to fix the problem by defining the property value on the command line
$ java -jar target/appconfig-valueinject-example-*-SNAPSHOT-bootexec.jar \
--app.audience="Command line World" (1)
...
Application @Bean says Hey Command line World
1 | use double dash (-- ) and property name to supply property value |
50.3. Default Value
We can defend against the value not being provided by assigning a default value where we declared the injection
@Value("${app.audience:Default World}") (1)
private String audience;
1 | use :value to express a default value for injection |
That results in the following output
$ java -jar target/appconfig-valueinject-example-*-SNAPSHOT-bootexec.jar
...
Application @Bean says Hey Default World
$ java -jar target/appconfig-valueinject-example-*-SNAPSHOT-bootexec.jar \
--app.audience="Command line World"
...
Application @Bean says Hey Command line World
51. Constructor Injection
In the above version of the example, we injected the Hello
bean through the constructor
and the audience
property using FIELD injection. This means
-
the value for
audience
attribute will not be known during the constructor -
the value for
audience
attribute cannot be made final
@Value("${app.audience}")
private String audience;
public AppCommand(Hello greeter) {
this.greeter = greeter;
greeter.sayHello(audience); //X-no (1)
}
1 | audience value will be null when used in the constructor — when using FIELD injection |
51.1. Constructor Injection Solution
An alternative to using field
injection is to change it to constructor
injection.
This has the benefit of having all properties injected in time to have them declared final.
@Component
public class AppCommand implements CommandLineRunner {
private final Hello greeter;
private final String audience; (2)
public AppCommand(Hello greeter,
@Value("${app.audience:Default World}") String audience) {
this.greeter = greeter;
this.audience = audience; (1)
}
1 | audience value will be known when used in the constructor |
2 | audience value can be optionally made final |
52. @PostConstruct
If field-injection is our choice, we can account for the late-arriving injections by leveraging @PostConstruct
.
The Spring container will call a method annotated with @PostConstruct
after instantiation (ctor called) and properties fully injected.
import javax.annotation.PostConstruct;
...
@Component
public class AppCommand implements CommandLineRunner {
private final Hello greeter; (1)
@Value("${app.audience}")
private String audience; (2)
@PostConstruct
void init() { (3)
greeter.sayHello(audience); //yes-greeter and audience initialized
}
public AppCommand(Hello greeter) {
this.greeter = greeter;
}
1 | constructor injection occurs first and in-time to declare attribute as final |
2 | field and property-injection occurs next and can involve many properties |
3 | Container calls @PostConstruct when all injection complete |
53. Property Types
53.1. non-String Property Types
Properties can also express non-String types as the following example shows.
@Component
public class PropertyExample implements CommandLineRunner {
private final String strVal;
private final int intVal;
private final boolean booleanVal;
private final float floatVal;
public PropertyExample(
@Value("${val.str:}") String strVal,
@Value("${val.int:0}") int intVal,
@Value("${val.boolean:false}") boolean booleanVal,
@Value("${val.float:0.0}") float floatVal) {
...
The property values are expressed using string values that can be syntactically converted to the type of the target variable.
$ java -jar target/appconfig-valueinject-example-*-SNAPSHOT-bootexec.jar \
--app.audience="Command line option" \
--val.str=aString \
--val.int=123 \
--val.boolean=true \
--val.float=123.45
...
Application @Bean says Hey Command line option
strVal=aString
intVal=123
booleanVal=true
floatVal=123.45
53.2. Collection Property Types
We can also express properties as a sequence of values and inject the parsed string into Arrays and Collections.
...
private final List<Integer> intList;
private final int[] intArray;
private final Set<Integer> intSet;
public PropertyExample(...
@Value("${val.intList:}") List<Integer> intList,
@Value("${val.intList:}") Set<Integer> intSet,
@Value("${val.intList:}") int[] intArray) {
...
--val.intList=1,2,3,3,3
...
intList=[1, 2, 3, 3, 3] (1)
intSet=[1, 2, 3] (2)
intArray=[1, 2, 3, 3, 3] (3)
1 | parsed sequence with duplicates injected into List maintained duplicates |
2 | parsed sequence with duplicates injected into Set retained only unique values |
3 | parsed sequence with duplicates injected into Array maintained duplicates |
53.3. Custom Delimiters (using Spring EL)
We can get a bit more elaborate and define a custom delimiter for the values.
However, it requires the use of Spring Expression Language (EL) #{}
operator.
(Ref: A Quick Guide to Spring @Value)
private final List<Integer> intList;
private final List<Integer> intListDelimiter;
public PropertyExample(
...
@Value("${val.intList:}") List<Integer> intList,
@Value("#{'${val.intListDelimiter:}'.split('!')}") List<Integer> intListDelimiter, (2)
...
--val.intList=1,2,3,3,3 --val.intListDelimiter='1!2!3!3!3' (1)
...
intList=[1, 2, 3, 3, 3]
intListDelimiter=[1, 2, 3, 3, 3]
...
1 | sequence is expressed on command line using two different delimiters |
2 | val.intListDelimiter String is read in from raw property value and segmented at the custom ! character |
53.4. Map Property Types
We can also leverage Spring EL to inject property values directly into a Map.
private final Map<Integer,String> map;
public PropertyExample( ...
@Value("#{${val.map:{}}}") Map<Integer,String> map) { (1)
...
--val.map="{0:'a', 1:'b,c,d', 2:'x'}"
...
map={0=a, 1=b,c,d, 2=x}
1 | parsed map injected into Map of specific type using the Spring Expression Language (#{}) operator |
53.5. Map Element
We can also use Spring EL to obtain a specific element from a Map.
private final String mapValue;
public PropertyExample(
...
@Value("#{${val.map:{0:'',3:''}}[3]}") String mapValue, (1)
...
(no args)
...
mapValue= (2)
--val.map={0:'foo', 2:'bar, baz', 3:'buz'}
...
mapValue=buz (3)
...
1 | Spring EL declared to use the Map element with key 3 and default to a Map of 2 elements with keys 0 and 3 |
2 | With no arguments provided, the default 3:'' value was injected |
3 | With a map provided, the value 3:'buz' was injected |
53.6. System Properties
We can also simply inject Java System Properties into a Map using Spring EL.
private final Map<String, String> systemProperties;
public PropertyExample(
...
@Value("#{systemProperties}") Map<String, String> systemProperties) { (1)
...
System.out.println("systemProperties[user.timezone]=" + systemProperties.get("user.timezone")); (2)
...
systemProperties[user.timezone]=America/New_York
1 | Complete Map of system properties is injected |
2 | Single element is accessed and printed |
53.7. Property Conversion Errors
An error will be reported and the program will not start if the value provided cannot be syntactically converted to the target variable type.
$ java -jar target/appconfig-valueinject-example-*-SNAPSHOT-bootexec.jar \
--val.int=abc
...
TypeMismatchException: Failed to convert value of type 'java.lang.String' to required type 'int';
nested exception is java.lang.NumberFormatException: For input string: "abc"
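If failing startup is too drastic for a given property, one defensive option is to inject the raw value as a String and convert it manually with a fallback. The following is a minimal sketch and not part of the course modules; SafeIntExample is a hypothetical class.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class SafeIntExample {
    private final int intVal;

    public SafeIntExample(@Value("${val.int:0}") String rawVal) {
        int parsed;
        try {
            parsed = Integer.parseInt(rawVal); //convert the raw String ourselves
        } catch (NumberFormatException ex) {
            parsed = 0; //fall back to a safe default instead of failing startup
        }
        this.intVal = parsed;
    }
}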
54. Summary
In this section we:
-
defined a value injection for an attribute within a Spring Bean using
-
field injection
-
constructor injection
-
defined a default value to use in the event a value is not provided
-
defined a specific value to inject at runtime using a command line parameter
-
implemented property injection for attributes of different types
-
built-in types (String, int, boolean, etc.)
-
collection types
-
Maps
-
defined custom parsing techniques using Spring Expression Language (EL)
In future sections we will look to specify properties using aggregate property sources like file(s) rather than specifying each property individually.
Property Source
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
55. Introduction
In the previous section we defined a value injection into an attribute of a Spring Bean class and demonstrated a few ways to inject a value on an individual basis. Next, we will set up ways to specify entire collections of property values through files.
55.1. Goals
The student will learn:
-
to supply groups of properties using files
-
to configure a Spring Boot application using property files
-
to flexibly configure and control configurations applied
55.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
configure a Spring Boot application using a property file
-
specify a property file for a basename
-
specify a property file packaged within a JAR file
-
specify a property file located on the file system
-
specify both straight
properties
and YAML
property file sources
-
specify multiple files to derive an injected property from
-
specify properties based on an active profile
-
specify properties based on placeholder values
56. Property File Source(s)
Spring Boot uses three key properties when looking for configuration files (Ref: docs.spring.io):
-
spring.config.name — one or more base names separated by commas. The default is application and the suffixes searched for are .properties and .yml (or .yaml)
-
spring.profiles.active — one or more profile names separated by commas, used in this context to identify which form of the base name to use. The default is default and this value is located at the end of the base filename, separated by a dash (e.g., application-default)
-
spring.config.location — one or more directories/packages to search for configuration files, or explicit references to specific files. The default is:
-
file:config/ — within a config directory in the current directory
-
file:./ — within the current directory
-
classpath:/config/ — within a config package in the classpath
-
classpath:/ — within the root package of the classpath
Names are primarily used to identify the base name of the application (e.g., application
or myapp
) or
of distinct areas (e.g., database
, security
). Profiles are primarily used to supply
variants of property values. Location is primarily used to identify the search paths to look for
configuration files but can be used to override names and profiles when a complete file path is supplied.
56.1. Property File Source Example
In this initial example I will demonstrate spring.config.name
and spring.config.location
and
use a single value injection similar to previous examples.
//AppCommand.java
...
@Value("${app.audience}")
private String audience;
...
However, the source of the property value will not come from the command line. It will come from one of the following property and/or YAML files in our module.
src
`-- main
|-- java
| `-- ...
`-- resources
|-- alternate_source.properties
|-- alternate_source.yml
|-- application.properties
`-- property_source.properties
$ jar tf target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar | \
egrep 'classes.*(properties|yml)'
BOOT-INF/classes/alternate_source.properties
BOOT-INF/classes/alternate_source.yml
BOOT-INF/classes/property_source.properties
BOOT-INF/classes/application.properties
56.2. Example Property File Contents
The four files each declare the same property app.audience
but with a different
value.
Spring Boot primarily supports the two file types shown (properties
and YAML
). There is some support for JSON, while XML is primarily used to define other types of configuration.
The first three files below are in
properties
format.
#property_source.properties
app.audience=Property Source value
#alternate_source.properties
app.audience=alternate source property file
#application.properties
app.audience=application.properties value
This last file is in
YAML
format.
#alternate_source.yml
app:
audience: alternate source YAML file
That means the following — which will load the application.(properties|yml)
file
from one of the four locations …
$ java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar
...
Application @Bean says Hey application.properties value
can also be accomplished with
$ java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar \
--spring.config.location="classpath:/"
...
Application @Bean says Hey application.properties value
$ java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar \
--spring.config.location="file:src/main/resources/"
...
Application @Bean says Hey application.properties value
$ java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar \
--spring.config.location="file:src/main/resources/application.properties"
...
Application @Bean says Hey application.properties value
$ cp src/main/resources/application.properties /tmp/xyz.properties
$ java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar \
--spring.config.name=xyz --spring.config.location="file:/tmp/"
...
Application @Bean says Hey application.properties value
56.3. Non-existent Path
If you supply a non-existent path, Spring will report that as an error.
java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar \
--spring.config.location="file:src/main/resources/,file:src/main/resources/does_not_exit/"
***************************
APPLICATION FAILED TO START
***************************
Description:
Config data location 'file:src/main/resources/does_not_exit/' does not exist
Action:
Check that the value 'file:src/main/resources/does_not_exit/' is correct, or prefix it with 'optional:'
You can mark the location with optional:
for cases where it is legitimate for the location not to exist.
java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar \
--spring.config.location="file:src/main/resources/,optional:file:src/main/resources/does_not_exit/"
56.4. Path not Ending with Slash ("/")
If you supply a path not ending with a slash ("/"), Spring will also report an error.
java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar \
--spring.config.location="file:src/main/resources"
...
14:28:23.544 [main] ERROR org.springframework.boot.SpringApplication - Application run failed
java.lang.IllegalStateException: Unable to load config data from 'file:src/main/resources'
...
Caused by: java.lang.IllegalStateException: File extension is not known to any PropertySourceLoader. If the location is meant to reference a directory, it must end in '/' or File.separator
56.5. Alternate File Examples
We can switch to a different set of configuration files by changing the
spring.config.name
or spring.config.location
so that …
#property_source.properties
app.audience=Property Source value
#alternate_source.properties
app.audience=alternate source property file
#alternate_source.yml
app:
audience: alternate source YAML file
can be used to produce
$ java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar \
--spring.config.name=property_source
...
Application @Bean says Hey Property Source value
$ java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar \
--spring.config.name=alternate_source
...
Application @Bean says Hey alternate source property file
$ java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar \
--spring.config.location="classpath:alternate_source.properties,classpath:alternate_source.yml"
...
Application @Bean says Hey alternate source YAML file
56.6. Series of Files
We can also supply a series of base names at one time. The following two files each define the app.audience property.
#property_source.properties
app.audience=Property Source value
#alternate_source.properties
app.audience=alternate source property file
When the same property is defined in multiple files, the last file specified takes priority by default.
$ java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar \
--spring.config.name="property_source,alternate_source"
...
Application @Bean says Hey alternate source property file
$ java -jar target/appconfig-propertysource-example-*-SNAPSHOT-bootexec.jar \
--spring.config.name="alternate_source,property_source"
...
Application @Bean says Hey Property Source value
57. @PropertySource Annotation
We can explicitly load a property file using the Spring-provided
@PropertySource
annotation. This annotation can be used on any class that
is used as a @Configuration
, so I will add that to the main application.
However, because we are still working with a very simplistic, single property example — I have started a sibling example that only has a
single property file so that no priority/overrides from application.properties
will occur.
|-- pom.xml
`-- src
`-- main
|-- java
| `-- info
| `-- ejava
| `-- examples
| `-- app
| `-- config
| `-- propertysource
| `-- annotation
| |-- AppCommand.java
| `-- PropertySourceApp.java
`-- resources
`-- property_source.properties
#property_source.properties
app.audience=Property Source value
...
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.PropertySource;
@SpringBootApplication
@PropertySource("classpath:property_source.properties") (1)
public class PropertySourceApp {
...
1 | An explicit reference to the properties file is placed within the
annotation on the @Configuration class |
When we now execute our JAR, we get the contents of the property file.
java -jar target/appconfig-propertysource-annotation-example-*-SNAPSHOT-bootexec.jar
...
Application @Bean says Hey Property Source value
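The annotation can also be repeated and marked to tolerate missing files. The following sketch assumes the standard ignoreResourceNotFound attribute of the Spring @PropertySource annotation; optional_source.properties is a hypothetical file and not part of the course example.
@SpringBootApplication
@PropertySource("classpath:property_source.properties")
@PropertySource(value = "classpath:optional_source.properties",
        ignoreResourceNotFound = true) //skip rather than fail if the file is absent
public class PropertySourceApp {
}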
58. Profiles
In addition to spring.config.name
and spring.config.location
, there is a third
configuration property — spring.profiles.active
— that Spring uses when configuring
an application. Profiles are identified by
-(profileName)
at the end of the base filename
(e.g., application-site1.properties
, myapp-site1.properties
)
I am going to create a new example to help explain this.
|-- pom.xml
`-- src
`-- main
|-- java
| `-- info
| `-- ejava
| `-- examples
| `-- app
| `-- config
| `-- propertysource
| `-- profiles
| |-- AppCommand.java
| `-- PropertySourceApp.java
`-- resources
|-- application-default.properties
|-- application-site1.properties
|-- application-site2.properties
`-- application.properties
The example uses the default spring.config.name
of application
and supplies four
property files.
-
each of the property files supplies a common property of
app.commonProperty
to help demonstrate priority -
each of the property files supplies a unique property to help identify whether the file was used
#application.properties
app.commonProperty=commonProperty from application.properties
app.appProperty=appProperty from application.properties
#application-default.properties
app.commonProperty=commonProperty from application-default.properties
app.defaultProperty=defaultProperty from application-default.properties
#application-site1.properties
app.commonProperty=commonProperty from application-site1.properties
app.site1Property=site1Property from application-site1.properties
#application-site2.properties
app.commonProperty=commonProperty from application-site2.properties
app.site2Property=site2Property from application-site2.properties
The component class defines an attribute for each of the available properties and defines a default value to identify when they have not been supplied.
@Component
public class AppCommand implements CommandLineRunner {
@Value("${app.commonProperty:not supplied}")
private String commonProperty;
@Value("${app.appProperty:not supplied}")
private String appProperty;
@Value("${app.defaultProperty:not supplied}")
private String defaultProperty;
@Value("${app.site1Property:not supplied}")
private String site1Property;
@Value("${app.site2Property:not supplied}")
private String site2Property;
In all cases (except when using an alternate spring.config.name), we will get
application.properties loaded. However, it is applied at a lower priority
than all other sources.
|
58.1. Default Profile
If we run the program with no profiles active, we enact the default
profile.
site1
and site2
profiles are not loaded.
$ java -jar target/appconfig-propertysource-profile-example-*-SNAPSHOT-bootexec.jar
...
commonProperty=commonProperty from application-default.properties (1)
appProperty=appProperty from application.properties (2)
defaultProperty=defaultProperty from application-default.properties (3)
site1Property=not supplied (4)
site2Property=not supplied
1 | commonProperty was set to the value from default profile |
2 | application.properties was loaded |
3 | the default profile was loaded |
4 | site1 and site2 profiles were not loaded |
58.2. Specific Active Profile
If we activate a specific profile (site1
) the associated file is loaded
and the alternate profiles — including default
— are not loaded.
$ java -jar target/appconfig-propertysource-profile-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=site1
...
commonProperty=commonProperty from application-site1.properties (1)
appProperty=appProperty from application.properties (2)
defaultProperty=not supplied (3)
site1Property=site1Property from application-site1.properties (4)
site2Property=not supplied (3)
1 | commonProperty was set to the value from site1 profile |
2 | application.properties was loaded |
3 | default and site2 profiles were not loaded |
4 | the site1 profile was loaded |
58.3. Multiple Active Profiles
We can activate multiple profiles at the same time. If they define overlapping properties, the later one specified takes priority.
$ java -jar target/appconfig-propertysource-profile-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=site1,site2 (1)
...
commonProperty=commonProperty from application-site2.properties (1)
appProperty=appProperty from application.properties (2)
defaultProperty=not supplied (3)
site1Property=site1Property from application-site1.properties (4)
site2Property=site2Property from application-site2.properties (4)
$ java -jar target/appconfig-propertysource-profile-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=site2,site1 (1)
...
commonProperty=commonProperty from application-site1.properties (1)
appProperty=appProperty from application.properties (2)
defaultProperty=not supplied (3)
site1Property=site1Property from application-site1.properties (4)
site2Property=site2Property from application-site2.properties (4)
1 | commonProperty was set to the value from last specified profile |
2 | application.properties was loaded |
3 | the default profile was not loaded |
4 | site1 and site2 profiles were loaded |
58.4. No Associated Profile
If there are no associated profiles with a given spring.config.name
, then
none will be loaded.
$ java -jar target/appconfig-propertysource-profile-example-*-SNAPSHOT-bootexec.jar \
--spring.config.name=BOGUS --spring.profiles.active=site1 (1)
...
commonProperty=not supplied (1)
appProperty=not supplied
defaultProperty=not supplied
site1Property=not supplied
site2Property=not supplied
1 | No profiles were loaded for spring.config.name BOGUS |
59. Property Placeholders
We have the ability to build property values using a placeholder that will be resolved from elsewhere. Consider the following example, where a common pattern in a set of URLs changes based on a base URL value (a consumption sketch follows the list).
-
(config_name).properties
would be the candidate to host the following definitions
security.authn=${security.service.url}/authentications?user=:user
security.authz=${security.service.url}/authorizations/roles?user=:user
-
profiles would host the specific value for the placeholder
-
(config_name)-(profileA).properties
security.service.url=http://localhost:8080
-
(config_name)-(profileB).properties
security.service.url=https://acme.com
-
the default value for the placeholder can be declared in the same property file that uses it
security.service.url=https://acme.com
security.authn=${security.service.url}/authentications?user=:user
security.authz=${security.service.url}/authorizations/roles?user=:user
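Components consume the composed value like any other property; placeholder resolution happens before injection. The following is a minimal sketch assuming the security.authn property above — SecurityClient is a hypothetical class and not part of the course modules.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class SecurityClient {
    private final String authnUrl;

    public SecurityClient(@Value("${security.authn}") String authnUrl) {
        //receives the fully resolved value, e.g., http://localhost:8080/authentications?user=:user
        this.authnUrl = authnUrl;
    }
}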
59.1. Placeholder Demonstration
To demonstrate this further, I am going to add three additional property files to the previous example.
`-- src
`-- main
...
`-- resources
|-- ...
|-- myapp-site1.properties
|-- myapp-site2.properties
`-- myapp.properties
59.2. Placeholder Property Files
# myapp.properties
app.commonProperty=commonProperty from myapp.properties (2)
app.appProperty="${app.commonProperty}" used by myapp.property (1)
1 | defines a placeholder for another property |
2 | defines a default value for the placeholder within this file |
Only the ${} characters and property name are specific to
property placeholders. Quotes ("") within this property value are part of this example
and not anything specific to property placeholders in general.
|
# myapp-site1.properties
app.commonProperty=commonProperty from myapp-site1.properties (1)
app.site1Property=site1Property from myapp-site1.properties
1 | defines a value for the placeholder |
# myapp-site2.properties
app.commonProperty=commonProperty from myapp-site2.properties (1)
app.site2Property=site2Property from myapp-site2.properties
1 | defines a value for the placeholder |
59.3. Placeholder Value Defined Internally
Without any profiles activated, we obtain a value for the placeholder
from within myapp.properties
.
$ java -jar target/appconfig-propertysource-profile-example-*-SNAPSHOT-bootexec.jar \
--spring.config.name=myapp
...
commonProperty=commonProperty from myapp.properties
appProperty="commonProperty from myapp.properties" used by myapp.property (1)
defaultProperty=not supplied
site1Property=not supplied
site2Property=not supplied
1 | placeholder value coming from default value defined in same myapp.properties |
59.4. Placeholder Value Defined in Profile
Activating the site1
profile causes the placeholder value to get defined by
myapp-site1.properties
.
$ java -jar target/appconfig-propertysource-profile-example-*-SNAPSHOT-bootexec.jar \
--spring.config.name=myapp --spring.profiles.active=site1
...
commonProperty=commonProperty from myapp-site1.properties
appProperty="commonProperty from myapp-site1.properties" used by myapp.property (1)
defaultProperty=not supplied
site1Property=site1Property from myapp-site1.properties
site2Property=not supplied
1 | placeholder value coming from value defined in myapp-site1.properties |
59.5. Multiple Active Profiles
Multiple profiles can be activated. By default — the last profile specified has the highest priority.
$ java -jar target/appconfig-propertysource-profile-example-*-SNAPSHOT-bootexec.jar \
--spring.config.name=myapp --spring.profiles.active=site1,site2
...
commonProperty=commonProperty from myapp-site2.properties
appProperty="commonProperty from myapp-site2.properties" used by myapp.property (1)
defaultProperty=not supplied
site1Property=site1Property from myapp-site1.properties
site2Property=site2Property from myapp-site2.properties
1 | placeholder value coming from value defined in last profile — myapp-site2.properties |
59.6. Mixing Names, Profiles, and Location
Name, profile, and location constructs can play well together as long as location only references a directory path and not a specific file. In the example below, we are defining a non-default name, a non-default profile, and a non-default location to search for the property files.
$ java -jar target/appconfig-propertysource-profile-example-*-SNAPSHOT-bootexec.jar \
--spring.config.name=myapp \
--spring.profiles.active=site1 \
--spring.config.location="file:src/main/resources/"
...
commonProperty=commonProperty from myapp-site1.properties
appProperty="commonProperty from myapp-site1.properties" used by myapp.property
defaultProperty=not supplied
site1Property=site1Property from myapp-site1.properties
site2Property=not supplied
The above example located the following property files in the filesystem (not classpath)
-
src/main/resources/myapp.properties
-
src/main/resources/myapp-site1.properties
60. Summary
In this module we
-
supplied property value(s) through a set of property files
-
used both
properties
andYAML
formatted files to express property values -
specified base filename(s) to use using the
--spring.config.name
property -
specified profile(s) to use using the
--spring.profiles.active
property -
specified paths(s) to search using the
--spring.config.location
property -
specified a custom file to load using the
@PropertySource
annotation -
specified multiple names, profiles, and locations
In future modules we will show how to leverage these property sources in a way that can make configuring the Java code easier.
Configuration Properties
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
61. Introduction
In the previous chapter we mapped properties from different sources and then mapped them directly into individual component Java class attributes. That showed a lot of power but had at least one flaw — each component would define its own injection of a property. If we changed the structure of a property, we would have many places to update and some of that might not be within our code base.
In this chapter we are going to continue to leverage the same property source(s) as before but remove the individual configuration properties entirely from the component classes and encapsulate them within a configuration class that gets instantiated, populated, and injected into the component at runtime.
We will also explore adding validation of properties and leveraging tooling to automatically generate boilerplate JavaBean constructs.
61.1. Goals
The student will learn to:
-
map a Java
@ConfigurationProperties
class to properties -
define validation rules for property values
-
leverage tooling to generate boilerplate code for JavaBean classes
-
solve more complex property mapping scenarios
-
resolve injection mapping ambiguity
61.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
map a Java
@ConfigurationProperties
class to a group of properties-
generate property metadata — used by IDEs for property editors
-
-
create read-only
@ConfigurationProperties
class using@ConstructorBinding
-
define a Jakarta EE validation rule for a property and have it validated at runtime
-
generate boilerplate JavaBean methods using Lombok library
-
use relaxed binding to map between JavaBean and property syntax
-
map nested properties to a
@ConfigurationProperties
class -
map array properties to a
@ConfigurationProperties
class -
reuse
@ConfigurationProperties
class to map multiple property trees -
use
@Qualifier
annotation and other techniques to map or disambiguate an injection
62. Mapping properties to @ConfigurationProperties class
Starting off simple, we define a property (app.config.car.name
) in application.properties
to hold the name of a car.
# application.properties
app.config.car.name=Suburban
62.1. Mapped Java Class
At this point we want to create a Java class to be instantiated and assigned the
value(s) from the various property sources — application.properties
in this case, but as
we have seen from earlier lectures properties can come from many places. The class follows
standard JavaBean characteristics
-
default constructor to instantiate the class in a default state
-
"setter"/"getter" methods to set and get the state of the instance
A "toString()" method was also added to self-describe the state of the instance.
import org.springframework.boot.context.properties.ConfigurationProperties;
@ConfigurationProperties("app.config.car") (3)
public class CarProperties { (1)
private String name;
//default ctor (2)
public String getName() {
return name;
}
public void setName(String name) {
this.name = name; (2)
}
@Override
public String toString() {
return "CarProperties{name='" + name + "\'}";
}
}
1 | class is a standard Java bean with one property |
2 | class designed for us to use its default constructor and a setter() to assign value(s) |
3 | class annotated with @ConfigurationProperties to identify that it is mapped to properties and
the property prefix that pertains to this class |
62.2. Injection Point
We can have Spring instantiate the bean, set the state, and inject that into a component at runtime and have the state of the bean accessible to the component.
...
@Component
public class AppCommand implements CommandLineRunner {
@Autowired
private CarProperties carProperties; (1)
public void run(String... args) throws Exception {
System.out.println("carProperties=" + carProperties); (2)
...
1 | Our @ConfigurationProperties instance is being injected into a @Component class
using FIELD injection |
2 | Simple print statement of bean’s toString() result |
62.3. Initial Error
However, if we build and run our application at this point, our injection will fail because Spring was not able to locate what it needed to complete the injection.
***************************
APPLICATION FAILED TO START
***************************
Description:
Field carProperties in info.ejava.examples.app.config.configproperties.AppCommand required a bean
of type 'info.ejava.examples.app.config.configproperties.properties.CarProperties' that could
not be found.
The injection point has the following annotations:
- @org.springframework.beans.factory.annotation.Autowired(required=true)
Action:
Consider defining a bean of type
'info.ejava.examples.app.config.configproperties.properties.CarProperties'
in your configuration. (1)
1 | Error message indicates that Spring is not seeing our @ConfigurationProperties class |
62.4. Registering the @ConfigurationProperties class
We have a problem similar to the one we had when we implemented our first @Configuration
and @Component
classes — the bean is not being scanned. Even though our
@ConfigurationProperties
class is in the same basic classpath as the @Configuration
and @Component
classes — we need a little more to have it processed by Spring. There are several ways to do that:
src
`-- main
|-- java
| `-- info
| `-- ejava
| `-- examples
| `-- app
| `-- config
| `-- configproperties
| |-- AppCommand.java
| |-- ConfigurationPropertiesApp.java
| `-- properties
| `-- CarProperties.java
`-- resources
`-- application.properties
62.4.1. way 1 - Register Class as a @Component
Our package is being scanned by Spring for components, so if we add a @Component
annotation
the @ConfigurationProperties
class will be automatically picked up.
package info.ejava.examples.app.config.configproperties.properties;
...
@Component
@ConfigurationProperties("app.config.car") (1)
public class CarProperties {
1 | causes Spring to process the bean and annotation as part of component classpath scanning |
-
benefits: simple
-
drawbacks: harder to override when configuration class and component class are in the same Java class package tree
62.4.2. way 2 - Explicitly Register Class
Explicitly register the class using
@EnableConfigurationProperties
annotation on a @Configuration
class (such as the @SpringBootApplication
class)
import info.ejava.examples.app.config.configproperties.properties.CarProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
...
@SpringBootApplication
@EnableConfigurationProperties(CarProperties.class) (1)
public class ConfigurationPropertiesApp {
1 | targets a specific @ConfigurationProperties class to process |
-
benefits:
@Configuration
class has explicit control over which configuration properties classes to activate -
drawbacks: application could be coupled with the details of where configurations come from
62.4.3. way 3 - Enable Package Scanning
Enable package scanning for @ConfigurationProperties
classes with the
@ConfigurationPropertiesScan
annotation
@SpringBootApplication
@ConfigurationPropertiesScan (1)
public class ConfigurationPropertiesApp {
1 | allows a generalized scan to be defined that is separate from component scanning |
-
benefits: easy to add more configuration classes without changing application
-
drawbacks: generalized scan may accidentally pick up an unwanted configuration
62.4.4. way 4 - Use @Bean factory
Create a @Bean
factory method in a @Configuration
class for the type.
@SpringBootApplication
public class ConfigurationPropertiesApp {
...
@Bean
@ConfigurationProperties("app.config.car") (1)
public CarProperties carProperties() {
return new CarProperties();
}
1 | gives more control over the runtime mapping of the bean to the @Configuration class |
-
benefits: decouples the
@ConfigurationProperties
class from the specific property prefix used to populate it. This allows for reuse of the same@ConfigurationProperties
class for multiple prefixes -
drawbacks: implementation spread out between the
@ConfigurationProperties
and@Configuration
classes. It also prohibits the use of read-only instances since the returned object is not yet populated
For this example, I am going to use @ConfigurationPropertiesScan
("way 3"), drop multiple @ConfigurationProperties
classes into the same classpath, and have them automatically
scanned.
62.5. Result
Having things properly in place, we get the instantiated and initialized
CarProperties
@ConfigurationProperties
class injected into our component(s). Our example
AppCommand
component simply prints the toString()
result of the instance and we see the property
we set in the application.properties
file.
# application.properties
app.config.car.name=Suburban
...
@Component
public class AppCommand implements CommandLineRunner {
@Autowired
private CarProperties carProperties;
public void run(String... args) throws Exception {
System.out.println("carProperties" + carProperties);
...
$ java -jar target/appconfig-configproperties-example-*-SNAPSHOT-bootexec.jar
...
carProperties=CarProperties{name='Suburban'}
63. Metadata
IDEs have support for linking Java properties to their @ConfigurationProperties
class
information.
This allows the property editor to know:
-
there is a property
app.config.car.name
-
any provided Javadoc
Spring Configuration Metadata and IDE support is very helpful when faced with configuring dozens of components with hundreds of properties (or more!) |
63.1. Spring Configuration Metadata
IDEs rely on a JSON-formatted metadata file located in
META-INF/spring-configuration-metadata.json
to provide that information.
...
"properties": [
{
"name": "app.config.car.name",
"type": "java.lang.String",
"description": "Name of car with no set maximum size",
"sourceType": "info.ejava.examples.app.config.configproperties.properties.CarProperties"
}
...
We can author it manually. However, there are ways to automate this.
63.2. Spring Configuration Processor
To have Maven automatically generate the JSON metadata file, add the following dependency
to the project to have additional artifacts generated during Java compilation.
The Java compiler will locate the annotation processor inside the dependency
and call it to perform the additional processing.
Make it optional=true
since it is only needed during compilation and not at runtime.
<!-- pom.xml dependencies -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId> (1)
<optional>true</optional> (2)
</dependency>
1 | dependency will generate additional artifacts during compilation |
2 | dependency not required at runtime and can be eliminated from dependents |
Dependencies labelled optional=true or scope=provided are not included in the
Spring Boot executable JAR or transitive dependencies in downstream deployments without
further configuration by downstream dependents.
|
63.3. Javadoc Supported
As noted earlier, the metadata also supports documentation extracted from Javadoc comments. To demonstrate this, I will add some simple Javadoc to our example property.
@ConfigurationProperties("app.config.car")
public class CarProperties {
/**
* Name of car with no set maximum size (1)
*/
private String name;
1 | Javadoc information is extracted from the class and placed in the property metadata |
63.4. Rebuild Module
Rebuilding the module with Maven and reloading the module within the IDE should give the IDE additional information it needs to help fill out the properties file.
$ mvn clean compile
target/classes/META-INF/
`-- spring-configuration-metadata.json
{
"groups": [
{
"name": "app.config.car",
"type": "info.ejava.examples.app.config.configproperties.properties.CarProperties",
"sourceType": "info.ejava.examples.app.config.configproperties.properties.CarProperties"
}
],
"properties": [
{
"name": "app.config.car.name",
"type": "java.lang.String",
"description": "Name of car with no set maximum size",
"sourceType": "info.ejava.examples.app.config.configproperties.properties.CarProperties"
}
],
"hints": []
}
63.5. IDE Property Help
If your IDE supports Spring Boot and property metadata, the property editor will offer help filling out properties.
IntelliJ free Community Edition does not support this feature. The following link provides a comparison with the for-cost Ultimate Edition. |
64. Constructor Binding
The previous example was a good start. However, I want to create a slight improvement at this point with a similar example and make the JavaBean read-only. This better depicts the contract we have with properties. They are read-only.
To accomplish a read-only JavaBean, we should remove the setter(s), create a custom constructor that will initialize the attributes at instantiation time, and ideally declare the attributes as final to enforce that they get initialized during construction and never changed.
The only requirement Spring places on us is to add a @ConstructorBinding
annotation to the
class or constructor method when using this approach.
...
import org.springframework.boot.context.properties.ConstructorBinding;
@ConfigurationProperties("app.config.boat")
public class BoatProperties {
private final String name; (3)
@ConstructorBinding (2)
public BoatProperties(String name) {
this.name = name;
}
//no setter method(s) (1)
public String getName() {
return name;
}
@Override
public String toString() {
return "BoatProperties{name='" + name + "\'}";
}
}
1 | remove setter methods to better advertise the read-only contract of the bean |
2 | add custom constructor and annotate the class or constructor with @ConstructorBinding |
3 | make attributes final to better enforce the read-only nature of the bean |
The @ConstructorBinding annotation is required on the constructor method when more than
one constructor is supplied.
|
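Constructor binding also supports default values through Spring Boot's @DefaultValue parameter annotation. The following is a minimal sketch against the BoatProperties example; the default text is illustrative and not part of the course example.
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.ConstructorBinding;
import org.springframework.boot.context.properties.bind.DefaultValue;

@ConfigurationProperties("app.config.boat")
public class BoatProperties {
    private final String name;

    @ConstructorBinding
    public BoatProperties(@DefaultValue("Unnamed Boat") String name) {
        this.name = name; //assigned "Unnamed Boat" when app.config.boat.name is not supplied
    }
}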
64.1. Property Names Bound to Constructor Parameter Names
When using constructor binding, we no longer have the name of the setter method(s) to help map the properties. The parameter name(s) of the constructor are used instead to resolve the property values.
In the following example, the property app.config.boat.name
matches the constructor
parameter name
. The result is that we get the output we expect.
# application.properties
app.config.boat.name=Maxum
$ java -jar target/appconfig-configproperties-example-*-SNAPSHOT-bootexec.jar
...
boatProperties=BoatProperties{name='Maxum'}
64.2. Constructor Parameter Name Mismatch
If we change the constructor parameter name to not match the property name, we will get a null for the property.
@ConfigurationProperties("app.config.boat")
public class BoatProperties {
private final String name;
@ConstructorBinding
public BoatProperties(String nameX) { (1)
this.name = nameX;
}
1 | constructor argument name has been changed to not match the property name from application.properties |
$ java -jar target/appconfig-configproperties-example-*-SNAPSHOT-bootexec.jar
...
boatProperties=BoatProperties{name='null'}
We will discuss relaxed binding soon and see that some syntactical
differences between the property name and JavaBean property name are accounted
for during @ConfigurationProperties binding. However, this was a clear case
of a name mismatch that will not be mapped.
|
65. Validation
The error in the last example would have occurred whether we used constructor or setter-based binding. We would have had a possibly vague problem if the property was needed by the application. We can help detect invalid property values for both the setter and constructor approaches by leveraging validation.
Java validation is a Java EE/Jakarta EE standard API for expressing validation rules for JavaBeans. It allows us to express constraints on JavaBeans to help further modularize objects within our application.
To add validation to our application, we start by adding the Spring Boot validation starter
(spring-boot-starter-validation
) to our pom.xml.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
This will bring in three (3) dependencies
-
jakarta.validation-api - this is the validation API and is required to compile the module
-
hibernate-validator - this is a validation implementation
-
tomcat-embed-el - this is required when expressing validations using regular expressions with
@Pattern
annotation
65.1. Validation Annotations
We trigger Spring to validate our JavaBean when instantiated by the container by adding the
Spring @Validated
annotation to the class.
We further define the Java attribute with the Jakarta EE
@NotNull
constraint to report an error if the property is ever null.
...
import org.springframework.validation.annotation.Validated;
import javax.validation.constraints.NotNull;
@ConfigurationProperties("app.config.boat")
@Validated (1)
public class BoatProperties {
@NotNull (2)
private final String name;
@ConstructorBinding
public BoatProperties(String nameX) {
this.name = nameX;
}
...
1 | The Spring @Validated annotation tells Spring to validate instances of this
class |
2 | The Jakarta EE @NotNull annotation tells the validator this field is not
allowed to be null |
You can locate other validation constraints in the Validation API and also extend the API to provide more customized validations using the Validation Spec, Hibernate Validator Documentation, or various web searches. |
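Other constraints compose in the same way. The following is a hypothetical sketch; the constraint values are illustrative and not part of the course example.
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Pattern;
import javax.validation.constraints.Size;

@ConfigurationProperties("app.config.boat")
@Validated
public class BoatProperties {
    @NotNull
    @Size(min = 1, max = 40) //reject empty or overly long names
    @Pattern(regexp = "[\\p{Alnum} ]+") //letters, digits, and spaces only (@Pattern relies on tomcat-embed-el, per the dependency note above)
    private final String name;
    ...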
65.2. Validation Error
The error produced is caught by Spring Boot and turned into a helpful description of the problem, clearly stating there is a problem with one of the properties specified (when actually it was a problem with the way the JavaBean class was implemented).
$ java -jar target/appconfig-configproperties-example-*-SNAPSHOT-bootexec.jar
***************************
APPLICATION FAILED TO START
***************************
Description:
Binding to target org.springframework.boot.context.properties.bind.BindException:
Failed to bind properties under 'app.config.boat' to
info.ejava.examples.app.config.configproperties.properties.BoatProperties failed:
Property: app.config.boat.name
Value: null
Reason: must not be null
Action:
Update your application's configuration
Notice how the error message output by Spring Boot automatically knew what a validation error was and that the invalid property mapped to a specific property name. That is an example of Spring Boot’s FailureAnalyzer framework in action — which aims to make meaningful messages out of what would otherwise be a clunky stack trace. |
66. Boilerplate JavaBean Methods
Before our implementation gets more complicated, we need to address a simplification we can make to our JavaBean source code that will make all future JavaBean implementations incredibly easy.
Notice all the boilerplate constructor, getter/setter, toString(), etc. methods within our earlier JavaBean classes? These methods are primarily based off the attributes of the class. They are commonly implemented by IDEs during development but then become part of the overall code base that has to be maintained over the lifetime of the class. This will only get worse as we add additional attributes to the class when our code gets more complex.
...
@ConfigurationProperties("app.config.boat")
@Validated
public class BoatProperties {
@NotNull
private final String name;
@ConstructorBinding
public BoatProperties(String name) { //boilerplate (1)
this.name = name;
}
public String getName() { //boilerplate (1)
return name;
}
@Override
public String toString() { //boilerplate (1)
return "BoatProperties{name='" + name + "\'}";
}
}
1 | Many boilerplate methods in source code — likely generated by IDE |
66.1. Generating Boilerplate Methods with Lombok
These boilerplate methods can be automatically provided for us at compilation using the Lombok library. Lombok is not unique to Spring Boot but has been adopted into Spring Boot’s overall opinionated approach to developing software and has been integrated into the popular Java IDEs.
I will introduce various Lombok features during later portions of the course
and start with a simple case here where all defaults for a JavaBean are desired.
The simple Lombok @Data
annotation intelligently inspects the JavaBean class,
with just its attributes declared, and supplies the boilerplate constructs commonly generated
by the IDE:
-
constructor to initialize attributes
-
getter
-
toString()
-
hashCode() and equals()
A setter was not defined by Lombok because the name
attribute is declared final.
...
import lombok.Data;
@ConfigurationProperties("app.config.company")
@ConstructorBinding
@Data (1)
@Validated
public class CompanyProperties {
@NotNull
private final String name;
//constructor (1)
//getter (1)
//toString (1)
//hashCode and equals (1)
}
1 | Lombok @Data annotation generated constructor, getter(/setter), toString, hashCode, and equals |
66.2. Visible Generated Constructs
The additional methods can be identified in a class structure view of an IDE or
by using the Java disassembler (javap
) command.
You may need to enable an annotation processing compiler option within your IDE properties for the generated code to be visible within your IDE. |
$ javap -cp target/classes info.ejava.examples.app.config.configproperties.properties.CompanyProperties
Compiled from "CompanyProperties.java"
public class info.ejava.examples.app.config.configproperties.properties.CompanyProperties {
public info.ejava.examples.app.config.configproperties.properties.CompanyProperties(java.lang.String);
public java.lang.String getName();
public boolean equals(java.lang.Object);
protected boolean canEqual(java.lang.Object);
public int hashCode();
public java.lang.String toString();
}
66.3. Lombok Build Dependency
The Lombok annotations are defined with
RetentionPolicy.SOURCE
.
That means they are discarded by the compiler and not available at runtime.
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.SOURCE)
public @interface Data {
That permits us to declare the dependency as scope=provided
to eliminate it from the application’s
executable JAR and transitive dependencies and have no extra bloat in the module
as well.
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<scope>provided</scope>
</dependency>
66.4. Example Output
Running our example using the same, simple toString()
print statement and
property definitions produces near identical results from the caller’s perspective.
The only difference here is the specific text used in the returned string.
...
@Autowired
private BoatProperties boatProperties;
@Autowired
private CompanyProperties companyProperties;
public void run(String... args) throws Exception {
System.out.println("boatProperties=" + boatProperties); (1)
System.out.println("====");
System.out.println("companyProperties=" + companyProperties); (2)
...
1 | BoatProperties JavaBean methods were provided by hand |
2 | CompanyProperties JavaBean methods were provided by Lombok |
# application.properties
app.config.boat.name=Maxum
app.config.company.name=Acme
$ java -jar target/appconfig-configproperties-example-*-SNAPSHOT-bootexec.jar
...
boatProperties=BoatProperties{name='Maxum'}
====
companyProperties=CompanyProperties(name=Acme)
There is a Spring
@ConstructorBinding issue that prevents property metadata from being
automatically generated. This is due to a
Lombok issue where usable argument names are not provided in the
generated constructor. The only workaround at this time, if you want
metadata generated for the class, is to hand-code the constructor.
|
Lombok ConstructorBinding Issue Listed as Closed
Since providing the warning above, the version of Lombok used has advanced (1.18.20), the issue has been closed, and the problem may have been resolved.
Confirmation is needed.
|
With the exception of the property metadata issue just mentioned, adding Lombok to our development approach for JavaBeans is almost a 100% win situation. 80-90% of the JavaBean class is written for us and we can override the defaults at any time with further annotations or custom methods. The fact that Lombok will not replace methods we have manually provided for the class always gives us an escape route in the event something needs to be customized.
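As a small illustration of that escape route (a sketch, not part of the course example), a hand-written method alongside @Data takes precedence because Lombok only generates what is missing.
@ConfigurationProperties("app.config.company")
@ConstructorBinding
@Data
public class CompanyProperties {
    @NotNull
    private final String name;

    @Override
    public String toString() { //Lombok skips generating toString() because we supplied one
        return "Company[" + name + "]";
    }
}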
67. Relaxed Binding
One of the key differences between Spring’s @Value
injection and @ConfigurationProperties
is the support for relaxed binding by the latter. With relaxed binding, property definitions do
not have to be an exact match. JavaBean properties are commonly defined in camelCase, while
property definitions can come in a number of
different case formats. Here are a few:
-
camelCase
-
UpperCamelCase
-
kebab-case
-
snake_case
-
UPPERCASE
67.1. Relaxed Binding Example JavaBean
In this example, I am going to add a class to express many different properties of a business. Each of the attributes is expressed using camelCase to be consistent with common Java coding conventions and further validated using Jakarta EE Validation.
@ConfigurationProperties("app.config.business")
@ConstructorBinding
@Data
@Validated
public class BusinessProperties {
@NotNull
private final String name;
@NotNull
private final String streetAddress;
@NotNull
private final String city;
@NotNull
private final String state;
@NotNull
private final String zipCode;
private final String notes;
}
67.2. Relaxed Binding Example Properties
The properties supplied provide an example of the relaxed binding Spring implements between property and JavaBean definitions.
# application.properties
app.config.business.name=Acme
app.config.business.street-address=100 Suburban Dr
app.config.business.CITY=Newark
app.config.business.State=DE
app.config.business.zip_code=19711
app.config.business.notess=This is a property name typo
-
kebab-case
street-address
matched Java camelCasestreetAddress
-
UPPERCASE
CITY
matched Java camelCasecity
-
UpperCamelCase
State
matched Java camelCasestate
-
snake_case
zip_code
matched Java camelCasezipCode
-
typo
notess
does not match Java camelCasenotes
67.3. Relaxed Binding Example Output
These relaxed bindings are shown in the following output. However, the
notes
attribute is an example that there is no magic when it comes to
correcting typo errors. The extra character in notess
prevented a mapping
to the notes
attribute. The IDE/metadata can help avoid the error,
and validation can identify when the error exists.
$ java -jar target/appconfig-configproperties-example-*-SNAPSHOT-bootexec.jar
...
businessProperties=BusinessProperties(name=Acme, streetAddress=100 Suburban Dr,
city=Newark, state=DE, zipCode=19711, notes=null)
68. Nested Properties
The previous examples used a flat property model. That may not always be the case. In this example we will look into mapping nested properties.
(1)
app.config.corp.name=Acme
(2)
app.config.corp.address.street=100 Suburban Dr
app.config.corp.address.city=Newark
app.config.corp.address.state=DE
app.config.corp.address.zip=19711
1 | name is part of a flat property model below corp |
2 | address is a container of nested properties |
68.1. Nested Properties JavaBean Mapping
The mapping of the nested class is no surprise. We supply a JavaBean to hold the nested properties and reference it from the host/outer class.
...
@Data
@ConstructorBinding
public class AddressProperties {
private final String street;
@NotNull
private final String city;
@NotNull
private final String state;
@NotNull
private final String zip;
}
In this specific case we are using a read-only JavaBean and need to supply the @ConstructorBinding
annotation.
|
68.2. Nested Properties Host JavaBean Mapping
The host class (CorporateProperties
) declares the base property prefix
and a reference (address
) to the nested class.
...
import org.springframework.boot.context.properties.NestedConfigurationProperty;
@ConfigurationProperties("app.config.corp")
@ConstructorBinding
@Data
@Validated
public class CorporationProperties {
@NotNull
private final String name;
@NestedConfigurationProperty //needed for metadata
@NotNull
//@Valid
private final AddressProperties address;
The @NestedConfigurationProperty is only supplied to generate
correct metadata — otherwise only a single address
property will be identified to exist within the generated metadata.
|
The validation initiated by the @Validated annotation seems to
automatically propagate into the nested AddressProperties class without
the need to add @Valid annotation.
|
68.3. Nested Properties Output
The defined properties are populated within the host and nested bean and accessible to components within the application.
$ java -jar target/appconfig-configproperties-example-*-SNAPSHOT-bootexec.jar
...
corporationProperties=CorporationProperties(name=Acme,
address=AddressProperties(street=null, city=Newark, state=DE, zip=19711))
69. Property Arrays
As the previous example begins to show, property mapping can get complex. I won’t demonstrate all the possible mappings; please consult the documentation available on the Internet for a complete view. However, I will demonstrate an initial collection mapping to arrays as a way to go a level deeper.
In this example, RouteProperties
hosts a local name
property and a
list of stops
that are of type AddressProperties
that we used before.
...
@ConfigurationProperties("app.config.route")
@ConstructorBinding
@Data
@Validated
public class RouteProperties {
@NotNull
private String name;
@NestedConfigurationProperty
@NotNull
@Size(min = 1)
private List<AddressProperties> stops; (1)
...
1 | RouteProperties hosts list of stops as AddressProperties |
69.1. Property Arrays Definition
The above can be mapped using a properties format.
# application.properties
app.config.route.name: Superbowl
app.config.route.stops[0].street: 1101 Russell St
app.config.route.stops[0].city: Baltimore
app.config.route.stops[0].state: MD
app.config.route.stops[0].zip: 21230
app.config.route.stops[1].street: 347 Don Shula Drive
app.config.route.stops[1].city: Miami
app.config.route.stops[1].state: FLA
app.config.route.stops[1].zip: 33056
However, it may be easier to map using YAML.
# application.yml
app:
config:
route:
name: Superbowl
stops:
- street: 1101 Russell St
city: Baltimore
state: MD
zip: 21230
- street: 347 Don Shula Drive
city: Miami
state: FLA
zip: 33056
69.2. Property Arrays Output
Injecting that into our application and printing the state of the bean (with a
little formatting) produces the following output showing that each of the stops
were added to the route
using the AddressProperty
.
$ java -jar target/appconfig-configproperties-example-*-SNAPSHOT-bootexec.jar
...
routeProperties=RouteProperties(name=Superbowl, stops=[
AddressProperties(street=1101 Russell St, city=Baltimore, state=MD, zip=21230),
AddressProperties(street=347 Don Shula Drive, city=Miami, state=FLA, zip=33056)
])
70. System Properties
Note that Java properties can come from several sources and we are able to map them from standard Java system properties as well.
The following example shows mapping three (3) system properties: user.name
,
user.home
, and user.timezone
to a @ConfigurationProperties
class.
@ConfigurationProperties("user")
@ConstructorBinding
@Data
public class UserProperties {
@NotNull
private final String name; (1)
@NotNull
private final String home; (2)
@NotNull
private final String timezone; (3)
1 | mapped to SystemProperty user.name |
2 | mapped to SystemProperty user.home |
3 | mapped to SystemProperty user.timezone |
70.1. System Properties Usage
Injecting that into our components gives us access to the mapped properties and, of course,
access to them using standard getters and not just toString()
output.
@Component
public class AppCommand implements CommandLineRunner {
...
@Autowired
private UserProperties userProps;
public void run(String... args) throws Exception {
...
System.out.println(userProps); (1)
System.out.println("user.home=" + userProps.getHome()); (2)
1 | output UserProperties toString |
2 | get specific value mapped from user.home |
$ java -jar target/appconfig-configproperties-example-*-SNAPSHOT-bootexec.jar
...
UserProperties(name=jim, home=/Users/jim, timezone=America/New_York)
user.home=/Users/jim
71. @ConfigurationProperties Class Reuse
The examples to date have been singleton values mapped to one root source. However, as we saw with AddressProperties, we could have multiple groups of properties with the same structure and different root prefixes.
In the following example we have two instances of person. One has the prefix of owner and the other manager, but they both follow the same structural schema.
# application.yml
owner: (1)
name: Steve Bushati
address:
city: Millersville
state: MD
zip: 21108
manager: (1)
name: Eric Decosta
address:
city: Owings Mills
state: MD
zip: 21117
1 | owner and manager root prefixes both follow the same structural schema |
71.1. @ConfigurationProperties Class Reuse Mapping
We would like two (2) bean instances, one representing each person, implemented as one JavaBean class. We can structurally map both to the same class and create two instances of that class. However, when we do that we can no longer apply the @ConfigurationProperties annotation and prefix to the bean class, because the prefix will be instance-specific.
//@ConfigurationProperties("???") multiple prefixes mapped (1)
@Data
@Validated
public class PersonProperties {
@NotNull
private String name;
@NestedConfigurationProperty
@NotNull
private AddressProperties address;
1 | unable to apply root prefix-specific @ConfigurationProperties to class |
71.2. @ConfigurationProperties @Bean Factory
We can solve the issue of having two (2) separate root prefixes by adding a @Bean factory method for each use, and we can use our root-level application class to host those factory methods.
@SpringBootApplication
@ConfigurationPropertiesScan
public class ConfigurationPropertiesApp {
...
@Bean
@ConfigurationProperties("owner") (2)
public PersonProperties ownerProps() {
return new PersonProperties(); (1)
}
@Bean
@ConfigurationProperties("manager") (2)
public PersonProperties managerProps() {
return new PersonProperties(); (1)
}
1 | @Bean factory method returns JavaBean instance to use |
2 | Spring populates the JavaBean according to the ConfigurationProperties annotation |
We are no longer able to use read-only JavaBeans when using the @Bean factory method in this way. We are returning a default instance for Spring to populate based on the specified @ConfigurationProperties prefix of the factory method.
71.3. Injecting ownerProps
Taking this one instance at a time: when we inject an instance of PersonProperties into the ownerProps attribute of our component, the ownerProps @Bean factory is called and we get the information for our owner.
@Component
public class AppCommand implements CommandLineRunner {
@Autowired
private PersonProperties ownerProps;
$ java -jar target/appconfig-configproperties-example-*-SNAPSHOT-bootexec.jar
...
PersonProperties(name=Steve Bushati, address=AddressProperties(street=null, city=Millersville, state=MD, zip=21108))
Great! However, there was something subtle there that allowed things to work.
71.4. Injection Matching
Spring had two @Bean factory methods to choose from to produce an instance of PersonProperties.
@Bean
@ConfigurationProperties("owner")
public PersonProperties ownerProps() {
...
@Bean
@ConfigurationProperties("manager")
public PersonProperties managerProps() {
...
The ownerProps @Bean factory method name happened to match the ownerProps Java attribute name, and that resolved the ambiguity.
@Component
public class AppCommand implements CommandLineRunner {
@Autowired
private PersonProperties ownerProps; (1)
1 | Attribute name of injected bean matches @Bean factory method name |
71.5. Ambiguous Injection
If we were to add the manager and specifically not make the two names match, there will be ambiguity as to which @Bean factory to use. The injected attribute name is manager and the desired @Bean factory method name is managerProps.
@Component
public class AppCommand implements CommandLineRunner {
@Autowired
private PersonProperties manager; (1)
1 | Java attribute name does not match @Bean factory method name |
$ java -jar target/appconfig-configproperties-example-*-SNAPSHOT-bootexec.jar
***************************
APPLICATION FAILED TO START
***************************
Description:
Field manager in info.ejava.examples.app.config.configproperties.AppCommand
required a single bean, but 2 were found:
- ownerProps: defined by method 'ownerProps' in
info.ejava.examples.app.config.configproperties.ConfigurationPropertiesApp
- managerProps: defined by method 'managerProps' in
info.ejava.examples.app.config.configproperties.ConfigurationPropertiesApp
Action:
Consider marking one of the beans as @Primary, updating the consumer to accept multiple beans,
or using @Qualifier to identify the bean that should be consumed
71.6. Injection @Qualifier
As the error message states, we can solve this in one of several ways. The @Qualifier route is mostly what we want, and we can take it in at least three ways.
71.7. way1: Create Custom @Qualifier Annotation
Create a custom @Qualifier annotation and apply that to the @Bean factory and injection point.
-
benefits: eliminates string name matching between factory mechanism and attribute
-
drawbacks: new annotation must be created and applied to both factory and injection point
package info.ejava.examples.app.config.configproperties.properties;
import org.springframework.beans.factory.annotation.Qualifier;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
@Qualifier
@Target({ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
public @interface Manager {
}
@Bean
@ConfigurationProperties("manager")
@Manager (1)
public PersonProperties managerProps() {
return new PersonProperties();
}
1 | @Manager annotation used to add additional qualification beyond just type |
@Autowired
private PersonProperties ownerProps;
@Autowired
@Manager (1)
private PersonProperties manager;
1 | @Manager annotation is used to disambiguate the factory choices |
71.8. way2: @Bean Factory Method Name as Qualifier
Use the name of the @Bean factory method as a qualifier.
-
benefits: no custom qualifier class required and factory signature does not need to be modified
-
drawbacks: text string must match factory method name
@Autowired
private PersonProperties ownerProps;
@Autowired
@Qualifier("managerProps") (1)
private PersonProperties manager;
1 | @Bean factory name is being applied as a qualifier versus defining a type |
71.9. way3: Match @Bean Factory Method Name
Change the name of the injected attribute to match the @Bean factory method name.
-
benefits: simple and properly represents the semantics of the singleton property
-
drawbacks: injected attribute name must match factory method name
@Bean
@ConfigurationProperties("owner")
public PersonProperties ownerProps() {
...
@Bean
@ConfigurationProperties("manager")
public PersonProperties managerProps() {
...
@Autowired
private PersonProperties ownerProps;
@Autowired
private PersonProperties managerProps; (1)
1 | Attribute name of injected bean matches @Bean factory method name |
71.10. Ambiguous Injection Summary
Factory choices and qualifiers are a whole topic in themselves. However, this set of examples showed how @ConfigurationProperties can leverage @Bean factories to assist in more complex property mappings. We will likely be happy taking the simple way3 solution, but it is good to know there is an easy way to use a @Qualifier annotation when we do not want to rely on a textual name match.
72. Summary
In this module we:
-
mapped properties from property sources to JavaBean classes annotated with @ConfigurationProperties and injected them into component classes
-
generated property metadata that can be used by IDEs to provide an aid to configuring properties
-
implemented a read-only JavaBean
-
defined property validation using the Jakarta EE Java Validation framework
-
generated boilerplate JavaBean constructs with the Lombok library
-
demonstrated how relaxed binding can lead to more flexible property names
-
mapped flat/simple properties, nested properties, and collections of properties
-
leveraged custom @Bean factories to reuse common property structure for different root instances
-
leveraged @Qualifiers in order to map or disambiguate injections
Auto Configuration
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
73. Introduction
Thus far we have focused on how to configure an application within the primary application module, under fairly static conditions, and applied directly to a single application.
However, our application configuration will likely be required to be:
-
dynamically determined - Application configurations commonly need to be dynamic based on libraries present, properties defined, resources found, etc. at startup. For example, what database will be used when in development, integration, or production? What security should be enabled in development versus production areas?
-
modularized and not repeated - Breaking the application down into separate components and making these components reusable in multiple applications by physically breaking them into separate modules is a good practice. However, that leaves us with the repeated responsibility to configure the components reused. Many times there could be dozens of choices to make within a component configuration and the application can be significantly simplified if an opinionated configuration can be supplied based on the runtime environment of the module.
If you find yourself needing configurations determined dynamically at runtime, or find yourself solving a repeated problem and bundling the solution into a library shared by multiple applications, you are going to want to master the concepts within Spring Boot's Auto-configuration capability discussed here. Some of these Auto-configuration capabilities can be placed directly into the application, while others are meant to be placed into separate Auto-configuration modules called "starter" modules that come with an opinionated, default way to configure a component with as little work as possible.
73.1. Goals
The student will learn to:
-
Enable/disable @Configuration classes and @Bean factories based on condition(s) at startup
-
Create Auto-configuration/Starter module(s) that establish necessary dependencies and conditionally supply beans
-
Resolve conflicts between alternate configurations
-
Locate environment and condition details to debug Auto-configuration issues
73.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
Enable a @Component, @Configuration class, or @Bean factory method based on the result of a condition at startup
-
Create Spring Boot Auto-configuration/Starter module(s)
-
Bootstrap Auto-configuration classes into applications using a spring.factories metadata file
-
Create a conditional component based on the presence of a property value
-
Create a conditional component based on a missing component
-
Create a conditional component based on the presence of a class
-
Define a processing dependency order for Auto-configuration classes
-
Access textual debug information relative to conditions using the debug property
-
Access web-based debug information relative to conditionals and properties using the Spring Boot Actuator
74. Review: Configuration Class
As we have seen earlier, @Configuration classes are how we bootstrap an application using Java classes. They are the modern alternative to the legacy XML definitions that basically do the same thing: define and configure beans.
@Configuration classes can be the @SpringBootApplication class itself. This would be appropriate for a small application.
@SpringBootApplication
//==> wraps @EnableAutoConfiguration
//==> wraps @SpringBootConfiguration
// ==> wraps @Configuration
public class SelfConfiguredApp {
public static final void main(String...args) {
SpringApplication.run(SelfConfiguredApp.class, args);
}
@Bean
public Hello hello() {
return new StdOutHello("Application @Bean says Hey");
}
}
74.1. Separate @Configuration Class
@Configuration classes can be broken out into separate classes. This would be appropriate for larger applications with distinct areas to be configured.
@Configuration(proxyBeanMethods = false)
public class AConfigurationClass {
@Bean
public Hello hello() {
return new StdOutHello("...");
}
}
@Configuration classes are commonly annotated with the proxyBeanMethods=false attribute. This tells Spring it need not create extra proxy code to enforce normal, singleton return of the created instance to be shared by all callers, since @Configuration class instances are only called by Spring. The javadoc for the annotation attribute describes the extra and unnecessary work saved.
75. Conditional Configuration
We can make @Bean factory methods (or @Component annotated classes) and entire @Configuration classes dependent on conditions found at startup.
The following example uses the @ConditionalOnProperty annotation to define a Hello bean based on the presence of the hello.quiet property equaling the value true.
...
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
@SpringBootApplication
public class StarterConfiguredApp {
public static final void main(String...args) {
SpringApplication.run(StarterConfiguredApp.class, args);
}
@Bean
@ConditionalOnProperty(prefix="hello", name="quiet", havingValue="true") (1)
public Hello quietHello() {
return new StdOutHello("(hello.quiet property condition set, Application @Bean says hi)");
}
}
1 | @ConditionalOnProperty annotation used to define a Hello bean based on the presence of the hello.quiet property equaling the value true |
75.1. Property Value Condition Satisfied
The following is an example of the property being defined with the targeted value.
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar --hello.quiet=true (1)
...
(hello.quiet property condition set, Application @Bean says hi) World (2)
1 | matching property supplied using command line |
2 | satisfies property condition in @SpringBootApplication |
The (parentheses) are meant to indicate a whisper. The hello.quiet=true property turns on this behavior.
75.2. Property Value Condition Not Satisfied
The following is an example of the property being missing. Since there is no Hello bean factory, we encounter an error that we will later solve using a separate Auto-configuration module.
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar (1)
...
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 0 of constructor in info.ejava.springboot.examples.app.AppCommand required a bean of type
'info.ejava.examples.app.hello.Hello' that could not be found.
The following candidates were found but could not be injected: (2)
- Bean method 'quietHello' in 'StarterConfiguredApp' not loaded because
@ConditionalOnProperty (hello.quiet=true) did not find property 'quiet'
Action:
Consider revisiting the entries above or defining a bean of type
'info.ejava.examples.app.hello.Hello' in your configuration.
1 | property either not specified or not specified with targeted value |
2 | property condition within @SpringBootApplication not satisfied |
76. Two Primary Configuration Phases
Configuration processing within Spring Boot is broken into two primary phases:
-
User-defined configuration classes
  - processed first
  - part of the application module
  - located through the use of a @ComponentScan (wrapped by @SpringBootApplication)
  - establish the base configuration for the application
  - fill in any fine-tuning details
-
Auto-configuration classes
  - processed second
  - outside the scope of the @ComponentScan
  - placed in separate modules, identified by metadata within those modules
  - enabled by the application using @EnableAutoConfiguration (also wrapped by @SpringBootApplication)
  - provide defaults to fill in the reusable parts of the application
  - use User-defined configuration for details
77. Auto-Configuration
An Auto-configuration class is technically no different than any other @Configuration class, except that it is inspected after processing of the User-defined @Configuration class(es) is complete and is located by being named in a META-INF/spring.factories descriptor. This alternate identification and second-pass processing allows the core application to make key directional and detailed decisions and to control conditions for the Auto-configuration class(es).
The following Auto-configuration class example defines an unconditional Hello bean factory that is configured using a @ConfigurationProperties class.
package info.ejava.examples.app.hello; (2)
...
@Configuration(proxyBeanMethods = false)
@EnableConfigurationProperties(HelloProperties.class)
public class HelloAutoConfiguration {
@Bean (1)
public Hello hello(HelloProperties helloProperties) {
return new StdOutHello(helloProperties.getGreeting());
}
}
1 | Example Auto-configuration class provides unconditional @Bean factory for Hello |
2 | this @Configuration package is outside the default scanning scope of @SpringBootApplication |
Auto-Configuration Packages are Separate from Application
Auto-configuration classes are designed to be outside the scope of the application's @ComponentScan.
77.1. Supporting @ConfigurationProperties
This particular @Bean factory defines the @ConfigurationProperties class to encapsulate the details of configuring Hello. It supplies a default greeting, making it optional for the User-defined configuration to do anything.
@ConfigurationProperties("hello")
@Data
@Validated
public class HelloProperties {
@NotNull
private String greeting = "HelloProperties default greeting says Hola!"; (1)
}
1 | Value used if user-configuration does not specify a property value |
77.2. Locating Auto Configuration Classes
Auto-configuration class(es) are registered with an entry within the META-INF/spring.factories file of the Auto-configuration class's JAR. This module is typically called an "auto-configuration" module.
$ jar tf target/hello-starter-*-SNAPSHOT-bootexec.jar | egrep -v '/$|maven|MANIFEST.MF'
META-INF/spring.factories (1)
META-INF/spring-configuration-metadata.json (2)
info/ejava/examples/app/hello/HelloAutoConfiguration.class
info/ejava/examples/app/hello/HelloProperties.class
1 | "auto-configuration" dependency JAR supplies META-INF/spring.factories |
2 | @ConfigurationProperties class metadata generated by maven plugin for use by IDEs |
It is a common best-practice to host Auto-configuration classes in a separate module from the beans they configure. The Hello interface and Hello implementation(s) comply with this convention and are housed in separate modules.
77.3. META-INF/spring.factories Metadata File
The Auto-configuration classes are registered using a property name equal to the fully qualified classname of the @EnableAutoConfiguration annotation and a value equal to the fully qualified classname(s) of the Auto-configuration class(es). Multiple classes can be specified, separated by commas, as I will show later.
# src/main/resources/META-INF/spring.factories
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
info.ejava.examples.app.hello.HelloAutoConfiguration (1)
1 | Auto-configuration class metadata registration |
77.4. Spring Boot 2.7 AutoConfiguration Changes
Spring Boot 2.7 has announced:
-
a new @AutoConfiguration annotation that is meant to take the place of using @Configuration on top-level classes
-
the deprecation of META-INF/spring.factories in favor of META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports
For backwards compatibility, entries in spring.factories will still be honored.
Spring Boot 2.7.0 M2 Release Notes -- Changes to Auto-configuration
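As a minimal sketch of the newer registration style, assuming the same HelloAutoConfiguration class from this module, the annotation replaces @Configuration on the class and the imports file lists one classname per line:
//@AutoConfiguration is meta-annotated with @Configuration(proxyBeanMethods=false)
@AutoConfiguration
@EnableConfigurationProperties(HelloProperties.class)
public class HelloAutoConfiguration {
...
}
# src/main/resources/META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports
info.ejava.examples.app.hello.HelloAutoConfiguration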
77.5. Example Auto-Configuration Module Source Tree
Our configuration and properties classes, along with the spring.factories file, get placed in a separate module source tree.
pom.xml
src
`-- main
|-- java
| `-- info
| `-- ejava
| `-- examples
| `-- app
| `-- hello
| |-- HelloAutoConfiguration.java
| `-- HelloProperties.java
`-- resources
`-- META-INF
`-- spring.factories
77.6. Auto-Configuration / Starter Roles/Relationships
Modules designed as starters can vary in design but carry out the following roles:
-
Auto-configuration classes that conditionally wire the application
-
An opinionated starter with dependencies that trigger the Auto-configuration rules
77.7. Example Starter Module pom.xml
The module is commonly termed a starter and will have dependencies on:
-
spring-boot-starter
-
the service interface
-
one or more service implementation(s) and their implementation dependencies
<groupId>info.ejava.examples.app</groupId>
<artifactId>hello-starter</artifactId>
<dependencies>
<dependency> (1)
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<!-- commonly declares dependency on interface module -->
<dependency> (2)
<groupId>${project.groupId}</groupId>
<artifactId>hello-service-api</artifactId>
<version>${project.version}</version>
</dependency> (2)
<!-- hello implementation dependency -->
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>hello-service-stdout</artifactId>
<version>${project.version}</version>
</dependency>
1 | dependency on spring-boot-starter defines classes pertinent to Auto-configuration |
2 | starter modules commonly define dependencies on interface and implementation modules |
77.8. Example Starter Implementation Dependencies
The rest of the dependencies have nothing specific to do with Auto-configuration or starter modules and are there to support the module implementation.
<dependency> (1)
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<scope>provided</scope>
</dependency>
<dependency> (1)
<groupId>javax.validation</groupId>
<artifactId>validation-api</artifactId>
</dependency>
<!-- creates a JSON metadata file describing @ConfigurationProperties -->
<dependency> (1)
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
</dependencies>
1 | these dependencies are part of optional implementation detail having nothing to do with Auto-configuration topic |
77.9. Application Starter Dependency
The application module declares dependency on the starter module containing or having a dependency on the Auto-configuration artifacts.
<!-- takes care of initializing Hello Service for us to inject -->
<dependency>
<groupId>${project.groupId}</groupId> (1)
<artifactId>hello-starter</artifactId>
<version>${project.version}</version> (1)
</dependency>
1 | For this example, the application and starter modules share the same groupId and version
and leverage a ${project} variable to simplify the expression.
That will likely not be the case with most starter module dependencies and will need to be spelled out. |
77.10. Starter Brings in Pertinent Dependencies
The starter dependency brings in the Hello Service interface, targeted implementation(s), and some implementation dependencies.
$ mvn dependency:tree
...
[INFO] +- info.ejava.examples.app:hello-starter:jar:6.0.1-SNAPSHOT:compile
[INFO] | +- info.ejava.examples.app:hello-service-api:jar:6.0.1-SNAPSHOT:compile
[INFO] | +- info.ejava.examples.app:hello-service-stdout:jar:6.0.1-SNAPSHOT:compile
[INFO] | +- org.projectlombok:lombok:jar:1.18.10:provided
[INFO] | \- org.springframework.boot:spring-boot-starter-validation:jar:2.7.0:compile
...
78. Configured Application
The example application contains a component that requests the greeter implementation to say hello to "World".
import lombok.RequiredArgsConstructor;
...
@Component
@RequiredArgsConstructor (1)
public class AppCommand implements CommandLineRunner {
private final Hello greeter;
public void run(String... args) throws Exception {
greeter.sayHello("World");
}
}
1 | lombok is being used to provide the constructor injection |
78.1. Review: Unconditional Auto-Configuration Class
This starter dependency brings in a @Bean factory to construct an implementation of Hello.
package info.ejava.examples.app.hello;
...
@Configuration(proxyBeanMethods = false)
@EnableConfigurationProperties(HelloProperties.class)
public class HelloAutoConfiguration {
@Bean
public Hello hello(HelloProperties helloProperties) { (1)
return new StdOutHello(helloProperties.getGreeting());
}
}
1 | Example Auto-configuration configured by HelloProperties |
78.2. Review: Starter Module Default
The starter dependency brings in an Auto-configuration class that instantiates a StdOutHello implementation configured by a HelloProperties class.
@ConfigurationProperties("hello")
@Data
@Validated
public class HelloProperties {
@NotNull
private String greeting = "HelloProperties default greeting says Hola!"; (1)
}
1 | hello.greeting default defined in @ConfigurationProperties class of starter/autoconfigure module |
78.3. Produced Default Starter Greeting
This produces the default greeting.
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar
...
HelloProperties default greeting says Hola! World
78.4. User-Application Supplies Property Details
Since the Auto-configuration class is using a properties class, we can define properties (aka "the details") in the main application for the dependency module to use.
#appconfig-autoconfig-example application.properties
#uncomment to use this greeting
hello.greeting: application.properties Says - Hey
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar
...
application.properties Says - Hey World (1)
1 | auto-configured implementation using user-defined property |
79. Auto-Configuration Conflict
79.1. Review: Conditional @Bean Factory
We saw how we could make a @Bean factory in the User-defined application module conditional (on the value of a property).
@SpringBootApplication
public class StarterConfiguredApp {
...
@Bean
@ConditionalOnProperty(prefix = "hello", name = "quiet", havingValue = "true")
public Hello quietHello() {
return new StdOutHello("(hello.quiet property condition set, Application @Bean says hi)");
}
}
79.2. Potential Conflict
We also saw how to define a @Bean factory in an Auto-configuration class brought in by the starter module. We now have a situation where the two can cause an ambiguity error that we need to account for.
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar --hello.quiet=true (1)
...
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 0 of constructor in info.ejava.examples.app.config.auto.AppCommand
required a single bean, but 2 were found:
- quietHello: defined by method 'quietHello' in
info.ejava.examples.app.config.auto.StarterConfiguredApp
- hello: defined by method 'hello' in class path resource
[info/ejava/examples/app/hello/HelloAutoConfiguration.class]
Action:
Consider marking one of the beans as @Primary, updating the consumer to accept multiple beans,
or using @Qualifier to identify the bean that should be consumed
1 | Supplying the hello.quiet=true property value results in two @Bean factories to choose from |
79.3. @ConditionalOnMissingBean
One way to solve the ambiguity is by using the @ConditionalOnMissingBean annotation, which defines a condition based on the absence of a bean.
Most conditional annotations can be used in both the application and autoconfigure modules. However, @ConditionalOnMissingBean and its sibling @ConditionalOnBean are special and meant to be used with Auto-configuration classes in the autoconfigure modules. Since the Auto-configuration classes are processed after the User-defined classes, there is a clear point to determine whether a User-defined @Bean factory does or does not exist. Any other use of these two annotations requires careful ordering and is not recommended.
...
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
@Configuration(proxyBeanMethods = false)
@EnableConfigurationProperties(HelloProperties.class)
public class HelloAutoConfiguration {
@Bean
@ConditionalOnMissingBean (1)
public Hello hello(HelloProperties helloProperties) {
return new StdOutHello(helloProperties.getGreeting());
}
}
1 | @ConditionalOnMissingBean causes the Auto-configured @Bean method to be inactive when a Hello bean already exists |
79.4. Bean Conditional Example Output
With @ConditionalOnMissingBean defined in the Auto-configuration class and the property condition satisfied, we get the bean injected from the User-defined @Bean factory.
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar --hello.quiet=true
...
(hello.quiet property condition set, Application @Bean says hi) World
With the property condition not satisfied, we get the bean injected from the Auto-configuration @Bean factory. Wahoo!
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar
...
application.properties Says - Hey World
80. Resource Conditional and Ordering
We can also define a condition based on the presence of a resource on the filesystem or classpath using @ConditionalOnResource. The following example satisfies the condition if the file hello.properties exists in the current directory. We are also going to order our Auto-configured classes with the help of the @AutoConfigureBefore annotation. There is a sibling @AutoConfigureAfter annotation as well as an @AutoConfigureOrder annotation we could have used.
...
import org.springframework.boot.autoconfigure.AutoConfigureBefore;
import org.springframework.boot.autoconfigure.condition.ConditionalOnResource;
@ConditionalOnResource(resources = "file:./hello.properties") (1)
@AutoConfigureBefore(HelloAutoConfiguration.class) (2)
public class HelloResourceAutoConfiguration {
@Bean
public Hello resourceHello() {
return new StdOutHello("hello.properties exists says hello");
}
}
1 | Auto-configured class satisfied only when file hello.properties present |
2 | This Auto-configuration class is processed prior to HelloAutoConfiguration |
80.1. Registering Second Auto-Configuration Class
This second Auto-configuration class is being provided in the same hello-starter module, so we need to update the Auto-configuration property within the META-INF/spring.factories file. We do this by listing the full classnames of each Auto-configuration class, separated by comma(s).
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
info.ejava.examples.app.hello.HelloAutoConfiguration, \ (1)
info.ejava.examples.app.hello.HelloResourceAutoConfiguration
1 | comma separated |
80.2. Resource Conditional Example Output
The following execution, with hello.properties present in the current directory, satisfies the condition and causes the @Bean factory from HelloAutoConfiguration to be skipped because the bean already exists.
$ echo hello.greeting: hello.properties exists says hello World > hello.properties
$ cat hello.properties
hello.greeting: hello.properties exists says hello World
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar
...
hello.properties exists says hello World
When the property file is not present, the @Bean factory from HelloAutoConfiguration is used since neither the property-based nor the resource-based condition is satisfied.
$ rm hello.properties
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar
...
application.properties Says - Hey World
82. Class Conditions
There are many conditions we can add to our @Configuration classes or methods. However, there is an important difference between the two:
-
class conditional annotations prevent the entire class from loading when not satisfied
-
@Bean factory conditional annotations allow the class to load but prevent the method from being called when not satisfied
This works for missing classes too! Spring Boot parses the conditional class using ASM to detect and then evaluate conditions prior to allowing the class to be loaded into the JVM. Otherwise we would get a ClassNotFoundException for the import of a class we are trying to base our condition on.
82.1. Class Conditional Example
In the following example, I am adding the @ConditionalOnClass annotation to prevent the class from being loaded if the implementation class does not exist on the classpath.
...
import info.ejava.examples.app.hello.stdout.StdOutHello; (2)
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
@Configuration(proxyBeanMethods = false)
@ConditionalOnClass(StdOutHello.class) (2)
@EnableConfigurationProperties(HelloProperties.class)
public class HelloAutoConfiguration {
@Bean
@ConditionalOnMissingBean
public Hello hello(HelloProperties helloProperties) {
return new StdOutHello(helloProperties.getGreeting()); (1)
}
}
1 | StdOutHello is the implementation instantiated by the @Bean factory method |
2 | HelloAutoConfiguration.class will not get loaded if StdOutHello.class does not exist |
The @ConditionalOnClass annotation accepts either a class or a string expression of the fully qualified classname. The sibling @ConditionalOnMissingClass accepts only the string form of the classname.
The Spring Boot Autoconfigure module contains many examples of real Auto-configuration classes.
83. Excluding Auto Configurations
We can turn off certain Auto-configured classes using the:
-
exclude attribute of the @EnableAutoConfiguration annotation
-
exclude attribute of the @SpringBootApplication annotation, which wraps the @EnableAutoConfiguration annotation
@SpringBootApplication(exclude = {})
// ==> wraps @EnableAutoConfiguration(exclude={})
public class StarterConfiguredApp {
...
}
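For example, a hypothetical exclusion of the resource-based Auto-configuration class from this module would list it in the exclude attribute; any listed Auto-configuration class is simply skipped during processing:
@SpringBootApplication(exclude = HelloResourceAutoConfiguration.class)
public class StarterConfiguredApp {
...
}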
84. Debugging Auto Configurations
With all these conditional User-defined and Auto-configurations going on, it is easy to get lost or make a mistake. There are two primary tools that can be used to expose the details of the conditional configuration decisions.
84.1. Conditions Evaluation Report
It is easy to get a simple textual report of positive and negative condition evaluation matches by adding a debug property to the configuration. This can be done by adding --debug or -Ddebug to the command line.
The following output shows only the positive and negative matching conditions relevant to our example. There is plenty more in the full output.
84.2. Conditions Evaluation Report Example
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar --debug | less
...
============================
CONDITIONS EVALUATION REPORT
============================
Positive matches: (1)
-----------------
HelloAutoConfiguration matched:
- @ConditionalOnClass found required class 'info.ejava.examples.app.hello.stdout.StdOutHello' (OnClassCondition)
HelloAutoConfiguration#hello matched:
- @ConditionalOnBean (types: info.ejava.examples.app.hello.Hello; SearchStrategy: all) did not find any beans (OnBeanCondition)
Negative matches: (2)
-----------------
HelloResourceAutoConfiguration:
Did not match:
- @ConditionalOnResource did not find resource 'file:./hello.properties' (OnResourceCondition)
StarterConfiguredApp#quietHello:
Did not match:
- @ConditionalOnProperty (hello.quiet=true) did not find property 'quiet' (OnPropertyCondition)
1 | Positive matches show which conditionals are activated and why |
2 | Negative matches show which conditionals are not activated and why |
84.3. Condition Evaluation Report Results
The report shows us that:
-
the HelloAutoConfiguration class was enabled because the StdOutHello class was present
-
the hello @Bean factory method of the HelloAutoConfiguration class was enabled because no other beans were located
-
the entire HelloResourceAutoConfiguration class was not loaded because the file hello.properties was not present
-
the quietHello @Bean factory method of the application class was not activated because the hello.quiet property was not found
84.4. Actuator Conditions
We can also get a look at the conditionals while the application is running for Web applications using the Spring Boot Actuator. However, doing so requires that we transition our application from a command to a Web application. Luckily, this can be done by simply changing our starter in the pom.xml file.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<!-- <artifactId>spring-boot-starter</artifactId>-->
</dependency>
We also need to add a dependency on the spring-boot-starter-actuator module.
<!-- added to inspect env -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
84.5. Activating Actuator Conditions
The Actuator, by default, will not expose any information without being configured to do so. We can show a JSON version of the Conditions Evaluation Report by setting the management.endpoints.web.exposure.include property to the value conditions. I will do that on the command line here. Normally it would be in a profile-specific properties file appropriate for exposing this information.
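A sketch of that profile-specific alternative, assuming a hypothetical application-actuator.properties file selected with spring.profiles.active=actuator:
# application-actuator.properties (hypothetical profile-specific file)
management.endpoints.web.exposure.include=conditions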
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar \
--management.endpoints.web.exposure.include=conditions
The Conditions Evaluation Report is available at the following URL: http://localhost:8080/actuator/conditions.
{
"contexts": {
"application": {
"positiveMatches": {
"HelloAutoConfiguration": [{
"condition": "OnClassCondition",
"message": "@ConditionalOnClass found required class 'info.ejava.examples.app.hello.stdout.StdOutHello'"
}],
"HelloAutoConfiguration#hello": [{
"condition": "OnBeanCondition",
"message": "@ConditionalOnBean (types: info.ejava.examples.app.hello.Hello; SearchStrategy: all) did not find any beans"
}],
...
,
"negativeMatches": {
"StarterConfiguredApp#quietHello": {
"notMatched": [{
"condition": "OnPropertyCondition",
"message": "@ConditionalOnProperty (hello.quiet=true) did not find property 'quiet'"
}],
"matched": []
},
"HelloResourceAutoConfiguration": {
"notMatched": [{
"condition": "OnResourceCondition",
"message": "@ConditionalOnResource did not find resource 'file:./hello.properties'"
}],
"matched": []
},
...
84.6. Actuator Environment
It can also be helpful to inspect the environment to determine the value of properties and which source of properties is being used. To see that information, we add env to the exposure.include property.
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar \
--management.endpoints.web.exposure.include=conditions,env
84.7. Actuator Links
This adds a full /env endpoint and a property-specific /env/{property} endpoint to see information for a specific property name. The available Actuator links are listed at http://localhost:8080/actuator.
{
_links: {
self: {
href: "http://localhost:8080/actuator",
templated: false
},
conditions: {
href: "http://localhost:8080/actuator/conditions",
templated: false
},
env: {
href: "http://localhost:8080/actuator/env",
templated: false
},
env-toMatch: {
href: "http://localhost:8080/actuator/env/{toMatch}",
templated: true
}
}
}
84.8. Actuator Environment Report
The Actuator Environment Report is available at http://localhost:8080/actuator/env.
{
activeProfiles: [ ],
propertySources: [{
name: "server.ports",
properties: {
local.server.port: {
value: 8080
}
}
},
{
name: "commandLineArgs",
properties: {
management.endpoints.web.exposure.include: {
value: "conditions,env"
}
}
},
...
84.9. Actuator Specific Property Source
The source of a specific property and its defined value is available below the /actuator/env URI, such that the hello.greeting property is located at http://localhost:8080/actuator/env/hello.greeting.
{
property: {
source: "applicationConfig: [classpath:/application.properties]",
value: "application.properties Says - Hey"
},
...
84.10. More Actuator
We can explore some of the other Actuator endpoints by changing the include property to * and revisiting the main actuator endpoint. Actuator Documentation is available on the web.
$ java -jar target/appconfig-autoconfig-*-SNAPSHOT-bootexec.jar \
--management.endpoints.web.exposure.include="*" (1)
1 | double quotes ("") being used to escape * special character on command line |
85. Summary
In this module we:
-
Defined conditions for @Configuration classes and @Bean factory methods that are evaluated at runtime startup
-
Placed User-defined conditions, which are evaluated first, within the application module
-
Placed Auto-configuration classes in a separate starter module to automatically bootstrap applications with specific capabilities
-
Added conflict resolution and ordering to conditions to avoid ambiguous matches
-
Discovered how class conditions can prevent an entire @Configuration class from being loaded and disrupting the application when an optional class is missing
-
Learned how to debug conditions and visualize the runtime environment through use of the debug property or by using the Actuator for web applications
Logging
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
86. Introduction
86.1. Why log?
Logging has many uses within an application — spanning:
-
auditing actions
-
reporting errors
-
providing debug information to assist in locating a problem
With much of our code located in libraries — logging is not just for our application code. We will want to know audit, error, and debug information in our library calls as well:
-
did that timer fire?
-
which calls failed?
-
what HTTP headers were input or returned from a REST call?
86.2. Why use a Logger over System.out?
Use of Loggers allows statements to exist within the code that can:
-
be disabled
-
log output uninhibited
-
log output with additional properties (e.g., timestamp, thread, caller, etc.)
Logs commonly are written to the console and/or files by default — but that is not always the case. Logs can also be exported into centralized servers or database(s) so they can form an integrated picture of a distributed system and provide search and alarm capabilities.
However simple or robust your end logs become, logging starts with the code and is a very important thing to include from the beginning (even if we waited a few modules to cover it).
86.3. Goals
The student will learn:
-
to understand the value in using logging over simple System.out.println calls
-
to understand the interface and implementation separation of a modern logging framework
-
the relationship between the different logger interfaces and implementations
-
to use log levels and verbosity to properly monitor the application under different circumstances
-
to express valuable context information in logged messages
-
to manage logging verbosity
-
to configure the output of logs to provide useful information
86.4. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
obtain access to an SLF4J Logger
-
issue log events at different severity levels
-
filter log events based on source and severity thresholds
-
efficiently bypass log statements that do not meet criteria
-
format log events for regular and exception parameters
-
customize log patterns
-
customize appenders
-
add contextual information to log events using Mapped Diagnostic Context
-
trigger additional logging events using Markers
-
use Spring Profiles to conditionally configure logging
87. Starting References
There are many resources on the Internet that cover logging, the individual logging implementations, and the Spring Boot opinionated support for logging. You may want to keep a browser window open to one or more of the following starting links while we cover this material. You will not need to go thru all of them, but know there is a starting point to where detailed examples and explanations can be found if not covered in this lesson.
-
Spring Boot Logging Feature provides documentation from a top-down perspective of how it supplies a common logging abstraction over potentially different logging implementations.
-
SLF4J Web Site provides documentation, articles, and presentations on SLF4J — the chosen logging interface for Spring Boot and much of the Java community.
-
Logback Web Site provides a wealth of documentation, articles, and presentations on Logback — the default logging implementation for Spring Boot.
-
Log4J2 Web Site provides core documentation on Log4J2 — a top-tier Spring Boot alternative logging implementation.
-
Java Util Logging (JUL) Documentation Web Site provides an overview of JUL — a lesser supported Spring Boot alternative implementation for logging.
88. Logging Dependencies
Most of what we need to perform logging is supplied through our dependency on spring-boot-starter and its dependency on spring-boot-starter-logging. The only time we need to supply additional dependencies is when we want to change the default logging implementation or make use of optional, specialized extensions provided by that logging implementation.
Take a look at the transitive dependencies brought in by a straightforward dependency on spring-boot-starter.
$ mvn dependency:tree
...
[INFO] info.ejava.examples.app:appconfig-logging-example:jar:6.0.1-SNAPSHOT
[INFO] \- org.springframework.boot:spring-boot-starter:jar:2.7.0:compile
[INFO] +- org.springframework.boot:spring-boot:jar:2.7.0:compile
...
[INFO] +- org.springframework.boot:spring-boot-autoconfigure:jar:2.7.0:compile
[INFO] +- org.springframework.boot:spring-boot-starter-logging:jar:2.7.0:compile (1)
[INFO] | +- ch.qos.logback:logback-classic:jar:1.2.11:compile
[INFO] | | +- ch.qos.logback:logback-core:jar:1.2.11:compile
[INFO] | | \- org.slf4j:slf4j-api:jar:1.7.36:compile
[INFO] | +- org.apache.logging.log4j:log4j-to-slf4j:jar:2.17.2:compile
[INFO] | | \- org.apache.logging.log4j:log4j-api:jar:2.17.2:compile
[INFO] | \- org.slf4j:jul-to-slf4j:jar:1.7.36:compile
...
1 | dependency on spring-boot-starter brings in spring-boot-starter-logging |
88.1. Logging Libraries
Notice that:
-
the spring-core dependency brings in its own repackaging and optimizations of Commons Logging within spring-jcl
  - spring-jcl provides a thin wrapper that looks for logging APIs and self-bootstraps itself to use them, with a preference for the SLF4J interface, then Log4J2 directly, and then JUL as a fallback
  - spring-jcl looks to have replaced the need for jcl-over-slf4j
-
spring-boot-starter-logging provides dependencies for the SLF4J API, adapters, and optional implementations
  - implementations: these will perform the work behind the SLF4J interface calls
    - Logback (the default)
  - adapters: these will bridge the SLF4J calls to the implementations
    - Logback implements SLF4J natively - no adapter necessary
    - log4j-to-slf4j bridges Log4j to SLF4J
    - jul-to-slf4j bridges Java Util Logging (JUL) to SLF4J
If we use Spring Boot with spring-boot-starter right out of the box, we will be using the SLF4J API and Logback implementation configured to work correctly for most cases.
88.2. Spring and Spring Boot Internal Logging
Spring and Spring Boot use an internal version of the Apache Commons Logging API (Git Repo), previously known as the Jakarta Commons Logging or JCL (Ref: Wikipedia, Apache Commons Logging), that is rehosted within the spring-jcl module to serve as a bridge to different logging implementations (Ref: Spring Boot Logging).
89. Getting Started
OK. We get the libraries we need to implement logging right out of the box with the basic spring-boot-starter. How do we get started generating log messages?
Let's begin with a comparison with System.out so we can see how they are similar and different.
89.1. System.out
System.out was built into Java from day 1:
-
no extra imports are required
-
no extra libraries are required
System.out writes to wherever System.out references. The default is stdout. You have seen many earlier examples just like the following.
@Component
@Profile("system-out") (1)
public class SystemOutCommand implements CommandLineRunner {
public void run(String... args) throws Exception {
System.out.println("System.out message");
}
}
1 | restricting component to profile to allow us to turn off unwanted output after this demo |
89.2. System.out Output
The example SystemOutCommand component above outputs the following statement when called with the system-out profile active (using the spring.profiles.active property).
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=system-out (1)
System.out message (2)
1 | activating profile that turns on our component and turns off all logging |
2 | System.out is not impacted by logging configuration and printed to stdout |
89.3. Turning Off Spring Boot Logging
Where did all the built-in logging (e.g., Spring Boot banner, startup messages, etc.) go in the last example?
The system-out profile specified a logging.level.root property that effectively turned off all logging.
spring.main.banner-mode=off (1)
logging.level.root=OFF (2)
1 | turns off printing of verbose Spring Boot startup banner |
2 | turns off all logging (inheriting from the root configuration) |
Technically, the logging was only turned off for loggers inheriting the root configuration, but we will ignore that detail for right now and just say "all logging".
89.4. Getting a Logger
Logging frameworks make use of the fundamental design idiom — separate interface from implementation. We want our calling code to have simple access to a simple interface to express information to be logged and the severity of that information. We want the implementation to have limitless capability to produce and manage the logs, but want to only pay for what we likely will use. Logging frameworks allow that to occur and provide primary access thru a logging interface and a means to create an instance of that logger. The following diagram shows the basic stereotype roles played by the factory and logger.
(diagram: Factory creates Logger)
Let's take a look at several ways to obtain a Logger using different APIs and techniques.
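The first way uses the JUL API built into the JDK. The original example class is not shown here; the following is a minimal sketch consistent with the output below, with the class name, profile, and message assumed from that output:
import java.util.logging.Logger;
...
@Component
@Profile("factory") //assumed profile, based on the --spring.profiles.active=factory runs below
public class JULLogger implements CommandLineRunner {
    //JUL factory method locates/creates a logger with the given name
    private static final Logger logger = Logger.getLogger(JULLogger.class.getName());
    public void run(String... args) throws Exception {
        logger.info("Java Util logger message");
    }
}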
89.6. JUL Example Output
The following output shows that even code using the JUL interface will be integrated into our standard Spring Boot logs.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=factory
...
20:40:54,136 INFO info.ejava.examples.app.config.logging.factory.JULLogger - Java Util logger message
...
However, JUL is not widely used as an API or implementation. I won't detail it here, but it has been reported to be much slower and missing robust features of modern alternatives. That does not mean JUL cannot be used as an API for your code (and the libraries your code relies on) and an implementation for your packaged application. It just means using it as an implementation is uncommon and won't be the default in Spring Boot and other frameworks.
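Obtaining a logger through the SLF4J API looks similar. The declared-logger example is not shown here; this is a minimal sketch consistent with the DeclaredLogger output below, with the class name, profile, and message assumed from that output:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
...
@Component
@Profile("factory") //assumed profile, based on the --spring.profiles.active=factory runs below
public class DeclaredLogger implements CommandLineRunner {
    //SLF4J factory returns a Logger named after the class
    private static final Logger log = LoggerFactory.getLogger(DeclaredLogger.class);
    public void run(String... args) throws Exception {
        log.info("declared SLF4J logger message");
    }
}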
89.8. SLF4J Example Output
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=factory (1)
...
20:40:55,156 INFO info.ejava.examples.app.config.logging.factory.DeclaredLogger - declared SLF4J logger message
...
1 | supplying custom profile to filter output to include only the factory examples |
89.9. Lombok SLF4J Declaration Example
Naming loggers after the fully qualified classname is so common that the Lombok library takes advantage of that fact to automate adding the imports and declaring the Logger during Java compilation.
package info.ejava.examples.app.config.logging.factory;
import lombok.extern.slf4j.Slf4j;
...
@Component
@Slf4j (1)
public class LombokDeclaredLogger implements CommandLineRunner {
(2)
@Override
public void run(String... args) throws Exception {
log.info("lombok declared SLF4J logger"); (3)
}
}
1 | @Slf4j annotation automates the import statements and Logger declaration |
2 | Lombok will declare a static log property using LoggerFactory during compilation |
3 | normal log statement provided by calling class — no different from earlier example |
89.10. Lombok Example Output
Since Lombok primarily automates code generation at compile time, the produced output is identical to the previous manual declaration example.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=factory
...
20:40:55,155 INFO info.ejava.examples.app.config.logging.factory.LombokDeclaredLogger - lombok declared SLF4J logger message
...
89.11. Lombok Dependency
Of course, we need to add the following dependency to the project pom.xml to enable Lombok annotation processing.
<!-- used to declare logger -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<scope>provided</scope>
</dependency>
91. Discarded Message Expense
The designers of logging frameworks are well aware that excess logging, even statements that are disabled, can increase the execution time of a library call or overall application. We have already seen how severity level thresholds can turn off output, and that gives us substantial savings within the logging framework itself. However, we must be aware that building a message to be logged can carry its own expense, and we must know the tools available to mitigate the problem.
Assume we have a class whose String representation is relatively expensive to obtain.
class ExpensiveToLog {
public String toString() { (1)
try { Thread.sleep(1000); } catch (Exception ignored) {}
return "hello";
}
}
1 | calling toString() on instances of this class will incur noticeable delay |
91.1. Blind String Concatenation
Now let's say we create a message to log through straight, eager String concatenation. What is wrong here?
ExpensiveToLog obj=new ExpensiveToLog();
//...
log.debug("debug for expensiveToLog: " + obj + "!");
-
The log message will get formed by eagerly concatenating several Strings together
-
One of those Strings is produced by a relatively expensive toString() method
-
Problem: The work of eagerly forming the String is wasted if DEBUG is not enabled
91.2. Verbosity Check
Assuming the information from the toString() call is valuable and needed when we have DEBUG enabled, a verbosity check is one common solution we can use to determine if the end result is worth the work. There are two very similar ways we can do this.
91.2.1. Dynamic Verbosity Check
The first way is to dynamically check the current threshold level of the logger within the code and only execute if the requested severity level is enabled. We are still going to build the relatively expensive String when DEBUG is enabled, but we are going to save all that processing time when it is not enabled. This overall approach of using a code block works best when creating the message requires multiple lines of code. This specific technique of dynamically checking is suitable when there are very few checks within a class.
if (log.isDebugEnabled()) { (1)
log.debug("debug for expensiveToLog: " + obj +"!");
}
1 | code block with expensive toString() call is bypassed when DEBUG disabled |
91.2.2. Static Final Verbosity Check
A variant of the first approach is to define a static final boolean variable at the start of the class, equal to the result of the enabled test. This variant allows the JVM to know that the value of the if predicate will never change, allowing the code block and further checks to be eliminated when disabled. This alternative is better when there are multiple blocks of code that you want to make conditional on the threshold level of the logger. This solution assumes the logger threshold will never be changed or that the JVM will be restarted to pick up a changed value. I have seen this technique commonly used in libraries, where many calls are anticipated and method throughput performance is commonly judged.
private static final boolean DEBUG_ENABLED = log.isDebugEnabled(); (1)
...
if (DEBUG_ENABLED) { (2)
log.debug("debug for expensiveToLog: " + obj + "!");
}
...
1 | logger’s verbosity level tested when class loaded and stored in static final variable |
2 | code block with expensive toString() call is bypassed when disabled |
91.3. SLF4J Parameterized Logging
The SLF4J API offers another solution that removes the need for the if clause, thus cleaning your code of those extra conditional blocks. The SLF4J Logger interface has a format and args variant for each verbosity level call that permits the threshold to be consulted prior to converting any of the parameters to a String.
The format specification uses a set of curly braces ("{}") to express an insertion point for an ordered set of arguments. There are no format options. It is strictly a way to lazily call toString() on each argument and insert the result.
log.debug("debug for expensiveToLog: {}!", obj); (1) (2)
1 | {} is a placeholder for the result of obj.toString() if called |
2 | obj.toString() only called and overall message concatenated if logger threshold is set to DEBUG or lower |
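Multiple arguments simply line up with the placeholders in order; a trivial sketch with hypothetical names:
//count.toString() and total.toString() are deferred until the threshold check passes
log.debug("found {} of {} results", count, total);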
91.5. Simple Performance Results: Enabled
The second set of results is from the logging threshold set to DEBUG. You can see that causes the relatively expensive toString() to be called for each of the four techniques shown, with somewhat equal results. I would not put too much weight on a few milliseconds difference between the calls here, except to know that no technique introduces a noticeable processing delay over the others when the logging threshold has been met.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=expense \
--logging.level.info.ejava.examples.app.config.logging.expense=DEBUG
11:44:43.560 INFO info.ejava.examples.app.config.logging.expense.DisabledOptimization - warmup logger
11:44:43.561 DEBUG info.ejava.examples.app.config.logging.expense.DisabledOptimization - warmup logger
11:44:44.572 DEBUG info.ejava.examples.app.config.logging.expense.DisabledOptimization - debug for expensiveToLog: hello!
11:44:45.575 DEBUG info.ejava.examples.app.config.logging.expense.DisabledOptimization - debug for expensiveToLog: hello!
11:44:46.579 DEBUG info.ejava.examples.app.config.logging.expense.DisabledOptimization - debug for expensiveToLog: hello!
11:44:46.579 DEBUG info.ejava.examples.app.config.logging.expense.DisabledOptimization - debug for expensiveToLog: hello!
11:44:47.582 INFO info.ejava.examples.app.config.logging.expense.DisabledOptimization - \
concat: 1010, ifDebug=1003, DEBUG_ENABLED=1004, param=1003 (1)
1 | all four methods paying the cost of the relatively expensive obj.toString() call |
92. Exception Logging
SLF4J interface and parameterized logging go one step further to also support Exceptions. If you pass an Exception object as the last parameter in the list, it is treated specially and will not have its toString() called with the rest of the parameters. Depending on the configuration in place, the stack trace for the Exception is logged instead. The following snippet shows an example of an Exception being thrown, caught, and then logged.
public void run(String... args) throws Exception {
try {
log.info("calling iThrowException");
iThrowException();
} catch (Exception ex) {
log.warn("caught exception", ex); (1)
}
}
private void iThrowException() throws Exception {
throw new Exception("example exception");
}
1 | Exception passed to logger with message |
92.1. Exception Example Output
When we run the example, note that the message is printed in its normal location and a stack trace is
added for the supplied Exception
parameter.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=exceptions
13:41:17.119 INFO info.ejava.examples.app.config.logging.exceptions.ExceptionExample - calling iThrowException
13:41:17.121 WARN info.ejava.examples.app.config.logging.exceptions.ExceptionExample - caught exception (1)
java.lang.Exception: example exception (2)
at info.ejava.examples.app.config.logging.exceptions.ExceptionExample.iThrowException(ExceptionExample.java:23)
at info.ejava.examples.app.config.logging.exceptions.ExceptionExample.run(ExceptionExample.java:15)
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:784)
...
at org.springframework.boot.loader.Launcher.launch(Launcher.java:51)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:52)
1 | normal message logged |
2 | stack trace for last Exception parameter logged |
92.2. Exception Logging and Formatting
Note that you can continue to use parameterized logging with Exceptions. The message passed in above was actually a format with no parameters. The snippet below shows a format with two parameters and an Exception.
log.warn("caught exception {} {}", "p1","p2", ex);
The first two parameters are used in the formatting of the core message. The last Exception parameter is logged with its stack trace as before.
13:41:17.119 INFO info.ejava.examples.app.config.logging.exceptions.ExceptionExample - calling iThrowException
13:41:17.122 WARN info.ejava.examples.app.config.logging.exceptions.ExceptionExample - caught exception p1 p2 (1)
java.lang.Exception: example exception (2)
at info.ejava.examples.app.config.logging.exceptions.ExceptionExample.iThrowException(ExceptionExample.java:23)
at info.ejava.examples.app.config.logging.exceptions.ExceptionExample.run(ExceptionExample.java:15)
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:784)
...
at org.springframework.boot.loader.Launcher.launch(Launcher.java:51)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:52)
1 | two early parameters ("p1" and "p2") were used to complete the message template |
2 | Exception passed as the last parameter had its stack trace logged |
93. Logging Pattern
Each of the previous examples showed logging output using a particular pattern. The pattern
was expressed using a logging.pattern.console
property. The
Logback Conversion Documentation
provides details about how the logging pattern is defined.
logging.pattern.console=%date{HH:mm:ss.SSS} %-5level %logger - %msg%n
The pattern consisted of:
- %date (or %d) - time of day down to milliseconds
- %level (or %p, %le) - severity level left justified and padded to 5 characters
- %logger (or %c, %lo) - full name of logger
- %msg (or %m, %message) - full logged message
- %n - operating system-specific new line
If you remember, that produced the following output.
java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=levels
06:00:38.891 INFO info.ejava.examples.app.config.logging.levels.LoggerLevels - info message
06:00:38.892 WARN info.ejava.examples.app.config.logging.levels.LoggerLevels - warn message
06:00:38.892 ERROR info.ejava.examples.app.config.logging.levels.LoggerLevels - error message
93.1. Default Console Pattern
Spring Boot comes out of the box with a slightly more verbose default pattern expressed with the CONSOLE_LOG_PATTERN property. The following snippet depicts the information found within the Logback property definition — with some new lines added in to help read it.
CONSOLE_LOG_PATTERN (from GitHub):
%clr(%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}}){faint}
%clr(${LOG_LEVEL_PATTERN:-%5p})
%clr(${PID:- }){magenta}
%clr(---){faint}
%clr([%15.15t]){faint}
%clr(%-40.40logger{39}){cyan}
%clr(:){faint}
%m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}
You should see some familiar conversion words from my earlier pattern example. However, there are some additional conversion words used as well. Again, keep the Logback Conversion Documentation close by to lookup any additional details.
-
%d - timestamp defaulting to full format
-
%p - severity level right justified and padded to 5 characters
-
$PID - system property containing the process ID
-
%t (or %thread) - thread name right justified and padded to 15 characters and chopped at 15 characters
-
%logger - logger name optimized to fit within 39 characters, left justified and padded to 40 characters, chopped at 40 characters
-
%m - full logged message
-
%n - operating system-specific new line
93.2. Default Console Pattern Output
We will take a look at conditional variable substitution in a moment. This next example reverts to the
default CONSOLE_LOG_PATTERN
.
java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--logging.level.root=OFF \
--logging.level.info.ejava.examples.app.config.logging.levels.LoggerLevels=TRACE
2020-03-27 06:31:21.475 TRACE 31203 --- [ main] i.e.e.a.c.logging.levels.LoggerLevels : trace message
2020-03-27 06:31:21.477 DEBUG 31203 --- [ main] i.e.e.a.c.logging.levels.LoggerLevels : debug message
2020-03-27 06:31:21.477 INFO 31203 --- [ main] i.e.e.a.c.logging.levels.LoggerLevels : info message
2020-03-27 06:31:21.477 WARN 31203 --- [ main] i.e.e.a.c.logging.levels.LoggerLevels : warn message
2020-03-27 06:31:21.477 ERROR 31203 --- [ main] i.e.e.a.c.logging.levels.LoggerLevels : error message
Spring Boot defines color coding for the console that is not visible in the text of this document. The color for severity level is triggered by the level: red for ERROR, yellow for WARN, and green for the other three levels.
93.3. Variable Substitution
Logging configurations within Spring Boot make use of variable substitution. The value of LOG_DATEFORMAT_PATTERN will be applied wherever the expression ${LOG_DATEFORMAT_PATTERN} appears. The "${}" characters are part of the variable expression and will not be part of the result.
93.4. Conditional Variable Substitution
Variables can be defined with default values in the event they are not defined. In the following
expression ${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}
:
-
the value of LOG_DATEFORMAT_PATTERN will be used if defined
-
the value of "yyyy-MM-dd HH:mm:ss.SSS" will be used if not defined
The "${}" and embedded ":-" characters following the variable name are part of the expression
when appearing within an XML configuration file and will not be part of the result. The dash (- )
character should be removed if using within a property definition.
|
93.5. Date Format Pattern
As we saw from a peek at the Spring Boot CONSOLE_LOG_PATTERN
default definition, we can
change the format of the timestamp using the LOG_DATEFORMAT_PATTERN
system property.
That system property can flexibly be set using the logging.pattern.dateformat
property. See the
Spring Boot Documentation for information on this and other properties.
The following example shows setting that property using a command line argument.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--logging.level.root=OFF \
--logging.level.info.ejava.examples.app.config.logging.levels.LoggerLevels=INFO \
--logging.pattern.dateformat="HH:mm:ss.SSS" (1)
08:20:42.939 INFO 39013 --- [ main] i.e.e.a.c.logging.levels.LoggerLevels : info message
08:20:42.942 WARN 39013 --- [ main] i.e.e.a.c.logging.levels.LoggerLevels : warn message
08:20:42.942 ERROR 39013 --- [ main] i.e.e.a.c.logging.levels.LoggerLevels : error message
1 | setting LOG_DATEFORMAT_PATTERN using logging.pattern.dateformat property |
93.6. Log Level Pattern
We also saw from the default definition of CONSOLE_LOG_PATTERN that the severity level of the output can be changed using the LOG_LEVEL_PATTERN system property. That system property can be flexibly set with the logging.pattern.level property. The following example shows setting the format to a single character, left justified. Therefore, we can map INFO ⇒ I, WARN ⇒ W, and ERROR ⇒ E.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--logging.level.root=OFF \
--logging.level.info.ejava.examples.app.config.logging.levels.LoggerLevels=INFO \
--logging.pattern.dateformat="HH:mm:ss.SSS" \
--logging.pattern.level="%.-1p" (1)
(2)
08:59:17.376 I 44756 --- [ main] i.e.e.a.c.logging.levels.LoggerLevels : info message
08:59:17.379 W 44756 --- [ main] i.e.e.a.c.logging.levels.LoggerLevels : warn message
08:59:17.379 E 44756 --- [ main] i.e.e.a.c.logging.levels.LoggerLevels : error message
1 | logging.pattern.level expressed to be 1 character, left justified |
2 | single character produced in console log output |
93.7. Conversion Pattern Specifiers
Spring Boot Features Web Page documents some formatting rules. However, more details on the parts within the conversion pattern are located on the Logback Pattern Layout Web Page. The overall end-to-end pattern definition I have shown you is called a "Conversion Pattern". Conversion Patterns are made up of:
- Literal Text (e.g., ---, whitespace, :) — hard-coded strings providing decoration and spacing for conversion specifiers
- Conversion Specifiers (e.g., %-40.40logger{39}) — an expression that will contribute a formatted property of the current logging context
  - starts with %
  - followed by format modifiers (e.g., -40.40) — addresses min/max spacing and right/left justification
    - optionally provide minimum number of spaces
      - use a negative number (-#) to make it left justified and a positive number (#) to make it right justified
    - optionally provide maximum number of spaces using a decimal place and number (.#); extra characters will be cut off
      - use a negative number (.-#) to start from the left and a positive number (.#) to start from the right
  - followed by a conversion word (e.g., logger, msg) — keyword name for the property
  - optional parameters (e.g., {39}) — see individual conversion words for details on each
93.8. Format Modifier Impact Example
The following example demonstrates how the different format modifier expressions can impact the level
property.
logging.pattern.level | output | comment
---|---|---
[%level] | [INFO] [WARN] [ERROR] | value takes whatever space necessary
[%6level] | [  INFO] [  WARN] [ ERROR] | value takes at least 6 characters, right justified
[%-6level] | [INFO  ] [WARN  ] [ERROR ] | value takes at least 6 characters, left justified
[%.-2level] | [IN] [WA] [ER] | value takes no more than 2 characters, starting from the left
[%.2level] | [FO] [RN] [OR] | value takes no more than 2 characters, starting from the right
93.10. Expensive Conversion Words
I added two new helpful properties that could be considered controversial because they require extra overhead to obtain that information from the JVM. The technique has commonly involved throwing and catching an exception internally to determine the calling location from the self-generated stack trace:
-
%method (or %M) - name of method calling logger
-
%line (or %L) - line number of the file where logger call was made
The additional "expensive" fields are being used for console output for demonstrations using a demonstration profile. Consider your log information needs on a case-by-case basis and learn from this lesson what and how you can modify the logs for your specific needs. For example — to debug an error, you can switch to a more detailed and verbose profile without changing code. |
93.11. Example Override Output
We can activate the profile and demonstrate the modified format using the following command.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=layout
14:25:58.428 INFO - logging.levels.LoggerLevels#run:14 info message
14:25:58.430 WARN - logging.levels.LoggerLevels#run:15 warn message
14:25:58.430 ERROR - logging.levels.LoggerLevels#run:16 error message
The coloring does not show up above so the image below provides a perspective of what that looks like.
93.12. Layout Fields
Please see the Logback Layouts Documentation for a detailed list of conversion words and how to optionally format them.
94. Loggers
We have demonstrated a fair amount of capability thus far without having to know much about the internals of the logger framework. However, we need to take a small dive into the logging framework in order to explain some further concepts.
-
Logger Ancestry
-
Logger Inheritance
-
Appenders
-
Logger Additivity
94.1. Logger Tree
Loggers are organized in a hierarchy starting with a root logger called "root". As you would expect, higher in the tree are considered ancestors and lower in the tree are called descendants.
Except for root, the ancestor/descendant structure of loggers depends on the hierarchical name of each logger. Based on the loggers in the diagram:
-
X, Y.3, and security are descendants and direct children of root
-
Y.3 is an example of a logger lacking an explicitly defined parent in the hierarchy before reaching root. We can skip many levels between child and root and still retain the same hierarchical name
-
X.1, X.2, and X.3 are descendants of X and root and direct children of X
-
Y.3.p is descendant of Y.3 and root and direct child of Y.3
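The tree is derived purely from the dotted logger names. A minimal sketch of obtaining some of these loggers by name:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
...
Logger x = LoggerFactory.getLogger("X");
Logger x1 = LoggerFactory.getLogger("X.1"); //descendant and direct child of X
Logger y3p = LoggerFactory.getLogger("Y.3.p"); //direct child of Y.3 -- a logger named "Y" never needs to exist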
94.2. Logger Inheritance
Each logger has a set of allowed properties. Each logger may define its own value for those properties, inherit the value of its parent, or be assigned a default (as in the case for root).
94.3. Logger Threshold Level Inheritance
The first inheritance property we will look at is a familiar one to you — severity threshold level. As the diagram shows
-
root, loggerX, security, loggerY.3, loggerX.1 and loggerX.3 set an explicit value for their threshold.
-
loggerX.2 and loggerY.3.p inherit the threshold from their parent
94.4. Logger Effective Threshold Level Inheritance
The following table shows the specified and effective values applied to each logger for their threshold.
logger name | specified threshold | effective threshold
---|---|---
root | OFF | OFF
X | INFO | INFO
X.1 | ERROR | ERROR
X.2 | (inherited) | INFO
X.3 | OFF | OFF
Y.3 | WARN | WARN
Y.3.p | (inherited) | WARN
security | TRACE | TRACE
94.5. Example Logger Threshold Level Properties
These thresholds can be expressed in a property file.
logging.level.X=info
logging.level.X.1=error
logging.level.X.3=OFF
logging.level.security=trace
logging.level.Y.3=warn
logging.level.root=OFF
94.6. Example Logger Threshold Level Output
The output below demonstrates the impact of logging level inheritance from ancestors to descendants.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=tree
CONSOLE 05:58:14.956 INFO - X#run:25 X info
CONSOLE 05:58:14.959 WARN - X#run:26 X warn
CONSOLE 05:58:14.959 ERROR - X#run:27 X error
CONSOLE 05:58:14.960 ERROR - X.1#run:27 X.1 error (2)
CONSOLE 05:58:14.960 INFO - X.2#run:25 X.2 info (1)
CONSOLE 05:58:14.960 WARN - X.2#run:26 X.2 warn
CONSOLE 05:58:14.960 ERROR - X.2#run:27 X.2 error
CONSOLE 05:58:14.960 WARN - Y.3#run:26 Y.3 warn
CONSOLE 05:58:14.960 ERROR - Y.3#run:27 Y.3 error
CONSOLE 05:58:14.960 WARN - Y.3.p#run:26 Y.3.p warn (1)
CONSOLE 05:58:14.961 ERROR - Y.3.p#run:27 Y.3.p error
CONSOLE 05:58:14.961 TRACE - security#run:23 security trace (3)
CONSOLE 05:58:14.961 DEBUG - security#run:24 security debug
CONSOLE 05:58:14.962 INFO - security#run:25 security info
CONSOLE 05:58:14.962 WARN - security#run:26 security warn
CONSOLE 05:58:14.962 ERROR - security#run:27 security error
1 | X.2 and Y.3.p exhibit the same threshold level as their parents X (INFO ) and Y.3 (WARN ) |
2 | X.1 (ERROR ) and X.3 (OFF ) override their parent threshold levels |
3 | security is writing all levels >= TRACE |
95. Appenders
Loggers generate LoggerEvents
but do not directly log anything.
Appenders are responsible for taking a LoggerEvent
and producing a message to a log.
There are many types of appenders. We have been working exclusively with
a ConsoleAppender
thus far but will work with some others before we are done.
At this point — just know that a ConsoleAppender uses:
-
an encoder to determine when to write messages to the log
-
a layout to determine how to transform an individual
LoggerEvent
to a String -
a pattern when using a
PatternLayout
to define the transformation
95.1. Logger has N Appenders
Each of the loggers in our tree has the chance to have 0..N appenders.
95.2. Logger Configuration Files
To date we have been able to work mostly with Spring Boot properties when using loggers. However, we will need to know a few things about the Logger Configuration File in order to define an appender and assign it to logger(s). We will start with how the logger configuration is found.
Logback and Log4J2 both use XML as their primary definition language. Spring Boot will automatically locate a well-known named configuration file in the root of the classpath:
-
logback.xml
orlogback-spring.xml
for Logback -
log4j2.xml
orlog4j2-spring.xml
for Log4J2
Spring Boot documentation
recommends using the -spring.xml
suffixed files over the provider default
named files in order for Spring Boot to assure that all documented features can be enabled.
Alternately, we can explicitly specify the location using the logging.config
property to
reference anywhere in the classpath or file system.
...
logging.config=classpath:/logging-configs/tree/logback-spring.xml (1)
...
1 | an explicit property reference to the logging configuration file to use |
95.3. Logback Root Configuration Element
The XML file has a root configuration
element which contains details of the appender(s) and logger(s).
See the
Spring Boot Configuration Documentation and the
Logback Configuration Documentation for details on how to configure.
<configuration debug="false">
...
</configuration>
95.4. Retain Spring Boot Defaults
We will lose most/all of the Spring Boot customizations for logging when we
define our own custom logging configuration file. We can restore them
by adding an
include. This is that same file that we looked at earlier for the definition of
CONSOLE_LOG_PATTERN
.
<configuration debug="false">
<!-- bring in Spring Boot defaults for Logback -->
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
...
</configuration>
95.5. Appender Configuration
Our example tree has three (3) appenders total. Each adds a literal string prefix so we know which appender is being called.
<!-- leverages what Spring Boot would have given us for console -->
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"> (1)
<pattern>CONSOLE ${CONSOLE_LOG_PATTERN}</pattern>
<charset>utf8</charset>
</encoder>
</appender>
<appender name="X-appender" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>X ${CONSOLE_LOG_PATTERN}</pattern>
</encoder>
</appender>
<appender name="security-appender" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>SECURITY ${CONSOLE_LOG_PATTERN}</pattern>
</encoder>
</appender>
1 | PatternLayoutEncoder is the default encoder |
This example forms the basis for demonstrating logger/appender assignment and appender additivity. ConsoleAppender is used in each case for ease of demonstration and not meant to depict a realistic configuration.
95.6. Appenders Attached to Loggers
The appenders are each attached to a single logger using the appender-ref
element.
-
console is attached to the root logger
-
X-appender is attached to loggerX logger
-
security-appender is attached to security logger
I am latching the two child appender assignments within an appenders
profile to:
-
keep them separate from the earlier log level demo
-
demonstrate how to leverage Spring Boot extensions to build profile-based conditional logging configurations.
<springProfile name="appenders"> (1)
<logger name="X">
<appender-ref ref="X-appender"/> (2)
</logger>
<!-- this logger starts a new tree of appenders, nothing gets written to root logger -->
<logger name="security" additivity="false">
<appender-ref ref="security-appender"/>
</logger>
</springProfile>
<root>
<appender-ref ref="console"/>
</root>
1 | using Spring Boot Logback extension to only enable appenders when profile active |
2 | appenders associated with logger using appender-ref |
95.7. Appender Tree Inheritance
These appenders, like threshold levels, are inherited from ancestor to descendant unless the descendant disables this with additivity=false.
95.8. Appender Additivity Result
logger name | assigned threshold | assigned appender | effective threshold | effective appender
---|---|---|---|---
root | OFF | console | OFF | console
X | INFO | X-appender | INFO | console, X-appender
X.1 | ERROR | (none) | ERROR | console, X-appender
X.2 | (inherited) | (none) | INFO | console, X-appender
X.3 | OFF | (none) | OFF | console, X-appender
Y.3 | WARN | (none) | WARN | console
Y.3.p | (inherited) | (none) | WARN | console
security (additivity=false) | TRACE | security-appender | TRACE | security-appender
95.9. Logger Inheritance Tree Output
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=tree,appenders
X 19:12:07.220 INFO - X#run:25 X info (1)
CONSOLE 19:12:07.220 INFO - X#run:25 X info (1)
X 19:12:07.224 WARN - X#run:26 X warn
CONSOLE 19:12:07.224 WARN - X#run:26 X warn
X 19:12:07.225 ERROR - X#run:27 X error
CONSOLE 19:12:07.225 ERROR - X#run:27 X error
X 19:12:07.225 ERROR - X.1#run:27 X.1 error
CONSOLE 19:12:07.225 ERROR - X.1#run:27 X.1 error
X 19:12:07.225 INFO - X.2#run:25 X.2 info
CONSOLE 19:12:07.225 INFO - X.2#run:25 X.2 info
X 19:12:07.225 WARN - X.2#run:26 X.2 warn
CONSOLE 19:12:07.225 WARN - X.2#run:26 X.2 warn
X 19:12:07.226 ERROR - X.2#run:27 X.2 error
CONSOLE 19:12:07.226 ERROR - X.2#run:27 X.2 error
CONSOLE 19:12:07.226 WARN - Y.3#run:26 Y.3 warn (2)
CONSOLE 19:12:07.227 ERROR - Y.3#run:27 Y.3 error (2)
CONSOLE 19:12:07.227 WARN - Y.3.p#run:26 Y.3.p warn
CONSOLE 19:12:07.227 ERROR - Y.3.p#run:27 Y.3.p error
SECURITY 19:12:07.227 TRACE - security#run:23 security trace (3)
SECURITY 19:12:07.227 DEBUG - security#run:24 security debug (3)
SECURITY 19:12:07.227 INFO - security#run:25 security info (3)
SECURITY 19:12:07.228 WARN - security#run:26 security warn (3)
SECURITY 19:12:07.228 ERROR - security#run:27 security error (3)
1 | log messages written to logger X and descendants are written to console and X-appender appenders |
2 | log messages written to logger Y.3 and descendants are written only to console appender |
3 | log messages written to security logger are written only to security appender because of additivity=false |
96. Mapped Diagnostic Context
Thus far, we have been focusing on calls made within the code without much concern about the overall context in which they were made. In a multi-threaded, multi-user environment there is additional context information related to the code making the calls that we may want to keep track of — like userId and transactionId.
SLF4J and the logging implementations support the need for call context information through the use of Mapped Diagnostic Context (MDC). The MDC class is essentially a ThreadLocal map of strings that are assigned for the current thread. The values of the MDC are commonly set and cleared in container filters that fire before and after client calls are executed.
96.1. MDC Example
The following is an example where the run() method is playing the role of the container filter — setting and clearing the MDC. For this MDC map, I am setting "user" and "requestId" keys with the current user identity and a value that represents the request. The doWork() method is oblivious to the MDC and simply logs the start and end of work.
import org.slf4j.MDC;
...
public class MDCLogger implements CommandLineRunner {
private static final String[] USERS = new String[]{"jim", "joe", "mary"};
private static final SecureRandom r = new SecureRandom();
@Override
public void run(String... args) throws Exception {
for (int i=0; i<5; i++) {
String user = USERS[r.nextInt(USERS.length)];
MDC.put("user", user); (1)
MDC.put("requestId", Integer.toString(r.nextInt(99999)));
doWork();
MDC.clear(); (2)
doWork();
}
}
public void doWork() {
log.info("starting work");
log.info("finished work");
}
}
1 | run() method simulates container filter setting context properties before call executed |
2 | context properties removed after all calls for the context complete |
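In a real web application, the container filter role is typically played by a servlet Filter. The following is a minimal sketch, assuming a servlet environment; currentUser() and newRequestId() are hypothetical stand-ins for real identity and request-ID sources.
import java.io.IOException;
import javax.servlet.*;
import org.slf4j.MDC;

public class MdcFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        try {
            MDC.put("user", currentUser(req)); //hypothetical identity lookup
            MDC.put("requestId", newRequestId()); //hypothetical request ID generator
            chain.doFilter(req, res); //all logging during this call sees the MDC values
        } finally {
            MDC.clear(); //threads are pooled and reused -- always clear the context
        }
    }
    private String currentUser(ServletRequest req) { return req.getRemoteAddr(); }
    private String newRequestId() { return Long.toString(System.nanoTime()); }
}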
96.2. MDC Example Pattern
To make use of the new "user" and "requestId" properties of the thread,
we can add the %mdc
(or %X) conversion word to the appender pattern as follows.
#application-mdc.properties
logging.pattern.console=%date{HH:mm:ss.SSS} %-5level [%-9mdc{user:-anonymous}][%5mdc{requestId}] %logger{0} - %msg%n
-
%mdc{user:-anonymous} - the identity of the user making the call or "anonymous" if not supplied
-
%mdc{requestId} - the specific request made or blank if not supplied
96.3. MDC Example Output
The following is an example of running the MDC example. Users are randomly selected and work is performed for both identified and anonymous users. This allows us to track who made the work request and sort out the results of each work request.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar --spring.profiles.active=mdc
17:11:59.100 INFO [jim ][61165] MDCLogger - starting work
17:11:59.101 INFO [jim ][61165] MDCLogger - finished work
17:11:59.101 INFO [anonymous][ ] MDCLogger - starting work
17:11:59.101 INFO [anonymous][ ] MDCLogger - finished work
17:11:59.101 INFO [mary ][ 8802] MDCLogger - starting work
17:11:59.101 INFO [mary ][ 8802] MDCLogger - finished work
17:11:59.101 INFO [anonymous][ ] MDCLogger - starting work
17:11:59.101 INFO [anonymous][ ] MDCLogger - finished work
17:11:59.101 INFO [mary ][86993] MDCLogger - starting work
17:11:59.101 INFO [mary ][86993] MDCLogger - finished work
17:11:59.101 INFO [anonymous][ ] MDCLogger - starting work
17:11:59.101 INFO [anonymous][ ] MDCLogger - finished work
17:11:59.102 INFO [mary ][67677] MDCLogger - starting work
17:11:59.102 INFO [mary ][67677] MDCLogger - finished work
17:11:59.102 INFO [anonymous][ ] MDCLogger - starting work
17:11:59.102 INFO [anonymous][ ] MDCLogger - finished work
17:11:59.102 INFO [jim ][25693] MDCLogger - starting work
17:11:59.102 INFO [jim ][25693] MDCLogger - finished work
17:11:59.102 INFO [anonymous][ ] MDCLogger - starting work
17:11:59.102 INFO [anonymous][ ] MDCLogger - finished work
Like standard ThreadLocal variables, child threads do not inherit the values of the parent thread.
97. Markers
SLF4J and the logging implementations support markers. Unlike MDC data — which quietly sit in the background — markers are optionally supplied on a per-call basis. Markers have two primary uses
-
trigger reporting events to appenders — e.g., flush log, send the e-mail
-
implement additional severity levels — e.g.,
log.warn(FLESH_WOUND,"come back here!")
versuslog.warn(FATAL,"ouch!!!")
[16]
The additional functionality is commonly implemented through the use of filters assigned to appenders looking for these Markers.
To me, having triggers initiated by the logging statements does not sound appropriate (but still could be useful). However, when the thought of filtering comes up, I think of cases where we may want to better classify the subject(s) of the statement so that we have more to filter on when configuring appenders. More than once I have been in a situation where adjusting the verbosity of a single logger was not granular enough to provide an ideal result.
97.1. Marker Class
Markers have a single property called name and an optional collection of child Markers. The name and collection properties allow the parent marker to represent one or more values. Appender filters test Markers using the contains() method to determine if the parent or any children are the targeted value.
Markers are obtained through the MarkerFactory — which caches the Markers it creates unless requested to make them detached so they can be uniquely added to separate parents.
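A minimal sketch of that API (the marker names are illustrative):
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;
...
Marker fatal = MarkerFactory.getMarker("FATAL"); //cached -- same instance returned on every call
Marker alarm = MarkerFactory.getDetachedMarker("ALARM"); //unique instance, safe to re-parent
alarm.add(fatal); //FATAL becomes a child of ALARM
alarm.contains("FATAL"); //true -- contains() matches the parent name or any child name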
97.2. Marker Example
The following simple example issues two log events. The first is without a Marker
and the second with a Marker
that represents the value ALARM
.
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;
...
public class MarkerLogger implements CommandLineRunner {
private static final Marker ALARM = MarkerFactory.getMarker("ALARM"); (1)
@Override
public void run(String... args) throws Exception {
log.warn("non-alarming warning statement"); (2)
log.warn(ALARM,"alarming statement"); (3)
}
}
1 | created single managed marker |
2 | no marker added to logging call |
3 | marker added to logging call to trigger something special about this call |
97.3. Marker Appender Filter Example
The Logback configuration has two appenders. The first appender — alarms
— is meant to
log only log events with an ALARM marker. I have applied the Logback-supplied
EvaluatorFilter
and OnMarkerEvaluator
to eliminate any log events that do not meet
that criteria.
<appender name="alarms" class="ch.qos.logback.core.ConsoleAppender">
<filter class="ch.qos.logback.core.filter.EvaluatorFilter">
<evaluator name="ALARM" class="ch.qos.logback.classic.boolex.OnMarkerEvaluator">
<marker>ALARM</marker>
</evaluator>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
<encoder>
<pattern>%red(ALARM>>> ${CONSOLE_LOG_PATTERN})</pattern>
</encoder>
</appender>
The second appender — console — accepts all log events.
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>${CONSOLE_LOG_PATTERN}</pattern>
</encoder>
</appender>
Both appenders are attached to the same root logger — which means that anything logged to the alarm appender will also be logged to the console appender.
<configuration>
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
...
<root>
<appender-ref ref="console"/>
<appender-ref ref="alarms"/>
</root>
</configuration>
97.4. Marker Example Result
The following shows the results of running the marker example — where both events
are written to the console appender and only the log event with the ALARM
Marker
is written to the alarm appender.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=markers
18:06:52.135 WARN [] MarkerLogger - non-alarming warning statement (1)
18:06:52.136 WARN [ALARM] MarkerLogger - alarming statement (1)
ALARM>>> 18:06:52.136 WARN [ALARM] MarkerLogger - alarming statement (2)
1 | non-ALARM and ALARM events are written to the console appender |
2 | ALARM event is also written to alarm appender |
98. File Logging
Each topic and example so far has been demonstrated using the console because it is simple to demonstrate and to try out for yourself. However, once we get into more significant use of our application we are going to need to write this information somewhere to analyze later when necessary.
For that purpose, Spring Boot has a built-in appender ready to go for file logging. It is not active by default but all we have to do is specify the file name or path to trigger its activation.
java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=levels \
--logging.file.name="mylog.log" (1) (2)
1 | adding this property adds file logging to default configuration |
2 | this expressed logfile will be written to mylog.log in current directory |
98.1. root Logger Appenders
As we saw earlier with appender additivity, multiple appenders can be associated with the same logger (root logger in this case). With the trigger property supplied, a file-based appender is added to the root logger to produce a log file in addition to our console output.
98.2. FILE Appender Output
Under these simple conditions, a file is produced in the current directory with the specified
mylog.log
filename and the following contents.
$ cat mylog.log (1)
2020-03-29 07:14:33.533 INFO 90958 --- [main] i.e.e.a.c.logging.levels.LoggerLevels : info message
2020-03-29 07:14:33.542 WARN 90958 --- [main] i.e.e.a.c.logging.levels.LoggerLevels : warn message
2020-03-29 07:14:33.542 ERROR 90958 --- [main] i.e.e.a.c.logging.levels.LoggerLevels : error message
1 | written to file specified by logging.file.name property |
The file and parent directories will be created if they do not exist. The default definition of the appender will append to an existing file if it already exists. Therefore — if we run the example a second time we get a second set of messages in the file.
$ cat mylog.log
2020-03-29 07:14:33.533 INFO 90958 --- [main] i.e.e.a.c.logging.levels.LoggerLevels : info message
2020-03-29 07:14:33.542 WARN 90958 --- [main] i.e.e.a.c.logging.levels.LoggerLevels : warn message
2020-03-29 07:14:33.542 ERROR 90958 --- [main] i.e.e.a.c.logging.levels.LoggerLevels : error message
2020-03-29 07:15:00.338 INFO 91090 --- [main] i.e.e.a.c.logging.levels.LoggerLevels : info message (1)
2020-03-29 07:15:00.342 WARN 91090 --- [main] i.e.e.a.c.logging.levels.LoggerLevels : warn message
2020-03-29 07:15:00.342 ERROR 91090 --- [main] i.e.e.a.c.logging.levels.LoggerLevels : error message
1 | messages from second execution appended to same log |
98.3. Spring Boot FILE Appender Definition
If we take a look at the definition for
Spring Boot’s Logback FILE Appender, we can see that it is a
Logback RollingFileAppender
with a
Logback SizeAndTimeBasedRollingPolicy
.
<appender name="FILE"
class="ch.qos.logback.core.rolling.RollingFileAppender">(1)
<encoder>
<pattern>${FILE_LOG_PATTERN}</pattern>
</encoder>
<file>${LOG_FILE}</file>
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy"> (2)
<cleanHistoryOnStart>${LOG_FILE_CLEAN_HISTORY_ON_START:-false}</cleanHistoryOnStart>
<fileNamePattern>${ROLLING_FILE_NAME_PATTERN:-${LOG_FILE}.%d{yyyy-MM-dd}.%i.gz}</fileNamePattern>
<maxFileSize>${LOG_FILE_MAX_SIZE:-10MB}</maxFileSize>
<maxHistory>${LOG_FILE_MAX_HISTORY:-7}</maxHistory>
<totalSizeCap>${LOG_FILE_TOTAL_SIZE_CAP:-0}</totalSizeCap>
</rollingPolicy>
</appender>
1 | performs file rollover functionality based on configured policy |
2 | specifies policy and policy configuration to use |
98.4. RollingFileAppender
The Logback RollingFileAppender will:
-
write log messages to a specified file — and at some point, switch to writing to a different file
-
use a triggering policy to determine the point at which to switch files (i.e., "when it will occur")
-
use a rolling policy to determine how the file switchover will occur (i.e., "what will occur")
-
use a single policy for both if the rolling policy implements both policy interfaces
-
use file append mode by default
98.5. SizeAndTimeBasedRollingPolicy
The Logback SizeAndTimeBasedRollingPolicy will:
-
trigger a file switch when the current file reaches a maximum size
-
trigger a file switch when the granularity of the primary date (%d) pattern in the file path/name would rollover to a new value
-
supply a name for the old/historical file using a mandatory date (%d) pattern and index (%i)
-
define a maximum number of historical files to retain
-
define a total size to allocate to current and historical files
-
define an option to process quotas at startup in addition to file changeover for short running applications
98.7. logging.file.path
If we specify only the logging.file.path
, the filename will default to spring.log
and will be written to the directory path we supply.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--logging.file.path=target/logs (1)
...
$ ls target/logs (2)
spring.log
1 | specifying logging.file.path as target/logs |
2 | produces a spring.log in that directory |
98.8. logging.file.name
If we specify only the logging.file.name
, the file will be written to the filename
and directory we explicitly supply.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--logging.file.name=target/logs/mylog.log (1)
...
$ ls target/logs (2)
mylog.log
1 | specifying a logging.file.name |
2 | produces a logfile with that path and name |
98.9. logging.file.max-size Trigger
One trigger for changing over to the next file is logging.file.max-size. The condition is satisfied when the current logfile reaches this value. The default is 10MB. The following example changes that to 500 Bytes. Once each instance of logging.file.name reaches the logging.file.max-size, it is compressed and moved to a filename with the pattern from logging.pattern.rolling-file-name.
java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=rollover \
--logging.file.name=target/logs/mylog.log \
--logging.file.max-size=500B (1)
...
$ ls -ltr target/logs
total 40
-rw-r--r-- 1 jim staff 154 Mar 29 16:00 mylog.log.2020-03-29.0.gz
-rw-r--r-- 1 jim staff 153 Mar 29 16:00 mylog.log.2020-03-29.1.gz
-rw-r--r-- 1 jim staff 156 Mar 29 16:00 mylog.log.2020-03-29.2.gz
-rw-r--r-- 1 jim staff 156 Mar 29 16:00 mylog.log.2020-03-29.3.gz (2)
-rw-r--r-- 1 jim staff 240 Mar 29 16:00 mylog.log (1)
1 | logging.file.max-size limits the size of the current logfile |
2 | historical logfiles renamed according to logging.pattern.rolling-file-name pattern |
98.10. logging.pattern.rolling-file-name
There are several aspects of logging.pattern.rolling-file-name to be aware of:
- the %d timestamp pattern and %i index are required, and the FILE appender will be disabled if they are not specified
- the timestamp pattern directly impacts when the file changeover will occur when we are still below the logging.file.max-size. In that case, the changeover occurs when there is a value change in the result of applying the timestamp pattern. Many of my examples here use a pattern that includes HH:mm:ss just for demonstration purposes. A more common pattern would be by date only.
- the index is used when the logging.file.max-size triggers the changeover and we already have a historical name with the same timestamp
- the number of historical files is throttled using logging.file.max-history only when the index is used and not when file changeover is due to logging.file.max-size
- the historical file will be compressed if gz is specified as the suffix
98.11. Timestamp Rollover Example
The following example shows the file changeover occurring because the evaluation of the %d template expression within logging.pattern.rolling-file-name changes. The historical file is left uncompressed because the logging.pattern.rolling-file-name does not end in gz.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=rollover \
--logging.file.name=target/logs/mylog.log \
--logging.file.max-size=500 \
--logging.pattern.rolling-file-name='${logging.file.name}.%d{yyyy-MM-dd-HH:mm:ss}.%i'.log (1)
...
$ ls -ltr target/logs
total 64
-rw-r--r-- 1 jim staff 79 Mar 29 17:50 mylog.log.2020-03-29-17:50:22.0.log (1)
-rw-r--r-- 1 jim staff 79 Mar 29 17:50 mylog.log.2020-03-29-17:50:23.0.log
-rw-r--r-- 1 jim staff 79 Mar 29 17:50 mylog.log.2020-03-29-17:50:24.0.log
-rw-r--r-- 1 jim staff 79 Mar 29 17:50 mylog.log.2020-03-29-17:50:25.0.log
-rw-r--r-- 1 jim staff 79 Mar 29 17:50 mylog.log.2020-03-29-17:50:26.0.log
-rw-r--r-- 1 jim staff 80 Mar 29 17:50 mylog.log.2020-03-29-17:50:27.0.log
-rw-r--r-- 1 jim staff 80 Mar 29 17:50 mylog.log.2020-03-29-17:50:28.0.log
-rw-r--r-- 1 jim staff 80 Mar 29 17:50 mylog.log
$ file target/logs/mylog.log.2020-03-29-17\:50\:28.0.log (2)
target/logs/mylog.log.2020-03-29-17:50:28.0.log: ASCII text
1 | logging.pattern.rolling-file-name pattern triggers changeover at the seconds boundary |
2 | historical logfiles are left uncompressed because of name suffix specified |
Using a date pattern to include minutes and seconds is just for demonstration and learning purposes. Most patterns would be daily.
98.12. History Compression Example
The following example is similar to the previous one with the exception that the
logging.pattern.rolling-file-name
ends in gz
— triggering the historical file
to be compressed.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=rollover \
--logging.file.name=target/logs/mylog.log \
--logging.pattern.rolling-file-name='${logging.file.name}.%d{yyyy-MM-dd-HH:mm:ss}.%i'.gz (1)
...
$ ls -ltr target/logs
total 64
-rw-r--r-- 1 jim staff 97 Mar 29 16:26 mylog.log.2020-03-29-16:26:11.0.gz (1)
-rw-r--r-- 1 jim staff 97 Mar 29 16:26 mylog.log.2020-03-29-16:26:12.0.gz
-rw-r--r-- 1 jim staff 97 Mar 29 16:26 mylog.log.2020-03-29-16:26:13.0.gz
-rw-r--r-- 1 jim staff 97 Mar 29 16:26 mylog.log.2020-03-29-16:26:14.0.gz
-rw-r--r-- 1 jim staff 97 Mar 29 16:26 mylog.log.2020-03-29-16:26:15.0.gz
-rw-r--r-- 1 jim staff 97 Mar 29 16:26 mylog.log.2020-03-29-16:26:16.0.gz
-rw-r--r-- 1 jim staff 79 Mar 29 16:26 mylog.log
-rw-r--r-- 1 jim staff 97 Mar 29 16:26 mylog.log.2020-03-29-16:26:17.0.gz
$ file target/logs/mylog.log.2020-03-29-16\:26\:16.0.gz
target/logs/mylog.log.2020-03-29-16:26:16.0.gz: \
gzip compressed data, from FAT filesystem (MS-DOS, OS/2, NT), original size 79
1 | historical logfiles are compressed when pattern uses a .gz suffix |
98.13. logging.file.max-history Example
logging.file.max-history
will constrain the number of files created for
independent timestamps. In the example below, I constrained the limit to 2.
Note that the logging.file.max-history
property does not seem to apply to
files terminated because of size. For that, we can use
logging.file.total-size-cap
.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=rollover \
--logging.file.name=target/logs/mylog.log \
--logging.file.max-size=250 \
--logging.pattern.rolling-file-name='${logging.file.name}.%d{yyyy-MM-dd-HH:mm:ss}.%i'.log \
--logging.file.max-history=2 (1)
...
$ ls -ltr target/logs
total 24
-rw-r--r-- 1 jim staff 80 Mar 29 17:52 mylog.log.2020-03-29-17:52:58.0.log (1)
-rw-r--r-- 1 jim staff 80 Mar 29 17:52 mylog.log.2020-03-29-17:52:59.0.log (1)
-rw-r--r-- 1 jim staff 80 Mar 29 17:53 mylog.log
1 | specifying logging.file.max-history limited number of historical logfiles. Oldest
files exceeding the criteria are deleted. |
98.14. logging.file.total-size-cap Index Example
The following example triggers file changeover every 1000 Bytes and makes use of the index because we encounter multiple changes per timestamp pattern.
The files are aged-off at the point where total size for all logs reaches logging.file.total-size-cap
.
Thus historical files with indexes 1 and 2 have been deleted at this point in time in order to stay below the file size limit.
java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=rollover \
--logging.file.name=target/logs/mylog.log \
--logging.file.max-size=1000 \
--logging.pattern.rolling-file-name='${logging.file.name}.%d{yyyy-MM-dd}.%i'.log \
--logging.file.max-history=20 \
--logging.file.total-size-cap=3500 (1)
...
$ ls -ltr target/logs
total 32 (2)
-rw-r--r-- 1 jim staff 1040 Mar 29 18:09 mylog.log.2020-03-29.2.log (1)
-rw-r--r-- 1 jim staff 1040 Mar 29 18:10 mylog.log.2020-03-29.3.log (1)
-rw-r--r-- 1 jim staff 1040 Mar 29 18:10 mylog.log.2020-03-29.4.log (1)
-rw-r--r-- 1 jim staff 160 Mar 29 18:10 mylog.log (1)
1 | logging.file.total-size-cap constrains current plus historical files retained |
2 | historical files with indexes 1 and 2 were deleted to stay below file size limit |
98.15. logging.file.total-size-cap no Index Example
The following example triggers file changeover every second and makes no use of the index because the timestamp pattern is so granular that max-size
is not reached before the timestamp changes the base.
As with the previous example, the files are also aged-off when the total byte count reaches logging.file.total-size-cap
.
$ java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--spring.profiles.active=rollover \
--logging.file.name=target/logs/mylog.log \
--logging.file.max-size=100 \
--logging.pattern.rolling-file-name='${logging.file.name}.%d{yyyy-MM-dd-HH:mm:ss}.%i'.log \
--logging.file.max-history=200 \
--logging.file.total-size-cap=500 (1)
...
$ ls -ltr target/logs
total 56
-rw-r--r-- 1 jim staff 79 Mar 29 18:33 mylog.log.2020-03-29-18:33:32.0.log (1)
-rw-r--r-- 1 jim staff 79 Mar 29 18:33 mylog.log.2020-03-29-18:33:33.0.log (1)
-rw-r--r-- 1 jim staff 79 Mar 29 18:33 mylog.log.2020-03-29-18:33:34.0.log (1)
-rw-r--r-- 1 jim staff 79 Mar 29 18:33 mylog.log.2020-03-29-18:33:35.0.log (1)
-rw-r--r-- 1 jim staff 80 Mar 29 18:33 mylog.log.2020-03-29-18:33:36.0.log (1)
-rw-r--r-- 1 jim staff 80 Mar 29 18:33 mylog.log.2020-03-29-18:33:37.0.log (1)
-rw-r--r-- 1 jim staff 80 Mar 29 18:33 mylog.log (1)
1 | logging.file.total-size-cap constrains current plus historical files retained |
The logging.file.total-size-cap value — if specified — must be larger than the logging.file.max-size constraint. Otherwise the file appender will not be activated.
99. Custom Configurations
At this point, you should have a good foundation in logging and how to get started with a decent logging capability and understand how the default configuration can be modified for your immediate and profile-based circumstances. For cases when this is not enough, know that:
-
detailed XML Logback and Log4J2 configurations can be specified — which allows the definition of loggers, appenders, filters, etc. of nearly unlimited power
-
Spring Boot provides include files that can be used as a starting point for defining the custom configurations without giving up most of what Spring Boot defines for the default configuration
99.2. Provided Logback Includes
- defaults.xml - defines the logging configuration defaults we have been working with
- base.xml - defines root logger with CONSOLE and FILE appenders we have discussed
  - puts you at the point of the out-of-the-box configuration
- console-appender.xml - defines the CONSOLE appender we have been working with
  - uses the CONSOLE_LOG_PATTERN
- file-appender.xml - defines the FILE appender we have been working with
  - uses the RollingFileAppender with FILE_LOG_PATTERN and SizeAndTimeBasedRollingPolicy
These files provide an XML representation of what Spring Boot configures with straight Java code. There are minor differences (e.g., enable/disable FILE Appender) between using the supplied XML files and using the out-of-the-box defaults.
99.3. Customization Example: Turn off Console Logging
The following is an example custom configuration where we wish to turn off console logging and only rely on the logfiles. This result is essentially a copy/edit of the supplied base.xml.
<!-- logging-configs/no-console/logback-spring.xml (1)
Example Logback configuration file to turn off CONSOLE Appender and retain all other
FILE Appender default behavior.
-->
<configuration>
<include resource="org/springframework/boot/logging/logback/defaults.xml"/> (2)
<property name="LOG_FILE" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/spring.log}"/> (3)
<include resource="org/springframework/boot/logging/logback/file-appender.xml"/> (4)
<root>
<appender-ref ref="FILE"/> (5)
</root>
</configuration>
1 | a logback-spring.xml file has been created to host the custom configuration |
2 | the standard Spring Boot defaults are included |
3 | LOG_FILE defined using the original expression from Spring Boot base.xml |
4 | the standard Spring Boot FILE appender is included |
5 | only the FILE appender is assigned to our logger(s) |
99.4. LOG_FILE Property Definition
The only complicated part is what I copy/pasted from base.xml to express the LOG_FILE property used by the included FILE appender:
<property name="LOG_FILE"
value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/spring.log}"/>
- use the value of ${LOG_FILE} if that is defined
- otherwise use the filename spring.log and for the path:
  - use ${LOG_PATH} if that is defined
  - otherwise use ${LOG_TEMP} if that is defined
  - otherwise use ${java.io.tmpdir} if that is defined
  - otherwise use /tmp
99.5. Customization Example: Leverage Restored Defaults
Our first execution uses all defaults and is written to ${java.io.tmpdir}/spring.log
java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--logging.config=src/main/resources/logging-configs/no-console/logback-spring.xml
(no console output)
$ ls -ltr $TMPDIR/spring.log (1)
-rw-r--r-- 1 jim staff 67238 Apr 2 06:42 /var/folders/zm/cskr47zn0yjd0zwkn870y5sc0000gn/T//spring.log
1 | logfile written to restored default ${java.io.tmpdir}/spring.log |
99.6. Customization Example: Provide Override
Our second execution specified an override for the logfile to use. This is expressed exactly as we did earlier with the default configuration.
java -jar target/appconfig-logging-example-*-SNAPSHOT-bootexec.jar \
--logging.config=src/main/resources/logging-configs/no-console/logback-spring.xml \
--logging.file.name="target/logs/mylog.log" (2)
(no console output)
$ ls -ltr target/logs (1)
total 136
-rw-r--r-- 1 jim staff 67236 Apr 2 06:46 mylog.log (1)
1 | logfile written to target/logs/mylog.log |
2 | defined using logging.file.name |
100. Spring Profiles
Spring Boot extends the logback.xml capabilities to allow us to easily
take advantage of profiles. Any of the elements within the configuration file
can be wrapped in a springProfile
element to make their activation depend
on the profile value.
<springProfile name="appenders"> (1)
<logger name="X">
<appender-ref ref="X-appender"/>
</logger>
<!-- this logger starts a new tree of appenders, nothing gets written to root logger -->
<logger name="security" additivity="false">
<appender-ref ref="security-appender"/>
</logger>
</springProfile>
1 | elements are activated when appenders profile is activated |
See Profile-Specific Configuration for more examples involving multiple profile names and boolean operations.
101. Summary
In this module we:
-
made a case for the value of logging
-
demonstrated how logging frameworks are much better than
System.out
logging techniques -
discussed the different interface, adapter, and implementation libraries involved with Spring Boot logging
-
learned how the interface of the logging framework is separate from the implementation
-
learned to log information at different severity levels using loggers
-
learned how to write logging statements that can be efficiently executed when disabled
-
learned how to establish a hierarchy of loggers
-
learned how to configure appenders and associate with loggers
-
learned how to configure pattern layouts
-
learned how to configure the FILE Appender
-
looked at additional topics like Mapped Data Context (MDC) and Markers that can augment standard logging events
We covered the basics in great detail so that you understood the logging framework, what kinds of things are available to you, how it does its job, and how it can be configured. However, we still did not cover everything. For example, we left out topics like accessing and viewing logs within a distributed environment, structured appender formatters (e.g., JSON), etc. It is important for you to know that this lesson placed you at a point where those logging extensions can be implemented by you in a straightforward manner.
Testing
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
102. Introduction
102.1. Why Do We Test?
-
demonstrate capability?
-
verify/validate correctness?
-
find bugs?
-
aid design?
-
more …?
There are many great reasons to incorporate software testing into the application lifecycle. There is no time too early to start.
102.2. What are Test Levels?
-
Unit Testing - verifies a specific area of code
-
Integration Testing - any type of testing focusing on interface between components
-
System Testing — tests involving the complete system
-
Acceptance Testing — normally conducted as part of a contract sign-off
It would be easy to say that our focus in this lesson will be on unit and integration testing. However, there are some aspects of system and acceptance testing that are applicable as well.
102.3. What are some Approaches to Testing?
-
Static Analysis — code reviews, syntax checkers
-
Dynamic Analysis — takes place while code is running
-
White-box Testing — makes use of an internal perspective
-
Black-box Testing — makes use of only what the item is required to do
-
Many more …
In this lesson we will focus on dynamic analysis testing using both black-box interface contract testing and white-box implementation and collaboration testing.
102.4. Goals
The student will learn:
-
to understand the testing frameworks bundled within Spring Boot Test Starter
-
to leverage test cases and test methods to automate tests performed
-
to leverage assertions to verify correctness
-
to integrate mocks into test cases
-
to implement unit integration tests within Spring Boot
-
to express tests using Behavior-Driven Development (BDD) acceptance test keywords
-
to automate the execution of tests using Maven
-
to augment and/or replace components used in a unit integration test
102.5. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
write a test case and assertions using "Vintage" JUnit 4 constructs
-
write a test case and assertions using JUnit 5 "Jupiter" constructs
-
leverage alternate (JUnit, Hamcrest, AssertJ, etc.) assertion libraries
-
implement a mock (using Mockito) into a JUnit unit test
-
define custom behavior for a mock
-
capture and inspect calls made to mocks by subjects under test
-
implement BDD acceptance test keywords into Mockito & AssertJ-based tests
-
implement unit integration tests using a Spring context
-
implement (Mockito) mocks in Spring context for unit integration tests
-
augment and/or override Spring context components using
@TestConfiguration
-
execute tests using Maven Surefire plugin
103. Test Constructs
At the heart of testing, we want to exercise a subject and evaluate the result.
Figure 23. Basic Test Concepts
Subjects can vary in scope depending on the type of our test. Unit testing will have class and method-level subjects. Integration tests can span multiple classes/components — whether vertically (e.g., front-end request to database) or horizontally (e.g., peers).
103.1. Automated Test Terminology
Unfortunately, you will see the terms "unit" and "integration" used differently as we go through the testing topics and span tooling. There is a conceptual way of thinking of testing and a technical way of how to manage testing to be concerned with when seeing these terms used:
Conceptual - At a conceptual level, we simply think of unit tests dealing with one subject at a time and involve varying levels of simulation around them in order to test that subject. We conceptually think of integration tests at the point where multiple real components are brought together to form the overall set of subjects — whether that be vertical (e.g., to the database and back) or horizontal (e.g., peer interactions) in nature.
Test Management - At a test management level, we have to worry about what it takes to spin up and shutdown resources to conduct our testing. Build systems like Maven refer to unit tests as anything that can be performed within a single JVM and integration tests as tests that require managing external resources (e.g., start/stop web server).
103.2. Maven Test Types
Maven runs these tests in different phases — executing unit tests first with the Surefire plugin and integration tests last with the Failsafe plugin. By default, Surefire will locate unit tests starting with "Test" or ending with "Test", "Tests", or "TestCase". Failsafe will locate integration tests starting with "IT" or ending with "IT" or "ITCase".
103.3. Test Naming Conventions
Neither tools like JUnit nor the IDEs care how our classes are named. However, since our goal is to eventually check these tests in with our source code and run them in an automated manner, we will have to pay early attention to Maven Surefire and Failsafe naming rules while we also address the conceptual aspects of testing.
103.4. Lecture Test Naming Conventions
I will try to use the following terms to mean the following:
-
Unit Test - conceptual unit test focused on a limited subject and will use the suffix "Test". These will generally be run without a Spring context and will be picked up by Maven Surefire.
-
Unit Integration Test - conceptual integration test (vertical or horizontal) runnable within a single JVM and will use the suffix "NTest". This will be picked up by Maven Surefire and will likely involve a Spring context.
-
External Integration Test - conceptual integration test (vertical or horizontal) requiring external resource management and will use the suffix "IT". This will be picked up by Maven Failsafe. These will always have Spring context(s) running in one or more JVMs.
That means you should not be surprised to see a conceptual integration test that brings multiple real components together executed during the Maven Surefire test phase, provided we can perform this testing without managing external processes.
104. Spring Boot Starter Test Frameworks
We want to automate tests as much as possible and can do that with many of the Spring Boot testing options made available through the spring-boot-starter-test dependency. This single dependency defines transitive dependencies on several powerful testing frameworks, both state of the art and legacy. These dependencies are only used during builds and not in production, so we assign this dependency a scope of test.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope> (1)
</dependency>
1 | dependency scope is test since these dependencies are not required to run outside of build environment |
104.1. Spring Boot Starter Transitive Dependencies
If we take a look at the transitive dependencies brought in by spring-boot-starter-test, we see a wide array of choices pre-integrated.
[INFO] +- org.springframework.boot:spring-boot-starter-test:jar:2.7.0:test
[INFO] | +- org.springframework.boot:spring-boot-test:jar:2.7.0:test
[INFO] | +- org.springframework.boot:spring-boot-test-autoconfigure:jar:2.7.0:test
[INFO] | +- org.springframework:spring-test:jar:5.3.20:test
[INFO] | +- org.junit.jupiter:junit-jupiter:jar:5.8.2:test
[INFO] | | +- org.junit.jupiter:junit-jupiter-api:jar:5.8.2:test
[INFO] | | +- org.junit.jupiter:junit-jupiter-params:jar:5.8.2:test
[INFO] | | \- org.junit.jupiter:junit-jupiter-engine:jar:5.8.2:test
[INFO] | +- org.assertj:assertj-core:jar:3.22.0:test
[INFO] | +- org.hamcrest:hamcrest:jar:2.2:test
[INFO] | +- org.mockito:mockito-core:jar:4.5.1:test
[INFO] | | +- net.bytebuddy:byte-buddy:jar:1.12.10:test
[INFO] | | +- net.bytebuddy:byte-buddy-agent:jar:1.12.10:test
[INFO] | | \- org.objenesis:objenesis:jar:3.2:test
[INFO] | +- org.mockito:mockito-junit-jupiter:jar:4.5.1:test
[INFO] | +- com.jayway.jsonpath:json-path:jar:2.7.0:test
[INFO] | +- jakarta.xml.bind:jakarta.xml.bind-api:jar:2.3.3:test
[INFO] | +- org.skyscreamer:jsonassert:jar:1.5.0:test
[INFO] \- org.junit.vintage:junit-vintage-engine:jar:5.8.2:test
[INFO] +- org.junit.platform:junit-platform-engine:jar:1.8.2:test
[INFO] +- junit:junit:jar:4.13.2:test
104.2. Transitive Dependency Test Tools
At a high level:
-
spring-boot-test-autoconfigure
- contains many auto-configuration classes that detect test conditions and configure common resources for use in a test mode -
junit
- required to run the JUnit tests -
hamcrest
- required to implement Hamcrest test assertions -
assertj
- required to implement AssertJ test assertions -
mockito
- required to implement Mockito mocks -
jsonassert
- required to write flexible assertions for JSON data -
jsonpath
- used to express paths within JSON structures -
xmlunit
- required to write flexible assertions for XML data
In the rest of this lesson, I will be describing how JUnit, the assertion libraries, Mockito and Spring Boot play a significant role in unit and integration testing.
105. JUnit Background
JUnit is a test framework that has been around for many years (the first commit I found in git dates from Dec 3, 2000). The test framework was originated by Kent Beck and Erich Gamma during a plane ride they shared in 1997. Its basic structure is centered around:
Figure 24. Basic JUnit Test Framework Constructs
These constructs have gone through evolutionary changes in Java, including annotations in Java 5 and lambda functions in Java 8, which have driven substantial API changes in Java frameworks.
-
annotations added in Java 5 permitted frameworks to move away from inheritance-based approaches — with specifically named methods (JUnit 3.8) and to leverage annotations added to classes and methods (JUnit 4)
-
lambda functions (JUnit 5/Jupiter) added in Java 8 permit the flexible expression of code blocks that can extend the behavior of provided functionality without requiring verbose subclassing
105.1. JUnit 5 Evolution
The success and simplicity of JUnit 4 made it hard to incorporate new features. JUnit 4 was a single module/JAR and everything that used JUnit leveraged that single jar.
105.2. JUnit 5 Areas
The next iteration of JUnit involved a total rewrite that separated the overall project into three (3) modules: JUnit Platform, JUnit Jupiter, and JUnit Vintage.
Figure 26. JUnit 5 Modularization
The name Jupiter was selected because it is the 5th planet from the Sun |
105.3. JUnit 5 Module JARs
The JUnit 5 modules have several JARs within them that separate interface from implementation — ultimately decoupling the test code from the core engine.
106. Syntax Basics
Before getting too deep into testing, I think it is a good idea to make a very shallow pass at the technical stack we will be leveraging.
-
JUnit
-
Mockito
-
Spring Boot
Each of the example tests that follow can be run within the IDE at the method, class, and parent java package level. The specifics of each IDE will not be addressed here but I will cover some Maven details once we have a few tests defined.
107. JUnit Vintage Basics
It is highly likely that projects will have JUnit 4-based tests around for a significant amount of time, and there is no pressing reason to update them; we do not have to. JUnit 5 provides full backwards-compatibility support, and the specific libraries to enable it are automatically included by spring-boot-starter-test.
The following example shows a basic JUnit test case using the Vintage syntax.
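The test case skeleton with its lifecycle methods is not reproduced in these notes. The following is a minimal sketch consistent with the output shown in section 107.3; the method names (setUpClass, setUp, tearDown, tearDownClass) are taken from that output, and the Lombok logging annotation follows the course's conventions.
package info.ejava.examples.app.testing.testbasics.vintage;
import lombok.extern.slf4j.Slf4j;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
@Slf4j
public class ExampleJUnit4Test {
    @BeforeClass
    public static void setUpClass() { log.info("setUpClass"); } //static, run once before all tests
    @Before
    public void setUp() { log.info("setUp"); } //run before each @Test
    @After
    public void tearDown() { log.info("tearDown"); } //run after each @Test
    @AfterClass
    public static void tearDownClass() { log.info("tearDownClass"); } //static, run once after all tests
    //... @Test methods shown in the next section ...
}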
107.2. JUnit Vintage Example Test Methods
@Test(expected = IllegalArgumentException.class)
public void two_plus_two() {
log.info("2+2=4");
assertEquals(4,2+2);
throw new IllegalArgumentException("just demonstrating expected exception");
}
@Test
public void one_and_one() {
log.info("1+1=2");
assertTrue("problem with 1+1", 1+1==2);
assertTrue(String.format("problem with %d+%d",1,1), 1+1==2);
}
-
@Test - a public instance method where subjects are invoked and result assertions are made
-
exceptions can be asserted at the overall method level, but not at a specific point in the method, and the exception itself cannot be inspected without switching to a manual try/catch technique
-
asserts can be augmented with a String message in the first position
-
the expense of building the String message is always paid whether needed or not
assertTrue(String.format("problem with %d+%d",1,1), 1+1==2);
-
Vintage requires that the class and methods have public access. |
107.3. JUnit Vintage Basic Syntax Example Output
The following example output shows the lifecycle of the setup and teardown methods combined with two test methods. Note that:
-
the static @BeforeClass and @AfterClass methods are run once
-
the instance @Before and @After methods are run for each test
16:35:42.293 INFO ...testing.testbasics.vintage.ExampleJUnit4Test - setUpClass (1)
16:35:42.297 INFO ...testing.testbasics.vintage.ExampleJUnit4Test - setUp (2)
16:35:42.297 INFO ...testing.testbasics.vintage.ExampleJUnit4Test - 2+2=4
16:35:42.297 INFO ...testing.testbasics.vintage.ExampleJUnit4Test - tearDown (2)
16:35:42.299 INFO ...testing.testbasics.vintage.ExampleJUnit4Test - setUp (2)
16:35:42.300 INFO ...testing.testbasics.vintage.ExampleJUnit4Test - 1+1=2
16:35:42.300 INFO ...testing.testbasics.vintage.ExampleJUnit4Test - tearDown (2)
16:35:42.300 INFO ...testing.testbasics.vintage.ExampleJUnit4Test - tearDownClass (1)
1 | @BeforeClass and @AfterClass called once per test class |
2 | @Before and @After executed for each @Test |
Not demonstrated — a new instance of the test class is instantiated for each test. No object state is retained from test to test without the manual use of static variables. |
JUnit Vintage provides no construct to dictate repeatable ordering of test methods within a class — thus making it hard to use test cases to depict lengthy, deterministically ordered scenarios. |
108. JUnit Jupiter Basics
Changing over from Vintage to Jupiter syntax involves only a few minor changes.
-
annotations and assertions have changed packages from
org.junit
toorg.junit.jupiter.api
-
lifecycle annotations have changed names
-
assertions have changed the order of optional arguments
-
exceptions can now be explicitly tested and inspected within the test method body
Jupiter no longer requires classes or methods to be public. Anything non-private should work. |
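As with the Vintage example, the Jupiter test case skeleton is not reproduced here. A minimal sketch consistent with the output in section 108.3 follows; the method names are taken from that output.
package info.ejava.examples.app.testing.testbasics.jupiter;
import lombok.extern.slf4j.Slf4j;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
@Slf4j
class ExampleJUnit5Test {
    @BeforeAll
    static void setUpClass() { log.info("setUpClass"); } //static under the default PER_METHOD lifecycle
    @BeforeEach
    void setUp() { log.info("setUp"); } //run before each @Test
    @AfterEach
    void tearDown() { log.info("tearDown"); } //run after each @Test
    @AfterAll
    static void tearDownClass() { log.info("tearDownClass"); } //static, run once after all tests
    //... @Test methods shown in the next section ...
}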
108.2. JUnit Jupiter Example Test Methods
@Test
void two_plus_two() {
log.info("2+2=4");
assertEquals(4,2+2);
Exception ex=assertThrows(IllegalArgumentException.class, () ->{
throw new IllegalArgumentException("just demonstrating expected exception");
});
assertTrue(ex.getMessage().startsWith("just demo"));
}
@Test
void one_and_one() {
log.info("1+1=2");
assertTrue(1+1==2, "problem with 1+1");
assertTrue(1+1==2, ()->String.format("problem with %d+%d",1,1));
}
-
@Test - an instance method where subjects are invoked and result assertions are made
-
exceptions can now be explicitly tested at a specific point in the test method — permitting details of the exception to also be inspected
-
asserts can be augmented with a String message in the last position
-
this is a breaking change from Vintage syntax
-
the expense of building complex String messages can be deferred to a lambda function
assertTrue(1+1==2, ()->String.format("problem with %d+%d",1,1));
108.3. JUnit Jupiter Basic Syntax Example Output
The following example output shows the lifecycle of the setup/teardown methods combined with two test methods. The default logger formatting added the new lines in between tests.
16:53:44.852 INFO ...testing.testbasics.jupiter.ExampleJUnit5Test - setUpClass (1)
(3)
16:53:44.866 INFO ...testing.testbasics.jupiter.ExampleJUnit5Test - setUp (2)
16:53:44.869 INFO ...testing.testbasics.jupiter.ExampleJUnit5Test - 2+2=4
16:53:44.874 INFO ...testing.testbasics.jupiter.ExampleJUnit5Test - tearDown (2)
(3)
16:53:44.879 INFO ...testing.testbasics.jupiter.ExampleJUnit5Test - setUp (2)
16:53:44.880 INFO ...testing.testbasics.jupiter.ExampleJUnit5Test - 1+1=2
16:53:44.881 INFO ...testing.testbasics.jupiter.ExampleJUnit5Test - tearDown (2)
(3)
16:53:44.883 INFO ...testing.testbasics.jupiter.ExampleJUnit5Test - tearDownClass (1)
1 | @BeforeAll and @AfterAll called once per test class |
2 | @BeforeEach and @AfterEach executed for each @Test |
3 | The default IDE logger formatting added the new lines in between tests |
Not demonstrated: by default we get a new instance per test like Vintage, but we can instead opt into a single instance for all tests and a defined test method order, which allows lengthy scenario tests to be broken into increments. See the @TestInstance annotation and TestInstance.Lifecycle enum for details. |
109. JUnit Jupiter Test Case Adjustments
109.1. Test Instance
State used by tests can be expensive to create or outside the scope of individual tests. JUnit allows this state to be initialized and shared between test methods using one of two test instance techniques selected with the @TestInstance annotation.
109.1.1. Shared Static State - PER_METHOD
The default test instance mode is PER_METHOD. With this option, the instance of the test class is torn down and re-instantiated between each test. We must declare any shared state as static to have it live across the lifecycle of all test methods. The @BeforeAll and @AfterAll methods that initialize and tear down this data must be declared static when using PER_METHOD.
@TestInstance(TestInstance.Lifecycle.PER_METHOD) //the default (1)
class StaticShared {
private static int staticState; (2)
@BeforeAll
static void init() { (3)
log.info("state={}", staticState++);
}
@Test
void testA() { log.info("state={}", staticState); } (4)
@Test
void testB() { log.info("state={}", staticState); }
1 | test case class is instantiated per method |
2 | any shared state must be declared static |
3 | @BeforeAll and @AfterAll methods must be declared static |
4 | @Test methods are normal instance methods with access to the static state |
109.2. Shared Instance State - PER_CLASS
There are often times during an integration test when shared state (e.g., injected components) is only available once the test case is instantiated. We can make instance state sharable across test methods by using the PER_CLASS option. This also makes the test case injectable by the container.
@TestInstance(TestInstance.Lifecycle.PER_CLASS) (1)
class InstanceShared {
private int instanceState; (2)
@BeforeAll
void init() { (3)
log.info("state={}", instanceState++);
}
@Test
void testA() { log.info("state={}", instanceState); }
@Test
void testB() { log.info("state={}", instanceState); }
1 | one instance is created for all tests |
2 | shared state may be declared as normal instance fields |
3 | @BeforeAll and @AfterAll methods must be declared non-static |
109.2.1. Test Ordering
Although it is a "best practice" to make tests independent and be executed in any order — there can be times when one wants a specified order. There are a few options: [17]
-
Random Order
-
Specified Order
-
by Method Name
-
by Display Name
-
(custom order)
...
import org.junit.jupiter.api.*;
@TestMethodOrder(
// MethodOrderer.OrderAnnotation.class
// MethodOrderer.MethodName.class
// MethodOrderer.DisplayName.class
MethodOrderer.Random.class
)
class ExampleJUnit5Test {
@Test
@Order(1)
void two_plus_two() {
...
@Test
@Order(2)
void one_and_one() {
Explicit Method Ordering is the Exception
It is best practice to make test cases and tests within test cases modular and independent of one another.
To require a specific order violates that practice — but sometimes there are reasons to do so.
One example violation is when the overall test case is broken down into test methods that address a multi-step scenario.
In older versions of JUnit — that would have been required to be a single @Test calling out to helper methods.
|
110. Assertion Basics
The setup methods (@BeforeAll
and @BeforeEach
) of the test case and early parts of the
test method (@Test
) allow for us to define a given test context and scenario for the
subject of the test. Assertions are added to the evaluation
portion of the test method to determine whether the subject performed correctly. The result
of the assertions determine the pass/fail of the test.
110.1. Assertion Libraries
There are three to four primary general purpose assertion libraries available for us
to use within the spring-boot-starter-test
suite before we start considering
data format assertions for XML and JSON or add custom libraries of our own:
-
JUnit - has built-in, basic assertions like True, False, Equals, NotEquals, etc.
-
Hamcrest - uses natural-language expressions for matches
-
AssertJ - an improvement to natural-language assertion expressions using type-based builders
The built-in JUnit assertions are functional enough to get any job done. The value in using the other libraries is their ability to express the assertion using natural-language terms without using a lot of extra, generic Java code.
I found the following article quite helpful: Hamcrest vs AssertJ Assertion Frameworks - Which One Should You Choose?, 2017 by Yuri Bushnev. Many of his data points are years out of date, but the points he makes about the core design of Hamcrest and AssertJ are still true and enlightening. |
110.1.1. JUnit Assertions
The assertions built into JUnit are basic and easy to understand — but limited in their expression. They have the basic form of taking subject argument(s) and the name of the static method is the assertion made about the arguments.
import static org.junit.jupiter.api.Assertions.*;
...
assertEquals(expected, lhs+rhs); (1)
1 | JUnit static method assertions express assertion of one to two arguments |
We are limited by the number of static assertion methods present and have to extend them by using code to manipulate the arguments (e.g., to be equal or true/false). However, once we get to that point — we can easily bring in robust assertion libraries. In fact, that is exactly what JUnit describes for us to do in the JUnit User Guide.
110.1.2. Hamcrest Assertions
Hamcrest has a common pattern of taking a subject argument and a Matcher
argument.
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.*;
...
assertThat(beaver.getFirstName(), equalTo("Jerry")); (1)
1 | LHS argument is value being tested, RHS equalTo returns an object implementing Matcher interface |
The Matcher
interface can be implemented by an unlimited
number of expressions to implement the details of the assertion.
110.1.3. AssertJ Assertions
AssertJ uses a builder pattern that starts with the subject and then offers a nested number of assertion builders that are based on the previous node type.
import static org.assertj.core.api.Assertions.*;
...
assertThat(beaver.getFirstName()).isEqualTo("Jerry"); (1)
1 | assertThat is a builder of assertion factories and isEqual executes an assertion in chain |
Custom extensions are accomplished by creating a new builder factory at the start
of the tree. See the following
link for a small example. AssertJ also provides an
Assertion Generator that generates assertion source code based on
specific POJO classes and templates we can override using a
maven or
gradle plugin. This allows us to express assertions about a Person
class using the following syntax.
import static info.ejava.examples.app.testing.testbasics.Assertions.*;
...
assertThat(beaver).hasFirstName("Jerry");
IDEs have an easier time suggesting assertion builders with AssertJ because everything is a method call on the previous type. IDEs have a harder time suggesting Hamcrest matchers because there is very little to base the context on. |
110.2. Example Library Assertions
The following example shows a small peek at the syntax for each of the four assertion
libraries used within a JUnit Jupiter test case. They are shown without an
import static
declaration to better see where each comes from.
package info.ejava.examples.app.testing.testbasics.jupiter;
import lombok.extern.slf4j.Slf4j;
import org.hamcrest.MatcherAssert;
import org.hamcrest.Matchers;
import org.junit.Assert;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
@Slf4j
class AssertionsTest {
int lhs=1;
int rhs=1;
int expected=2;
@Test
void one_and_one() {
//junit 4/Vintage assertion
Assert.assertEquals(expected, lhs+rhs); (1)
//Jupiter assertion
Assertions.assertEquals(expected, lhs+rhs); (1)
//hamcrest assertion
MatcherAssert.assertThat(lhs+rhs, Matchers.is(expected)); (2)
//AssertJ assertion
org.assertj.core.api.Assertions.assertThat(lhs+rhs).isEqualTo(expected); (3)
}
}
1 | JUnit assertions are expressed using a static method and one or more subject arguments |
2 | Hamcrest asserts that the subject matches a Matcher that can be infinitely extended |
3 | AssertJ’s extensible subject assertion provides type-specific assertion builders |
110.3. Assertion Failures
Assertions will report a generic message when they fail. If we change the expected result of the example from 2 to 3, the following error message will be reported. It contains a generic message of the assertion failure (location not shown) without context other than the test case and test method it was generated from (not shown).
java.lang.AssertionError: expected:<3> but was:<2> (1)
1 | we are not told what 3 and 2 are within a test except that 3 was expected and they are not equal |
110.3.1. Adding Assertion Context
However, there are times when some additional text can help provide more context about the problem. The following example shows the previous test augmented with an optional message. Note that JUnit Jupiter assertions permit the lazy instantiation of complex message strings using a lambda. AssertJ provides for lazy instantiation using the String.format semantics built into the as() method.
@Test
void one_and_one_description() {
//junit 4/Vintage assertion
Assert.assertEquals("math error", expected, lhs+rhs); (1)
//Jupiter assertions
Assertions.assertEquals(expected, lhs+rhs, "math error"); (2)
Assertions.assertEquals(expected, lhs+rhs,
()->String.format("math error %d+%d!=%d",lhs,rhs,expected)); (3)
//hamcrest assertion
MatcherAssert.assertThat("math error",lhs+rhs, Matchers.is(expected)); (4)
//AssertJ assertion
org.assertj.core.api.Assertions.assertThat(lhs+rhs)
.as("math error") (5)
.isEqualTo(expected);
org.assertj.core.api.Assertions.assertThat(lhs+rhs)
.as("math error %d+%d!=%d",lhs,rhs,expected) (6)
.isEqualTo(expected);
}
1 | JUnit Vintage syntax places optional message as first parameter |
2 | JUnit Jupiter moves the optional message to the last parameter |
3 | JUnit Jupiter also allows optional message to be expressed thru a lambda function |
4 | Hamcrest passes message in first position like JUnit Vintage syntax |
5 | AssertJ uses an as() builder method to supply a message |
6 | AssertJ also supports String.format and args when expressing message |
java.lang.AssertionError: math error expected:<3> but was:<2> (1)
1 | an extra "math error" was added to the reported error to help provide context |
Because AssertJ uses chaining, your description (as("description")) must come before the first failing assertion, even though multiple asserts are supported in a single call chain. |
110.4. Testing Multiple Assertions
The above examples showed several ways to assert the same thing with different libraries. However, evaluation would have stopped at the first failure in each test method. There are many times when we want to know the results of several assertions. For example, take the case where we are testing different fields in a returned object (e.g., person.getFirstName(), person.getLastName()). We may want to see all the results to give us better insight for the entire problem.
JUnit Jupiter and AssertJ support testing multiple assertions prior to failing a specific test and then go on to report the results of each failed assertion.
110.4.1. JUnit Jupiter Multiple Assertion Support
JUnit Jupiter uses a variable argument list of Java 8 lambda functions in order to provide support for testing multiple assertions prior to failing a test. The following example will execute both assertions and report the result of both when they fail.
@Test
void junit_all() {
Assertions.assertAll("all assertions",
() -> Assertions.assertEquals(expected, lhs+rhs, "jupiter assertion"), (1)
() -> Assertions.assertEquals(expected, lhs+rhs,
()->String.format("jupiter format %d+%d!=%d",lhs,rhs,expected))
);
}
1 | JUnit Jupiter uses Java 8 lambda functions to execute and report results for multiple assertions |
110.4.2. AssertJ Multiple Assertion Support
AssertJ uses a special factory class (SoftAssertions) from which to build assertions in support of that capability. Notice also that we have the chance to inspect the state of the assertions before failing the test. That can give us the chance to gather additional information to place into the log. We also have the option of not technically failing the test under certain conditions.
import org.assertj.core.api.SoftAssertions;
...
@Test
public void all() {
Person p = beaver; //change to eddie to cause failures
SoftAssertions softly = new SoftAssertions(); (1)
softly.assertThat(p.getFirstName()).isEqualTo("Jerry");
softly.assertThat(p.getLastName()).isEqualTo("Mathers");
softly.assertThat(p.getDob()).isAfter(wally.getDob());
log.info("error count={}", softly.errorsCollected().size()); (2)
softly.assertAll(); (3)
}
1 | a special SoftAssertions builder is used to construct assertions |
2 | we are able to inspect the status of the assertions before failure thrown |
3 | assertion failure thrown during later assertAll() call |
110.5. Asserting Exceptions
JUnit Jupiter and AssertJ provide direct support for inspecting Exceptions within the body of the test method. Surprisingly, Hamcrest offers no built-in matchers to directly inspect Exceptions.
110.5.1. JUnit Jupiter Exception Handling Support
JUnit Jupiter allows explicit testing for Exceptions at specific points within the test method. The type of Exception is checked and the instance is made available for follow-on assertions to inspect. Beyond that, JUnit assertions provide no direct support for inspecting the Exception.
import org.junit.jupiter.api.Assertions;
...
@Test
public void exceptions() {
RuntimeException ex1 = Assertions.assertThrows(RuntimeException.class, (1)
() -> {
throw new IllegalArgumentException("example exception");
});
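    Assertions.assertEquals("example exception", ex1.getMessage()); //illustrative addition: follow-on inspection relies on generic assertions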
}
1 | JUnit Jupiter provides means to assert an Exception thrown and provide it for inspection |
110.5.2. AssertJ Exception Handling Support
AssertJ has an Exception testing capability that is similar to JUnit Jupiter — where an explicit check for the Exception to be thrown is performed and the thrown Exception is made available for inspection. The big difference here is that AssertJ provides Exception assertions that can directly inspect the properties of Exceptions using natural-language calls.
Throwable ex1 = catchThrowable( (1)
()->{ throw new IllegalArgumentException("example exception"); });
assertThat(ex1).hasMessage("example exception"); (2)
RuntimeException ex2 = catchThrowableOfType( (1)
()->{ throw new IllegalArgumentException("example exception"); },
RuntimeException.class);
assertThat(ex2).hasMessage("example exception"); (2)
1 | AssertJ provides means to assert an Exception thrown and provide it for inspection |
2 | AssertJ provides assertions to directly inspect Exceptions |
AssertJ goes one step further by providing an assertion that not only verifies the exception is thrown, but can also tack on assertion builders to make on-the-spot assertions about the exception thrown. This has the same end functionality as the previous example, except:
-
previous method returned the exception thrown that can be subject to independent inspection
-
this technique returns an assertion builder with the capability to build further assertions against the exception
assertThatThrownBy( (1)
() -> {
throw new IllegalArgumentException("example exception");
}).hasMessage("example exception");
assertThatExceptionOfType(RuntimeException.class).isThrownBy( (1)
() -> {
throw new IllegalArgumentException("example exception");
}).withMessage("example exception");
1 | AssertJ provides means to use the caught Exception as an assertion factory to directly inspect the Exception in a single chained call |
110.6. Asserting Dates
AssertJ has built-in support for date assertions. We have to add a separate library to gain date matchers for Hamcrest.
110.6.1. AssertJ Date Handling Support
The following shows an example of AssertJ’s built-in, natural-language support for Dates.
import static org.assertj.core.api.Assertions.*;
...
@Test
public void dateTypes() {
assertThat(beaver.getDob()).isAfter(wally.getDob());
assertThat(beaver.getDob())
.as("beaver NOT younger than wally")
.isAfter(wally.getDob()); (1)
}
1 | AssertJ builds date assertions that directly inspect dates using natural-language |
110.6.2. Hamcrest Date Handling Support
Hamcrest can be extended to support date matches by adding an external hamcrest-date
library.
<!-- for hamcrest date comparisons -->
<dependency>
<groupId>org.exparity</groupId>
<artifactId>hamcrest-date</artifactId>
<version>2.0.7</version>
<scope>test</scope>
</dependency>
That dependency provides a DateMatchers class with matchers that can be used to express date assertions using natural-language expressions.
import org.exparity.hamcrest.date.DateMatchers;
import static org.hamcrest.MatcherAssert.assertThat;
...
@Test
public void dateTypes() {
//requires additional org.exparity:hamcrest-date library
assertThat(beaver.getDob(), DateMatchers.after(wally.getDob()));
assertThat("beaver NOT younger than wally", beaver.getDob(),
DateMatchers.after(wally.getDob())); (1)
}
1 | hamcrest-date adds matchers that can directly inspect dates |
111. Mockito Basics
Without much question — we will have more complex software to test than what we have briefly shown so far in this lesson. The software will inevitably be structured into layered dependencies where one layer cannot be tested without the lower layers it calls. To implement unit tests, we have a few choices:
-
use the real lower-level components (i.e., "all the way to the DB and back", remember — I am calling that choice "Unit Integration Tests" if it can be technically implemented/managed within a single JVM)
-
create a stand-in for the lower-level components (aka "test double")
We will likely take the first approach during integration testing but the lower-level components may bring in too many dependencies to realistically test during a separate unit’s own detailed testing.
111.1. Test Doubles
The second approach ("test double") has a few options:
-
fake - using a scaled down version of the real component (e.g., in-memory SQL database)
-
stub - simulation of the real component by using pre-cached test data
-
mock - defining responses to calls and the ability to inspect the actual incoming calls made
111.2. Mock Support
spring-boot-starter-test brings in a pre-integrated, mature open source mocking framework called Mockito. See below for an example unit test augmented with mocks. It uses a simple Java Map<String, String> to demonstrate some simulation and inspection concepts. In a real unit test, the Java Map interface would stand in for:
-
an interface we are designing (i.e., testing the interface contract we are designing from the client-side)
-
a test double we want to inject into a component under test that will answer with pre-configured answers and be able to inspect how called (e.g., testing collaborations within a white box test)
111.3. Mockito Example Declarations
package info.ejava.examples.app.testing.testbasics.mockito;
import org.junit.jupiter.api.*;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.ArgumentCaptor;
import org.mockito.Captor;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import java.util.Map;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.*;
@ExtendWith(MockitoExtension.class)
public class ExampleMockitoTest {
@Mock //creating a mock to configure for use in each test
private Map<String, String> mapMock;
@Captor
private ArgumentCaptor<String> stringArgCaptor;
-
@ExtendWith bootstraps Mockito behavior into the test case
-
@Mock can be used to inject a mock of the defined type; a "nice mock" is immediately available and will react in a potentially useful manner by default
-
@Captor can be used to capture input parameters passed to the mock calls
@InjectMocks will be demonstrated in later white box testing — where the defined mocks get injected into component under test. |
111.4. Mockito Example Test
@Test
public void listMap() {
//define behavior of mock during test
when(mapMock.get(stringArgCaptor.capture()))
.thenReturn("springboot", "testing"); (1)
//conduct test
int size = mapMock.size();
String secret1 = mapMock.get("happiness");
String secret2 = mapMock.get("joy");
//evaluate results
verify(mapMock).size(); //verify called once (3)
verify(mapMock, times(2)).get(anyString()); //verify called twice
//verify what was given to mock
assertThat(stringArgCaptor.getAllValues().get(0)).isEqualTo("happiness"); (2)
assertThat(stringArgCaptor.getAllValues().get(1)).isEqualTo("joy");
//verify what was returned by mock
assertThat(size).as("unexpected size").isEqualTo(0);
assertThat(secret1).as("unexpected first result").isEqualTo("springboot");
assertThat(secret2).as("unexpected second result").isEqualTo("testing");
}
1 | when()/then() define custom conditions and responses for mock within scope of test |
2 | getValue()/getAllValues() can be called on the captor to obtain value(s) passed to the mock |
3 | verify() can be called to verify what was called of the mock |
mapMock.size() returned 0 while mapMock.get() returned values.
We defined behavior for mapMock.get() but left other interface methods
in their default, "nice mock" state.
|
112. BDD Acceptance Test Terminology
Behavior-Driven Development (BDD) can be part of an agile development process and adds the use of natural-language constructs to express behaviors and outcomes. The BDD behavior specifications are stories with a certain structure that contain acceptance criteria following a "given", "when", "then" structure:
-
given - initial context
-
when - event triggering scenario under test
-
then - expected outcome
112.1. Alternate BDD Syntax Support
There is also a strong push to express acceptance criteria in code that can be executed versus a document. Although far from a perfect solution, JUnit, AssertJ, and Mockito do provide some syntax support for BDD-based testing:
-
JUnit Jupiter allows the assignment of meaningful natural-language phrases for test case and test method names. Nested classes can also be employed to provide additional expression.
-
Mockito defines alternate method names to better map to the given/when/then language of BDD
-
AssertJ defines alternate assertion factory names using then() and and.then() wording (all three libraries are combined in the sketch below)
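The original example for this section is not included in these notes. The following is a minimal sketch of how the three vocabularies can combine, reusing the Map mock idea from the Mockito section; the test case name and values are illustrative only.
import java.util.Map;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.DisplayNameGeneration;
import org.junit.jupiter.api.DisplayNameGenerator;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import static org.assertj.core.api.BDDAssertions.and;
import static org.mockito.BDDMockito.given;
import static org.mockito.BDDMockito.then;
@ExtendWith(MockitoExtension.class)
@DisplayNameGeneration(DisplayNameGenerator.ReplaceUnderscores.class)
@DisplayName("Example BDD Syntax") //hypothetical test case
class ExampleBddTest {
    @Mock
    private Map<String, String> mapMock;
    @Test
    void map_returns_configured_value() {
        //given - a map stubbed to know one secret
        given(mapMock.get("happiness")).willReturn("springboot"); //BDDMockito alias for when/thenReturn
        //when - asking for the secret
        String result = mapMock.get("happiness");
        //then - the mock was consulted and returned the configured value
        then(mapMock).should().get("happiness"); //BDDMockito verification
        and.then(result).isEqualTo("springboot"); //AssertJ BDD assertion
    }
}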
112.3. Example BDD Syntax Output
When we run our test, natural-language text derived from the test case and method display names is displayed.
112.4. JUnit Options Expressed in Properties
We can define a global setting for the display name generator using junit-platform.properties
junit.jupiter.displayname.generator.default = \
org.junit.jupiter.api.DisplayNameGenerator$ReplaceUnderscores
This can also be used to express:
-
method order
-
class order
-
test instance lifecycle
-
@Parameterized test naming
-
parallel execution
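For example, a junit-platform.properties sketch; the property names below are standard JUnit 5 platform properties, but verify them against the user guide for your JUnit version.
# src/test/resources/junit-platform.properties
junit.jupiter.testinstance.lifecycle.default = per_class
junit.jupiter.testmethod.order.default = org.junit.jupiter.api.MethodOrderer$MethodName
junit.jupiter.execution.parallel.enabled = true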
113. Tipping Example
To go much further in describing testing, we need to assemble a small set of interfaces and classes to test. I am going to use a common problem: several people go out for a meal together and need to split the check after factoring in the tip.
-
TipCalculator - returns the amount of tip required when given a certain bill total and rating of service. We could have multiple evaluators for tips and have defined an interface for clients to depend upon.
-
BillCalculator - provides the ability to calculate the share of an equally split bill given a total, service quality, and number of people.
These interfaces/classes will be the subject of the following Unit Integration Tests involving the Spring context.
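The interface declarations are not listed in these notes. The following minimal sketch is inferred from the calls made in the tests that follow; only the enum values actually used in the examples are shown, and each type would normally live in its own source file.
import java.math.BigDecimal;
public enum ServiceQuality { FAIR, GOOD } //values taken from the example tests
public interface TipCalculator {
    //signature inferred from tipCalculator.calcTip(billTotal, serviceQuality) in the tests
    BigDecimal calcTip(BigDecimal amount, ServiceQuality serviceQuality);
}
public interface BillCalculator {
    //signature inferred from billCalculator.calcShares(billTotal, service, numPeople) in the tests
    BigDecimal calcShares(BigDecimal total, ServiceQuality serviceQuality, int numPeople);
}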
114. Review: Unit Test Basics
In previous chapters we have looked at pure unit test constructs with an eye on JUnit, assertion libraries, and a little Mockito. In preparation for the unit integration topic and adding the Spring context in the following chapter, I want to review the simple test constructs in terms of the Tipping example.
114.1. Review: POJO Unit Test Setup
@DisplayNameGeneration(DisplayNameGenerator.ReplaceUnderscores.class) (1)
@DisplayName("Standard Tipping Calculator")
public class StandardTippingCalculatorImplTest {
//subject under test
private TipCalculator tipCalculator; (2)
@BeforeEach (3)
void setup() { //simulating a complex initialization
tipCalculator=new StandardTippingImpl();
}
1 | DisplayName is part of BDD naming and optional for all tests |
2 | there will be one or more objects under test. These will be POJOs. |
3 | @BeforeEach plays the role of the container, wiring up objects under test |
114.2. Review: POJO Unit Test
The unit test is being expressed in terms of BDD conventions. It is broken up into "given", "when", and "then" blocks and highlighted with use of BDD syntax where provided (JUnit and AssertJ in this case).
@Test
public void given_fair_service() { (1)
//given - a $100 bill with FAIR service (2)
BigDecimal billTotal = new BigDecimal(100);
ServiceQuality serviceQuality = ServiceQuality.FAIR;
//when - calculating tip (2)
BigDecimal resultTip = tipCalculator.calcTip(billTotal, serviceQuality);
//then - expect a result that is 15% of the $100 total (2)
BigDecimal expectedTip = billTotal.multiply(BigDecimal.valueOf(0.15));
then(resultTip).isEqualTo(expectedTip); (3)
}
1 | using JUnit snake_case natural language expression for test name |
2 | BDD convention of given, when, then blocks. Helps to be short and focused |
3 | using AssertJ assertions with BDD syntax |
114.3. Review: Mocked Unit Test Setup
The following example moves up a level in the hierarchy and forces us to test a class that has a dependency. A pure unit test would mock out all dependencies, which we are doing here for TipCalculator.
@ExtendWith(MockitoExtension.class) (1)
@DisplayNameGeneration(DisplayNameGenerator.ReplaceUnderscores.class)
@DisplayName("Bill CalculatorImpl Mocked Unit Test")
public class BillCalculatorMockedTest {
//subject under test
private BillCalculator billCalculator;
@Mock (2)
private TipCalculator tipCalculatorMock;
@BeforeEach
void init() { (3)
billCalculator = new BillCalculatorImpl(tipCalculatorMock);
}
1 | Add Mockito extension to JUnit |
2 | Identify which interfaces to Mock |
3 | In this example, we are manually wiring up the subject under test |
114.4. Review: Mocked Unit Test
The following shows the TipCalculator mock being instructed on what to return based on input criteria and making call activity available to the test.
@Test
public void calc_shares_for_people_including_tip() {
//given - we have a bill for 4 people and tip calculator that returns tip amount
BigDecimal billTotal = new BigDecimal(100.0);
ServiceQuality service = ServiceQuality.GOOD;
BigDecimal tip = billTotal.multiply(new BigDecimal(0.18));
int numPeople = 4;
//configure mock
given(tipCalculatorMock.calcTip(billTotal, service)).willReturn(tip); (1)
//when - call method under test
BigDecimal shareResult = billCalculator.calcShares(billTotal, service, numPeople);
//then - tip calculator should be called once to get result
then(tipCalculatorMock).should(times(1)).calcTip(billTotal,service); (2)
//verify correct result
BigDecimal expectedShare = billTotal.add(tip).divide(new BigDecimal(numPeople));
and.then(shareResult).isEqualTo(expectedShare);
}
1 | configuring response behavior of Mock |
2 | optionally inspecting subject calls made |
114.5. Alternative Mocked Unit Test
The final unit test example shows how we can leverage Mockito to instantiate our subject(s) under test and inject them with mocks.
That takes over at least one job the @BeforeEach
was performing.
@ExtendWith(MockitoExtension.class)
@DisplayNameGeneration(DisplayNameGenerator.ReplaceUnderscores.class)
@DisplayName("Bill CalculatorImpl")
public class BillCalculatorImplTest {
@Mock
TipCalculator tipCalculatorMock;
/*
Mockito is instantiating this implementation class for us and injecting Mocks
*/
@InjectMocks (1)
BillCalculatorImpl billCalculator;
1 | instantiates and injects our subject under test |
115. Spring Boot Unit Integration Test Basics
Pure unit testing can be efficiently executed without a Spring context, but there will eventually be a time to either:
-
integrate peer components with one another (horizontal integration)
-
integrate layered components to test the stack (vertical integration)
These goals are not easily accomplished without a Spring context and whatever is created outside of a Spring context will be different from production. Spring Boot and the Spring context can be brought into the test picture to more seamlessly integrate with other components and component infrastructure present in the end application. Although valuable, it will come at a performance cost and potentially add external resource dependencies — so don’t look for it to replace the lightweight pure unit testing alternatives covered earlier.
115.1. Adding Spring Boot to Testing
There are two primary things that will change with our Spring Boot integration test:
-
define a Spring context for our test to operate using
@SpringBootTest
-
inject components we wish to use/test from the Spring context into our tests using
@Autowire
I found the following article: Integration Tests with @SpringBootTest, by Tom Hombergs and his "Testing with Spring Boot" series to be quite helpful in clarifying my thoughts and preparing these lecture notes. The Spring Boot Testing Features web page provides detailed coverage of the test constructs that go well beyond what I am covering at this point in the course. We will pick up more of that material as we get into web and data tier topics. |
115.2. @SpringBootTest
To obtain a Spring context and leverage the auto-configuration capabilities of Spring Boot, we can take the easy way out and annotate our test with @SpringBootTest. This will instantiate a default Spring context based on the configuration that is defined or can be found.
package info.ejava.examples.app.testing.testbasics.tips;
...
import org.springframework.boot.test.context.SpringBootTest;
...
@SpringBootTest (1)
public class BillCalculatorNTest {
1 | using the default configuration search rules |
115.3. Default @SpringBootConfiguration Class
By default, Spring Boot will look for a class annotated with @SpringBootConfiguration
that is present at or above the Java package containing the test. Since we have a
class in a parent directory that represents our @SpringBootApplication
and that annotation
wraps @SpringBootConfiguration
, that class will be used to define the Spring context
for our test.
package info.ejava.examples.app.testing.testbasics;
...
@SpringBootApplication
// wraps => @SpringBootConfiguration
public class TestBasicsApp {
public static void main(String...args) {
SpringApplication.run(TestBasicsApp.class,args);
}
}
115.4. Conditional Components
When using the @SpringBootApplication configuration, all components normally part of the application will be part of the test. Be sure to define auto-configuration exclusions for any production components that need to be turned off during testing; see the property sketch below.
|
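For example, a test profile can switch off an unneeded production auto-configuration with the standard spring.autoconfigure.exclude property; the excluded class below is only illustrative.
# application-test.properties
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration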
115.5. Explicit Reference to @SpringBootConfiguration
Alternatively, we could have made an explicit reference as to which class to use if it was not in a standard relative directory or we wanted to use a custom version of the application for testing.
import info.ejava.examples.app.testing.testbasics.TestBasicsApp;
...
@SpringBootTest(classes = TestBasicsApp.class)
public class BillCalculatorNTest {
115.6. Explicit Reference to Components
Assuming the components required for the test are known and manageable in number…
@Component
@RequiredArgsConstructor
public class BillCalculatorImpl implements BillCalculator {
private final TipCalculator tipCalculator;
...
@Component
public class StandardTippingImpl implements TipCalculator {
...
We can explicitly reference component classes needed to be in the Spring context.
@SpringBootTest(classes = {BillCalculatorImpl.class, StandardTippingImpl.class})
public class BillCalculatorNTest {
@Autowired
BillCalculator billCalculator;
115.7. Active Profiles
Prior to adding the Spring context, Spring Boot configuration and logging conventions were not being enacted. However, now that we are bringing in a Spring context — we can designate special profiles to be activated for our context. This can allow us to define properties that are more relevant to our tests (e.g., expressive log context, increased log verbosity).
package info.ejava.examples.app.testing.testbasics.tips;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;
@SpringBootTest
@ActiveProfiles("test") (1)
public class BillCalculatorNTest {
1 | activating the "test" profile for this test |
# application-test.properties (1)
logging.level.info.ejava.examples.app.testing.testbasics=DEBUG
1 | "test" profile setting loggers for package under test to DEBUG severity threshold |
115.9. Example @SpringBootTest NTest Output
When we run our test we get the following console information printed. Note that
-
the
DEBUG
messages are from theBillCalculatorImpl
-
DEBUG
is being printed because the "test" profile is active and the "test" profile set the severity threshold for that package to beDEBUG
-
method and line number information is also displayed because the test profile defines an expressive log event pattern
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.7.0)
14:17:15.427 INFO BillCalculatorNTest#logStarting:55 - Starting BillCalculatorNTest
14:17:15.429 DEBUG BillCalculatorNTest#logStarting:56 - Running with Spring Boot v2.2.6.RELEASE, Spring v5.2.5.RELEASE
14:17:15.430 INFO BillCalculatorNTest#logStartupProfileInfo:655 - The following profiles are active: test
14:17:16.135 INFO BillCalculatorNTest#logStarted:61 - Started BillCalculatorNTest in 6.155 seconds (JVM running for 8.085)
14:17:16.138 DEBUG BillCalculatorImpl#calcShares:24 - tip=$9.00, for $50.00 and GOOD service
14:17:16.142 DEBUG BillCalculatorImpl#calcShares:33 - share=$14.75 for $50.00, 4 people and GOOD service
14:17:16.143 INFO BillHandler#run:24 - bill total $50.00, share=$14.75 for 4 people, after adding tip for GOOD service
14:17:16.679 DEBUG BillCalculatorImpl#calcShares:24 - tip=$18.00, for $100.00 and GOOD service
14:17:16.679 DEBUG BillCalculatorImpl#calcShares:33 - share=$29.50 for $100.00, 4 people and GOOD service
115.10. Alternative Test Slices
The @SpringBootTest
annotation is a general purpose test annotation that likely will
work in many generic cases. However, there are other cases where we may need a specific
database or other technologies available.
Spring Boot pre-defines a set of Test Slices that can establish more specialized test environments.
The following are a few examples:
-
@DataJpaTest - JPA/RDBMS testing for the data tier
-
@DataMongoTest - MongoDB testing for the data tier
-
@JsonTest - JSON data validation for marshalled data
-
@RestClientTest - executing tests that perform actual HTTP calls for the web tier
We will revisit these topics as we move through the course and construct tests relative to additional domains and technologies.
116. Mocking Spring Boot Unit Integration Tests
In the previous @SpringBootTest
example I showed you how to instantiate a complete Spring context
to inject and execute test(s) against an integrated set of real components. However,
in some cases we may need the Spring context — but do not need or want the
interfacing components. In this example I am going to mock out the TipCalculator
to produce whatever the test requires.
import org.springframework.boot.test.mock.mockito.MockBean;
import static org.assertj.core.api.BDDAssertions.and;
import static org.mockito.BDDMockito.given;
import static org.mockito.BDDMockito.then;
import static org.mockito.Mockito.times;
@SpringBootTest(classes={BillCalculatorImpl.class})//defines custom Spring context (1)
@ActiveProfiles("test")
@DisplayNameGeneration(DisplayNameGenerator.ReplaceUnderscores.class)
@DisplayName("Bill CalculatorImpl Mocked Integration")
public class BillCalculatorMockedNTest {
@Autowired //subject under test (2)
private BillCalculator billCalculator;
@MockBean //will satisfy Autowired injection point within BillCalculatorImpl (3)
private TipCalculator tipCalculatorMock;
1 | defining a custom context that excludes TipCalculator component(s) |
2 | injecting BillCalculator bean under test from Spring context |
3 | defining a mock to be injected into BillCalculatorImpl in Spring context |
116.1. Example @SpringBoot/Mockito Test
The actual test is similar to the earlier example when we injected a real TipCalculator
from the Spring context.
However, since we have a mock in this case we must define its behavior
and then optionally determine if it was called.
@Test
public void calc_shares_for_people_including_tip() {
//given - we have a bill for 4 people and tip calculator that returns tip amount
BigDecimal billTotal = BigDecimal.valueOf(100.0);
ServiceQuality service = ServiceQuality.GOOD;
BigDecimal tip = billTotal.multiply(BigDecimal.valueOf(0.18));
int numPeople = 4;
//configure mock
given(tipCalculatorMock.calcTip(billTotal, service)).willReturn(tip); (1)
//when - call method under test (2)
BigDecimal shareResult = billCalculator.calcShares(billTotal, service, numPeople);
//then - tip calculator should be called once to get result
then(tipCalculatorMock).should(times(1)).calcTip(billTotal,service); (3)
//verify correct result
BigDecimal expectedShare = billTotal.add(tip).divide(BigDecimal.valueOf(numPeople));
and.then(shareResult).isEqualTo(expectedShare); (4)
}
1 | instruct the Mockito mock to return a tip result |
2 | call method on subject under test |
3 | verify mock was invoked N times with the value of the bill and service |
4 | verify with AssertJ that the resulting share value was the expected share value |
117. Maven Unit Testing Basics
At this point we have some technical basics for how tests are syntactically expressed. Now lets take a look at how they fit into a module and how we can execute them as part of the Maven build.
You learned in earlier lessons that production artifacts that are part of our
deployed artifact are placed in src/main
(java
and resources
). Our test artifacts
are placed in src/test
(java
and resources
). The following example shows the
layout of the module we are currently working with.
|-- pom.xml
`-- src
`-- test
|-- java
| `-- info
| `-- ejava
| `-- examples
| `-- app
| `-- testing
| `-- testbasics
| |-- PeopleFactory.java
| |-- jupiter
| | |-- AspectJAssertionsTest.java
| | |-- AssertionsTest.java
| | |-- ExampleJUnit5Test.java
| | `-- HamcrestAssertionsTest.java
| |-- mockito
| | `-- ExampleMockitoTest.java
| |-- tips
| | |-- BillCalculatorContractTest.java
| | |-- BillCalculatorImplTest.java
| | |-- BillCalculatorMockedNTest.java
| | |-- BillCalculatorNTest.java
| | `-- StandardTippingCalculatorImplTest.java
| `-- vintage
| `-- ExampleJUnit4Test.java
`-- resources
|-- application-test.properties
117.1. Maven Surefire Plugin
The
Maven Surefire plugin looks for classes that have been compiled from the src/test/java
source tree that have a
prefix of "Test" or suffix of "Test", "Tests", or "TestCase" by default.
Surefire starts up the JUnit context(s) and provides test results to the console
and target/surefire-reports directory.
Surefire is part of the standard "jar" profile we use for normal Java projects and will run automatically. The following shows the final output after running all the unit tests for the module.
$ mvn clean test
...
[INFO] Results:
[INFO]
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14.280 s
Consult online documentation on how Maven Surefire can be configured. However, I will demonstrate at least one feature that allows us to filter tests executed.
117.2. Filtering Tests
One new JUnit Jupiter feature is the ability to categorize tests using @Tag
annotations.
The following example shows a unit integration
test annotated with two tags: "springboot" and "tips". The "springboot" tag was added to
all tests that launch the Spring context. The "tips" tag was added to all tests that
are part of the tips example set of components.
import org.junit.jupiter.api.*;
...
@SpringBootTest(classes = {BillCalculatorImpl.class}) //defining custom Spring context
@Tag("springboot") @Tag("tips") (1)
...
public class BillCalculatorMockedNTest {
1 | test case has been tagged with JUnit "springboot" and "tips" tag values |
117.3. Filtering Tests Executed
We can use the tag names as a "groups" property specification to Maven Surefire to only run matching tests. The following example requests all tests tagged with "tips" but not tagged with "springboot" are to be run. Notice we have fewer tests executed and a much faster completion time.
$ mvn clean test -Dgroups='tips & !springboot' -Pbdd (1) (2)
...
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running Bill Calculator Contract
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.41 s - in Bill Calculator Contract
[INFO] Running Bill CalculatorImpl
15:43:47.605 [main] DEBUG info.ejava.examples.app.testing.testbasics.tips.BillCalculatorImpl - tip=$50.00, for $100.00 and GOOD service
15:43:47.608 [main] DEBUG info.ejava.examples.app.testing.testbasics.tips.BillCalculatorImpl - share=$37.50 for $100.00, 4 people and GOOD service
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.165 s - in Bill CalculatorImpl
[INFO] Running Standard Tipping Calculator
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.004 s - in Standard Tipping Calculator
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.537 s
1 | execute tests with tag "tips" and without tag "springboot" |
2 | activating "bdd" profile that configures Surefire reports within the example Maven environment setup to understand display names |
117.4. Maven Failsafe Plugin
The
Maven Failsafe plugin looks for classes compiled from the src/test/java
tree that have a
prefix of "IT" or suffix of "IT", or "ITCase" by default.
Like Surefire, Failsafe is part of the standard Maven "jar" profile and runs later in the
build process. However, unlike Surefire that runs within one
Maven phase (test
), Failsafe runs within the scope of four Maven phases:
pre-integration-test
,
integration-test
,
post-integration-test
, and
verify
-
pre-integration-test - when external resources get started (e.g., web server)
-
integration-test - when tests are executed
-
post-integration-test - when external resources are stopped/cleaned up (e.g., shutdown web server)
-
verify - when results of tests are evaluated and build potentially fails
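Where a parent pom has not already done so, the Failsafe goals can be bound with a declaration like the following sketch; the integration-test and verify goals map onto the phases listed above:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal> <!-- executes *IT tests -->
                <goal>verify</goal> <!-- evaluates results after post-integration-test cleanup -->
            </goals>
        </execution>
    </executions>
</plugin>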
117.5. Failsafe Overhead
Aside from the integration tests, all other processes are normally started and stopped through the use of Maven plugins. Multiple phases are required for IT tests so that:
-
all resources are ready to test once the tests begin
-
all resources can be shutdown prior to failing the build for a failed test
With the robust capability to stand up a Spring context within a single JVM, we have limited use for Failsafe when testing Spring Boot applications. The exception is when we truly need to interface with something external — like standing up a real database or hosting endpoints in Docker images. I will wait until we get to topics like that before showing examples. Just know that when Maven references "integration tests", they come with extra hooks and overhead that may not be technically needed for integration tests — like the ones we have demonstrated in this lesson — that can be executed within a single JVM.
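For reference, a minimal sketch of enabling Failsafe in a pom, binding its integration-test and verify goals (version omitted; inherit it from a parent pom):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <!-- integration-test executes the IT classes; verify evaluates the results and can fail the build -->
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>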
118. @TestConfiguration
Tests often require additional components that are not part of the Spring context under test — or need to override one or more of those components.
Spring Boot supplies a @TestConfiguration annotation that:
-
allows the class to be skipped by the standard component scan
-
is loaded into a @SpringBootTest to add or replace components
118.1. Example Spring Context
In our example Spring context, we will have a TipCalculator
component located using a component scan.
It will have the name "standardTippingImpl" if we do not supply an override in the @Component
annotation.
@Primary (1)
@Component
public class StandardTippingImpl implements TipCalculator {
1 | declaring type as primary to make example more significant |
That bean gets injected into BillCalculatorImpl.tipCalculator
because it implements the required type.
@Component
@RequiredArgsConstructor
public class BillCalculatorImpl implements BillCalculator {
private final TipCalculator tipCalculator;
118.2. Test TippingCalculator
Our intent here is to manually write a stub and have it replace the TipCalculator
from the application’s Spring context.
import org.springframework.boot.test.context.TestConfiguration;
...
@TestConfiguration(proxyBeanMethods = false) //skipped in component scan -- manually included (1)
public class MyTestConfiguration {
@Bean
public TipCalculator standardTippingImpl() { (2)
return new TipCalculator() {
@Override
public BigDecimal calcTip(BigDecimal amount, ServiceQuality serviceQuality) {
return BigDecimal.ZERO; (3)
}
};
}
}
1 | @TestConfiguration annotation prevents class from being picked up in normal component scan |
2 | standardTippingImpl name matches existing component |
3 | test-specific custom response |
118.3. Enable Component Replacement
Since we are going to replace an existing component, we need to enable bean overrides using the following property definition.
@SpringBootTest(
properties = "spring.main.allow-bean-definition-overriding=true"
)
public class TestConfigurationNTest {
Otherwise, we end up with the following error when we make our follow-on changes.
***************************
APPLICATION FAILED TO START
***************************
Description:
The bean 'standardTippingImpl', defined in class path resource
[.../testconfiguration/MyTestConfiguration.class], could not be registered.
A bean with that name has already been defined in file
[.../tips/StandardTippingImpl.class] and overriding is disabled.
Action:
Consider renaming one of the beans or enabling overriding by setting
spring.main.allow-bean-definition-overriding=true
118.4. Embedded TestConfiguration
We can have the @TestConfiguration
class automatically found using an embedded static class.
@SpringBootTest(properties={"..."})
public class TestConfigurationNTest {
@Autowired
BillCalculator billCalculator; (1)
@TestConfiguration(proxyBeanMethods = false)
static class MyEmbeddedTestConfiguration { (2)
@Bean
public TipCalculator standardTippingImpl() { ... }
}
1 | the injected billCalculator will be supplied with the @Bean from the @TestConfiguration |
2 | embedded static class used automatically |
118.5. External TestConfiguration
Alternatively, we can place the configuration in a separate/stand-alone class.
@TestConfiguration(proxyBeanMethods = false)
public class MyTestConfiguration {
@Bean
public TipCalculator tipCalculator() {
return new TipCalculator() {
@Override
public BigDecimal calcTip(BigDecimal amount, ServiceQuality serviceQuality) {
return BigDecimal.ZERO;
}
};
}
}
118.6. Using External Configuration
The external @TestConfiguration
will only be used if specifically named in either:
-
@SpringBootTest.classes
-
@ContextConfiguration.classes
-
@Import.value
Pick one way.
@SpringBootTest(
classes=MyTestConfiguration.class, //way1 (1)
properties = "spring.main.allow-bean-definition-overriding=true"
)
@ContextConfiguration(classes=MyTestConfiguration.class) //way2 (2)
@Import(MyTestConfiguration.class) //way3 (3)
public class TestConfigurationNTest {
1 | way1 leverages the @SpringBootTest.classes configuration |
2 | way2 pre-dates @SpringBootTest |
3 | way3 pre-dates @SpringBootTest and is a standard way to import a configuration definition from one class to another |
118.7. TestConfiguration Result
Running the following test results in:
-
a single TipCalculator registered in the list, because the candidate beans have the same name and overriding is enabled
-
the TipCalculator used is one of the @TestConfiguration-supplied components
@SpringBootTest(
classes=MyTestConfiguration.class,
properties = "spring.main.allow-bean-definition-overriding=true")
public class TestConfigurationNTest {
@Autowired
BillCalculator billCalculator;
@Autowired
List<TipCalculator> tipCalculators;
@Test
void calc_has_been_replaced() {
//then
then(tipCalculators).as("too many tipCalculators").hasSize(1);
then(tipCalculators.get(0).getClass()).hasAnnotation(TestConfiguration.class); (1)
}
1 | @Primary TipCalculator bean replaced by our @TestConfiguration-supplied bean |
119. Summary
In this module we:
-
learned the importance of testing
-
introduced some of the testing capabilities of libraries integrated into
spring-boot-starter-test
-
went thru an overview of JUnit Vintage and Jupiter test constructs
-
stressed the significance of using assertions in testing and the value in making them based on natural-language to make them easy to understand
-
introduced how to inject a mock into a subject under test
-
demonstrated how to define a mock for testing a particular scenario
-
demonstrated how to inspect calls made to the mock during testing of a subject
-
discovered how to switch default Mockito and AssertJ methods to match Business-Driven Development (BDD) acceptance test keywords
-
implemented unit integration tests with Spring context using
@SpringBootTest
-
implemented mocks into the Spring context of a unit integration test
-
ran tests using Maven Surefire
-
implemented a
@TestConfiguration
with component override
HomeSales Assignment 1
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
The following three areas (Config, Logging, and Testing) map out the different portions of "Assignment 1". It is broken up to provide some focus.
-
Each of the areas (1a Config, 1b Logging, and 1c Testing) is separate, but they are to be turned in together, under a single root project tree. There is no relationship between the classes used in the three areas — even if they have the same name. Treat them as separate.
-
Each of the areas is further broken down into parts. The parts of the Config area are separate. Treat them that way by working in separate module trees (under a common grandparent). The individual parts for Logging and Testing overlap. Once you have a set of classes in place — you build from that point. They should be worked on and turned in as a single module each (one for Logging and one for Testing; under the same parent as Config).
A set of starter projects is available in assignment-starter/homesales-starters
.
It is expected that you can implement the complete assignment on your own.
However, the Maven poms and the portions unrelated to the assignment focus are commonly provided for reference to keep the focus on each assignment part.
Your submission should not be a direct edit/hand-in of the starters.
Your submission should — at a minimum:
-
use your own Maven groupIds
-
use your own Java package names
-
extend either spring-boot-starter-parent or ejava-build-parent
Your assignment submission should be a single-rooted source tree with sub-modules or sub-module trees for each independent area part. The assignment starters — again can be your guide for mapping these out.
|-- assignment1-homesales-autoconfig
| |-- pom.xml
| |-- sales-autoconfig-app
| |-- sales-autoconfig-autosales
| `-- sales-autoconfig-starter
|-- assignment1-homesales-beanfactory
| |-- pom.xml
| |-- sales-beanfactory-app
| |-- sales-beanfactory-homesales
| `-- sales-beanfactory-iface
|-- assignment1-homesales-configprops
| |-- pom.xml
| `-- src
|-- assignment1-homesales-logging
| |-- pom.xml
| `-- src
|-- assignment1-homesales-propertysource
| |-- pom.xml
| `-- src
|-- assignment1-homesales-testing
| |-- pom.xml
| `-- src
`-- pom.xml
120. Assignment 1a: App Config
-
2022-09-14: Removed reference to SalesDTO.id. Only name is needed
-
2022-09-14: Corrected expected printed output for configuration sources (added application- prefix)
-
2022-09-19: Corrected some sales.preference property callouts
-
2022-09-26: Corrected typo in autoconfig example output and added line spacing between command and output
120.1. @Bean Factory Configuration
120.1.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of configuring a decoupled application integrated using Spring Boot. You will:
-
implement a service interface and implementation component
-
package a service within a Maven module separate from the application module
-
implement a Maven module dependency to make the component class available to the application module
-
use a @Bean factory method of a @Configuration class to instantiate a Spring-managed component
120.1.2. Overview
In this portion of the assignment you will be implementing a component class and defining that as a Spring bean using a @Bean
factory located within the core application JAR.
120.1.3. Requirements
-
Create an interface module with
-
a SaleDTO class with a name property. This is a simple data class.
-
a SalesService interface with a getRandomSale() method. This method returns a single SaleDTO instance.
-
-
Create a HomeSale implementation module with
-
a HomeSale implementation of the SalesService interface that returns a SaleDTO with "homeSale" within its name (e.g., "homeSale0").
-
-
Create an application module with
-
a class that
-
implements
CommandLineRunner
interface -
has the
SalesService
component injected using constructor injection -
a
run()
method that-
calls the SalesService for a random SaleDTO
prints a startup message with the DTO name
-
relies on a
@Bean
factory to register it with the container and not a@Component
mechanism
-
-
-
a @Configuration class with two @Bean factory methods
-
one @Bean factory method to instantiate a SalesService homeSale implementation
-
one @Bean factory method to instantiate the AppCommand injected with a SalesService bean (not a POJO)
@Bean factories that require external beans can have the dependencies injected by declaring them in their method signature. Example: TypeB factoryB(TypeA beanA) { return new TypeB(beanA); }
That way you can be assured that the dependency is a fully initialized bean versus a partially initialized POJO. (A fuller sketch follows this requirements list.)
-
-
a @SpringBootApplication class that initializes the Spring Context — which will process the @Configuration class
-
-
Turn in a source tree with three or more complete Maven modules that will build and demonstrate a configured Spring Boot application.
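The wiring described above might look like the following sketch. SalesService, SaleDTO, and AppCommand come from the requirements; HomeSalesServiceImpl is a hypothetical name for your implementation class:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {
    @Bean
    public SalesService salesService() {
        return new HomeSalesServiceImpl(); //implementation class from the HomeSale module (hypothetical name)
    }

    @Bean
    public AppCommand appCommand(SalesService salesService) { //injected as a fully initialized bean, not a POJO
        return new AppCommand(salesService);
    }
}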
120.1.4. Grading
Your solution will be evaluated on:
-
implement a service interface and implementation component
-
whether an interface module was created to contain interface and data dependencies of that interface
-
whether an implementation module was created to contain a class implementation of the interface
-
-
package a service within a Maven module separate from the application module
-
whether an application module was created to house a
@SpringBootApplication
and@Configuration
set of classes
-
-
implement a Maven module dependency to make the component class available to the application module
-
whether at least three separate Maven modules were created with a one-way dependency between them
-
-
use a @Bean factory method of a @Configuration class to instantiate Spring-managed components
-
whether the
@Configuration
class successfully instantiates theSalesService
component -
whether the
@Configuration
class successfully instantiates the startup message component injected with aSalesService
component.
-
120.1.5. Additional Details
-
The
spring-boot-maven-plugin
can be used to both build the Spring Boot executable JAR and execute the JAR to demonstrate the instantiations, injections, and desired application output. -
A quick start project is available in
assignment-starter/homesales-starters/assignment1-homesales-beanfactory
. Modify Maven groupId and Java package if used.
120.2. Property Source Configuration
120.2.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of how to flexibly supply application properties based on application, location, and profile options. You will:
-
implement value injection into a Spring Component
-
define a default value for the injection
-
specify property files from different locations
-
specify a property file for a basename
-
specify properties based on an active profile
-
specify both straight properties and YAML property file sources
120.2.2. Overview
You are given a Java application that prints out information based on injected properties, defaults, a base property file, and executed using different named profiles. You are to supply several profile-specific property files that — when processed together — produce the required output.
This assignment involves very little to no new Java coding (the "assignment starter" has all you need). It is designed as a puzzle where — given some constant surroundings — you need to determine what properties to supply, and in which file to supply them, to satisfy all listed test scenarios.
The assignment is structured into two modules: app and support
-
app - is your assignment. The skeletal structure is provided in
homesales-starter/assignment2-homesales-propertysource
-
support - is provided in the homesales-starter/homesales-support-propertysource module and is to be used, unmodified, through a Maven dependency. It contains a default application.properties file with skeletal values, a component that gets injected with property values, and a unit integration test that verifies the program results.
The homesales-support-propertysource
module provides the following resources.
- PropertyCheck Class
-
This class has property injections defined with default values when they are not supplied. This class will be in your classpath and automatically packaged within your JAR.
public class PropertyCheck implements CommandLineRunner {
    @Value("${spring.config.name:(default value)}")
    String configName;
    @Value("${spring.config.location:(default value)}")
    String configLocations;
    @Value("${spring.profiles.active:(default value)}")
    String profilesActive;
    @Value("${sales.priority.source:not assigned}")
    String prioritySource;
    @Value("${sales.db.url:not assigned}")
    String dbUrl;
- application.properties File
-
This file provides a template of a database URL with placeholders that will get populated from other property sources. This file will be in your classpath and automatically packaged within your JAR.
#application.properties
sales.priority.source=application.properties
sales.db.user=user
sales.db.port=00000 (1)
sales.db.url=mongodb://${sales.db.user}:${sales.db.password}@${sales.db.host}:${sales.db.port}/test?authSource=admin
1 | sales.db.url is built from several property placeholders; password is not specified. |
- PropertySourceTest Class
-
a unit integration test is provided that can verify the results of your property file population. This test will run automatically during the Maven build.
public class PropertySourceTest {
    static final String CONFIG_LOCATION="classpath:/,optional:file:src/test/resources/";
    class no_profile {
        @Test void has_expected_sources() throws Exception {
        @Test void has_generic_files_in_classpath() {
        @Test void has_no_credential_files_in_classpath() {
    class dev_dev1_profiles {
        @Test void has_expected_sources() throws Exception {
    class prd_site1_profiles {
        @Test void has_expected_sources() throws Exception {
    class prd_site2_profiles {
        @Test void has_expected_sources() throws Exception {
120.2.3. Requirements
The starter module has much of the setup already defined.
-
Create a dependency on the support module. (provided in starter)
<dependency>
    <groupId>info.ejava.assignments.propertysource.homesales</groupId>
    <artifactId>homesales-support-propertysource</artifactId>
    <version>${ejava.version}</version>
</dependency>
-
Add a @SpringBootApplication class with main() (provided in starter)
package info.ejava_student.starter.assignment1.propertysource.sales;

import info.ejava.assignments.propertysource.sales.PropertyCheck;

@SpringBootApplication
public class PropertySourceApp {
-
Provide the following property file sources. (provided in starter)
application.properties will be provided through the dependency on the support module and will get included in the JAR.
src/main/resources:/ (1)
application-default.properties
application-dev.yml (3)
application-prd.properties
src/test/resources/ (2)
application-dev1.properties
application-site1.properties
application-site2.yml (3)
1 | src/main/resources files will get packaged into the JAR and will automatically be in the classpath at runtime |
2 | src/test/resources files are not packaged into the JAR and will be referenced by a command-line parameter to add them to the classpath |
3 | example uses of YAML files: application-dev.yml and application-site2.yml must be expressed using YAML syntax (see the syntax example after this requirements list) |
Enable the unit integration test from the starter when you are ready to test — by removing
@Disabled.
package info.ejava_student.starter.assignment1.propertysource.sales;

import info.ejava.assignments.propertysource.sales.PropertySourceTest;
...
//we will cover testing in a future topic, very soon
@Disabled //enable when ready to start assignment
public class MyPropertySourceTest extends PropertySourceTest {
-
Use a constant base command. This part of the command remains constant.
$ java -jar target/*-propertysource-1.0-SNAPSHOT-bootexec.jar --spring.config.location=classpath:/,optional:file:src/test/resources/ (1)
1 | this is the base command for the 4 specific commands that specify active profiles |
The only modification to the command line will be the conditional addition of a profile activation.
--spring.profiles.active= (1)
1 | the following 4 commands will supply a different value for this property |
Populate the property and YAML files so that the scenarios in the following paragraph are satisfied. The default starter with the "base command" and "no active profile" set, produces the following by default.
$ java -jar target/*-propertysource-1.0-SNAPSHOT-bootexec.jar --spring.config.location=classpath:/,optional:file:src/test/resources/

configName=(default value)
configLocation=classpath:/,optional:file:src/test/resources/
profilesActive=(default value)
prioritySource=application-default.properties
Sales has started
dbUrl=mongodb://user:NOT_SUPPLIED@NOT_SUPPLIED:00000/test?authSource=admin
Any property value that does not contain a developer/site-specific value (e.g., defaultUser and defaultPass) must be provided by a property file packaged into the JAR (i.e., source src/main/resources).
Any property value that does contain a developer/site-specific value (e.g., dev1pass and site1Pass) must be provided by a property file in the file: part of the location path and not in the JAR (i.e., source src/test/resources).
Complete the following 4 scenarios:
-
No Active Profile Command Result
configName=(default value)
configLocation=classpath:/,optional:file:src/test/resources/
profilesActive=(default value)
prioritySource=application-default.properties
Sales has started
dbUrl=mongodb://defaultUser:defaultPass@defaulthost:27027/test?authSource=admin
You must supply a populated set of configuration files so that, under this option, user:NOT_SUPPLIED@NOT_SUPPLIED:00000 becomes defaultUser:defaultPass@defaulthost:27027.
-
dev,dev1 Active Profile Command Result
--spring.profiles.active=dev,dev1
configName=(default value)
configLocation=classpath:/,optional:file:src/test/resources/
profilesActive=dev,dev1
prioritySource=application-dev1.properties
Sales has started
dbUrl=mongodb://devUser:dev1pass@127.0.0.1:17027/test?authSource=admin
-
prd,site1 Active Profile Command Result
--spring.profiles.active=prd,site1
configName=(default value)
configLocation=classpath:/,optional:file:src/test/resources/
profilesActive=prd,site1
prioritySource=application-site1.properties
Sales has started
dbUrl=mongodb://prdUser:site1pass@db.site1.net:27017/test?authSource=admin
-
prd,site2 Active Profile Command Result
--spring.profiles.active=prd,site2
configName=(default value)
configLocation=classpath:/,optional:file:src/test/resources/
profilesActive=prd,site2
prioritySource=application-site2.properties
Sales has started
dbUrl=mongodb://prdUser:site2pass@db.site2.net:27017/test?authSource=admin
-
-
Turn in a source tree with a complete Maven module that will build and demonstrate the
@Value
injections for the 4 different active profile settings.
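For reference, the same hypothetical values expressed in properties syntax and in YAML syntax (the keys and values below are illustrative placeholders, not the puzzle solution):
#example.properties (properties syntax)
sales.db.host=db.example.net
sales.db.port=27017

#example.yml (the same structure in YAML syntax)
sales:
  db:
    host: db.example.net
    port: 27017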
120.2.4. Grading
Your solution will be evaluated on:
-
implement value injection into a Spring Component
-
whether
@Component
attributes were injected with values from property sources
-
-
define a default value for the injection
-
whether default values were correctly accepted or overridden
-
-
specify property files from different locations
-
whether your solution provides property values coming from multiple file locations
-
any property value that does not contain a developer/site-specific value (e.g.,
defaultUser
anddefaultPass
) must be provided by a property file within the JAR -
any property value that contains developer/site-specific values (e.g.,
dev1pass
andsite1pass
) must be provided by a property file outside of the JAR
-
-
the given
application.properties
file may not be modified -
named
.properties
files are supplied as properties files -
named
.yml
(i.e.,application-dev.yml
) files are supplied as YAML files
-
-
specify properties based on an active profile
-
whether your output reflects current values for dev1, site1, and site2 profiles
-
-
specify both straight properties and YAML property file sources
-
whether your solution correctly supplies values for at least 1 properties file
-
whether your solution correctly supplies values for at least 1 YAML file
-
120.2.5. Additional Details
-
The
spring-boot-maven-plugin
can be used to both build the Spring Boot executable JAR and demonstrate the instantiations, injections, and desired application output. -
A quick start project is available in
assignment-starter/homesales-starter/assignment1-homesales-propertysource
that supplies much of the boilerplate file and Maven setup. Modify Maven groupId and Java package if used. -
An integration unit test (
PropertySourceTest
) is provided within the support module that can automate the verifications. -
Ungraded Question to Ponder: How could you, at runtime, provide a parameter option to the application to make the following output appear?
Alternate Output
configName=homesales
configLocation=(default value)
profilesActive=(default value)
prioritySource=not assigned
Race Registration has started
dbUrl=not assigned
120.3. Configuration Properties
120.3.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of injecting properties into a @ConfigurationProperties
class to be injected into components - to encapsulate the runtime configuration of the component(s).
You will:
-
map a Java @ConfigurationProperties class to a group of properties
-
create a read-only @ConfigurationProperties class using @ConstructorBinding
-
define a Jakarta EE Java validation rule for a property and have the property validated at runtime
-
generate boilerplate JavaBean methods using Lombok library
-
map nested properties to a @ConfigurationProperties class
-
reuse a @ConfigurationProperties class to map multiple property trees of the same structure
-
use @Qualifier annotation and other techniques to map or disambiguate an injection
120.3.2. Overview
In this assignment, you are going to finish mapping a YAML file of properties to a set of Java classes and have them injected as @ConfigurationProperties beans.
BoatSaleProperties is a straightforward, single-use bean that can have the class directly mapped to a specific property prefix.
SalesProperties will be mapped to two separate prefixes — so the mapping cannot be applied directly to that class.
Keep this in mind when wiring up your solution.
An integration unit test is supplied and can be activated when you are ready to test your progress.
120.3.3. Requirements
-
Given the following read-only property classes, application.yml file, and
@Component
…-
read-only property classes
@ConstructorBinding
@Value
public class SalesProperties {
    private int id;
    private LocalDate saleDate;
    private BigDecimal saleAmount;
    private String buyerName;
    private AddressProperties location;
}

@ConstructorBinding
@Value
public class BoatSaleProperties {
    private int id;
    private LocalDate saleDate;
    private BigDecimal saleAmount;
    private String buyerName;
    private AddressProperties location;
}

@ConstructorBinding
@Value
public class AddressProperties {
    private final String city;
    private final String state;
}
Lombok’s @Value annotation defines the class to be read-only by declaring only getters and no setters. This will require the use of constructor binding.
The property classes are supplied in the starter module.
-
application.yml YAML file
sales:
  homes:
    - id: 1
      saleDate: 2010-07-01 (6)
      saleAmount: 100.00 (1)
      buyerName: Joe Camper
      location:
        city: Jonestown
        state: PA
    #...
  autos:
    - id: 2
      sale-date: 2000-01-01
      sale-amount: 1000 (2)
      buyer_name: Itis Clunker (3)
      location:
        city: Dundalk
        state: MD
  boatSale:
    id: 3
    SALE_DATE: 2022-08-01 (4)
    SALE_AMOUNT: 200_000
    BUYER-NAME: Alexus Blabidy (5)
    LOCATION:
      city: Annapolis
      state: MD
1 | lower camelCase |
2 | lower kabob-case |
3 | lower snake_case |
4 | upper SNAKE_CASE |
5 | upper KABOB-CASE |
6 | LocalDate parsing will need to be addressed |
The full contents of the YAML file can be found in the homesales-support/homesales-support-configprops support project. YAML was used here because it is easier to express and read the nested properties.
Notice that multiple text cases (upper, lower, snake, kabob) are used to map to the same Java properties. This demonstrates one of the benefits of using @ConfigurationProperties over @Value injection — configuration files can be expressed in syntax that may be closer to the external domain.
Note that the
will require additional work to parse. That is provided to you in the starter project and described later in this assignment. -
@Component with constructor injection and getters to inspect what was injected
//@Component
@Getter
@RequiredArgsConstructor
public class PropertyPrinter implements CommandLineRunner {
    private final List<SalesProperties> homes;
    private final List<SalesProperties> autos;
    private final BoatSaleProperties boat;

    @Override
    public void run(String... args) throws Exception {
        System.out.println("homes:" + format(homes));
        System.out.println("autos:" + format(autos));
        System.out.println("boat:" + format(null==boat ? null : List.of(boat)));
    }

    private String format(List<?> sales) {
        return null==sales ? "(null)" : String.format("%s", sales.stream()
            .map(r->"*" + r.toString())
            .collect(Collectors.joining(System.lineSeparator(), System.lineSeparator(), "")));
    }
}
The source for the
PropertyPrinter
component is supplied in the starter module. Except for getting it registered as a component, there should be nothing needing change here.
-
-
When running the application, @ConfigurationProperties beans will be created to represent the contents of the YAML file as two separate List<SalesProperties> objects and a BoatSaleProperties object. When properly configured, they will be injected into the @Component, and it will output the following.
homes:
*SalesProperties(id=1, saleDate=2010-07-01, saleAmount=100.0, buyerName=Joe Camper, location=AddressProperties(city=Jonestown, state=PA))
*SalesProperties(id=4, saleDate=2021-05-01, saleAmount=500000, buyerName=Jill Suburb, location=AddressProperties(city=, state=MD)) (1)
*SalesProperties(id=5, saleDate=2021-07-01, saleAmount=1000000, buyerName=M.R. Bigshot, location=AddressProperties(city=Rockville, state=MD))
autos:
*SalesProperties(id=2, saleDate=2000-01-01, saleAmount=1000, buyerName=Itis Clunker, location=AddressProperties(city=Dundalk, state=MD))
boat:
*BoatSaleProperties(id=3, saleDate=2022-08-01, saleAmount=200000, buyerName=Alexus Blabidy, location=AddressProperties(city=Annapolis, state=MD))
1 | one of the homeSales addresses is missing a city |
The "assignment starter" supplies most of the Java code needed for the PropertyPrinter.
Configure your solution so that the BoatSaleProperties bean is injected into the PropertyPrinter component along with the Lists of home and auto SalesProperties. There is a skeletal configuration supplied in the application class. Most of your work will be within this class. (A disambiguation sketch follows this requirements list.)
@SpringBootApplication
public class ConfigPropertiesApp {
    public static void main(String[] args)

    public List<SalesProperties> homes() { return new ArrayList<>(); }

    public List<SalesProperties> autos() { return new ArrayList<>(); }
}
-
Turn in a source tree with a complete Maven module that will build and demonstrate the configuration property processing and output of this application.
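A sketch of one disambiguation technique named in the objectives: @Qualifier matching by bean name. The bean names "homes" and "autos" are assumptions taken from the skeletal factory methods; how those beans get bound to the property prefixes is left to your solution:
import org.springframework.beans.factory.annotation.Qualifier;
import java.util.List;

public class PropertyPrinter { //run() and format() omitted for brevity
    private final List<SalesProperties> homes;
    private final List<SalesProperties> autos;
    private final BoatSaleProperties boat;

    //@Qualifier matches the specific List bean by name instead of collecting all SalesProperties beans
    public PropertyPrinter(@Qualifier("homes") List<SalesProperties> homes,
                           @Qualifier("autos") List<SalesProperties> autos,
                           BoatSaleProperties boat) {
        this.homes = homes;
        this.autos = autos;
        this.boat = boat;
    }
}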
120.3.4. Grading
Your solution will be evaluated on:
-
map a Java @ConfigurationProperties class to a group of properties
-
whether Java classes were used to map values from the given YAML file
-
-
create a read-only @ConfigurationProperties class using @ConstructorBinding
-
whether read-only Java classes, using
@ConstructorBinding
were used to map values from the given YAML file
-
-
generate boilerplate JavaBean methods using Lombok library
-
whether lombok annotations were used to generate boilerplate Java bean code
-
-
map nested properties to a @ConfigurationProperties class
-
whether nested Java classes were used to map nested properties from a given YAML file
-
-
reuse a @ConfigurationProperties class to map multiple property trees of the same structure
-
whether multiple property trees were instantiated using the same Java classes
-
-
use @Qualifier annotation and other techniques to map or disambiguate an injection
-
whether multiple @ConfigurationProperties beans of the same type could be injected into a @Component using a disambiguating technique.
-
120.3.5. Additional Details
-
A starter project is available in
homesales-starter/assignment1-homesales-configprops
. Modify Maven groupId and Java package if used. -
I included an additional, general purpose
LocalDateConverter
@Component
in the starter with the property classes. This is a necessary option to successfully parse the date expressed in the YAML file and have it injected as aLocalDate
in the@ConfigurationProperties
class.
YAML Text Source
sales:
  homes:
    - id: 1
      saleDate: 2010-07-01
Text to LocalDate Converter
import org.springframework.boot.context.properties.ConfigurationPropertiesBinding;
import org.springframework.core.convert.converter.Converter;
...
@Component
@ConfigurationPropertiesBinding
public class LocalDateConverter implements Converter<String, LocalDate> {
    @Override
    public LocalDate convert(String source) {
        return null==source ? null : LocalDate.parse(source);
    }
}
Java LocalDate Injection
public class BoatSaleProperties {
    private int id;
    private LocalDate saleDate;
Without the converter, we would get the following type of error.
Text Conversion Error without Converter
Failed to bind properties under 'sales.homes[0].sale-date' to java.time.LocalDate:
...
DateTimeParseException: Text '2010-07-01' could not be parsed at index 4)
-
The
spring-boot-maven-plugin
can be used to both build the Spring Boot executable JAR and demonstrate the instantiations, injections, and desired application output. -
The support project contains an integration unit test that verifies the
PropertyPrinter
component was defined and injected with the expected data. It is activated through a Java class in the starter module. Activate it when you are ready to test.
//we will cover testing in a future topic, very soon
@Disabled //remove to activate when ready to test
public class MyConfigurationTest extends ConfigurationPropertyTest { }
-
Ungraded Question to Ponder: What change(s) could be made to the application to validate the properties and report the following error?
Alternate Output
Binding validation errors on sales.homes[1].location
... codes [sales.homes[1].location.city,city]; arguments []; default message [city]]; default message [must not be blank]
120.4. Auto-Configuration
120.4.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of developing @Configuration
classes used for Auto-Configuration of an application.
You will:
-
Create a
@Configuration
class or@Bean
factory method to be registered based on the result of a condition at startup -
Create a Spring Boot Auto-configuration module to use as a "Starter"
-
Bootstrap Auto-configuration classes into applications using a
spring.factories
metadata file -
Create a conditional component based on the presence of a property value
-
Create a conditional component based on a missing component
-
Create a conditional component based on the presence of a class
-
Define a processing dependency order for Auto-configuration classes
120.4.2. Overview
In this assignment, you will be building a starter module, with a prioritized list of Auto-Configuration classes that will bootstrap an application depending on runtime environment.
This application will have one (1) type of SalesService
out of a choice of two (2) based on the environment at runtime.
Make the @SpringBootApplication class package independent of @Configuration class packages
The Java package for the @SpringBootApplication class must not be a parent of, or the same as, the Java package of the @Configuration classes.
Doing so would place the @Configuration classes in the default component scan path and make them part of the core application — versus a conditional extension of the application.
120.4.3. Requirements
You have already implemented the SalesService interface and HomeSales implementation modules in your Bean Factory solution. You will reuse them through a Maven dependency. The AutoSales implementation is a copy of the HomeSales implementation with name changes.
-
Create a SalesService interface module (already completed for beanfactory)
-
Add an interface to return a random sale as a
SaleDTO
instance
-
-
Create a HomeSales implementation module (already completed for beanfactory)
-
Add an implementation of the interface to return a SaleDTO with "home" in the name property.
-
-
Create an AutoSales implementation module (new)
-
Add an implementation of the interface to return a SaleDTO with "auto" in the name property.
-
-
Create an Application Module with a
@SpringBootApplication
class-
Add a
CommandLineRunner
implementation class that gets injected with aSalesService
bean and prints "Sales has started" with the name of sale coming from the injected bean.-
Account for a null implementation injected when there is no implementation such that it still quietly prints the state of the component.
-
Include an injection (by any means) for properties
sales.active
andsales.preference
to print their values
-
-
Add a
@Bean
factory for theCommandLineRunner
implementation class — registered as "appCommand".-
Make the injection of the SalesService optional to account for when there is no implementation
@Autowired(required=false)
-
-
Do not place any direct Maven dependencies from the Application Module to the SaleService implementation modules.
At this point you have mostly repeated the bean factory solution, except that you have eliminated the @Bean factory for the SalesService in the Application module, added an AutoSales implementation option, and removed a few Maven module dependencies.
-
-
Create a Sale starter Module
-
Add a dependency on the SalesService interface module
-
Add a dependency on the SalesService implementation modules and make them "optional" (this is important) so that the application module will need to make an explicit dependency on the implementation for them to be on the runtime classpath.
-
Add three conditional @Configuration classes
one that provides a
@Bean
factory for the AutoSalesService implementation class-
Make this conditional on the presence of the AutoSale class(es) being available on the classpath
-
-
one that provides a
@Bean
factory for the HomeSalesService implementation class-
Make this conditional on the presence of the HomeSale class(es) being available on the classpath
-
-
A third that provides another
@Bean
factory for the AutoSale implementation class-
Make this conditional on the presence of the AutoSale class(es) being available on the classpath
-
Make this also conditional on the property
sales.preference
having the value ofautos
.
-
-
-
Set the following priorities for the
@Configuration
classes-
make the AutoSale/property
@Configuration
the highest priority -
make the HomeSale
@Configuration
factory the next highest priority -
make the AutoSale
@Configuration
factory the lowest priorityYou can use org.springframework.boot.autoconfigure.AutoConfigureOrder
to set a relative order — with the lower value having a higher priority.
-
-
Disable all SalesService implementation
@Bean
factories if the propertysales.active
is present and has the valuefalse
Treat false
as being not the valuetrue
. Spring Boot does not offer a disable condition, so you will be looking to enable when the property istrue
or missing. -
Perform necessary registration steps within the Starter module to make the
@Configuration
classes visible to the application bootstrapping.
If you don’t know how to register an Auto-@Configuration class and bypass this step, your solution will not work. (A registration sketch follows this requirements list.)
Spring Boot only prioritizes explicitly registered @Configuration classes, not nested @Configuration classes within them.
-
-
Augment the Application module pom to address dependencies
-
Add a dependency on the Starter Module
-
Create a profile (
homes
) that adds a direct dependency on the HomeSales implementation module. The "assignment starter" provides an example of this. -
Create a profile (
autos
) that adds a direct dependency on the AutoSales implementation module.
-
-
Verify your solution will determine its results based on the available classes and properties at runtime. Your solution must have the following behavior
-
no Maven profiles active and no properties provided
$ mvn dependency:list -f *-autoconfig-app | egrep 'ejava-student.*module'
(starter module)
(interface module) (1)

$ mvn clean package
$ java -jar *-autoconfig-app/target/*-autoconfig-app-*-bootexec.jar

sales.active=(not supplied)
sales.preference=(not supplied)
Sales is not active (2)
1 | no SalesService implementation JARs in dependency classpath |
2 | no implementation was injected because none is in the classpath |
-
homes
only Maven profile active and no properties provided
$ mvn dependency:list -f *-autoconfig-app -P homes | egrep 'ejava-student.*module'
(starter module)
(interface module)
(HomeSales implementation module) (1)

$ mvn clean package -P homes
$ java -jar *-autoconfig-app/target/*-autoconfig-app-*-bootexec.jar

sales.active=(not supplied)
sales.preference=(not supplied)
Sales has started, sale:{homeSales0} (2)
1 | HomeSales implementation JAR in dependency classpath |
2 | HomeSalesService was injected because it is the only implementation in the classpath |
-
autos
only Maven profile active and no properties provided
$ mvn dependency:list -f *-autoconfig-app -P autos | egrep 'ejava-student.*module'
(starter module)
(interface module)
(AutoSales implementation module) (1)

$ mvn clean package -P autos
$ java -jar *-autoconfig-app/target/*-autoconfig-app-*-bootexec.jar

sales.active=(not supplied)
sales.preference=(not supplied)
Sales has started, sale:{autoSales0} (2)
1 | AutoSales implementation JAR in dependency classpath |
2 | AutoSalesService was injected because it is the only implementation in the classpath |
-
homes
andautos
Maven profiles active$ mvn dependency:list -f *-autoconfig-app -P autos,homes | egrep 'ejava-student.*module' (starter module) (interface module) (HomeSales implementation module) (1) (AutoSales implementation module) (2) $ mvn clean install -P autos,homes $ java -jar *-autoconfig-app/target/*-autoconfig-app-*-bootexec.jar sales.active=(not supplied) sales.preference=(not supplied) Sales has started, sale:{homeSales0} (3)
1 HomeSales implementation JAR in dependency classpath 2 AutoSales implementation JAR in dependency classpath 3 HomeSalesService was injected because of higher-priority -
homes
andautos
Maven profiles active and Spring propertySale.preference=autos
$ mvn clean install -P autos,homes (1) java -jar sales-autoconfig-app/target/sales-autoconfig-app-1.0-SNAPSHOT-bootexec.jar --sales.preference=autos (2) sales.active=(not supplied) sales.preference=autos Sales has started, sale:{autoSales0} (3)
1 HomeSale and AutoSale implementation JARs in dependency classpath 2 sales.preference
property supplied withautos
value3 AutoSalesService implementation was injected because of preference specified -
homes
andautos
Maven profiles active and Spring propertysales.active=false
$ mvn clean install -P autos,homes (1) $ java -jar sales-autoconfig-app/target/sales-autoconfig-app-1.0-SNAPSHOT-bootexec.jar --sales.active=false (2) sales.active=false sales.preference=(not supplied) Sales is not active (3)
1 HomeSale and AutoSale implementation JARs in dependency classpath 2 sales.active
property supplied withfalse
value3 no implementation was injected because feature deactivated with property value
-
-
Turn in a source tree with a complete Maven module that will build and demonstrate the Auto-Configuration property processing and output of this application.
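A hedged sketch of the registration and condition mechanics named above. The class names and package path are hypothetical placeholders; in Spring Boot 2.x, auto-configuration classes are registered in a META-INF/spring.factories file inside the starter JAR:
# src/main/resources/META-INF/spring.factories (in the starter module)
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
  info.ejava_student.starter.sales.HomeSalesConfiguration,\
  info.ejava_student.starter.sales.AutoSalesConfiguration

One of the conditional configuration classes might then look like the following sketch:
import org.springframework.boot.autoconfigure.AutoConfigureOrder;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.Ordered;

@Configuration(proxyBeanMethods = false)
@AutoConfigureOrder(Ordered.LOWEST_PRECEDENCE) //lowest priority of the three configurations
@ConditionalOnClass(AutoSalesServiceImpl.class) //only when the AutoSales JAR is on the classpath
@ConditionalOnProperty(name = "sales.active", havingValue = "true", matchIfMissing = true) //enabled when true or missing
public class AutoSalesConfiguration {
    @Bean
    @ConditionalOnMissingBean(SalesService.class) //back off if a higher-priority configuration registered one
    public SalesService autoSalesService() {
        return new AutoSalesServiceImpl();
    }
}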
120.4.4. Grading
Your solution will be evaluated on:
-
Create a
@Configuration
class/@Bean
factory method to be registered based on the result of a condition at startup-
whether your solution provides the intended implementation class based on the runtime environment
-
-
Create a Spring Boot Auto-configuration module to use as a "Starter"
-
whether you have successfully packaged your
@Configuration
classes as Auto-Configuration classes outside the package scanning of the@SpringBootApplication
-
-
Bootstrap Auto-configuration classes into applications using a
spring.factories
metadata file-
whether you have bootstrapped your Auto-Configuration classes so they are processed by Spring Boot at application startup
-
-
Create a conditional component based on the presence of a property value
-
whether you activate or deactivate a
@Bean
factory based on the presence or absence of a specific property
-
-
Create a conditional component based on a missing component
-
whether you activate or deactivate a
@Bean
factory based on the presence or absence of a specific@Component
-
-
Create a conditional component based on the presence of a class
-
whether you activate or deactivate a
@Bean
factory based on the presence or absence of a class -
whether your starter causes unnecessary dependencies on the Application module
-
-
Define a processing dependency order for Auto-configuration classes
-
whether your solution is capable of implementing the stated priorities of which bean implementation to instantiate under which conditions
-
120.4.5. Additional Details
-
A starter project is available in
homesales-starter/assignment1-homesales-autoconfig
. Modify Maven groupId and Java package if used. -
A unit integration test is supplied to check the results. We will cover testing very soon. Activate the test when you are ready to get feedback results. The test requires:
-
All classes must be below the info.ejava_student Java package
The component class injected with the dependency must have the bean identity of appCommand.
The injected service must be made available via a getSalesService() method within appCommand (see the sketch below).
-
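To satisfy the test hooks above, the command component might look like the following sketch (the SaleDTO accessor getName() is an assumed getter for its name property):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;

public class AppCommand implements CommandLineRunner {
    //optional injection: remains null when no SalesService implementation is on the classpath
    @Autowired(required = false)
    private SalesService salesService;

    public SalesService getSalesService() { //accessor required by the provided integration test
        return salesService;
    }

    @Override
    public void run(String... args) {
        if (salesService == null) {
            System.out.println("Sales is not active");
        } else {
            System.out.println("Sales has started, sale:{" + salesService.getRandomSale().getName() + "}");
        }
    }
}
Register this class through a @Bean factory named "appCommand", per the requirements, rather than a @Component mechanism.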
121. Assignment 1b: Logging
121.1. Application Logging
121.1.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of injecting and calling a logging framework. You will:
-
obtain access to an SLF4J Logger
-
issue log events at different severity levels
-
format log events for regular parameters
-
filter log events based on source and severity thresholds
121.1.2. Overview
In this portion of the assignment, you are going to implement a call thread through a set of components that are in different Java packages, representing different levels of the architecture.
Each of these components will setup an SLF4J Logger
and issue logging statements relative to the thread.
121.1.3. Requirements
All data is fake and random here. The real emphasis should be placed on the logging events that occur on the different loggers and not on creating a realistic HomeSale result.
-
Create several components in different Java sub-packages (app, svc, and repo)
-
an
AppCommand
component class in theapp
Java sub-package -
a
HomeSalesServiceImpl
component class in thesvc
Java sub-package -
a
HomeSalesHelperImpl
component class in thesvc
Java sub-package -
a
HomeSalesRepositoryImpl
component class in therepo
Java sub-package
-
-
Implement a chain of calls from the
AppCommand
@Component
run() method through the other components.
Figure 34. Required Call Sequence
-
AppCommand.run() calls ServiceImpl.calcDelta(homeId, buyerId) with a
homeId
andbuyerId
to determine how far the buyer is behind the leader . -
ServiceImpl.calcDelta(homeId, buyerId) calls RepositoryImpl (getLeaderByHomeId(homeId) and getByBuyerId(buyerId)) to get
HomeSaleDTOs
-
RepositoryImpl can create transient instances with provided Ids and random remaining properties
-
-
ServiceImpl.calcDelta(homeId, buyerId) also calls ResultsHelper.calcDelta() to get the delta between the two
HomeSaleDTOs
-
HelperImpl.calcDelta(leader, target) calls HomeSaleDTO.getAmount() on the two provided
HomeSaleDTO
instances to determine the delta
-
-
Implement a
toString()
method inHomeSaleDTO
that includes thehomeId
,buyerId
, andtime
information. -
Instantiate an SLF4J
Logger
into each of the four components-
manually instantiate a static final Logger with the name "X.Y" in AppCommand (see the sketch after this requirements list)
leverage the Lombok library to instantiate a
Logger
with the name based on the Java package and name of the hosting class for all other components
-
-
Implement logging statements in each of the methods
-
the severity of RepositoryImpl logging events are all TRACE
-
the severity of HelperImpl.calcDelta() logging events are DEBUG and TRACE (there must be at least two — one of each and no other levels)
-
the severity of ServiceImpl.calcDelta() logging events are all INFO and TRACE (there must be at least two — one of each and no other levels)
-
the severity of AppCommand logging events are all INFO (and no other levels)
-
-
Output available race results information in log statements
-
Leverage the SLF4J parameter formatting syntax when implementing the log
-
For each of the INFO and DEBUG statements, include only the
HomeSaleDTO
property values (i.e., homeId, buyerId, timeDelta).
Use direct calls on individual properties for INFO and DEBUG statements (i.e., homeSale.getHomeId(), homeSale.getBuyerId(), etc.)
For each of the TRACE statements, include the inferred
HomeSaleDTO.toString()
method.
Use the inferred toString() on the passed object in TRACE statements (i.e., log.trace("…", homeSale)); no direct calls to
-
-
Supply two profiles
-
the root logger must be turned off by default (e.g., in
application.properties
) -
an
app-debug
profile that turns on DEBUG and above priority (e.g., DEBUG, INFO, WARN, ERROR) logging events for all loggers in the application, including "X.Y" -
a
repo-only
profile that turns on only log statements from the repo class(es).
-
-
Wrap your solution in a Maven build that executes the JAR three times with:
-
(no profile) - no logs should be produced
-
app-debug
profile-
DEBUG and higher priority logging events from the application (including "X.Y") are output to console
-
no TRACE messages are output to the console
-
-
repo-only
profile-
logging events from repository class(es) are output to the console
-
no other logging events are output to the console
-
-
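A minimal sketch of the two logger instantiation styles and the SLF4J parameter formatting called for above (class bodies and values are illustrative placeholders):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import lombok.extern.slf4j.Slf4j;

public class AppCommand {
    //manually instantiated static final Logger with the explicit name "X.Y"
    private static final Logger log = LoggerFactory.getLogger("X.Y");

    public void run(String... args) {
        int homeId = 1;       //placeholder values for illustration only
        double delta = 42.0;
        //SLF4J parameter formatting: {} placeholders are filled in by the framework
        log.info("delta={} for homeId={}", delta, homeId);
    }
}

@Slf4j //Lombok instantiates a "log" Logger named after the package and class
class HomeSalesServiceImpl {
    void calcDelta() {
        log.info("calculating delta");
    }
}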
121.1.4. Grading
Your solution will be evaluated on:
-
obtain access to an SLF4J Logger
-
whether you manually instantiated a Logger into the AppCommand
@Component
-
whether you leveraged Lombok to instantiate a Logger into the other
@Components
-
whether your App Command
@Component
was named "X.Y" -
whether your other
@Component
loggers were named after the package/class they were declared in
-
-
issue log events at different severity levels
-
whether logging statements were issued at the specified verbosity levels
-
-
format log events for regular parameters
-
whether SLF4J format statements were used when including variable information
-
-
filter log events based on source and severity thresholds
-
whether your profiles set the logging levels appropriately to only output the requested logging events
-
121.1.5. Other Details
-
You may use any means to instantiate/inject the components (i.e.,
@Bean
factories or@Component
annotations) -
You are encouraged to use Lombok to declare constructors, getter/setter methods, and anything else helpful except for the manual instantiation of the "X.Y" logger in
AppCommand
. -
A starter project is available in
homesales-starter/assignment1-homesales-logging
. It contains a Maven pom that is configured to build and run the application with the following profiles for this assignment:-
no profile
-
app-debug
-
repo-only
-
appenders
-
appenders
andtrace
Modify Maven groupId and Java package if used.
-
-
There is an integration unit test (
MyLoggingNTest
) provided in the starter module. We will discuss testing very soon. Enable this test when you are ready to have the results evaluated.
121.2. Logging Efficiency
121.2.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of making suppressed logging efficient. You will:
-
efficiently bypass log statements that do not meet criteria
121.2.2. Overview
In this portion of the assignment, you are going to increase the cost of calling toString()
on the business object and work to only pay that penalty when needed.
Make your changes to the previous logging assignment solution. Do not create a separate module for this work.
121.2.3. Requirements
-
Update the
toString()
method inHomeSaleDTO
to be expensive to call-
artificially insert a 750 millisecond delay within the toString() call
-
-
Refactor your log statements, if required, to only call toString() when TRACE is active
-
leverage the SLF4J API calls to make that as simple as possible (see the sketch below)
-
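A sketch of the idea. SLF4J's parameterized logging only invokes an argument's toString() if the event is actually logged, and an explicit guard is the alternative when a message must be assembled manually:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TraceExample {
    private static final Logger log = LoggerFactory.getLogger(TraceExample.class);

    void report(Object homeSale) {
        //parameterized form: homeSale.toString() is deferred until TRACE is actually enabled
        log.trace("leader={}", homeSale);

        //guarded form: skip the expensive call entirely when TRACE is disabled
        if (log.isTraceEnabled()) {
            log.trace("leader={}", homeSale.toString());
        }
    }
}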
121.2.4. Grading
Your solution will be evaluated on:
-
efficiently bypass log statements that do not meet criteria
-
whether your
toString()
method paused the calling thread for 750 milliseconds only for TRACE verbosity when TRACE threshold is activated -
whether the calls to
toString()
are bypassed when priority threshold is set higher than TRACE -
the simplicity of your solution
-
121.2.5. Other Details
-
Include these modifications with the previous work on this overall logging assignment. Meaning — there will not be a separate module turned in for this portion of the assignment.
-
The app-debug profile should not exhibit any additional delays. The repo-only profile should exhibit a 4 second (2x2sec) delay.
There is an integration unit test (
MyLoggingEfficiencyNTest
) provided in the starter module. We will discuss testing very soon. Enable this test when you are ready to have the results evaluated.
121.3. Appenders and Custom Log Patterns
121.3.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of assigning appenders to loggers and customizing logged events. You will:
-
filter log events based on source and severity thresholds
-
customize log patterns
-
customize appenders
-
add contextual information to log events using Mapped Diagnostic Context
-
use Spring Profiles to conditionally configure logging
121.3.2. Overview
In this portion of the assignment you will be creating/configuring a few basic appenders and mapping loggers to them — to control what, where, and how information is logged. This will involve profiles, property files, and a logback configuration file.
Make your changes to the original logging assignment solution. Do not create a separate module for this work.
Except for setting the MDC, you are writing no additional code in this portion of the assignment. Most of your work will be in filling out the logback configuration file and setting properties in profile-based property files to tune logged output.
121.3.3. Requirements
-
Declare two Appenders as part of a custom Logback configuration
-
CONSOLE to output to stdout
-
FILE to output to a file
target/logs/appenders.log
-
-
Assign the Appenders to Loggers
-
root logger events must be assigned to the CONSOLE Appender
-
any log events issued to the "X.Y" Logger must be assigned to both the CONSOLE and FILE Appenders
-
any log events issued to the "…svc" Logger must also be assigned to both the CONSOLE and FILE Appenders
-
any log events issued to the "…repo" Logger must only be assigned to the FILE Appender
Remember "additivity" rules for inheritance and appending assignment These are the only settings you need to make within the Appender file. All other changes can be done through properties. However, there will be no penalty (just complexity) in implementing other mechanisms.
-
-
Add an
appenders
profile that-
automatically enacts the requirements above
-
sets a base of INFO severity and up for all loggers with your application
-
-
Add a
requestId
property to the Mapped Diagnostic Context (MDC)
Figure 36. Initialize Mapped Diagnostic Context (MDC)
-
generate a random/changing value using a 36 character UUID String
Example: UUID.randomUUID().toString()
⇒ d587d04c-9047-4aa2-bfb3-82b25524ce12
-
insert the value prior to the first logging statement — in the AppCommand @Component (see the sketch after this requirements list).
-
-
Declare a custom logging pattern in the
appenders
profile that includes the MDCrequestId
value in each log statements written by the FILE Appender-
The MDC
requestId
is only output by the FILE Appender. Encase the UUID within square [] brackets so that it can be found in a pattern search more easily. Example: [d587d04c-9047-4aa2-bfb3-82b25524ce12]
-
The MDC
requestId
is not output by the CONSOLE Appender
-
-
Add an additional
trace
profile that-
activates logging events at TRACE severity and up for all loggers with your application
-
adds method and line number information to all entries in the FILE Appender but not the CONSOLE Appender. Use a format of
method:lineNumber
in the output. Example: run:27
Optional: Try defining the logging pattern once with an optional property variable that can be used to add method and line number expression versus repeating the definition twice.
-
-
Apply the
appenders
profile to-
output logging events at INFO severity and up to both CONSOLE and FILE Appenders
-
include the MDC
requestId
in events logged by the FILE Appender -
not include method and line number information in events logged
-
-
Apply the
appenders
andtrace
profiles to-
output logging events at TRACE severity and up to both CONSOLE and FILE Appenders
-
continue to include the MDC
requestId
in events logged by the FILE Appender -
add method and line number information in events logged by the FILE Appender
-
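A minimal sketch of the MDC requirement. The property name requestId comes from the requirements; the placement shown is illustrative:
import java.util.UUID;
import org.slf4j.MDC;

public class AppCommand {
    public void run(String... args) {
        //set requestId before the first logging statement so FILE patterns can reference it
        MDC.put("requestId", UUID.randomUUID().toString());
    }
}
A Logback pattern for the FILE Appender could then reference the value with %X{requestId}, encased in square brackets as [%X{requestId}].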
121.3.4. Grading
Your solution will be evaluated on:
-
filter log events based on source and severity thresholds
-
whether your log events from the different Loggers were written to the required appenders
-
whether a log event instance appeared at most once per appender
-
-
customize log patterns
-
whether your FILE Appender output was augmented with the
requestId
whenappenders
profile was active -
whether your FILE Appender output was augmented with method and line number information when
trace
profile was active
-
-
customize appenders
-
whether a FILE and CONSOLE appender were defined
-
whether a custom logging pattern was successfully defined for the FILE Logger
-
-
add contextual information to log events using Mapped Diagnostic Context
-
whether a requestId was added to the Mapped Diagnostic Context (MDC)
-
whether the requestId was included in the customized logging pattern for the FILE Appender when the appenders profile was active
-
-
use Spring Profiles to conditionally configure logging
-
whether your required logging configurations were put in place when activating the appenders profile
-
whether your required logging configurations were put in place when activating the appenders and trace profiles
-
121.3.5. Other Details
-
You may use the default Spring Boot Logback definitions for the FILE and CONSOLE Appenders (i.e., include them in your logback configuration definition).
Included Default Spring Boot Logback definitions
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <include resource="org/springframework/boot/logging/logback/file-appender.xml"/>
    ...
-
Your appenders and trace profiles may re-define the logging pattern for the FILE Appender or add/adjust parameterized definitions. However, try to implement an optional parameterization as your first choice to keep from repeating the same definition.
-
The following snippets show example resulting logfiles from when the appenders and then the appenders,trace profiles were activated. Yours may look similar to the following:
Example target/logs/appenders.log - "appenders" profile active
$ rm target/logs/appenders.log
$ java -jar target/assignment1-*-logging-1.0-SNAPSHOT-bootexec.jar --spring.profiles.active=appenders
$ head target/logs/appenders.log
21:46:01.335 INFO -- [c934e045-1294-43c9-8d22-891eec2b8b84] (1) Y : HomeSales has started (2)
1 | requestId is supplied in all FILE output when appenders profile active
2 | no method and line number info supplied
Example target/logs/appenders.log - "appenders,trace" profiles active
$ rm target/logs/appenders.log
$ java -jar target/assignment1-*-logging-1.0-SNAPSHOT-bootexec.jar --spring.profiles.active=appenders,trace
$ head target/logs/appenders.log
21:47:33.784 INFO -- [0289d00e-5b28-4b01-b1d5-1ef8cf203d5d] (1) Y.run:27 : HomeSales has started (2)
1 | requestId is supplied in all FILE output when appenders profile active
2 | method and line number info are supplied
-
There is a set of unit integration tests provided in the support module. We will cover testing very soon. Enable them when you are ready to evaluate your results.
122. Assignment 1c: Testing
The following parts are broken into different styles of conducting a pure unit test and unit integration test — based on the complexity of the class under test. None of the approaches are deemed to be "the best" for all cases.
-
tests that run without a Spring context can run blazingly fast, but lack the target runtime container environment
-
tests that use Mocks keep the focus on the subject being tested, but don’t verify end-to-end integration
-
tests that assemble real components provide verification of end-to-end capability but can introduce additional complexities and performance costs
It is important that you come away knowing how to implement the different styles of unit testing so that they can be leveraged based on specific needs.
122.1. Demo
The assignment1-homesales-testing
assignment starter contains a @SpringBootApplication
main class and some demonstration code that will execute at startup when using the demo
profile.
$ mvn package -Pdemo
06:34:21.217 INFO -- BuyersServiceImpl : buyer added: BuyerDTO(id=null, firstName=warren, lastName=buffet, dob=1930-08-30)
06:34:21.221 INFO -- BuyersServiceImpl : invalid buyer: BuyerDTO(id=null, firstName=future, lastName=buffet, dob=2022-08-25), [buyer.dob: must be greater than 12 years]
You can follow that thread of execution through the source code to get better familiarity with the code you will be testing.
122.2. Unit Testing
122.2.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of implementing a unit test for a Java class. You will:
-
write a test case and assertions using JUnit 5 "Jupiter" constructs
-
leverage the AssertJ assertion library
-
execute tests using Maven Surefire plugin
122.2.2. Overview
In this portion of the assignment, you are going to implement a test case with 2 unit tests for a completed Java class.
The code under test is 100% complete and provided to you in a separate homesales-support-testing
module.
<dependency>
<groupId>info.ejava.assignments.testing.homesales</groupId>
<artifactId>homesales-support-testing</artifactId>
<version>${ejava.version}</version>
</dependency>
Your assignment will be implemented in a module you create, which forms a dependency on the implementation code.
122.2.3. Requirements
-
Start with a dependency on supplied and completed
BuyerValidatorImpl
andBuyerDTO
classes in thehomesales-support-testing
module. You only need to understand and test them. You do not need to implement or modify anything being tested.
-
BuyerValidatorImpl implements a validateNewBuyer method that returns a List<String> with identified validation error messages
-
BuyerDTO must have the following to be considered valid for registration:
-
null id
-
non-blank firstName and lastName
-
dob older than minAge
-
-
-
Implement a plain unit test case class for BuyerValidatorImpl
-
the test must be implemented without the use of a Spring context
-
all instance variables for the test case must come from plain POJO calls
-
tests must be implemented with JUnit 5 (Jupiter) constructs.
-
tests must be implemented using AssertJ assertions. Either BDD or regular form of assertions is acceptable.
-
-
The unit test case must have an init() method configured to execute "before each" test
-
this can be used to initialize variables prior to each test
-
-
The unit test case must have a test that verifies a valid BuyerDTO will be reported as valid.
-
The unit test case must have a test method that verifies an invalid BuyerDTO will be reported as invalid with a string message for each error.
-
Name the test so that it automatically gets executed by the Maven Surefire plugin (a sketch of such a test case appears after this list).
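The following is a sketch of the plain unit test described above. The validateNewBuyer method and List<String> return type come from the requirements; the BuyerValidatorImpl and BuyerDTO constructors shown are assumptions — consult the homesales-support-testing module for the actual signatures:

import static org.assertj.core.api.Assertions.assertThat;
import java.time.LocalDate;
import java.util.List;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

public class BuyerValidatorTest { //"*Test" suffix gets picked up by Surefire
    private BuyerValidatorImpl validator;

    @BeforeEach
    void init() {
        validator = new BuyerValidatorImpl(); //plain POJO call, no Spring context; ctor args assumed
    }

    @Test
    void valid_buyer_reported_as_valid() {
        BuyerDTO valid = new BuyerDTO(null, "warren", "buffet", LocalDate.of(1930, 8, 30)); //assumed ctor
        List<String> errors = validator.validateNewBuyer(valid);
        assertThat(errors).isEmpty();
    }

    @Test
    void invalid_buyer_reported_with_messages() {
        BuyerDTO invalid = new BuyerDTO(1L, "", "", LocalDate.now()); //violates each validity rule
        List<String> errors = validator.validateNewBuyer(invalid);
        assertThat(errors).isNotEmpty();
    }
}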
122.2.4. Grading
Your solution will be evaluated on:
-
write a test case and assertions using JUnit 5 "Jupiter" constructs
-
whether you have implemented a pure unit test absent of any Spring context
-
whether you have used JUnit 5 versus JUnit 4 constructs
-
whether your init() method was configured to be automatically called "before each" test
-
whether you have tested with a valid and invalid BuyerDTO and verified results where appropriate
-
-
leverage AssertJ assertion libraries
-
whether you have used assertions to identify pass/fail
-
whether you have used the AssertJ assertions
-
-
execute tests using Maven Surefire plugin
-
whether your unit test is executed by Maven surefire during a build
-
122.2.5. Additional Details
-
A quick start project is available in homesales-starter/assignment1-homesales-testing, but
-
copy the module into your own area
-
modify at least the Maven groupId and Java package when used
-
-
You are expected to form a dependency on the homesales-support-testing module. The only things present in your src/main would be demonstration code that is supplied to you in the starter — but not part of any requirement.
122.3. Mocks
122.3.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of instantiating a Mock as part of unit testing. You will:
-
implement a mock (using Mockito) into a JUnit unit test
-
define custom behavior for a mock
-
capture and inspect calls made to mocks by subjects under test
122.3.2. Overview
In this portion of the assignment, you are going to again implement a unit test case for a class and use a mock for one of its dependencies.
122.3.3. Requirements
-
Start with a dependency on the supplied and completed BuyersServiceImpl and other classes in the homesales-support-impl module. You only need to understand and test them. You do not need to implement or modify anything being tested.
-
BuyersServiceImpl implements a createBuyer method that
-
validates the buyer using a BuyerValidator instance
-
assigns the id if valid
-
throws an exception with the error messages from the validator if invalid
-
-
-
Implement a unit test case for the BuyersService to verify validation for a valid and invalid BuyerDTO
-
the test case must be implemented without the use of a Spring context
-
all instance variables for the test case, except for the mock, must come from plain POJO calls
-
tests must be implemented using AssertJ assertions. Either BDD or regular form of assertions is acceptable.
-
a Mockito Mock must be used for the BuyerValidator instance. You may not use the BuyerValidatorImpl class as part of this test
-
-
The unit test case must have an init() method configured to run "before each" test and initialize the BuyersServiceImpl with the Mock instance for BuyerValidator.
The unit test case must have a test that verifies a valid registration will be handled as valid.
-
configure the Mock to return an empty List<String> when asked to validate the buyer. Understand how the default Mock behaves before going too far with this.
-
programmatically verify the Mock was called to validate the BuyerDTO as part of the test criteria
-
-
The unit test case must have a test method that verifies an invalid registration will be reported with an exception.
-
configure the Mock to return a List<String> with errors for the buyer
-
programmatically verify the Mock was called to validate the BuyerDTO as part of the test criteria
-
-
Name the test so that it automatically gets executed by the Maven Surefire plugin (a sketch appears after the note below).
This assignment is not to test the Mock. It is a test of the Subject using a Mock
You are not testing or demonstrating the Mock.
Assume the Mock works and use the capabilities of the Mock to test the subject(s) they are injected into.
Place any experiments with the Mock in a separate Test Case and keep this assignment focused on testing the subject (with the functioning Mock).
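A sketch of this style of test follows, assuming BuyersServiceImpl accepts its BuyerValidator through the constructor and throws a RuntimeException-based exception for invalid buyers (verify both assumptions against the support module; makeValidBuyer() is a hypothetical helper):

import static org.assertj.core.api.Assertions.assertThatThrownBy;
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.verify;
import java.util.List;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
public class BuyersServiceMockTest {
    @Mock
    private BuyerValidator validatorMock; //Mockito supplies the dependency

    private BuyersServiceImpl subject;

    @BeforeEach
    void init() {
        subject = new BuyersServiceImpl(validatorMock); //assumed constructor injection
    }

    @Test
    void valid_buyer_registered() {
        BuyerDTO buyer = makeValidBuyer(); //hypothetical helper building a valid BuyerDTO
        given(validatorMock.validateNewBuyer(buyer)).willReturn(List.of()); //no errors => valid
        subject.createBuyer(buyer);
        verify(validatorMock).validateNewBuyer(buyer); //inspect call made by the subject
    }

    @Test
    void invalid_buyer_reported_with_exception() {
        BuyerDTO buyer = makeValidBuyer();
        given(validatorMock.validateNewBuyer(buyer)).willReturn(List.of("buyer.dob: error"));
        assertThatThrownBy(() -> subject.createBuyer(buyer))
                .isInstanceOf(RuntimeException.class); //actual exception type defined by support module
        verify(validatorMock).validateNewBuyer(buyer);
    }
}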
122.3.4. Grading
Your solution will be evaluated on:
-
implement a mock (using Mockito) into a JUnit unit test
-
whether you used a Mock to implement the BuyerValidator as part of this unit test
-
whether you used a Mockito Mock
-
whether your unit test is executed by Maven surefire during a build
-
-
define custom behavior for a mock
-
whether you successfully configured the Mock to return an empty collection for the valid buyer
-
whether you successfully configured the Mock to return a collection of error messages for the invalid buyer
-
-
capture and inspect calls made to mocks by subjects under test
-
whether you programmatically checked that the Mock validation method was called as a part of registration using Mockito library calls
-
122.3.5. Additional Details
-
This portion of the assignment is expected to primarily consist of one additional test case added to the src/test tree.
-
You may use BDD or non-BDD syntax for this test case and tests.
122.4. Mocked Unit Integration Test
122.4.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of implementing a unit integration test using a Spring context and Mock beans. You will:
-
implement unit integration tests within Spring Boot
-
implement mocks (using Mockito) into a Spring context for use with unit integration tests
122.4.2. Overview
In this portion of the assignment, you are going to implement a Mock bean that will be injected by Spring into both the BuyersServiceImpl @Component (for operational functionality) and the unit integration test (for configuration and inspection commands).
122.4.3. Requirements
-
Start with a supplied, completed, and injectable 'BuyersServiceImpl' versus instantiating one like you did in the pure unit tests.
-
Implement a unit integration test for the BuyersService for a valid and invalid BuyerDTO
-
the test must be implemented using a Spring context
-
all instance variables for the test case must come from injected components — even trivial ones.
-
the BuyerValidator must be implemented as a Mockito Mock/Spring bean, injected into the BuyersServiceImpl @Component, and accessible in the unit integration test. You may not use the BuyerValidatorImpl class as part of this test.
-
define and inject a BuyerDTO for a valid buyer as an example of a bean that is unique to the test. This can come from a @Bean factory
-
-
The unit integration test case must have a test that verifies a valid registration will be handled as valid.
-
The unit integration test case must have a test method that verifies an invalid registration will be reported with an exception.
-
Name the unit integration test so that it automatically gets executed by the Maven Surefire plugin (a sketch appears below).
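A sketch of the Spring context version follows — @MockBean replaces the validator bean in the context, and the valid BuyerDTO comes from a @TestConfiguration @Bean factory. The BuyerDTO constructor and createBuyer return value are assumptions:

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.BDDMockito.given;
import java.time.LocalDate;
import java.util.List;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.context.annotation.Bean;

@SpringBootTest //classes=... may be needed to identify the application configuration
public class BuyersServiceNTest {
    @TestConfiguration
    static class BuyersTestConfiguration {
        @Bean
        BuyerDTO validBuyer() { //bean unique to the test
            return new BuyerDTO(null, "warren", "buffet", LocalDate.of(1930, 8, 30)); //assumed ctor
        }
    }

    @Autowired
    private BuyersService subject; //real @Component injected by Spring
    @MockBean
    private BuyerValidator validatorMock; //Mockito mock registered as a Spring bean
    @Autowired
    private BuyerDTO validBuyer; //injected from the @TestConfiguration above

    @Test
    void valid_registration_handled_as_valid() {
        given(validatorMock.validateNewBuyer(validBuyer)).willReturn(List.of());
        assertThat(subject.createBuyer(validBuyer)).isNotNull(); //assumes registered buyer is returned
    }
}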
122.4.4. Grading
Your solution will be evaluated on:
-
implement unit integration tests within Spring Boot
-
whether you implemented a test case that instantiated a Spring context
-
whether the subject(s) and their dependencies were injected by the Spring context
-
whether the test case verified the requirements for a valid and invalid input
-
whether your unit test is executed by Maven surefire during a build
-
-
implement mocks (using Mockito) into a Spring context for use with unit integration tests
-
whether you successfully declared a Mock bean that was injected into the necessary components under test and the test case for configuration
-
122.4.5. Additional Details
-
This portion of the assignment is expected to primarily consist of
-
adding one additional test case to the src/test tree
adding any supporting
@TestConfiguration
or other artifacts required to define the Spring context for the test -
changing the Mock to work with the Spring context
-
-
Anything you may have been tempted to simply instantiate as private X x = new X(); can be changed to an injection by adding a @(Test)Configuration/@Bean factory to support testing. The point of having the 100% injection requirement is to encourage encapsulation and reuse among Test Cases for all types of test support objects.
-
You may add the BuyersTestConfiguration to the Spring context using either of the two annotation properties
@SpringBootTest.classes
-
@Import.value
-
-
You may want to experiment with applying @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) versus the default @Scope(ConfigurableBeanFactory.SCOPE_SINGLETON) to your injected Buyer and generate a random name in the @Bean factory. Every injected SCOPE_SINGLETON (default) bean gets the same instance. SCOPE_PROTOTYPE gets a separate instance. Useful to know, but not a graded part of the assignment.
122.5. Unmocked/BDD Unit Integration Testing
122.5.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of conducting an end-to-end unit integration test that is completely integrated with the Spring context and using Behavior Driven Design (BDD) syntax. You will:
-
make use of BDD acceptance test keywords
122.5.2. Overview
In this portion of the assignment, you are going to implement an end-to-end unit integration test case for two classes integrated/injected using the Spring context with the syntactic assistance of BDD-style naming.
122.5.3. Requirements
-
Start with a supplied, completed, and injectable BuyersServiceImpl by creating a dependency on the homesales-support-testing module. There are to be no POJO or Mock implementations of any classes under test.
-
Implement a unit integration test for the BuyersService for a valid and invalid BuyerDTO
-
the test must be implemented using a Spring context
-
all instance variables for the test case must come from injected components
-
the BuyerValidator must be injected into the BuyersServiceImpl using the Spring context. Your test case will not need access to that component.
-
define and inject a BuyerDTO for a valid buyer as an example of a bean that is unique to the test. This can come from a @Bean factory from a Test Configuration
-
-
The unit integration test case must have
-
a display name defined for this test case that includes spaces
-
a display name generation policy for contained test methods that includes spaces
-
-
The unit integration test case must have a test that verifies a valid registration will be handled as valid.
-
use BDD (then()) alternative syntax for AssertJ assertions
-
-
The unit integration test case must have a test method that verifies an invalid registration will be reported with an exception.
-
use BDD (then()) alternative syntax for AssertJ assertions
-
-
Name the unit integration test so that it automatically gets executed by the Maven Surefire plugin (a sketch appears below).
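A sketch of the BDD naming and assertion constructs follows; the injected beans mirror the previous sketch, and the display names are only illustrative:

import static org.assertj.core.api.BDDAssertions.then;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.DisplayNameGeneration;
import org.junit.jupiter.api.DisplayNameGenerator;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
@DisplayName("Buyers Service end-to-end") //test case display name with spaces
@DisplayNameGeneration(DisplayNameGenerator.ReplaceUnderscores.class) //method names get spaces
public class BuyersServiceBddNTest {
    @Autowired
    private BuyersService subject; //fully assembled by the Spring context, no Mocks
    @Autowired
    private BuyerDTO validBuyer; //from a @TestConfiguration @Bean factory

    @Test
    void valid_buyer_gets_registered() {
        BuyerDTO result = subject.createBuyer(validBuyer); //assumes registered buyer is returned
        then(result).isNotNull(); //BDD form of AssertJ assertion
    }
}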
122.5.4. Grading
Your solution will be evaluated on:
-
make use of BDD acceptance test keywords
-
whether you provided a custom display name for the test case that included spaces
-
whether you provided a custom test method naming policy that included spaces
-
whether you used BDD syntax for AssertJ assertions
-
122.5.5. Additional Details
-
This portion of the assignment is expected to primarily consist of adding a test case that
-
is based on the Mocked Unit Integration Test solution, which relies primarily on the beans of the Spring context
-
removes any Mocks
-
defines display names and naming policies for JUnit
-
changes AssertJ syntax to BDD form
-
-
The "custom test method naming policy" can be set using either an @Annotation or property. The properties approach has the advantage of being global to all tests within the module.
HTTP API
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
123. Introduction
123.1. Goals
The student will learn:
-
how the WWW defined an information system capable of implementing system APIs
-
identify key differences between a truly RESTful API and REST-like or HTTP-based APIs
-
how systems and some actions are broken down into resources
-
how web interactions are targeted at resources
-
standard HTTP methods and the importance to use them as intended against resources
-
individual method safety requirements
-
value in creating idempotent methods
-
standard HTTP response codes and response code families to respond in specific circumstances
123.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
identify API maturity according to the Richardson Maturity Model (RMM)
-
identify resources
-
define a URI for a resource
-
define the proper method for a call against a resource
-
identify safe and unsafe method behavior
-
identify appropriate response code family and value to use in certain circumstances
124. World Wide Web (WWW)
The World Wide Web (WWW) is an information system of web resources identified by Uniform Resource Locators (URLs) that can be interlinked via hypertext and transferred using Hypertext Transfer Protocol (HTTP). [18] Web resources started out being documents to be created, downloaded, replaced, and removed but have progressed to being any identifiable thing — whether it be the entity (e.g., person), something related to that entity (e.g., photo), or an action (e.g., change of address). [19]
124.1. Example WWW Information System
The example information system below is of a standard set of content types, accessed through a standard set of methods, and related through location-independent links using URLs.
125. REST
Representational State Transfer (REST) is an architectural style for creating web services; web services that conform to this style are considered "RESTful" web services [20]. REST was defined in 2000 by Roy Fielding in his doctoral dissertation, which was also used to design HTTP 1.1. [21] REST relies heavily on the concepts and implementations used in the World Wide Web — which centers around web resources addressable using URIs.
125.1. HATEOAS
At the heart of REST is the notion of hyperlinks to represent state. For example, the presence of an address_change link may mean the address of a person can be changed and the client accessing that person representation is authorized to initiate the change. The presence of current_address and addresses links identifies how the client can obtain the current and past addresses for the person. This is a shallow description of what is defined as "Hypermedia As The Engine Of Application State" (HATEOAS).
The interface contract allows clients to dynamically determine current capabilities of a resource and the resource to add capabilities over time.
125.2. Clients Dynamically Discover State
HATEOAS permits the capabilities of client and server to advance independently through the dynamic discovery of links. [22]
125.3. Static Interface Contracts
Dynamic discovery differs significantly from remote procedure call (RPC) techniques where static interface contracts are documented in detail to represent a certain level of capability offered by the server and understood by the client. A capability change rollout under the RPC approach may require coordination between all clients involved.
125.4. Internet Scale
As clients morph from a few, well-known sources to millions of lightweight apps running on end-user devices — the need to decouple service capability deployments through dynamic discovery becomes more important. Many features of REST provide this trait.
Do you have control of when clients update?
Design interfaces, clients, and servers with forward and backward compatibility in mind to allow for flexible rollout with minimal downtime.
125.5. How RESTful?
Many of the open and interfacing concepts of the WWW are attractive to today’s service interface designers. However, implementing dynamic discovery is difficult — potentially making systems more complex and costly to develop. REST officially contains more than most interface designs use or possibly need to use. This causes development efforts to take only what they need — and triggers some common questions:
What is your definition of REST?
How RESTful are you?
125.6. Buzzword Association
For many developers and product advertisements eager to get their names associated with a modern and successful buzzword — REST to them is (incorrectly) anything using HTTP that is not SOAP. For others, their version of REST is (still incorrectly) anything that embraces much of the WWW but still lacks the rigor of making the interfaces dynamic through hyperlinks.
This places us in a state where most of the world refers to something as REST and RESTful when what they have is far from the official definition.
125.7. REST-like or HTTP-based
Giving a nod to this situation, we might use a few other terms:
-
REST-like
-
HTTP-based
Better yet and for more precise clarity of meaning, I like the definitions put forward in the Richardson Maturity Model (RMM).
125.8. Richardson Maturity Model (RMM)
The Richardson Maturity Model (RMM) was developed by Leonard Richardson and breaks down levels of RESTful maturity. [23] Some of the old CORBA and XML RPC approaches qualify for Level 0 only for the fact they adopt HTTP. However, they tunnel past many WWW features in spite of using HTTP. Many modern APIs achieve some level of compliance with Levels 1 and 2, but rarely achieve Level 3. However, that is okay because as you will see in the following sections — there are many worthwhile features in Level 2 without adding the complexity of HATEOAS.
Level 3 | Hypermedia Controls (HATEOAS)
Level 2 | HTTP Verbs
Level 1 | Resources
Level 0 | The Swamp of POX (HTTP used only as a transport)
125.9. "REST-like"/"HTTP-based" APIs
Common "REST-like" or "HTTP-based" APIs are normally on a trajectory to strive for RMM Level 2 and are based on a few main principals included within the definition of REST.
-
HTTP Protocol
-
Resources
-
URIs
-
Standard HTTP Method and Status Code Vocabulary
-
Standard Content Types for Representations
125.10. Uncommon REST Features Adopted
Links are used somewhat. However, they are rarely used in an opaque manner, rarely used within payloads, and rarely used with dynamic discovery. Clients commonly know the resources they are communicating with ahead of time and build URIs to those resources based on exposed details of the API and IDs returned in earlier responses. That is technically not a RESTful way to do things.
126. RMM Level 2 APIs
Although I will commonly hear projects state that they implement a "REST" interface (and sometimes use the term to mean "HTTP without SOAP"), I have rarely found a project that strives for dynamic discovery of resource capabilities as depicted by Roy Fielding and categorized by RMM Level 3.
These APIs try to make the most of HTTP and the WWW, thus at least making the term "HTTP-based" appropriate and RMM-level 2 a more accurate description. Acknowledging that there is technically one definition of REST and very few attempting to (or needing to) achieve it — I will be targeting RMM Level 2 for the web service interfaces developed in this course and will generically refer to them as "APIs".
At this point, let's cover some of the key points of an RMM Level 2 API that I will be covering as a part of the course.
127. HTTP Protocol Embraced
Various communications protocols have been transport-agnostic. If you are old enough to remember SOAP, you will have seen references to it being mapped to protocols other than HTTP (e.g., SOAP over JMS), and its use of HTTP lacked any leverage of WWW HTTP capabilities.
For SOAP and many other RPC protocols operating over HTTP — communication was tunnelled through HTTP POST messages, bypassing investments made in the existing and robust WWW infrastructure. For example, many requests for the same status of the same resource tunnelled through POST messages would need to be answered again-and-again by the service. To fully leverage HTTP client-side and server-side caches, an alternative approach of exposing the status as a GET of a resource would save the responding service a lot of unnecessary work and speed up the client.
REST communication technically does not exist outside of the HTTP transport protocol. Everything is expressed within the context of HTTP, leveraging the investment into the world’s largest information system.
128. Resource
By the time APIs reach RMM Level 1 compliance, service domains have been broken down into key areas, known as resources. These are largely noun-based (e.g., Documents, People, Companies), lower-level properties, or relationships. However, they can also include actions or long-running activities, so that clients can initiate them, monitor their status, and possibly perform some type of control.
Nearly anything can be made into a resource. HTTP has a limited number of methods but can have an unlimited number of resources. Some examples could be:
-
products
-
categories
-
customers
-
todos
128.1. Nested Resources
Resources can be nested under parent or related resources.
-
categories/{id}
-
categories/{id}/products
-
todos/{name}
-
todos/{name}/items
129. Uniform Resource Identifiers (URIs)
Resources are identified using Uniform Resource Identifier (URIs).
A URI is a compact sequence of characters that identifies an abstract or physical resource. [24]
URIs have a generic syntax composed of several components and are specialized by individual schemes (e.g., http, mailto, urn). The precise generic URI and scheme-specific rules guarantee uniformity of addresses.
https://github.com/spring-projects/spring-boot/blob/master/LICENSE.txt#L6 (1)
mailto:joe@example.com?cc=bob@example.com&body=hello (2)
urn:isbn:0-395-36341-1 (3)
129.1. Related URI Terms
There are a few terms commonly associated with URI.
- Uniform Resource Locator (URL)
-
URLs are a subset of URIs that provide a means to locate a specific resource by specifying primary address mechanism (e.g., network location). [24]
- Uniform Resource Name (URN)
-
URNs are used to identify resources without location information. They are a particular URI scheme. One common use of a URN is to define an XML namespace, e.g., <core xmlns="urn:activemq:core">.
- URI reference
-
legal way to specify a full or relative URI
- Base URI
-
leading components of the URI that form a base for additional layers of the tree to be appended
129.2. URI Generic Syntax
URI components are listed in hierarchical significance — from left to right — allowing for scheme-independent references to be made between resources in the hierarchy. The generic URI syntax and components are as follows: [26]
URI = scheme:[//authority]path[?query][#fragment]
The authority component breaks down into subcomponents as follows:
authority = [userinfo@]host[:port]
Scheme |
sequence of characters, beginning with a letter followed by letters, digits, plus (+), period, or hyphen(-) |
Authority |
naming authority responsible for the remainder of the URI |
User |
how to gain access to the resource (e.g., username) - rare, authentication use deprecated |
Host |
case-insensitive DNS name or IP address |
Port |
port number to access authority/host |
Path |
identifies a resource within the scope of a naming authority. Terminated by the first question mark ("?"), pound sign ("#"), or end of URI. When the authority is present, the path must begin with a slash ("/") character |
Query |
indicated with first question mark ("?") and ends with pound sign ("#") or end of URI |
Fragment |
indicated with a pound("#") character and ends with end of URI |
129.3. URI Component Examples
The following shows the earlier URI examples broken down into components.
https://github.com/spring-projects/spring-boot/blob/master/LICENSE.txt#L6 (scheme: https, authority: github.com, path: /spring-projects/spring-boot/blob/master/LICENSE.txt, fragment: L6)
Path cannot begin with the two slash ("//") character string when the authority is not present.
mailto:joe@example.com?cc=bob@example.com&body=hello (scheme: mailto, path: joe@example.com, query: cc=bob@example.com&body=hello)
urn:isbn:0-395-36341-1 (scheme: urn, path: isbn:0-395-36341-1)
129.4. URI Characters and Delimiters
URI characters are encoded using UTF-8. Component delimiters are slash ("/"), question mark ("?"), and pound sign ("#"). Many of the other special characters are reserved for use in delimiting the sub-components.
reserved general delimiters: : / @ [ ] ? (1)
1 | square brackets ("[]") are used to surround newer (e.g., IPv6) network addresses
reserved sub-delimiters: ! $ & ' ( ) * + , ; =
unreserved characters: alpha (A-Z, a-z), digit (0-9), dash (-), period (.), underscore (_), tilde (~)
129.5. URI Percent Encoding
Percent encoding (case-insensitive) is used to represent characters reserved for delimiters or other purposes (e.g., %x2f and %x2F both represent the slash ("/") character). Unreserved characters should not be encoded.
https://www.google.com/search?q=this+%2F+that (1)
1 | slash("/") character is Percent Encoded as %2F |
129.6. URI Case Sensitivity
Generic components like scheme and authority are case-insensitive but normalize to lowercase. Other components of the URI are assumed to be case-sensitive.
HTTPS://GITHUB.COM/SPRING-PROJECTS/SPRING-BOOT (1) https://github.com/SPRING-PROJECTS/SPRING-BOOT (2)
1 | value pasted into browser |
2 | value normalized by browser |
129.7. URI Reference
Many times we need to reference a target URI and do so without specifying the complete URI. A URI reference can be the full target URI or a relative reference. A relative reference allows for a set of resources to reference one another without specifying a scheme or upper parts of the path. This also allows entire resource trees to be relocated without having to change relative references between them.
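The JDK's java.net.URI class implements this resolution logic; a minimal sketch using the GitHub URI from the examples:

import java.net.URI;

public class UriResolveDemo {
    public static void main(String[] args) {
        URI base = URI.create("https://github.com/spring-projects/spring-boot/blob/master/");
        //relative-path reference resolved against the base URI
        System.out.println(base.resolve("LICENSE.txt"));
        //absolute-path reference replaces the entire path of the base
        System.out.println(base.resolve("/spring-projects/spring-boot"));
        //prints:
        // https://github.com/spring-projects/spring-boot/blob/master/LICENSE.txt
        // https://github.com/spring-projects/spring-boot
    }
}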
129.8. URI Reference Terms
- target uri
-
the URI being referenced
Example Target URI: https://github.com/spring-projects/spring-boot/blob/master/LICENSE.txt#L6
- network-path reference
-
relative reference starting with two slashes ("//"). My guess is that this would be useful in expressing a URI to forward to without wishing to express http versus https (i.e., "use the same scheme used to contact me")
Example Network Path Reference: //github.com/spring-projects/spring-boot/blob/master/LICENSE.txt#L6
- absolute-path reference
-
relative reference that starts with a slash ("/"). This will be a portion of the URI that our API layer will be well aware of.
Example Absolute Path Reference: /spring-projects/spring-boot/blob/master/LICENSE.txt#L6
- relative-path reference
-
relative reference that does not start with a slash ("/"). First segment cannot have a ":" — avoid confusion with scheme by prepending a "./" to the path. This allows us to express the branch of a tree from a point in the path.
Example Relative Path References: spring-boot/blob/master/LICENSE.txt#L6, LICENSE.txt#L6, ../master/LICENSE.txt#L6
- same-document reference
-
relative reference that starts with a pound ("#") character, supplying a fragment identifier hosted in the current URI.
Example Same Document Reference: #L6
- base URI
-
leading components of the URI that form a base for additional layers of the tree to be appended
Example Base URIs: https://github.com/spring-projects, /spring-projects
129.9. URI Naming Conventions
Although URI specifications do not list path naming conventions and REST promotes opaque URIs — it is a common practice to name resource collections with a URI path that ends in a plural noun. The following are a few example absolute URI path references.
/api/products (1)
/api/categories
/api/customers
/api/todo_lists
1 | URI paths for resource collections end with a plural noun |
Individual resource URIs are identified by an external identifier below the parent resource collection.
/api/products/{productId} (1)
/api/categories/{categoryId}
/api/customers/{customerId}
/api/customers/{customerId}/sales
1 | URI paths for individual resources are scoped below parent resource collection URI |
Nested resource URIs are commonly expressed as resources below their individual parent.
/api/products/{productId}/instructions (1)
/api/categories/{categoryId}/products
/api/customers/{customerId}/purchases
/api/todo_lists/{listName}/todo_items
1 | URI paths for resources of parent are commonly nested below parent URI |
129.10. URI Variables
The query at the end of the URI path can be used to express optional and mandatory arguments. This is commonly used when performing searches.
http://127.0.0.1:8080/jaxrsInventoryWAR/api/categories?name=&offset=0&limit=0
name=(null)
offset=>0
limit=>0
Nested path parameters may express mandatory arguments.
http://127.0.0.1:8080/jaxrsInventoryWAR/api/products/{id}
http://127.0.0.1:8080/jaxrsInventoryWAR/api/products/1
id=>1
130. Methods
HTTP contains a bounded set of methods that represent the "verbs" of what we are communicating relative to the resource. The bounded set provides a uniform interface across all resources.
There are four primary methods that you will see in most tutorials, examples, and application code.
GET | obtain a representation of resource using a non-destructive read
POST | create a new resource or tunnel a command to an existing resource
PUT | create a new resource having a well-known identity or replace existing
DELETE | delete target resource
GET http://127.0.0.1:8080/jaxrsInventoryWAR/api/products/1
130.1. Additional HTTP Methods
There are two additional methods useful for certain edge conditions implemented by application code.
HEAD | logically equivalent to a GET, but without the response body returned
PATCH | partial replace. Similar to PUT, but indicates payload provided does not represent the entire resource and may be represented as instructions of modifications to make. Useful hint for intermediate caches
There are three more obscure methods used for debug and communication purposes.
OPTIONS | generates a list of methods supported for resource
TRACE | echo received request back to caller to check for changes
CONNECT | used to establish an HTTP tunnel — to proxy communications
131. Method Safety
Proper execution of the internet protocols relies on proper outcomes for each method. With the potential of client-side proxies and server-side reverse proxies in the communications chain — one needs to pay attention to what can and should not change the state of a resource. "Method Safety" is a characteristic used to describe whether a method executed against a specific resource modifies that resource or has visible side effects.
131.1. Safe and Unsafe Methods
The following methods are considered "Safe" — thus calling them should not modify a resource and will not invalidate any intermediate cache.
-
GET
-
HEAD
-
OPTIONS
-
TRACE
The following methods are considered "Unsafe" — thus calling them is assumed to modify the resource and will invalidate any intermediate cache.
-
POST
-
PUT
-
PATCH
-
DELETE
-
CONNECT
131.2. Violating Method Safety
Do not violate default method safety expectations
Internet communications is based upon assigned method safety expectations. However, these are just definitions. Your application code has the power to implement resource methods any way you wish and to knowingly or unknowingly violate these expectations. Learn the expected characteristics of each method and abide by them or risk having your API not immediately understood and render built-in Internet capabilities (e.g., caches) useless. The following are examples of what not to do: Example Method Saftey Violations
|
132. Idempotent
Idempotence describes a characteristic where a repeated event produces the same outcome every time it is executed. This is a very important concept in distributed systems that commonly have to implement eventual consistency — where failure recovery can cause unacknowledged commands to be executed multiple times.
The idempotent characteristic is independent of method safety. Idempotence only requires that the same result state be achieved each time called.
132.1. Idempotent and non-Idempotent Methods
The application code implementing the following HTTP methods should strive to be idempotent.
-
GET
-
PUT
-
DELETE
-
HEAD
-
OPTIONS
The following HTTP methods are defined to not be idempotent.
-
POST
-
PATCH
-
CONNECT
Relationship between Idempotent and browser page refresh warnings?
The standard convention of Internet protocol is that most methods except for POST are assumed to be idempotent. That means a page obtained from a GET gets immediately refreshed on a page refresh, while a warning dialog is displayed if the page was the result of a POST.
133. Response Status Codes
Each HTTP response is accompanied by a standard HTTP status code. This is a value that tells the caller whether the request succeeded or failed and a type of success or failure.
Status codes are separated into five (5) categories:
-
1xx - informational responses
-
2xx - successful responses
-
3xx - redirected responses
-
4xx - client errors
-
5xx - server errors
133.1. Common Response Status Codes
The following are common response status codes
Code | Name | Meaning
---|---|---
200 | OK | "We achieved what you wanted - may have previously done this"
201 | CREATED | "We did what you asked and a new resource was created"
202 | ACCEPTED | "We officially received your request and will begin processing it later"
204 | NO_CONTENT | "Just like a 200 with an empty payload, except the status makes this clear"
400 | BAD_REQUEST | "I do not understand what you said and never will"
401 | UNAUTHORIZED | "We need to know who you are before we do this"
403 | FORBIDDEN | "We know who you are and you cannot say what you just said"
422 | UNPROCESSABLE_ENTITY | "I understood what you said, but you said something wrong"
500 | INTERNAL_ERROR | "Ouch! Nothing wrong with what you asked for or supplied, but we currently have issues completing. Try again later and we may have this fixed."
134. Representations
Resources may have multiple independent representations. There is no direct tie between the data format received from clients, returned to clients, or managed internally. Representations are exchanged using standard MIME or Media types. Common media types for information include
-
application/json
-
application/xml
-
text/plain
Common media types for raw images include
-
image/jpeg
-
image/png
134.1. Content Type Headers
Clients and servers specify the type of content requested or supplied in header fields.
Accept | defines a list of media types the client understands, in priority order
Content-Type | identifies the format for data supplied in the payload
In the following example, the client supplies a representation in
text/plain
and requests a response in XML or JSON — in that priority order.
The client uses the Accept header to express which media types it can handle
and both use the Content-Type to identify the media type of what was provided.
> POST /greeting/hello
> Accept: application/xml,application/json
> Content-Type: text/plain
hi

< 200/OK
< Content-Type: application/xml
<greeting type="hello" value="hi"/>
The next exchange is similar to the previous example, with the exception that the client provides no payload and requests JSON or anything else (in that priority order) using the Accept header. The server returns a JSON response and identifies the media type using the Content-Type header.
> GET /greeting/hello?name=jim
> Accept: application/json,*/*

< 200/OK
< Content-Type: application/json
{ "msg" : "hi, jim" }
135. Links
RESTful applications dynamically express their state through the use of hyperlinks. That is an RMM Level 3 use of links. As mentioned earlier, REST-like APIs do not include that level of complexity. If they do use links, these links will likely be constrained to standard response headers.
The following is an example partial POST response with links expressed in the header.
POST http://localhost:8080/ejavaTodos/api/todo_lists
{"name":"My First List"}
=> Created/201
Location: http://localhost:8080/ejavaTodos/api/todo_lists/My%20First%20List (1)
Content-Location: http://localhost:8080/ejavaTodos/api/todo_lists/My%20First%20List (2)
1 | Location expresses the URI to the resource just acted upon |
2 | Content-Location expresses the URI of the resource represented in the payload |
136. Summary
In this module we learned that:
-
technically — terms "REST" and "RESTful" have a specific meaning defined by Roy Fielding
-
the Richardson Maturity Model (RMM) defines several levels of compliance with RESTful concepts, with level 3 being RESTful
-
very few APIs achieve full RMM level 3 RESTful adoption
-
but that is OK!!! — there are many useful and powerful WWW constructs easily made available before reaching RMM level 3
-
can be referred to as "REST-like", "HTTP-based", or "RMM level 2"
-
marketers of the world attempting to leverage a buzzword, will still call them REST APIs
-
-
most serious REST-like APIs adopt
-
HTTP
-
multiple resources identified through URIs
-
HTTP-compliant use of methods and status codes
-
method implementations that abide by defined safety and idempotent characteristics
-
standard resource representation formats like JSON, XML, etc.
-
Spring MVC
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
137. Introduction
You learned the meaning of web APIs and supporting concepts in the previous lecture. This module is an introductory lesson to get started implementing some of those concepts. Since this lecture is primarily implementation, I will use a set of simplistic remote procedure calls (RPC) that are far from REST-like and place the focus on making and mapping to HTTP calls from clients to services using Spring and Spring Boot.
137.1. Goals
The student will learn to:
-
identify two primary paradigms in today’s server logic: synchronous and reactive
-
develop a service accessed via HTTP
-
develop a client to an HTTP-based service
-
access HTTP response details returned to the client
-
explicitly supply HTTP response details in the service code
137.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
identify the difference between the Spring MVC and Spring WebFlux frameworks
-
identify the difference between synchronous and reactive approaches
-
identify reasons to choose synchronous or reactive
-
implement a service method with Spring MVC synchronous annotated controller
-
implement a client using Spring MVC RestTemplate
-
implement a client using Spring Webflux in synchronous mode
-
pass parameters between client and service over HTTP
-
return HTTP response details from service
-
access HTTP response details in client
138. Spring Web APIs
There are two primary, overlapping frameworks within Spring for developing HTTP-based APIs:
Spring MVC is the legacy framework that operates using synchronous, blocking request/reply constructs. Spring WebFlux is the follow-on framework that builds on Spring MVC by adding asynchronous, non-blocking constructs that are in line with the reactive streams paradigm.
138.1. Lecture/Course Focus
The focus of this lecture, module, and early portions of the course will be on synchronous communications patterns. The synchronous paradigm is simpler and there are a lot of API concepts to cover before worrying about managing the asynchronous streams of the reactive programming model. In addition to reactive concepts, Spring WebFlux brings in a heavy dose of Java 8 lambdas and functional programming that should only be applied once we master more of the API concepts.
However, we need to know the two approaches exist in order to make sense of
the software and available documentation. For example, the client-side of
Spring MVC (i.e.,
RestTemplate
) has been put in "maintenance mode" (minor changes and bug fixes only), with its duties fulfilled by Spring WebFlux (i.e.,
WebClient
). Therefore, I will be demonstrating synchronous client concepts
using both libraries to help bridge the transition.
WebClient examples demonstrated here are intentionally synchronous
Examples of Spring WebFlux’s WebClient will be demonstrated as a synchronous replacement for Spring MVC RestTemplate. Details of the reactive API will not be covered.
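As a preview, the following sketch shows WebClient used in that synchronous mode against the sayHi endpoint developed later in this lecture; block() bridges the reactive call back to a plain return value:

import org.springframework.web.reactive.function.client.WebClient;

//a minimal sketch of WebClient as a synchronous RestTemplate replacement
String greeting = WebClient.builder().build()
        .get()
        .uri("http://localhost:8080/rpc/greeter/sayHi")
        .retrieve()
        .bodyToMono(String.class)
        .block(); //wait for and return the single result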
138.2. Spring MVC
Spring MVC was originally implemented for writing Servlet-based applications. The term "MVC" stands for "Model, View, and Controller" — which is a standard framework pattern that separates concerns between:
-
data and access to data ("the model"),
-
representation of the data ("the view"), and
-
decisions of what actions to perform when ("the controller").
The separation of concerns provides a means to logically divide web application code along architecture boundaries. Built-in support for HTTP-based APIs has matured over time, and with the shift of UI web applications to JavaScript frameworks running in the browser, the focus has largely shifted toward API development.
Figure 45. Spring MVC Synchronous Model
As mentioned earlier, the programming model for Spring MVC is synchronous, blocking request/reply. Each active request is blocked in its own thread while waiting for the result of the current request to complete. This mode scales primarily by adding more threads — most of which are blocked performing some sort of I/O operation.
138.3. Spring WebFlux
Spring WebFlux is built using a stream-based, reactive design as a
part of Spring 5/Spring Boot 2. The
reactive programming model was adopted into the
java.util.concurrent package in Java 9, to go along with other
asynchronous programming constructs — like Future<T>. Some of the core concepts — like the annotated @RestController and method-associated annotations — still exist.
The most visible changes added include the optional functional controller
and the new, mandatory data input and return publisher types:
Figure 46. Spring WebFlux Reactive Model
For any single call, there is an immediate response and then a flow of events that start once the flow is activated by a subscriber. The events are published to and consumed from the new, mandatory Mono and Flux data input and return types. No overall request is completed using an end-to-end single thread. Work to process each event must occur in a non-blocking manner. This technique sacrifices raw throughput of a single request to achieve better performance when operating at greater concurrent scale.
138.4. Synchronous vs Asynchronous
To go a little further in contrasting the two approaches, the diagram below depicts a contrast between a call to two separate services using the synchronous versus asynchronous processing paradigms.
Figure 47. Synchronous
For synchronous, the call to service 2 cannot be initiated until the synchronous call/response from service 1 is completed.
Figure 48. Asynchronous
For asynchronous, the calls to services 1 and 2 are initiated sequentially but are carried out concurrently and completed independently.
There are different types of asynchronous processing.
Spring has long supported threads with @Async methods.
However, that style simply launches one or more additional threads that potentially also contain synchronous logic that will likely block at some point.
The reactive model is strictly non-blocking — relying on the backpressure of available data and the resources being available to consume it.
With the reactive programming paradigm comes strict rules of the road.
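For contrast, the following sketch shows the simple thread-based (@Async-style) concurrency described above, not the reactive, non-blocking model. The callService helpers are hypothetical stand-ins for remote calls:

import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    static String callService1() { return "one"; } //hypothetical remote call
    static String callService2() { return "two"; } //hypothetical remote call

    public static void main(String[] args) {
        //both calls start immediately and are carried out concurrently
        CompletableFuture<String> svc1 = CompletableFuture.supplyAsync(AsyncDemo::callService1);
        CompletableFuture<String> svc2 = CompletableFuture.supplyAsync(AsyncDemo::callService2);
        //join() blocks for each result; the total wait approximates the slower call, not the sum
        System.out.println(svc1.join() + " " + svc2.join());
    }
}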
138.5. Mixing Approaches
There is a certain amount of mixture of approaches allowed with Spring MVC and Spring WebFlux. A pure reactive design without a trace of Spring MVC can operate on the Reactor Netty engine, which is optimized for reactive processing. Any use of Web MVC will cause the application to be considered a Web MVC application, choose between Tomcat and Jetty for the web server, and operate any reactive endpoints in a compatibility mode. [27]
With that said — functionally, we can mix Spring Web MVC and Spring WebFlux together in an application using what is considered to be the Web MVC container.
-
Synchronous and reactive flows can operate side-by-side as independent paths through the code
-
Synchronous flows can make use of asynchronous flows. A primary example of that is using the
WebClient
reactive methods from a Spring MVC controller-initiated flow
However, we cannot have the callback of a reactive flow make synchronous requests that can indeterminately block, or the flow itself will become synchronous and tie up a critical reactor thread.
Spring MVC has non-optimized, reactive compatibility
Tomcat and Jetty are Spring MVC servlet engines. Reactor Netty
is a Spring WebFlux engine. Use of reactive streams within the Spring MVC
container is supported — but not optimized or recommended
beyond use of the WebClient in Spring MVC applications. Use of
synchronous flows is not supported by Spring WebFlux.
138.6. Choosing Approaches
Independent synchronous and reactive flows can be formed on a case-by-case basis and optimized if implemented on separate instances. [27] We can choose our ultimate solution(s) based on some of the recommendations below.
- Synchronous
-
-
existing synchronous API working fine — no need to change [28]
-
easier to learn - can use standard Java imperative programing constructs
-
easier to debug - everything in same flow is commonly in same thread
-
the number of concurrent users is a manageable (e.g., <100) number [29]
-
service is CPU-intensive [30]
-
codebase makes use of ThreadLocal
-
service makes use of synchronous data sources (e.g., JDBC, JPA)
-
- Reactive
-
-
need to serve a significant number (e.g., 100-300) of concurrent users [29]
-
requires knowledge of Java stream and functional programming APIs
-
provides little to no benefit if the services called are synchronous (i.e., the initial response returns when the overall request completes) (e.g., JDBC, JPA)
-
desire to work with Kotlin or Java 8 lambdas [28]
-
service is IO-intensive (e.g., database or external service calls) [30]
-
For many of the above reasons, we will start out our HTTP-based API coverage in this course using the synchronous approach.
139. Maven Dependencies
Most dependencies for Spring MVC are satisfied by changing spring-boot-starter to spring-boot-starter-web. Among other things, this brings in dependencies on spring-webmvc and spring-boot-starter-tomcat.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
The dependencies for Spring MVC and Spring WebFlux’s WebClient are satisfied by adding spring-boot-starter-webflux. It primarily brings in spring-webflux, the reactive libraries, and spring-boot-starter-reactor-netty. We won’t be using the Netty engine, but WebClient does make use of some Netty client libraries that are brought in when using the starter.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
140. Sample Application
To get started covering the basics of Web MVC, I am going to use a
very simple, remote procedure call (RPC)-oriented,
RMM level 1 example where the web client simply makes a call to the
service to say "hi". The example is located within the
rpc-greeter-svc
module.
|-- pom.xml
`-- src
    |-- main
    |   |-- java
    |   |   `-- info
    |   |       `-- ejava
    |   |           `-- examples
    |   |               `-- svc
    |   |                   `-- rpc
    |   |                       |-- GreeterApplication.java
    |   |                       `-- greeter
    |   |                           `-- controllers
    |   |                               `-- RpcGreeterController.java
    |   `-- resources
    |       `-- ...
    `-- test
        |-- java
        |   `-- info
        |       `-- ejava
        |           `-- examples
        |               `-- svc
        |                   `-- rpc
        |                       `-- greeter
        |                           |-- GreeterRestTemplateHttpNTest.java
        |                           `-- GreeterSyncWebClientHttpNTest.java
        `-- resources
            `-- ...
141. Annotated Controllers
Traditional Spring MVC APIs are primarily implemented around annotated
controller components. Spring has a hierarchy of annotations that
help identify the role of the component class. In this case the controller
class will commonly be annotated with @RestController
, which wraps
@Controller
, which wraps @Component
. This primarily means that the
class will get automatically picked up during the component scan if
it is in the application’s scope.
package info.ejava.examples.svc.httpapi.greeter.controllers;
import org.springframework.web.bind.annotation.RestController;
@RestController
// ==> wraps @Controller
// ==> wraps @Component
public class RpcGreeterController {
//...
}
141.1. Class Mappings
Class-level mappings can be used to establish a base definition to be applied
to all methods and extended by method-level annotation mappings. Knowing this,
we can
define the base URI path using a
@RequestMapping
annotation on the controller class and all methods of this
class will either inherit or extend that URI path.
In this particular case, our class-level annotation is defining a base URI path of /rpc/greeter.
...
import org.springframework.web.bind.annotation.RequestMapping;
@RestController
@RequestMapping("rpc/greeter") (1)
public class RpcGreeterController {
...
1 | @RequestMapping.path="rpc/greeter" at class level establishes base URI path for all hosted methods |
Annotations can have aliases and defaults
We can use either the path or value attribute of @RequestMapping — they are aliases for one another.
Annotating the class can help keep from repeating common definitions
Annotations like @RequestMapping, applied at the class level, establish a base definition for all methods of the class.
141.2. Method Request Mappings
There are two initial aspects to map to our method in our first simple example: URI and HTTP method.
GET /rpc/greeter/sayHi
-
URI - we already defined a base URI path of /rpc/greeter at the class level — we now need to extend that to form the final URI of /rpc/greeter/sayHi
-
HTTP method - this is specific to each class method — so we need to explicitly declare GET (one of the standard RequestMethod enums) on the class method
...
/**
* This is an example of a method as simple as it gets
* @return hi
*/
@RequestMapping(path="sayHi", (1)
method=RequestMethod.GET) (2)
public String sayHi() {
return "hi";
}
1 | @RequestMapping.path at the method level appends sayHi to the base URI |
2 | @RequestMapping.method=GET registers this method to accept HTTP GET calls to
the URI /rpc/greeter/sayHi
@GetMapping is an alias for @RequestMapping(method=GET)
Spring MVC also defines a @GetMapping composed annotation that can be used in place of @RequestMapping(method=RequestMethod.GET).
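For example, the earlier mapping could equivalently be written as:

import org.springframework.web.bind.annotation.GetMapping;

@GetMapping("sayHi") //equivalent to @RequestMapping(path="sayHi", method=RequestMethod.GET)
public String sayHi() {
    return "hi";
}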
141.3. Default Method Response Mappings
A few of the prominent response mappings can be determined automatically by the container in simplistic cases:
- response body
-
The response body is automatically set to the marshalled value returned by the endpoint method. In this case it is a literal String mapping.
- status code
-
The container will return the following default status codes
-
200/OK - if we return a non-null value
-
404/NOT_FOUND - if we return a null value
-
500/INTERNAL_SERVER_ERROR - if we throw an exception
-
- Content-Type header
-
The container sensibly mapped our returned String to the
text/plain
Content-Type.
< HTTP/1.1 200 (1)
< Content-Type: text/plain;charset=UTF-8 (2)
< Content-Length: 2
...
hi (3)
1 | non-null, no exception return mapped to HTTP status 200 |
2 | non-null java.lang.String mapped to text/plain content type |
3 | value returned by endpoint method |
141.4. Executing Sample Endpoint
Once we start our application and enter the following in the browser, we get the expected string "hi" returned.
http://localhost:8080/rpc/greeter/sayHi
hi
If you have access to curl
or another HTTP test tool, you will likely see
the following additional detail.
$ curl -v http://localhost:8080/rpc/greeter/sayHi
...
> GET /rpc/greeter/sayHi HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200
< Content-Type: text/plain;charset=UTF-8
< Content-Length: 2
...
hi
142. RestTemplate Client
The primary point of making a callable HTTP endpoint is the ability
to call that endpoint from another application. With a functional
endpoint ready to go, we are ready to create a Java client and will do so
within a JUnit test using Spring MVC’s
RestTemplate
class in the simplest way possible.
Please note that most of these steps are true for any Java HTTP client
we might use. Only the steps directly related to RestTemplate
are
specific to that topic.
142.1. JUnit Integration Test Setup
We start our example by creating an integration unit test. That means we will be using the Spring context and will do so using @SpringBootTest
annotation with two key properties:
-
classes - reference
@Component
and/or@Configuration
class(es) to define which components will be in our Spring context (default is to look for@SpringBootConfiguration
, which is wrapped by@SpringBootApplication
). -
webEnvironment - to define this as a web-oriented test and whether to have a fixed (e.g., 8080), random, or none for a port number. The random port number will be injected using the
@LocalServerPort
annotation. The default value is MOCK — for Mock test client libraries able to bypass networking.
package info.ejava.examples.svc.rpc.greeter;
import info.ejava.examples.svc.rpc.GreeterApplication;
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.server.LocalServerPort;
@SpringBootTest(classes = GreeterApplication.class, (1)
webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT) (2)
@Tag("springboot") @Tag("greeter")
@Slf4j
public class GreeterRestTemplateHttpNTest {
@LocalServerPort (3)
private int port;
1 | using the application to define the components for the Spring context |
2 | the application will be started with a random HTTP port# |
3 | the random server port# will be injected into port annotated with @LocalServerPort |
LocalServerPort Injection Alternatives
Inject as Test Attribute — as you saw earlier, we can have it injected as an attribute of the test case class. This works well if many of the test methods need the value.
Inject into Test Lifecycle Methods — a close alternative would be to inject the value into a test lifecycle method (e.g., a @BeforeEach parameter).
Inject into Bean Factory using @Lazy — we could also move the injection into a @Lazy @Bean factory method so the bean is not instantiated before the random port value is available.
|
142.2. Form Endpoint URL
Next we will form the full URL for the target endpoint. We can take the parts we know and merge that with the injected server port number to get a full URL.
@LocalServerPort
private int port;
@Test
public void say_hi() {
//given - a service available at a URL and client access
String url = String.format("http://localhost:%d/rpc/greeter/sayHi", port); (1)
...
1 | full URL to the example endpoint |
142.3. Obtain RestTemplate
With a URL in hand, we are ready to make the call. We will do that using the synchronous RestTemplate from the Spring MVC library.
RestTemplate is a thread-safe class that can be constructed with a default constructor for the simple case — or through a builder in more complex cases and injected to take advantage of separation of concerns.
RestTemplate restTemplate = new RestTemplate();
142.4. Invoke HTTP Call
There are dozens of potential calls we can make with RestTemplate
.
We will learn many more, but in this case we are using the simplest — getForObject() — which issues a GET and returns the response body unmarshalled to the requested type.
Example Invoke HTTP Call
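The following is a minimal sketch of that call, assuming the url formed in the previous step and the greeting variable evaluated in the step that follows.
//when - calling the service
String greeting = restTemplate.getForObject(url, String.class); //GET with body returned as String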
Note that a successful return from getForObject()
will only occur if the response from the server is a 2xx/successful response.
Otherwise an exception of one of the following types will be thrown:
-
RestClientException - error occurred communicating with the server
-
RestClientResponseException - error response received from the server
-
HttpStatusCodeException - HTTP response received and HTTP status known
-
HttpServerErrorException - HTTP server (5xx) errors
-
HttpClientErrorException - HTTP client (4xx) errors
-
BadRequest, NotFound, UnprocessableEntity, …
142.5. Evaluate Response
At this point we have made our request and have received our reply and can evaluate the reply against what was expected.
//then - we get a greeting response
then(greeting).isEqualTo("hi");
143. WebClient Client
The Spring 5 documentation states the
RestTemplate
is in "maintenance mode" and that we should switchover to using the Spring WebFlux
WebClient
. Representatives from Pivotal have stated in various conference talks
that RestTemplate
will likely not go away anytime soon but would likely not get upgrades
to any new drivers.
In demonstrating WebClient
, there are a few aspects of our RestTemplate
example
that do not change and I do not need to repeat.
-
JUnit test setup — i.e., establishing the Spring context and random port#
-
Obtaining a URL
-
Evaluating returned response
The new aspects include
-
obtaining the
WebClient
instance -
invoking the HTTP endpoint and obtaining the result
143.1. Obtain WebClient
WebClient
is an interface and must be constructed through a builder.
A default builder can be obtained through a static method of the
WebClient
interface. WebClient
is also thread safe, is capable of
being configured in a number of ways, and its builder can be injected
to create individualized instances.
WebClient webClient = WebClient.builder().build();
143.2. Invoke HTTP Call
The methods for WebClient
are arranged in a builder type pattern where
each layer of call returns a type with a constrained set of methods that
are appropriate for where we are in the call tree.
The example below shows an example of issuing the same GET request and synchronously obtaining the String result.
Example Invoke HTTP Call
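The following is a minimal sketch, assuming the same url as in the RestTemplate example. It performs the equivalent GET and blocks for the String result.
//when - calling the service
String greeting = webClient.get()   //select GET method
        .uri(url)                   //target endpoint URL
        .retrieve()                 //request the response
        .bodyToMono(String.class)   //unmarshal body as a String
        .block();                   //synchronously wait for the reactive result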
The block()
call is the synchronous part that we would look to avoid in a
truly reactive thread. It is a type of subscriber that triggers the defined
flow to begin producing data. This block()
is blocking the current
(synchronous) thread — just like RestTemplate
. The portions of the call ahead
of block()
are performed in a reactive set of threads.
144. Implementing Parameters
There are three primary ways to map an HTTP call to method input parameters:
-
request body — annotated with @RequestBody that we will see in a POST
-
path parameter — annotated with @PathVariable
-
query parameter - annotated with @RequestParam
The latter two are part of the next example and expressed in the URI.
GET /rpc/greeter/say/hello (1) ?name=jim (2)
1 | URI path segments can be mapped to input method parameters |
2 | individual query values can be mapped to input method parameters |
-
we can have 0 to N path or query parameters
-
path parameters are part of the resource URI path and are commonly required when defined — but that is not a firm rule
-
query parameters are commonly the technique for optional arguments against the resource expressed in the URI path
144.1. Controller Parameter Handling
Parameters derived from the URI path require that the path be expressed
with {placeholder}
names within the string. That placeholder name will be
mapped to a specific method input parameter using the
@PathVariable
annotation. In the following example, we are mapping whatever
is in the position held by the {greeting}
placeholder — to the greeting
input variable.
Specific query parameters are mapped by their name in the URL to a specific
method input parameter using the
@RequestParam
annotation. In the following example, we are mapping
whatever is in the value position of name=
to the name
input variable.
@RequestMapping(path="say/{greeting}", (1)
method=RequestMethod.GET)
public String sayGreeting(
@PathVariable("greeting") String greeting, (1)
@RequestParam(value = "name", defaultValue = "you") String name) { (2)
return greeting + ", " + name;
}
1 | URI path placeholder {greeting} is being mapped to method input parameter String greeting |
2 | URI query parameter name is being mapped to method input parameter String name |
No direct relationship between placeholder/query names and method input parameter names
The path placeholder and query parameter names have no inherent correlation to the method input parameter names — the @PathVariable and @RequestParam mappings provide that correlation.
|
144.2. Client-side Parameter Handling
As mentioned above, the path and query parameters are expressed in the URL — which is
not impacted whether we use RestTemplate
or WebClient
.
http://localhost:8080/rpc/greeter/say/hello?name=jim
A way to build a URL through type-safe convenience methods is with the
UriComponentsBuilder class. An example follows.
Example Client Code Forming URL with Path and Query Params
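The following is a minimal sketch under those assumptions. The injected port is applied to the builder, the {greeting} path placeholder is expanded, and the name query parameter is appended.
import java.net.URI;
import org.springframework.web.util.UriComponentsBuilder;
...
URI url = UriComponentsBuilder
        .fromHttpUrl("http://localhost")     //scheme and host
        .port(port)                          //random port injected into the test
        .path("/rpc/greeter/say/{greeting}") //URI template with path placeholder
        .queryParam("name", "jim")           //query parameter
        .build("hello");                     //expands {greeting} to hello
//=> http://localhost:<port>/rpc/greeter/say/hello?name=jim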
145. Accessing HTTP Responses
The target of an HTTP response may be a specific marshalled object or successful status. However, it is common to want to have access to more detailed information. For example:
-
Success — was it a 201/CREATED or a 200/OK?
-
Failure — was it a 400/BAD_REQUEST, 404/NOT_FOUND, 422/UNPROCESSABLE_ENTITY, or 500/INTERNAL_SERVER_ERROR?
Spring can supply that additional information in a
ResponseEntity<T>
, supplying us with:
-
status code
-
response headers
-
response body — which will be unmarshalled to the specified type of
T
To obtain that object — we need to adjust our call to the client.
145.1. Obtaining ResponseEntity
The two client libraries offer additional calls to obtain the ResponseEntity
.
//when - asking for that greeting
ResponseEntity<String> response = restTemplate.getForEntity(url, String.class);
//when - asking for that greeting
ResponseEntity<String> response = webClient.get()
.uri(url)
.retrieve()
.toEntity(String.class)
.block();
145.2. ResponseEntity<T>
The ResponseEntity<T>
can provide us with more detail than just the response
object from the body. As you can see from the following evaluation block, the
client also has access to the status code and headers.
//then - response is successful with expected greeting
then(response.getStatusCode()).isEqualTo(HttpStatus.OK);
then(response.getHeaders().getFirst(HttpHeaders.CONTENT_TYPE)).startsWith("text/plain");
then(response.getBody()).isEqualTo("hello, jim");
146. Client Error Handling
As indicated earlier, something could fail in the call to the service and we do not get our expected response returned.
$ curl -v http://localhost:8080/rpc/greeter/boom ... < HTTP/1.1 400 < Content-Type: application/json < Transfer-Encoding: chunked < Date: Thu, 21 May 2020 19:37:42 GMT < Connection: close < {"timestamp":"2020-05-21T19:37:42.261+0000","status":400,"error":"Bad Request", "message":"Required String parameter 'value' is not present" (1) ...
1 | Spring MVC has default error handling that returns an application/json rendering of an error |
Although there are differences in their options — for the most part both
RestTemplate
and WebClient
will throw an exception if the status code
is not successful. Unfortunately — although very similar — their exceptions
are technically different and would need separate exception handling logic
if used together.
146.1. RestTemplate Response Exceptions
RestTemplate
is designed to always throw an exception when there is a non-successful
status code. Although we can tweak the specific exceptions thrown with filters,
we are eventually forced to throw something if we cannot return an object of the
requested type or a ResponseEntity<T>
carrying the requested type.
All default RestTemplate
exceptions thrown extend
RestClientException
— which is a RuntimeException
, so handling the exception
is not mandated by the Java language. The example below is catching a specific
BadRequest
exception (if thrown) and then handling the exception in a generic way.
import org.springframework.web.client.HttpClientErrorException;
...
//when - calling the service
HttpClientErrorException ex = catchThrowableOfType( (1)
()->restTemplate.getForEntity(url, String.class),
HttpClientErrorException.BadRequest.class);
1 | using AssertJ catchThrowableOfType() to catch the exception and
assert that it is of a specific type — if thrown |
catchThrowableOfType does not fail if no exception thrown
AssertJ catchThrowableOfType only fails if an exception of
the wrong type is thrown. It will return a null if no exception is thrown.
That allows for a "BDD style" of testing where the "when" processing
is separate from the "then" verifications.
|
146.2. WebClient Response Exceptions
WebClient
has two primary paths to invoke a request: retrieve()
and exchange()
.
retrieve()
works very similarly to RestTemplate.<method>ForEntity()
— where it returns what you ask or
throws an exception. exchange()
permits some analysis of the response — but ultimately
places you in a position that you need to throw an exception if you cannot return the
type requested or a ResponseEntity<T>
carrying the type requested.
All default WebClient
exceptions thrown extend
WebClientResponseException
— which is also a RuntimeException
, so it has that
in common with the exception handling of RestTemplate
. The example below is catching
a specific BadRequest
exception and then handling the exception in a generic way.
import org.springframework.web.reactive.function.client.WebClientResponseException;
...
//when - calling the service
WebClientResponseException.BadRequest ex = catchThrowableOfType(
() -> webClient.get().uri(url).retrieve().toEntity(String.class).block(),
WebClientResponseException.BadRequest.class);
146.3. RestTemplate and WebClient Exceptions
Once the code calling one of the two clients has the client-specific exception object, they have access to three key response values:
-
HTTP status code
-
HTTP response headers
-
HTTP body as string or byte array
The following is an example of handling an exception thrown by RestTemplate
.
HttpClientErrorException ex = ...
//then - we get a bad request
then(ex.getStatusCode()).isEqualTo(HttpStatus.BAD_REQUEST);
then(ex.getResponseHeaders().getFirst(HttpHeaders.CONTENT_TYPE))
.isEqualTo(MediaType.APPLICATION_JSON_VALUE);
log.info("{}", ex.getResponseBodyAsString());
The following is an example of handling an exception thrown by WebClient
.
WebClientResponseException.BadRequest ex = ...
//then - we get a bad request
then(ex.getStatusCode()).isEqualTo(HttpStatus.BAD_REQUEST);
then(ex.getHeaders().getFirst(HttpHeaders.CONTENT_TYPE)) (1)
.isEqualTo(MediaType.APPLICATION_JSON_VALUE);
log.info("{}", ex.getResponseBodyAsString());
1 | WebClient 's exception uses a different method name than RestTemplate
to retrieve the response headers |
147. Controller Responses
In our earlier example, our only response option from the service was a limited set of status codes derived by the container based on what was returned. The specific error demonstrated was generated by the Spring MVC container based on our mapping definition. It will be common for the controller method itself to need explicit control over the HTTP response returned — primarily to express response-specific
-
HTTP status code
-
HTTP headers
147.1. Controller Return ResponseEntity
The following service example performs some trivial error checking and:
-
responds with an explicit error if there is a problem with the input
-
responds with an explicit status and Content-Location header if successful
The service provides control over the entire response by returning a
ResponseEntity
containing the complete HTTP result versus just returning
the result value for the body. The ResponseEntity can express status code,
headers, and the returned entity.
import org.springframework.web.servlet.support.ServletUriComponentsBuilder;
...
@RequestMapping(path="boys",
method=RequestMethod.GET)
public ResponseEntity<String> createBoy(@RequestParam("name") String name) { (1)
try {
someMethodThatMayThrowException(name);
String url = ServletUriComponentsBuilder.fromCurrentRequest() (2)
.build().toUriString();
return ResponseEntity.ok() (3)
.header(HttpHeaders.CONTENT_LOCATION, url)
.body(String.format("hello %s, how do you do?", name));
} catch (IllegalArgumentException ex) {
return ResponseEntity.unprocessableEntity() (4)
.body(ex.toString());
}
}
private void someMethodThatMayThrowException(String name) {
if ("blue".equalsIgnoreCase(name)) {
throw new IllegalArgumentException("boy named blue?");
}
}
1 | ResponseEntity returned used to express full HTTP response |
2 | ServletUriComponentsBuilder is a URI builder that can provide context of current call |
3 | service is able to return an explicit HTTP response with appropriate success details |
4 | service is able to return an explicit HTTP response with appropriate error details |
147.2. Example ResponseEntity Responses
In the response we see the explicitly assigned status code and Content-Location header.
curl -v http://localhost:8080/rpc/greeter/boys?name=jim ... < HTTP/1.1 200 (1) < Content-Location: http://localhost:8080/rpc/greeter/boys?name=jim (2) < Content-Type: text/plain;charset=UTF-8 < Content-Length: 25 ... hello jim, how do you do?
1 | HTTP status code explicitly supplied by service |
2 | Content-Location header explicitly supplied by service |
For the error condition, we see the explicit status code and error payload assigned.
$ curl -v http://localhost:8080/rpc/greeter/boys?name=blue ... < HTTP/1.1 422 (1) < Content-Type: text/plain;charset=UTF-8 < Content-Length: 15 ... boy named blue?
1 | HTTP status code explicitly supplied by service |
147.3. Controller Exception Handler
We can make a small but substantial step at simplifying the controller method by making sure the exception thrown is fully descriptive and moving the exception handling to either a separate, annotated method of the controller or globally to be used by all controllers (shown later).
The following example uses @ExceptionHandler
annotation to register a handler
for when controller methods happen to throw the IllegalArgumentException. The handler
has the ability to return an explicit ResponseEntity with the error details.
import org.springframework.web.bind.annotation.ExceptionHandler;
...
@ExceptionHandler(IllegalArgumentException.class) (1)
public ResponseEntity<String> handle(IllegalArgumentException ex) {
return ResponseEntity.unprocessableEntity() (2)
.body(ex.getMessage());
}
1 | ExceptionHandler is registered to handle all IllegalArgument exceptions
thrown by controller method (or anything it calls) |
2 | handler builds a ResponseEntity with the details of the error |
Create custom exceptions to address specific errors
Create custom exceptions to the point that the handler has the information
and context it needs to return a valuable response.
|
147.4. Simplified Controller Using ExceptionHandler
With all exceptions addressed by ExceptionHandlers
, we can free our controller
methods of tedious, repetitive conditional error reporting logic and still
return an explicit HTTP response.
@RequestMapping(path="boys/throws",
method=RequestMethod.GET)
public ResponseEntity<String> createBoyThrows(@RequestParam("name") String name) {
someMethodThatMayThrowException(name); (1)
String url = ServletUriComponentsBuilder.fromCurrentRequest()
.replacePath("/rpc/greeter/boys") (2)
.build().toUriString();
return ResponseEntity.ok()
.header(HttpHeaders.CONTENT_LOCATION, url)
.body(String.format("hello %s, how do you do?", name));
}
1 | Controller method is free from dealing with exception logic |
2 | replacing path in order to match sibling implementation response |
Note the new method endpoint with the exception handler returns the same, explicit HTTP response as the earlier example.
curl -v http://localhost:8080/rpc/greeter/boys/throws?name=blue ... < HTTP/1.1 422 < Content-Type: text/plain;charset=UTF-8 < Content-Length: 15 ... boy named blue?
148. Summary
In this module we:
-
identified two primary paradigms (synchronous and reactive) and web frameworks (Spring MVC and Spring WebFlux) for implementing web processing and communication
-
implemented an HTTP endpoint for a URI and method using Spring MVC annotated controller in a fully synchronous mode
-
demonstrated how to pass parameters between client and service using path and query parameters
-
demonstrated how to pass return results from service to client using http status code, response headers, and response body
-
demonstrated how to explicitly set HTTP responses in the service
-
demonstrated how to clean up service logic by using exception handlers
-
demonstrated how to invoke methods from a Spring MVC
RestTemplate
and Spring WebFluxWebClient
Controller/Service Interface
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
149. Introduction
Many times we may think of a service from the client’s perspective and term everything on the other side of the HTTP connection to be "the service". That is OK from the client’s perspective, but in reality — in even a moderately-sized service — there are normally a few layers of classes playing a certain architectural role, and that front-line controller we have been working with should primarily be a "web facade" interfacing the business logic to the outside world.
In this lecture we are going to look more closely at how the overall endpoint breaks down into a set of "facade" and "business logic" pattern players and lay the groundwork for the "Data Transfer Object" (DTO) covered in the next lecture.
149.1. Goals
The student will learn to:
-
identify the Controller class as having the role of a facade
-
encapsulate business logic within a separate service class
-
establish some interface patterns between the two layers so that the web facade is as clean as possible
149.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
implement a service class to encapsulate business logic
-
turn
@RestController
class into a facade and delegate business logic details to an injected service class -
identify error reporting strategy options
-
identify exception design options
-
implement a set of condition-specific exceptions
-
implement a Spring
@RestControllerAdvice
class to offload exception handling and error reporting from the@RestController
150. Roles
In an N-tier, distributed architecture there is commonly a set of patterns to apply to our class design.
-
Business Logic - primary entry point for doing work. The business logic knows the why and when to do things. Within the overall service — this is the class (or set of classes) that make up the core service.
-
Data Transfer Object (DTO) - used to describe requested work to the business logic or results from the business logic. In small systems, the DTO may be the same as the business objects (BO) stored in the database — but the specific role that will be addressed here is communicating outside of the overall service.
-
Facade - this provides an adapter around the business logic that translates commands from various protocols and libraries — into core language commands.
I will cover DTOs in more detail in the next lecture — but relative to the client, facade, and business logic — know that all three work on the same type of data. The DTO data types pass thru the controller without a need for translation — other than what is required for communications.
Our focus in this lecture is still the controller and will now look at some controller/service interface strategies that will help develop a clean web facade in our controller classes.
151. Error Reporting
When an error occurs — whether it be client or internal server errors — we want to have access to useful information that can be used to correct or avoid the error in the future. For example, if a client asks for information on a particular account that cannot be found, it would save minutes to hours of debugging to know whether the client requested a valid account# or the bank’s account repository was not currently available.
We have one of two techniques to report error details: complex object result and thrown exception.
Design a way to allow low-level code to report the context of failures
The place where the error is detected is normally the place with the
most context detail known. Design a way to have the information
from the detection point propagated to the error handling.
|
151.1. Complex Object Result
For the complex object result approach, each service method returns a complex result object (similar in concept to ResponseEntity). If the business method is:
-
successful: the requested result is returned
-
unsuccessful: the returned result expresses the error
The returned method type is complex enough to carry both types of payloads.
Complex return objects require handling logic in caller
The complex result object requires the caller to have error handling logic
ready to triage and react to the various responses. Anything that is not immediately
handled may accidentally be forgotten.
|
151.2. Thrown Exception
For the thrown exception case, exceptions are declared to carry failure-specific error reporting. The business method only needs to declare the happy path response in the return of the method and optionally declare try/catch blocks for errors it can address.
Thrown exceptions give the caller the option to handle or delegate
The thrown exception technique gives the caller the option to construct a
try/catch block and immediately handle the error or to automatically let it
propagate to a caller that can address the issue.
|
Either technique will functionally work. However, returning the complex object versus exception will require manual triage logic on the receiving end. As long as we can create error-specific exceptions, we can create some cleaner handling options in the controller.
151.3. Exceptions
Going the exception route, we can start to consider:
-
what specific errors should our services report?
-
what information is reported?
-
timestamp
-
(descriptive? redacted?) error text
-
-
are there generalizations or specializations?
The HTTP organization of status codes is a good place to start thinking of error types and how to group them (i.e., it is used by the world’s largest information system — the WWW). HTTP defines two primary types of errors:
-
client-based
-
server-based
It could be convenient to group them into a single hierarchy — depending on how we defined the details of the exceptions.
From the start, we can easily guess that our service method(s) might fail because
-
NotFoundException: the target entity was not found
-
InvalidInputException: something wrong with the content of what was requested
-
BadRequestException: request was not understood or erroneously requested
-
InternalErrorException: infrastructure or something else internal went bad
We can also assume that we would need, at a minimum
-
a message - this would ideally include IDs that are specific to the context
-
cause exception - commonly something wrapped by a server error
151.4. Checked or Unchecked?
Going the exception route — the most significant impact to our codebase will be the choice of checked versus unchecked exceptions (i.e., RuntimeException).
-
Checked Exception - these exceptions inherit from java.lang.Exception and are required to be handled by a try/catch block or declared as rethrown by the calling method. It always starts off looking like a good practice, but can get quite tedious when building layers of methods.
-
RuntimeException - these exceptions inherit from java.lang.RuntimeException and not required to be handled by the calling method. This can be a convenient way to address exceptions "not dealt with here". However, it is always the caller’s option to catch any exception they can specifically address.
If we choose to make them different (i.e., ServerErrorException unchecked and ClientErrorException checked), we will have to create separate inheritance hierarchies (i.e., no common ServiceErrorException parent).
151.5. Candidate Client Exceptions
The following is a candidate implementation for client exceptions. I am going to go the seemingly easy route and make them unchecked/RuntimeExceptions — but keep them in a separate hierarchy from the server exceptions to allow an easy change. Complete examples can be located in the repository.
public abstract class ClientErrorException extends RuntimeException {
protected ClientErrorException(Throwable cause) {
super(cause);
}
protected ClientErrorException(String message, Object...args) {
super(String.format(message, args)); (1)
}
protected ClientErrorException(Throwable cause, String message, Object...args) {
super(String.format(message, args), cause);
}
public static class NotFoundException extends ClientErrorException {
public NotFoundException(String message, Object...args)
{ super(message, args); }
public NotFoundException(Throwable cause, String message, Object...args)
{ super(cause, message, args); }
}
public static class InvalidInputException extends ClientErrorException {
public InvalidInputException(String message, Object...args)
{ super(message, args); }
public InvalidInputException(Throwable cause, String message, Object...args)
{ super(cause, message, args); }
}
}
1 | encourage callers to add instance details to exception by supplying built-in, optional formatter |
The following is an example of how the caller can instantiate and throw the exception based on conditions detected in the request.
if (gesture==null) {
throw new ClientErrorException
.NotFoundException("gesture type[%s] not found", gestureType);
}
151.6. Service Errors
The following is a candidate implementation for server exceptions. These types of errors are commonly unchecked.
public abstract class ServerErrorException extends RuntimeException {
protected ServerErrorException(Throwable cause) {
super(cause);
}
protected ServerErrorException(String message, Object...args) {
super(String.format(message, args));
}
protected ServerErrorException(Throwable cause, String message, Object...args) {
super(String.format(message, args), cause);
}
public static class InternalErrorException extends ServerErrorException {
public InternalErrorException(String message, Object...args)
{ super(message, args); }
public InternalErrorException(Throwable cause, String message, Object...args)
{ super(cause, message, args); }
}
}
The following is an example of instantiating and throwing a server exception based on a caught exception.
try {
//...
} catch (RuntimeException ex) {
throw new InternalErrorException(ex, (1)
"unexpected error getting gesture[%s]", gestureType); (2)
}
1 | reporting source exception forward |
2 | encourage callers to add instance details to exception by supplying built-in, optional formatter |
152. Controller Exception Advice
We saw earlier where we could register an exception handler within the controller class and how that could clean up our controller methods of noisy error handling code. I want to now build on that concept and our new concrete service exceptions to define an external controller advice that will handle all registered exceptions.
The following is an example of a controller method that is void of error handling logic because of the external controller advice we will put in place.
@RestController
public class GesturesController {
...
@RequestMapping(path=GESTURE_PATH,
method=RequestMethod.GET,
produces = {MediaType.TEXT_PLAIN_VALUE})
public ResponseEntity<String> getGesture(
@PathVariable(name="gestureType") String gestureType,
@RequestParam(name="target", required=false) String target) {
//business method
String result = gestures.getGesture(gestureType, target); (1)
String location = ServletUriComponentsBuilder.fromCurrentRequest()
.build().toUriString();
return ResponseEntity
.status(HttpStatus.OK)
.header(HttpHeaders.CONTENT_LOCATION, location)
.body(result);
}
1 | handles only successful result — exceptions left to controller advice |
152.1. Service Method with Exception Logic
The following is a more complete example of the business method within the service class. Based on the result of the interaction with the data access tier — the business method determines the gesture does not exist and reports that error using an exception.
@Service
public class GesturesServiceImpl implements GesturesService {
@Override
public String getGesture(String gestureType, String target) {
String gesture = gestures.get(gestureType); //data access method
if (gesture==null) {
throw new ClientErrorException (1)
.NotFoundException("gesture type[%s] not found", gestureType);
} else {
String response = gesture + (target==null ? "" : ", " + target);
return response;
}
}
...
1 | service reporting details of error |
152.2. Controller Advice Class
The following is a controller advice class.
We annotate this with @RestControllerAdvice
to better describe its role and give us the option to create fully annotated handler methods.
My candidate controller advice class contains a helper method that programmatically builds a ResponseEntity.
The type-specific exception handler must translate the specific exception into a HTTP status code and body.
A more complete example — designed to be a base class to concrete @RestControllerAdvice
classes — can be found in the repository.
package info.ejava.examples.svc.httpapi.gestures.controllers;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RestControllerAdvice;
@RestControllerAdvice( (1)
// wraps ==> @ControllerAdvice
// wraps ==> @Component
basePackageClasses = GesturesController.class) (2)
public class ExceptionAdvice { (3)
protected ResponseEntity<String> buildResponse(HttpStatus status, (4)
String text) { (5)
return ResponseEntity
.status(status)
.body(text);
}
...
1 | @RestControllerAdvice denotes this class as a @Component that will handle thrown exceptions |
2 | optional annotations can be used to limit the scope of this advice to certain packages and controller classes |
3 | handled thrown exceptions will return the DTO type for this application — in this case just text/plain |
4 | type-specific exception handlers must map exception to an HTTP status code |
5 | type-specific exception handlers must produce error text |
Example assumes DTO type is a text/plain string
This example assumes the DTO type for errors is a plain text/plain string. A more robust
response type would be part of an example using complex DTO types.
|
152.3. Advice Exception Handlers
Below are the candidate type-specific exception handlers we can use to translate the context-specific information from the exception to a valuable HTTP response to the client.
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import static info.ejava.examples.svc.httpapi.gestures.svc.ClientErrorException.*;
import static info.ejava.examples.svc.httpapi.gestures.svc.ServerErrorException.*;
...
@ExceptionHandler(NotFoundException.class) (1)
public ResponseEntity<String> handle(NotFoundException ex) {
return buildResponse(HttpStatus.NOT_FOUND, ex.getMessage()); (2)
}
@ExceptionHandler(InvalidInputException.class)
public ResponseEntity<String> handle(InvalidInputException ex) {
return buildResponse(HttpStatus.UNPROCESSABLE_ENTITY, ex.getMessage());
}
@ExceptionHandler(InternalErrorException.class)
public ResponseEntity<String> handle(InternalErrorException ex) {
log.warn("{}", ex.getMessage(), ex); (3)
return buildResponse(HttpStatus.INTERNAL_SERVER_ERROR, ex.getMessage());
}
@ExceptionHandler(RuntimeException.class)
public ResponseEntity<String> handleRuntimeException(RuntimeException ex) {
log.warn("{}", ex.getMessage(), ex); (3)
String text = String.format(
"unexpected error executing request: %s", ex.toString());
return buildResponse(HttpStatus.INTERNAL_SERVER_ERROR, text);
}
1 | annotation maps the handler method to a thrown exception type |
2 | handler method receives exception and converts to a ResponseEntity to be returned |
3 | the unknown error exceptions are candidates for mandatory logging |
153. Summary
In this module we:
-
identified the
@RestController
class' role is a "facade" for a web interface -
encapsulated business logic in a
@Service
class -
identified data passing between clients, facades, and business logic is called a Data Transfer Object (DTO). The DTO was a string in this simple example, but will be expanded in the content lecture
-
identified how exceptions could help separate successful business logic results from error path handling
-
identified some design choices for our exceptions
-
identified how a controller advice class can be used to offload exception handling
API Data Formats
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
154. Introduction
Web content is shared using many standardized MIME Types. We will be addressing two of them here
-
XML
-
JSON
I will show manual approaches to marshalling/unmarshalling first. However, content is automatically marshalled/unmarshalled by the web client and container once everything is set up properly. Manual marshalling/unmarshalling approaches are mainly useful for determining provider settings and annotations, as well as for performing low-level debugging of the shape and content of the payloads outside of the server.
154.1. Goals
The student will learn to:
-
identify common/standard information exchange content types for web API communications
-
manually marshal and unmarshal Java types to and from a data stream of bytes for multiple content types
-
negotiate content type when communicating using web API
-
pass complex Data Transfer Objects to/from a web API using different content types
-
resolve data mapping issues
154.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
design a set of Data Transfer Objects (DTOs) to render information from and to the service
-
define Java class content type mappings to customize marshalling/unmarshalling
-
specify content types consumed and produced by a controller
-
specify content types supplied and accepted by a client
155. Pattern Data Transfer Object
There can be multiple views of the same conceptual data managed by a service. They can be the same physical implementation — but they serve different purposes that must be addressed. We will be focusing on the external client view (Data Transfer Object (DTO)) during this and other web tier lectures. I will specifically contrast the DTO with the internal implementation view (Business Object (BO)) right now to help us see the difference in the two roles.
155.1. DTO Pattern Problem Space
Figure 55. Clients and Service Sharing Implementation Data
- Problem
-
Issues can arise when service implementations are complex.
-
client may get data they do not need
-
client may get data they cannot handle
-
client may get data they are not authorized to use
-
client may get too much data to be useful (e.g., entire database serialized to client)
- Forces
-
The following issues are assumed to be true:
-
some clients are local and can share object references with business logic
-
handling specifics of remote clients outside of core scope of business logic
155.2. DTO Pattern Solution Space
Figure 56. DTO Represents Client View of Data
DTO/BO Mapping Location is a Design Choice
The design decision of which layer translates between DTOs of the API and BOs of the service is not always fixed. Since the DTO is an interface pattern and the Web API is one of many possible interface facades and clients of the service — the job of DTO/BO mapping may be done in the service tier instead. |
155.3. DTO Pattern Players
- Data Transfer Object
-
-
represents a subset of the state of the application at a point in time
-
not dependent on Business Objects or server-side technologies
-
doing so would require sending Business Objects to client
-
-
XML and JSON provide the "ultimate isolation" in DTO implementation/isolation
-
- Remote (Web) Facade
-
-
uses Business Logic and DTOs to perform core business logic
-
manages interface details with client
-
- Business Logic
-
-
performs core implementation duties that may include interaction with backend services and databases
-
- Business Object (Entity)
-
-
representation of data required to implement service
-
may have more server-side-specific logic when DTOs are present in the design
DTOs and BOs can be same class(es) in simple or short-lived services
DTOs and BOs can be the same class in small services. However, supporting
multiple versions of clients over longer service lifetimes may even cause
small services to split the two data models into separate implementations.
|
156. Sample DTO Class
The following is an example DTO class we will look to use to
represent client view of data in a simple "Quote Service". The QuoteDTO
class
can start off as a simple POJO and — depending on the binding (e.g., JSON
or XML) and binding library (e.g., Jackson, JSON-B, or JAXB) - we may have
to add external configuration and annotations to properly shape
our information exchange.
The class is a vanilla POJO with a default constructor, public getters and setters,
and other convenience methods — mostly implemented by Lombok. The quote contains
three different types of fields (int, String, and LocalDate). The date
field
is represented using java.time.LocalDate
.
package info.ejava.examples.svc.content.quotes.dto;
import lombok.*;
import java.time.LocalDate;
@NoArgsConstructor (1)
@AllArgsConstructor
@Data (2)
@Builder
@With
public class QuoteDTO {
private int id;
private String author;
private String text;
private LocalDate date; (3)
private String ignored; (4)
}
1 | default constructor |
2 | public setters and getters |
3 | using Java 8, java.time.LocalDate to represent generic day of year without timezone |
4 | example attribute we will configure to be ignored |
Lombok @Builder and @With
@Builder will create a new instance of the class using incrementally defined properties.
@With creates a copy of the object with a new value for one of its properties.
@Builder can be configured to create a copy builder (i.e., a copy with no property value change).
|
Lombok @Builder and Constructors
@Builder requires an all-args-ctor and will define a package-friendly one unless there is already a ctor defined.
Unmarshallers require a no-args-ctor and can be provided using @NoArgsConstructor .
The presence of the no-args-ctor turns off the required all-args-ctor for @Builder and can be re-enabled with @AllArgsConstructor .
|
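As a brief sketch of those Lombok features (the sample values are hypothetical):
QuoteDTO quote = QuoteDTO.builder()   //incrementally supply properties
        .author("Douglas Adams")
        .text("Don't Panic.")
        .date(LocalDate.of(1979, 10, 12))
        .build();
QuoteDTO assigned = quote.withId(42); //copy of quote with a new id value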
157. Time/Date Detour
While we are on the topic of exchanging data — we might as well address time-related data that can cause numerous mapping issues. Our issues are on multiple fronts.
-
what does our time-related property represent?
-
e.g., a point in time, a point in time in a specific timezone, a birth date, a daily wake-up time
-
-
what type do we use to represent our expression of time?
-
do we use legacy Date-based types that have a lot of support despite ambiguity issues?
-
do we use the newer
java.time
types that are more explicit in meaning but have not fully caught on everywhere?
-
-
how should we express time within the marshalled DTO?
-
how can we properly unmarshal the time expression into what we need?
-
how can we handle the alternative time wire expressions with minimal pain?
157.1. Pre Java 8 Time
Prior to Java 8, we primarily had the following time-related java.util
classes:
Date | represents a point in time without timezone or calendar information. The point is a Java long value that represents the number of milliseconds before or after 1970 UTC. This allows us to identify a millisecond between 292,269,055 BC and 292,278,994 AD when applied to the Gregorian calendar. |
Calendar | interprets a Date according to an assigned calendar (e.g., Gregorian Calendar) into years, months, hours, etc. Calendar can be associated with a specific timezone offset from UTC and assumes the Date is relative to that value. |
During the pre-Java 8 time period, there was also a time-based library called Joda that became popular at providing time expressions that more precisely identified what was being conveyed.
157.2. java.time
The ambiguity issues with java.util.Date
and the expression and popularity of
Joda caused it to be adopted into Java 8 (
JSR 310).
The following are a few of the key java.time
constructs added in Java 8.
Instant | represents a point in time at 00:00 offset from UTC. The point is a nanosecond
and improves on the precision of java.util.Date. |
OffsetDateTime | adds a timezone offset from UTC to the point in time. |
ZonedDateTime | adds timezone identity to OffsetDateTime — which can be used to determine the appropriate timezone offset (i.e., daylight savings time). |
LocalDate | a generic date, independent of timezone and time. |
LocalTime | a generic time of day, independent of timezone or specific date.
This allows us to express "I set my alarm for 6am" - without specifying the actual dates that is performed. |
LocalDateTime | a date and time but lacking a specific timezone offset from UTC. |
These are some of the main data classes I will be using in this course. Visit the
javadocs for java.time to see other constructs like Duration
, Period
, and others.
157.3. Date/Time Formatting
There are two primary format frameworks for formatting and parsing time-related fields in text fields like XML or JSON:
SimpleDateFormat | this legacy java.text class formats and parses java.util.Date objects. |
DateTimeFormatter | this newer java.time.format class formats and parses the java.time types and supplies pre-defined ISO formatters, like the ISO_LOCAL_DATE_TIME definition below. |
public static final DateTimeFormatter ISO_LOCAL_DATE_TIME;
static {
ISO_LOCAL_DATE_TIME = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.append(ISO_LOCAL_DATE)
.appendLiteral('T')
.append(ISO_LOCAL_TIME)
.toFormatter(ResolverStyle.STRICT, IsoChronology.INSTANCE);
}
This, wrapped with some optional and default value constructs to handle missing information makes for a pretty powerful time parsing and formatting tool.
157.4. Date/Time Exchange
There are a few time standards supported by Java date/time formatting frameworks:
ISO 8601 |
This standard is cited in many places but it is hard to track down an official example of each and every format — especially when it comes to 0 values and timezone offsets.
However, an example representing a ZonedDateTime in EST may look like the following: 2020-01-15T14:30:00-05:00[America/New_York] |
RFC 822/RFC 1123 |
These are lesser-followed standards for APIs and include constructs like an English word
abbreviation for day of week and month. The DateTimeFormatter constant for this group
is RFC_1123_DATE_TIME. |
My examples will work exclusively with the ISO 8601 formats. The following example leverages the Java expression of time formatting to allow for multiple offset expressions (Z
, +00
, +0000
, and +00:00
) on top of a standard LOCAL_DATE_TIME expression.
public static final DateTimeFormatter UNMARSHALLER
= new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.append(DateTimeFormatter.ISO_LOCAL_DATE)
.appendLiteral('T')
.append(DateTimeFormatter.ISO_LOCAL_TIME)
.parseLenient()
.optionalStart().appendOffset("+HH", "Z").optionalEnd()
.optionalStart().appendOffset("+HH:mm", "Z").optionalEnd()
.optionalStart().appendOffset("+HHmm", "Z").optionalEnd()
.optionalStart().appendLiteral('[').parseCaseSensitive()
.appendZoneRegionId()
.appendLiteral(']').optionalEnd()
.parseDefaulting(ChronoField.OFFSET_SECONDS,0)
.parseStrict()
.toFormatter();
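As a brief usage sketch, each of the following offset expressions parses successfully with this formatter; the last relies on the defaulted offset of 0.
import java.time.OffsetDateTime;
...
OffsetDateTime.parse("2020-01-15T14:30:00Z", UNMARSHALLER);
OffsetDateTime.parse("2020-01-15T14:30:00+00:00", UNMARSHALLER);
OffsetDateTime.parse("2020-01-15T14:30:00+0000", UNMARSHALLER);
OffsetDateTime.parse("2020-01-15T14:30:00", UNMARSHALLER); //offset defaulted to 0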
Use ISO_LOCAL_DATE_TIME Formatter by Default
Going through the details of building a custom formatter is useful to understand, but the pre-defined formatters based on ISO_LOCAL_DATE_TIME should be the default choice whenever they satisfy the requirements. |
158. Java Marshallers
I will be using four different data marshalling providers during this lecture:
Jackson JSON | the default JSON provider included within Spring and Spring Boot. It implements its own proprietary interface for mapping Java POJOs to JSON text. |
JSON-B | a relatively new Jakarta EE standard for JSON marshalling. The reference implementation is Yasson from the open source Glassfish project. It will be used to verify and demonstrate portability between the built-in Jackson JSON and other providers. |
Jackson XML | a tightly integrated sibling of Jackson JSON. This requires a few extra module dependencies but offers a very similar setup and annotation set as the JSON alternative. I will use Jackson XML as my primary XML provider during examples. |
JAXB | a well-seasoned XML marshalling framework that was the foundational requirement for early JavaEE servlet containers. I will use JAXB to verify and demonstrate portability between Jackson XML and other providers. |
Spring Boot comes with a Jackson JSON pre-wired with the web dependencies. It seamlessly
gets called from RestTemplate, WebClient and the RestController when application/json
or nothing has been selected. Jackson XML requires additional dependencies — but integrates just
as seamlessly with the client and server-side frameworks for application/xml
.
For those reasons — Jackson JSON and Jackson XML will be used as our core marshalling
frameworks. JSON-B and JAXB will just be used for portability testing.
159. JSON Content
JSON is the content type most preferred by Javascript UI frameworks and NoSQL databases. It has quickly overtaken XML as a preferred data exchange format.
{
"id" : 0,
"author" : "Hotblack Desiato",
"text" : "Parts of the inside of her head screamed at other parts of the inside of her head.",
"date" : "1981-05-15"
}
Much of the mapping can be accomplished using Java reflection. Provider-specific
annotations can be added to address individual issues. Let's take a look at how
both Jackson JSON and JSON-B can be used to map our QuoteDTO
POJO to the above
JSON content. The following is a trimmed down copy of the DTO class I showed you earlier.
What kind of things do we need to make that mapping?
@NoArgsConstructor
@AllArgsConstructor
@Data
public class QuoteDTO {
private int id;
private String author;
private String text;
private LocalDate date; (1)
private String ignored; (2)
}
1 | may need some LocalDate formatting |
2 | may need to mark as excluded |
159.1. Jackson JSON
For the simple cases, our DTO classes can be mapped to JSON with minimal effort using Jackson JSON. However, we potentially need to shape our document and can use Jackson annotations to customize. The following example shows using an annotation to eliminate a property from the JSON document.
Example Pre-Tweaked JSON Payload
|
Example QuoteDTO with Jackson Annotation(s)
|
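The following is a minimal sketch of that customization, using Jackson's @JsonIgnore annotation (also used in the Jackson XML example later) to exclude the ignored property.
import com.fasterxml.jackson.annotation.JsonIgnore;
...
public class QuoteDTO {
    private int id;
    private String author;
    private String text;
    private LocalDate date;
    @JsonIgnore //property excluded from the marshalled JSON document
    private String ignored;
}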
Date/Time Formatting Handled at ObjectMapper/Marshaller Level
The example annotation above only addressed the ignore property.
We will address date/time formatting at the ObjectMapper/marshaller level below.
|
159.1.1. Jackson JSON Initialization
Jackson JSON uses an ObjectMapper
class to go to/from POJO and JSON. We can configure
the mapper with options or configure a reusable builder to create mappers with
prototype options. Choosing the later approach will be useful once we move inside the
server.
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import org.springframework.http.converter.json.Jackson2ObjectMapperBuilder;
We have the ability to simply create a default ObjectMapper directly.
ObjectMapper mapper = new ObjectMapper();
However, when using Spring it is useful to use the Spring Jackson2ObjectMapperBuilder
class to set many of the data marshalling types for us.
import org.springframework.http.converter.json.Jackson2ObjectMapperBuilder;
...
ObjectMapper mapper = new Jackson2ObjectMapperBuilder()
.featuresToEnable(SerializationFeature.INDENT_OUTPUT) (1)
.featuresToDisable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS) (2)
//more later
.createXmlMapper(false) (3)
.build();
1 | optional pretty print indentation |
2 | option to use ISO-based strings versus binary values and arrays |
3 | same Spring builder creates both XML and JSON ObjectMappers |
Use Injection When Inside Container
When inside the container, have the Jackson2ObjectMapperBuilder injected (i.e., not locally-instantiated) in order to pick up external and property configurations/customizations.
|
By default, Jackson will marshal zone-based timestamps as a decimal number
(e.g., -6106031876.123456789
) and generic date/times as an array of values
(e.g., [ 1776, 7, 4, 8, 2, 4, 123456789 ]
and [ 1966, 1, 9 ]
). By disabling this serialization
feature, Jackson produces ISO-based strings for all types of timestamps and
generic date/times (e.g., 1776-07-04T08:02:04.123456789Z
and
2002-02-14
)
The following example from the class repository shows a builder customizer being registered as a @Bean
factory to be able to adjust Jackson defaults used by the server.
The returned lambda function is called with a builder each time someone injects a Jackson2ObjectMapperBuilder
— provided the Jackson auto-configuration has not been overridden.
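The following is a minimal sketch of such a customizer, assuming Spring Boot's Jackson2ObjectMapperBuilderCustomizer callback interface.
import com.fasterxml.jackson.databind.SerializationFeature;
import org.springframework.boot.autoconfigure.jackson.Jackson2ObjectMapperBuilderCustomizer;
import org.springframework.context.annotation.Bean;
...
@Bean
public Jackson2ObjectMapperBuilderCustomizer jacksonCustomizer() {
    //lambda is invoked with each Jackson2ObjectMapperBuilder being configured
    return builder -> builder
            .featuresToEnable(SerializationFeature.INDENT_OUTPUT)
            .featuresToDisable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
}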
159.1.2. Jackson JSON Marshalling/Unmarshalling
The mapper created from the builder can then be used to marshal the POJO to JSON.
private ObjectMapper mapper;
public <T> String marshal(T object) throws IOException {
StringWriter buffer = new StringWriter();
mapper.writeValue(buffer, object);
return buffer.toString();
}
The mapper can just as easy — unmarshal the JSON to a POJO instance.
public <T> T unmarshal(Class<T> type, String buffer) throws IOException {
T result = mapper.readValue(buffer, type);
return result;
}
A packaged set of marshal/unmarshal convenience routines have been packaged inside ejava-dto-util
.
159.1.3. Jackson JSON Maven Aspects
For modules containing only DTOs with Jackson annotations, only the direct dependency on jackson-annotations is necessary:
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
</dependency>
Modules that will be marshalling/unmarshalling JSON will need the core libraries that can be conveniently brought in through a dependency on one of the following two starters.
-
spring-boot-starter-web
-
spring-boot-starter-json
org.springframework.boot:spring-boot-starter-web:jar
+- org.springframework.boot:spring-boot-starter-json:jar
| +- com.fasterxml.jackson.core:jackson-databind:jar
| | +- com.fasterxml.jackson.core:jackson-annotations:jar
| | \- com.fasterxml.jackson.core:jackson-core
| +- com.fasterxml.jackson.datatype:jackson-datatype-jdk8:jar (1)
| +- com.fasterxml.jackson.datatype:jackson-datatype-jsr310:jar (1)
| \- com.fasterxml.jackson.module:jackson-module-parameter-names:jar
1 | defines mapping for java.time types |
Jackson has built-in ISO mappings for Date and java.time
Jackson has built-in mappings to ISO for java.util.Date and java.time
data types.
|
159.2. JSON-B
JSON-B (the standard) and
Yasson (the reference implementation of JSON-B) can pretty much
render a JSON view of our simple DTO class right
out of the box. Customizations can be applied using
JSON-B annotations. In the following example, the ignore
Java property
is being excluded from the JSON output.
Example Pre-Tweaked JSON-B Payload
|
Example QuoteDTO with JSON-B Annotation(s)
|
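The following is a minimal sketch of that exclusion, assuming the standard JSON-B @JsonbTransient annotation.
import javax.json.bind.annotation.JsonbTransient;
...
public class QuoteDTO {
    private int id;
    private String author;
    private String text;
    private LocalDate date;
    @JsonbTransient //property excluded from the JSON-B output
    private String ignored;
}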
159.2.1. JSON-B Initialization
JSON-B provides all mapping through a Jsonb
builder object
that can be configured up-front with various options.
import javax.json.bind.Jsonb;
import javax.json.bind.JsonbBuilder;
import javax.json.bind.JsonbConfig;
JsonbConfig config=new JsonbConfig()
.setProperty(JsonbConfig.FORMATTING, true); (1)
Jsonb builder = JsonbBuilder.create(config);
1 | adds pretty-printing features to payload |
159.2.2. JSON-B Marshalling/Unmarshalling
The following two examples show how JSON-B marshals and unmarshals the DTO POJO instances to/from JSON.
private Jsonb builder;
public <T> String marshal(T object) {
String buffer = builder.toJson(object);
return buffer;
}
public <T> T unmarshal(Class<T> type, String buffer) {
T result = (T) builder.fromJson(buffer, type);
return result;
}
159.2.3. JSON-B Maven Aspects
Modules defining only the DTO class require a dependency on the following API definition for the annotations.
<dependency>
<groupId>jakarta.json</groupId>
<artifactId>jakarta.json-api</artifactId>
</dependency>
Modules marshalling/unmarshalling JSON documents using JSON-B/Yasson implementation
require dependencies on binding-api
and a runtime dependency on yasson
implementation.
org.eclipse:yasson:jar
+- jakarta.json.bind:jakarta.json.bind-api:jar
+- jakarta.json:jakarta.json-api:jar
\- org.glassfish:jakarta.json:jar
160. XML Content
XML is preferred by many data exchange services that require rigor in their data definitions.
That does not mean that rigor is always required. The following two examples are
XML renderings of a QuoteDTO
.
The first example is a straight mapping of Java class/attribute to XML elements. The second example applies an XML namespace and attribute (for the id
property).
Namespaces become important when mixing similar data types from different sources. XML attributes are commonly used to host identity information. XML elements are commonly used for description information.
The sometimes arbitrary use of attributes over elements in XML leads to some confusion when trying to perform direct mappings between JSON and XML — since JSON has no concept of an attribute.
<QuoteDTO> (1)
<id>0</id> (2)
<author>Zaphod Beeblebrox</author>
<text>Nothing travels faster than the speed of light with the possible exception of bad news, which obeys its own special laws.</text>
<date>1927</date> (3)
<date>6</date>
<date>11</date>
<ignored>ignored</ignored> (4)
</QuoteDTO>
1 | root element name defaults to variant of class name |
2 | all properties default to @XmlElement mapping |
3 | java.time types are going to need some work |
4 | all properties are assumed to not be ignored |
<quote xmlns="urn:ejava.svc-controllers.quotes" id="0"> (1) (2) (3)
<author>Zaphod Beeblebrox</author>
<text>Nothing travels faster than the speed of light with the possible exception of bad news, which obeys its own special laws.</text>
<date>1927-06-11</date>
</quote> (4)
1 | quote is our targeted root element name |
2 | urn:ejava.svc-controllers.quotes is our targeted namespace |
3 | we want the id mapped as an attribute — not an element |
4 | we want certain properties from the DTO not to show up in the XML |
160.1. Jackson XML
Like Jackson JSON, Jackson XML will attempt to map a Java class solely on Java reflection and default mappings. However, to leverage key XML features like namespaces and attributes, we need to add a few annotations. The partial example below shows our POJO with Lombok and other mappings excluded for simplicity.
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlRootElement;
...
@JacksonXmlRootElement(localName = "quote", (1)
namespace = "urn:ejava.svc-controllers.quotes") (2)
public class QuoteDTO {
@JacksonXmlProperty(isAttribute = true) (3)
private int id;
private String author;
private String text;
private LocalDate date;
@JsonIgnore (4)
private String ignored;
}
1 | defines the element name when rendered as the root element |
2 | defines namespace for type |
3 | maps id property to an XML attribute — default is XML element |
4 | reuses Jackson JSON general purpose annotations |
160.1.1. Jackson XML Initialization
Jackson XML initialization is nearly identical to its JSON sibling as long as we want them to have the same options. In all of our examples I will be turning off binary dates expression in favor of ISO-based strings.
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;
import org.springframework.http.converter.json.Jackson2ObjectMapperBuilder;
XmlMapper mapper = new Jackson2ObjectMapperBuilder()
.featuresToEnable(SerializationFeature.INDENT_OUTPUT) (1)
.featuresToDisable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS) (2)
//more later
.createXmlMapper(true) (3)
.build();
1 | pretty print output |
2 | use ISO-based strings for time-based fields versus binary numbers and arrays |
3 | XmlMapper extends ObjectMapper |
160.1.2. Jackson XML Marshalling/Unmarshalling
public <T> String marshal(T object) throws IOException {
StringWriter buffer = new StringWriter();
mapper.writeValue(buffer, object);
return buffer.toString();
}
public <T> T unmarshal(Class<T> type, String buffer) throws IOException {
T result = mapper.readValue(buffer, type);
return result;
}
160.1.3. Jackson XML Maven Aspects
Jackson XML is not broken out into separate libraries as much as its JSON sibling. Jackson XML annotations are housed in the same library as the marshalling/unmarshalling code.
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-xml</artifactId>
</dependency>
160.2. JAXB
JAXB is more particular about the definition of the Java class to be mapped.
JAXB requires that the root element of a document be defined with
an @XmlRootElement
annotation with an optional name and namespace
defined.
com.sun.istack.SAXException2: unable to marshal type
"info.ejava.examples.svc.content.quotes.dto.QuoteDTO"
as an element because it is missing an @XmlRootElement annotation]
...
import javax.xml.bind.annotation.XmlRootElement;
...
@XmlRootElement(name = "quote", namespace = "urn:ejava.svc-controllers.quotes")
public class QuoteDTO { (1) (2)
1 | default name is quoteDTO if not supplied |
2 | default to no namespace if not supplied |
JAXB has no default definitions for java.time
classes
and must be handled with custom adapter code.
INFO: No default constructor found on class java.time.LocalDate java.lang.NoSuchMethodException: java.time.LocalDate.<init>()
This has always been an issue
for Date formatting even before java.time
and can easily be solved with
a custom adapter class that converts between a String and the unsupported
type. We can locate
packaged solutions on the web, but it is helpful to get comfortable with
the process on our own.
We first create an adapter class that extends XmlAdapter<ValueType, BoundType>
— where ValueType is a type known to JAXB and BoundType is the type we are mapping.
We can use DateTimeFormatter.ISO_LOCAL_DATE to marshal and unmarshal the LocalDate
to/from text.
import javax.xml.bind.annotation.adapters.XmlAdapter;
...
public static class LocalDateJaxbAdapter extends XmlAdapter<String, LocalDate> {
@Override
public LocalDate unmarshal(String text) {
return LocalDate.parse(text, DateTimeFormatter.ISO_LOCAL_DATE);
}
@Override
public String marshal(LocalDate timestamp) {
return DateTimeFormatter.ISO_LOCAL_DATE.format(timestamp);
}
}
We next annotate the Java property with @XmlJavaTypeAdapter
, naming our
adapter class.
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
...
@XmlAccessorType(XmlAccessType.FIELD) (2)
public class QuoteDTO {
...
@XmlJavaTypeAdapter(LocalDateJaxbAdapter.class) (1)
private LocalDate date;
1 | custom adapter required for unsupported types |
2 | must manually set access to FIELD when annotating attributes |
The alternative is to use a package-level descriptor and have the adapter automatically applied to all properties of that type.
//package-info.java
@XmlSchema(namespace = "urn:ejava.svc-controllers.quotes")
@XmlJavaTypeAdapter(type= LocalDate.class, value=JaxbTimeAdapters.LocalDateJaxbAdapter.class)
package info.ejava.examples.svc.content.quotes.dto;
import javax.xml.bind.annotation.XmlSchema;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
import java.time.LocalDate;
160.2.1. JAXB Initialization
There is no sharable, up-front initialization for JAXB. Configuration must be done on the individual, non-sharable Marshaller and Unmarshaller objects created from a JAXBContext. However,
JAXB does have a package-wide annotation that the other frameworks
do not. The following example shows a package-info.java
file that
contains annotations to be applied to every class in the same Java package.
//package-info.java
@XmlSchema(namespace = "urn:ejava.svc-controllers.quotes")
package info.ejava.examples.svc.content.quotes.dto;
import javax.xml.bind.annotation.XmlSchema;
160.2.2. JAXB Marshalling/Unmarshalling
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
public <T> String marshal(T object) throws JAXBException {
JAXBContext jbx = JAXBContext.newInstance(object.getClass());
Marshaller marshaller = jbx.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true); (1)
StringWriter buffer = new StringWriter();
marshaller.marshal(object, buffer);
return buffer.toString();
}
1 | adds newline and indentation formatting |
public <T> T unmarshal(Class<T> type, String buffer) throws JAXBException {
JAXBContext jbx = JAXBContext.newInstance(type);
Unmarshaller unmarshaller = jbx.createUnmarshaller();
ByteArrayInputStream bis = new ByteArrayInputStream(buffer.getBytes());
T result = (T) unmarshaller.unmarshal(bis);
return result;
}
160.2.3. JAXB Maven Aspects
Modules that define DTO classes only will require a direct dependency on the
jaxb-api
library for annotations and interfaces.
<dependency>
<groupId>javax.xml.bind</groupId>
<artifactId>jaxb-api</artifactId>
</dependency>
Modules marshalling/unmarshalling DTO classes using JAXB will require a dependency
on the following two artifacts. jaxb-core contains visible utilities used to map between Java and XML Schema. jaxb-impl is geared more towards runtime. Since both are needed, I am not sure why one does not declare a dependency on the other to make that automatic.
<dependency>
<groupId>com.sun.xml.bind</groupId>
<artifactId>jaxb-core</artifactId>
</dependency>
<dependency>
<groupId>com.sun.xml.bind</groupId>
<artifactId>jaxb-impl</artifactId>
</dependency>
161. Configure Server-side Jackson
161.1. Dependencies
Jackson JSON will already be on the classpath when using spring-boot-starter-web.
To also support XML, make sure the server has an additional jackson-dataformat-xml
dependency.
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-xml</artifactId>
</dependency>
161.2. Configure ObjectMapper
Both the XML and JSON mappers are instances of ObjectMapper. To configure their use in our application, we can go one step higher and customize the builder that Jackson uses as its base. That is all we need as long as we can configure both mappers identically.
Jackson’s AutoConfiguration provides a layered approach to customizing the marshaller. One can configure using:
-
spring.jackson properties (e.g., spring.jackson.serialization.*)
-
Jackson2ObjectMapperBuilderCustomizer — a functional interface that will be passed a builder pre-configured using properties
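For the property-based layer, a minimal application.properties sketch equivalent to part of the customizer shown below might be:
# enable pretty-printing and ISO-based date strings through properties alone
spring.jackson.serialization.indent-output=true
spring.jackson.serialization.write-dates-as-timestamps=false
The customizer below demonstrates the second, programmatic layer.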
...
import com.fasterxml.jackson.databind.SerializationFeature;
import org.springframework.http.converter.json.Jackson2ObjectMapperBuilder;
@SpringBootApplication
public class QuotesApplication {
public static void main(String...args) {
SpringApplication.run(QuotesApplication.class, args);
}
@Bean
public Jackson2ObjectMapperBuilderCustomizer jacksonMapper() { (1)
return (builder) -> { builder
.featuresToEnable(SerializationFeature.INDENT_OUTPUT)
.featuresToDisable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS)
.dateFormat(new ISODateFormat());
};
}
}
1 | returns a lambda function that is called with a Jackson2ObjectMapperBuilder to customize.
Jackson uses this same definition for both XML and JSON mappers |
161.3. Controller Properties
We can register what MediaTypes each method supports by adding a set of consumes
and produces properties to the @RequestMapping
annotation in the controller.
This is an array of MediaType values (e.g., ["application/json", "application/xml"]
)
that the endpoint should either accept or provide in a response.
@RequestMapping(path= QUOTES_PATH,
method= RequestMethod.POST,
consumes = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE},
produces = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
public ResponseEntity<QuoteDTO> createQuote(@RequestBody QuoteDTO quote) {
QuoteDTO result = quotesService.createQuote(quote);
URI uri = ServletUriComponentsBuilder.fromCurrentRequestUri()
.replacePath(QUOTE_PATH)
.build(result.getId());
ResponseEntity<QuoteDTO> response = ResponseEntity.created(uri)
.body(result);
return response;
}
The Content-Type
request header is matched with one of the types listed in consumes
.
This is a single value. The following example uses an application/json Content-Type, and the server uses our Jackson JSON configuration and DTO mappings to turn the JSON string into a POJO.
POST http://localhost:64702/api/quotes
sent: [Accept:"application/xml", Content-Type:"application/json", Content-Length:"108"]
{
"id" : 0,
"author" : "Tricia McMillan",
"text" : "Earth: Mostly Harmless",
"date" : "1991-05-11"
}
If there is a match between Content-Type and consumes, the provider will map the body contents to the input type using the mappings we reviewed earlier. If we need more insight into the request headers — we can change the method mapping to accept a RequestEntity and obtain the headers from that object.
@RequestMapping(path= QUOTES_PATH,
method= RequestMethod.POST,
consumes={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE},
produces={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
// public ResponseEntity<QuoteDTO> createQuote(@RequestBody QuoteDTO quote) {
public ResponseEntity<QuoteDTO> createQuote(RequestEntity<QuoteDTO> request) {(1)
QuoteDTO quote = request.getBody();
log.info("CONTENT_TYPE={}", request.getHeaders().get(HttpHeaders.CONTENT_TYPE));
log.info("ACCEPT={}", request.getHeaders().get(HttpHeaders.ACCEPT));
QuoteDTO result = quotesService.createQuote(quote);
1 | injecting raw input RequestEntity versus input payload to inspect header properties |
The log statements at the start of the method output the following two lines of request header information.
QuotesController#createQuote:38 CONTENT_TYPE=[application/json;charset=UTF-8]
QuotesController#createQuote:39 ACCEPT=[application/xml]
Whatever the service returns (success or error), the Accept
request header is
matched with one of the types listed in produces. This is a list of one or more values, in priority order. In the following example, the client used an
application/xml
Accept header and the server converted it to XML using our Jackson XML
configuration and mappings to turn the POJO into an XML response.
sent: [Accept:"application/xml", Content-Type:"application/json", Content-Length:"108"]
rcvd: [Location:"http://localhost:64702/api/quotes/1", Content-Type:"application/xml", Transfer-Encoding:"chunked", Date:"Fri, 05 Jun 2020 19:44:25 GMT", Keep-Alive:"timeout=60", Connection:"keep-alive"]
<quote xmlns="urn:ejava.svc-controllers.quotes" id="1">
<author xmlns="">Tricia McMillan</author>
<text xmlns="">Earth: Mostly Harmless</text>
<date xmlns="">1991-05-11</date>
</quote>
If there is no match between Content-Type and consumes, a 415
/Unsupported Media Type
error status is returned.
If there is no match between Accept and produces, a 406
/Not Acceptable
error status is returned. Most of this
content negotiation and data marshalling/unmarshalling is hidden from the controller.
162. Client Marshall Request Content
If we care about the exact format our POJO is marshalled to or the format the service returns,
we can no longer pass a naked POJO to the client library. We must wrap the POJO in a
RequestEntity
and supply a set of headers with format specifications. The following shows
an example using RestTemplate.
RequestEntity<QuoteDTO> request = RequestEntity.post(quotesUrl) (1)
.contentType(contentType) (2)
.accept(acceptType) (3)
.body(validQuote);
ResponseEntity<QuoteDTO> response = restTemplate.exchange(request, QuoteDTO.class);
1 | create a POST request with client headers |
2 | express desired Content-Type for the request |
3 | express Accept types for the response |
The following example shows the request and reply information exchange for an application/json
Content-Type and Accept header.
POST http://localhost:49252/api/quotes, returned CREATED/201
sent: [Accept:"application/json", Content-Type:"application/json", Content-Length:"146"]
{
"id" : 0,
"author" : "Zarquon",
"text" : "Whatever your tastes, Magrathea can cater for you. We are not proud.",
"date" : "1920-08-17"
}
rcvd: [Location:"http://localhost:49252/api/quotes/1", Content-Type:"application/json", Transfer-Encoding:"chunked", Date:"Fri, 05 Jun 2020 20:17:35 GMT", Keep-Alive:"timeout=60", Connection:"keep-alive"]
{
"id" : 1,
"author" : "Zarquon",
"text" : "Whatever your tastes, Magrathea can cater for you. We are not proud.",
"date" : "1920-08-17"
}
The following example shows the request and reply information exchange for an application/xml
Content-Type and Accept header.
POST http://localhost:49252/api/quotes, returned CREATED/201
sent: [Accept:"application/xml", Content-Type:"application/xml", Content-Length:"290"]
<quote xmlns="urn:ejava.svc-controllers.quotes" id="0">
<author xmlns="">Humma Kavula</author>
<text xmlns="">In the beginning, the Universe was created. This has made a lot of people very angry and been widely regarded as a bad move.</text>
<date xmlns="">1942-03-03</date>
</quote>
rcvd: [Location:"http://localhost:49252/api/quotes/4", Content-Type:"application/xml", Transfer-Encoding:"chunked", Date:"Fri, 05 Jun 2020 20:17:35 GMT", Keep-Alive:"timeout=60", Connection:"keep-alive"]
<quote xmlns="urn:ejava.svc-controllers.quotes" id="4">
<author xmlns="">Humma Kavula</author>
<text xmlns="">In the beginning, the Universe was created. This has made a lot of people very angry and been widely regarded as a bad move.</text>
<date xmlns="">1942-03-03</date>
</quote>
163. Client Filters
The runtime examples above showed HTTP traffic and marshalled payloads. That can be very convenient for debugging purposes. There are two primary ways of examining marshalled payloads.
- Switch accepted Java type to String
-
Both our client and controller declare they expect a QuoteDTO.class to be the response. That causes the provider to map the String into the desired type. If the client or controller declared they expected a String.class, they would receive the raw payload to debug or to later manually parse using direct access to the unmarshalling code (see the sketch after this list). - Add a filter
-
Both RestTemplate and WebClient accept filters in the request and response flow. RestTemplate is easier to use here and more capable because of its synchronous behavior. We can register a filter to get called with the full request and response in plain view — with access to the body — using RestTemplate. WebClient, with its asynchronous design, has separate request and response flows with no easy access to the payload.
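A minimal sketch of the String.class approach with RestTemplate follows; the quoteUrl and mapper references carry over from earlier examples and are assumptions here:
//requests the raw payload for inspection before manually unmarshalling it
public QuoteDTO getQuoteRaw(RestTemplate restTemplate, URI quoteUrl,
                            ObjectMapper mapper) throws IOException {
    RequestEntity<Void> request = RequestEntity.get(quoteUrl)
            .accept(MediaType.APPLICATION_JSON)
            .build();
    //declaring String.class returns the unparsed body instead of a mapped DTO
    ResponseEntity<String> response = restTemplate.exchange(request, String.class);
    String json = response.getBody(); //log or inspect the exact payload here
    return mapper.readValue(json, QuoteDTO.class); //manual unmarshal when ready
}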
163.1. RestTemplate
The following code provides an example of a RestTemplate filter that shows the steps taken
to access the request and response payloads. Note that the body of a request or response is commonly restricted to a single read. The ability to read the body multiple times will be taken care of within the @Bean factory method registering this filter.
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;
...
public class RestTemplateLoggingFilter implements ClientHttpRequestInterceptor {
public ClientHttpResponse intercept(HttpRequest request, byte[] body,(1)
ClientHttpRequestExecution execution) throws IOException {
ClientHttpResponse response = execution.execute(request, body); (1)
HttpMethod method = request.getMethod();
URI uri = request.getURI();
HttpStatus status = response.getStatusCode();
String requestBody = new String(body);
String responseBody = this.readString(response.getBody());
//... log debug
return response;
}
private String readString(InputStream inputStream) { ... }
...
}
1 | RestTemplate gives us access to the client request and response |
The following code shows an example of a @Bean
factory that creates RestTemplate
instances configured with the debug logging filter shown above.
@Bean
ClientHttpRequestFactory requestFactory() {
return new SimpleClientHttpRequestFactory(); (3)
}
@Bean
public RestTemplate restTemplate(RestTemplateBuilder builder,
ClientHttpRequestFactory requestFactory) { (3)
RestTemplate restTemplate = builder.requestFactory(
//used to read the streams twice -- so we can use the logging filter
()->new BufferingClientHttpRequestFactory(requestFactory)) (2)
.interceptors(List.of(new RestTemplateLoggingFilter())) (1)
.build();
return restTemplate;
}
1 | the overall intent of this @Bean factory is to register the logging filter |
2 | must configure RestTemplate with a buffer (BufferingClientHttpRequestFactory ) for body to enable multiple reads |
3 | providing a ClientRequestFactory to be forward-ready for SSL communications |
163.2. WebClient
The following code shows an example request and response filter. They are independent and are implemented using Java 8 lambda functions. You will notice that we have no easy access to the request or response body.
package info.ejava.examples.common.webflux;
import org.springframework.web.reactive.function.client.ExchangeFilterFunction;
...
public class WebClientLoggingFilter {
public static ExchangeFilterFunction requestFilter() {
return ExchangeFilterFunction.ofRequestProcessor((request) -> {
//access to
//request.method(),
//request.url(),
//request.headers()
return Mono.just(request);
});
}
public static ExchangeFilterFunction responseFilter() {
return ExchangeFilterFunction.ofResponseProcessor((response) -> {
//access to
//response.statusCode()
//response.headers().asHttpHeaders())
return Mono.just(response);
});
}
}
The code below demonstrates how to register custom filters for injected WebClient instances.
@Bean
public WebClient webClient(WebClient.Builder builder) {
return builder
.filter(WebClientLoggingFilter.requestFilter())
.filter(WebClientLoggingFilter.responseFilter())
.build();
}
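A short usage sketch of the injected WebClient follows; the baseUrl value is an assumption:
QuoteDTO quote = webClient.get()
        .uri(baseUrl + "/api/quotes/{id}", 1) //the filters above observe this exchange
        .accept(MediaType.APPLICATION_JSON)
        .retrieve()
        .bodyToMono(QuoteDTO.class)
        .block(); //blocking is fine in tests/demos; reactive callers would subscribe instead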
164. Date/Time Lenient Parsing and Formatting
In our quote example, we had an easy LocalDate to format and parse, but even that required a custom adapter for JAXB. Integration of other time-based properties can get more involved as we get into complete timestamps with timezone offsets. So let's try to address the issues here before we complete the topic on content exchange.
The primary time-related issues we can encounter include:
Potential Issue | Description |
---|---|
type not supported |
We have already encountered that with JAXB and solved it using a custom adapter. Each of the providers offers its own form of adapter (or serializer/deserializer), so we have a good headstart on how to solve the hard problems. |
non-UTC ISO offset style supported |
There are at least four expressions of a timezone offset (Z, +00, +0000, or +00:00) that could be used. Not all of them can be parsed by each provider out-of-the-box. |
offset versus extended offset zone formatting |
There are more verbose styles (Z[UTC]) of expressing timezone offsets that include the ZoneId. |
fixed width or truncated |
Are all fields supplied at all times even when they are 0
(e.g., |
We should always strive for:
-
consistent (ISO) standard format to marshal time-related fields
-
leniently parsing as many formats as possible
Let's take a look at establishing an internal standard, determining which providers violate that standard, how to adjust them to comply with our standard, and how to leniently parse many formats with the Jackson parser, since that will be our standard provider for the course.
164.1. Out of the Box Time-related Formatting
Out of the box, I found the providers marshalled OffsetDateTime
and Date
with the following format.
I provided an OffsetDateTime
and Date
timestamp with a varying number of nanoseconds (123456789, 1, and 0 ns) and timezones (UTC and -05:00), and the following table shows what was marshalled for the DTO.
Provider | OffsetDateTime | Trunc | Date | Trunc |
---|---|---|---|---|
Jackson |
1776-07-04T00:00:00.123456789Z 1776-07-04T00:00:00.1Z 1776-07-04T00:00:00Z 1776-07-03T19:00:00.123456789-05:00 1776-07-03T19:00:00.1-05:00 1776-07-03T19:00:00-05:00 |
Yes |
1776-07-04T00:00:00.123+00:00 1776-07-04T00:00:00.100+00:00 1776-07-04T00:00:00.000+00:00 |
No |
JSON-B |
1776-07-04T00:00:00.123456789Z 1776-07-04T00:00:00.1Z 1776-07-04T00:00:00Z 1776-07-03T19:00:00.123456789-05:00 1776-07-03T19:00:00.1-05:00 1776-07-03T19:00:00-05:00 |
Yes |
1776-07-04T00:00:00.123Z[UTC] 1776-07-04T00:00:00.1Z[UTC] 1776-07-04T00:00:00Z[UTC] |
Yes |
JAXB |
(not supported/ custom adapter required) |
n/a |
1776-07-03T19:00:00.123-05:00 1776-07-03T19:00:00.100-05:00 1776-07-03T19:00:00-05:00 |
Yes/ No |
Jackson and JSON-B — out of the box — use an ISO format that truncates
nanoseconds and uses "Z" and "+00:00" offset styles for java.time
types.
JAXB does not support java.time
types. When a non-UTC time is supplied,
the time is expressed using the targeted offset. You will notice that
Date is always modified to be UTC.
Jackson's Date format is fixed length, with no truncation, and is always expressed at UTC with a +HH:MM offset. The JSON-B and JAXB Date formats truncate milliseconds/nanoseconds. JSON-B uses the extended timezone offset (Z[UTC]) and JAXB uses the "+00:00" format. JAXB also always expresses the Date in EST in my case.
164.2. Out of the Box Time-related Parsing
To cut down on our choices, I took a look at which providers could parse the different timezone offsets out-of-the-box. To keep things sane, my detailed focus was limited to the Date field. The table shows that each of the providers can parse the "Z" and "+00:00" offset formats. They were also able to process variable length formats when faced with fewer significant nanosecond digits.
Provider | ISO Z | ISO +00 | ISO +0000 | ISO +00:00 | ISO Z[UTC] |
---|---|---|---|---|---|
Jackson |
Yes |
Yes |
Yes |
Yes |
No |
JSON-B |
Yes |
No |
No |
Yes |
Yes |
JAXB |
Yes |
No |
No |
Yes |
No |
The testing results show that the "Z" and "+00:00" timezone expressions should be portable and are something to target as our marshalling format.
-
Jackson - no output change
-
JSON-B - requires modification
-
JAXB - requires no change
164.3. JSON-B DATE_FORMAT Option
We can configure JSON-B time-related field output using a java.time
format string.
java.time
permits optional characters. java.text
does not. The following expression
is good enough for Date output but will create a parser that is intolerant of varying
length timestamps. For that reason, I will not choose this type of option, which locks formatting together with parsing.
JsonbConfig config=new JsonbConfig()
.setProperty(JsonbConfig.DATE_FORMAT, "yyyy-MM-dd'T'HH:mm:ss[.SSS][XXX]") (1)
.setProperty(JsonbConfig.FORMATTING, true);
builder = JsonbBuilder.create(config);
1 | a fixed formatting and parsing candidate option rejected because of parsing intolerance |
164.4. JSON-B Custom Serializer Option
A better JSON-B solution would be to create a serializer — independent of deserializer — that takes care of the formatting.
import java.time.format.DateTimeFormatter;
import java.util.Date;
import javax.json.bind.serializer.JsonbSerializer;
import javax.json.bind.serializer.SerializationContext;
import javax.json.stream.JsonGenerator;
public class DateJsonbSerializer implements JsonbSerializer<Date> {
@Override
public void serialize(Date date, JsonGenerator generator, SerializationContext serializationContext) {
generator.write(DateTimeFormatter.ISO_INSTANT.format(date.toInstant()));
}
}
We add the @JsonbTypeSerializer annotation to the field we need to customize and supply the class of our custom serializer.
@JsonbTypeSerializer(JsonbTimeSerializers.DateJsonbSerializer.class)
private Date date;
With the above annotation in place and the JsonbConfig unmodified, we get the output format we want from JSON-B without impacting its built-in ability to parse various time formats.
-
1776-07-04T00:00:00.123Z
-
1776-07-04T00:00:00.100Z
-
1776-07-04T00:00:00Z
164.5. Jackson Lenient Parser
All those modifications shown so far are good, but we would also like to have lenient
input parsing — possibly more lenient than built into the providers. Jackson provides
the ability to pass in a SimpleDateFormat format string or an instance of a class that extends DateFormat. SimpleDateFormat does not make a good lenient parser, so I created a lenient parser that uses the DateTimeFormatter framework and plugged it into the DateFormat framework.
public class ISODateFormat extends DateFormat implements Cloneable {
public static final DateTimeFormatter UNMARSHALLER = new DateTimeFormatterBuilder()
//...
.toFormatter();
public static final DateTimeFormatter MARSHALLER = DateTimeFormatter.ISO_OFFSET_DATE_TIME;
public static final String MARSHAL_ISO_DATE_FORMAT = "yyyy-MM-dd'T'HH:mm:ss[.SSS]XXX";
@Override
public Date parse(String source, ParsePosition pos) {
OffsetDateTime odt = OffsetDateTime.parse(source, UNMARSHALLER);
pos.setIndex(source.length()-1);
return Date.from(odt.toInstant());
}
@Override
public StringBuffer format(Date date, StringBuffer toAppendTo, FieldPosition pos) {
ZonedDateTime zdt = ZonedDateTime.ofInstant(date.toInstant(), ZoneOffset.UTC);
MARSHALLER.formatTo(zdt, toAppendTo);
return toAppendTo;
}
@Override
public Object clone() {
return new ISODateFormat(); //we have no state to clone
}
}
I have built the lenient parser using the Java interface to DateTimeFormatter. It is designed to
-
handle variable length time values
-
different timezone offsets
-
a few different timezone offset expressions
public static final DateTimeFormatter UNMARSHALLER = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.append(DateTimeFormatter.ISO_LOCAL_DATE)
.appendLiteral('T')
.append(DateTimeFormatter.ISO_LOCAL_TIME)
.parseLenient()
.optionalStart().appendOffset("+HH", "Z").optionalEnd()
.optionalStart().appendOffset("+HH:mm", "Z").optionalEnd()
.optionalStart().appendOffset("+HHmm", "Z").optionalEnd()
.optionalStart().appendLiteral('[').parseCaseSensitive()
.appendZoneRegionId()
.appendLiteral(']').optionalEnd()
.parseDefaulting(ChronoField.OFFSET_SECONDS,0)
.parseStrict()
.toFormatter();
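As a quick sanity check, the following sketch parses each of the offset styles discussed earlier with the builder above; lenient offset parsing lets the "+HH" pattern accept the longer forms as well:
for (String text : new String[]{
        "1776-07-04T00:00:00Z",
        "1776-07-04T00:00:00+00",
        "1776-07-04T00:00:00+0000",
        "1776-07-04T00:00:00+00:00"}) {
    OffsetDateTime odt = OffsetDateTime.parse(text, ISODateFormat.UNMARSHALLER);
    System.out.println(odt); //prints 1776-07-04T00:00Z for each variant
}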
An instance of my ISODateFormat
class is then registered with the provider
to use on all interfaces.
mapper = new Jackson2ObjectMapperBuilder()
.featuresToEnable(SerializationFeature.INDENT_OUTPUT)
.featuresToDisable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS)
.dateFormat(new ISODateFormat()) (1)
.createXmlMapper(false)
.build();
1 | registering a global time formatter for Dates |
In the server, we can add that same configuration option to our builder @Bean
factory.
@Bean
public Jackson2ObjectMapperBuilderCustomizer jacksonMapper() {
return (builder) -> { builder
.featuresToEnable(SerializationFeature.INDENT_OUTPUT)
.featuresToDisable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS)
.dateFormat(new ISODateFormat()); (1)
};
}
1 | registering a global time formatter for Dates for JSON and XML |
At this point we have insights into the time-related issues and the knowledge of how to correct them.
165. Summary
In this module we:
-
introduced the DTO pattern and contrasted it with the role of the Business Object
-
implemented a DTO class with several different types of fields
-
mapped our DTOs to/from a JSON and XML document using multiple providers
-
configured data mapping providers within our server
-
identified integration issues with time-related fields and learned how to create custom adapters to help resolve issues
-
learned how to implement client filters
-
took a deeper dive into time-related formatting issues in content and ways to address them
Swagger
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
166. Introduction
The core charter of this course is to introduce you to framework solutions in Java with a focus on the core Spring and Spring Boot frameworks. Details of Web APIs, database access, and distributed application design are core topics of other sibling courses. We have been covering a modest amount of Web API topics in this last set of modules to provide a functional front door to our application implementations. You know by now how to implement basic CRUD Web APIs. I now want to wrap up the Web API coverage by introducing a functional way to call those Web APIs with minimal work using Swagger UI. Detailed aspects of configuring Swagger UI are considered out of scope for this course, but many example implementation details are included in the Swagger Contest Example set of applications in the examples source tree.
166.1. Goals
You will learn to:
-
identify the items in the Swagger landscape and its central point — OpenAPI
-
generate an Open API interface specification from Java code
-
deploy and automatically configure a Swagger UI based on your Open API interface specification
-
invoke Web API endpoint operations using Swagger UI
166.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
generate a default Open API 3.0 interface specification using Springfox and Springdoc
-
configure and deploy a Swagger UI that calls your Web API using the Open API specification generated by your API
-
make HTTP CRUD interface calls to your Web API using Swagger UI
-
identify the starting point to make configuration changes to Springfox and Springdoc libraries
167. Swagger Landscape
The core portion of the Swagger landscape is made up of a line of standards and products geared towards HTTP-based APIs and supported by the company SmartBear. There are two types of things directly related to Swagger: the OpenAPI standard and tools. Although heavily focused on Java implementations, Swagger is generic to all HTTP API providers and not specific to Spring.
167.1. Open API Standard
OpenAPI — is an implementation-agnostic interface specification for HTTP-based APIs. This was originally baked into the Swagger tooling but donated to the open source community in 2015 as a way to define and document interfaces.
-
Open API 2.0 - released in 2014 as the last version prior to transitioning to open source. This is equivalent to the Swagger 2.0 Specification.
-
Open API 3.x - released in 2017 as the first version after transitioning to open source.
167.2. Swagger-based Tools
Within the core Swagger umbrella, there is a set of tools, both free/open source and commercial, largely provided by SmartBear.
-
Swagger Open Source Tools - these tools are primarily geared towards working with a single API at a time.
-
Swagger UI — is a user interface that can be deployed remotely or within an application. This tool displays descriptive information and provides the ability to execute API methods based on a provided OpenAPI specification.
-
Swagger Editor - is a tool that can be used to create or augment an OpenAPI specification.
-
Swagger Codegen - is a tool that builds server stubs and client libraries for APIs defined using OpenAPI.
-
-
Swagger Commercial Tools - these tools are primarily geared towards enterprise usage.
-
Swagger Inspector - a tool to create OpenAPI specifications using external call examples
-
Swagger Hub - repository of OpenAPI definitions
-
SmartBear offers another set of open source and commercial test tools called SoapUI, which is geared towards authoring and executing test cases against APIs and can read in OpenAPI as one of its API definition sources.
Our only requirement in going down this Swagger path is to have the capability to invoke the HTTP methods of our endpoints with some ease. There are at least two libraries that focus on generating the Open API spec and packaging a version of the Swagger UI to document and invoke the API in Spring Boot applications: Springfox and Springdoc.
167.3. Springfox
Springfox is focused on delivering Swagger-based solutions to Spring-based API implementations but is not an official part of Spring, Spring Boot, or Smartbear. It is hard to even find a link to Springfox on the Spring documentation web pages.
Essentially, Springfox is a mature but slowly evolving library:
Springfox has been around many years. I found the initial commit in 2012. It supported Open API 2.0 when I originally looked at it in June 2020 (Open API 3.0 was released in 2017). At that time, the Webflux branch was also still in SNAPSHOT. However, a few weeks later a flurry of releases went out that included Webflux support but no releases have occurred in the year since then. It is not a fast evolving library. |
Figure 57. Example Springfox Swagger UI
|
Springfox does not work with Spring Boot >= 2.6
Springfox does not work with Spring Boot >= 2.6, where the default path matching strategy changed to the PathPatternParser and causes an inspection error during Springfox initialization.
We can work around the issue for demonstration — but serious use of Swagger (as of July 2022) is now limited to Springdoc.
|
167.4. Springdoc
Springdoc is an independent project focused on delivering Swagger-based solutions to Spring Boot APIs. Like Springfox, Springdoc has no official ties to Spring, Spring Boot, or Pivotal Software. The library was created because of Springfox’s lack of support for Open API 3.x many years after its release.
Springdoc is relatively new compared to Springfox. I found its initial commit in July 2019, and it has released several versions per month since. That indicates to me that they have a lot of catch-up to do to complete the product. However, they have the advantage of coming in when the standard is more mature and were able to bypass earlier Open API versions. Springdoc targets integration with the latest Spring Web API frameworks — including Spring MVC and Spring WebFlux.
Figure 58. Example Springdoc SwaggerUI
|
168. Minimal Configuration
My goal in bringing the Swagger topics into the course is solely to provide us with a convenient way to issue example calls to our API — which is driving our technology solution within the application. For that reason, I am going to show the least amount of setup required to enable a Swagger UI and have it do the default thing.
The minimal configuration will be missing descriptions for endpoint operations, parameters, models, and model properties. The content will rely solely on interpreting the Java classes of the controller, model/DTO classes referenced by the controller, and their annotations. Springdoc definitely does a better job at figuring out things automatically but they are both functional in this state.
168.1. Springfox Minimal Configuration
Springfox requires one change to a web application to support Open API 3 and the SwaggerUI:
-
add Maven dependencies
168.1.1. Springfox Maven POM Dependency
Springfox requires two dependencies — which are both automatically brought in by the following starter dependency.
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-boot-starter</artifactId>
</dependency>
The starter automatically brings in the following dependencies — that no longer need to be explicitly named.
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-swagger2</artifactId> (1)
</dependency>
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-swagger-ui</artifactId> (2)
</dependency>
1 | support for generating the Open API spec |
2 | support for the Swagger UI |
If you are implementing a module with only the DTOs or controllers and working to further define the API with annotations, you would only need the springfox-swagger2
dependency.
168.1.2. Springfox Access
Once that is in place, you can access the Swagger UI at /swagger-ui/index.html.
The minimally configured Springfox will display more than what we want — but it has what we want. |
Figure 59. Minimally Configured SpringFox
|
168.1.3. Springfox Starting Customization
The starting point for adjusting the overall interface is the definition of one or more Dockets. From here, we can control the path and dozens of other options. The specific option shown will reduce the operations shown to those with paths that start with "/api/".
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
@Configuration
public class SwaggerConfig {
@Bean
public Docket api() {
return new Docket(DocumentationType.SWAGGER_2)
.select()
.paths(PathSelectors.regex("/api/.*"))
.build();
}
}
Textual descriptions are primarily added to the annotations of each controller and model/DTO class.
168.2. Springdoc Minimal Configuration
Springdoc minimal configuration is as simple as it gets. All that is required is a single Maven dependency.
168.2.1. Springdoc Maven Dependency
Springdoc has a single top-level dependency that brings in many lower-level dependencies.
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-ui</artifactId>
</dependency>
168.2.2. Springdoc Access
Once that is in place, you can access the Swagger UI at /swagger-ui.html.
The minimally configured Springdoc automatically filters out some of the Spring Boot overhead APIs, and what we get is more tuned towards our developed API. |
Avoid the Petstore
The Swagger UI ships with a default that displays the sample Petstore API when it is not given a specification URL. Use the /swagger-ui.html path above so the UI is pointed at your application's generated Open API specification. |
168.2.3. Springdoc Starting Customization
The starting point for adjusting the overall interface for Springdoc is the definition of one or more GroupedOpenApi objects. From here, we can control the path and countless other options. The specific option shown will reduce the operations shown to those with paths that start with "/api/".
...
import org.springdoc.core.GroupedOpenApi;
import org.springdoc.core.SpringDocUtils;
@Configuration
public class SwaggerConfig {
@Bean
public GroupedOpenApi api() {
SpringDocUtils.getConfig();
//...
return GroupedOpenApi.builder()
.group("contests")
.pathsToMatch("/api/**")
.build();
}
}
Textual descriptions are primarily added to the annotations of each controller and model/DTO class.
169. Example Use
By this point we are past ready for a live demo. You are invited to start both the Springfox and Springdoc versions of the Contests Application and poke around. The following commands are being run from the parent
swagger-contest-example
directory. They can also be run within the IDE.
$ mvn spring-boot:run -f springfox-contest-svc \(1)
-Dspring-boot.run.arguments=--server.port=8081 (2)
1 | starts the web application from within Maven |
2 | passes arguments from command line, thru Maven, to the Spring Boot application |
$ mvn spring-boot:run -f springdoc-contest-svc \
-Dspring-boot.run.arguments=--server.port=8082 (1)
1 | using a different port number to be able to compare side-by-side |
Access the two versions of the application using
-
Springfox: http://localhost:8081/swagger-ui/index.html
-
Springdoc: http://localhost:8082/swagger-ui.html
I will show an example thread here that is common to both.
169.1. Access Contest Controller POST Command
|
169.2. Invoke Contest Controller POST Command
|
169.3. View Contest Controller POST Command Results
|
170. Useful Configurations
I have created a set of examples under the Swagger Contest Example
that provide
a significant amount of annotations to add descriptions, provide accurate responses
to dynamic outcomes, etc. for both Springfox and Springdoc to get a sense of how they
performed.
Figure 60. Fully Configured Springfox Example
|
Figure 61. Fully Configured Springdoc Example
|
That is a lot of detail work and too much to cover here for what we are looking for. Feel free to look at the examples for details. However, I did encounter a required modification that made a feature go from unusable to usable and will show you that customization in order to give you a sense of how you might add other changes.
170.1. Customizing Type Expressions
java.time.Duration
has a simple
ISO string format expression that looks like PT60M
or PT3H
for
periods of time.
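A quick check of those ISO-8601 expressions in plain Java:
Duration sixty = Duration.parse("PT60M");  //parses the ISO-8601 duration text
System.out.println(sixty);                 //prints PT1H (normalized on output)
System.out.println(Duration.ofHours(3));   //prints PT3H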
170.1.1. OpenAPI 2 Model Property Annotations
The following snippet shows the Duration property
enhanced with Open API 2 annotations to use a default PT60M
example value.
package info.ejava.examples.svc.springfox.contests.dto;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlRootElement;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
@JacksonXmlRootElement(localName="contest", namespace=ContestDTO.CONTEST_NAMESPACE)
@ApiModel(description="This class describes a contest between a home and " +
"away team, either in the past or future.")
public class ContestDTO {
@JsonProperty(required = false)
@ApiModelProperty(position = 4,
example = "PT60M",
value="Each scheduled contest should have a period of time specified " +
"that identifies the duration of the contest. e.g., PT60M, PT2H")
private Duration duration;
170.1.2. OpenAPI 3 Model Property Annotations
The following snippet shows the Duration property
enhanced with Open API 3 annotations to use a default PT60M
example value.
package info.ejava.examples.svc.springdoc.contests.dto;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlRootElement;
import io.swagger.v3.oas.annotations.media.Schema;
@JacksonXmlRootElement(localName="contest", namespace=ContestDTO.CONTEST_NAMESPACE)
@Schema(description="This class describes a contest between a home and away team, "+
"either in the past or future.")
public class ContestDTO {
@JsonProperty(required = false)
@Schema(example = "PT60M",
description="Each scheduled contest should have a period of time specified "+
"that identifies the duration of the contest. e.g., PT60M, PT2H")
private Duration duration;
170.2. Duration Example Renderings
Both Springfox and Springdoc derive a more complicated schema (for JSON, XML, or both) than is desired or usable.
Springfox has a complex definition for java.time.Duration for both JSON and XML.
Springfox Default Duration JSON Expression
|
Springfox Default Duration XML Expression
|
Springdoc has a fine default for JSON but a similar issue for XML.
Springdoc Default Duration JSON Expression
|
Springdoc Default Duration XML Expression
|
We can correct the problem in Springfox by mapping the Duration class to a String.
I originally found this solution for one of the other java.time
types and it worked
here as well.
@Bean
public Docket api(SwaggerConfiguration config) {
return new Docket(DocumentationType.SWAGGER_2)
.select()
.paths(PathSelectors.regex("/api/.*"))
.build()
.directModelSubstitute(Duration.class, String.class)
//...
;
}
With the above configuration in place, Springfox provides an example that uses a simple string to express ISO duration values.
Springfox Duration Mapped to String JSON Expression
|
Springfox Duration Mapped to String XML Expression
|
The fact that Springdoc is newer (post Java 8) and expresses a Duration as a string for JSON tells me there has to be a good solution for the XML side. I did not have the time to get a perfect solution, but I found a configuration option that at least expressed the Duration as a plain string that was easy to enter a value into.
@Bean
public GroupedOpenApi api(SwaggerConfiguration config) {
SpringDocUtils.getConfig()
.replaceWithSchema(Duration.class,
new Schema().example("PT120M")
);
return GroupedOpenApi.builder()
.group("contests")
.pathsToMatch("/api/contests/**")
.build();
}
The examples below show that the configuration above improved the XML example without breaking the JSON example that we were targeting from the beginning. I purposely chose an alternate Duration value so we could see that the global configuration for property types is overriding the individual annotations.
Springdoc Duration Mapped to String JSON Expression
|
Springdoc Duration Mapped to String XML Expression
|
171. Springfox / Springdoc Analysis
Both of these packages are surprisingly functional right out of the box with the minimal configuration — with the exception of some complex types. In early June 2020, Springdoc definitely understood the purpose of the Java code better than Springfox. That is likely because Springdoc is very much aware of Spring Boot 2.x and Springfox is slow to evolve.
The one feature I could not get to work in either — that I assume works — is "examples" for complex types. I worked until I got a legal JSON and XML example displayed but fell short of being able to supply an example that was meaningful to the problem domain (e.g., supplying team names versus "string"). A custom example is quite helpful if the model class has a lot of optional fields that are rarely used and unlikely to be used by someone using the Swagger UI.
(In early June 2020) Springfox had better documentation that shows you features ahead of time in logical order. Springdoc’s documentation was primarily a Q&A FAQ that showed features in random order. I could not locate a good Springdoc example — but after implementing with Springfox first, the translation was extremely easy.
Springfox has been around a long time, but with the change from Open API 2 to 3, the addition of Webflux, and its slow rate of change, that library will likely not be a good choice for Open API or Webflux users. Springdoc seems to be having some growing pains: features may be easier to enable but don't always work 100%, documentation and examples to help correct issues are lacking, and the existing FAQ samples do not always match the code. However, it seems solid already (in early June 2020) for our purpose, and they have issued many releases per month since their first commit in July 2019. By the time you read this, much will have changed.
One thing I found after adding annotations for the technical frameworks (e.g., Lombok, WebMVC, Jackson JSON, Jackson XML) and then trying to document every corner of the API for Swagger in order to flesh out issues: it was hard to locate the actual code. My recommendation is to continue to make the names of controllers, model/DTO classes, parameters, and properties immediately understandable to save on the extra overhead of Open API annotations. Skip the obvious descriptions one can derive from the name and type, but still document the interface enough to make it usable to developers learning your API.
172. Summary
In this module we:
-
learned that Swagger is a landscape of items geared at delivering HTTP-based APIs
-
learned that the company Smartbear originated Swagger and then divided up the landscape into a standard interface, open source tools, and commercial tools
-
learned that the Swagger standard interface was released to open source at version 2 and is now Open API version 3
-
learned that two tools — Springfox and Springdoc — are focused on implementing Open API for Spring and Spring Boot applications and provide a packaging of the Swagger UI.
-
learned that Springfox and Springdoc have no formal ties to Spring, Spring Boot, Pivotal, Smartbear, etc. They are their own toolset and are not as polished as we have come to expect from the Spring suite of libraries.
-
learned that Springfox is older, originally supported Open API 2 and SpringMVC for many years, but now supports Open API 3 and WebFlux
-
learned that Springdoc is newer, active, and supports Open API 3, SpringMVC, and Webflux
-
learned how to minimally configure Springfox and Springdoc into our web application in order to provide the simple ability to invoke our HTTP endpoint operations.
Assignment 2 API
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
The parts of the API assignment make up a single assignment that is broken into focus areas that relate 1:1 with the lecture topics. You have the individual choice to start with any one area and either advance through it or jump repeatedly between areas as you complete them as one overall solution. However, you will likely want to start with the modules area so that you have some concrete modules to begin your early work. It is always good to be able to perform a successful root-level build of all targeted modules before you begin adding detailed dependencies, plugins, and Java code.
173. Overview
The API will include three main concepts. We are going to try to keep the business rules pretty simple at this point:
-
Home - an individual home that will be part of a sale
-
Homes can be added, modified, listed, and deleted
-
Homes can be deleted entirely at any time
-
-
Buyer - identification for a person that may be part of a Home purchase and is not specific to any one Home
-
Buyer information can be created, modified, listed, and deleted
-
Buyer information can be modified or deleted at any time
-
-
HomeSale - identifies a Home to be sold and purchased by a Buyer
-
HomeSales can be created for an existing Home to form a "listing"
-
Any Home information pertinent to Home will be locked into this HomeSale at creation time
-
-
HomeSales can be updated for an existing Buyer to complete a purchase
-
Any Buyer information will be permanent once assigned
-
-
HomeSale cannot be updated with a new Buyer once it has been purchased
-
HomeSale can be deleted and re-created.
-
173.1. Grading Emphasis
Grading emphasis will be focused on demonstrating satisfaction of the listed learning objectives and — with the exception of the scenarios listed at the end — not on quantity. Most required capability/testing is focused on demonstrating what you know how to do. You are free to implement as much of the business model as you wish, but treat the individually stated requirements and completing the listed scenarios at the end of the assignment as the minimal functionality required.
173.2. HomeBuyer Support
You are given a complete implementation of Home and Buyer as examples and building blocks in order to complete the assignment. Your primary work will be in completing HomeSales.
173.2.1. HomeBuyer Service
The homebuyers-support-api-svc
module contains a full @RestController/Service/Repo thread for both Homes and Buyers. The module contains two Auto-configuration definitions that will automatically activate and configure the two services within a dependent application.
The following dependency can be added to your service solution to bring in the Homes and Buyers service examples to build upon.
<dependency>
<groupId>info.ejava.assignments.api.homesales</groupId>
<artifactId>homebuyers-support-api-svc</artifactId>
<version>${ejava.version}</version>
</dependency>
173.2.2. HomeBuyer Client
A client module is supplied that includes the DTOs and client to conveniently communicate with the APIs. Your HomeSale solution may inject the Homes and Buyers service components for interaction but your API tests will use the Home and Buyer APIs.
The following dependency can be added to your solution to bring in the Homes and Buyers client artifact examples to build upon.
<dependency>
<groupId>info.ejava.assignments.api.homesales</groupId>
<artifactId>homebuyers-support-api-client</artifactId> (1)
<version>${ejava.version}</version>
</dependency>
1 | dependency on client will bring in both client and dto modules |
173.2.3. HomeBuyer Tests
You are also supplied a set of tests that are meant to assist in your early development of the end-to-end capability. You are still encouraged to write your own tests and required to do so in specific sections and for the required scenarios. The supplied tests are made available to you using the following Maven dependency.
<dependency>
<groupId>info.ejava.assignments.api.homesales</groupId>
<artifactId>homebuyers-support-api-svc</artifactId>
<version>${ejava.version}</version>
<classifier>tests</classifier> (1)
<scope>test</scope>
</dependency>
1 | tests have been packaged within a separate -tests.jar |
The tests require that
-
your HomeSale DTO class implement a
SaleDTO
"marker" interface provided by the support module. This interface has nothing defined and is only used to identify your DTO during the tests. -
implement an
ApiTestHelper<T extends SaleDTO>
and make that available to be injected into the test. A full skeleton of this class implementation has been supplied in the starter. -
supply a
@SpringBootTest
class that pulls in theHomeSalesApiNTest
test case as a base class from the support module. This test case evaluates your solution during several core steps of the assignment. Much of the skeletal boilerplate for this work is provided in the starter.
Enable the tests whenever you are ready to use them. This can be immediately or at the end.
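A minimal sketch of that wiring is shown below; the SalesTestConfiguration class name is a placeholder for whatever local @Configuration supplies your ApiTestHelper:
//HomeSalesApiNTest and the SaleDTO marker come from the support module
@SpringBootTest(classes = SalesTestConfiguration.class,
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class MyHomeSalesApiNTest extends HomeSalesApiNTest {
    //inherits the assignment's API tests; the injected ApiTestHelper<T extends SaleDTO>
    //adapts them to your HomeSale DTO
}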
174. Assignment 2a: Modules
-
2022-10-03: Updated API list response element name from type-specific "homes"/"buyers" to generic "contents".
174.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of establishing Maven modules for different portions of an application. You will:
-
package your implementation along proper module boundaries
174.2. Overview
In this portion of the assignment you will be establishing your source module(s) for development. Your new work should be spread between two modules:
-
a single client module for DTO and other API artifacts
-
a single application module where the Spring Boot executable JAR is built
Your client module should declare a dependency on the provided homebuyers-support-api-client
to be able to make use of any DTO or API constructs.
Your service/App module should declare a dependency on homebuyers-support-api-svc
to be able to host the Home and Buyer services.
You do not copy or clone these "support" modules.
Create a Maven dependency on these and use them as delivered.
174.3. Requirements
-
Create your overall project as two (or more) Maven modules under a single parent
-
client module(s) should contain any dependencies required by a client of the Web API. This includes the DTOs, any helpers created to implement the API calls, and unit tests for the DTOs. This module produces a regular Java JAR.
homebuyers-support-api-client/dto
has been supplied for you to use as an example and to be part of your client modules. Create a dependency on the client module for access to Home
andBuyer
client classes. Do not copy/clone the support modules. -
svc module to include your HomeSales controller, service, and repository work.
homebuyers-support-api-svc
has been supplied for you to both be part of your solution and to use as an example. Create a Maven dependency on this support module. Do not copy/clone it. -
app module that contains the
@SpringBootApplication
class will produce a Spring Boot Executable JAR to instantiate the implemented services.The app and svc modules can be the same module. In this dual role, it will contain your HomeSale service solution and also host the @SpringBootApplication
.The Maven pom.xml in the assignment starter for the App builds both a standard library JAR and a separate executable JAR (bootexec) to make sure we retain the ability to offer the HomeSale service as a library to a downstream assignment. By following this approach, you can make this assignment immediately reusable in assignment 3. -
parent module that establishes a common groupId and version for the child modules and delegates build commands. This can be the same parent used for assignments 0 and 1. Only your app and client modules will be children of this parent.
-
-
Define the module as a Web Application (dependency on spring-boot-starter-web).
-
Add a @SpringBootApplication class to the app module (already provided in starter for initial demo).
-
Once constructed, the modules should be able to
-
build the project from the root level
-
build regular Java JARs for use in downstream modules
-
build a Spring Boot Executable JAR (bootexec) for the @SpringBootApplication module
-
immediately be able to access the /api/homes and /api/buyers resource APIs when the application is running — because of Auto-Configuration.
Example Calls to Homes and Buyers Resource API
$ curl -X GET http://localhost:8080/api/homes
{"contents":[]}
$ curl -X GET http://localhost:8080/api/buyers
{"contents":[]}
-
174.4. Grading
Your solution will be evaluated on:
-
package your implementation along proper module boundaries
-
whether you have divided your solution into separate module boundaries
-
whether you have created appropriate dependencies between modules
-
whether your project builds from the root level module
-
whether you have successfully activated the Home and Buyer API
-
174.5. Additional Details
-
Pick a Maven hierarchical groupId for your modules that is unique to your overall work on this assignment.
175. Assignment 2b: Content
2022-10-03 - changed the list attribute name to contents in the drawing; recommend you follow that naming.
175.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of designing a Data Transfer Object that is to be marshalled/unmarshalled using various internet content standards. You will:
-
design a set of Data Transfer Objects (DTOs) to render information from and to the service
-
define Java class content type mappings to customize marshalling/unmarshalling
-
specify content types consumed and produced by a controller
-
specify content types accepted by a client
175.2. Overview
In this portion of the assignment you will be implementing a set of DTO classes that will be used to represent a HomeSale. All information expressed in the HomeSale will be derived from the Home and Buyer objects — except for the ID and the milestone dates.
Lecture/Assignment Module Ordering
It is helpful to have a data model in place before writing your services.
However, the lectures are structured with a content-less (String) domain up front and focus on the Web API and services before tackling content.
If you are starting this portion of the assignment before we have covered the details of content, it is suggested that you simply create a sparsely populated HomeSaleDTO class with at least an id field and the HomeSaleListDTO class to be able to complete the API interfaces.
Skip the details of this section until we have covered the Web content lecture.
|
HomeSale.id Avoids Compound Primary Key
The HomeSaleDTO id was added to keep from having to use a compound (homeId + buyerId) primary key.
This makes it an easier 1:1 example with Home and Buyer to follow.
|
String Primary Keys
Strings were used for the primary key type.
This will make it much easier and more portable when we use database repositories in a later assignment.
|
The provided homebuyers-support-api-dto
module has the Home and Buyer DTO classes.
-
HomeDTO - provides information specific to the home
-
BuyerDTO - provides information specific to the buyer
-
StreetAddress - provides properties specific to a location
-
MessageDTO - commonly used to provide error message information for request failures
-
<Type>ListDTO - used to conveniently express typed lists of objects
MessageDTO is from ejava-dto-util Class Examples
The MessageDTO is supplied in the ejava-dto-util package and used in most of the class API examples.
You are free to create your own for use with the HomeSales portion of the assignment.
|
175.3. Requirements
-
Create a DTO class to represent HomeSale
-
use the attributes in the diagram above as candidate properties for each class
-
HomeSale.saleAge should be a calculation of years, rounded down, between the
Home.yearBuilt
and the date the HomeSale was added. -
buyerName should be the concatenation of non-blank Buyer.firstName and Buyer.lastName values
-
streetAddress should be a deep copy of the Home.location
Create a constructor that assembles the HomeSaleDTO attributes from the available HomeDTO and BuyerDTO attributes.
-
-
Create a HomeSaleListDTO class to provide a typed collection of HomeSaleDTO. I recommend you name the collection within the class a generic contents for later reuse reasons.
-
Map each DTO class to:
-
Jackson JSON (the only required form)
-
mapping to Jackson XML is optional
-
-
Create a unit test to verify your new DTO type(s) can be marshalled/unmarshalled to/from the targeted serialization type. A minimal sketch appears after this list.
-
API TODO: Annotate controller methods to consume and produce supported content type(s) when they are implemented.
-
API TODO: Update clients used in unit tests to explicitly only accept supported content type(s) when they are implemented.
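To make the DTO test requirement concrete, the following is a minimal sketch of a Jackson round-trip unit test. The HomeSaleDTO builder and equals() are assumptions (e.g., lombok-generated); adapt the names to your classes.
Example DTO Marshal/Unmarshal Unit Test (sketch)
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.json.JsonMapper;
import org.junit.jupiter.api.Test;
import static org.assertj.core.api.BDDAssertions.then;

public class HomeSaleDTOTest {
    //findAndAddModules() registers the JavaTimeModule needed for LocalDate fields
    private final ObjectMapper mapper = JsonMapper.builder().findAndAddModules().build();

    @Test
    void can_marshal_and_unmarshal_json() throws Exception {
        HomeSaleDTO original = HomeSaleDTO.builder()
                .id("1")
                .homeId("home-1")
                .build();

        String json = mapper.writeValueAsString(original);            //marshal to JSON text
        HomeSaleDTO copy = mapper.readValue(json, HomeSaleDTO.class); //unmarshal back

        then(copy).isEqualTo(original);
    }
}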
175.4. Grading
Your solution will be evaluated on:
-
design a set of Data Transfer Objects (DTOs) to render information from and to the service
-
whether DTO class(es) represent the data requirements of the assignment
-
-
define Java class content type mappings to customize marshalling/unmarshalling
-
whether unit test(s) successfully demonstrate the ability to marshall and unmarshal to/from a content format
-
-
API TODO: specify content types consumed and produced by a controller
-
whether controller methods are explicitly annotated with consumes and produces definitions for supported content type(s)
-
-
API TODO: specify content types accepted by a client
-
whether the clients in the unit integration tests have been configured to explicitly supply and accept supported content type(s).
-
175.5. Additional Details
-
This portion of the assignment alone primarily produces a set of information classes that make up the primary vocabulary of your API and service classes.
-
Use of lombok is highly encouraged here and can tremendously reduce the amount of code you write for these classes.
-
Java Period class can easily calculate age in years between two LocalDates. A minimal sketch appears at the end of this section.
-
There are several states for a HomeSaleDTO. It would be helpful to create constructors and compound business methods around these states within the HomeSaleDTO class.
-
proposed Sale - this is built client-side and is input to a createHomeSale()
-
homeId
is mandatory and comes from the (server-side) Home -
amount
andlistDate
are optional overrides. Otherwise they would default to Home.value and today’s date on the server-side.
-
-
listing - this is built server-side. All home details obtained are from the (server-side) Home and proposed HomeSale. Buyer properties are not used.
-
purchaseInfo - this is built client-side and is input to purchase()
-
homeId
andbuyerId
are mandatory and come from (listing) HomeSale.id and Buyer.id obtained from the server-side. -
amount
andsaleDate
are optional overrides. Otherwise they would default to the current HomeSale.amount and today’s date on the server-side.
-
-
completed Sale - this is built server-side. All HomeSale details are obtained from the (server-side) HomeSale listing, (client provided) HomeSale purchaseInfo, and (server-side) Buyer
-
-
The homebuyers-support-api-client module also provides a HomeDTOFactory, BuyerDTOFactory, and StreetAddressDTOFactory that make it easy for tests and other demonstration code to quickly assemble example instances. You are encouraged to follow that pattern. However, keep your known client-side states and information sources in mind when creating the factory methods.
-
The homebuyers-support-api-client test cases for Home and Buyer demonstrate marshalling and unmarshalling DTO classes within a JUnit test. You should create a similar test of your HomeSaleDTO class to satisfy the testing requirement. Note that those tests leverage a JsonUtil class that is part of the class utility examples and simplifies example use of the Jackson JSON parser.
-
The
homebuyers-support-api-client
and supplied starter unit tests make use of JUnit@ParameterizedTest
— which allows a single JUnit test method to be executed N times with variable parameters — pretty cool feature. Try it. -
Supporting multiple content types is harder than it initially looks — especially when trying to mix different libraries. WebClient does not currently support Jackson XML and will attempt to resort to using JAXB in the client. I provide an example of this later in the semester (Spring Data JPA End-to-End) and advise you to address the optional XML mapping last after all other requirements of the assignment are complete. If you do attempt to tackle both XML and WebClient together, know to use JacksonXML mappings for the server-side and JAXB mappings for the client-side.
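The Period tip above can be made concrete with a short sketch. Treating yearBuilt as January 1 of that year is an assumption; use whatever date precision your Home data provides.
Example saleAge Calculation (sketch)
import java.time.LocalDate;
import java.time.Period;

public class SaleAgeCalculator {
    //years, rounded down, between the year the home was built and the HomeSale date
    public static int saleAge(int yearBuilt, LocalDate saleDate) {
        LocalDate built = LocalDate.of(yearBuilt, 1, 1); //assumed day within yearBuilt
        return Period.between(built, saleDate).getYears(); //getYears() rounds down
    }
}
For example, saleAge(2000, LocalDate.of(2022, 6, 1)) evaluates to 22.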
176. Assignment 2c: Resources
176.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of designing a simple Web API. You will:
-
identify resources
-
define a URI for a resource
-
define the proper method for a call against a resource
-
identify appropriate response code family and value to use in certain circumstances
176.2. Overview
In this portion of the assignment you will be identifying a resource to implement the HomeSale API. Your results will be documented in a @RestController class. There is nothing to test here until the DTO and service classes are implemented.
The API will include three main concepts:
-
Homes (provided) - an individual home that can be part of a home sale
-
Home information can be created, modified, listed, and deleted
-
Home information can be modified or deleted at any time but changes do not impact previous home sales
-
-
Buyers (provided) - identification for a person that may participate in a home sale
-
Buyer information can be created, modified, listed, and deleted
-
Buyer information can be modified or deleted at any time but changes do not impact previous home sales
-
-
HomeSales (your assignment) - a transaction for one Home and 0..1 Buyer
-
HomeSales can be created for an existing Home (aka the "listing state")
-
HomeSales can be updated with a Buyer (aka the "purchased state")
-
Up to one Buyer can be added to a HomeSale
-
HomeSales can be listed and deleted at any time
-
Modifications are Replacements
All modifications are replacements. There are no individual field edits requested. |
176.3. Requirements
Capture the expression of the following requirements in a set of @RestController
class(es) to represent your resources, URIs, required methods, and status codes.
-
Identify your base resource(s) and sub-resource(s)
-
create URIs to represent each resource and sub-resource
Example Skeletal API Definitions
public interface HomesAPI {
    public static final String HOMES_PATH="/api/homes";
    public static final String HOME_PATH="/api/homes/{id}";
    ...
-
create a separate @RestController class — at a minimum — for each base resource
Example Skeletal Controller
@RestController
public class HomesController {
-
-
Identify the @RestController methods required to represent the following actions for HomeSale. Assign them specific URIs and HTTP methods. (A HomeSale-specific URI sketch appears after this list.)
-
create new resource
-
get a specific resource
-
update a specific resource
-
list resources with paging
-
accept optional pageNumber, pageSize, and query parameters
-
return HomeSaleListDTO containing contents of List<HomeSaleDTO>
-
-
delete a specific resource
-
delete all instances of the resource
Example Skeletal Controller Method
@RequestMapping(path=HomesAPI.HOMES_PATH, //POST to the collection URI creates a new Home
        method = RequestMethod.POST,
        consumes = {...},
        produces = {...})
public ResponseEntity<HomeDTO> createHome(@RequestBody HomeDTO newHome) {
    throw new RuntimeException("not implemented");
}
-
-
CLIENT TODO: Identify the response status codes to be returned for each of the actions
-
account for success and failure conditions
-
authorization does not need to be taken into account at this time
-
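For reference, the following is a hedged sketch of what applying the earlier URI template to the HomeSale resource could look like. The path values and the purchases sub-resource name are assumptions; choose your own nouns.
Example HomeSale Resource URIs (sketch)
public interface HomeSalesAPI {
    public static final String SALES_PATH = "/api/sales";
    public static final String SALE_PATH = "/api/sales/{id}";
    //an instance of the "purchase" action expressed as a sub-resource noun
    public static final String SALE_PURCHASES_PATH = "/api/sales/{id}/purchases";
}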
176.4. Grading
Your solution will be evaluated on:
-
identify resources
-
whether your identified resource(s) represent thing(s)
-
-
define a URI for a resource
-
whether the URI(s) center on the resource versus actions performed on the resource
-
-
define the proper method for a call against a resource
-
whether proper HTTP methods have been chosen to represent appropriate actions
-
-
CLIENT TODO: identify appropriate response code family and value to use in certain circumstances
-
whether proper response codes been identified for each action
-
176.5. Additional Details
-
This portion of the assignment alone should produce a @RestController class with annotated methods that statically define your API interface (possibly missing content details). There is nothing to run or test in this portion alone.
-
A simple and useful way of expressing your URIs can be through defining a set of public static attributes expressing the collection and individual instance of the resource type.
Example Template Resource Declaration
public static final String (RESOURCE)S_PATH="(path)";
public static final String (RESOURCE)_PATH="(path)/{identifier(s)}";
-
If you start with this portion, you may find it helpful to
-
create sparsely populated DTO classes — HomeSaleDTO with just an id and HomeSaleListDTO — to represent the payloads that are accepted and returned from the methods
-
have the controller simply throw a RuntimeException indicating that the method is not yet implemented. That would be a good excuse to also establish an exception advice to handle thrown exceptions. A minimal advice sketch appears at the end of this section.
-
-
The details of the HomeSale will be performed server-side — based upon IDs and optional properties provided by the client and the Home and Buyer values found server-side. The client never provides more than an ID to reference information available server-side.
-
There is nothing to code up relative to response codes at this point. However:
-
Finding zero resources to list is not a failure. It is a success with no resources in the collection.
-
Not finding a specific resource is a failure and the status code returned should reflect that.
-
Instances of Action Verbs can be Resource Nouns
If an action does not map cleanly to a resource+HTTP method, consider thinking of the action (e.g., cancel) as one instance of an action (e.g., cancellation) that is a sub-resource of the subject (e.g., subjects/{subjectId}/cancellations). How might you think of the action if it took days to complete? |
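The exception advice mentioned above can start as small as the following sketch, assuming a plain-text error body (a MessageDTO payload would follow the same pattern).
Example Exception Advice (sketch)
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class ExceptionAdvice {
    //translates the "not implemented" RuntimeException from the skeletal
    //controller methods into an HTTP response
    @ExceptionHandler(RuntimeException.class)
    public ResponseEntity<String> handleRuntime(RuntimeException ex) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body("unexpected error: " + ex.getMessage());
    }
}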
177. Assignment 2d: Client/API Interactions
177.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of designing and implementing the interaction between a Web client and API. You will:
-
implement a service method with Spring MVC synchronous annotated controller
-
implement a client using Spring MVC RestTemplate or Spring Webflux (in synchronous mode)
-
pass parameters between client and service over HTTP
-
return HTTP response details from service
-
access HTTP response details in client
177.2. Overview
In this portion of the assignment you will invoke your resource’s Web API from a client running within a JUnit test case.
There will be at least two primary tests in this portion of the assignment: handling success and handling failure. The failure will be either real or simulated through a temporary resource stub implementation.
177.3. Requirements
-
Implement stub behavior in the controller class as necessary to complete the example end-to-end calls.
Example Stub Response
return ResponseEntity.status(HttpStatus.CREATED)
        .body(HomeDTO.builder()
                .id("1")
                .build());
-
Implement a unit integration test to demonstrate a success path (a client-call sketch appears after this list)
-
use either a
RestTemplate
orWebClient
API client class for this test -
make at least one call that passes parameter(s) to the service and the results of the call depend on that passed parameter value
-
access the return status and payload in the JUnit test/client
-
evaluate the result based on the provided parameter(s) and expected success status
Example Response Evaluation
then(response.getStatusCode()).isEqualTo(HttpStatus.CREATED);
then(homeResult.getId()).isNotBlank();
then(homeResult).isEqualTo(homeRequestDTO.withId(homeResult.getId()));
-
Examples use RestTemplate
The Home and Buyer examples only use the RestTemplate approach.
|
One Success, One Failure, and Move On
Don’t put too much work into more than a single success and failure path test before completing more of the end-to-end.
Your status and details will likely change.
|
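The following sketch shows the shape of the success-path call from the client side. The /api/sales path and DTO accessors are assumptions; the Home and Buyer client classes in homebuyers-support-client show the full pattern.
Example RestTemplate Success-Path Call (sketch)
import java.net.URI;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;
import org.springframework.web.util.UriComponentsBuilder;
import static org.assertj.core.api.BDDAssertions.then;

public class HomeSalesClientSketch {
    public HomeSaleDTO createSale(RestTemplate restTemplate, URI baseUrl, HomeSaleDTO proposed) {
        URI url = UriComponentsBuilder.fromUri(baseUrl).path("/api/sales").build().toUri();

        ResponseEntity<HomeSaleDTO> response =
                restTemplate.postForEntity(url, proposed, HomeSaleDTO.class);

        //access and evaluate the status and payload returned from the service
        then(response.getStatusCode()).isEqualTo(HttpStatus.CREATED);
        then(response.getBody().getId()).isNotBlank();
        return response.getBody();
    }
}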
177.4. Grading
Your solution will be evaluated on:
-
implement a service method with Spring MVC synchronous annotated controller
-
whether your solution implements the intended round-trip behavior for an HTTP API call to a service component
-
-
implement a client using Spring MVC RestTemplate or Spring Webflux WebClient (in synchronous mode)
-
whether you are able to perform an API call using either the RestTemplate or WebClient APIs
-
-
pass parameters between client and service over HTTP
-
whether you are able to successfully pass necessary parameters between the client and service
-
-
return HTTP response details from service
-
whether you are able to return service response details to the API client
-
-
access HTTP response details in client
-
whether you are able to access HTTP status and response payload
-
177.5. Additional Details
-
Your DTO class(es) have been placed in your Client module in a separate section of this assignment. You may want to add an optional API client class to that Client module — to encapsulate the details of the RestTemplate or WebClient calls. The
homebuyers-support-client
module contains example client API classes for Homes and Buyers using RestTemplate.
-
Avoid placing extensive business logic into the stub portion of the assignment. The controller method details are part of a separate section of this assignment.
-
This portion of the assignment alone should produce a simple, but significant demonstration of client/API communications (success and failure) using HTTP and service as the model for implementing additional resource actions.
-
Inject the dependencies for the test from the Spring context. Anything that depends on the server’s port number must be delayed (@Lazy).
@Bean
@Lazy (2)
public ServerConfig serverConfig(@LocalServerPort int port) { (1)
    return new ServerConfig().withPort(port).build();
}

@Bean
@Lazy (3)
public HomesAPI homesAPI(RestTemplate restTemplate, ServerConfig serverConfig) {
    return new HomesAPIClient(restTemplate, serverConfig, MediaType.APPLICATION_JSON);
}

@SpringBootTest(...webEnvironment=...)
public class HomeSalesAPINTest {
    @Autowired
    private HomesAPI homesAPI;
1 | server’s port# is not known until runtime |
2 | cannot eagerly create @Bean until server port number available |
3 | cannot eagerly create dependents of port number |
178. Assignment 2e: Service/Controller Interface
178.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of separating the Web API facade details from the service implementation details and integrating the two. You will:
-
implement a service class to encapsulate business logic
-
turn @RestController class into a facade and delegate business logic details to an injected service class
-
implement an error reporting strategy
178.2. Overview
In this portion of the assignment you will be implementing the core of the HomeSale components and integrating them as seamlessly as possible.
-
the controller will delegate commands to a service class to implement the business logic.
-
the service will use internal logic and external services to implement the details of the business logic.
-
the repository will provide storage for the service.
Your Assignment is Primarily HomeSale and Integration
You have been provided complete implementation of the Homes and Buyers services.
You only have to implement the HomeSales components and integrate them with the Home and Buyer services.
|
A significant detail in this portion of the assignment is to design a way to convey success and failure when carrying out an API command. The controller should act only as a web facade. The service(s) will implement the details of the services and report the results.
Under the hood of the HomeSaleService
are a repository and external clients.
-
You will create a
HomeSaleService
interface that usesHomeSaleDTO
as its primary data type. This interface can be made reusable through the full semester of assignments.
This assignment will only work with the DTO types (no entities/BOs) and a simulated/stub Repository.
-
You will create a repository interface and implementation that mimic the behavior of a CRUD and Pageable Repository in a future assignment.
-
You will inject and implement calls to the Home and Buyer services. An API client is provided for both those interfaces.
178.3. Requirements
-
Implement a HomeSaleDTORepository interface and implementation component to simulate necessary behavior (e.g., save, findById) for the base HomeSaleDTO resource type. Don’t go overboard here. We just need some place to generate IDs and hold the data in memory.
-
implement a Java interface (e.g., HomeSaleDTORepository).
Try to make this interface conceptually consistent with the Spring Data CrudRepository and PagingAndSortingRepository (including the use of Pageable and Page) to avoid changes later on. This is just a tip and not a requirement — implement what you need for now. Start with just save().
-
implement a component class stub (e.g., HomeSaleDTORepositoryStub) using simple, in-memory storage (e.g., HashMap or ConcurrentHashMap) and an ID generation mechanism (e.g., int or AtomicInteger). A minimal stub sketch appears after this requirements list.
You are free to make use of the POJORepositoryMapImpl<T> class in the homesales_support_api module as your implementation for the repository. It comes with a POJORepository<T> interface, and the Buyer repository and service provide an example of its use. Report any bugs you find.
-
-
Implement a HomeSale service to implement actions and enforce business logic on the base resources
-
implement a Java interface. This will accept and return HomeSaleDTO types.
-
implement a component class for the service.
-
inject the dependencies required to implement the business logic
-
(provided)
HomesService
- to verify existence of and obtain details of homes -
(provided)
BuyersService
- to verify existence of and obtain details of buyers -
(your)
HomeSaleDTORepository
- to store details that are important to home sales
You are injecting the service implementations (not the HTTP API) for the Home and Buyer services into your HomeSale service. That means they will be part of your application and you will have a Java ⇒ Java interface with them.
-
-
implement the business logic for the service
-
a HomeSale can only be created for an existing Home and will be populated using the values of that Home on the server-side
-
input a proposed HomeSale as a
HomeSaleDTO
filled in with only the following properties. The homeId will be used to obtain the current Home values from the HomeService.-
homeId (mandatory)
-
listingDate (optional — server-side default to now)
-
amount (optional — server-side default to Home.value)
-
-
-
during a purchase change, the HomeSale will be updated with sale information. A
HomeSaleDTO
will be filled in with only the following properties. homeId is also required but can be made more prominent as a parameter.-
buyerId (mandatory)
-
saleDate (optional - server-side default is today’s UTC date)
-
amount (optional — server-side default is the original HomeSale.amount)
-
-
a HomeSale cannot be purchased more than once.
-
implement a basic getHomeSale returning the current state of the HomeSale
-
implement a paged findHomeSales that returns all. Use the Spring Data Pageable and Page (and PageImpl) classes to express pageNumber, pageSize, and page results (i.e., Page findHomeSales(Pageable)). You do not need to implement sort.
-
augment the findHomeSales to optionally include a search for matching homeId, buyerId, or both.
Implement Search Details within Repository class
Delegate the gory details of searching through the data to the repository class.
-
-
-
Design a means for service calls to
-
indicate success
-
indicate failure to include internal or client error reason. Client error reasons must include separate issues "not found" and "bad request" at a minimum.
-
-
Integrate services into controller components
-
complete and report successful results to API client
-
report errors to API client, to include the status code and a textual message that is specific to the error that just occurred
-
-
Implement a unit integration test to demonstrate at least one success and error path
-
access the return status and payload in the client
-
evaluate the result based on the provided parameter(s) and expected success/failure status
-
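The in-memory repository stub referenced above can be as small as the following sketch. HomeSaleDTO.getId()/withId() are assumptions (e.g., lombok-generated); grow the interface only as the service needs it.
Example In-Memory Repository Stub (sketch)
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class HomeSaleDTORepositoryStub {
    private final Map<String, HomeSaleDTO> sales = new ConcurrentHashMap<>();
    private final AtomicInteger nextId = new AtomicInteger();

    public HomeSaleDTO save(HomeSaleDTO sale) {
        if (sale.getId() == null) { //assign a generated String primary key
            sale = sale.withId(Integer.toString(nextId.incrementAndGet()));
        }
        sales.put(sale.getId(), sale);
        return sale;
    }

    public Optional<HomeSaleDTO> findById(String id) {
        return Optional.ofNullable(sales.get(id));
    }
}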
178.4. Grading
Your solution will be evaluated on:
-
implement a service class to encapsulate business logic
-
whether your service class performs the actions of the service and acts as the primary enforcer of stated business rules
-
-
turn @RestController class into a facade and delegate business logic details to an injected service class
-
whether your API tier of classes act as a thin adapter facade between the HTTP protocol and service component interactions
-
-
implement an error reporting strategy
-
whether your design has identified how errors are reported by the service tier and below
-
whether your API tier is able to translate errors into meaningful error responses to the client
-
178.5. Additional Details
-
This portion of the assignment alone primarily provides an implementation pattern for how services will report successful and unsuccessful requests and how the API will turn that into a meaningful HTTP response that the client can access.
-
The homebuyers-support-api-svc module contains a set of example DTO Repository Stubs.
-
The Homes package shows an example of a fully exploded implementation. Take this approach if you wish to write all the code yourself.
-
The Buyers package shows an example of how to use the templated POJORepository<T> interface and POJORepositoryMapImpl<T> implementation. Take this approach if you want to delegate to an existing implementation and only provide the custom query methods.
POJORepositoryMapImpl<T> provides a protected findAll(Predicate<T> predicate, Pageable pageable) that returns a Page<T>. All you have to provide are the predicates for the custom query methods.
-
-
You are required to use the Pageable and Page classes (from the org.springframework.data.domain Java package) for paging methods in your getAllHomeSales() service interface — to be forward compatible with later assignments that make use of Spring Data. You can find example use of Pageable and Page (and PageImpl) in the Home and Buyer examples. A minimal paging sketch appears at the end of this section.
-
It is highly recommended that exceptions be used between the service and controller layers to identify error scenarios and that specific exceptions be used to help identify which kind of error is occurring in order to report accurate status to the client. Leave non-exception paths for successful results. The Homes and Buyers examples leverage the exceptions defined in the ejava-dto-util module. You are free to define your own.
-
It is highly recommended that ExceptionHandlers and RestExceptionAdvice be used to handle exceptions thrown and report status. The Homes and Buyers examples leverage the ExceptionHandlers from the ejava-web-util module. You are free to define your own.
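The paging sketch referenced above: building a Page from an in-memory snapshot using the org.springframework.data.domain types. The snapshot list stands in for your repository's storage and is an assumption of this sketch.
Example Pageable/Page Usage (sketch)
import java.util.List;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageImpl;
import org.springframework.data.domain.Pageable;

public class PagingSketch {
    //slice the matching contents down to the requested page
    public Page<HomeSaleDTO> page(List<HomeSaleDTO> matches, Pageable pageable) {
        int from = (int) Math.min(pageable.getOffset(), matches.size());
        int to = Math.min(from + pageable.getPageSize(), matches.size());
        return new PageImpl<>(matches.subList(from, to), pageable, matches.size());
    }
}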
179. Assignment 2f: Required Test Scenarios
There are a set of minimum scenarios that are required of a complete project.
-
Creation of HomeSale for a Home
-
success (201/CREATED)
-
failed creation because Home unknown (422/UNPROCESSABLE_ENTITY)
-
-
Update of HomeSale for a Buyer (purchase)
-
success (200/OK)
-
failed because HomeSale does not exist (404/NOT_FOUND)
-
failed because Buyer does not exist (422/UNPROCESSABLE_ENTITY)
-
179.1. Scenario: Creation of HomeSale for a Home
In this scenario, a HomeSale is created for a Home.
179.1.1. Primary Path: Success
In this primary path, the Home exists and the API client is able to successfully create a HomeSale for the Home. The desired status in the response is a 201/CREATED. A follow-on query for HomeSales will report the new entry.
179.1.2. Alternate Path: Home unknown
In this alternate path, the Home does not exist and the API client is unable to create a HomeSale for the Home. The desired response status is a 422/UNPROCESSABLE_ENTITY. The HomeSales resource understood the request (i.e., not a 400/BAD_REQUEST), but the request contained information that could not be processed.
getHome() will return a 404/NOT_FOUND — which is not the same status requested here. HomeSales will need to account for that difference. |
179.2. Scenario: Update of HomeSale for a Buyer
In this scenario, a HomeSale is updated with a Buyer.
179.2.1. Primary Path: Success
In this primary path, the identified Buyer exists and the API client is able to successfully update a HomeSale with the Buyer. This update should be performed all on the server-side. The client primarily expresses Ids. A follow-on query for HomeSales will report the updated entry.
179.2.2. Alternate Path: HomeSale does not exist
In this alternate path, the requested HomeSale does not exist.
The expected response status code should be 404/NOT_FOUND
to express that the target resource could not be found.
179.2.3. Alternate Path: Buyer does not exist
In this alternate path, the requested HomeSale does exist but the identified Buyer does not.
The expected response status code should be 422/UNPROCESSABLE_ENTITY
to express that the request was understood and the target resource could be found. However, the request contained information that could not be processed.
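One way to assert these failure statuses from a RestTemplate-based test is to catch the HttpClientErrorException thrown for 4xx responses. The salesClient.purchase() call below is a hypothetical stand-in for your API client method.
Example Failure-Path Assertion (sketch)
import org.springframework.http.HttpStatus;
import org.springframework.web.client.HttpClientErrorException;
import static org.assertj.core.api.Assertions.catchThrowableOfType;
import static org.assertj.core.api.BDDAssertions.then;

public class FailurePathSketch {
    public void purchase_with_unknown_buyer_fails(HomeSalesAPIClient salesClient) {
        //RestTemplate raises HttpClientErrorException for 4xx statuses like 422
        HttpClientErrorException ex = catchThrowableOfType(
                () -> salesClient.purchase("sale-1", "unknown-buyer"), //hypothetical call
                HttpClientErrorException.class);
        then(ex.getStatusCode()).isEqualTo(HttpStatus.UNPROCESSABLE_ENTITY);
    }
}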
179.3. Requirements
-
Implement the above scenarios within one or more integration unit tests.
-
Name the tests such that they are picked up and executed by the Surefire test phase of the maven build.
-
Turn in a cleaned source tree of the project under a single root parent project. The Home and Buyer modules do not need to be included.
-
The source tree should be ready to build in an external area that has access to the ejava-nexus repository.
179.4. Grading
-
create an integration test that verifies a successful scenario
-
whether you implemented a set of integration unit tests that verify the primary paths for HomeSales
-
-
create an integration test that verifies a failure scenario
-
whether you implemented a set of integration unit tests that verify the failure paths for HomeSales.
-
179.5. Additional Details
-
Place behavior in the proper place
-
The unit integration test is responsible for populating the Homes and Buyers. It will supply HomeDTOs and BuyerDTOs populated on the client-side — to the Homes and Buyers APIs/services.
-
The unit integration test will pass sparsely populated HomeSaleDTOs to the server-side with homeId, buyerId, etc. values that express inputs for creating a listing or making a purchase. All details to populate the returned
HomeSaleDTOs
(i.e., Home and Buyer info) will come from the server-side. There should never be a need for the client to self-create/fully-populate a HomeSaleDTO.
-
Spring Security Introduction
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
180. Introduction
Much of what we have covered to date has been focused on delivering functional capability. Before we go much further into filling in the backend parts of our application or making deployments, we need to begin factoring in security concerns. Information Security is a practice of protecting information by mitigating risks. [32] Risks are identified with their impact and appropriate mitigations.
We won’t get into the details of Information Security analysis and making specific trade-offs, but we will cover how we can address the potential mitigations through the use of a framework and how that is performed within Spring Security and Spring Boot.
180.1. Goals
You will learn:
-
key terms relative to implementing access control and privacy
-
the flexibility and power of implementing a filter-based processing architecture
-
the purpose of the core Spring Authentication components
-
how to enable Spring Security
-
to identify key aspects of the default Spring Security
180.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
define identity, authentication, and authorization and how they can help protect our software system
-
identify the purpose for and differences between encoding, encryption, and cryptographic hashes
-
identify the purpose of a filter-based processing architecture
-
identify the core components within Spring Authentication
-
identify where the current user authentication is held/located
-
how to activate default Spring Security configuration
-
identify and demonstrate the security features of the default Spring Security configuration
-
step through a series of calls through the Security filter chain
181. Access Control
Access Control is one of the key mitigation factors within a security solution.
- Identity
-
We need to know who the caller is and/or who is the request being made for. When you make a request in everyday life (e.g., make a pizza order) — you commonly have to supply your identity so that your request can be associated with you. There can be many layers of systems/software between the human and the action performed, so identity can be more complex than just a single value — but I will keep the examples to a simple username.
- Authentication
-
We need verification of the requester’s identity. This is commonly something known — e.g., a password, PIN, or generated token. Additional or alternate types of authentication like something someone has (e.g., access to a specific mobile phone number or email account, or assigned token generator) are also becoming more common today and are adding a needed additional level of security to more sensitive information.
- Authorization
-
Once we know and can confirm the identity of the requester, we then need to know what actions they are allowed to perform and information they are allowed to access or manipulate. This can be based on assigned roles (e.g., administrator, user), relative role (e.g., creator, owner, member), or releasability (e.g., access markings).
These access control decisions are largely independent of the business logic and can be delegated to the framework. That makes it much easier to develop and test business logic outside of the security protections and to be able to develop and leverage mature and potentially certified access control solutions.
182. Privacy
Privacy is a general term applied to keeping certain information or interactions secret from others. We use various encoding, encryption, and hash functions in order to achieve these goals.
182.1. Encoding
Encoding converts source information into an alternate form that is safe for communication and/or storage. [33] Two primary examples are URL and Base64 encoding of special characters or entire values. Encoding may obfuscate the data, but by itself is not encryption. Anyone knowing the encoding scheme can decode an encoded value and that is its intended purpose.
$ echo -n jim:password | base64 (1)
amltOnBhc3N3b3Jk
$ echo -n amltOnBhc3N3b3Jk | base64 -D
jim:password
1 | echo -n echoes the supplied string without a newline character added - which would pollute the value |
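The same round-trip can be performed in Java with the java.util.Base64 encoder/decoder.
Example Base64 Encode/Decode in Java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Example {
    public static void main(String[] args) {
        String encoded = Base64.getEncoder()
                .encodeToString("jim:password".getBytes(StandardCharsets.UTF_8));
        System.out.println(encoded); //amltOnBhc3N3b3Jk

        String decoded = new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
        System.out.println(decoded); //jim:password
    }
}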
182.2. Encryption
Encryption is a technique of encoding "plaintext" information into an enciphered form ("ciphertext") with the intention that only authorized parties — in possession of the encryption/decryption keys — can convert back to plaintext. [34] Others not in possession of the keys would be forced to try to break the code thru (hopefully) a significant amount of computation.
There are two primary types of keys — symmetric and asymmetric. For encryption with symmetric keys, the encryptor and decryptor must be in possession of the same/shared key. For encryption with asymmetric keys — there are two keys: public and private. Plaintext encrypted with the shared, public key can only be decrypted with the private key. SSH is an example of using asymmetric encryption.
Asymmetric encryption is more computationally intensive than symmetric
Asymmetric encryption is more computationally intensive than symmetric — so you may find
that asymmetric encryption techniques will embed a dynamically generated symmetric key
used to encrypt a majority of the payload within a smaller area of the payload that is
encrypted with the asymmetric key.
|
$ echo -n "jim:password" > /tmp/plaintext.txt
$ openssl enc -aes-256-cbc -salt -in /tmp/plaintext.txt -base64 \ (1)
-pass pass:password > /tmp/ciphertext
$ cat /tmp/ciphertext
U2FsdGVkX18mM2yNc337MS5r/iRJKI+roqkSym0zgMc=
$ openssl enc -d -aes-256-cbc -in /tmp/ciphertext -base64 -pass pass:password (2)
jim:password
$ openssl enc -d -aes-256-cbc -in /tmp/ciphertext -base64 -pass pass:password123 (3)
bad decrypt
4611337836:error:06FFF064:digital envelope routines:CRYPTO_internal:bad decrypt
1 | encrypting file of plaintext with a symmetric/shared key. Result is base64 encoded. |
2 | decrypting file of ciphertext with valid symmetric/shared key after being base64 decoded |
3 | failing to decrypt file of ciphertext with invalid key |
182.3. Cryptographic Hash
A Cryptographic Hash is a one-way algorithm that takes a payload of an arbitrary size and computes a value of a known size that is unique to the input payload. The output is deterministic such that multiple, separate invocations can determine if they were working with the same input value — even if the resulting hash is not technically the same. Cryptographic hashes are good for determining whether information has been tampered with or to avoid storing recoverable password values.
$ echo -n password | md5
5f4dcc3b5aa765d61d8327deb882cf99 (1)
$ echo -n password | md5
5f4dcc3b5aa765d61d8327deb882cf99 (1)
$ echo -n password123 | md5
482c811da5d5b4bc6d497ffa98491e38 (2)
1 | Core hash algorithms produce identical results for same inputs |
2 | Different value produced for different input |
Unlike encryption there is no way to mathematically obtain the original plaintext from the resulting hash. That makes it a great alternative to storing plaintext or encrypted passwords. However, there are still some unwanted vulnerabilities by having the calculated value be the same each time.
By adding some non-private variants to each invocation (called "Salt"), the resulting values can be technically different — making it difficult to use brute force dictionary attacks. The following example uses the Apache htpasswd command to generate a Cryptographic Hash with a Salt value that will be different each time. The first example uses the MD5 algorithm again and the second example uses the Bcrypt algorithm — which is more secure and widely accepted for creating Cryptographic Hashes for passwords.
$ htpasswd -bnm jim password
jim:$apr1$ctNOftbV$SZHs/IA3ytOjx0IZEZ1w5. (1)
$ htpasswd -bnm jim password
jim:$apr1$gLU9VlAl$ihDOzr8PdiCRjF3pna2EE1 (1)
$ htpasswd -bnm jim password123
jim:$apr1$9sJN0ggs$xvqrmNXLq0XZWjMSN/WLG.
1 | Salt added to help defeat dictionary lookups |
$ htpasswd -bnBC 10 jim password
jim:$2y$10$cBJOzUbDurA32SOSC.AnEuhUW269ACaPM7tDtD9vbrEg14i9GdGaS
$ htpasswd -bnBC 10 jim password
jim:$2y$10$RztUum5dBjKrcgiBNQlTHueqDFd60RByYgQPbugPCjv23V/RzfdVG
$ htpasswd -bnBC 10 jim password123
jim:$2y$10$s0I8X22Z1k2wK43S7dUBjup2VI1WUaJwfzX8Mg2Ng0jBxnjCEA0F2
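Spring Security provides the same salted Bcrypt behavior in Java through its PasswordEncoder abstraction.
Example BCryptPasswordEncoder Usage
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;

public class PasswordHashExample {
    public static void main(String[] args) {
        PasswordEncoder encoder = new BCryptPasswordEncoder(10); //cost factor, like -C 10 above
        String hash1 = encoder.encode("password"); //new salt generated each call
        String hash2 = encoder.encode("password");
        System.out.println(hash1.equals(hash2));                    //false - different salts
        System.out.println(encoder.matches("password", hash1));    //true
        System.out.println(encoder.matches("password123", hash1)); //false
    }
}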
183. Spring Web
Spring Framework operates on a series of core abstractions and a means to leverage them from different callchains. Most of the components are manually assembled through builders, and components/beans are often integrated together through the Spring application context. For the web specifically, the callchains begin at a web interface provided by the hosting or embedded web server. Often the web.xml will define a certain set of filters that add functionality to the request/response flow. |
Figure 76. Spring Web Framework Operates thru Flexibly Assembled Filters and Core Services
|
184. No Security
We know by now that we can exercise the Spring Application Filter Chain by implementing and calling a controller class. I have implemented a simple example class that I will be using throughout this lecture. At this point in time — no security has been enabled.
184.1. Sample GET
The example controller has two example GET calls that are functionally identical
at this point because we have no security enabled. The following is registered to
the /api/anonymous/hello
URI and the other to /api/authn/hello
.
@RequestMapping(path="/api/anonymous/hello",
method= RequestMethod.GET)
public String getHello(@RequestParam(name = "name", defaultValue = "you") String name) {
return "hello, " + name;
}
We can call the endpoint using the following curl or equivalent browser call.
$ curl -v -X GET "http://localhost:8080/api/anonymous/hello?name=jim"
> GET /api/anonymous/hello?name=jim HTTP/1.1
< HTTP/1.1 200
< Content-Length: 10
<
hello, jim
184.2. Sample POST
The example controller has three example POST calls that are functionally identical
at this point because we have no security or other policies enabled. The following
is registered to the /api/anonymous/hello
URI. The other two are mapped to the
/api/authn/hello
and /api/alt/hello
URIs.
[35].
@RequestMapping(path="/api/anonymous/hello",
method = RequestMethod.POST,
consumes = MediaType.TEXT_PLAIN_VALUE,
produces = MediaType.TEXT_PLAIN_VALUE)
public String postHello(@RequestBody String name) {
return "hello, " + name;
}
We can call the endpoint using the following curl command.
$ curl -v -X POST "http://localhost:8080/api/anonymous/hello" \
-H "Content-Type: text/plain" -d "jim"
> POST /api/anonymous/hello HTTP/1.1
< HTTP/1.1 200
< Content-Length: 10
<
hello, jim
184.3. Sample Static Content
I have not mentioned it before now — but not everything served up by the application has to be live content provided through a controller.
We can place static resources in the src/main/resources/static
folder and have that packaged up and served through URIs relative to the root.
static resource locations
Spring Boot will serve up static content found in
src/main/resources/
`-- static
    `-- content
        `-- hello_static.txt
Anything placed below
target/classes/
`-- static              <== classpath:/static at runtime
    `-- content         <== /content URI at runtime
        `-- hello_static.txt
|
This would be a common thing to do for css files, images, and other supporting web content. The following is a text file in our sample application.
Hello, static file
The following is an example GET of that static resource file.
$ curl -v -X GET "http://localhost:8080/content/hello_static.txt" > GET /content/hello_static.txt HTTP/1.1 < HTTP/1.1 200 < Content-Length: 19 < Hello, static file
185. Spring Security
The Spring Security framework is integrated into the web callchain using filters that form an internal Security Filter Chain. We will look at the Security Filter Chain in more detail shortly. At this point — just know that the framework is a flexible, filter-based framework where many different authentication schemes can be enabled. Let’s take a look first at the core services used by the Security Filter Chain. |
Figure 77. Spring Security Implemented as Extension of Application Filter Chain
|
185.1. Spring Core Authentication Framework
Once we enable Spring Security — a set of core authentication services are instantiated and made available to the Security Filter Chain. The key players are a set of interfaces with the following roles.
Authentication
-
provides both an authentication request and result abstraction. All the key properties (principal, credentials, details) are defined as java.lang.Object to allow just about any identity and authentication abstraction to be represented. For example, an Authentication request has a principal set to the username String and an Authentication response has the principal set to UserDetails containing the username and other account information.
AuthenticationManager
-
provides a front door to authentication requests that may be satisfied using one or more AuthenticationProviders.
AuthenticationProvider
-
a specific authenticator with access to UserDetails to complete the authentication. Authentication requests are of a specific type. If this provider supports the type and can verify the identity claim of the caller — an Authentication result with additional user details is returned.
UserDetailsService
-
a lookup strategy used by AuthenticationProvider to obtain UserDetails by username. There are a few configurable implementations provided by Spring (e.g., JDBC) but we are encouraged to create our own implementations if we have a credentials repository that was not addressed. A minimal custom implementation sketch follows this list.
UserDetails
-
an interface that represents the minimal needed information for a user. This will be made part of the Authentication response in the principal property.
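The custom UserDetailsService sketch referenced above, assuming a single hard-coded account. The {bcrypt} prefix tells the default DelegatingPasswordEncoder which encoder produced the stored hash; the hash value is just a sample Bcrypt output.
Example Custom UserDetailsService (sketch)
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;

public class SingleUserDetailsService implements UserDetailsService {
    //a real implementation would look the account up in a credentials repository
    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        if (!"user".equals(username)) {
            throw new UsernameNotFoundException(username);
        }
        return User.withUsername("user")
                .password("{bcrypt}$2y$10$cBJOzUbDurA32SOSC.AnEuhUW269ACaPM7tDtD9vbrEg14i9GdGaS")
                .roles("USER")
                .build();
    }
}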
185.2. SecurityContext
The authentication is maintained inside of a SecurityContext that can be
manipulated over time.
The current state of authentication is located through static methods
of the SecurityContextHolder
class. Although there are multiple strategies
for maintaining the current SecurityContext with Authentication — the most
common is ThreadLocal
.
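For example, any code running on the request thread can obtain the current caller's Authentication as follows.
Example Accessing the Current Authentication
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

public class CurrentCaller {
    //reads the Authentication from the (typically ThreadLocal-backed) holder
    public static String username() {
        Authentication authn = SecurityContextHolder.getContext().getAuthentication();
        return (authn == null) ? "(none)" : authn.getName();
    }
}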
186. Spring Boot Security AutoConfiguration
As with most Spring Boot libraries — we have to do very little to get started.
Most of what you were shown above is instantiated with a single additional dependency
on the spring-boot-starter-security
artifact.
186.1. Maven Dependency
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
This artifact triggers three (3) AutoConfiguration classes in the spring-boot-autoconfigure artifact.
-
For Spring Boot < 2.7, the auto-configuration classes will be named in META-INF/spring.factories:
# org.springframework-boot:spring-boot-autoconfigure/META-INF/spring.factories
...
org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration,\
org.springframework.boot.autoconfigure.security.servlet.UserDetailsServiceAutoConfiguration,\
org.springframework.boot.autoconfigure.security.servlet.SecurityFilterAutoConfiguration,\
-
For Spring Boot >= 2.7, the auto-configuration classes will be named in META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports.
Spring Boot Starter Security (>= 2.7)
# org.springframework-boot:spring-boot-autoconfigure/META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports
org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration
org.springframework.boot.autoconfigure.security.servlet.UserDetailsServiceAutoConfiguration
org.springframework.boot.autoconfigure.security.servlet.SecurityFilterAutoConfiguration
The details of this may not be that important except to understand how the default behavior was assembled and how future customizations override this behavior.
186.2. SecurityAutoConfiguration
The SecurityAutoConfiguration imports two @Configuration
classes that conditionally
wire up the security framework discussed with default implementations.
-
SpringBootWebSecurityConfiguration makes sure there is at least a default
SecurityFilterChain
(more on that later) which-
requires all URIs be authenticated
-
activates FORM and BASIC authentication
-
enables CSRF and other security protections
@Bean
@Order(SecurityProperties.BASIC_AUTH_ORDER) //very low priority
SecurityFilterChain defaultSecurityFilterChain(HttpSecurity http) throws Exception {
    http.authorizeRequests().anyRequest().authenticated();
    http.formLogin();
    http.httpBasic();
    return http.build();
}
-
-
WebSecurityEnablerConfiguration activates all security components by supplying the
@EnableWebSecurity
annotation when the security classes are present in the classpath.
186.3. WebSecurityConfiguration
WebSecurityConfiguration
gathers all the SecurityFilterChain
beans, obtains filters for each, and forms the runtime FilterChains
.
186.4. UserDetailsServiceAutoConfiguration
The UserDetailsServiceAutoConfiguration simply defines an in-memory UserDetailsService
if one is not yet present. This is one of the provided implementations mentioned earlier — but still just a demonstration toy. The UserDetailsService
is populated with one user:
-
name: user, unless defined
-
password: generated, unless defined
Example Output from Generated Password
Using generated security password: ff40aeec-44c2-495a-bbbf-3e0751568de3
Overrides can be supplied in properties
spring.security.user.name: user
spring.security.user.password: password
186.5. SecurityFilterAutoConfiguration
The SecurityFilterAutoConfiguration establishes the springSecurityFilterChain
filter chain, implemented as a DelegatingFilterProxy
.
The delegate of this proxy is supplied by the details of the SecurityAutoConfiguration
.
187. Default FilterChain
When we activated Spring security we added a level of filters that
were added to the Application Filter Chain. The first was a
DelegatingFilterProxy
that lazily instantiated the filter using
a delegate obtained from the Spring application context. This delegate
ends up being a FilterChainProxy
which has a prioritized list of
SecurityFilterChain
(implemented using DefaultSecurityFilterChain
).
Each SecurityFilterChain
has a requestMatcher and a set of zero or
more Filters
. Zero filters essentially bypasses security for a particular
URI pattern.
188. Default Secured Application
With all that said — and all we really did was add an artifact dependency to the project — the following shows where the Auto-Configuration left our application.
188.1. Form Authentication Activated
Form Authentication has been activated and we are now stopped from accessing all URLs
without first entering a valid username and password. Remember, the default username is
user
and the default password was output to the console unless we supplied one in properties.
The following shows the result of a redirect when attempting to access any URL in the
application.
-
We entered http://localhost:8080/api/anonymous/hello?name=jim
-
Application saw there was no authentication for the session and redirected to /login page
-
Login URL, html, and CSS supplied by spring-boot-starter-security
If we call the endpoint from curl, without indicating we can visit an HTML page, we get flatly rejected with a 401/UNAUTHORIZED. The response does inform us that BASIC Authentication is available.
$ curl -v http://localhost:8080/authn/hello?name=jim
> GET /authn/hello?name=jim HTTP/1.1
< HTTP/1.1 401
< Set-Cookie: JSESSIONID=D124368C884557286BF59F70888C0D39; Path=/; HttpOnly
< WWW-Authenticate: Basic realm="Realm" (1)
{"timestamp":"2020-07-01T23:32:39.909+00:00","status":401,
 "error":"Unauthorized","message":"Unauthorized","path":"/authn/hello"}
1 | WWW-Authenticate header indicates that BASIC Authentication is available |
If we add an Accept header to the curl request with text/html
, we get a
302/REDIRECT to the login page the browser automatically took us to.
$ curl -v http://localhost:8080/authn/hello?name=jim \
    -H "Accept: text/plain,text/html" (1)
> GET /authn/hello?name=jim HTTP/1.1
> Accept: text/plain, text/html
< HTTP/1.1 302
< Set-Cookie: JSESSIONID=E132523FE23FA8D18B94E3D55820DF13; Path=/; HttpOnly
< Location: http://localhost:8080/login
< Content-Length: 0
1 | adding an Accept header accepting text initiates a redirect to login form |
The login (URI /login
) and logout (URI /logout
) forms are supplied as defaults.
If we use the returned JSESSIONID when accessing and successfully completing
the login form — we will continue on to our originally requested URL.
Since we are targeting APIs — we will be disabling that very soon and relying on more stateless authentication mechanisms.
188.2. Basic Authentication Activated
BASIC authentication is also activated by default. This is usable by our API
out of the gate, so we will use this a bit more in examples. The following shows an example
BASIC encoding of the username:password
values in a Base64 string and then supplying the
result of that encoding in an Authorization
header prefixed with the work "BASIC ".
$ echo -n user:ff40aeec-44c2-495a-bbbf-3e0751568de3 | base64
dXNlcjpmZjQwYWVlYy00NGMyLTQ5NWEtYmJiZi0zZTA3NTE1NjhkZTM=
$ curl -v -X GET http://localhost:8080/api/anonymous/hello?name=jim \
-H "Authorization: BASIC dXNlcjpmZjQwYWVlYy00NGMyLTQ5NWEtYmJiZi0zZTA3NTE1NjhkZTM="
> GET /api/anonymous/hello?name=jim HTTP/1.1
> Authorization: BASIC dXNlcjpmZjQwYWVlYy00NGMyLTQ5NWEtYmJiZi0zZTA3NTE1NjhkZTM=
>
< HTTP/1.1 200 (1)
< Content-Length: 10
hello, jim
1 | request with successful BASIC authentication gives us the results of intended URL |
Base64 web sites available if command-line tool not available
I am using a command-line tool for easy demonstration and privacy.
There are various
websites that will perform the encode/decode for
you as well. Obviously, using a public website for real usernames and passwords
would be a bad idea.
|
curl can Automatically Supply Authorization Header
You can avoid the manual step of base64 encoding the username:password and manually supplying the Authorization header with curl by using the plaintext -u username:password option.
|
188.3. Authentication Required Activated
If we do not supply the Authorization
header or do not supply a valid value,
we get a 401/UNAUTHORIZED status response back from the interface telling us
our credentials are either invalid (did not match username:password) or
were not provided.
$ echo -n user:badpassword | base64 (2)
dXNlcjpiYWRwYXNzd29yZA==
$ curl -v -X GET http://localhost:8080/api/anonymous/hello?name=jim -u user:badpassword (1)
> GET /api/anonymous/hello?name=jim HTTP/1.1
> Authorization: BASIC dXNlcjpiYWRwYXNzd29yZA== (2)
>
< HTTP/1.1 401
< WWW-Authenticate: Basic realm="Realm"
< Set-Cookie: JSESSIONID=32B6CDB8E899A82A1B7D55BC88CA5CBE; Path=/; HttpOnly
< WWW-Authenticate: Basic realm="Realm"
< Content-Length: 0
1 | bad username:password supplied |
2 | demonstrating source of Authorization header |
188.4. Username/Password Can be Supplied
To make things more consistent during this stage of our learning, we can manually assign a username and password using properties.
spring.security.user.name: user
spring.security.user.password: password
$ curl -v -X GET "http://localhost:8080/api/authn/hello?name=jim" -u user:password > GET /api/authn/hello?name=jim HTTP/1.1 > Authorization: BASIC dXNlcjpwYXNzd29yZA== < HTTP/1.1 200 < Set-Cookie: JSESSIONID=7C5045AE82C58F0E6E7E76961E0AFF57; Path=/; HttpOnly < Content-Length: 10 hello, jim
188.5. CSRF Protection Activated
The default Security Filter chain contains CSRF protections — a defense mechanism developed to prevent an alternate site from providing the client browser a page that performs an unsafe (POST, PUT, or DELETE) call to a site the client has an established session with. The server makes a CSRF token available to us using a GET and will be expecting that value on the next POST, PUT, or DELETE.
$ curl -v -X POST "http://localhost:8080/api/authn/hello" \
    -u user:password -H "Content-Type: text/plain" -d "jim"
> POST /api/authn/hello HTTP/1.1
> Authorization: BASIC dXNlcjpwYXNzd29yZA==
> Content-Type: text/plain
> Content-Length: 3
< HTTP/1.1 401
< Set-Cookie: JSESSIONID=3EEB3625749482AD9E44A3B7E25A0EE4; Path=/; HttpOnly
< WWW-Authenticate: Basic realm="Realm"
< Content-Length: 0
188.6. Other Headers
Spring has, by default, generated additional headers to help with client interactions that primarily have to do with common security issues.
$ curl -v http://localhost:8080/api/anonymous/hello?name=jim -u user:password
> GET /api/anonymous/hello?name=jim HTTP/1.1
> Authorization: BASIC dXNlcjpwYXNzd29yZA==
>
< HTTP/1.1 200
< Set-Cookie: JSESSIONID=EC5EB9D1182F8AC77E290D12AD3BF369; Path=/; HttpOnly
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< X-Frame-Options: DENY
< Content-Type: text/plain;charset=UTF-8
< Content-Length: 10
< Date: Thu, 02 Jul 2020 10:45:32 GMT
<
hello, jim
- Set-Cookie
-
a command header to set a small amount of information in the browser to be returned to the server on follow-on calls. [36] This permits the server to keep track of a user session so that a login state can be retained on follow-on calls.
- X-Content-Type-Options
-
informs the browser that supplied Content-Type header responses have been deliberately assigned [37] to avoid Mime Sniffing — a problem caused by servers serving uploaded content meant to masquerade as alternate MIME types.
- X-XSS-Protection
-
a header that informs the browser what to do in the event a Cross-Site Scripting attack is detected. There seems to be a lot of skepticism of its value for certain browsers. [38]
- Cache-Control
-
a header that informs the client how the data may be cached. [39] This value can be set by the controller response but is set to a non-cache state by default here.
- Pragma
-
an HTTP/1.0 header that has been replaced by Cache-Control in HTTP 1.1. [40]
- Expires
-
a header that contains the date/time when the data should be considered stale and should be re-validated with the server. [41]
- X-Frame-Options
-
informs the browser whether the contents of the page can be displayed in a frame. [41] This helps prevent site content from being hijacked in an unauthorized manner. This will not be pertinent to our API responses.
189. Default FilterChainProxy Bean
The above behavior was put in place by the default Security Auto-Configuration — which is primarily placed within an instance of the FilterChainProxy
class [42].
This makes the FilterChainProxy
class a convenient place for a breakpoint when debugging security flows.
The FilterChainProxy
is configured with a set of firewall rules that address such things
as bad URI expressions that have been known to hurt web applications and zero or more
SecurityFilterChains arranged in priority order (first match wins).
The default configuration has a single SecurityFilterChain
that matches all URIs,
requires authentication, and also adds the other aspects we have seen so far.
Below is a list of filters put in place by the default configuration. This is by no means all of the available filters. I wanted to at least provide a description of the default ones before we start looking to selectively configure the chain.
It is a pretty dry topic to just list them off. It would be best if you had the svc/svc-security/noauthn-security-example
example loaded in an IDE with:
-
the pom updated to include the
spring-boot-starter-security
Starter

Activates Default Security Policies

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
-
a breakpoint set on "FilterChainProxy.doFilterInternal()" to clearly display the list of filters that will be used for the request.
-
another breakpoint set on "FilterChainProxy.VirtualFilterChain.doFilter()" to pause in between each filter.
-
a browser open with network logging active and ready to navigate to http://localhost:8080/api/authn/hello?name=jim
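As an alternative (or supplement) to the breakpoints above, Spring Security can also log the list of filters applied to each request when placed in debug mode. The following is a minimal sketch; the class name is illustrative.

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;

//debug=true logs the security filter list applied to each request
@Configuration
@EnableWebSecurity(debug = true)
public class DebugSecurityConfig {
}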
Whenever we make a request in the default state, we will most likely visit the following filters.
- WebAsyncManagerIntegrationFilter
-
Establishes an association between the SecurityContext (where the current caller’s credentials are held) and potential async responses making use of the
Callable
feature. Caller identity is normally unique to a thread and obtained through a
ThreadLocal
. Anything completing in an alternate thread must have a strategy to resolve the identity of this user by some other means.
- SecurityContextPersistenceFilter
-
Manages SecurityContext in between calls. If appropriate — stores the SecurityContext and clears it from the call on exit. If present — restores the SecurityContext on following calls.
- HeaderWriterFilter
-
Issues standard headers (shown earlier) that can normally be set to a fixed value and optionally overridden by controller responses.
- CsrfFilter
-
Checks all non-safe (POST, PUT, and DELETE) calls for a special Cross-Site Request Forgery (CSRF) token, either in the payload or header, that matches what is expected for the session. This attempts to make sure that anything modified on this site came from this site and not a malicious source. It does nothing for safe (GET, HEAD, OPTIONS, and TRACE) calls.
- LogoutFilter
-
Looks for calls to the logout URI. If matched, it ends the login for all types of sessions and terminates the chain.
- UsernamePasswordAuthenticationFilter
-
This instance of the filter is put in place to obtain the username and password submitted by the login page. Therefore, anything that is not
POST /login
is ignored. The actual
POST /login
requests have their username and password extracted and authenticated.
- DefaultLoginPageGeneratingFilter
-
Handles requests for the login page URI (
GET /login
). This produces the login page, terminates the chain, and returns to the caller.
- DefaultLogoutPageGeneratingFilter
-
Handles requests for the logout URI (
GET /logout
). This produces the logout page, terminates the chain, and returns to the caller.
- BasicAuthenticationFilter
-
Looks for BASIC Authentication header credentials, performs authentication, and continues the flow if successful or if no credentials were present. If the credentials were not successful, it calls an authentication entry point that handles a proper response for BASIC Authentication and ends the flow.
- RequestCacheAwareFilter
-
This retrieves an original request that was redirected to a login page and continues it on that path.
- SecurityContextHolderAwareRequestFilter
-
Wraps the HttpServletRequest so that the security-related calls (isAuthenticated(), authenticate(), login(), logout()) are resolved using the Spring security context.
- AnonymousAuthenticationFilter
-
Assigns an anonymous user to the security context if no user has been identified
- SessionManagementFilter
-
Performs any required initialization and security checks in order to set up the current session
- ExceptionTranslationFilter
-
Attempts to augment any thrown AccessDeniedException and AuthenticationException with details related to the denial. It does not add any extra value if those exceptions are not thrown. This will save the current request (for access by RequestCacheAwareFilter) and commence an authentication for AccessDeniedExceptions if the current user is anonymous. The saved current request will allow the subsequent login to complete with a resumption of the original target. If FORM Authentication is active — the commencement will result in a 302/REDIRECT to the
/login
URI. - FilterSecurityInterceptor
-
Applies the authenticated user against access constraints. It throws an AccessDeniedException if denied, which is caught by the ExceptionTranslationFilter.
This is also where the security filter chain hands control over to the application filter chain where the endpoint will get invoked.
190. Summary
In this module we learned:
-
the importance of identity, authentication, and authorization within security
-
the purpose for and differences between encoding, encryption, and cryptographic hashes
-
purpose of a filter-based processing architecture
-
the identity of the core components within Spring Authentication
-
where the current user authentication is held/located
-
how to activate default Spring Security configuration
-
the security features of the default Spring Security configuration
-
to step through a series of calls through the Security filter chain for the ability to debug future access problems
Spring Security Authentication
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
191. Introduction
In the previous example we accepted all defaults and inspected the filter chain and API responses to gain an understanding of the Spring Security framework. In this chapter we will begin customizing the authentication configuration and show how and why this can be accomplished.
191.1. Goals
You will learn:
-
to create customized security authentication configurations
-
to obtain the identity of the current, authenticated user for a request
-
to incorporate authentication into integration tests
191.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
create multiple, custom authentication filter chains
-
enable open access to static resources
-
enable anonymous access to certain URIs
-
enforce authenticated access to certain URIs
-
locate the current authenticated user identity
-
enable Cross-Origin Resource Sharing (CORS) exchanges with browsers
-
add an authenticated identity to RestTemplate client
-
add authentication to integration tests
192. Configuring Security
To override security defaults and define a customized FilterChainProxy
, we
must supply one or more classes that define our own SecurityFilterChain(s)
.
192.1. WebSecurityConfigurer and Component-based Approaches
Spring provides two ways to do this:
-
WebSecurityConfigurer
/ WebSecurityConfigurerAdapter - is the legacy and recently deprecated (Spring Security 5.7.0-M2; 2022) definition class that acts as a modular factory for security aspects of the application. [43] There can be one-to-NWebSecurityConfigurers
and each can define aSecurityFilterChain
and supporting services. -
Component-based configuration - is the modern approach to defining security aspects of the application. The same types of components are defined with the component-based approach, but they are done independent of one another.
You will likely encounter the WebSecurityConfigurer
approach for a long while — so I will provide some coverage of that here — while focusing on the component-based approach.
To highlight that the FilterChainProxy
is populated with a prioritized list of SecurityFilterChain
— I am going to purposely create multiple chains.
-
one with the API rules (
APIConfiguration
) - highest priority -
one with the former default rules (
AltConfiguration
) - lowest priority -
one with access rules for Swagger (
SwaggerSecurity
) - medium priority
The priority indicates the order in which they will be processed and will also influence the order for the SecurityFilterChain
s they produce.
Normally I would not highlight Swagger in these examples — but it provides an additional example of how well we can customize Spring Security.
192.2. Core Application Security Configuration
The example will eventually contain several SecurityFilterChains
, but let’s start by focusing on just one of them: the "API Configuration".
This initial configuration will define the configuration for access to static resources, dynamic resources, and how to authenticate our users.
192.2.1. WebSecurityConfigurerAdapter Approach
In the deprecated WebSecurityConfigurer
approach, we would start by defining a @Configuration
class that extends WebSecurityConfigurerAdapter
and overrides one or more of its configuration methods.
@Configuration(proxyBeanMethods = false)
@Order(0) (2)
public class APIConfiguration extends WebSecurityConfigurerAdapter { (1)
@Override
public void configure(WebSecurity web) throws Exception { ... } (3)
@Override
protected void configure(HttpSecurity http) throws Exception { ... } (4)
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception { ... } (5)
@Bean
@Override
public AuthenticationManager authenticationManagerBean() throws Exception { ... } (6)
1 | Create @Configuration class that extends WebSecurityConfigurerAdapter to customize SecurityFilterChain |
2 | APIConfiguration has a high priority resulting SecurityFilterChain for dynamic resources |
3 | configure a SecurityFilterChain for static web resources |
4 | configure a SecurityFilterChain for dynamic web resources |
5 | optionally configure an AuthenticationManager for multiple authentication sources |
6 | optionally expose AuthenticationManager as an injectable bean for re-use in other SecurityFilterChains |
Each SecurityFilterChain
will have a reference to its AuthenticationManager
.
The WebSecurityConfigurerAdapter
provides the chance to custom configure the AuthenticationManager
using a builder.
The adapter also provides an accessor method that can be used to expose the built AuthenticationManager
as a pre-built component for other SecurityFilterChains
to reference.
192.2.2. Component-based Approach
In the modern Component-based approach, we define each aspect of our security infrastructure as a separate component.
These @Bean
factory methods are within a normal @Configuration
class that requires no inheritance.
@Bean
public WebSecurityCustomizer apiStaticResources() { ... } (1)
@Bean
@Order(0) (3)
public SecurityFilterChain apiSecurityFilterChain(HttpSecurity http) throws Exception { ...} (2)
@Bean
public AuthenticationManager authnManager(HttpSecurity http, ...) throws Exception { (5)
AuthenticationManagerBuilder builder = http (4)
.getSharedObject(AuthenticationManagerBuilder.class);
... }
1 | define a bean to configure a SecurityFilterChain for static web resources |
2 | define a bean to configure a SecurityFilterChain for dynamic web resources |
3 | high priority assigned to SecurityFilterChain |
4 | optionally configure an AuthenticationManager for multiple authentication sources |
5 | expose AuthenticationManager as an injectable bean for use in SecurityFilterChains |
The SecurityFilterChain
for static resources gets defined within a lambda function implementing the WebSecurityCustomizer
interface.
The SecurityFilterChain
for dynamic resources gets directly defined within the @Bean
factory method.
There is no longer any direct linkage between the configuration of the AuthenticationManager
and the SecurityFilterChains
being built.
The linkage is provided through a getSharedObject
call of the HttpSecurity
object that can be injected into the bean methods.
192.3. Ignoring Static Resources
One of the easiest rules to put into place is to provide open access to static content. This is normally image files, web CSS files, etc. Spring recommends not including dynamic content in this list. Keep it limited to static files.
Access is defined by configuring the WebSecurity
object.
-
In the
WebSecurityConfigurerAdapter
approach, the modification is performed within the method overriding the
configure(WebSecurity)
method.

Ignore Static Content Configuration - WebSecurityConfigurerAdapter approach

import org.springframework.security.config.annotation.web.builders.WebSecurity;

@Configuration
@Order(0)
public class APIConfiguration extends WebSecurityConfigurerAdapter {
    @Override
    public void configure(WebSecurity web) throws Exception {
        web.ignoring().antMatchers("/content/**");
    }
-
In the Component-based approach, a lambda function implementing the
WebSecurityCustomizer
functional interface is returned. That lambda will be called to customize the
WebSecurity
object.

Ignore Static Content Configuration - Component-based approach

import org.springframework.security.config.annotation.web.configuration.WebSecurityCustomizer;

@Bean
public WebSecurityCustomizer apiStaticResources() {
    return (web) -> web.ignoring().antMatchers("/content/**");
}
WebSecurityCustomizer Functional Interface

public interface WebSecurityCustomizer {
    void customize(WebSecurity web);
}
Remember — our static content is packaged within the application by placing it
under the src/main/resources/static
directory of the source tree.
$ tree src/main/resources/
src/main/resources/
|-- application.properties
`-- static
`-- content
|-- hello.js
|-- hello_static.txt
`-- index.html
$ cat src/main/resources/static/content/hello_static.txt
Hello, static file
With that rule in place, we can now access our static file without any credentials.
$ curl -v -X GET http://localhost:8080/content/hello_static.txt
> GET /content/hello_static.txt HTTP/1.1
>
< HTTP/1.1 200
< Vary: Origin
< Vary: Access-Control-Request-Method
< Vary: Access-Control-Request-Headers
< Last-Modified: Fri, 03 Jul 2020 19:36:25 GMT
< Cache-Control: no-store
< Accept-Ranges: bytes
< Content-Type: text/plain
< Content-Length: 19
< Date: Fri, 03 Jul 2020 20:55:58 GMT
<
Hello, static file
192.4. SecurityFilterChain Matcher
The meat of the SecurityFilterChain
definition is within the configuration of the HttpSecurity
object.
The resulting SecurityFilterChain
will have a requestMatcher that identifies which URIs the identified rules apply to.
The default is "all" URIs.
In the example below I am limiting the configuration to two URIs (/api/anonymous
and /api/authn
) using an Ant Matcher.
A regular expression matcher is also available.
The matchers also allow a specific method to be declared in the definition.
-
In the
WebSecurityConfigurerAdapter
approach, configuration is performed in the method overriding the
configure(HttpSecurity)
method.

SecurityFilterChain Matcher - WebSecurityConfigurerAdapter approach

import org.springframework.security.config.annotation.web.builders.HttpSecurity;

@Configuration
@Order(0)
public class APIConfiguration extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.requestMatchers(m->m.antMatchers("/api/anonymous/**","/api/authn/**"));(1)
        //... (2)
    }
    ...
1 rules within this configuration will apply to URIs below /api/anonymous
and /api/authn
2 http.build()
is not called
This method returns void and the build() method of HttpSecurity should not be called.
-
In the Component-based approach, the configuration is performed in a
@Bean
method that will directly return the
SecurityFilterChain
. It has the same
HttpSecurity
object injected, but note that
build()
is called within this method to return a
SecurityFilterChain
.

Non-deprecated HttpSecurity Configuration Alternative

@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
    http.requestMatchers(m->m.antMatchers("/api/anonymous/**","/api/authn/**"));(1)
    //...
    return http.build(); (2)
}
1 rules within this configuration will apply to URIs below /api/anonymous
and /api/authn
2 http.build()
is required for this@Bean
factory
This method returns the SecurityFilterChain result of calling the build() method of HttpSecurity.
This is different from the deprecated approach.
192.5. HttpSecurity Builder Methods
The HttpSecurity
object is "builder-based" and has several options on how it
can be called.
-
http.methodReturningBuilder().configureBuilder()
-
http.methodPassingBuilderToLambda(builder→builder.configureBuilder())
The builders are also designed to be chained. It is quite common to see the following syntax used.
http.authorizeRequests()
.anyRequest()
.authenticated()
.and().formLogin()
.and().httpBasic();
We can simply make separate calls. As much as I like chained builders, I am not a fan of that specific syntax when starting out, especially if we are experimenting and commenting/uncommenting configuration statements. You will see me using separate calls with the pass-the-builder and configure-with-a-lambda style. Either style functionally works the same.
http.authorizeRequests(cfg->cfg.anyRequest().authenticated());
http.formLogin();
http.httpBasic();
192.6. Match Requests
We first want to scope our HttpSecurity
configuration commands using requestMatchers()
(or one of its other variants).
The configurations specified here will only be applied to URIs matching the supplied matchers.
There are Ant and Regular Expression matchers available.
The default is to match all URIs.
http.requestMatchers(m->m.antMatchers("/api/anonymous/**","/api/authn/**"));
Notice the requestMatchers
are a primary item in the individual chains and the rest of the configuration is impacting the filters within that chain.
Notice also that our initial SecurityFilterChain is listed among the other chains in the example and is high in priority because of our @Order value assignment.
192.7. Authorize Requests
Next I am showing the authentication requirements of the SecurityFilterChain
.
Calls to the /api/anonymous
URIs do not require authentication.
Calls to the /api/authn
URIs do require authentication.
http.authorizeRequests(cfg->cfg.antMatchers("/api/anonymous/**").permitAll());
http.authorizeRequests(cfg->cfg.anyRequest().authenticated());
The permission rules available off the matcher include:
-
permitAll() - no constraints
-
denyAll() - nothing will be allowed
-
authenticated() - only authenticated callers may invoke these URIs
-
role restrictions that we won’t be covering just yet
You can also make your matcher criteria method-specific by adding in a HttpMethod
specification.
import org.springframework.http.HttpMethod;
...
...(cfg->cfg.antMatchers(HttpMethod.GET, "/api/anonymous/**")
192.8. Authentication
In this part of the example, I am enabling BASIC Auth and eliminating FORM-based authentication. For demonstration only — I am providing a custom name for the realm name returned to browsers.
http.httpBasic(cfg->cfg.realmName("AuthConfigExample")); (1)
http.formLogin(cfg->cfg.disable());
Realm name is not a requirement to activate Basic Authentication. It is shown here solely as an example of something easily configured.
< HTTP/1.1 401
< WWW-Authenticate: Basic realm="AuthConfigExample" (1)
1 | Realm Name returned in HTTP responses requiring authentication |
192.9. Header Configuration
In this portion of the example, I am turning off two of the headers that were part of the default set: XSS protection and frame options. There seemed to be some debate on the value of the XSS header [44] and we have no concern about frame restrictions. By disabling them — I am providing an example of what can be changed.
CSRF
protections have also been disabled to make non-safe methods more sane to execute at this time.
Otherwise we would be required to supply a value in a POST that came from a previous GET (all maintained and enforced by optional filters).
http.headers(cfg->{
cfg.xssProtection().disable();
cfg.frameOptions().disable();
});
http.csrf(cfg->cfg.disable());
192.10. Stateless Session Configuration
I have no interest in using the HTTP session to maintain identity between calls, so this should eliminate the Set-Cookie headers for the JSESSIONID.
http.sessionManagement(cfg->
cfg.sessionCreationPolicy(SessionCreationPolicy.STATELESS));
193. Configuration Results
With the above configurations in place — we can demonstrate the desired functionality and trace the calls through the filter chain if there is an issue.
193.1. Successful Anonymous Call
The following shows a successful anonymous call and the returned headers.
Remember that we have gotten rid of several unwanted features with their headers.
The controller method has been modified to return the identity of the authenticated caller. We will take a look at that later, but know that the additional :caller=
string was added for this wave of examples.
$ curl -v -X GET http://localhost:8080/api/anonymous/hello?name=jim
> GET /api/anonymous/hello?name=jim HTTP/1.1
< HTTP/1.1 200
< X-Content-Type-Options: nosniff
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< Content-Type: text/plain;charset=UTF-8
< Content-Length: 25
< Date: Fri, 03 Jul 2020 22:11:11 GMT
<
hello, jim :caller=(null) (1)
1 | we have no authenticated user |
193.2. Successful Authenticated Call
The following shows a successful authenticated call and the returned headers.
$ curl -v -X GET http://localhost:8080/api/authn/hello?name=jim -u user:password (1)
> GET /api/authn/hello?name=jim HTTP/1.1
> Authorization: BASIC dXNlcjpwYXNzd29yZA==
< HTTP/1.1 200
< X-Content-Type-Options: nosniff
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< Content-Type: text/plain;charset=UTF-8
< Content-Length: 23
< Date: Fri, 03 Jul 2020 22:12:34 GMT
<
hello, jim :caller=user (2)
1 | example application configured with username/password of user/password |
2 | we have an authenticated user |
193.3. Rejected Unauthenticated Call Attempt
The following shows a rejection of an anonymous caller attempting to invoke a URI requiring an authenticated user.
$ curl -v -X GET http://localhost:8080/api/authn/hello?name=jim (1)
> GET /api/authn/hello?name=jim HTTP/1.1
< HTTP/1.1 401
< WWW-Authenticate: Basic realm="AuthConfigExample"
< X-Content-Type-Options: nosniff
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< Content-Type: application/json
< Transfer-Encoding: chunked
< Date: Fri, 03 Jul 2020 22:14:20 GMT
<
{"timestamp":"2020-07-03T22:14:20.816+00:00","status":401,
"error":"Unauthorized","message":"Unauthorized","path":"/api/authn/hello"}
1 | attempt to make anonymous call to authentication-required URI |
194. Authenticated User
Authenticating the identity of the caller is a big win. We likely will want their identity at some point during the call.
194.1. Inject UserDetails into Call
One option is to inject the UserDetails
containing the username (and authorities) for the
caller. Methods that can be called without authentication will receive the UserDetails
if the caller provides credentials but must protect themselves against a null value if actually
called anonymously.
import org.springframework.security.core.annotation.AuthenticationPrincipal;
import org.springframework.security.core.userdetails.UserDetails;
...
public String getHello(@RequestParam(name = "name", defaultValue = "you") String name,
@AuthenticationPrincipal UserDetails user) {
return "hello, " + name + " :caller=" + (user==null ? "(null)" : user.getUsername());
}
194.2. Obtain SecurityContext from Holder
The other option is to lookup the UserDetails
through the SecurityContext
stored within
the SecurityContextHolder
class. This allows any caller in the call flow to obtain the
identity of the caller at any time.
import org.springframework.security.core.context.SecurityContextHolder;
public String getHelloAlt(@RequestParam(name = "name", defaultValue = "you") String name) {
UserDetails user = (UserDetails) SecurityContextHolder
.getContext().getAuthentication().getPrincipal();
return "hello, " + name + " :caller=" + user.getUsername();
}
195. Swagger BASIC Auth Configuration
Once we enabled default security on our application — we lost the ability to access the Swagger page without logging in.
We did not have to create a separate SecurityFilterChain
for just the Swagger endpoints, but doing so provides some nice modularity and an excuse to further demonstrate Spring Security configurability.
I have added a separate security configuration for the OpenAPI and Swagger endpoints.
195.1. Swagger Authentication Configuration
The following configuration allows the OpenAPI and Swagger endpoints to be accessed anonymously and handle authentication within OpenAPI/Swagger.
-
Swagger SecurityFilterChain using the
WebSecurityConfigurerAdapter
approach

@Configuration(proxyBeanMethods = false)
@Order(100) (1)
public class SwaggerSecurity extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.requestMatchers(cfg->cfg
            .antMatchers("/swagger-ui*", "/swagger-ui/**", "/v3/api-docs/**"));
        http.authorizeRequests(cfg->cfg.anyRequest().permitAll());
        http.csrf().disable();
    }
}
1 Priority (100) is after core application (0) and prior to default rules (1000)
-
Swagger SecurityFilterChain using the Component-based approach
@Bean
@Order(100) (1)
public SecurityFilterChain swaggerSecurityFilterChain(HttpSecurity http) throws Exception {
    http.requestMatchers(cfg->cfg
        .antMatchers("/swagger-ui*", "/swagger-ui/**", "/v3/api-docs/**"));
    http.authorizeRequests(cfg->cfg.anyRequest().permitAll());
    http.csrf().disable();
    return http.build();
}
1 Priority (100) is after core application (0) and prior to default rules (1000)
195.2. Swagger Security Scheme
In order for Swagger to supply a username:password using BASIC Auth, we need
to define a SecurityScheme
for Swagger to use. The following
bean defines the core object the methods will be referencing.
package info.ejava.examples.svc.authn;
import io.swagger.v3.oas.models.Components;
import io.swagger.v3.oas.models.OpenAPI;
import io.swagger.v3.oas.models.security.SecurityScheme;
import org.springframework.context.annotation.Bean;
...
@Bean
public OpenAPI customOpenAPI() {
return new OpenAPI()
.components(new Components()
.addSecuritySchemes("basicAuth",
new SecurityScheme()
.type(SecurityScheme.Type.HTTP)
.scheme("basic")));
}
The @Operation
annotations can now reference the SecurityScheme
to
inform the SwaggerUI that BASIC Auth can be used against that specific
operation. Notice too that we needed to make the injected UserDetails
optional — or even better — hidden from OpenAPI/Swagger since it is
not part of the HTTP request.
package info.ejava.examples.svc.authn.authcfg.controllers;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
@RestController
public class HelloController {
...
@Operation(description = "sample authenticated GET",
security = @SecurityRequirement(name="basicAuth")) (1)
@RequestMapping(path="/api/authn/hello",
method= RequestMethod.GET)
public String getHelloAuthn(@RequestParam(name = "name", defaultValue = "you") String name,
@Parameter(hidden = true) (2)
@AuthenticationPrincipal UserDetails user) {
return "hello, " + name + " :caller=" + user.getUsername();
}
1 | added @SecurityRequirement to operation to express within OpenAPI
that this call accepts Basic Auth |
2 | Identified parameter as not applicable to HTTP callers |
With the SecurityScheme and @SecurityRequirement in place, the Swagger UI displays an Authorize option where BASIC Auth credentials can be entered.

Figure 85. Swagger with BASIC Auth Configured

When making a call, Swagger UI adds the Authorization header with the previously entered credentials.

Figure 86. Swagger BASIC Auth Call
196. CORS
There is one more important security filter to add to our list before we end and it is complex enough to deserve its own section - Cross Origin Resource Sharing (CORS).
Without support for CORS, javascript loaded by browsers will not be able to call the API unless it was loaded from the same base URL as the API.
That even includes local development (i.e., javascript loaded from file system cannot invoke http://localhost:8080
).
In today’s modern web environments — it is common to deploy services independent of Javascript-based UI applications or to have the UI applications calling multiple services with different base URLs.
196.1. Default CORS Support
The following example shows the default CORS configuration for Spring Boot/Web MVC.
The server is ignoring the Origin
header supplied by the client and does not return any CORS-related authorization for the browser to use the response payload.
$ curl -v http://localhost:8080/api/anonymous/hello?name=jim
> GET /api/anonymous/hello?name=jim HTTP/1.1
> Host: localhost:8080
>
< HTTP/1.1 200
hello, jim :caller=(null)
$ curl -v http://localhost:8080/api/anonymous/hello?name=jim -H "Origin: http://127.0.0.1:8080"
> GET /api/anonymous/hello?name=jim HTTP/1.1
> Host: localhost:8080
> Origin: http://127.0.0.1:8080
>
< HTTP/1.1 200
hello, jim :caller=(null)
The lack of headers does not matter for curl, but the CORS response does get evaluated when executed within a browser.
196.2. Browser and CORS Response
196.2.1. Same Origin/Target Host
The following is an example of Javascript loaded from http://localhost:8080
and calling http://localhost:8080
.
No Origin
header is passed by the browser because it knows the Javascript was loaded from the same source it is calling.
196.2.2. Different Origin/Target Host
However, if we load the Javascript from an alternate source, the browser will fail to process the results.
The following is an example of some Javascript loaded from http://127.0.0.1:8080
and calling http://localhost:8080
.
196.3. Enabling CORS
To globally enable CORS support, we can invoke http.cors(config-lambda)
with a lambda function that will provide a configuration based on a given HttpServletRequest
.
This is being supplied when configuring the SecurityFilterChain
.
http.cors(cfg->cfg.configurationSource(corsPermitAllConfigurationSource()));
private CorsConfigurationSource corsPermitAllConfigurationSource() {
return (request) -> {
CorsConfiguration config = new CorsConfiguration();
config.applyPermitDefaultValues();
return config;
};
}
public interface CorsConfigurationSource {
CorsConfiguration getCorsConfiguration(HttpServletRequest request);
}
196.3.1. CORS Headers
With CORS enabled and permitting all, we see some new Vary headers, but that won’t be enough.
The browser will be looking for the Access-Control-Allow-Origin
header being returned with a value matching the Origin
header passed in (* being a wildcard match).
$ curl -v http://localhost:8080/api/anonymous/hello?name=jim
> GET /api/anonymous/hello?name=jim HTTP/1.1
> Host: localhost:8080
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200
< Vary: Origin
< Vary: Access-Control-Request-Method
< Vary: Access-Control-Request-Headers
hello, jim :caller=(null)
$ curl -v http://localhost:8080/api/anonymous/hello?name=jim -H "Origin: http://127.0.0.1:8080"
> GET /api/anonymous/hello?name=jim HTTP/1.1
> Host: localhost:8080
> Origin: http://127.0.0.1:8080
>
< HTTP/1.1 200
< Vary: Origin
< Vary: Access-Control-Request-Method
< Vary: Access-Control-Request-Headers
< Access-Control-Allow-Origin: * (1)
hello, jim :caller=(null)
1 | Access-Control-Allow-Origin denotes approval for the given (* = wildcard) Origin |
196.3.2. Browser Accepts Access-Control-Allow-Origin Header
196.4. Constrained CORS
We can define more limited rules for CORS acceptance by using additional commands of the CorsConfiguration
object.
private CorsConfigurationSource corsLimitedConfigurationSource() {
return (request) -> {
CorsConfiguration config = new CorsConfiguration();
config.addAllowedOrigin("http://localhost:8080");
config.setAllowedMethods(List.of("GET","POST"));
return config;
};
}
196.5. CORS Server Acceptance
In this example, I have loaded the Javascript from http://127.0.0.1:8080
and am making a call to http://localhost:8080
in order to match the configured Origin
matching rules.
The server returns a 200/OK
along with an Access-Control-Allow-Origin
value that matches the specific Origin
provided.
$ curl -v http://127.0.0.1:8080/api/anonymous/hello?name=jim -H "Origin: http://localhost:8080"
* Trying 127.0.0.1:8080...
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET /api/anonymous/hello?name=jim HTTP/1.1
> Host: 127.0.0.1:8080 (1)
> Origin: http://localhost:8080 (2)
>
< HTTP/1.1 200
< Vary: Origin
< Vary: Access-Control-Request-Method
< Vary: Access-Control-Request-Headers
< Access-Control-Allow-Origin: http://localhost:8080 (2)
hello, jim :caller=(null)
1 | Example Host and Origin have been flipped to match approved localhost:8080 Origin |
2 | Access-Control-Allow-Origin denotes approval for the given Origin |
196.6. CORS Server Rejection
This additional definition is enough to produce a 403/FORBIDDEN
from the server versus a rejection from the browser.
$ curl -v http://localhost:8080/api/anonymous/hello?name=jim -H "Origin: http://127.0.0.1:8080"
> GET /api/anonymous/hello?name=jim HTTP/1.1
> Host: localhost:8080
> Origin: http://127.0.0.1:8080
>
< HTTP/1.1 403
< Vary: Origin
< Vary: Access-Control-Request-Method
< Vary: Access-Control-Request-Headers
Invalid CORS request
196.7. Spring MVC @CrossOrigin Annotation
Spring also offers an annotation-based way to enable the CORS protocol.
In the example below, the
@CrossOrigin
annotation has been added to the controller class or individual operations indicating CORS constraints.
This technique is static.
...
import org.springframework.web.bind.annotation.CrossOrigin;
...
@CrossOrigin (1)
@RestController
public class HelloController {
1 | defaults to all origins, etc. |
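The annotation can also express the constrained rules we defined earlier in Java configuration. The following is a minimal sketch mirroring the corsLimitedConfigurationSource example; the attribute values are illustrative.

import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@CrossOrigin(origins = "http://localhost:8080",            //constrained Origin match
        methods = {RequestMethod.GET, RequestMethod.POST}) //constrained methods
@RestController
public class HelloController {
    //...
}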
197. RestTemplate Authentication
Now that we have locked down our endpoints — requiring authentication — I want to briefly show how we can authenticate with RestTemplate
using an existing BASIC Authentication filter.
I am going to delay demonstrating WebClient
to limit the dependencies on the current example application — but we will do so in a similar way that does not change the interface to the caller.
@Bean
public RestTemplate anonymousUser(RestTemplateBuilder builder) {
    RestTemplate restTemplate = builder.requestFactory(
            //used to read the streams twice -- so we can use the logging filter below
            ()->new BufferingClientHttpRequestFactory(
                    new SimpleClientHttpRequestFactory()))
            .interceptors(new RestTemplateLoggingFilter())
            .build(); (1)
    return restTemplate;
}
1 | vanilla RestTemplate with our debug log interceptor |
@Bean
public RestTemplate authnUser(RestTemplateBuilder builder) {
    RestTemplate restTemplate = builder.requestFactory(
            //used to read the streams twice -- so we can use the logging filter below
            ()->new BufferingClientHttpRequestFactory(
                    new SimpleClientHttpRequestFactory()))
            .interceptors(
                    new BasicAuthenticationInterceptor("user", "password"),(1)
                    new RestTemplateLoggingFilter())
            .build();
    return restTemplate;
}
1 | added BASIC Auth filter to add Authorization Header |
197.1. Authentication Integration Tests with RestTemplate
The following shows the different RestTemplate
instances being injected that have different credentials assigned.
The different attribute names, matching the @Bean
factory names, act as a qualifier to supply the right instance of RestTemplate
.
@SpringBootTest(classes= ClientTestConfiguration.class,
webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
properties = "test=true") (1)
public class AuthnRestTemplateNTest {
@Autowired
private RestTemplate anonymousUser;
@Autowired
private RestTemplate authnUser;
1 | test property triggers Swagger @Configuration and anything else not suitable during testing to disable |
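A test method can then exercise the two injected instances. The following is a minimal sketch; it assumes a @LocalServerPort-injected port field, and the method name and assertions are illustrative.

import org.junit.jupiter.api.Test;
import org.springframework.boot.web.server.LocalServerPort;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import static org.junit.jupiter.api.Assertions.assertEquals;
...
@LocalServerPort
private int port; //random port assigned to the running server

@Test
public void authn_user_can_call_authn_uri() {
    String url = String.format("http://localhost:%d/api/authn/hello?name=jim", port);

    //authnUser carries the BasicAuthenticationInterceptor and succeeds
    ResponseEntity<String> response = authnUser.getForEntity(url, String.class);
    assertEquals(HttpStatus.OK, response.getStatusCode());
    //the same call through anonymousUser would be rejected with a 401/UNAUTHORIZED
}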
198. Mock MVC Authentication
There are many test frameworks within Spring and Spring Boot, and I did not cover them all earlier. Covering them all early on added limited value with a lot of volume. However, I do want to show you a small example of MockMvc and how it too can be configured for authentication. The following example shows a
-
normal injection of the mock that will be an anonymous user
-
how to associate a mock to the security context
@SpringBootTest(
properties = "test=true")
@AutoConfigureMockMvc
public class AuthConfigMockMvcNTest {
@Autowired
private WebApplicationContext context;
@Autowired
private MockMvc anonymous;
//example manual instantiation (1)
private MockMvc user;
private final String uri = "/api/anonymous/hello";
@BeforeEach
public void init() {
user = MockMvcBuilders
.webAppContextSetup(context)
.apply(SecurityMockMvcConfigurers.springSecurity())
.build();
}
1 | there is no functional difference between the injected or manually instantiated MockMvc the way it is performed here |
198.1. MockMvc Anonymous Call
The first test is a baseline example showing a call thru the mock to a service that allows all callers and no required authentication.
@Test
public void anonymous_can_call_get() throws Exception {
anonymous.perform(MockMvcRequestBuilders.get(uri).queryParam("name","jim"))
.andDo(print())
.andExpect(status().isOk())
.andExpect(content().string("hello, jim :caller=(null)"));
}
198.2. MockMvc Authenticated Call
The next example shows how we can inject an identity into the mock for use during the test method.
@WithMockUser("user")
@Test
public void user_can_call_get() throws Exception {
user.perform(MockMvcRequestBuilders.get(uri)
.queryParam("name","jim"))
.andDo(print())
.andExpect(status().isOk())
.andExpect(content().string("hello, jim :caller=user"));
}
Although I believe RestTemplate tests are pretty good at testing
client access, the WebMvc framework was a very convenient way to quickly
verify and identify issues with the SecurityFilterChain
definitions.
199. Summary
In this module we learned:
-
how to configure a
SecurityFilterChain
-
how to define no security filters for static resources
-
how to customize the
SecurityFilterChain
for API endpoints -
how to expose endpoints that can be called from anonymous users
-
how to require authenticated users for certain endpoints
-
how to CORS-enable the API
-
how to define BASIC Auth for OpenAPI and for use by Swagger
User Details
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
200. Introduction
In previous sections we looked closely at how to authenticate a user obtained from a demonstration user source. The focus was on the obtained user and the processing that went on around it to enforce authentication using an example credential mechanism. There was a lot to explore with just a single user relative to establishing the security filter chain, requiring authentication, supplying credentials with the call, completing the authentication, and obtaining the authenticated user identity.
In this chapter we will focus on the UserDetailsService
framework that supports
the AuthenticationProvider
so that we can implement multiple users,
multiple user information sources, and to begin storing those users in
a database.
200.1. Goals
You will learn:
-
the interface roles in authenticating users within Spring
-
how to configure authentication and authentication sources for use by a security filter chain
-
how to implement access to user details from different sources
-
how to implement access to user details using a database
200.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
build various
UserDetailsService
implementations to host user accounts and be used as a source for authenticating users -
build a simple in-memory
UserDetailsService
-
build an injectable
UserDetailsService
-
build a
UserDetailsService
using access to a relational database -
configure an application to display the database UI
-
encode passwords
201. AuthenticationManager
The focus of this chapter is on providing authentication to stored users and
providing details about them. To add some context to this, let’s begin the
presentation flow with the AuthenticationManager
.
AuthenticationManager
is an abstraction the code base looks for in
order to authenticate a set of credentials. Its input and output are
of the same interface type — Authentication
— but populated differently
and potentially implemented differently.
The input Authentication
primarily supplies the principal
(e.g., username) and credentials (e.g., plaintext password).
The output Authentication
of a successful authentication supplies
resolved UserDetails
and provides direct access to granted
authorities — which can come from those user details and will be
used during later authorizations.
Although the credentials (e.g., encrypted password hash) from
the stored UserDetails
are used to authenticate, their contents
are cleared before returning the response to the caller.
201.1. ProviderManager
The AuthenticationManager
is primarily implemented
using the ProviderManager
class and delegates authentication
to its assigned AuthenticationProviders
and/or parent
AuthenticationManager
to do the actual authentication.
Some AuthenticationProvider
classes are based off a UserDetailsService
to provide UserDetails
. However, that is not always the case — therefore the diagram below does not show a direct relationship
between the AuthenticationProvider
and UserDetailsService
.
201.2. AuthenticationManagerBuilder
It is the job of the AuthenticationManagerBuilder
to assemble an AuthenticationManager
with the required AuthenticationProviders
and — where appropriate — UserDetailsService
.
The AuthenticationManagerBuilder
is configured during the assembly of the SecurityFilterChain
in both the WebSecurityConfigurerAdapter and Component-based approaches.
One can custom-configure the AuthenticationProviders
for the AuthenticationManagerBuilder
in the WebSecurityConfigurerAdapter
approach by overriding the configure()
callback.
@Configuration(proxyBeanMethods = false)
public static class APIConfiguration extends WebSecurityConfigurerAdapter {
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
... (1)
}
1 | can custom-configure AuthenticationManagerBuilder here during a configure() callback |
One can custom-configure the AuthenticationProviders
for the AuthenticationManagerBuilder
in the component-based approach by obtaining it from an injected HttpSecurity
object using the getSharedObject()
call.
@Bean
public AuthenticationManager authnManager(HttpSecurity http, ...) throws Exception {
AuthenticationManagerBuilder builder =
http.getSharedObject(AuthenticationManagerBuilder.class);
...(1)
builder.parentAuthenticationManager(null); //prevent from being recursive (2)
return builder.build();
}
1 | can obtain and custom-configure AuthenticationManagerBuilder using injected HttpSecurity object |
2 | I found the need to explicitly define "no parent" in the Component-based approach |
201.3. AuthenticationManagerBuilder Builder Methods
We can use the local builder methods to custom-configure the AuthenticationManagerBuilder
.
These allow us to assemble one or more of the well-known AuthenticationProvider
types.
The following is an example of configuring an InMemoryUserDetailsManager
that our earlier examples used in the previous chapters.
However, in this case we get a chance to explicitly populate with users.
This is an early example and demonstration toy.
PasswordEncoder encoder = ...
builder.inMemoryAuthentication() (1)
.passwordEncoder(encoder) (2)
.withUser("user1").password(encoder.encode("password1")).roles() (3)
.and()
.withUser("user2").password(encoder.encode("password1")).roles();
1 | adds a UserDetailsService to AuthenticationManager implemented in memory |
2 | AuthenticationProvider will need a password encoder to match passwords during authentication |
3 | users placed directly into storage must have encoded password |
201.3.1. Assembled AuthenticationProvider
The results of the builder configuration are shown below where the builder
assembled an AuthenticationManager
(ProviderManager
) and populated it with an
AuthenticationProvider
(DaoAuthenticationProvider
) that can work
with the UserDetailsService
(InMemoryUserDetailsManager
) we identified.
The builder also populated the UserDetailsService
with
two users: user1
and user2
with an encoded password using the
PasswordEncoder
also set on the AuthenticationProvider
.
201.3.2. Builder Authentication Example
With that in place — we can authenticate our two users using the UserDetailsService
defined and populated using the builder.
$ curl http://localhost:8080/api/authn/hello?name=jim -u user1:password1
hello, jim :caller=user1
$ curl http://localhost:8080/api/authn/hello?name=jim -u user2:password1
hello, jim :caller=user2
$ curl http://localhost:8080/api/authn/hello?name=jim -u userX:password -v
< HTTP/1.1 401
201.4. AuthenticationProvider
The AuthenticationProvider
can answer two questions:
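The org.springframework.security.authentication.AuthenticationProvider interface expresses those two questions directly: supports() answers whether this provider can process the presented Authentication type, and authenticate() answers whether the presented credentials are valid.

public interface AuthenticationProvider {
    //performs authentication, returning a fully populated token when successful
    Authentication authenticate(Authentication authentication) throws AuthenticationException;

    //indicates whether this provider can process the supplied Authentication type
    boolean supports(Class<?> authentication);
}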
201.5. AbstractUserDetailsAuthenticationProvider
For username/password authentication, Spring provides an
AbstractUserDetailsAuthenticationProvider
that supplies the core
authentication workflow that includes:
-
a
UserCache
to storeUserDetails
from previous successful lookups -
obtaining the
UserDetails
if not already in the cache -
pre- and post-authentication checks to verify such things as the account being locked/disabled/expired or the credentials being expired.
-
additional authentication checks where the password matching occurs
The instance will support any authentication
token of type UsernamePasswordAuthenticationToken
but will need
at least two things:
-
user details from storage
-
a means to authenticate presented password
201.6. DaoAuthenticationProvider
Spring provides a concrete DaoAuthenticationProvider
extension of the
AbstractUserDetailsAuthenticationProvider
class that works directly with:
-
UserDetailsService
to obtain the UserDetails
-
PasswordEncoder
to perform password matching
Now all we need is a PasswordEncoder
and UserDetailsService
to get all this rolling.
201.7. UserDetailsManager
Before we get too much further into the details of the UserDetailsService
, it will be good to be reminded that the interface supplies only a single loadUserByUsername()
method.
There is an extension of that interface
to address the full account management lifecycle: the UserDetailsManager interface.
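For reference, the org.springframework.security.provisioning.UserDetailsManager extension adds the account management methods to that single lookup method.

public interface UserDetailsManager extends UserDetailsService {
    void createUser(UserDetails user);
    void updateUser(UserDetails user);
    void deleteUser(String username);
    void changePassword(String oldPassword, String newPassword);
    boolean userExists(String username);
}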
202. AuthenticationManagerBuilder Configuration
At this point we know the framework of objects that need to be in place for authentication to complete and how to build a toy InMemoryUserDetailsManager
using builder methods within the AuthenticationManagerBuilder
class.
In this section we will learn how we can configure additional sources with less assistance from the AuthenticationManagerBuilder
.
202.1. Fully-Assembled AuthenticationManager
We can directly assign a fully-assembled AuthenticationManager
to other SecurityFilterChains
by first exporting it as a @Bean
.
-
The
WebSecurityConfigurerAdapter
approach provides an
authenticationManagerBean()
helper method that can be exposed as a
@Bean
by the derived class.

@Bean AuthenticationManager - WebSecurityConfigurerAdapter approach

@Configuration
public class APIConfiguration extends WebSecurityConfigurerAdapter {
    @Bean
    @Override
    public AuthenticationManager authenticationManagerBean() throws Exception {
        return super.authenticationManagerBean();
    }
-
The custom configuration of the
AuthenticationManagerBuilder
within the Component-based approach occurs within the
@Bean
factory that exposes it.

@Bean AuthenticationManager - Component-based approach

@Bean
public AuthenticationManager authnManager(HttpSecurity http, ...) throws Exception {
    AuthenticationManagerBuilder builder =
        http.getSharedObject(AuthenticationManagerBuilder.class);
    ...
    builder.parentAuthenticationManager(null); //prevent from being recursive
    return builder.build();
}
With the fully-configured AuthenticationManager
exposed as a @Bean
, we can look to directly wire it into the other SecurityFilterChains
.
202.2. Directly Wire-up AuthenticationManager
We can directly set the AuthenticationManager
to one created elsewhere.
The following example shows setting the AuthenticationManager
during the building of the SecurityFilterChain
-
WebSecurityConfigurerAdapter
approach

Assigning AuthenticationManager - WebSecurityConfigurerAdapter Approach

@Configuration
@Order(500)
@RequiredArgsConstructor
public static class H2Configuration extends WebSecurityConfigurerAdapter {
    private final AuthenticationManager authenticationManager; (1)

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.requestMatchers(m->m.antMatchers("/login","/logout","/h2-console/**"));
        ...
        http.authenticationManager(authenticationManager); (2)
    }
}
1 AuthenticationManager assembled elsewhere and injected into this @Configuration class
2 injected AuthenticationManager to be the AuthenticationManager for what this builder builds
Component-based approach
Assigning AuthenticationManager - Component-based Approach

@Order(500)
@Bean
public SecurityFilterChain h2SecurityFilters(HttpSecurity http, (1)
        AuthenticationManager authMgr) throws Exception {
    http.requestMatchers(m->m.antMatchers("/login","/logout","/h2-console/**"));
    ...
    http.authenticationManager(authMgr); (2)
    return http.build();
}
1 AuthenticationManager assembled elsewhere and injected into this @Bean factory method
2 injected AuthenticationManager to be the AuthenticationManager for what this builder builds
202.3. Directly Wire-up Parent AuthenticationManager
We can instead set the parent AuthenticationManager
using the AuthenticationManagerBuilder
.
-
The following example shows setting the parent
AuthenticationManager
during a
WebSecurityConfigurerAdapter.configure()
callback in the WebSecurityConfigurerAdapter approach.

Assigning Parent AuthenticationManager - WebSecurityConfigurerAdapter Approach

@Configuration
@Order(500)
@RequiredArgsConstructor
public static class H2Configuration extends WebSecurityConfigurerAdapter {
    private final AuthenticationManager authenticationManager;

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.parentAuthenticationManager(authenticationManager); (1)
    }
}
1 injected AuthenticationManager to be the parent AuthenticationManager of what this builder builds
-
The following example shows setting the parent
AuthenticationManager
during the build of the
SecurityFilterChain
using
http.getSharedObject()
.

Assigning Parent AuthenticationManager - Component-based Approach

@Order(500)
@Bean
public SecurityFilterChain h2SecurityFilters(HttpSecurity http,
        AuthenticationManager authMgr) throws Exception {
    ...
    AuthenticationManagerBuilder builder =
        http.getSharedObject(AuthenticationManagerBuilder.class);
    builder.parentAuthenticationManager(authMgr); (1)
    return http.build();
1 injected AuthenticationManager to be the parent AuthenticationManager of what this builder builds
202.4. Define Service and Encoder @Bean
Another option in supplying a UserDetailsService
is to define a globally accessible UserDetailsService
@Bean
to inject and use with our builder.
However, in order to pre-populate the UserDetails
passwords, we must use a PasswordEncoder
that is consistent with the AuthenticationProvider
this UserDetailsService
will be combined with.
We can set the default PasswordEncoder
using a @Bean
factory.
@Bean (1)
public PasswordEncoder passwordEncoder() {
return ...
}
1 | defining a PasswordEncoder to be injected into default AuthenticationProvider |
@Bean
public UserDetailsService sharedUserDetailsService(PasswordEncoder encoder) { (1)
User.UserBuilder builder = User.builder().passwordEncoder(encoder::encode);(2)
List<UserDetails> users = List.of(
builder.username("user1").password("password2").roles().build(), (3)
builder.username("user3").password("password2").roles().build()
);
return new InMemoryUserDetailsManager(users);
}
1 | using an injected PasswordEncoder for consistency |
2 | using different UserDetails builder than before — setting password encoding function |
3 | username user1 will be in both UserDetailsService with different passwords |
202.4.1. Inject UserDetailService
We can inject the fully-assembled UserDetailsService
into the AuthenticationManagerBuilder
— just like before with the inMemoryAuthentication
, except this time the builder has no knowledge of the implementation being injected.
We are simply injecting a UserDetailsService
.
The builder will accept it and wrap it in an AuthenticationProvider.

WebSecurityConfigurerAdapter Approach

@Configuration
@Order(0)
@RequiredArgsConstructor
public static class APIConfiguration extends WebSecurityConfigurerAdapter {
private final List<UserDetailsService> userDetailsServices;(1)
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
...
for (UserDetailsService uds: userDetailsServices) {
auth.userDetailsService(uds); (2)
}
}
1 | injecting UserDetailsService into configuration class |
2 | adding additional UserDetailsService to create additional AuthenticationProvider |
The same can be done in the Component-based approach and during the equivalent builder configuration I demonstrated earlier with the inMemoryAuthentication
.
The only difference is that I found that the more I custom-configured the AuthenticationManagerBuilder
, the more likely I would end up in a circular configuration with the AuthenticationManager
pointing to itself as its parent unless I explicitly set the parent value to null.
@Bean
public AuthenticationManager authnManager(HttpSecurity http,
List<UserDetailsService> userDetailsServices ) throws Exception { (1)
AuthenticationManagerBuilder builder = http.getSharedObject(AuthenticationManagerBuilder.class);
...
for (UserDetailsService uds : userDetailsServices) {
builder.userDetailsService(uds); (2)
}
builder.parentAuthenticationManager(null); //prevent from being recursive
return builder.build();
}
1 | injecting UserDetailsService into bean method |
2 | adding additional UserDetailsService to create additional AuthenticationProvider |
202.4.2. Assembled Injected UserDetailsService
The results of the builder configuration are shown below where the builder
assembled an AuthenticationProvider
(DaoAuthenticationProvider
)
based on the injected UserDetailsService
(InMemoryUserDetailsManager
).
The injected UserDetailsService
also had two users — user1
and user3
— added with an encoded password based on the injected PasswordEncoder
bean.
This will be the same bean injected into the AuthenticationProvider
.
202.4.3. Injected UserDetailsService Example
With that in place, we can now authenticate user1
and user3
using the assigned
passwords using the AuthenticationProvider
with the injected UserDetailsService
.
$ curl http://localhost:8080/api/authn/hello?name=jim -u user1:password2
hello, jim :caller=user1
$ curl http://localhost:8080/api/authn/hello?name=jim -u user3:password2
hello, jim :caller=user3
$ curl http://localhost:8080/api/authn/hello?name=jim -u userX:password -v
< HTTP/1.1 401
202.5. Combine Approaches
As stated before — the ProviderManager
can delegate to multiple
AuthenticationProviders
before authenticating or rejecting an authentication
request. We have demonstrated how to create an AuthenticationManager
multiple
ways. In this example, I am integrating the two AuthenticationProviders
into
a single AuthenticationManager
.
//AuthenticationManagerBuilder auth
PasswordEncoder encoder = ... (1)
auth.inMemoryAuthentication().passwordEncoder(encoder)
.withUser("user1").password(encoder.encode("password1")).roles()
.and()
.withUser("user2").password(encoder.encode("password1")).roles();
for (UserDetailsService uds : userDetailsServices) { (2)
auth.userDetailsService(uds);
}
1 | locally built AuthenticationProvider will use its own encoder |
2 | @Bean -built UserDetailsService injected and used to form second AuthenticationProvider |
202.5.1. Assembled Combined AuthenticationProviders
The resulting AuthenticationManager ends up with two custom-configured AuthenticationProviders. Each AuthenticationProvider:
-
is implemented with the DaoAuthenticationProvider class
-
makes use of a PasswordEncoder and UserDetailsService
The two were brought together by one of our configuration approaches and now we have two sources of credentials to authenticate against.
202.5.2. Multiple Provider Authentication Example
With the two AuthenticationProvider objects defined, we can now log in as user2 and user3, and as user1 using both passwords. The user1 example shows that an authentication failure from one provider still allows the request to be inspected by follow-on providers.
$ curl http://localhost:8080/api/authn/hello?name=jim -u user1:password1
hello, jim :caller=user1
$ curl http://localhost:8080/api/authn/hello?name=jim -u user1:password2
hello, jim :caller=user1
$ curl http://localhost:8080/api/authn/hello?name=jim -u user2:password1
hello, jim :caller=user2
$ curl http://localhost:8080/api/authn/hello?name=jim -u user3:password2
hello, jim :caller=user3
203. UserDetails
So now we know that all we need is to provide a UserDetailsService instance and Spring will take care of most of the rest. UserDetails is an interface that we can implement any way we want. For example, if we manage our credentials in MongoDB or use the Java Persistence API (JPA), we can create the proper classes for that mapping. We won’t need to do that just yet because Spring provides a User class that can work for most POJO-based storage solutions.
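For illustration, the following is a minimal sketch of building a UserDetails with the provided User builder; the username, password, and role values here are arbitrary.
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
UserDetails user = User.withUsername("user1") //builder for the provided User class
.password("{noop}password") //pre-encoded value; plaintext marker for demo only
.roles("CUSTOMER") //stored as the ROLE_CUSTOMER authority
.build();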
204. PasswordEncoder
I have made mention several times of the PasswordEncoder and earlier covered how it is used to create a cryptographic hash. Whenever we configure a PasswordEncoder for our AuthenticationProvider, we have the choice of many encoders. I will highlight three of them.
204.1. NoOpPasswordEncoder
The NoOpPasswordEncoder is what it sounds like. It does nothing when encoding the plaintext password. It can be used for early development and debugging but should not, obviously, be used with real credentials.
204.2. BCryptPasswordEncoder
The BCryptPasswordEncoder uses the very strong bcrypt algorithm and should likely be considered the default in production environments.
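As a quick, minimal sketch of its use (the strength value of 10 is an arbitrary choice):
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
PasswordEncoder encoder = new BCryptPasswordEncoder(10); //strength = log rounds
String hash = encoder.encode("password"); //salted; a different hash each call
boolean matches = encoder.matches("password", hash); //true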
204.3. DelegatingPasswordEncoder
The DelegatingPasswordEncoder is a jack-of-all-encoders. It has one default way to encode but can match passwords of numerous algorithms. This encoder writes and relies on all passwords starting with an {encoding-key} that indicates the type of encoding to use.
{noop}password
{bcrypt}$2y$10$UvKwrln7xPp35c5sbj.9kuZ9jY9VYg/VylVTu88ZSCYy/YdcdP/Bq
Use the PasswordEncoderFactories class to create a DelegatingPasswordEncoder populated with a full complement of encoders.
import org.springframework.security.crypto.factory.PasswordEncoderFactories;
@Bean
public PasswordEncoder passwordEncoder() {
return PasswordEncoderFactories.createDelegatingPasswordEncoder();
}
DelegatingPasswordEncoder encodes one way and matches multiple ways
The DelegatingPasswordEncoder encodes using a single, designated encoder and matches against passwords encoded using many alternate encodings, relying on the password to start with an {encoding-key}.
|
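A small sketch of that behavior, reusing the factory from the snippet above and the encoded values shown earlier:
PasswordEncoder encoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
encoder.matches("password", "{noop}password"); //true - delegated to NoOpPasswordEncoder
encoder.matches("password", //true - delegated to BCryptPasswordEncoder
"{bcrypt}$2y$10$UvKwrln7xPp35c5sbj.9kuZ9jY9VYg/VylVTu88ZSCYy/YdcdP/Bq");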
205. JDBC UserDetailsService
Spring provides two Java Database Connectivity (JDBC) implementation classes that we can easily use out of the box to begin storing UserDetails in a database:
-
JdbcDaoImpl - implements just the core UserDetailsService loadUserByUsername capability
-
JdbcUserDetailsManager - implements the full UserDetailsManager CRUD capability
JDBC is a database communications interface containing no built-in mapping
JDBC is a pretty low-level interface to access a relational database from Java. All the mapping between the database inputs/outputs and our Java business objects is done outside of JDBC. There is no mapping framework like with the Java Persistence API (JPA).
|
JdbcUserDetailsManager extends JdbcDaoImpl. We only need JdbcDaoImpl since we will only be performing authentication reads and not yet implementing full CRUD (Create, Read, Update, and Delete) with databases. However, there would have been no harm in using the full JdbcUserDetailsManager implementation in the examples below and simply ignoring the additional behavior.
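For reference, a sketch of that full-manager alternative (assuming the DataSource-accepting constructor available in Spring Security 5.x; not used in the examples below):
@Bean
public UserDetailsService jdbcUserDetailsService(DataSource userDataSource) {
//full CRUD UserDetailsManager; we would simply ignore the extra capability
return new JdbcUserDetailsManager(userDataSource);
}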
To use the JDBC implementation, we are going to need a few things:
-
A relational database - this is where we will store our users
-
Database Schema - this defines the tables and columns of the database
-
Database Contents - this defines our users and passwords
-
javax.sql.DataSource - this is a JDBC wrapper around a connection to the database
-
construct the UserDetailsService (and potentially expose it as a @Bean)
-
(potentially inject and) add the JDBC UserDetailsService to the AuthenticationManagerBuilder
205.1. H2 Database
There are several lightweight databases that are very good for development and demonstration (e.g., h2, hsqldb, derby, SQLite). They commonly offer in-memory, file-based, and server-based instances with minimal scale capability but are extremely simple to administer. In general, they supply an interface that is compatible with the more enterprise-level solutions that are more suitable for production. That makes them an ideal choice for demonstration and development situations like this. For this example, I will be using the h2 database, but many others could have been used as well.
205.2. DataSource: Maven Dependencies
To easily create a default DataSource, we can simply add a compile dependency on spring-boot-starter-data-jdbc and a runtime dependency on the h2 database. This will cause our application to start with a default DataSource connected to an in-memory database.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jdbc</artifactId>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>runtime</scope>
</dependency>
205.3. JDBC UserDetailsService
Once we have the spring-boot-starter-data-jdbc and database dependencies in place, Spring Boot will automatically create a default javax.sql.DataSource that can be injected into a @Bean factory so that we can create a JdbcDaoImpl to implement the JDBC UserDetailsService.
import javax.sql.DataSource;
...
@Bean
public UserDetailsService jdbcUserDetailsService(DataSource userDataSource) {
JdbcDaoImpl jdbcUds = new JdbcDaoImpl();
jdbcUds.setDataSource(userDataSource);
return jdbcUds;
}
From there, we can inject the JDBC UserDetailsService, like the in-memory version we injected earlier, and add it to the builder.
205.4. Autogenerated Database URL
If we restart our application at this point, we will get a generated database URL using a UUID for the name.
H2 console available at '/h2-console'. Database available at
'jdbc:h2:mem:76567045-619b-4588-ae32-9154ba9ac01c'
205.5. Specified Database URL
We can make the URL more stable and well-known by setting the spring.datasource.url property.
spring.datasource.url=jdbc:h2:mem:users
H2 console available at '/h2-console'. Database available at 'jdbc:h2:mem:users'
h2-console URI can be modified
We can also control the URI for the h2-console by setting the spring.h2.console.path property.
|
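For example (the path value below is an arbitrary choice):
spring.h2.console.path=/h2-ui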
205.6. Enable H2 Console Security Settings
The h2 database can be used headless, but also comes with a convenient UI that will allow us to inspect the data in the database and manipulate it if necessary. However, with security enabled — we will not be able to access our console by default. We only addressed authentication for the API endpoints. Since this is a chapter focused on configuring authentication, it is a good exercise to go through the steps to make the h2 UI accessible but also protected. The following will:
-
require users accessing the
/h2-console/**
URIs to be authenticated -
enable FORM authentication and redirect successful logins to the
/h2-console
URI -
disable frame headers that would have placed constraints on how the console could be displayed
-
disable CSRF for the
/h2-console/**
URI but leave it enabled for the other URIs -
wire in the injected
AuthenticationManager
configured for the API
205.6.1. H2 Configuration - WebSecurityConfigurerAdapter Approach
@Configuration
@Order(500)
@RequiredArgsConstructor
public static class H2Configuration extends WebSecurityConfigurerAdapter {
private final AuthenticationManager authenticationManager; (1)
@Override
protected void configure(HttpSecurity http) throws Exception {
http.requestMatchers(m->m.antMatchers("/login","/logout", "/h2-console/**"));
http.authorizeRequests(cfg->cfg.antMatchers("/login","/logout").permitAll());(2)
http.authorizeRequests(cfg->cfg.antMatchers("/h2-console/**").authenticated());(3)
http.csrf(cfg->cfg.ignoringAntMatchers("/h2-console/**")); (4)
http.headers(cfg->cfg.frameOptions().disable()); (5)
http.formLogin().successForwardUrl("/h2-console"); (6)
http.authenticationManager(authenticationManager); (7)
}
}
1 | injected AuthenticationManager bean exposed by APIConfiguration |
2 | apply filter rules to H2 UI URIs as well as login/logout form |
3 | require authenticated users by the application to reach the console |
4 | turn off CSRF only for the H2 console |
5 | turn off display constraints for the H2 console |
6 | route successful logins to the H2 console |
7 | use pre-configured AuthenticationManager for authentication to UI |
205.6.2. H2 Configuration — Component-based Approach
@Order(500)
@Bean
public SecurityFilterChain h2SecurityFilters(HttpSecurity http,(1)
AuthenticationManager authMgr) throws Exception {
http.requestMatchers(m->m.antMatchers("/login","/logout","/h2-console/**"));(2)
http.authorizeRequests(cfg->cfg.antMatchers("/login","/logout").permitAll());
http.authorizeRequests(cfg->cfg.antMatchers("/h2-console/**").authenticated());(3)
http.csrf(cfg->cfg.ignoringAntMatchers("/h2-console/**")); (4)
http.headers(cfg->cfg.frameOptions().disable()); (5)
http.formLogin().successForwardUrl("/h2-console"); (6)
http.authenticationManager(authMgr); (7)
return http.build();
}
1 | injected AuthenticationManager bean exposed by API Configuration |
2 | apply filter rules to H2 UI URIs as well as login/logout form |
3 | require authenticated users by the application to reach the console |
4 | turn off CSRF only for the H2 console |
5 | turn off display constraints for the H2 console |
6 | route successful logins to the H2 console |
7 | use pre-configured AuthenticationManager for authentication to UI |
205.7. Form Login
When we attempt to reach a protected URI within the application with FORM authentication active, the authentication form is displayed. We should be able to enter the site using any of the username/passwords available to the AuthenticationManager. |
If you enter a bad username/password at this point in time, you will receive a JDBC error since we have not yet set up the user database. |
205.8. H2 Login
Once we get beyond the application FORM login, we are presented with the H2 database login. The JDBC URL should be set to the value of the spring.datasource.url property (jdbc:h2:mem:users). |
205.9. H2 Console
Once successfully logged in, we are presented with a basic but functional SQL interface to the in-memory H2 database that will contain our third source of users, which we now need to set up. |
205.10. Create DB Schema Script
From the point when we added the spring-boot-starter-data-jdbc dependency, we were ready to add database schema, which is the definition of the tables, columns, indexes, and constraints of our database. Rather than use a default filename, it is good to keep the schemas separated. The following file is being placed in the src/main/resources/database directory of our source tree. It will be accessible to us within the classpath when we restart the application. The bulk of this implementation comes from the Spring Security Documentation Appendix. I have increased the size of the password column to accept longer bcrypt-encoded password hash values.
--users-schema.ddl (1)
drop table authorities if exists; (2)
drop table users if exists;
create table users( (3)
username varchar_ignorecase(50) not null primary key,
password varchar_ignorecase(100) not null,
enabled boolean not null);
create table authorities ( (4)
username varchar_ignorecase(50) not null,
authority varchar_ignorecase(50) not null,
constraint fk_authorities_users foreign key(username) references users(username));(5)
create unique index ix_auth_username on authorities (username,authority); (6)
1 | file placed in src/main/resources/database/users-schema.ddl |
2 | dropping tables that may exist before creating |
3 | users table primarily hosts username and password |
4 | authorities table will be used for authorizing accesses after successful identity authentication |
5 | foreign key constraint enforces that a user must exist for any authority |
6 | unique index constraint enforces all authorities are unique per user and places
the foreign key to the users table in an efficient index suitable for querying |
The schema file can be referenced through the spring.sql.init.schema-locations property by prepending classpath: to the front of the path.
spring.datasource.url=jdbc:h2:mem:users
spring.sql.init.schema-locations=classpath:database/users-schema.ddl
205.11. Schema Creation
The following shows an example of the application log with the schema creation in action.
Executing SQL script from class path resource [database/users-schema.ddl]
SQL: drop table authorities if exists
SQL: drop table users if exists
SQL: create table users( username varchar_ignorecase(50) not null primary key,
password varchar_ignorecase(100) not null, enabled boolean not null)
SQL: create table authorities ( username varchar_ignorecase(50) not null,
authority varchar_ignorecase(50) not null,
constraint fk_authorities_users foreign key(username) references users(username))
SQL: create unique index ix_auth_username on authorities (username,authority)
Executed SQL script from class path resource [database/users-schema.ddl] in 48 ms.
H2 console available at '/h2-console'. Database available at 'jdbc:h2:mem:users'
205.12. Create User DB Populate Script
The schema file took care of defining tables, columns, relationships, and constraints. With that defined, we can add the population of users. The following user passwords take advantage of knowing we are using the DelegatingPasswordEncoder; we make {noop} plaintext an option at first. The JDBC UserDetailsService requires that all valid users have at least one authority, so I have defined a bogus known authority to represent the fact that the username is known.
--users-populate.sql
insert into users(username, password, enabled) values('user1','{noop}password',true);
insert into users(username, password, enabled) values('user2','{noop}password',true);
insert into users(username, password, enabled) values('user3','{noop}password',true);
insert into authorities(username, authority) values('user1','known');
insert into authorities(username, authority) values('user2','known');
insert into authorities(username, authority) values('user3','known');
We reference the population script thru a property and can place that in the application.properties file.
spring.datasource.url=jdbc:h2:mem:users
spring.sql.init.schema-locations=classpath:database/users-schema.ddl
spring.sql.init.data-locations=classpath:database/users-populate.sql
205.13. User DB Population
After the wave of schema commands has completed, the row population will take place filling the tables with our users, credentials, etc.
Executing SQL script from class path resource [database/users-populate.sql]
SQL: insert into users(username, password, enabled) values('user1','{noop}password',true)
SQL: insert into users(username, password, enabled) values('user2','{noop}password',true)
SQL: insert into users(username, password, enabled) values('user3','{noop}password',true)
SQL: insert into authorities(username, authority) values('user1','known')
SQL: insert into authorities(username, authority) values('user2','known')
SQL: insert into authorities(username, authority) values('user3','known')
Executed SQL script from class path resource [database/users-populate.sql] in 7 ms.
H2 console available at '/h2-console'. Database available at 'jdbc:h2:mem:users'
205.14. H2 User Access
With the schema created and users populated, we can view the results using the H2 console. |
205.15. Authenticate Access using JDBC UserDetailsService
We can now authenticate to the API using the credentials in this database.
$ curl http://localhost:8080/api/anonymous/hello?name=jim -u user1:password
hello, jim :caller=user1 (1)
$ curl http://localhost:8080/api/anonymous/hello?name=jim -u user1:password1
hello, jim :caller=user1 (2)
$ curl http://localhost:8080/api/anonymous/hello?name=jim -u user1:password2
hello, jim :caller=user1 (3)
1 | authenticating using credentials from JDBC UserDetailsService |
2 | authenticating using credentials from directly configured in-memory UserDetailsService |
3 | authenticating using credentials from injected in-memory UserDetailsService |
However, we still have plaintext passwords in the database. Let's look to clean that up.
205.16. Encrypting Passwords
It would be bad practice to leave the user passwords in plaintext when we have the ability to store cryptographic hash values instead. We can do that through Java and the BCryptPasswordEncoder. The following example shows using a shell command to obtain the encrypted password value.
$ htpasswd -bnBC 10 user1 password | cut -d\: -f2 (1) (2)
$2y$10$UvKwrln7xPp35c5sbj.9kuZ9jY9VYg/VylVTu88ZSCYy/YdcdP/Bq
$ htpasswd -bnBC 10 user2 password | cut -d\: -f2
$2y$10$9tYKBY7act5dN.2d7kumuOsHytIJW8i23Ua2Qogcm6OM638IXMmLS
$ htpasswd -bnBC 10 user3 password | cut -d\: -f2
$2y$10$AH6uepcNasVxlYeOhXX20.OX4cI3nXX.LsicoDE5G6bCP34URZZF2
1 | script outputs in format username:encoded-password |
2 | cut command is breaking the line at the ":" character and returning second field with just the encoded value |
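If htpasswd is not available, a small Java sketch can produce the same style of hash using the BCryptPasswordEncoder. The output differs per run because of the random salt, but every output matches "password":
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
public class EncodePassword {
public static void main(String... args) {
//prints something like $2a$10$... - prepend {bcrypt} before storing
System.out.println(new BCryptPasswordEncoder(10).encode("password"));
}
}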
205.16.1. Updating Database with Encrypted Values
I have updated the populate SQL script to replace the {noop} plaintext passwords with their {bcrypt} encrypted replacements.
update users
set password='{bcrypt}$2y$10$UvKwrln7xPp35c5sbj.9kuZ9jY9VYg/VylVTu88ZSCYy/YdcdP/Bq'
where username='user1';
update users
set password='{bcrypt}$2y$10$9tYKBY7act5dN.2d7kumuOsHytIJW8i23Ua2Qogcm6OM638IXMmLS'
where username='user2';
update users
set password='{bcrypt}$2y$10$AH6uepcNasVxlYeOhXX20.OX4cI3nXX.LsicoDE5G6bCP34URZZF2'
where username='user3';
Don’t Store Plaintext or Decode-able Passwords
The choice of replacing the plaintext INSERTs versus using UPDATE is purely
a choice made for incremental demonstration. Passwords should always be stored in
their Cryptographic Hash form and never in plaintext in a real environment.
|
205.16.2. H2 View of Encrypted Passwords
Once we restart and run that portion of the SQL, the plaintext passwords are replaced with their {bcrypt} encoded values.
Figure 94. H2 User Access to Encrypted User Passwords
206. Final Examples
206.1. Authenticate to All Three UserDetailsServices
With all UserDetailsServices in place, we are able to log in as each user using one of the three sources.
$ curl http://localhost:8080/api/authn/hello?name=jim -u user1:password -v (2)
> Authorization: Basic dXNlcjE6cGFzc3dvcmQ= (1)
hello, jim :caller=user1
$ curl http://localhost:8080/api/authn/hello?name=jim -u user2:password1 (3)
hello, jim :caller=user2
$ curl http://localhost:8080/api/authn/hello?name=jim -u user3:password2 (4)
hello, jim :caller=user3
1 | we are still sending a base64 encoding of the plaintext password. The cryptographic hash is created server-side. |
2 | password is from the H2 database |
3 | password1 is from the original in-memory user details |
4 | password2 is from the injected in-memory user details |
206.2. Authenticate to All Three Users
With the JDBC UserDetailsService in place with encoded passwords, we are able to authenticate against all three users.
$ curl http://localhost:8080/api/authn/hello?name=jim -u user1:password (1)
hello, jim :caller=user1
$ curl http://localhost:8080/api/authn/hello?name=jim -u user2:password (1)
hello, jim :caller=user2
$ curl http://localhost:8080/api/authn/hello?name=jim -u user3:password (1)
hello, jim :caller=user3
1 | three separate user credentials stored in H2 database |
207. Summary
In this module we learned:
-
the various interfaces and objects that are part of the Spring authentication framework
-
how to wire up an AuthenticationManager with AuthenticationProviders to implement authentication for a configured security filter chain
-
how to implement AuthenticationProviders using only PasswordEncoder and UserDetailsService primitives
-
how to implement an in-memory UserDetailsService
-
how to implement a database-backed UserDetailsService
-
how to encode and match encrypted password hashes
-
how to configure security to display the H2 UI and allow it to be functional
Authorization
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
208. Introduction
We have spent a significant amount of time to date on identifying the caller, restricting access based on being properly authenticated, and managing multiple users. In this lecture we are going to focus on expanding authorization constraints to both role-based and permission-based authorities.
208.1. Goals
You will learn:
-
the purpose of authorities, roles, and permissions
-
how to express authorization constraints using URI-based and annotation-based constraints
-
how the enforcement of the constraints is accomplished
-
how to potentially customize the enforcement of constraints
208.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
define the purpose of a role-based and permission-based authority
-
identify how to construct an
AccessDecisionManager
and supply customizedAccessDecisionVoter
classes -
implement URI Path-based authorization constraints
-
implement annotation-based authorization constraints
-
implement role inheritance
-
implement an
AccessDeniedException
controller advice to hide unnecessary stack trace information and provide useful error information to the caller -
identify the detailed capability of expression-based constraints to be able to handle very intricate situations
209. Authorities, Roles, Permissions
An authority is a general term used for a value that is granular enough to determine whether a user will be granted access to a resource. There are different techniques for slicing up authorities to match the security requirements of the application. Spring uses roles and permissions as types of authorities.
A role is a coarse-grained authority assigned to the type of user accessing the system and the prototypical uses they perform. For example, ROLE_ADMIN, ROLE_CLERK, or ROLE_CUSTOMER relate to the roles in a business application.
A permission is a more fine-grained authority that describes the action being performed versus the role of the user. For example, "PRICE_CHECK", "PRICE_MODIFY", "HOURS_GET", and "HOURS_MODIFY" relate to the actions in a business application.
No matter which is being represented by the authority value, Spring Security looks to grant or deny access to a user based on their assigned authorities and the rules expressed to protect the resources accessed.
Spring represents both roles and permissions using a GrantedAuthority class with an authority string carrying the value. Role authority values have, by default, a "ROLE_" prefix, which is a configurable value. Permissions/generic authorities do not have a prefix value. Aside from that, they look very similar but are not always treated equally.
Spring refers to authorities with the ROLE_ prefix as "roles" when the prefix is stripped away and anything with the raw value as "authorities". The ROLE_ADMIN authority represents an ADMIN role. The PRICE_CHECK permission is a PRICE_CHECK authority.
|
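To make that concrete, both forms are carried by the same type. A minimal sketch using SimpleGrantedAuthority:
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
GrantedAuthority role = new SimpleGrantedAuthority("ROLE_ADMIN"); //the ADMIN "role"
GrantedAuthority permission = new SimpleGrantedAuthority("PRICE_CHECK"); //a permission/generic authority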
210. Authorization Constraint Types
There are two primary ways we can express authorization constraints within Spring: path-based and annotation-based.
210.1. Path-based Constraints
Path-based constraints are specific to web applications and controller operations since the constraint is expressed against a URI pattern.
We define path-based authorizations using the same HttpSecurity
builder we used with authentication.
We can use either the WebSecurityConfigurerAdapter (deprecated) or Component-based approaches — but not both.
-
WebSecurityConfigurer Approach
Authn and Authz HttpSecurity Configuration
@Configuration
@Order(0)
@RequiredArgsConstructor
public static class APIConfiguration extends WebSecurityConfigurerAdapter {
private final UserDetailsService jdbcUserDetailsService;
@Override
public void configure(WebSecurity web) throws Exception { ... }
@Override
protected void configure(HttpSecurity http) throws Exception {
http.requestMatchers(cfg->cfg.antMatchers("/api/**"));
//...
http.httpBasic();
//remaining authn and upcoming authz goes here
-
Component-based Approach
Authn and Authz HttpSecurity Configuration
@Bean
public WebSecurityCustomizer authzStaticResources() {
return (web) -> web.ignoring().antMatchers("/content/**");
}
@Bean
@Order(0)
public SecurityFilterChain authzSecurityFilters(HttpSecurity http) throws Exception {
http.requestMatchers(cfg->cfg.antMatchers("/api/**"));
//...
http.httpBasic();
//remaining authn and upcoming authz goes here
return http.build();
}
The first example below shows a URI path restricted to the ADMIN role. The second example shows a URI path restricted to the ROLE_ADMIN, ROLE_CLERK, or PRICE_CHECK authorities.
It is worth saying multiple times:
pay attention to the use of the terms "role" and "authority" within Spring Security.
ROLE_X is a "ROLE_X" authority and an "X" role.
|
http.authorizeRequests(cfg->cfg.antMatchers(
"/api/authorities/paths/admin/**")
.hasRole("ADMIN")); (1)
http.authorizeRequests(cfg->cfg.antMatchers(HttpMethod.GET,
"/api/authorities/paths/price")
.hasAnyAuthority("PRICE_CHECK", "ROLE_ADMIN", "ROLE_CLERK")); (2)
1 | ROLE_ prefix automatically added to role authorities |
2 | ROLE_ prefix must be manually added when expressed as a generic authority |
Out-of-the-box, path-based annotations support role inheritance, roles, and permission-based constraints. Path-based constraints also support Spring Expression Language (SpEL).
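As a sketch of that SpEL support, the price rule shown above could also be written as a single access() expression:
http.authorizeRequests(cfg->cfg.antMatchers(HttpMethod.GET,
"/api/authorities/paths/price")
.access("hasAnyRole('ADMIN','CLERK') or hasAuthority('PRICE_CHECK')"));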
210.2. Annotation-based Constraints
Annotation-based constraints are not directly related to web applications and not associated with URIs. Annotations are placed on the classes and/or methods they are meant to impact. The processing of those annotations has default, built-in behavior that we can augment and modify. The descriptions here are constrained to out-of-the-box capability before trying to adjust anything.
There are three annotation options in Spring:
-
@Secured — this was the original, basic annotation Spring used to annotate access controls for classes and/or methods. Out-of-the-box, this annotation only supports roles and does not support role inheritance.
@Secured("ROLE_ADMIN") (1) @GetMapping(path = "admin", produces = {MediaType.TEXT_PLAIN_VALUE}) public ResponseEntity<String> doAdmin(
1 ROLE_
prefix must be included in string -
JSR 250 — this is an industry standard API for expressing access controls using annotations for classes and/or methods. This is also adopted by JakartaEE. Out-of-the-box, this too only supports roles and does not support role inheritance.
@RolesAllowed("ROLE_ADMIN") (1) @GetMapping(path = "admin", produces = {MediaType.TEXT_PLAIN_VALUE}) public ResponseEntity<String> doAdmin(
1 ROLE_
prefix must be included in string -
expressions — this annotation capability is based on the powerful Spring Expression Language (SpEL) that allows for ANDing and ORing of multiple values and includes inspection of parameters and current context. It does provide support for role inheritance.
@PreAuthorize("hasRole('ADMIN')") (1) @GetMapping(path = "admin", produces = {MediaType.TEXT_PLAIN_VALUE}) public ResponseEntity<String> doAdmin( ... @PreAuthorize("hasAnyRole('ADMIN','CLERK') or hasAuthority('PRICE_CHECK')") (2) @GetMapping(path = "price", produces = {MediaType.TEXT_PLAIN_VALUE}) public ResponseEntity<String> checkPrice(
1 ROLE_
prefix automatically added to role authorities2 ROLE_
prefix not added to generic authority references
211. Setup
The bulk of this lecture will be demonstrating the different techniques for expressing authorization constraints. To do this, I have created four controllers, one configured using each technique, and an additional whoAmI controller to return a string indicating the name of the caller and their authorities.
211.1. Who Am I Controller
To help us demonstrate authorities, I have added a controller to the application that will accept an injected user and return a string that describes who called.
@RestController
@RequestMapping("/api/whoAmI")
public class WhoAmIController {
@GetMapping(produces={MediaType.TEXT_PLAIN_VALUE})
public ResponseEntity<String> getCallerInfo(
@AuthenticationPrincipal UserDetails user) { (1)
List<?> values = (user!=null) ?
List.of(user.getUsername(), user.getAuthorities()) :
List.of("null");
String text = StringUtils.join(values);
ResponseEntity<String> response = ResponseEntity.ok(text);
return response;
}
}
1 | UserDetails of authenticated caller injected into method call |
The controller will return the following when called without credentials.
$ curl http://localhost:8080/api/whoAmI
[null]
The controller will return the following when called with credentials.
$ curl http://localhost:8080/api/whoAmI -u frasier:password
[frasier, [PRICE_CHECK, ROLE_CUSTOMER]]
211.2. Demonstration Users
Our user database has been populated with the following users. All have an assigned role (roles all start with the ROLE_ prefix). One user (frasier) also has an assigned permission.
insert into authorities(username, authority) values('sam','ROLE_ADMIN');
insert into authorities(username, authority) values('rebecca','ROLE_ADMIN');
insert into authorities(username, authority) values('woody','ROLE_CLERK');
insert into authorities(username, authority) values('carla','ROLE_CLERK');
insert into authorities(username, authority) values('norm','ROLE_CUSTOMER');
insert into authorities(username, authority) values('cliff','ROLE_CUSTOMER');
insert into authorities(username, authority) values('frasier','ROLE_CUSTOMER');
insert into authorities(username, authority) values('frasier','PRICE_CHECK'); (1)
1 | frasier is assigned a (non-role) permission |
211.3. Core Security FilterChain Setup
The following shows the initial/core SecurityFilterChain setup carried over from earlier examples. We will add to this in a moment.
//HttpSecurity http
http.httpBasic(cfg->cfg.realmName("AuthzExample"));
http.formLogin(cfg->cfg.disable());
http.headers(cfg->{
cfg.xssProtection().disable();
cfg.frameOptions().disable();
});
http.csrf(cfg->cfg.disable());
http.cors(cfg->cfg.configurationSource(new AuthzCorsConfigurationSource()));
http.sessionManagement(cfg->cfg
.sessionCreationPolicy(SessionCreationPolicy.STATELESS));
http.authorizeRequests(cfg->cfg.antMatchers(
"/api/whoami",
"/api/authorities/paths/anonymous/**")
.permitAll());
//more ...
211.4. Controller Operations
The controllers in this overall example will accept API requests and delegate the call to the WhoAmIController. Many of the operations look like the snippet example below — but with a different URI.
@RestController
@RequestMapping("/api/authorities/paths")
@RequiredArgsConstructor
public class PathAuthoritiesController {
private final WhoAmIController whoAmI; (1)
@GetMapping(path = "admin", produces = {MediaType.TEXT_PLAIN_VALUE})
public ResponseEntity<String> doAdmin(
@AuthenticationPrincipal UserDetails user) {
return whoAmI.getCallerInfo(user); (2)
}
1 | whoAmI controller injected into each controller to provide consistent response with username and authorities |
2 | API-invoked controller delegates to whoAmI controller along with injected UserDetails |
212. Path-based Authorizations
In this example, I will demonstrate how to apply security constraints on controller methods based on the URI used to invoke them. This is very similar to the security constraints of legacy servlet applications.
The following snippet shows a summary of the URIs in the controller we will be implementing.
@RequestMapping("/api/authorities/paths")
@GetMapping(path = "admin", produces = {MediaType.TEXT_PLAIN_VALUE})
@GetMapping(path = "clerk", produces = {MediaType.TEXT_PLAIN_VALUE})
@GetMapping(path = "customer", produces = {MediaType.TEXT_PLAIN_VALUE})
@GetMapping(path = "price", produces = {MediaType.TEXT_PLAIN_VALUE})
@GetMapping(path = "authn", produces = {MediaType.TEXT_PLAIN_VALUE})
@GetMapping(path = "anonymous", produces = {MediaType.TEXT_PLAIN_VALUE})
@GetMapping(path = "nobody", produces = {MediaType.TEXT_PLAIN_VALUE})
212.1. Path-based Role Authorization Constraints
We have the option to apply path-based authorization constraints using roles. The following example locks down three URIs to one or more roles each.
http.authorizeRequests(cfg->cfg.antMatchers(
"/api/authorities/paths/admin/**")
.hasRole("ADMIN")); (1)
http.authorizeRequests(cfg->cfg.antMatchers(
"/api/authorities/paths/clerk/**")
.hasAnyRole("ADMIN", "CLERK")); (2)
http.authorizeRequests(cfg->cfg.antMatchers(
"/api/authorities/paths/customer/**")
.hasAnyRole("CUSTOMER")); (3)
1 | admin URI may only be called by callers having role ADMIN
(or ROLE_ADMIN authority) |
2 | clerk URI may only be called by callers having either the
ADMIN or CLERK roles (or ROLE_ADMIN or ROLE_CLERK authorities) |
3 | customer URI may only be called by callers having the role CUSTOMER
(or ROLE_CUSTOMER authority) |
212.2. Example Path-based Role Authorization (Sam)
The following is an example set of calls for sam, one of our users with role ADMIN. Remember that role ADMIN is basically the same as saying authority ROLE_ADMIN.
$ curl http://localhost:8080/api/authorities/paths/admin -u sam:password (1)
[sam, [ROLE_ADMIN]]
$ curl http://localhost:8080/api/authorities/paths/clerk -u sam:password (2)
[sam, [ROLE_ADMIN]]
$ curl http://localhost:8080/api/authorities/paths/customer -u sam:password (3)
{"timestamp":"2020-07-14T15:12:25.927+00:00","status":403,"error":"Forbidden",
"message":"Forbidden","path":"/api/authorities/paths/customer"}
1 | sam has ROLE_ADMIN authority, so admin URI can be called |
2 | sam has ROLE_ADMIN authority and clerk URI allows both
roles ADMIN and CLERK |
3 | sam lacks role CUSTOMER required to call customer URI
and is rejected with 403/Forbidden error |
212.3. Example Path-based Role Authorization (Woody)
The following is an example set of calls for woody, one of our users with role CLERK.
$ curl http://localhost:8080/api/authorities/paths/admin -u woody:password (1)
{"timestamp":"2020-07-14T15:12:46.808+00:00","status":403,"error":"Forbidden",
"message":"Forbidden","path":"/api/authorities/paths/admin"}
$ curl http://localhost:8080/api/authorities/paths/clerk -u woody:password (2)
[woody, [ROLE_CLERK]]
$ curl http://localhost:8080/api/authorities/paths/customer -u woody:password (3)
{"timestamp":"2020-07-14T15:13:04.158+00:00","status":403,"error":"Forbidden",
"message":"Forbidden","path":"/api/authorities/paths/customer"}
1 | woody lacks role ADMIN required to call admin URI
and is rejected with 403/Forbidden |
2 | woody has ROLE_CLERK authority, so clerk URI can be called |
3 | woody lacks role CUSTOMER required to call customer URI
and is rejected with 403/Forbidden |
213. Path-based Authority Permission Constraints
The following example shows how we can assign permission authority constraints. It is also an example of being granular with the HTTP method in addition to the URI expressed.
http.authorizeRequests(cfg->cfg.antMatchers(HttpMethod.GET, (1)
"/api/authorities/paths/price")
.hasAnyAuthority("PRICE_CHECK", "ROLE_ADMIN", "ROLE_CLERK")); (2)
1 | definition is limited to the GET method for the price URI |
2 | must have permission PRICE_CHECK or roles ADMIN or CLERK |
213.1. Path-based Authority Permission (Norm)
The following example shows one of our users with the CUSTOMER role being rejected from calling the GET price URI.
$ curl http://localhost:8080/api/authorities/paths/customer -u norm:password (1)
[norm, [ROLE_CUSTOMER]]
$ curl http://localhost:8080/api/authorities/paths/price -u norm:password (2)
{"timestamp":"2020-07-14T15:13:38.598+00:00","status":403,"error":"Forbidden",
"message":"Forbidden","path":"/api/authorities/paths/price"}
1 | norm has role CUSTOMER required to call customer URI |
2 | norm lacks the ROLE_ADMIN , ROLE_CLERK , and PRICE_CHECK authorities
required to invoke the GET price URI |
213.2. Path-based Authority Permission (Frasier)
The following example shows one of our users with the CUSTOMER role and PRICE_CHECK permission. This user can call both the customer and GET price URIs.
$ curl http://localhost:8080/api/authorities/paths/customer -u frasier:password (1)
[frasier, [PRICE_CHECK, ROLE_CUSTOMER]]
$ curl http://localhost:8080/api/authorities/paths/price -u frasier:password (2)
[frasier, [PRICE_CHECK, ROLE_CUSTOMER]]
1 | frasier has the CUSTOMER role required to call the customer URI |
2 | frasier has the PRICE_CHECK permission required to call the GET price URI |
213.3. Path-based Authority Permission (Sam and Woody)
The following example shows that users with the ADMIN and CLERK roles are able to call the GET price URI.
$ curl http://localhost:8080/api/authorities/paths/price -u sam:password (1)
[sam, [ROLE_ADMIN]]
$ curl http://localhost:8080/api/authorities/paths/price -u woody:password (2)
[woody, [ROLE_CLERK]]
1 | sam is assigned the ADMIN role required to call the GET price URI |
2 | woody is assigned the CLERK role required to call the GET price URI |
213.4. Other Path Constraints
We can add a few other path constraints that do not directly relate to roles. For example, we can exclude or enable a URI for all callers.
http.authorizeRequests(cfg->cfg.antMatchers(
"/api/authorities/paths/nobody/**")
.denyAll()); (1)
http.authorizeRequests(cfg->cfg.antMatchers(
"/api/authorities/paths/authn/**")
.authenticated()); (2)
1 | all callers of the nobody URI will be denied |
2 | all authenticated callers of the authn URI will be accepted |
213.5. Other Path Constraints Usage
The following example shows a caller attempting to access the URIs that either deny all callers or accept all authenticated callers.
$ curl http://localhost:8080/api/authorities/paths/authn -u frasier:password (1)
[frasier, [PRICE_CHECK, ROLE_CUSTOMER]]
$ curl http://localhost:8080/api/authorities/paths/nobody -u frasier:password (2)
{"timestamp":"2020-07-14T18:09:38.669+00:00","status":403,
"error":"Forbidden","message":"Forbidden","path":"/api/authorities/paths/nobody"}
$ curl http://localhost:8080/api/authorities/paths/authn (3)
{"timestamp":"2020-07-14T18:15:24.945+00:00","status":401,
"error":"Unauthorized","message":"Unauthorized","path":"/api/authorities/paths/authn"}
1 | frasier was able to access the authn URI because he was authenticated |
2 | frasier was not able to access the nobody URI because all have been denied for that URI |
3 | anonymous user was not able to access the authn URI because they were not authenticated |
214. Authorization
With that example in place, we can look behind the scenes to see how this occurred.
214.1. Review: FilterSecurityInterceptor At End of Chain
If you remember when we inspected the filter chain setup for our API during the breakpoint in FilterChainProxy.doFilterInternal(), there was a FilterSecurityInterceptor at the end of the chain. This is where our path-based authorization constraints get carried out.
214.2. Attempt Authorization Call
We can set a breakpoint in the AbstractSecurityInterceptor.attemptAuthorization() method to observe the authorization process.
214.3. FilterSecurityInterceptor Calls
-
the FilterSecurityInterceptor is at the end of the Security FilterChain and calls the AccessDecisionManager to decide whether the authenticated caller has access to the target object. The call quietly returns without an exception if accepted and throws an AccessDeniedException if denied.
-
the assigned AccessDecisionManager is pre-populated with a set of AccessDecisionVoters (e.g., WebExpressionVoter) based on security definitions and passed the authenticated user, a reference to the target object, and the relevant rules associated with that target to potentially be used by the voters.
-
the AccessDecisionVoter returns an answer that is either ACCESS_GRANTED, ACCESS_ABSTAIN, or ACCESS_DENIED.
The overall evaluation depends on the responses from the voters and the aggregate answer setting (e.g., affirmative, consensus, unanimous) of the manager.
214.4. AccessDecisionManager
The AccessDecisionManager comes in three flavors, and we can also create our own. The three flavors provided by Spring are:
-
AffirmativeBased - returns positive if any voter returns affirmative
-
ConsensusBased - returns positive if a majority of voters return affirmative
-
UnanimousBased - returns positive if all voters return affirmative or abstain
Denial is signaled with a thrown AccessDeniedException. AffirmativeBased is the default. There is a setting in each for how to handle 100% abstain results; the default is access denied.
214.5. Assigning Custom AccessDecisionManager
The following code snippet shows an example of creating a UnanimousBased AccessDecisionManager and populating it with a custom list of voters.
@Bean
public AccessDecisionManager accessDecisionManager() {
return new UnanimousBased(List.of(
new WebExpressionVoter(),
new RoleVoter(),
new AuthenticatedVoter()));
}
A custom AccessDecisionManager can be assigned to the builder returned from the access restrictions call.
http.authorizeRequests(cfg->cfg.antMatchers(
"/api/authorities/paths/admin/**")
.hasRole("ADMIN").accessDecisionManager(/* custom ADM here*/));
214.6. AccessDecisionVoter
There are several AccessDecisionVoter classes that take care of determining whether the specific constraints are satisfied, violated, or make no determination. We can also create our own by extending or re-implementing any of the existing implementations and registering them using the technique shown in the snippets above. A rough sketch of a custom voter follows.
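The voter below is hypothetical and not part of the examples; it grants access to any authenticated caller and otherwise abstains:
import java.util.Collection;
import org.springframework.security.access.AccessDecisionVoter;
import org.springframework.security.access.ConfigAttribute;
import org.springframework.security.core.Authentication;
public class AuthenticatedUserVoter implements AccessDecisionVoter<Object> {
@Override
public boolean supports(ConfigAttribute attribute) { return true; } //inspect all rules
@Override
public boolean supports(Class<?> clazz) { return true; } //inspect all target types
@Override
public int vote(Authentication authentication, Object object,
Collection<ConfigAttribute> attributes) {
return (authentication != null && authentication.isAuthenticated()) ?
ACCESS_GRANTED : ACCESS_ABSTAIN; //never votes to deny
}
}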
In our first case, Spring converted our rules to be resolved by the WebExpressionVoter. Because of that, we will see many similarities between the constraint behavior of URI-based constraints and the expression-based constraints covered towards the end of this lecture.
215. Role Inheritance
Role inheritance provides an alternative to listing individual roles per URI constraint. Let's take our case of sam with the ADMIN role. He is forbidden from calling the customer URI.
$ curl http://localhost:8080/api/authorities/paths/customer -u sam:password
{"timestamp":"2020-07-14T20:15:19.931+00:00","status":403,"error":"Forbidden",
"message":"Forbidden","path":"/api/authorities/paths/customer"}
215.1. Role Inheritance Definition
We can define a @Bean that provides a RoleHierarchy expressing which roles inherit from other roles. The syntax of this constructor is a String, based on the legacy XML definition interface.
@Bean
public RoleHierarchy roleHierarchy() {
RoleHierarchyImpl roleHierarchy = new RoleHierarchyImpl();
roleHierarchy.setHierarchy(StringUtils.join(List.of(
"ROLE_ADMIN > ROLE_CLERK", (1)
"ROLE_CLERK > ROLE_CUSTOMER"), (2)
System.lineSeparator())); (3)
return roleHierarchy;
}
1 | role ADMIN will inherit all access applied to role CLERK |
2 | role CLERK will inherit all access applied to role CUSTOMER |
3 | String expression built using new lines |
With the above @Bean in place and our application restarted, users with the ADMIN or CLERK role are able to invoke the customer URI.
$ curl http://localhost:8080/api/authorities/paths/customer -u sam:password
[sam, [ROLE_ADMIN]]
216. @Secured
As stated before, URIs are one way to identify a target meant for access control. However, it is not always the case that we are protecting a controller or that we want to express security constraints so far from the lower-level component method calls needing protection.
We have at least three options when implementing component method-level access control:
-
@Secured
-
JSR-250
-
expressions
I will cover @Secured and JSR-250 first, since they have a similar, basic constraint capability, and save expressions for the end.
216.1. Enabling @Secured Annotations
@Secured annotations are disabled by default. We can enable them by supplying an @EnableGlobalMethodSecurity annotation with securedEnabled set to true.
@Configuration
@EnableGlobalMethodSecurity(
securedEnabled = true //@Secured({"ROLE_MEMBER"})
)
@RequiredArgsConstructor
public class SecurityConfiguration {
216.2. @Secured Annotation
We can add the @Secured annotation at the class and method level of the targets we want protected. Values are expressed as authority values. Therefore, since the following example requires the ADMIN role, we must express it as the ROLE_ADMIN authority.
@RestController
@RequestMapping("/api/authorities/secured")
@RequiredArgsConstructor
public class SecuredAuthoritiesController {
private final WhoAmIController whoAmI;
@Secured("ROLE_ADMIN") (1)
@GetMapping(path = "admin", produces = {MediaType.TEXT_PLAIN_VALUE})
public ResponseEntity<String> doAdmin(
@AuthenticationPrincipal UserDetails user) {
return whoAmI.getCallerInfo(user);
}
1 | caller checked for ROLE_ADMIN authority when calling doAdmin method |
216.3. @Secured Annotation Checks
The @Secured annotation supports requiring one or more authorities in order to invoke a particular method.
$ curl http://localhost:8080/api/authorities/secured/admin -u sam:password (1)
[sam, [ROLE_ADMIN]]
$ curl http://localhost:8080/api/authorities/secured/admin -u woody:password (2)
{"timestamp":"2020-07-14T21:11:00.395+00:00","status":403,
"error":"Forbidden","trace":"org.springframework.security.access.AccessDeniedException: ...(lots!!!)
1 | sam has the required ROLE_ADMIN authority required to invoke doAdmin |
2 | woody lacks required ROLE_ADMIN authority needed to invoke doAdmin and is rejected
with an AccessDeniedException and a very large stack trace |
216.4. @Secured Many Roles
@Secured will support many roles ORed together.
@Secured({"ROLE_ADMIN", "ROLE_CLERK"})
@GetMapping(path = "price", produces = {MediaType.TEXT_PLAIN_VALUE})
public ResponseEntity<String> checkPrice(
A user with either the ADMIN or CLERK role will be given access to checkPrice().
$ curl http://localhost:8080/api/authorities/secured/price -u woody:password
[woody, [ROLE_CLERK]]
$ curl http://localhost:8080/api/authorities/secured/price -u sam:password
[sam, [ROLE_ADMIN]]
216.5. @Secured Only Processing Roles
However, @Secured evaluates using a RoleVoter, which only processes roles.
@Secured({"ROLE_ADMIN", "ROLE_CLERK", "PRICE_CHECK"}) (1)
@GetMapping(path = "price", produces = {MediaType.TEXT_PLAIN_VALUE})
public ResponseEntity<String> checkPrice(
1 | PRICE_CHECK permission will be ignored |
Therefore, we cannot assign a @Secured constraint to allow a permission like we did with the URI constraint.
$ curl http://localhost:8080/api/authorities/paths/price -u frasier:password (1)
[frasier, [PRICE_CHECK, ROLE_CUSTOMER]]
$ curl http://localhost:8080/api/authorities/secured/price -u frasier:password (2)
{"timestamp":"2020-07-14T21:24:20.665+00:00","status":403,
"error":"Forbidden","trace":"org.springframework.security.access.AccessDeniedException ...(lots!!!)
1 | frasier can call URI GET paths/price because he has permission PRICE_CHECK and URI-based
constraints honor non-role authorities (i.e., permissions) |
2 | frasier cannot call URI GET secured/price because checkPrice() is constrained
by @Secured and that only supports roles |
216.6. @Secured Does Not Support Role Inheritance
The @Secured annotation does not support the role inheritance we put in place when securing URIs.
$ curl http://localhost:8080/api/authorities/paths/clerk -u sam:password (1)
[sam, [ROLE_ADMIN]]
$ curl http://localhost:8080/api/authorities/secured/clerk -u sam:password (2)
{"timestamp":"2020-07-14T21:48:40.063+00:00","status":403,
"error":"Forbidden","trace":"org.springframework.security.access.AccessDeniedException ...(lots!!!)
1 | sam is able to call paths/clerk URI because of ADMIN role inherits access from CLERK role |
2 | sam is unable to call doClerk() method because @Secured does not honor role inheritance |
217. Controller Advice
When using URI-based constraints, 403/Forbidden checks are done before calling the controller and are handled by a default exception advice that limits the amount of data emitted in the response. When using annotation-based constraints, an AccessDeniedException is thrown during the call to the controller and is currently missing an exception advice. That causes a very large stack trace to be returned to the caller (abbreviated here with "…(lots!!!)").
$ curl http://localhost:8080/api/authorities/secured/clerk -u sam:password (2)
{"timestamp":"2020-07-14T21:48:40.063+00:00","status":403,
"error":"Forbidden","trace":"org.springframework.security.access.AccessDeniedException ...(lots!!!)
217.1. AccessDeniedException Exception Handler
We can correct that information bleed by adding an @ExceptionHandler to address AccessDeniedException. In the example below, I am building a string with the caller’s identity and filling in the standard fields for the returned MessageDTO used in the error reporting in my BaseExceptionAdvice.
...
import org.springframework.security.access.AccessDeniedException;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;
@RestControllerAdvice
public class ExceptionAdvice extends info.ejava.examples.common.web.BaseExceptionAdvice { (1)
@ExceptionHandler({AccessDeniedException.class}) (2)
public ResponseEntity<MessageDTO> handle(AccessDeniedException ex) {
String text=String.format("caller[%s] is forbidden from making this request",
getPrincipal());
return this.buildResponse(HttpStatus.FORBIDDEN, null, text, (Instant)null);
}
protected String getPrincipal() {
try { (3)
return SecurityContextHolder.getContext().getAuthentication().getName();
} catch (NullPointerException ex) {
return "null";
}
}
}
1 | extending base class with helper methods and core set of exception handlers |
2 | adding an exception handler to intelligently handle access denial exceptions |
3 | SecurityContextHolder provides Authentication object for current caller |
217.2. AccessDeniedException Exception Result
With the above exception advice in place, the stack trace from the AccessDeniedException has been reduced to the following useful information returned to the caller. The caller is told what they called and who the caller identity was when they called.
$ curl http://localhost:8080/api/authorities/secured/clerk -u sam:password
{"url":"http://localhost:8080/api/authorities/secured/clerk","message":"Forbidden",
"description":"caller[sam] is forbidden from making this request",
"timestamp":"2020-07-14T21:56:32.743996Z"}
218. JSR-250
JSR-250 is an industry Java standard, also adopted by JakartaEE, for expressing common aspects (including authorization constraints) using annotations. It has the ability to express the same things as @Secured and a bit more. @Secured lacks the ability to express "permit all" and "deny all". We can do that with JSR-250 annotations.
218.1. Enabling JSR-250
JSR-250 authorization annotations are also disabled by default. We can enable them the same way as @Secured by setting the @EnableGlobalMethodSecurity jsr250Enabled value to true.
@Configuration
@EnableGlobalMethodSecurity(
jsr250Enabled = true //@RolesAllowed({"ROLE_MANAGER"})
)
@RequiredArgsConstructor
public class SecurityConfiguration {
218.2. @RolesAllowed Annotation
JSR-250 has a few annotations, but its core @RolesAllowed is a 1:1 match for what we can do with @Secured. The following example shows the doAdmin() method restricted to callers with the ADMIN role, expressed as its ROLE_ADMIN authority value.
@RestController
@RequestMapping("/api/authorities/jsr250")
@RequiredArgsConstructor
public class Jsr250AuthoritiesController {
private final WhoAmIController whoAmI;
@RolesAllowed("ROLE_ADMIN") (1)
@GetMapping(path = "admin", produces = {MediaType.TEXT_PLAIN_VALUE})
public ResponseEntity<String> doAdmin(
@AuthenticationPrincipal UserDetails user) {
return whoAmI.getCallerInfo(user);
}
1 | role is expressed with ROLE_ prefix |
218.3. @RolesAllowed Annotation Checks
The @RolesAllowed annotation is restricting callers of doAdmin() to those having the ROLE_ADMIN authority.
$ curl http://localhost:8080/api/authorities/jsr250/admin -u sam:password (1)
[sam, [ROLE_ADMIN]]
$ curl http://localhost:8080/api/authorities/jsr250/admin -u woody:password (2)
{"url":"http://localhost:8080/api/authorities/jsr250/admin","message":"Forbidden",
"description":"caller[woody] is forbidden from making this request",
"timestamp":"2020-07-14T22:10:31.177471Z"}
1 | sam can invoke doAdmin() because he has the ROLE_ADMIN authority |
2 | woody cannot invoke doAdmin() because he does not have the ROLE_ADMIN authority |
218.4. Multiple Roles
The @RolesAllowed annotation can express multiple authorities that the caller may have.
@RolesAllowed({"ROLE_ADMIN", "ROLE_CLERK", "PRICE_CHECK"})
@GetMapping(path = "price", produces = {MediaType.TEXT_PLAIN_VALUE})
public ResponseEntity<String> checkPrice(
218.5. Multiple Role Check
The following shows where both sam and woody are able to invoke checkPrice() because they each have one of the required authorities.
$ curl http://localhost:8080/api/authorities/jsr250/price -u sam:password (1)
[sam, [ROLE_ADMIN]]
$ curl http://localhost:8080/api/authorities/jsr250/price -u woody:password (2)
[woody, [ROLE_CLERK]]
1 | sam can invoke checkPrice() because he has the ROLE_ADMIN authority |
2 | woody can invoke checkPrice() because he has the ROLE_CLERK authority |
218.6. JSR-250 Does not Support Non-Role Authorities
Out-of-the-box, JSR-250 authorization annotation processing does not support non-role authorities. The following example shows where frasier is able to call URI GET paths/price but unable to call checkPrice() of the JSR-250 controller even though it was annotated with one of his authorities.
$ curl http://localhost:8080/api/authorities/paths/price -u frasier:password (1)
[frasier, [PRICE_CHECK, ROLE_CUSTOMER]]
$ curl http://localhost:8080/api/authorities/jsr250/price -u frasier:password (2)
{"url":"http://localhost:8080/api/authorities/jsr250/price","message":"Forbidden",
"description":"caller[frasier] is forbidden from making this request",
"timestamp":"2020-07-14T22:13:26.247328Z"}
1 | frasier can invoke URI GET paths/price because he has the PRICE_CHECK authority
and URI-based constraints support non-role authorities |
2 | frasier cannot invoke the JSR-250 constrained checkPrice() even though he has the PRICE_CHECK permission because JSR-250 does not support non-role authorities |
219. Expressions
As demonstrated, @Secured and JSR-250-based constraints are functional but very basic. If we need more robust handling of constraints, we can use the Spring Expression Language (SpEL) and Pre/Post constraints. Expression support is enabled by adding the following setting to the @EnableGlobalMethodSecurity annotation.
@EnableGlobalMethodSecurity(
prePostEnabled = true //@PreAuthorize("hasAuthority('ROLE_ADMIN')"), @PreAuthorize("hasRole('ADMIN')")
)
219.1. Expression Role Constraint
Expressions support many callable features and I am only going to scratch the surface here. The primary annotation is @PreAuthorize and, whatever the constraint is, it is checked prior to calling the method. There are also features to filter inputs and outputs based on flexible configurations. I will be sticking to the authorization basics and not demonstrating the other features here. Notice that the contents of the string look like a function call, and they are. The following example constrains the doAdmin() method to users with the ADMIN role.
@RestController
@RequestMapping("/api/authorities/expressions")
@RequiredArgsConstructor
public class ExpressionsAuthoritiesController {
private final WhoAmIController whoAmI;
@PreAuthorize("hasRole('ADMIN')") (1)
@GetMapping(path = "admin", produces = {MediaType.TEXT_PLAIN_VALUE})
public ResponseEntity<String> doAdmin(
@AuthenticationPrincipal UserDetails user) {
return whoAmI.getCallerInfo(user);
}
1 | hasRole automatically adds the ROLE_ prefix |
219.2. Expression Role Constraint Checks
Much like @Secured and JSR-250, the following shows the caller being checked by expression for whether they have the ADMIN role. The ROLE_ prefix is automatically applied.
$ curl http://localhost:8080/api/authorities/expressions/admin -u sam:password (1)
[sam, [ROLE_ADMIN]]
$ curl http://localhost:8080/api/authorities/expressions/admin -u woody:password (2)
{"url":"http://localhost:8080/api/authorities/expressions/admin","message":"Forbidden",
"description":"caller[woody] is forbidden from making this request",
"timestamp":"2020-07-14T22:31:07.669546Z"}
1 | sam can invoke doAdmin() because he has the ADMIN role |
2 | woody cannot invoke doAdmin() because he does not have the ADMIN role |
219.3. Expressions Support Permissions and Role Inheritance
As noted earlier with URI-based constraints, expressions support non-role authorities and role inheritance.
The following example checks whether the caller has an authority and chooses to manually supply the ROLE_
prefix.
@PreAuthorize("hasAuthority('ROLE_CLERK')")
@GetMapping(path = "clerk", produces = {MediaType.TEXT_PLAIN_VALUE})
public ResponseEntity<String> doClerk(
The following execution demonstrates that a caller with the ADMIN role will be able to call a method that requires the CLERK role because we earlier configured the ADMIN role to inherit all CLERK role accesses.
$ curl http://localhost:8080/api/authorities/expressions/clerk -u sam:password
[sam, [ROLE_ADMIN]]
219.4. Supports Permissions and Boolean Logic
Expressions can get very detailed. The following shows two evaluations being called and their result ORed together. The first evaluation checks whether the caller has certain roles. The second checks whether the caller has a certain permission.
@PreAuthorize("hasAnyRole('ADMIN','CLERK') or hasAuthority('PRICE_CHECK')")
@GetMapping(path = "price", produces = {MediaType.TEXT_PLAIN_VALUE})
public ResponseEntity<String> checkPrice(
$ curl http://localhost:8080/api/authorities/expressions/price -u sam:password (1)
[sam, [ROLE_ADMIN]]
$ curl http://localhost:8080/api/authorities/expressions/price -u woody:password (2)
[woody, [ROLE_CLERK]]
1 | sam can call checkPrice() because he satisfied the hasAnyRole() check by having the ADMIN role |
2 | woody can call checkPrice() because he satisfied the hasAnyRole() check by having the CLERK role |
$ curl http://localhost:8080/api/authorities/expressions/price -u frasier:password (1)
[frasier, [PRICE_CHECK, ROLE_CUSTOMER]]
1 | frasier can call checkPrice() because he satisfied the hasAuthority() check by having the PRICE_CHECK permission |
$ curl http://localhost:8080/api/authorities/expressions/customer -u norm:password (1)
[norm, [ROLE_CUSTOMER]]
$ curl http://localhost:8080/api/authorities/expressions/price -u norm:password (2)
{"url":"http://localhost:8080/api/authorities/expressions/price","message":"Forbidden",
"description":"caller[norm] is forbidden from making this request",
"timestamp":"2020-07-14T22:48:04.771588Z"}
1 | norm can call doCustomer() because he satisfied the hasRole() check by having the CUSTOMER role |
2 | norm cannot call checkPrice() because he failed both the hasAnyRole() and hasAuthority() checks by not having any of the required authorities |
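The doCustomer() endpoint called above is not shown in these snippets; the following is a sketch consistent with the other endpoints of this controller (the actual code may differ slightly):
@PreAuthorize("hasRole('CUSTOMER')")
@GetMapping(path = "customer", produces = {MediaType.TEXT_PLAIN_VALUE})
public ResponseEntity<String> doCustomer(
        @AuthenticationPrincipal UserDetails user) {
    return whoAmI.getCallerInfo(user);
}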
220. Summary
In this module we learned:
-
the purpose of authorities, roles, and permissions
-
how to express authorization constraints using URI-based and annotation-based constraints
-
how enforcement of the constraints is accomplished
-
how the access control framework centers around the AccessDecisionManager and AccessDecisionVoter classes
-
how to implement role inheritance for URI and expression-based constraints
-
how to implement an AccessDeniedException controller advice to hide unnecessary stack trace information and provide useful error information to the caller
-
how expression-based constraints are nearly limitless in what they can express
JWT/JWS Token Authn/Authz
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
221. Introduction
In previous lectures we have covered many aspects of the Spring/Spring Boot authentication and authorization frameworks and have mostly demonstrated that with HTTP Basic Authentication. In this lecture we are going to use what we learned about the framework to implement a different authentication strategy — JSON Web Token (JWT) and JSON Web Signature (JWS).
The focus of this lecture will be a brief introduction to JSON Web Tokens (JWT) and how they could be implemented in the Spring/Spring Boot Security Framework. The real meat of this lecture is to provide a concrete example of how to leverage and extend the provided framework.
221.1. Goals
You will learn:
-
what is a JSON Web Token (JWT) and JSON Web Signature (JWS)
-
what problems does JWT/JWS solve with API authentication and authorization
-
how to write and integrate custom authentication and authorization framework classes to implement an alternate security mechanism
-
how to leverage Spring Expression Language to evaluate parameters and properties of the
SecurityContext
221.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
construct and sign a JWT with claims representing an authenticated user
-
verify a JWS signature and parse the body to obtain claims to re-instantiate an authenticated user details
-
identify the similarities and differences in flows between HTTP Basic and JWS authentication/authorization flows
-
build a custom JWS authentication filter to extract login information, authenticate the user, build a JWS bearer token, and populate the HTTP response header with its value
-
build a custom JWS authorization filter to extract the JWS bearer token from the HTTP request, verify its authenticity, and establish the authenticated identity for the current security context
-
implement custom error reporting with authentication and authorization
222. Identity and Authorities
Some key points of security are to identify the caller and determine the authorities they have.
-
When using BASIC authentication, we presented credentials each time. This was all in one shot, every time on the way to the operation being invoked.
-
When using FORM authentication, we presented credentials (using a FORM) up front to establish a session and then referenced that session on subsequent calls.
The benefit to BASIC is that it is stateless and can work with multiple servers — whether clustered or peer services. The bad part about BASIC is that we must present the credentials each time and the services must have access to our user details (including passwords) to be able to do anything with them.
The benefit to FORM is that we present credentials one time and then reference the work of that authentication through a session ID. The bad part of FORM is that the session is on the server and harder to share with members of a cluster and impossible to share with peer services.
What we intend to do with token-based authentication is to mimic the one-time login of FORM and stateless aspects of BASIC. To do that — we must give the client at login, information they can pass to the services hosting operations that can securely identify them (at a minimum) and potentially identify the authorities they have without having that stored on the server hosting the operation.
222.1. BASIC Authentication/Authorization
To better understand the token flow, I would like to start by reviewing the BASIC Auth flow.
-
the BasicAuthenticationFilter ("the filter") is called in its place within the FilterChainProxy
-
the filter extracts the username/password credentials from the Authorization header and stages them in a UsernamePasswordAuthenticationToken ("the authRequest")
-
the filter passes the authRequest to the AuthenticationManager to authenticate
-
the AuthenticationManager, thru its assigned AuthenticationProvider, successfully authenticates the request and builds an authResult
-
the filter receives the successful response with the authResult hosting the user details, including username and granted authorities
-
the filter stores the authResult in the SecurityContext
-
the filter invokes the next filter in the chain, which will eventually call the target operation
All this — authentication and user details management — must occur within the same server as the operation for BASIC Auth.
223. Tokens
With token authentication, we are going to break the flow into two parts: authentication/login and authorization/operation.
223.1. Token Authentication/Login
The following is a conceptual depiction of the authentication flow.
It differs from the BASIC Authentication flow in that nothing is
stored in the SecurityContext
during the login/authentication.
Everything needed to authorize the
follow-on operation call is encoded into a Bearer Token
and
returned to the caller in an Authorization
header. Things encoded
in the bearer token are referred to as "claims".
Step 2 extracts the username/password from a POST payload — very similar to FORM Auth. However, we could have just as easily implemented the same extraction technique used by BASIC Auth.
Step 7 returns the token representation of the authResult back to the caller that just successfully authenticated. They will present that information later when they invoke an operation in this or a different server. There is no requirement for the token returned to be used locally. The token can be used on any server that trusts tokens created by this server. The biggest requirement is that we must trust the builder of the token and be able to verify that the token never gets modified.
223.2. Token Authorization/Operation
To invoke the intended operation, the caller must include an Authorization
header with the bearer token returned to them from the login.
This will carry their identity (at a minimum) and authorities encoded
in the bearer token’s claims section.
-
the Token AuthorizationFilter ("the filter") is called by the FilterChainProxy
-
the filter extracts the bearer token from the Authorization header and wraps that in an authRequest
-
the filter passes the authRequest to the AuthenticationManager to authenticate
-
the AuthenticationManager, with its TokenAuthenticationProvider, is able to verify the contents of the token and re-build the necessary portions of the authResult
-
the authResult is returned to the filter
-
the filter stores the authResult in the SecurityContext
-
the filter invokes the next filter in the chain, which will eventually call the target operation
Bearer Token has Already Been Authenticated
Since the filter knows this is a bearer token, it could have bypassed the call to the AuthenticationManager. However, making the call keeps the responsibilities of the classes consistent with their original purpose and also gives the AuthenticationProvider the option to obtain more user details for the caller.
|
223.3. Authentication Separate from Authorization
Notice the overall client-to-operation call was broken into two independent workflows. This enables the client to present their credentials a limited number of times and for the operations to be spread out through the network. The primary requirement to allow this to occur is TRUST.
We need the ability for the authResult to be represented in a token, carried around by the caller, and presented later to the operations with the trust that it was not modified.
JSON Web Tokens (JWT) are a way to express the user details within the body of a token. JSON Web Signature (JWS) is a way to assure that the original token has not been modified. JSON Web Encryption (JWE) is a way to assure the original token stays private. This lecture and example will focus on JWS — but it is common to refer to the overall topic as JWT.
223.4. JWT Terms
The following table contains some key, introductory terms related to JWT.
JSON Web Token (JWT) |
a compact JSON claims representation that makes up the payload of a JWS or JWE structure. Basically — this is where we place what we want to represent. In our case, we will be representing the authenticated principal and their assigned authorities. |
JSON Web Signature (JWS) |
represents content secured with a digital signature (signed with a private key and verifiable using a sharable public key) or Message Authentication Codes (MACs) (signed and verifiable using a shared, symmetric key) using JSON-based data structures |
JSON Web Encryption (JWE) |
represents encrypted content using JSON-based data structures |
JSON Web Algorithms (JWA) |
a registry of required, recommended, and optional algorithms and identifiers to be used with JWS and JWE |
JSON Object Signing and Encryption (JOSE) Header |
JSON document containing the cryptographic operations/parameters used |
JWS Payload |
the message to be secured — an arbitrary sequence of octets |
JWS Signature |
digital signature or MAC over the header and payload |
Unsecured JWS |
JWS without a signature |
JWS Compact Serialization |
a representation of the JWS as a compact, URL-safe String meant for use in query parameters and HTTP headers |
JWS JSON Serialization |
a JSON representation where individual fields may be signed using one or more keys. There is no emphasis on compactness for this use, but it makes use of many of the underlying constructs of JWS. |
224. JWT Authentication
With the general workflows understood and a few concepts of JWT/JWS introduced, I want to update the diagrams slightly with real classnames from the examples and walk through how we can add JWT authentication to Spring/Spring Boot.
224.2. Example JWT Authorization/Operation Call Flow
Let’s take a look at the implementation to be able to fully understand both JWT/JWS and how to leverage the Spring/Spring Boot Security Framework.
225. Maven Dependencies
Spring does not provide its own standalone JWT/JWS library or contain a direct reference to any. I happen to be using the jjwt library from jsonwebtoken.
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt-api</artifactId>
</dependency>
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt-impl</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt-jackson</artifactId>
<scope>runtime</scope>
</dependency>
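The version elements are omitted above and are presumably supplied by a dependencyManagement section elsewhere in the project. Since jjwt is not managed by the Spring Boot BOM, the project has to define the version itself. A sketch, using a hypothetical jjwt.version property:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.jsonwebtoken</groupId>
            <artifactId>jjwt-api</artifactId>
            <version>${jjwt.version}</version>
        </dependency>
        <!-- jjwt-impl and jjwt-jackson would be managed the same way -->
    </dependencies>
</dependencyManagement>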
226. JwtConfig
At the bottom of the details of our JWT/JWS authentication and authorization
example is a @ConfigurationProperties
class to represent the configuration.
@ConfigurationProperties(prefix = "jwt")
@Data
@Slf4j
public class JwtConfig {
@NotNull
private String loginUri; (1)
private String key; (2)
private String authoritiesKey = "auth"; (3)
private String headerPrefix = "Bearer "; (4)
private int expirationSecs=60*60*24; (5)
public String getKey() {
if (key==null) {
key=UUID.randomUUID().toString();
log.info("generated JWT signing key={}",key);
}
return key;
}
public SecretKey getSigningKey() {
return Keys.hmacShaKeyFor(getKey().getBytes(Charset.forName("UTF-8")));
}
public SecretKey getVerifyKey() {
return getSigningKey();
}
}
1 | login-uri defines the URI for the JWT authentication |
2 | key defines a value to build a symmetric SecretKey |
3 | authorities-key is the JSON key for the user’s assigned authorities
within the JWT body |
4 | header-prefix defines the prefix in the Authorization header.
This will likely never change, but it is good to define it in a single,
common place |
5 | expiration-secs is the number of seconds from generation for when
the token will expire. Set this to a low value to test expiration and
large value to limit login requirements |
226.1. JwtConfig application.properties
The following shows an example set of properties defined for the
@ConfigurationProperties
class.
jwt.key=123456789012345678901234567890123456789012345678901234567890 (1)
jwt.expiration-secs=300000000
jwt.login-uri=/api/login
1 | the key must remain protected — but for symmetric keys must be shared between signer and verifiers |
227. JwtUtil
This class contains all the algorithms that are core to implementing token authentication using JWT/JWS. It is configured by the values in JwtConfig.
@RequiredArgsConstructor
public class JwtUtil {
private final JwtConfig jwtConfig;
227.1. Dependencies on JwtUtil
The following diagram shows the dependencies on JwtUtil
and
also on JwtConfig
.
-
JwtAuthenticationFilter needs to process requests to the loginUri, generate a JWS token for successfully authenticated users, and set that JWS token on the HTTP response
-
JwtAuthorizationFilter processes all messages in the chain and gets the JWS token from the Authorization header
-
JwtAuthenticationProvider parses the String token into an Authentication result
JwtUtil
handles the meat of that work relative to JWS. The other
classes deal with plugging that work into places in the security
flow.
227.2. JwtUtil: generateToken()
The following code snippet shows creating a JWS builder that will end up signing the header and payload. Individual setters are called for well-known claims. A generic claim(key, value) is used to add the authorities.
import io.jsonwebtoken.Jwts;
...
public String generateToken(Authentication authenticated) {
String token = Jwts.builder()
.setSubject(authenticated.getName()) (1)
.setIssuedAt(new Date())
.setExpiration(getExpires()) (2)
.claim(jwtConfig.getAuthoritiesKey(), getAuthorities(authenticated))
.signWith(jwtConfig.getSigningKey())
.compact();
return token;
}
1 | JWT has some well-known claim values |
2 | claim(key,value) used to set custom claim values |
227.3. JwtUtil: generateToken() Helper Methods
The following helper methods are used in setting the claim values of the JWT.
protected Date getExpires() { (1)
Instant expiresInstant = LocalDateTime.now()
.plus(jwtConfig.getExpirationSecs(), ChronoUnit.SECONDS)
.atZone(ZoneOffset.systemDefault())
.toInstant();
return Date.from(expiresInstant);
}
protected List<String> getAuthorities(Authentication authenticated) {
return authenticated.getAuthorities().stream() (2)
.map(a->a.getAuthority())
.collect(Collectors.toList());
}
1 | calculates the instant in the future — relative to local time — when the token will expire |
2 | strip authorities down to String authorities to make marshalled value less verbose |
The following helper method in the JwtConfig
class generates a SecretKey
suitable for signing the JWS.
...
import io.jsonwebtoken.security.Keys;
import javax.crypto.SecretKey;
public class JwtConfig {
public SecretKey getSigningKey() {
return Keys.hmacShaKeyFor(getKey() (1)
.getBytes(Charset.forName("UTF-8")));
}
1 | the HMAC-SHA algorithm and the 60-character key from the example properties will generate an HS384 SecretKey for signing |
227.4. Example Encoded JWS
The following is an example of what the token value will look like. There are three base64 values separated by periods ("."). The first represents the header, the second the body, and the third the cryptographic signature of the header and body.
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJmcmFzaWVyIiwiaWF0IjoxNTk0ODk1Nzk3LCJleHAiOjE1OTQ4OTk1MTcsImF1dGhvcml0aWVzIjpbIlBSSUNFX0NIRUNLIiwiUk9MRV9DVVNUT01FUiJdLCJqdGkiOiI5NjQ3MzE1OS03MTNjLTRlN2EtYmE4Zi0zYWMwMzlmODhjZGQifQ.ED-j7mdO2bwNdZdI4I2Hm_88j-aSeYkrbdlEacmjotU
(1)
1 | base64(JWS Header).base64(JWS body).base64(sign(header + body)) |
There is no set limit to the size of HTTP headers. However, it has been pointed out that Apache defaults to an 8KB limit and IIS is 16KB. The default size for Tomcat is 4KB. In case you were counting, the above string is 272 characters long. |
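If you want to decode the first two segments yourself instead of pasting the token into a web site, they are plain URL-safe Base64. A minimal sketch using only the JDK (pass the full compact JWS as the first argument):
import java.util.Base64;
public class DecodeJws {
    public static void main(String[] args) {
        String jws = args[0];
        String[] parts = jws.split("\\.");
        Base64.Decoder decoder = Base64.getUrlDecoder();
        System.out.println(new String(decoder.decode(parts[0]))); //JOSE header JSON
        System.out.println(new String(decoder.decode(parts[1]))); //claims body JSON
        //parts[2] is the binary signature and is not human-readable
    }
}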
227.5. Example Decoded JWS Header and Body
The following is what is produced if we base64 decode the first two sections of the token above. We can use sites like jsonwebtoken.io and jwt.io to inspect JWS tokens.
The header identifies the type and signing algorithm.
{"typ":"JWT","alg":"HS256"}
The body carries the claims.
{"sub":"frasier","iat":1594895797,"exp":1594899517,"authorities":["PRICE_CHECK","ROLE_CUSTOMER"],"jti":"96473159-713c-4e7a-ba8f-3ac039f88cdd"}
Some claims (e.g., subject/sub, issued at/iat, expiration/exp) are standard JWT claims; others (e.g., the authorities list) are custom claims added by the application.
227.6. JwtUtil: parseToken()
The parseToken() method verifies that the contents of the JWS have not been modified and re-assembles an authenticated Authentication object to be returned by the AuthenticationProvider and AuthenticationManager and placed into the SecurityContext for when the operation is executed.
...
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.JwtException;
import io.jsonwebtoken.Jwts;
public Authentication parseToken(String token) throws JwtException {
Claims body = Jwts.parserBuilder()
.setSigningKey(jwtConfig.getVerifyKey()) (1)
.build()
.parseClaimsJws(token)
.getBody();
User user = new User(body.getSubject(), "", getGrantedAuthorities(body));
Authentication authentication=new UsernamePasswordAuthenticationToken(
user, token, (2)
user.getAuthorities());
return authentication;
}
1 | verification and signing keys are the same for symmetric algorithms |
2 | there is no real use for the token in the authResult. It was placed in the password position in the event we wanted to locate it. |
227.7. JwtUtil: parseToken() Helper Methods
The following helper method extracts the authority strings stored in the (parsed) token and wraps them in GrantedAuthority
objects to be used by the authorization framework.
protected List<GrantedAuthority> getGrantedAuthorities(Claims claims) {
List<String> authorities = (List) claims.get(jwtConfig.getAuthoritiesKey());
return authorities==null ? Collections.emptyList() :
authorities.stream()
.map(a->new SimpleGrantedAuthority(a)) (1)
.collect(Collectors.toList());
}
1 | converting authority strings from token into GrantedAuthority objects used by Spring security framework |
The following helper method returns the verify key to be the same as the signing key.
public class JwtConfig {
public SecretKey getSigningKey() {
return Keys.hmacShaKeyFor(getKey().getBytes(Charset.forName("UTF-8")));
}
public SecretKey getVerifyKey() {
return getSigningKey();
}
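Putting generateToken() and parseToken() together, the round trip can be exercised outside of the web tier. A minimal sketch, assuming the JwtConfig and JwtUtil classes shown above are on the classpath in the same package:
import java.util.List;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
public class JwtRoundTripDemo {
    public static void main(String[] args) {
        JwtConfig config = new JwtConfig(); //key left null, so a random key gets generated
        config.setLoginUri("/api/login");
        JwtUtil jwtUtil = new JwtUtil(config);
        Authentication authn = new UsernamePasswordAuthenticationToken(
                "frasier", null, List.of(new SimpleGrantedAuthority("ROLE_CUSTOMER")));
        String token = jwtUtil.generateToken(authn);       //signed, compact JWS string
        Authentication parsed = jwtUtil.parseToken(token); //verified and re-built
        System.out.println(parsed.getName());              //frasier
        System.out.println(parsed.getAuthorities());       //[ROLE_CUSTOMER]
    }
}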
228. JwtAuthenticationFilter
The JwtAuthenticationFilter is the target filter for generating new bearer tokens. It accepts POSTs to a configured /api/login URI with the username and password, authenticates those credentials, generates a bearer token with JWS, and returns that value in the Authorization header. The following is an example of making the end-to-end authentication call. Notice the bearer token returned. We will need this value in follow-on calls.
$ curl -v -X POST http://localhost:8080/api/login -d '{"username":"frasier", "password":"password"}'
> POST /api/login HTTP/1.1
< HTTP/1.1 200
< Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJmcmFzaWVyIiwiaWF0IjoxNTk0OTgwMTAyLCJleHAiOjE4OTQ5ODM3MDIsImF1dGgiOlsiUFJJQ0VfQ0hFQ0siLCJST0xFX0NVU1RPTUVSIl19.u2MmzTxaDoVNFGGCnrAcWBusS_NS2NndZXkaT964hLgcDTvCYAW_sXtTxRw8g_13
The JwtAuthenticationFilter
delegates much of the detail work
handling the header and JWS token to the JwtUtil
class shown earlier.
@Slf4j
public class JwtAuthenticationFilter extends UsernamePasswordAuthenticationFilter {
private final JwtUtil jwtUtil;
228.1. JwtAuthenticationFilter Relationships
The JwtAuthenticationFilter
fills out the abstract workflow of the
AbstractAuthenticationProcessingFilter
by implementing two primary
methods: attemptAuthentication()
and successfulAuthentication()
.
The attemptAuthentication() callback is used to perform all the steps necessary to authenticate the caller. Unsuccessful attempts are returned to the caller immediately with a 401/Unauthorized status.
The successfulAuthentication()
callback is used to generate the JWS
token from the authResult and return that in the response header. The
call is returned immediately to the caller with a 200/OK status and an
Authorization header containing the constructed token.
228.2. JwtAuthenticationFilter: Constructor
The filter constructor sets up the object to only listen to POSTs against
the configured loginUri. The base class we are extending holds onto the
AuthenticationManager
used during the attemptAuthentication()
callback.
public JwtAuthenticationFilter(JwtConfig jwtConfig, AuthenticationManager authm) {
super(new AntPathRequestMatcher(jwtConfig.getLoginUri(), "POST"));
this.jwtUtil = new JwtUtil(jwtConfig);
setAuthenticationManager(authm);
}
228.3. JwtAuthenticationFilter: attemptAuthentication()
The attemptAuthentication()
method has two core jobs: obtain credentials
and authenticate.
-
The credentials could have been obtained in a number of different ways. I have simply chosen to create a DTO class with username and password to carry that information.
-
The credentials are stored in an
Authentication
object that acts as the authRequest. The authResult from theAuthenticationManager
is returned from the callback.
Any failure (in getCredentials() or authenticate()) will result in an AuthenticationException being thrown.
@Override
public Authentication attemptAuthentication(
HttpServletRequest request, HttpServletResponse response)
throws AuthenticationException { (1)
LoginDTO login = getCredentials(request);
UsernamePasswordAuthenticationToken authRequest =
new UsernamePasswordAuthenticationToken(login.getUsername(), login.getPassword());
Authentication authResult = getAuthenticationManager().authenticate(authRequest);
return authResult;
}
1 | any failure to obtain a successful Authentication result will throw an AuthenticationException |
228.4. JwtAuthenticationFilter: attemptAuthentication() DTO
The LoginDTO
is a simple POJO class that will get marshalled as JSON and placed
in the body of the POST.
package info.ejava.examples.svc.auth.cart.security.jwt;
import lombok.Getter;
import lombok.Setter;
@Setter
@Getter
public class LoginDTO {
private String username;
private String password;
}
228.5. JwtAuthenticationFilter: attemptAuthentication() Helper Method
We can use the Jackson ObjectMapper to easily unmarshal the POST payload into DTO form and rethrow any failed parsing as a BadCredentialsException. Unfortunately for debugging, the default 401/Unauthorized response to the caller does not provide the details we supply here, but I guess that is a good thing when dealing with credentials and login attempts.
...
import com.fasterxml.jackson.databind.ObjectMapper;
...
protected LoginDTO getCredentials(HttpServletRequest request) throws AuthenticationException {
try {
return new ObjectMapper().readValue(request.getInputStream(), LoginDTO.class);
} catch (IOException ex) {
log.info("error parsing loginDTO", ex);
throw new BadCredentialsException(ex.getMessage()); (1)
}
}
1 | BadCredentialsException extends AuthenticationException |
228.6. JwtAuthenticationFilter: successfulAuthentication()
The successfulAuthentication() callback is called when authentication is successful. It has two primary jobs: encode the authenticated result in a JWS token and set that value in the response header.
@Override
protected void successfulAuthentication(
HttpServletRequest request, HttpServletResponse response, FilterChain chain,
Authentication authResult) throws IOException, ServletException {
String token = jwtUtil.generateToken(authResult); (1)
log.info("generated token={}", token);
jwtUtil.setToken(response, token); (2)
}
1 | authResult represented within the claims of the JWS |
2 | caller given the JWS token in the response header |
This callback fully overrides the parent method to eliminate setting the SecurityContext and issuing a redirect. Neither has relevance in this situation. The authenticated caller will not require a SecurityContext now — this is the login. The SecurityContext will be set as part of the call to the operation.
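Both filters rely on setToken() and getToken() helpers in JwtUtil that are not shown above. The following is a plausible implementation based on the headerPrefix property (a sketch; the example’s actual code may differ):
...
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.http.HttpHeaders;
...
public void setToken(HttpServletResponse response, String token) {
    //prefix the JWS value with "Bearer " and return it in the Authorization header
    response.setHeader(HttpHeaders.AUTHORIZATION, jwtConfig.getHeaderPrefix() + token);
}
public String getToken(HttpServletRequest request) {
    String header = request.getHeader(HttpHeaders.AUTHORIZATION);
    if (header == null || !header.startsWith(jwtConfig.getHeaderPrefix())) {
        return null; //no bearer token present
    }
    return header.substring(jwtConfig.getHeaderPrefix().length());
}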
229. JwtAuthorizationFilter
The JwtAuthorizationFilter
is responsible for realizing any provided JWS bearer tokens
as an authResult within the current SecurityContext
on the way to invoking an operation.
The following end-to-end operation call shows the caller supplying the bearer token in order to identify themselves to the server implementing the operation. The example operation uses the username of the current SecurityContext as a key to locate information for the caller.
$ curl -v -X POST http://localhost:8080/api/carts/items?name=thing \
-H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJmcmFzaWVyIiwiaWF0IjoxNTk0OTgwMTAyLCJleHAiOjE4OTQ5ODM3MDIsImF1dGgiOlsiUFJJQ0VfQ0hFQ0siLCJST0xFX0NVU1RPTUVSIl19.u2MmzTxaDoVNFGGCnrAcWBusS_NS2NndZXkaT964hLgcDTvCYAW_sXtTxRw8g_13"
> POST /api/carts/items?name=thing HTTP/1.1
...
< HTTP/1.1 200
{"username":"frasier","items":["thing"]} (1) (2)
1 | username is encoded within the JWS token |
2 | cart with items is found by username |
The JwtAuthorizationFilter did not seem to match any of the Spring-provided authentication filters — so I directly extended a generic filter support class that assures it will only get called once per request. This class also relies on JwtUtil to implement the details of working with the JWS bearer token.
public class JwtAuthorizationFilter extends OncePerRequestFilter {
private final JwtUtil jwtUtil;
private final AuthenticationManager authenticationManager;
private final AuthenticationEntryPoint failureResponse = new JwtEntryPoint();
229.1. JwtAuthorizationFilter Relationships
The JwtAuthorizationFilter
extends the generic framework of OncePerRequestFilter
and performs all of its work in the doFilterInternal()
callback.
The JwtAuthorizationFilter obtains the raw JWS token from the request header, wraps the token in the JwtAuthenticationToken authRequest, and requests authentication from the AuthenticationManager. Placing this behavior in an AuthenticationProvider was optional but seemed to be consistent with the framework. It also provided the opportunity to lookup further user details if ever required.
Supporting the AuthenticationManager
is the JwtAuthenticationProvider
,
which verifies the JWS token and re-builds the authResult from the JWS token claims.
The filter finishes by setting the authResult in the SecurityContext
prior to advancing
the chain further towards the operation call.
229.2. JwtAuthorizationFilter: Constructor
The JwtAuthorizationFilter
relies on the JwtUtil
helper class to implement the meat
of the JWS token details. It also accepts an AuthenticationManager
that is assumed to be
populated with the JwtAuthenticationProvider
.
public JwtAuthorizationFilter(JwtConfig jwtConfig, AuthenticationManager authenticationManager) {
jwtUtil = new JwtUtil(jwtConfig);
this.authenticationManager = authenticationManager;
}
229.3. JwtAuthorizationFilter: doFilterInternal()
Like most filters the JwtAuthorizationFilter
initially determines if there is anything to do.
If there is no Authorization
header with a "Bearer " token, the filter is quietly bypassed and
the filter chain is advanced.
If a token is found, we request authentication — where the JWS token is verified and converted
back into an Authentication
object to store in the SecurityContext
as the authResult.
Any failure to complete authentication when the token is present in the header will result in the chain terminating and an error status returned to the caller.
@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
throws ServletException, IOException {
String token = jwtUtil.getToken(request);
if (token == null) { //continue on without JWS authn/authz
filterChain.doFilter(request, response); (1)
return;
}
try {
Authentication authentication = new JwtAuthenticationToken(token); (2)
Authentication authenticated = authenticationManager.authenticate(authentication);
SecurityContextHolder.getContext().setAuthentication(authenticated); (3)
filterChain.doFilter(request, response); //continue chain to operation (4)
} catch (AuthenticationException fail) {
failureResponse.commence(request, response, fail); (5)
return; //end the chain and return error to caller
}
}
1 | chain is quietly advanced forward if there is no token found in the request header |
2 | simple authRequest wrapper for the token |
3 | store the authenticated user in the SecurityContext |
4 | continue the chain with the authenticated user now present in the SecurityContext |
5 | issue an error response if token is present but we are unable to complete authentication |
229.4. JwtAuthenticationToken
The JwtAuthenticationToken
has a simple job — carry the raw JWS token string through the authentication process and be able to provide it to the JwtAuthenticationProvider
.
I am not sure whether I gained much by extending the AbstractAuthenticationToken
.
The primary requirement was to implement the Authentication
interface. As you can see, the implementation simply carries the value and returns it for just about every question asked.
It will be the job of JwtAuthenticationProvider
to turn that token into an Authentication
instance that represents the authResult, carrying authorities and other properties that have more exposed details.
public class JwtAuthenticationToken extends AbstractAuthenticationToken {
private final String token;
public JwtAuthenticationToken(String token) {
super(Collections.emptyList());
this.token = token;
}
public String getToken() {
return token;
}
@Override
public Object getCredentials() {
return token;
}
@Override
public Object getPrincipal() {
return token;
}
}
The JwtAuthenticationProvider class implements two key methods: supports() and authenticate().
public class JwtAuthenticationProvider implements AuthenticationProvider {
private final JwtUtil jwtUtil;
public JwtAuthenticationProvider(JwtConfig jwtConfig) {
jwtUtil = new JwtUtil(jwtConfig);
}
@Override
public boolean supports(Class<?> authentication) {
return JwtAuthenticationToken.class.isAssignableFrom(authentication);
}
@Override
public Authentication authenticate(Authentication authentication)
throws AuthenticationException {
try {
String token = ((JwtAuthenticationToken)authentication).getToken();
Authentication authResult = jwtUtil.parseToken(token);
return authResult;
} catch (JwtException ex) {
throw new BadCredentialsException(ex.getMessage());
}
}
}
The supports()
method returns true only if the token type is the JwtAuthenticationToken
type.
The authenticate()
method obtains the raw token value, confirms its validity,
and builds an Authentication
authResult from its claims. The result is simply returned
to the AuthenticationManager
and the calling filter.
Any error in authenticate() will result in an AuthenticationException. The most likely cause is an expired token — but it could also be the result of a munged token string.
229.5. JwtEntryPoint
The JwtEntryPoint class implements the AuthenticationEntryPoint interface, which is used elsewhere in the framework for cases when an error handler is needed because of an AuthenticationException. We are using it within the JwtAuthorizationFilter to report an error with authentication — but you will also see it show up elsewhere.
package info.ejava.examples.svc.auth.cart.security.jwt;
import org.springframework.http.HttpStatus;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.AuthenticationEntryPoint;
public class JwtEntryPoint implements AuthenticationEntryPoint {
@Override
public void commence(HttpServletRequest request, HttpServletResponse response,
AuthenticationException authException) throws IOException {
response.sendError(HttpStatus.UNAUTHORIZED.value(), authException.getMessage());
}
}
230. API Security Configuration
With all the supporting framework classes in place, I will now show how
we can wire this up. This, of course, takes us back to the WebSecurityConfigurer
class.
-
We inject required beans into the configuration class. The only thing that is new is the JwtConfig @ConfigurationProperties class. The UserDetailsService provides users/passwords and authorities from a database.
-
configure(HttpSecurity) is where we set up our FilterChainProxy.
-
configure(AuthenticationManagerBuilder) is where we set up the AuthenticationManager used by our filters in the FilterChainProxy.
@Configuration
@Order(0)
@RequiredArgsConstructor
@EnableConfigurationProperties(JwtConfig.class) (1)
public class APIConfiguration extends WebSecurityConfigurerAdapter {
private final JwtConfig jwtConfig; (2)
private final UserDetailsService jdbcUserDetailsService; (3)
@Override
protected void configure(HttpSecurity http) throws Exception {
// details here ...
}
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
//details here ...
}
1 | enabling the JwtConfig as a @ConfigurationProperties bean |
2 | injecting the JwtConfig bean into our configuration class |
3 | injecting a source of user details (i.e., username/password and authorities) |
230.1. API Authentication Manager Builder
The configure(AuthenticationManagerBuilder) method configures the builder with two AuthenticationProviders:
-
one containing real users/passwords and authorities
-
a second with the ability to instantiate an
Authentication
from a JWS token
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
auth.userDetailsService(jdbcUserDetailsService); (1)
auth.authenticationProvider(new JwtAuthenticationProvider(jwtConfig));
}
1 | configuring an AuthenticationManager with both the UserDetailsService and our
new JwtAuthenticationProvider |
The UserDetailsService
was injected because it required setup elsewhere. However, the
JwtAuthenticationProvider
is stateless — getting everything it needs from a startup
configuration and the authentication calls.
230.2. API HttpSecurity Key JWS Parts
The following snippet shows the key parts to wire in the JWS handling.
-
we register the JwtAuthenticationFilter to handle authentication of logins
-
we register the JwtAuthorizationFilter to handle restoring the SecurityContext when the caller presents a valid JWS bearer token
-
not required — but we register a custom error handler that leaks some details about why the caller is being rejected when receiving a 403/Forbidden
@Override
protected void configure(HttpSecurity http) throws Exception {
//...
http.addFilterAt(new JwtAuthenticationFilter(jwtConfig, (1)
authenticationManager()),
UsernamePasswordAuthenticationFilter.class);
http.addFilterAfter(new JwtAuthorizationFilter(jwtConfig, (2)
authenticationManager()),
JwtAuthenticationFilter.class);
http.exceptionHandling(cfg->cfg.defaultAuthenticationEntryPointFor( (3)
new JwtEntryPoint(),
new AntPathRequestMatcher("/api/**")));
http.authorizeRequests(cfg->cfg.antMatchers("/api/login").permitAll());
http.authorizeRequests(cfg->cfg.antMatchers("/api/carts/**").authenticated());
}
1 | JwtAuthenticationFilter being registered at location normally used for
UsernamePasswordAuthenticationFilter |
2 | JwtAuthorizationFilter being registered after the authn filter |
3 | adding an optional error reporter |
230.3. API HttpSecurity Full Details
The following shows the full contents of the configure(HttpSecurity)
method.
In this view you can see how FORM and BASIC Auth have been disabled and we are
operating in a stateless mode with various header/CORS options enabled.
@Override
protected void configure(HttpSecurity http) throws Exception {
http.requestMatchers(m->m.antMatchers("/api/**"));
http.httpBasic(cfg->cfg.disable());
http.formLogin(cfg->cfg.disable());
http.headers(cfg->{
cfg.xssProtection().disable();
cfg.frameOptions().disable();
});
http.csrf(cfg->cfg.disable());
http.cors();
http.sessionManagement(cfg->cfg
.sessionCreationPolicy(SessionCreationPolicy.STATELESS));
http.addFilterAt(new JwtAuthenticationFilter(jwtConfig,
authenticationManager()),
UsernamePasswordAuthenticationFilter.class);
http.addFilterAfter(new JwtAuthorizationFilter(jwtConfig,
authenticationManager()),
JwtAuthenticationFilter.class);
http.exceptionHandling(cfg->cfg.defaultAuthenticationEntryPointFor(
new JwtEntryPoint(),
new AntPathRequestMatcher("/api/**")));
http.authorizeRequests(cfg->cfg.antMatchers("/api/login").permitAll());
http.authorizeRequests(cfg->cfg.antMatchers("/api/whoami").permitAll());
http.authorizeRequests(cfg->cfg.antMatchers("/api/carts/**").authenticated());
}
231. Example JWT/JWS Application
Now that we have thoroughly covered the addition of JWT/JWS to the security framework of our application, it is time to look at the application with a focus on authorizations. I have added a few unique aspects since the previous lecture’s example use of @PreAuthorize.
-
we are using JWT/JWS — of course
-
access annotations are applied to the service interface versus controller class
-
access annotations inspect the values of the input parameters
231.1. Roles and Role Inheritance
I have reused the same users, passwords, and role assignments from the authorities example and will demonstrate with the following users.
-
ROLE_ADMIN - sam
-
ROLE_CLERK - woody
-
ROLE_CUSTOMER - norm and frasier
However, role inheritance is only defined for ROLE_ADMIN inheriting all accesses from ROLE_CLERK. None of the roles inherit from ROLE_CUSTOMER.
@Bean
public RoleHierarchy roleHierarchy() {
RoleHierarchyImpl roleHierarchy = new RoleHierarchyImpl();
roleHierarchy.setHierarchy(StringUtils.join(Arrays.asList(
"ROLE_ADMIN > ROLE_CLERK"),System.lineSeparator()));
return roleHierarchy;
}
231.2. CartsService
We have a simple CartsService with a Web API and service implementation. The code below
shows the interface to the service. It has been annotated with @PreAuthorize
expressions
that use the Spring Expression Language to evaluate the principal from the SecurityContext
and parameters of the call.
package info.ejava.examples.svc.auth.cart.services;
import info.ejava.examples.svc.auth.cart.dto.CartDTO;
import org.springframework.security.access.prepost.PreAuthorize;
public interface CartsService {
@PreAuthorize("#username == authentication.name and hasRole('CUSTOMER')") (1)
CartDTO createCart(String username);
@PreAuthorize("#username == authentication.name or hasRole('CLERK')") (2)
CartDTO getCart(String username);
@PreAuthorize("#username == authentication.name") (3)
CartDTO addItem(String username, String item);
@PreAuthorize("#username == authentication.name or hasRole('ADMIN')") (4)
boolean removeCart(String username);
}
1 | anyone with the CUSTOMER role can create a cart but it must be for their username |
2 | anyone can get their own cart and anyone with the CLERK role can get anyone’s cart |
3 | users can only add item to their own cart |
4 | users can remove their own cart and anyone with the ADMIN role can remove anyone’s cart |
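The controller in front of this service is not shown. Judging from the curl calls that follow, the username query parameter is optional and defaults to the authenticated caller. A hypothetical sketch of the addItem() endpoint (names and defaulting behavior are assumptions):
@RestController
@RequestMapping("/api/carts")
@RequiredArgsConstructor
public class CartsController {
    private final CartsService cartsService;

    @PostMapping(path = "items")
    public CartDTO addItem(
            @RequestParam(name = "username", required = false) String username,
            @RequestParam(name = "name") String name,
            @AuthenticationPrincipal UserDetails user) {
        //assumption: default to the authenticated caller when no username is supplied
        String target = (username != null) ? username : user.getUsername();
        return cartsService.addItem(target, name);
    }
}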
231.3. Login
The following shows the creation of tokens for the four example users.
$ curl -v -X POST http://localhost:8080/api/login -d '{"username":"sam", "password":"password"}' (1)
> POST /api/login HTTP/1.1
< HTTP/1.1 200
< Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJzYW0iLCJpYXQiOjE1OTUwMTcwNDQsImV4cCI6MTg5NTAyMDY0NCwiYXV0aCI6WyJST0xFX0FETUlOIl19.ICzAn1r2UyrpGJQSYk9uqxMAAq9QC1Dw7GKe0NiGvCyTasMfWSStrqxV6Uit-cb4
1 | sam has role ADMIN and inherits role CLERK |
$ curl -v -X POST http://localhost:8080/api/login -d '{"username":"woody", "password":"password"}' (1)
> POST /api/login HTTP/1.1
< HTTP/1.1 200
< Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJ3b29keSIsImlhdCI6MTU5NTAxNzA1MSwiZXhwIjoxODk1MDIwNjUxLCJhdXRoIjpbIlJPTEVfQ0xFUksiXX0.kreSFPgTIr2heGMLcjHFrglydvhPZKR7Iy4F6b76WNIvAkbZVhfymbQxekuPL-Ai
1 | woody has role CLERK |
$ curl -v -X POST http://localhost:8080/api/login -d '{"username":"norm", "password":"password"}' (1)
> POST /api/login HTTP/1.1
< HTTP/1.1 200
< Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJub3JtIiwiaWF0IjoxNTk1MDE3MDY1LCJleHAiOjE4OTUwMjA2NjUsImF1dGgiOlsiUk9MRV9DVVNUT01FUiJdfQ.UX4yPDu0LzWdEAObbJliOtZ7ePU1RSIH_o_hayPrlmNxhjU5DL6XQ42iRCLLuFgw
$ curl -v -X POST http://localhost:8080/api/login -d '{"username":"frasier", "password":"password"}' (1)
> POST /api/login HTTP/1.1
< HTTP/1.1 200
< Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJmcmFzaWVyIiwiaWF0IjoxNTk1MDE3MDcxLCJleHAiOjE4OTUwMjA2NzEsImF1dGgiOlsiUFJJQ0VfQ0hFQ0siLCJST0xFX0NVU1RPTUVSIl19.ELAe5foIL_u2QyhpjwDoqQbL4Hl1Ikuir9CJPdOT8Ow2lI5Z1GQY6ZaKvW883txI
1 | norm and frasier have role CUSTOMER |
231.4. createCart()
The access rules for createCart()
require the caller be a customer and be creating a cart
for their username.
@PreAuthorize("#username == authentication.name and hasRole('CUSTOMER')") (1)
CartDTO createCart(String username); (1)
1 | #username refers to the username method parameter |
Woody is unable to create a cart because he lacks the CUSTOMER
role.
$ curl -X GET http://localhost:8080/api/whoAmI -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJ3b29keSIsImlhdCI6MTU5NTAxNzA1MSwiZXhwIjoxODk1MDIwNjUxLCJhdXRoIjpbIlJPTEVfQ0xFUksiXX0.kreSFPgTIr2heGMLcjHFrglydvhPZKR7Iy4F6b76WNIvAkbZVhfymbQxekuPL-Ai" #woody
[woody, [ROLE_CLERK]]
$ curl -X POST http://localhost:8080/api/carts -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJ3b29keSIsImlhdCI6MTU5NTAxNzA1MSwiZXhwIjoxODk1MDIwNjUxLCJhdXRoIjpbIlJPTEVfQ0xFUksiXX0.kreSFPgTIr2heGMLcjHFrglydvhPZKR7Iy4F6b76WNIvAkbZVhfymbQxekuPL-Ai" #woody
{"url":"http://localhost:8080/api/carts","message":"Forbidden","description":"caller[woody] is forbidden from making this request","timestamp":"2020-07-17T20:24:14.159507Z"}
Norm is able to create a cart because he has the CUSTOMER
role.
$ curl -X GET http://localhost:8080/api/whoAmI -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJub3JtIiwiaWF0IjoxNTk1MDE3MDY1LCJleHAiOjE4OTUwMjA2NjUsImF1dGgiOlsiUk9MRV9DVVNUT01FUiJdfQ.UX4yPDu0LzWdEAObbJliOtZ7ePU1RSIH_o_hayPrlmNxhjU5DL6XQ42iRCLLuFgw" #norm
[norm, [ROLE_CUSTOMER]]
$ curl -X POST http://localhost:8080/api/carts -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJub3JtIiwiaWF0IjoxNTk1MDE3MDY1LCJleHAiOjE4OTUwMjA2NjUsImF1dGgiOlsiUk9MRV9DVVNUT01FUiJdfQ.UX4yPDu0LzWdEAObbJliOtZ7ePU1RSIH_o_hayPrlmNxhjU5DL6XQ42iRCLLuFgw" #norm
{"username":"norm","items":[]}
231.5. addItem()
The addItem()
access rules only allow users to add items to their own cart.
@PreAuthorize("#username == authentication.name")
CartDTO addItem(String username, String item);
Frasier is forbidden from adding items to Norm’s cart because his identity does not match the username for the cart.
$ curl -X GET http://localhost:8080/api/whoAmI -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJmcmFzaWVyIiwiaWF0IjoxNTk1MDE3MDcxLCJleHAiOjE4OTUwMjA2NzEsImF1dGgiOlsiUFJJQ0VfQ0hFQ0siLCJST0xFX0NVU1RPTUVSIl19.ELAe5foIL_u2QyhpjwDoqQbL4Hl1Ikuir9CJPdOT8Ow2lI5Z1GQY6ZaKvW883txI" #frasier
[frasier, [PRICE_CHECK, ROLE_CUSTOMER]]
$ curl -X POST "http://localhost:8080/api/carts/items?username=norm&name=chardonnay" -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJmcmFzaWVyIiwiaWF0IjoxNTk1MDE3MDcxLCJleHAiOjE4OTUwMjA2NzEsImF1dGgiOlsiUFJJQ0VfQ0hFQ0siLCJST0xFX0NVU1RPTUVSIl19.ELAe5foIL_u2QyhpjwDoqQbL4Hl1Ikuir9CJPdOT8Ow2lI5Z1GQY6ZaKvW883txI" #frasier
{"url":"http://localhost:8080/api/carts/items?username=norm&name=chardonnay","message":"Forbidden","description":"caller[frasier] is forbidden from making this request","timestamp":"2020-07-17T20:40:10.451578Z"} (1)
1 | frasier received a 403/Forbidden error when attempting to add to someone else’s cart |
Norm can add items to his own cart because his username matches the username of the cart.
$ curl -X POST http://localhost:8080/api/carts/items?name=beer -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJub3JtIiwiaWF0IjoxNTk1MDE3MDY1LCJleHAiOjE4OTUwMjA2NjUsImF1dGgiOlsiUk9MRV9DVVNUT01FUiJdfQ.UX4yPDu0LzWdEAObbJliOtZ7ePU1RSIH_o_hayPrlmNxhjU5DL6XQ42iRCLLuFgw" #norm
{"username":"norm","items":["beer"]}
231.6. getCart()
The getCart()
access rules only allow users to get their own cart, but also allows users with the CLERK
role to get anyone’s cart.
@PreAuthorize("#username == authentication.name or hasRole('CLERK')") (2)
CartDTO getCart(String username);
Frasier cannot get Norm’s cart because anyone lacking the CLERK
role can only get a cart that
matches their authenticated username.
$ curl -X GET http://localhost:8080/api/carts?username=norm -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJmcmFzaWVyIiwiaWF0IjoxNTk1MDE3MDcxLCJleHAiOjE4OTUwMjA2NzEsImF1dGgiOlsiUFJJQ0VfQ0hFQ0siLCJST0xFX0NVU1RPTUVSIl19.ELAe5foIL_u2QyhpjwDoqQbL4Hl1Ikuir9CJPdOT8Ow2lI5Z1GQY6ZaKvW883txI" #frasier
{"url":"http://localhost:8080/api/carts?username=norm","message":"Forbidden","description":"caller[frasier] is forbidden from making this request","timestamp":"2020-07-17T20:44:05.899192Z"}
Norm can get his own cart because the username of the cart matches his authenticated username.
$ curl -X GET http://localhost:8080/api/carts -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJub3JtIiwiaWF0IjoxNTk1MDE3MDY1LCJleHAiOjE4OTUwMjA2NjUsImF1dGgiOlsiUk9MRV9DVVNUT01FUiJdfQ.UX4yPDu0LzWdEAObbJliOtZ7ePU1RSIH_o_hayPrlmNxhjU5DL6XQ42iRCLLuFgw" #norm
{"username":"norm","items":["beer"]}
Woody can get Norm’s cart because he has the CLERK
role.
$ curl -X GET http://localhost:8080/api/carts?username=norm -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJ3b29keSIsImlhdCI6MTU5NTAxNzA1MSwiZXhwIjoxODk1MDIwNjUxLCJhdXRoIjpbIlJPTEVfQ0xFUksiXX0.kreSFPgTIr2heGMLcjHFrglydvhPZKR7Iy4F6b76WNIvAkbZVhfymbQxekuPL-Ai" #woody
{"username":"norm","items":["beer"]}
231.7. removeCart()
The removeCart()
access rules only allow carts to be removed by their owner or by someone with the
ADMIN
role.
@PreAuthorize("#username == authentication.name or hasRole('ADMIN')")
boolean removeCart(String username);
Woody cannot remove Norm’s cart because his authenticated username does not match the cart and
he lacks the ADMIN
role.
$ curl -X DELETE http://localhost:8080/api/carts?username=norm -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJ3b29keSIsImlhdCI6MTU5NTAxNzA1MSwiZXhwIjoxODk1MDIwNjUxLCJhdXRoIjpbIlJPTEVfQ0xFUksiXX0.kreSFPgTIr2heGMLcjHFrglydvhPZKR7Iy4F6b76WNIvAkbZVhfymbQxekuPL-Ai" #woody
{"url":"http://localhost:8080/api/carts?username=norm","message":"Forbidden","description":"caller[woody] is forbidden from making this request","timestamp":"2020-07-17T20:48:40.866193Z"}
Sam can remove Norm’s cart because he has the ADMIN role. Once Sam deletes the cart, Norm receives a 404/Not Found because it is no longer there.
$ curl -X GET http://localhost:8080/api/whoAmI -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJzYW0iLCJpYXQiOjE1OTUwMTcwNDQsImV4cCI6MTg5NTAyMDY0NCwiYXV0aCI6WyJST0xFX0FETUlOIl19.ICzAn1r2UyrpGJQSYk9uqxMAAq9QC1Dw7GKe0NiGvCyTasMfWSStrqxV6Uit-cb4" #sam
[sam, [ROLE_ADMIN]]
$ curl -X DELETE http://localhost:8080/api/carts?username=norm -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJzYW0iLCJpYXQiOjE1OTUwMTcwNDQsImV4cCI6MTg5NTAyMDY0NCwiYXV0aCI6WyJST0xFX0FETUlOIl19.ICzAn1r2UyrpGJQSYk9uqxMAAq9QC1Dw7GKe0NiGvCyTasMfWSStrqxV6Uit-cb4" #sam
$ curl -X GET http://localhost:8080/api/carts -H "Authorization: Bearer eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJub3JtIiwiaWF0IjoxNTk1MDE3MDY1LCJleHAiOjE4OTUwMjA2NjUsImF1dGgiOlsiUk9MRV9DVVNUT01FUiJdfQ.UX4yPDu0LzWdEAObbJliOtZ7ePU1RSIH_o_hayPrlmNxhjU5DL6XQ42iRCLLuFgw" #norm
{"url":"http://localhost:8080/api/carts","message":"Not Found","description":"no cart found for norm","timestamp":"2020-07-17T20:50:59.465210Z"}
232. Summary
I don’t know about you — but I had fun with that!
To summarize — in this module we learned:
-
to separate the authentication from the operation call such that the operation call could be in a separate server or even an entirely different service
-
what is a JSON Web Token (JWT) and JSON Web Signature (JWS)
-
how trust is verified using JWS
-
how to write and/or integrate custom authentication and authorization framework classes to implement an alternate security mechanism in Spring/Spring Boot
-
how to leverage Spring Expression Language to evaluate parameters and properties of the
SecurityContext
Enabling HTTPS
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
233. Introduction
In all the examples to date (and likely forward), we have been using the HTTP protocol. This has been a very easy option to use, but I likely do not have to tell you that straight HTTP is NOT secure for use and especially NOT appropriate for use with credentials or any other authenticated information.
Hypertext Transfer Protocol Secure (HTTPS) — with trusted certificates — is the secure way to communicate using APIs in modern environments. We will still want the option of simple HTTP in development, and most deployment environments provide an external HTTPS proxy that can take care of secure communications with external clients. However, it will be good to take a short look at how we can enable HTTPS directly within our Spring Boot application.
233.1. Goals
You will learn:
-
the basis of how HTTPS forms trusted, private communications
-
the difference between self-signed certificates and those signed by a trusted authority
-
how to enable HTTPS/TLS within our Spring Boot application
233.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
define the purpose of a public certificate and private key
-
generate a self-signed certificate for demonstration use
-
enable HTTPS/TLS within Spring Boot
-
optionally implement an HTTP to HTTPS redirect
-
implement a Maven Failsafe integration test using HTTPS
234. HTTP Access
I have cloned the "noauth-security-example" to form the "https-hello-example" and left most of the insides intact. You may remember the ability to execute the following authenticated command.
$ curl -v -X GET http://localhost:8080/api/authn/hello?name=jim -u "user:password" (1)
> GET /api/authn/hello?name=jim HTTP/1.1
> Host: localhost:8080
> Authorization: Basic dXNlcjpwYXNzd29yZA== (2)
>
< HTTP/1.1 200
hello, jim
1 | curl supports credentials with -u option |
2 | curl Base64 encodes credentials and adds Authorization header |
We get rejected when no valid credentials are supplied.
$ curl -X GET http://localhost:8080/api/authn/hello?name=jim
{"timestamp":"2020-07-18T14:43:39.670+00:00","status":401, "error":"Unauthorized","message":"Unauthorized","path":"/api/authn/hello"}
It works as we remember it, but the issue is that our slightly encoded
(dXNlcjpwYXNzd29yZA==
), plaintext password was issued in the clear.
We can fix that by enabling HTTPS.
235. HTTPS
Hypertext Transfer Protocol Secure (HTTPS) is an extension of HTTP encrypted with Transport Layer Security (TLS) for secure communication between endpoints — offering privacy and integrity (i.e., hidden and unmodified). HTTPS formerly offered encryption with the now-deprecated Secure Sockets Layer (SSL). Although the SSL name still sticks around, only TLS is supported today. [45] [46]
235.1. HTTPS/TLS
At the heart of HTTPS/TLS are X.509 certificates and the Public Key Infrastructure (PKI). Public keys are made available to describe the owner (subject), the issuer, and digital signatures that prove the contents have not been modified. If the receiver can verify the certificate and trusts the issuer — the communication can continue. [47]
With HTTPS/TLS, there are one-way and two-way options, with one-way being the most common. In one-way TLS — only the server contains a certificate and the client is anonymous at the network level. Communications can continue if the client trusts the certificate presented by the server. In two-way TLS, the client also presents a signed certificate that can identify them to the server and form two-way authentication at the network level. Two-way is very secure but not as common, except in closed environments (e.g., server-to-server environments with fixed/controlled communications). We will stick to one-way TLS in this lecture.
235.2. Keystores
A keystore is a repository of security certificates - both private and public keys. There are two primary types: Public Key Cryptographic Standards (PKCS12) and Java KeyStore (JKS). PKCS12 is an industry standard and JKS is specific to Java. [48] They both have the ability to store multiple certificates and use an alias to identify them. Both use password protection.
There are typically two uses for keystores: your identity (keystore) and the identity of certificates you trust (truststore). The former is used by servers and must be well protected. The latter is necessary for clients. The truststore can be shared, but its contents need to be trusted.
235.3. Tools
There are two primary tools when working with certificates and keystores: keytool and openssl.
keytool comes with the JDK and can easily generate and manage certificates for Java applications. Keytool originally used the JKS format but since Java 9 switched over to PKCS12 format.
openssl is a standard, open source tool that is not specific to any environment. It is commonly used to generate and convert to/from all types of certificates/keys.
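For example, either tool can list the contents of the keystore generated in the next section (a sketch):
$ keytool -list -v -keystore keystore.p12 -storepass password
$ openssl pkcs12 -info -in keystore.p12 -passin pass:password -nokeys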
235.4. Self Signed Certificates
The words "trust" and "verify" were used a lot in the paragraphs above when describing certificates.
When we visit various web sites — that locked icon next to the "https" URL indicates the certificate presented by the server was verified and came from a trusted source. |
Verified Server Certificate
|
Trusted certificates come from sources that are pre-registered in the browsers and Java JRE truststore and are obtained through purchase.
We can generate self-signed certificates that are not immediately trusted until we either ignore checks or enter them into our local browsers and/or truststore(s).
236. Enable HTTPS/TLS in Spring Boot
To enable HTTPS/TLS in Spring Boot — we must do the following
-
obtain a digital certificate - we will generate a self-signed certificate without purchase or much fanfare
-
add TLS properties to the application
-
optionally add an HTTP to HTTPS redirect - useful in cases where clients forget to set the protocol to https:// and use http:// or use the wrong port number
236.1. Generate Self-signed Certificate
The following example shows the creation of a self-signed certificate using keytool. Refer to the keytool reference page for details on the options. The Java Keytool page provides examples of several use cases. I kept the values of the certificate extremely basic since there is little chance we will ever use this in a trusted environment.
$ keytool -genkeypair -keyalg RSA -keysize 2048 -validity 3650 \(1)
-keystore keystore.p12 -alias https-hello \(2)
-storepass password
What is your first and last name?
[Unknown]: localhost
What is the name of your organizational unit?
[Unknown]:
What is the name of your organization?
[Unknown]:
What is the name of your City or Locality?
[Unknown]:
What is the name of your State or Province?
[Unknown]:
What is the two-letter country code for this unit?
[Unknown]:
Is CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct?
[no]: yes
1 | specifying a validity period of 10 years (3650 days) |
2 | assigning the alias https-hello to what is generated in the keystore |
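We can sanity-check what was generated by listing the keystore contents — the listing should show a single PrivateKeyEntry stored under the https-hello alias.
$ keytool -list -keystore keystore.p12 -storepass password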
236.2. Place Keystore in Reference-able Location
The keytool command output a keystore file called keystore.p12. I placed that in the resources area of the application — which can be referenced at runtime using a classpath reference.
$ tree src/main/resources/
src/main/resources/
|-- application.properties
`-- keystore.p12
Incremental Learning Example Only: Don’t use Source Tree for Certs
This example is trying hard to be simple and uses a classpath reference for the keystore to be portable.
You should already know how to convert the classpath reference to a file or other reference to keep sensitive information protected and away from the code base.
Do not store credentials or other sensitive information in the source tree of a real application.
236.3. Add TLS properties
The following shows a minimal set of properties needed to enable TLS. [49]
server.port=8443(1)
server.ssl.enabled=true
server.ssl.key-store=classpath:keystore.p12(2)
server.ssl.key-store-password=password(3)
server.ssl.key-alias=https-hello
1 | using an alternate port - optional |
2 | referencing keystore in the classpath — could also use a file reference |
3 | think twice before placing credentials in a properties file |
Do not place credentials in CM system
Do not place real credentials in files checked into CM.
Have them resolved from a source provided at runtime.
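For example — one way to keep the password out of the committed properties file is to supply it at launch. The command-line form is shown here for illustration; an environment variable or external config source works similarly.
$ java -jar target/https-hello-example-*-SNAPSHOT.jar --server.ssl.key-store-password=password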
Note the presence of the legacy "ssl" term in the property names even though SSL is deprecated and we are technically setting up TLS.
237. Untrusted Certificate Error
Once we restart the server, we should be able to connect using HTTPS and port 8443. However, there will be a trust error. The following shows the error from curl.
$ curl https://localhost:8443/api/authn/hello?name=jim -u user:password
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
238. Accept Self-signed Certificates
curl and older browsers have the ability to accept self-signed certificates either by ignoring their inconsistencies or adding them to their truststore.
The following is an example of curl’s --insecure option (or -k abbreviation) that will allow us to communicate with a server presenting a certificate that fails validation.
$ curl -kv -X GET https://localhost:8443/api/authn/hello?name=jim -u "user:password"
* Connected to localhost (::1) port 8443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: C=Unknown; ST=Unknown; L=Unknown; O=Unknown; OU=Unknown; CN=localhost
* start date: Jul 18 13:46:35 2020 GMT
* expire date: Jul 16 13:46:35 2030 GMT
* issuer: C=Unknown; ST=Unknown; L=Unknown; O=Unknown; OU=Unknown; CN=localhost
* SSL certificate verify result: self signed certificate (18), continuing anyway.
* Server auth using Basic with user 'user'
> GET /api/authn/hello?name=jim HTTP/1.1
> Host: localhost:8443
> Authorization: Basic dXNlcjpwYXNzd29yZA==
>
< HTTP/1.1 200
hello, jim
238.1. Optional Redirect
To handle clients that may address our application using the wrong protocol or port number — we can optionally set up a redirect from the common port to the TLS port. The following snippet was taken directly from a ZetCode article, but I have seen this near-exact snippet many times elsewhere.
import org.apache.catalina.Context;
import org.apache.catalina.connector.Connector;
import org.apache.tomcat.util.descriptor.web.SecurityCollection;
import org.apache.tomcat.util.descriptor.web.SecurityConstraint;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.servlet.server.ServletWebServerFactory;
...
@Bean
public ServletWebServerFactory servletContainer() {
//require HTTPS (CONFIDENTIAL transport) for all paths served by the embedded Tomcat
var tomcat = new TomcatServletWebServerFactory() {
@Override
protected void postProcessContext(Context context) {
SecurityConstraint securityConstraint = new SecurityConstraint();
securityConstraint.setUserConstraint("CONFIDENTIAL");
SecurityCollection collection = new SecurityCollection();
collection.addPattern("/*");
securityConstraint.addCollection(collection);
context.addConstraint(securityConstraint);
}
};
tomcat.addAdditionalTomcatConnectors(redirectConnector());
return tomcat;
}
private Connector redirectConnector() {
//plain-HTTP connector on 8080 that redirects callers to the HTTPS port
var connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
connector.setScheme("http");
connector.setPort(8080);
connector.setSecure(false);
connector.setRedirectPort(8443);
return connector;
}
238.2. HTTP:8080 ⇒ HTTPS:8443 Redirect Example
With the optional redirect in place, the following shows an example of the client being
sent from their original http://localhost:8080
call to https://localhost:8443
.
$ curl -kv -X GET http://localhost:8080/api/authn/hello?name=jim -u "user:password"
> GET /api/authn/hello?name=jim HTTP/1.1
> Host: localhost:8080
> Authorization: Basic dXNlcjpwYXNzd29yZA==
>
< HTTP/1.1 302 (1)
< Location: https://localhost:8443/api/authn/hello?name=jim (2)
1 | HTTP 302/Redirect Returned |
2 | Location header provides the full URL to invoke — including the protocol |
238.3. Follow Redirects
Browsers automatically follow redirects and we can get curl to automatically follow redirects by adding the --location option (or -L abbreviated). The following command snippet shows curl being requested to connect to an HTTP port, receiving a 302/Redirect, and then completing the original command using the URL provided in the Location header of the redirect.
$ curl -kvL -X GET http://localhost:8080/api/authn/hello?name=jim -u "user:password" (1)
> GET /api/authn/hello?name=jim HTTP/1.1
> Host: localhost:8080
> Authorization: Basic dXNlcjpwYXNzd29yZA==
>
< HTTP/1.1 302
< Location: https://localhost:8443/api/authn/hello?name=jim
<
* Issue another request to this URL: 'https://localhost:8443/api/authn/hello?name=jim'
...
* Server certificate:
* subject: C=Unknown; ST=Unknown; L=Unknown; O=Unknown; OU=Unknown; CN=localhost
* start date: Jul 18 13:46:35 2020 GMT
* expire date: Jul 16 13:46:35 2030 GMT
* issuer: C=Unknown; ST=Unknown; L=Unknown; O=Unknown; OU=Unknown; CN=localhost
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /api/authn/hello?name=jim HTTP/1.1
> Host: localhost:8443
> Authorization: Basic dXNlcjpwYXNzd29yZA==
>
< HTTP/1.1 200
hello, jim
1 | -L (--location) redirect option causes curl to follow the 302/Redirect response |
238.4. Caution About Redirects
One note of caution I will give about redirects is the tendency for IntelliJ to leave orphan processes, which seems to get worse with the Tomcat redirect in place. Since our targeted interfaces are for API clients — which should have a documented source of how to communicate with our server — there should be no need for the redirect. The redirect is primarily valuable for interfaces that switch between HTTP and HTTPS; we are either all HTTP or all HTTPS, with no need to be eclectic.
Eliminating the optional redirect also eliminates the need for the redirect code and reduces our required steps to obtaining the certificate and setting a few simple properties.
239. Maven Integration Test
Since we are getting close to real deployments to test environments and we have hit unit integration tests pretty hard, I wanted to demonstrate a test of the HTTPS configuration using a true integration test and the Maven Failsafe plugin.
Figure 102. Maven Failsafe Integration Test
A Maven Failsafe integration test is very similar to the other Web API unit integration tests you are used to seeing in this course. The primary difference is that there are no server-side components in the JUnit Spring context. All the server-side components are in a separate executable. The Figure 102 diagram shows the participants that directly help to implement the integration test. This will be accomplished with the aid of the Maven Failsafe, Spring Boot, and Build Helper Maven plugins.
With that said, we will still want to be able to execute simple integration tests like this within the IDE. Therefore, expect some setup in the paragraphs that follow to support both IDE-based and Maven-based integration testing.
239.1. Maven Integration Test Phases
Maven executes integration tests using four (4) phases:
-
pre-integration-test - start resources
-
integration-test - execute tests
-
post-integration-test - stop resources
-
verify - evaluate/assert test results
We will make use of three (3) plugins to perform that work within Maven. Each is also accompanied by steps to mimic the Maven capability on a small scale with the IDE:
-
spring-boot-maven-plugin
- used to start and stop the server-side Spring Boot process-
(use IDE, "java -jar" command, or "mvn spring-boot:run" command to manually start, restart, and stop the server)
-
-
build-helper-maven-plugin
- used to allocate a random network port for server-
(within the IDE you will use a property file with a well-known port#, used one test at a time)
-
-
maven-failsafe-plugin
- used to run the JUnit JVM with the tests — passing in the port# — and verifying/asserting the results.-
(use IDE to run test following server-startup)
-
239.2. Spring Boot Maven Plugin
The spring-boot-maven-plugin will be configured with at least 2 executions to support Maven integration testing.
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<executions>
...
</executions>
</plugin>
239.2.1. SpringBoot: pre-integration-test Phase (start)
The following snippet shows the plugin being used to start the server in the background (versus a blocking run).
The execution is configured to supply a Spring Boot server.port property with the HTTP port to use.
We will use a separate plugin to generate the port number and have that assigned to the Maven server.http.port property at build time.
The client-side Spring Boot Test will also need this port value for the client(s) in the integration tests.
<execution>
<id>pre-integration-test</id> (1)
<phase>pre-integration-test</phase> (2)
<goals>
<goal>start</goal> (3)
</goals>
<configuration>
<skip>${skipITs}</skip> (4)
<arguments> (5)
<argument>--server.port=${server.http.port}</argument>
</arguments>
</configuration>
</execution>
1 | each execution must have a unique ID |
2 | this execution will be tied to the pre-integration-test phase |
3 | this execution will start the server in the background |
4 | -DskipITs=true will deactivate this execution |
5 | --server.port is being assigned at runtime and used by server for HTTP/S listen port |
Failsafe is overriding the fixed value from application-https.properties:
server.port=8443
The above execution phase has the same impact as if we launched the JAR manually with spring.profiles.active and an optional server.port supplied on the command line.
This allows multiple IT tests to run concurrently without colliding on a network port number.
It also permits the use of a well-known/fixed value for use with IDE-based testing.
$ java -jar target/https-hello-example-*-SNAPSHOT.jar --spring.profiles.active=https
Tomcat started on port(s): 8443 (https) with context path ''(1)
$ java -jar target/https-hello-example-*-SNAPSHOT.jar --spring.profiles.active=https --server.port=7712 (2)
Tomcat started on port(s): 7712 (https) with context path '' (2)
1 | Spring Boot using well-known/fixed port# supplied in application-https.properties |
2 | Spring Boot using runtime server.port property to override port to use |
239.2.2. SpringBoot: post-integration-test Phase (stop)
The following snippet shows the Spring Boot plugin being used to stop a running server.
<execution>
<id>post-integration-test</id> (1)
<phase>post-integration-test</phase> (2)
<goals>
<goal>stop</goal> (3)
</goals>
<configuration>
<skip>${skipITs}</skip> (4)
</configuration>
</execution>
1 | each execution must have a unique ID |
2 | this execution will be tied to the post-integration-test phase |
3 | this execution will stop the running server |
4 | -DskipITs=true will deactivate this execution |
skipITs support
Most plugins offer a skip option to bypass a configured execution and sometimes map that to a Maven property that can be expressed on the command line.
Failsafe maps their property to skipITs.
By mapping the Maven skipITs property to the plugin’s skip configuration element, we can inform related plugins to do nothing.
This allows one to run the Maven install phase without requiring integration tests to run and pass.
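For example, the following command runs the build through the install phase while bypassing the integration test executions configured in this lecture.
$ mvn clean install -DskipITs=true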
239.3. Build Helper Maven Plugin
The build-helper-maven-plugin contains various utilities that are helpful to create a repeatable, portable build.
We are using the reserve-network-port goal to select an available HTTP port at build-time.
The allocated port number is assigned to the Maven server.http.port property.
This was shown being picked up by the Spring Boot Maven Plugin earlier.
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<executions>
<execution>
<id>reserve-network-port</id>
<phase>process-resources</phase> (1)
<goals>
<goal>reserve-network-port</goal> (2)
</goals>
<configuration>
<portNames> (3)
<portName>server.http.port</portName>
</portNames>
</configuration>
</execution>
</executions>
</plugin>
1 | execute during the process-resources Maven phase — which is well before pre-integration-test |
2 | execute the reserve-network-port goal of the plugin |
3 | assign the identified port to the Maven server.http.port property |
239.4. Failsafe Plugin
The Failsafe plugin has some default behavior, but once we start configuring it — we need to restate much of what it would have done automatically for us.
<plugin>
<artifactId>maven-failsafe-plugin</artifactId>
<executions>
...
</executions>
</plugin>
239.4.1. Failsafe: integration-test Phase
In the snippet below, we are primarily configuring Failsafe to launch the JUnit test with an it.server.port property.
This will be read in by the ServerConfig @ConfigurationProperties class.
<execution>
<id>integration-test</id>
<phase>integration-test</phase> (1)
<goals> (1)
<goal>integration-test</goal>
</goals>
<configuration>
<includes> (1)
<include>**/*IT.java</include>
</includes>
<systemPropertyVariables> (2)
<it.server.port>${server.http.port}</it.server.port>
</systemPropertyVariables>
<additionalClasspathElements> (3)
<additionalClasspathElement>${basedir}/target/classes</additionalClasspathElement>
</additionalClasspathElements>
<useModulePath>false</useModulePath> (4)
</configuration>
</execution>
1 | re-states some Failsafe defaults relative to phase, goal, and includes |
2 | add a -Dit.server.port=${server.http.port} system property to the execution |
3 | adding target/classes to classpath when JUnit test using classes from "src/main" |
4 | turning off some Java 9 module features |
Full disclosure:
I need to refresh my memory on exactly why the default additionalClasspathElements and useModulePath did not work here.
239.4.2. Failsafe: verify Phase
The snippet below shows the final phase for Failsafe. After the integration resources have been taken down, the only thing left is to assert the results. This pass/fail assertion is delayed a few phases so that the build does not fail while integration resources are still running.
<execution>
<id>verify</id>
<phase>verify</phase>
<goals>
<goal>verify</goal>
</goals>
</execution>
239.5. JUnit @SpringBootTest
With the Maven setup complete — that brings us back to a familiar-looking JUnit test and @SpringBootTest.
However, there are no application or server-side resources in the Spring context.
@SpringBootTest(classes={ClientTestConfiguration.class}, (1)
webEnvironment = SpringBootTest.WebEnvironment.NONE) (2)
@ActiveProfiles({"its"}) (3)
public class HttpsRestTemplateIT {
@Autowired (4)
private RestTemplate authnUser;
@Autowired (5)
private URI authnUrl;
1 | no application class in this integration test. Everything is server-side. |
2 | have only a client-side web environment. No listen port necessary |
3 | activate its profile for scope of test case |
4 | inject RestTemplate configured with user credentials that can authenticate |
5 | inject URL to endpoint test will be calling |
Since we have no RANDOM_PORT and no late @LocalServerPort injection, we can move ServerConfig to the configuration class and inject the baseURL product.
239.6. ClientTestConfiguration
This trimmed-down @Configuration class is all that is needed for the JUnit test to be a client of a remote process.
The @SpringBootTest will demand to have a @SpringBootConfiguration and we technically do not have the @SpringBootApplication during the test.
@SpringBootConfiguration(proxyBeanMethods = false)
@EnableAutoConfiguration
@Slf4j
public class ClientTestConfiguration {
...
@Bean
@ConfigurationProperties("it.server")
public ServerConfig itServerConfig() { ...
@Bean
public URI authnUrl(ServerConfig serverConfig) { ...(1)
@Bean
public RestTemplate authnUser(RestTemplateBuilder builder,...(2)
...
1 | baseUrl of remote server |
2 | RestTemplate with authentication and HTTPS filters applied |
239.7. application-its.properties
The following snippet shows the its profile-specific configuration file, complete with:
-
it.server.scheme (https)
-
it.server.port (8443)
-
trustStore properties pointing at the server-side identity keystore.
it.server.scheme=https
#must match self-signed values in application-https.properties
it.server.trust-store=keystore.p12
it.server.trust-store-password=password
#used in IDE, overridden from command line during failsafe tests
it.server.port=8443 (1)
1 | default port when working in IDE. Overridden by command line properties by Failsafe |
The keystore/truststore used in this example is for learning and testing. Do not store operational certs in the source tree. Those files end up in the searchable CM system and the JARs with the certs end up in a Nexus repository.
239.8. username/password Credentials
The following shows the username and password credentials being injected using values from the properties. In this test’s case — they should always be provided. Therefore, no default String is defined.
public class ClientTestConfiguration {
@Value("${spring.security.user.name}")
private String username;
@Value("${spring.security.user.password}")
private String password;
239.9. ServerConfig
The following shows the primary purpose of ServerConfig as a @ConfigurationProperties class with a flexible prefix.
In this particular case it is being instructed to read in all properties with prefix "it.server" and instantiate a ServerConfig.
@Bean
@ConfigurationProperties("it.server")
public ServerConfig itServerConfig() {
return new ServerConfig();
}
From the property file earlier, you will notice that the URL scheme will be "https" and the port will be "8443" or whatever property override is supplied on the command line.
The resulting value will be injected into the @Configuration class.
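Although ServerConfig is supplied by the course support modules, the following is a minimal sketch of what such a class might look like — the field names are inferred from the it.server.* properties and the accessors used later in this lecture, and the actual class may differ.
import java.net.URI;
import lombok.Data;

@Data
public class ServerConfig {
    private String scheme = "http";
    private String host = "localhost";
    private int port = 8080;
    private String trustStore;
    private char[] trustStorePassword;

    public boolean isHttps() {
        return "https".equalsIgnoreCase(scheme);
    }

    public URI getBaseUrl() {
        //e.g., https://localhost:52024
        return URI.create(String.format("%s://%s:%d", scheme, host, port));
    }
}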
239.10. authnUrl URI
Since we don’t have the late-injected @LocalServerPort for the web-server and our ServerConfig is now all property-based, we can now delegate baseUrls to injectable beans.
The following shows the baseUrl from ServerConfig being used to construct a URL for "api/authn/hello".
@Bean
public URI authnUrl(ServerConfig serverConfig) {
URI baseUrl = serverConfig.getBaseUrl();
return UriComponentsBuilder.fromUri(baseUrl).path("/api/authn/hello").build().toUri();
}
239.11. authnUser RestTemplate
To no surprise, authnUser() is adding a BasicAuthenticationInterceptor containing the injected username and password to a new RestTemplate for use in the test.
The injected ClientHttpRequestFactory will take care of the HTTP/HTTPS details.
@Bean
public RestTemplate authnUser(RestTemplateBuilder builder,
ClientHttpRequestFactory requestFactory) {
RestTemplate restTemplate = builder.requestFactory(
//used to read the streams twice -- so we can use the logging filter below
()->new BufferingClientHttpRequestFactory(requestFactory))
.interceptors(new BasicAuthenticationInterceptor(username, password),
new RestTemplateLoggingFilter())
.build();
return restTemplate;
}
239.12. HTTPS ClientHttpRequestFactory
The HTTPS-based ClientHttpRequestFactory is built by following an excellent short article provided by Geoff Bourne. The following intermediate factory relies on the ability to construct an SSLContext.
import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.springframework.http.client.ClientHttpRequestFactory;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
/*
TLS configuration based on great/short article by Geoff Bourne
https://medium.com/@itzgeoff/using-a-custom-trust-store-with-resttemplate-in-spring-boot-77b18f6a5c39
*/
@Bean
public ClientHttpRequestFactory httpsRequestFactory(SSLContext sslContext,
ServerConfig serverConfig) {
HttpClient httpsClient = HttpClientBuilder.create()
.setSSLContext(serverConfig.isHttps() ? sslContext : null)
.build();
return new HttpComponentsClientHttpRequestFactory(httpsClient);
}
239.13. SSL Context
The SSLContext @Bean factory locates and loads the trustStore based on the properties within ServerConfig.
If found, it uses the SSLContextBuilder from the Apache HTTP libraries to create an SSLContext.
import org.apache.http.ssl.SSLContextBuilder;
import javax.net.ssl.SSLContext;
...
@Bean
public SSLContext sslContext(ServerConfig serverConfig) {
try {
URL trustStoreUrl = null;
if (serverConfig.getTrustStore()!=null) {
trustStoreUrl = HttpsExampleApp.class.getResource("/" + serverConfig.getTrustStore());
if (null==trustStoreUrl) {
throw new IllegalStateException("unable to locate truststore:/" + serverConfig.getTrustStore());
}
}
SSLContextBuilder builder = SSLContextBuilder.create()
.setProtocol("TLSv1.2");
if (trustStoreUrl!=null) {
builder.loadTrustMaterial(trustStoreUrl, serverConfig.getTrustStorePassword());
}
return builder.build();
} catch (Exception ex) {
throw new IllegalStateException("unable to establish SSL context", ex);
}
}
239.14. JUnit @Test
The core parts of the JUnit test are pretty basic once we have the HTTPS/Authn-enabled RestTemplate and baseUrl injected.
From here it is just a normal test, but the activity is remote on the server side.
public class HttpsRestTemplateIT {
@Autowired (1)
private RestTemplate authnUser;
@Autowired (2)
private URI authnUrl;
@Test
public void user_can_call_authenticated() {
//given a URL to an endpoint that accepts only authenticated calls
URI url = UriComponentsBuilder.fromUri(authnUrl)
.queryParam("name", "jim").build().toUri();
//when called with an authenticated identity
ResponseEntity<String> response = authnUser.getForEntity(url, String.class);
//then expected results returned
then(response.getStatusCode()).isEqualTo(HttpStatus.OK);
then(response.getBody()).isEqualTo("hello, jim");
}
}
1 | RestTemplate with authentication and HTTPS aspects addressed using filters |
2 | authnUrl built from ServerConfig and injected into test |
239.15. Maven Verify
When we execute mvn verify (with the option to add clean), we see the port being determined and assigned to the server.http.port Maven property.
$ mvn verify
...
- build-helper-maven-plugin:3.1.0:reserve-network-port (reserve-network-port)
Reserved port 52024 for server.http.port (1)
...
- maven-surefire-plugin:3.0.0-M5:test (default-test) @ https-hello-example --- (2)
...
- spring-boot-maven-plugin:2.4.2:repackage (package) @ https-hello-example ---
Replacing main artifact with repackaged archive
- spring-boot-maven-plugin:2.4.2:start (pre-integration-test) @ https-hello-example --- (3)
1 | the port identified by build-helper-maven-plugin as 52024 |
2 | Surefire tests firing at an earlier test phase |
3 | server starting in the pre-integration-test phase |
239.15.1. Server Output
When the server starts, we can see that the https profile is active and Tomcat was assigned the 52024 port value from the build.
HttpsExampleApp#logStartupProfileInfo:664 The following profiles are active: https (1)
TomcatWebServer#initialize:108 Tomcat initialized with port(s): 52024 (https) (2)
TomcatWebServer#start:220 Tomcat started on port(s): 52024 (https) with context path '' (2)
1 | https profile has been activated on the server |
2 | server HTTP(S) port assigned to 52024 |
239.15.2. JUnit Client Output
When the JUnit client starts, we can see that SSL is enabled and the baseURL contains https and the dynamically assigned port 52024.
HttpsRestTemplateIT#logStartupProfileInfo:664 The following profiles are active: its (1)
ClientTestConfiguration#authnUrl:64 baseUrl=https://localhost:52024 (2)
ClientTestConfiguration#authnUser:107 enabling SSL requests (3)
1 | its profile is active in JUnit client |
2 | baseUrl is assigned https and port 52024, with the latter dynamically assigned at build-time |
3 | SSL has been enabled on client |
239.15.3. JUnit Test DEBUG
There is some DEBUG logged during the activity of the test(s).
GET /api/authn/hello?name=jim, headers=[accept:"text/plain, application/json, application/xml, application/*+json, text/xml, application/*+xml, */*", authorization:"Basic dXNlcjpwYXNzd29yZA==", host:"localhost:52024", connection:"Keep-Alive", user-agent:"masked", accept-encoding:"gzip,deflate"]]
239.15.4. Failsafe Test Results
Test results are reported.
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.409 s - in info.ejava.examples.svc.https.HttpsRestTemplateIT
[INFO] Results:
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
239.15.5. Server is Stopped
Server is stopped.
[INFO] --- spring-boot-maven-plugin:2.4.2:stop (post-integration-test)
[INFO] Stopping application...
15:29:42.178 RMI TCP Connection(4)-127.0.0.1 INFO XBeanRegistrar$SpringApplicationAdmin#shutdown:159 Application shutdown requested.
239.15.6. Test Results Asserted
Test results are asserted.
[INFO]
[INFO] --- maven-failsafe-plugin:3.0.0-M5:verify (verify) @ https-hello-example ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
240. Summary
In this module we learned:
-
the basis of how HTTPS forms trusted, private communications
-
how to generate a self-signed certificate for demonstration use
-
how to enable HTTPS/TLS within our Spring Boot application
-
how to add an optional redirect and why it may not be necessary
-
how to setup and run a Maven Failsafe Integration Test
Assignment 3: Security
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
This is a single assignment that has been broken into incremental, compatible portions based on the completed API assignment. It should be turned in as a single tree of Maven modules that can build from the root level down.
241. Assignment Starter
There is a project within homesales-starter/assignment3-homesales-security/homesales-security-svc
that contains some ground work for the security portions of the assignment.
It contains:
-
a set of
@Configuration
classes nested within theSecurityConfiguration
class. Each@Configuration
class or construct is profile-constrained to match a section of the assignment. -
the shell of a secure HomeSalesService wrapper
It is your choice whether to use the "layered/wrapped" approach (where you implement separate/layered API and security modules) or "embedded/enhanced" approach (where you simply enhance the API solution with the security requirements). -
a
@Configuration
class that instantiates the correct HomeSaleService under the given profile/runtime context. -
base unit integration tests that align with the sections of the assignment and pull in base tests from the support module. Each test case activates one or more profiles identified by the assignment.
The meat of the assignment is focused in the following areas:
-
configuring web security beans to meet the different security levels of the assignment. You may use the deprecated
WebSecurityConfigurerAdapter
or its component-based replacement. Each profile will start over. The starter is configured for the new component-based approach. You will end up copy/pasting filtering rules forward to follow-on configurations of advancing profiles. -
the unit tests are well populated
-
each of the tests has been @Disabled and relies on your test helper from the API tests to map HomeSaleDTO references to your HomeSale DTO class.
-
-
For the other server-side portions of the assignment
-
there is a skeletal
IdentityController
that lays out portions of thewhoAmI
andauthorities
endpoints. -
there is a skeletal
SecureHomeSalesServiceWrapper
and@Configuration
class that can be used to optionally wrap your assignment 2 implementation. The satisfactory alternative is to populate your existing assignment 2 implementations to meet the assignment 3 requirements. Neither path (layer versus enhance in-place) will get you more or less credit. Choose the path that makes the most sense to you.The layered approach provides an excuse to separate the solution into layers of components and provide practice doing so. If you layer your assignment3 over your assignment2 solution as separate modules, you will need to make sure that the dependencies on assignment2 are vanilla JARs and not Spring Boot executable JARs. Luckily the ejava-build-parent
made that easy to do with classifying the executable JAR withbootexec
.
-
242. Assignment Support
The assignment support module(s) in homebuyers-support/homebuyers-support-security again provide some examples and ground work for you to complete the assignment — by adding a dependency.
I used a layered approach to secure Homes and Buyers.
This better highlighted what was needed for security because it removed most of the noise from the assignment 2 functional threads.
It also demonstrated some weaving of components within the auto-configuration.
Adding the dependency on the homebuyers-support-security-svc adds this layer to the homebuyers-support-api-svc components.
The following dependency can replace your current dependency on the homebuyers-support-api-svc.
<dependency>
<groupId>info.ejava.assignments.security.homesales</groupId>
<artifactId>homebuyers-support-security-svc</artifactId>
<version>${ejava.version}</version>
</dependency>
The support module comes with an extensive amount of tests that permit you to focus your attention on the security configuration and security implementation of the HomeSale service. The following test dependency can provide you with many test constructs.
<dependency>
<groupId>info.ejava.assignments.security.homesales</groupId>
<artifactId>homebuyers-support-security-svc</artifactId>
<classifier>tests</classifier>
<version>${ejava.version}</version>
<scope>test</scope>
</dependency>
242.1. High Level View
Between the support (primary and test) modules and starter examples, most of your focus can be placed on completing the security configuration and service implementation to satisfy the security requirements.
The support module provides
-
Home and Buyer services that will operate within your application and will be secured by your security configuration. Necessary internal security checks are included within the Home and Buyer services, but your security configuration will use path-based security access to provide interface access control to these externally provided resources.
The support test modules provide
-
@TestConfiguration
classes that supply the necessary beans for the tests to be completed. -
Test cases that are written to be base classes of
@SpringBootTest
test cases supplied in your assignment. The starter provides most of what you will need for your security tests.
Your main focus should be within the security configuration and HomeSale service implementation classes and running the series of provided tests.
The individual sections of this assignment are associated with one or more Spring profiles.
The profiles are activated by your @SpringBootTest test cases.
The profiles will activate certain test configurations and security configurations you are constructing.
-
Run a test
-
Fix a test result with either a security configuration or service change
-
Rinse and repeat
The tests are written to execute from the sub-class in your area. With ad hoc navigation, sometimes the IDE can get lost — lose the context of the sub-class and provide errors as if there were only the base class. If that occurs — issue a more direct IDE command to run the sub-class to clear the issue.
243. Assignment 3a: Security Authentication
243.1. Anonymous Access
243.1.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of configuring Spring Web Security for authentication requirements. You will:
-
activate Spring Security
-
create multiple, custom authentication filter chains
-
enable open access to static resources
-
enable anonymous access to certain URIs
-
enforce authenticated access to certain URIs
243.1.2. Overview
In this portion of the assignment you will be activating and configuring the security configuration to require authentication for certain resource operations while enabling anonymous access to other resource operations.
243.1.3. Requirements
-
Add a static text file
past_transactions.txt
that will be made available below the/content/
URI. Place the following title in the first line so that the following will be made available from a web client.past_transactions.txt$ curl -X GET http://localhost:8080/content/past_transactions.txt Past HomeSales
Static Resources Served from classpath
By default, static content is served out of the classpath from several named locations including classpath:/static/, classpath:/public/, classpath:/resources/, and classpath:/META-INF/resources/
-
Configure anonymous access for the following resource methods
-
static content below
/content
Configure HttpSecurity to ignore all calls below /content/. Leverage antMatchers() to express the pattern. The /content/** glob indicates "anything below" /content. (A sketch combining these requirements appears after this requirements list.)
. -
HEAD
for all resources -
GET
for home and homeSale resources..Configure HttpSecurity to permit all HEAD
calls matching any URI — whether it exists or not. Leverage theantMatcher()
to express the method and pattern.Configure HttpSecurity to permit all GET
calls matching URIs below homes and homeSales. Leverage theantMatcher()
to express the method and pattern.
-
-
Turn off
CSRF
protections.
Configure HttpSecurity to disable CSRF processing. This will prevent ambiguity between a CSRF or authorization rejection for non-safe HTTP methods.
Configure authenticated access for the following resource operations
-
GET
calls for buyer resources. No one can gain access to a Buyer without being authenticated. -
non-safe calls (
POST
,PUT
, andDELETE
) for homes, buyers, and homeSales resources
Configure HttpSecurity to authenticate any request that was not yet explicitly permitted
-
-
Create a unit integration test case that verifies (provided)
-
anonymous user access granted to static content
-
anonymous user access granted to a
HEAD
call to home and buyer resources -
anonymous user access granted to a
GET
call to home and homeSale resources -
anonymous user access denial to a
GET
call to buyer resources -
anonymous user access denial to non-safe call to each resource type
Denial must be because of authentication requirements and not because of a CSRF failure. Disable CSRF for all API security configurations. All tests will use an anonymous caller in this portion of the assignment. Authenticated access is the focus of a later portion of the assignment.
-
-
Restrict this security configuration to the
anonymous-access
profile and activate that profile while testing this section.
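The following is a minimal sketch of how the rules above might be combined into a single component-based security configuration. The URI patterns and class name are assumptions — match them to your actual resource paths and the starter’s @Configuration classes.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.http.HttpMethod;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration(proxyBeanMethods = false)
@Profile("anonymous-access")
public class AnonymousAccessConfiguration {
    @Bean
    public SecurityFilterChain apiSecurityFilterChain(HttpSecurity http) throws Exception {
        http.csrf(cfg -> cfg.disable()); //avoid CSRF/authorization ambiguity for non-safe methods
        http.authorizeRequests(cfg -> cfg
                .antMatchers("/content/**").permitAll()          //static content open to all
                .antMatchers(HttpMethod.HEAD, "/**").permitAll() //HEAD for any URI
                .antMatchers(HttpMethod.GET, "/api/homes/**", "/api/homesales/**").permitAll()
                .anyRequest().authenticated());                  //everything else requires authn
        return http.build();
    }
}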
243.1.4. Grading
Your solution will be evaluated on:
-
activate Spring Security
-
whether Spring security has been enabled
-
-
create multiple, custom authentication filter chains
-
whether access is granted or denied for different resource URIs and methods
-
-
enable open access to static resources
-
whether anonymous access is granted to static resources below
/content
-
-
enable anonymous access to certain URIs
-
whether anonymous access has been granted to dynamic resources for safe (
GET
) calls
-
-
enforce authenticated access to certain URIs
-
whether anonymous access is denied for dynamic resources for unsafe (
POST
,PUT
, andDELETE
) calls
-
243.1.5. Additional Details
-
No accounts are necessary for this portion of the assignment. All testing is performed using an anonymous caller.
243.2. Authenticated Access
243.2.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of authenticating a client and identifying the identity of a caller. You will:
-
add an authenticated identity to
RestTemplate
orWebClient
client -
locate the current authenticated user identity
243.2.2. Overview
In this portion of the assignment you will be authenticating with the API (using RestTemplate or WebClient) and tracking the caller’s identity within the application.
Your starting point should be an application with functional Homes, Buyers, and HomeSales API services. Homes and Buyers were provided to you. You implemented HomeSales in assignment 2. All functional tests for HomeSales are passing.
243.2.3. Requirements
-
Configure the application to use a test username/password of
ted/secret
during testing with theauthenticated-access
profile.Account properties should only be active when using the
authenticated-access
profile.spring.security.user.name: spring.security.user.password:
Active profiles can be named for individual Test Cases.
@SpringBootTest(...)
@ActiveProfiles({"test", "authenticated-access"})
Property values can be injected into the test configurations in order to supply known values.
@Value("${spring.security.user.name:}") (1) private String username; @Value("${spring.security.user.password:}") private String password;
1 value injected into property if defined — blank if not defined -
Configure the application to support
BASIC
authentication.Configure HttpSecurity to enable HTTP BASIC authentication. -
Turn off
CSRF
protections.Configure HttpSecurity to disable CSRF processing. -
Turn off sessions, and any other filters that prevent API interaction beyond authentication.
Configure HttpSecurity to use Stateless session management and other properties as required. -
Add a new resource
api/whoAmI
-
supply two access methods:
GET
andPOST
. Configure security such that neither require authentication.Configure HttpSecurity to permit all method requests for /api/whoAmI
. No matter which HttpMethod is used. Pay attention to the order of the authorize requests rules definition. -
both methods must determine the identity of the current caller and return that value to the caller. When called with no authenticated identity, the methods should return a String value of “(null)” (open_paren + null + close_paren). A sketch follows this requirements list.
You may inject or programmatically lookup the user details for the caller identity within the server-side controller method.
-
-
Create a set of unit integration tests that demonstrate the following (provided):
-
authentication denial when using a known username but bad password
Any attempt to authenticate with a bad credential will result in a 401/UNAUTHORIZED
error no matter if the resource call requires authentication or not.Credentials can be applied to RestTemplate
using interceptors. -
successful authentication using a valid username/password
-
successful identification of the authenticated caller identity using the
whoAmI
resource operations -
successful authenticated access to POST/create home, buyer, and homeSale resource operations
-
-
Restrict this security configuration to the
authenticated-access
profile and activate that profile during testing this section.
@Configuration(proxyBeanMethods = false)
@Profile({"authenticated-access", "userdetails"}) (1)
public class PartA2_AuthenticatedAccess {
===
@SpringBootTest(classes={...},
webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles({"test", "authenticated-access"}) (2)
public class MyA2_AuthenticatedAccessNTest extends A2_AuthenticatedAccessNTest {
1 activated with the authenticated-access profile
2 activating desired profiles
Establish a security configuration that is active for the
nosecurity
profile which allows for but does not require authentication to call each resource/method. This will be helpful to simplify some demonstration scenarios where security is not essential.
@Configuration(proxyBeanMethods = false)
@Profile("nosecurity") (1)
public class PartA2b_NoSecurity {
===
@SpringBootTest(classes= { ...
@ActiveProfiles({"test", "nosecurity"}) (1)
public class MyA2b_NoSecurityNTest extends A2b_NoSecurityNTest {
The nosecurity profile will allow unauthenticated access to operations that are also free of CSRF checks.
curl -X POST http://localhost:8080/api/homes -H 'Content-Type: application/json' -d '{ ... }'
{ ... }
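The following is one possible sketch of the whoAmI resource referenced above. The class name matches the starter’s skeletal IdentityController, but the body shown here is an assumption, not the graded solution.
import org.springframework.security.core.annotation.AuthenticationPrincipal;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class IdentityController {
    @RequestMapping(path = "/api/whoAmI", method = {RequestMethod.GET, RequestMethod.POST})
    public String whoAmI(@AuthenticationPrincipal UserDetails user) {
        //the injected principal resolves to null for anonymous callers
        return (user == null) ? "(null)" : user.getUsername();
    }
}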
243.2.4. Grading
Your solution will be evaluated on:
-
add an authenticated identity to
RestTemplate
orWebClient
client-
whether you have implemented stateless API authentication (
BASIC
) in the server -
whether you have successfully completed authentication using a Java client
-
whether you have correctly demonstrated and tested for authentication denial
-
whether you have demonstrated granted access to unsafe methods for the home, buyer, and homeSale resources.
-
-
locate the current authenticated user identity
-
whether your server-side components are able to locate the identity of the current caller (authenticated or not).
-
243.3. User Details
243.3.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of assembling a UserDetailsService
to track the credentials of all users in the service.
You will:
-
build a
UserDetailsService
implementation to host user accounts and be used as a source for authenticating users -
build an injectable UserDetailsService
-
encode passwords
243.3.2. Overview
In this portion of the assignment, you will be starting with a security configuration with the authentication requirements of the previous section. The main difference here is that there will be multiple users and your server-side code needs to manage multiple accounts and permit them each to authenticate.
To accomplish this, you will be constructing an AuthenticationManager to provide the user credential authentication required by the policies in the SecurityFilterChain.
Your supplied UserDetailsService will be populated with at least 5 users when the application runs with the userdetails profile.
The homebuyers-support-security-svc module contains a YAML file activated with the userdetails profile. The YAML file expresses the following users with credentials.
These are made available by injecting the Accounts @ConfigurationProperties bean.
-
mary/secret
-
lou/secret
-
murry/secret
-
ted/secret
-
sueann/betty
243.3.3. Requirements
-
Create a
UserDetailsService
that is activated during theuserdetails
profile.The InMemoryUserDetailsManager
is fine for this requirement. You have the option of using other implementation types if you wish but they cannot require the presence of an external resource (i.e., JDBC option must use an in-memory database).-
expose the
UserDetailsService
as a@Bean
that can be injected into other factories.@Bean public UserDetailsService userDetailsService(PasswordEncoder encoder, ...) {
-
populate it with the 5 users from an injected
Accounts
@ConfigurationProperties
beanThe Accounts
bean is provided for you in theProvidedAuthorizationTestHelperConfiguration
in the support module. -
store passwords for users using a BCrypt hash algorithm.
Both the
BCryptPasswordEncoder
and theDelegatingPasswordEncoder
will encrypt with BCrypt.
-
-
Create a unit integration test case that (provided):
-
activates the
userdetails
profile@Configuration(proxyBeanMethods = false) @Profile({"nosecurity","userdetails", "authorities", "authorization"})(1) public class PartA3_UserDetailsPart { === @SpringBootTest(classes={...}, @ActiveProfiles({"test","userdetails"}) (2) public class MyA3_UserDetailsNTest extends A3_UserDetailsNTest {
1 activated with userdetails
profile2 activating desired profiles -
verifies successful authentication and identification of the authenticated username using the
whoAmI
resource -
verifies successful access to one POST/create home, buyer, and homeSale resource operation for each user
The test(s) within the support module provides much/all of this test coverage.
-
243.3.4. Grading
Your solution will be evaluated on:
-
build a
UserDetailsService
implementation to host user accounts and be used as a source for authenticating users-
whether your solution can host credentials for multiple users
-
whether your tests correctly identify the authenticated caller for each of the users
-
whether your tests verify each authenticated user can create a Home, Buyer, and HomeSale
-
-
build an injectable UserDetailsService
-
whether the
UserDetailsService
was exposed using a@Bean
factory
-
-
encode passwords
-
whether the password encoding was explicitly set to create BCrypt hash
-
243.3.5. Additional Details
-
There is no explicit requirement that the
UserDetailsService
be implemented using a database. If you do use a database, use an in-memory RDBMS so that there are no external resources required. -
You may use a DelegatingPasswordEncoder to satisfy the BCrypt encoding requirements, but the value stored must be in BCrypt form.
-
The implementation choice for PasswordEncoder and UserDetailsService is separate from one another and can be made in separate @Bean factories (a fleshed-out sketch follows this list).
@Bean
public PasswordEncoder passwordEncoder() {...}

@Bean
public UserDetailsService userDetailsService(PasswordEncoder encoder, ...) {...}
-
The commonality of tests differentiated by different account properties is made simpler with the use of JUnit
@ParameterizedTest
. However, by default method sources are required to be declared as Java static methods — unable to directly reference beans injected into non-static attributes.@TestInstance(TestInstance.Lifecycle.PER_CLASS)
can be used to allow the method source to be declared a Java non-static method and directly reference the injected Spring context resources. -
You may annotate a
@Test
or@Nested
testcase class with@DirtiesContext
to indicate that the test makes changes that can impact other tests and the Spring context should be rebuilt after finishing.
244. Assignment 3b: Security Authorization
244.1. Authorities
244.1.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of authorities. You will:
-
define role-based and permission-based authorities
244.1.2. Overview
In this portion of the assignment, you will start with the authorization and user details configuration of the previous section and enhance each user with authority information.
You will be assigning authorities to users and verifying them with a unit test. You will add an additional ("authorities") resource to help verify the assertions.
Figure 108. Authorities Test Resource
Figure 109. Assignment Principles and Authorities
The homebuyers-support-security-svc module contains a YAML file activated with the authorities and authorization profiles. The YAML file expresses the following users, credentials, and authorities:
-
mary: ROLE_ADMIN, ROLE_MEMBER
-
lou: ROLE_MGR (no role member)
-
murry: ROLE_MEMBER, PROXY
-
ted: ROLE_MEMBER
-
sueann: ROLE_MEMBER
244.1.3. Requirements
-
Create an
api/authorities
resource with aGET
method-
accepts an "authority" query parameter
-
returns (textual) "TRUE" or "FALSE" depending on whether the caller has the authority assigned
-
-
Create one or more unit integration test cases that (provided):
-
activates the
authorities
profile -
verifies the unauthenticated caller has no authorities assigned
-
verifies each of the authenticated callers has the proper assigned authorities
-
244.1.4. Grading
Your solution will be evaluated on:
-
define role-based and permission-based authorities
-
whether you have assigned required authorities to users
-
whether you have verified an unauthenticated caller does not have identified authorities assigned
-
whether you have verified successful assignment of authorities for authenticated users as clients
-
244.1.5. Additional Details
-
There is no explicit requirement to use a database for the user details in this assignment. However, if you do use a database, please use an in-memory RDBMS instance so there are no external resources required.
-
The repeated tests due to different account data can be simplified using a
@ParameterizedTest
. However, you will need to make use of@TestInstance(TestInstance.Lifecycle.PER_CLASS)
in order to leverage the Spring context in the@MethodSource
.
244.2. Authorization
244.2.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of implementing access control based on authorities assigned for URI paths and methods. You will:
-
implement URI path-based authorization constraints
-
implement annotation-based authorization constraints
-
implement role inheritance
-
implement an AccessDeniedException controller advice to hide sensitive stack trace information and provide useful error information to the caller
244.2.2. Overview
In this portion of the assignment you will be restricting access to specific resource operations based on path and expression-based resource restrictions.
244.2.3. Requirements
-
Update the resources to store the identity of the caller with the resource created to identify the owner
-
Homes will store the username (as username) of the home creator (provided)
-
Buyers will store the username (as username) of the buyer creator (provided)
-
HomeSales should store the username (as username) of the creator of the Home it represents (new)
The Home creator may create the HomeSale or a user with the PROXY permission (murry) may create the HomeSale for the creator of the Home. -
the identity should not be included in the marshalled DTO returned to callers
The DTO may have this field omitted or marked transient so that the element is never included.
-
-
Define access constraints for resources using path and expression-based authorizations. Specific authorization restriction details are in the next requirement.
-
Use path-based authorizations for Home and Buyer resources, assigned to the URIs since you are not working with modifiable source code for these two resources
Configure HttpSecurity to enforce the required roles for homes and buyers calls -
Use expression-based authorizations for homeSales resources, applied to the service methods
Use annotations on HomeSale service methods to trigger authorization checks. Remember to enable global method security for the prePostEnabled annotation form for your application and tests (a sketch follows this requirements list).
-
-
Restrict access according to the following
-
Homes (path-based authorizations)
-
continue to restrict non-safe methods to authenticated users (from Authenticated Access)
-
authenticated users may modify homes that match their login (provided)
-
authenticated users may delete homes that match their login or have the
MGR
role (provided) -
only authenticated users with the
ADMIN
role can delete all homes (new)
-
-
Buyers (path-based authorizations)
-
continue to restrict non-safe methods to authenticated users (from Authenticated Access)
-
authenticated users may create a single Buyer to represent themselves (provided)
-
authenticated users may modify a buyer that matches their login (provided)
-
authenticated users may delete a buyer that matches their login or have the
MGR
role (provided) -
only authenticated users with the
ADMIN
role can delete all buyers (new)
-
-
HomeSales (annotation-based authorizations) — through the use of
@PreAuthorize
and programmatic checks.-
authenticated users may create a HomeSale for an existing Home they created (new)
-
authenticated users with the
MEMBER
role may update (purchase) aHomeSale
with aBuyer
they created (new)i.e. caller "sueann" can update (purchase) existing HomeSale (owned by ted
) with Buyer representing "sueann" -
authenticated users with the
MGR
role orPROXY
permission may update (purchase) a HomeSale with anyBuyer
(new)i.e. caller "lou" and "murry" can update (purchase) any existing HomeSale for any Buyer -
authenticated users may only delete a HomeSale for a Home matching their username (new)
-
authenticated users with the
MGR
role may delete any HomeSale (new) -
authenticated users with the
ADMIN
role may delete all HomeSales (new)
-
-
-
Form the following role inheritance
-
ROLE_ADMIN
inherits fromROLE_MGR
so that users withADMIN
role will also be able to performMGR
role operationsRegister a RoleHierarchy
relationship definition between inheriting roles.
-
-
Implement a mechanism to hide stack trace or other details from the API caller response when an
AccessDeniedException
occurs. From this point forward — stack traces can be logged on the server-side but should not be exposed in the error payload. -
Create unit integration test(s) to demonstrate the behavior defined above
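A minimal sketch of the annotation-based approach follows. The class and method names are placeholders — the point is enabling the prePostEnabled form of global method security and placing @PreAuthorize expressions on the service methods.
import org.springframework.context.annotation.Configuration;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true) //activates @PreAuthorize processing
class MethodSecurityConfiguration {
}

class SecureHomeSalesService {
    //only callers with ROLE_ADMIN (or a role that inherits it) may delete all HomeSales
    @PreAuthorize("hasRole('ADMIN')")
    public void deleteAllHomeSales() {
        //... delegate to the wrapped assignment 2 implementation
    }
}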
244.2.4. Grading
Your solution will be evaluated on:
-
implement URI path-based authorization constraints
-
whether path-based authorization constraints were properly defined and used for Home and Buyer resource URIs
-
-
implement annotation-based authorization constraints
-
whether expression-based authorization constraints were properly defined for HomeSale service methods
-
-
implement role inheritance
-
whether users with the
ADMIN
role were allowed to invoke methods constrained to theMGR
role.
-
-
implement an AccessDeniedException controller advice to hide sensitive stack trace information and provide useful error information to the caller
-
whether stack trace or other excessive information was hidden from the access denied caller response
-
244.2.5. Additional Details
-
Role inheritance can be defined using a
RoleHierarchy
bean.
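A sketch of such a bean — assuming the Spring Security 5.x RoleHierarchyImpl — is shown below.
import org.springframework.context.annotation.Bean;
import org.springframework.security.access.hierarchicalroles.RoleHierarchy;
import org.springframework.security.access.hierarchicalroles.RoleHierarchyImpl;
...
@Bean
public RoleHierarchy roleHierarchy() {
    RoleHierarchyImpl hierarchy = new RoleHierarchyImpl();
    hierarchy.setHierarchy("ROLE_ADMIN > ROLE_MGR"); //ADMIN inherits all MGR operations
    return hierarchy;
}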
245. Assignment 3c: HTTPS
245.1. HTTPS
245.1.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of protecting sensitive data exchanges with one-way HTTPS encryption. You will:
-
generate a self-signed certificate for demonstration use
-
enable HTTPS/TLS within Spring Boot
-
implement a Maven Failsafe integration test using HTTPS
245.1.2. Overview
In this portion of the assignment you will be configuring your server to support HTTPS only when the https
profile is active.
245.1.3. Requirements
-
Implement an HTTPS-only communication in the server when running with the
https
profile.-
package a demonstration certificate in the
src
tree -
supply necessary server-side properties in a properties file
With this in place and the application started with the
authorities
,authorization
, andhttps
profiles active …java -jar target/homesales-security-svc-1.0-SNAPSHOT-bootexec.jar --spring.profiles.active=authorities,authorization,https
should enable the following results
curl -k -X GET https://localhost:8443/api/whoAmI -u "mary:secret" && echo
mary
$ curl -k -X DELETE https://localhost:8443/api/homes -u mary:secret
$ curl -k -X DELETE https://localhost:8443/api/homes -u sueann:betty
{"timestamp":"2022-09-25T00:51:41.200+00:00","status":403,"error":"Forbidden","path":"/api/homes"}
-
-
Implement an IT/Failsafe integration test that will
-
start the server with the authorities, authorization, and https profiles active
The starter module has the Maven pom.xml plugin basics for this.
-
configure and start JUnit with an integration test
The starter module has the Maven pom.xml plugin basics for this.
-
establish an HTTPS connection with RestTemplate or WebClient
The starter module provides a @TestConfiguration class that will instantiate a RestTemplate capable of using HTTPS, and the test activates the "its" Spring profile. You need to supply the details of the HTTPS client properties within that (missing) properties file.
-
successfully invoke using HTTPS and evaluate the result
The starter module provides a skeletal test for you to complete. The actual test performed by you can be any end-to-end communication with the server that uses HTTPS.
-
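The following is a minimal sketch of the kind of keytool command and server-side properties involved, assuming a PKCS12 keystore; the alias, filenames, and password values are illustrative placeholders, not required values.
# generate a self-signed demonstration certificate into a PKCS12 keystore (values illustrative)
keytool -genkeypair -alias demo -keyalg RSA -keysize 2048 \
    -storetype PKCS12 -keystore src/main/resources/keystore.p12 \
    -storepass password -validity 365

# example server-side properties, activated with the https profile (names illustrative)
server.port=8443
server.ssl.enabled=true
server.ssl.key-store=classpath:keystore.p12
server.ssl.key-store-password=password
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=demo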
245.1.4. Grading
Your solution will be evaluated on:
-
generate a self-signed certificate for demonstration use
-
whether a demonstration PKI certificate was supplied for demonstrating the HTTPS capability of the application
-
-
enable HTTPS/TLS within Spring Boot
-
whether the integration test client was able to perform round-trip communication with the server
-
whether the communications used HTTPS protocol
-
245.1.5. Additional Details
-
There is no requirement to implement an HTTP to HTTPS redirect
-
See the svc/svc-security/https-hello-example for a Maven and RestTemplate setup example (a client-side sketch also follows below).
-
Implement the end-to-end integration test with HTTP before switching to HTTPS, to limit making too many concurrent changes.
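For reference, a client-side sketch in the spirit of that example, assuming Apache HttpClient 4.x is on the classpath; trusting self-signed certificates and skipping hostname checks like this is only acceptable in test code.
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.conn.ssl.TrustSelfSignedStrategy;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContextBuilder;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;
...
SSLContext sslContext = SSLContextBuilder.create()
        .loadTrustMaterial((KeyStore) null, new TrustSelfSignedStrategy()) //trust self-signed (tests only)
        .build();
CloseableHttpClient httpClient = HttpClients.custom()
        .setSSLContext(sslContext)
        .setSSLHostnameVerifier(new NoopHostnameVerifier()) //skip hostname checks for localhost
        .build();
RestTemplate restTemplate = new RestTemplate(
        new HttpComponentsClientHttpRequestFactory(httpClient));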
246. Assignment 3d: AOP and Method Proxies
In this assignment, we are going to use some cross-cutting aspects of Spring AOP and dynamic capabilities of Java reflection to modify component behavior. As a specific example, you are going to modify Home and Buyer service behavior without changing the Homes/Buyers source code. We are going to add a requirement that certain fields be null and others be non-null.
The first two sections (reflection and dynamic proxies) of the AOP assignment lead up to the final solution (aspects) in the third section.
No Homes/Buyers Compilation Dependencies
The src/main portions of the assignment must have no compilation dependency on Homes and Buyers.
All compilation dependencies and most knowledge of Homes and Buyers will be in the JUnit tests.
246.1. Reflection
246.1.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of using reflection to obtain and invoke a method proxy. You will:
-
obtain a method reference and invoke it using Java Reflection
246.1.2. Overview
In this portion of the assignment you will implement a set of helper methods for a base class (NullPropertyAssertion) located in the homesales-support-aop module, tasked with validating whether objects have nulls for an identified property.
If the assertion fails, an exception will be thrown by the base class.
Your derived class will assist in locating the method reference to the "getter" and invoking it to obtain the current property value.
You will find an implementation class shell, @Configuration class with @Bean factory, and JUnit test in the assignment 3 security "starter".
No Home/Buyer Compilation Dependency
Note that you see no mention of Home or Buyer in the above description/diagram.
Everything will be accomplished using Java reflection.
You will need to create a dependency on the Spring Boot AOP starter, the ejava AOP support JAR, and AOP test JAR.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
<groupId>info.ejava.assignments.aop.homesales</groupId>
<artifactId>homesales-support-aop</artifactId>
<version>${ejava.version}</version>
</dependency>
<dependency>
<groupId>info.ejava.assignments.aop.homesales</groupId>
<artifactId>homesales-support-aop</artifactId>
<classifier>tests</classifier>
<version>${ejava.version}</version>
<scope>test</scope>
</dependency>
246.1.3. Requirements
-
implement the getGetterMethod() to locate and return the java.lang.reflect.Method for the requested (getter) method name.
-
return the Method object if it exists
-
use an Optional<Method> return type and return an Optional.empty() if it does not exist. It is not considered an error if the getterName requested does not exist; JUnit tests, which know the full context of the call, will decide if the result is correct or not. (A sketch of the reflection involved appears after these requirements.)
-
-
implement the getValue() method to return the value reported by invoking the getter method using reflection
-
return the value returned
-
report any exception thrown as a runtime exception
Any exception calling an existing getter is unexpected and should be reported as a (subclass of) RuntimeException to the higher-level code to indicate this type of abnormality
-
-
use the supplied JUnit unit tests to validate your solution. There is no Spring context required/used for this test.
The tests will use non-HomeDTO/BuyerDTO objects on purpose. The validator under test must work with any type of object passed in. Only "getter" access will be supported for this capability. No "field" access will be performed.
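A minimal sketch of the reflection involved in these two helpers; the exact signatures come from the support module's base class, so the parameter lists below are illustrative assumptions.
import java.lang.reflect.Method;
import java.util.Optional;
...
public Optional<Method> getGetterMethod(Object target, String getterName) {
    try {
        //getters take no arguments, so no parameter types are supplied
        return Optional.of(target.getClass().getMethod(getterName));
    } catch (NoSuchMethodException ex) {
        return Optional.empty(); //a missing getter is not an error here
    }
}

public Object getValue(Object target, Method getter) {
    try {
        return getter.invoke(target);
    } catch (Exception ex) {
        //an exception calling an existing getter is unexpected -- surface as unchecked
        throw new IllegalStateException("error calling " + getter.getName(), ex);
    }
}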
246.1.4. Grading
Your solution will be evaluated on:
-
obtain a method reference and invoke it using Java Reflection
-
whether you are able to return a reference to the specified getter method using reflection
-
whether you are able to obtain the current state of the property using the getter method reference using reflection
-
246.1.5. Additional Details
-
The JUnit test (MyD1_ReflectionMethodTest) is supplied and should not need to be modified beyond enabling it. -
The tests will feed your solution with DTO instances in various valid and invalid states according to the isNull and notNull calls. There will also be some non-DTO classes used to verify the logic is suitable for generic parameter types.
246.2. Dynamic Proxies
246.2.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of implementing a dynamic proxy. You will:
-
create a JDK Dynamic Proxy to implement adhoc interface(s) to form a proxy at runtime for implementing advice
246.2.2. Overview
In this portion of the assignment you will implement a dynamic proxy that will invoke the NullPropertyAssertion from the previous part of the assignment.
The primary work will be in implementing an InvocationHandler that will provide the "advice" to the target object for selected methods.
The advice will be a null check of specific properties of objects passed as parameters to the target object.
The constructed Proxy will be used as a stepping stone to better understand the Aspect solution you will implement in the next section.
The handler/proxy will not be used in the final solution.
It will be instantiated and tested within the JUnit test using an injected HomesService and BuyersService from the Spring context.
No HomeDTO/BuyerDTO Compilation Dependency
There will be no mention of or direct dependency on Homes or Buyers in your solution.
Everything will be accomplished using Java reflection.
246.2.3. Requirements
-
implement the details for the NullValidatorHandler class that
-
implements the java.lang.reflect.InvocationHandler interface
-
has implementation properties
-
nullPropertyAssertion — implements the check (from previous section)
-
target object that it is the proxy for
-
methodName it will operate against on the target object
-
nullProperties — propertyNames it will test for null (used also in previous section)
-
nonNullProperties — propertyNames it will test for non-null (used also in previous section)
-
-
has a static builder method called newInstance that accepts the above properties and returns a java.lang.reflect.Proxy implementing all interfaces of the target.
In order for the tests to locate your factory method, it must have the following exact signature.
//NullPropertyAssertion.class, Object.class, String.class, List.class, List.class
public static <T> T newInstance(
        NullPropertyAssertion nullPropertyAssertion,
        T target,
        String methodName,
        List<String> nullProperties,
        List<String> nonNullProperties) {
org.apache.commons.lang3.ClassUtils can be used to locate all interfaces of a class. -
implements the invoke() method of the InvocationHandler interface to validate the arguments passed to the method matching the methodName. Checks properties matching nullProperties and nonNullProperties.
nullPropertyAssertion.assertConditions(arg, isNullProperties, true);
nullPropertyAssertion.assertConditions(arg, nonNullProperties, false);
-
if the arguments are found to be in error — let the nullPropertyAssertion throw its exception
-
otherwise allow the call to continue to the target object/method with provided args
Validator limits: Each proxy instance will only validate parameters of the named method, but all parameters to that method. There is no way to distinguish between param1.propertyName and param2.propertyName with our design, and we will not need to.
-
-
-
Use the provided JUnit tests (MyD2_DynamnicProxyNTest) to verify the functionality of the requirements above once you activate them. A sketch of one possible invoke() shape follows below.
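To make the handler concrete, here is a minimal sketch of an invoke() consistent with the requirements above; the field names mirror the listed properties, and the delegation call must go to the raw target, never the proxy.
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    //only advise the configured method; all others pass straight through
    if (method.getName().equals(methodName)) {
        for (Object arg : args) {
            nullPropertyAssertion.assertConditions(arg, nullProperties, true);
            nullPropertyAssertion.assertConditions(arg, nonNullProperties, false);
        }
    }
    try {
        return method.invoke(target, args); //delegate to the raw target instance
    } catch (java.lang.reflect.InvocationTargetException ex) {
        throw ex.getCause(); //surface the target's original exception
    }
}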
246.2.4. Grading
Your solution will be evaluated on:
-
create a JDK Dynamic Proxy to implement adhoc interface(s) to form a proxy at runtime for implementing advice
-
whether you created a
java.lang.reflect.InvocationHandler
that would perform the validation on proxied targets -
whether you successfully created and returned a dynamic proxy from the
newInstance
factory method -
whether your returned proxy was able to successfully validate arguments passed to target
-
246.2.5. Additional Details
-
The JUnit test case uses real HomeService and BuyerService @Autowired from the Spring context, but only instantiates the proxy as a POJO within the test case.
-
The JUnit tests make calls to the HomeService and BuyerService, passing valid and invalid instances according to how your proxy was instantiated during the call to
newInstance()
. -
The completed dynamic proxy will not be used beyond this section of the assignment. However, try to spot what the dynamic proxy has in common with the follow-on Aspect solution — since Spring interface Aspects are implemented using Dynamic Proxies.
246.3. Aspects
246.3.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of adding functionality to a point in code using an Aspect. You will:
-
implement dynamically assigned behavior to methods using Spring Aspect-Oriented Programming (AOP) and AspectJ
-
identify method join points to inject using pointcut expressions
-
implement advice that executes before join points
-
implement parameter injection into advice
246.3.2. Overview
In this portion of the assignment you will implement a ValidatorAspect that will "advise" service calls to provided secure Home/Buyer wrapper services.
The aspect will be part of your overall Spring context and will be able to change the behavior of the Homes and Buyers used within your runtime application and tests when activated. The aspect will specifically reject any object passed to these services that violate defined create/update constraints. The aspect will be defined to match the targeted methods and arguments but will have no compilation dependency that is specific to Homes or Buyers.
No HomeDTO/BuyerDTO Compilation Dependency
There will be no direct dependency on Homes or Buyers.
Everything will be accomplished using AOP expressions and Java reflection constructs.
246.3.3. Requirements
-
add AOP dependencies using the spring-boot-starter-aop module
-
enable AspectJ auto proxy handling within your application
-
create a ValidatorAspect component (see the skeletal sketch after these requirements)
-
inject a NullPropertyAssertion bean and List<MethodConstraints> (populated from application-aop.yaml from the support module)
-
make the class an Aspect
-
make the overall component conditional on the aop profile
This means the Home and Buyer services will act as delivered when the aop profile is not active and enforce the new constraints when the profile is active.
-
-
define a "pointcut" that will target the calls to the SecureHomesWrapper and SecureBuyersWrapper services. This pointcut should both:
-
define a match pattern for the "join point"
-
-
define an "advice" method to execute before the "join point"
-
uses the previously defined "pointcut" to identify its "join point"
-
uses typed or dynamic advice parameters
Using typed advice parameters will require a close relationship between advice and service methods. Using dynamic advice parameters (JoinPoint) will allow a single advice method to be used for all service methods. The former is more appropriate for targeted behavior. The latter is more appropriate for general purpose behavior. -
invokes the NullPropertyAssertion bean with the parameters passed to the method and asks to validate for isNull and notNull.
nullPropertyAssertion.assertConditions(arg, conditions.getIsNull(), true);
nullPropertyAssertion.assertConditions(arg, conditions.getNotNull(), false);
-
-
Use the provided JUnit test cases to verify completion of the above requirements
-
the initial MyD3a1_NoAspectSvcNTest deactivates the aop profile to demonstrate baseline behavior we want to change/not-change
-
the second MyD3a2_AspectSvcNTest activates the aop profile and asserts different results
-
the third MyD3b_AspectWebNTest demonstrates that the aspect was added to the beans injected into the Homes and Buyers controllers.
If AspectSvcNTest passes and AspectWebNTest fails, check that you are advising the secure wrapper and not the base service.
-
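A skeletal sketch of the component shape described above; the pointcut expression and wrapper class names are illustrative assumptions, so match them to your actual packages and the support module's MethodConstraints accessors.
import java.util.List;
import lombok.RequiredArgsConstructor;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;

@Aspect
@Component
@Profile("aop") //services behave as delivered unless the aop profile is active
@RequiredArgsConstructor
public class ValidatorAspect {
    private final NullPropertyAssertion nullPropertyAssertion;
    private final List<MethodConstraints> methodConstraints;

    //illustrative pointcut -- adjust the package/class patterns to your wrappers
    @Pointcut("execution(* *..SecureHomesWrapper.*(..)) || execution(* *..SecureBuyersWrapper.*(..))")
    public void secureWrapperMethods() {}

    @Before("secureWrapperMethods()")
    public void validate(JoinPoint jp) {
        methodConstraints.stream()
            .filter(c -> c.getMethodName().equals(jp.getSignature().getName()))
            .forEach(c -> {
                for (Object arg : jp.getArgs()) {
                    nullPropertyAssertion.assertConditions(arg, c.getIsNull(), true);
                    nullPropertyAssertion.assertConditions(arg, c.getNotNull(), false);
                }
            });
    }
}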
246.3.4. Grading
Your solution will be evaluated on:
-
implement dynamically assigned behavior to methods using Spring Aspect-Oriented Programming (AOP) and AspectJ
-
whether you activated aspects in your solution
-
whether you supplied an aspect as a component
-
-
identify method join points to inject using pointcut expressions
-
whether your aspect class contained a pointcut that correctly matched the target method join points
-
-
implement advice that executes before join points
-
whether your solution implements required validation before allowing target to execute
-
whether your solution will allow the target to execute if validation passes
-
whether your solution will prevent the target from executing and report the error if validation fails
-
-
implement parameter injection into advice
-
whether you have implemented typed or dynamic access to the arguments passed to the target method
-
246.3.5. Additional Details
-
You are to base your AOP validation on the data found within the injected List<MethodConstraints>. Each instance contains the name of the method and a list of property names that should be subject to isNull or notNull assertions.
application-aop.yaml (in support)
aop:
  validation:
    - methodName: createHome
      isNull: [id, username]
      notNull: [value, yearBuilt, bedRooms]
    - methodName: updateHome
      isNull: [username]
    - methodName: createBuyer
      isNull: [id, username]
      notNull: [firstName, lastName, email]
    - methodName: updateBuyer
      isNull: [username]
-
The JUnit test case uses real, secured HomesService and BuyerServices @Autowired from the Spring context augmented with aspects. Your aspect will be included when the aop profile is active.
The JUnit tests will invoke the security service wrappers directly and through the controllers, calling with valid and invalid parameters according to the definition in the application-aop.yaml file.
Note: The Home and Buyer services have some base validation built in and will not accept a non-null ID during the create calls. You cannot change that behavior and will not have to.
Ungraded activity — create a breakpoint in the Advice and Homes/BuyersService(s) when executing the tests. Observe the call stack to see how you got to that point, where you are headed, and what else is in that call stack.
Spring AOP and Method Proxies
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
247. Introduction
Many times, business logic must execute additional behavior that is outside of its core focus. For example, auditing, performance metrics, transaction control, retry logic, etc. We need a way to bolt on additional functionality ("advice") without knowing what the implementation code ("target") will be, what interfaces it will implement, or even if it will implement an interface.
Frameworks must solve this problem every day. To fully make use of advanced frameworks like Spring and Spring Boot, it is good to understand and be able to implement solutions using some of the dynamic behavior available like:
-
Java Reflection
-
Dynamic (Interface) Proxies
-
CGLIB (Class) Proxies
-
Aspect Oriented Programming (AOP)
247.1. Goals
You will learn:
-
to decouple potentially cross-cutting logic away from core business code
-
to obtain and invoke a method reference
-
to wrap add-on behavior to targets in advice
-
to construct and invoke a proxy object containing a target reference and decoupled advice
-
to locate callable join point methods in a target object and apply advice at those locations
247.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
obtain a method reference and invoke it using Java Reflection
-
create a JDK Dynamic Proxy to implement adhoc interface(s) to form a proxy at runtime for implementing advice
-
create a CGLIB Proxy to dynamically create a subclass to form a proxy at runtime for implementing advice
-
implement dynamically assigned behavior to methods using Spring Aspect-Oriented Programming (AOP) and AspectJ
-
identify method join points to inject using pointcut expressions
-
implement advice that executes before, after, and around join points
-
implement parameter injection into advice
248. Rationale
Our problem starts off with two independent classes depicted as ClassA and ClassB and a caller labelled as Client. doSomethingA() is unrelated to doSomethingB() but may share some current or future things in common — like transactions, database connections, or auditing requirements.
Figure 115. New Cross-Cutting Design Decision
We come to a point where reuse is good, but depending on how you reuse, it may get you in trouble.
248.1. Adding More Cross-Cutting Capabilities
Of course, it does not end there, and we have established what could be a bad pattern of coupling the core business code to each new cross-cutting capability.
What other choice do we have?
Figure 116. More Cross-Cutting Capabilities
248.2. Using Proxies
What we can do instead is leave the business classes alone and move the cross-cutting logic behind proxies that stand in for them. However, there is a slight flaw to overcome.
We need to tie these unrelated parts together. Lets begin to solve this with Java Reflection.
249. Reflection
Java Reflection provides a means to examine a Java class and determine facts about it that can be useful in describing it and invoking it.
Lets say I am in generic proxy code and need to invoke doSomethingA() or doSomethingB() without compile-time knowledge of ClassA or ClassB.
We can use Java Reflection to solve this problem by
-
inspecting the target object’s class (ClassA or ClassB) to obtain a reference to the method (doSomethingA() or doSomethingB()) we wish to call
-
identify the arguments to be passed to the call
-
identify the target object to call
Let’s take a look at this in action.
249.1. Reflection Method
Java Reflection provides the means to obtain a handle to Fields and Methods of a class. In the example below, I show code that obtains a reference to the createItem method, in the ItemsService interface, accepting objects of type ItemDTO.
import info.ejava.examples.svc.aop.items.services.ItemsService;
import java.lang.reflect.Method;
...
Method method = ItemsService.class.getMethod("createItem", ItemDTO.class); (1)
log.info("method: {}", method);
...
1 | getting reference to method within ItemsService interface |
Java Class has numerous methods that allow us to inspect interfaces and classes
for fields, methods, annotations, and related types (e.g., inheritance).
getMethod() looks for a method with the String name ("createItem") provided that accepts the supplied type(s) (ItemDTO). The parameter types are passed as a varargs array, so we can pass in as many types as necessary to match the intended call.
The result is a Method
instance that we can use to refer to the specific method
to be called — but not the target object or specific argument values.
method: public abstract info.ejava.examples.svc.aop.items.dto.ItemDTO
info.ejava.examples.svc.aop.items.services.ItemsService.createItem(
info.ejava.examples.svc.aop.items.dto.ItemDTO)
249.2. Calling Reflection Method
We can invoke the Method reference with a target object and arguments
and receive the response as a java.lang.Object
.
import info.ejava.examples.svc.aop.items.dto.BedDTO;
import info.ejava.examples.svc.aop.items.services.ItemsService;
import java.lang.reflect.Method;
...
ItemsService<BedDTO> bedsService = ... (1)
Method method = ...
//invoke method using target object and args
Object[] args = new Object[] { BedDTO.bedBuilder().name("Bunk Bed").build() }; (2)
log.info("invoke calling: {}({})", method.getName(), args);
Object result = method.invoke(bedsService, args); (3)
log.info("invoke {} returned: {}", method.getName(), result);
1 | we must obtain a target object to invoke |
2 | arguments are passed into invoke() using a varargs array |
3 | invoke the method on the object and obtain the result |
invoke calling: createItem([BedDTO(super=ItemDTO(id=0, name=Bunk Bed))])
invoke createItem returned: BedDTO(super=ItemDTO(id=1, name=Bunk Bed))
249.3. Reflection Method Result
The end result is the same as if we called the BedsServiceImpl
directly.
//obtain result from invoke() return
BedDTO createdBed = (BedDTO) result;
log.info("created bed: {}", createdBed);----
created bed: BedDTO(super=ItemDTO(id=1, name=Bunk Bed))
There, of course, is more to Java Reflection than can fit into a single example — but lets now take that fundamental knowledge of a Method reference and use it to form some more encapsulated proxies using JDK Dynamic (Interface) Proxies and CGLIB (Class) Proxies.
250. JDK Dynamic Proxies
The JDK offers a built-in mechanism for creating dynamic proxies for interfaces. These are dynamically generated classes that, when instantiated at runtime, are assigned an arbitrary set of interfaces to implement. This allows the generated proxy class instances to be passed around in the application, masquerading as the type(s) they are a proxy for. This is useful in frameworks to implement features for implementation types they will have no knowledge of until runtime. This eliminates the need for compile-time generated proxies. [50]
250.1. Creating Dynamic Proxy
We create a JDK Dynamic Proxy using the static newProxyInstance()
method
of the java.lang.reflect.Proxy
class. It takes three arguments: the classloader
for the supplied interfaces, the interfaces to implement, and handler to implement
the custom advice details of the proxy code and optionally complete the intended call
(e.g., security policy check handler).
In the example below, GrillServiceImpl
extends ItemsServiceImpl<T>
, which implements
ItemsService<T>
. We are creating a dynamic proxy that will implement
that interface and delegate to an advice instance of MyInvocationHandler
that we write.
import info.ejava.examples.svc.aop.items.aspects.MyDynamicProxy;
import info.ejava.examples.svc.aop.items.services.GrillsServiceImpl;
import info.ejava.examples.svc.aop.items.services.ItemsService;
import java.lang.reflect.Proxy;
...
ItemsService<GrillDTO> grillService = new GrillsServiceImpl(); (1)
ItemsService<GrillDTO> grillServiceProxy = (ItemsService<GrillDTO>)
Proxy.newProxyInstance( (2)
grillService.getClass().getClassLoader(),
new Class[]{ItemsService.class}, (3)
new MyInvocationHandler(grillService) (4)
);
log.info("created proxy {}", grillServiceProxy.getClass());
log.info("handler: {}",
Proxy.getInvocationHandler(grillServiceProxy).getClass());
log.info("proxy implements interfaces: {}",
ClassUtils.getAllInterfaces(grillServiceProxy.getClass()));
1 | create target implementation object unknown to dynamic proxy |
2 | instantiate dynamic proxy instance and underlying dynamic proxy class |
3 | identify the interfaces implemented by the dynamic proxy class |
4 | provide advice instance that will handle adding proxy behavior and invoking target instance |
250.2. Generated Dynamic Proxy Class Output
The output below shows the $Proxy86
class that was dynamically created
and that it implements the ItemsService
interface and will delegate to
our custom MyInvocationHandler
advice.
created proxy: class com.sun.proxy.$Proxy86
handler: class info.ejava.examples.svc.aop.items.aspects.MyInvocationHandler
proxy implements interfaces:
[interface info.ejava.examples.svc.aop.items.services.ItemsService, (1)
interface java.io.Serializable] (2)
1 | ItemService interface supplied at runtime |
2 | Serializable interface implemented by DynamicProxy implementation class |
250.3. Alternative Proxy All Construction
Alternatively, we can write a convenience builder that simply forms a proxy for all implemented interfaces of the target instance. The Apache Commons ClassUtils utility class is used to obtain a list of all interfaces implemented by the target object’s class and parent classes.
import org.apache.commons.lang3.ClassUtils;
...
@RequiredArgsConstructor
public class MyInvocationHandler implements InvocationHandler {
private final Object target;
public static Object newInstance(Object target) {
return Proxy.newProxyInstance(target.getClass().getClassLoader(),
ClassUtils.getAllInterfaces(target.getClass()).toArray(new Class[0]),(1)
new MyInvocationHandler(target));
}
1 | Apache Commons ClassUtils used to obtain all interfaces for target object |
250.4. InvocationHandler Class
JDK Dynamic Proxies require an instance that implements the InvocationHandler
interface to implement the custom work (aka "advice") and delegate the call to the target instance (aka "around advice").
This is a class that we write. The InvocationHandler
interface defines a single reflection-oriented invoke()
method taking the proxy, method, and arguments to the call.
Construction of this object is up to us — but the raw target object is likely a minimum requirement — as we will need that to make a clean, delegated call.
...
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
...
@RequiredArgsConstructor
public class MyInvocationHandler implements InvocationHandler { (1)
private final Object target; (2)
@Override
public Object invoke(Object proxy, Method method, Object[] args)
throws Throwable { (3)
//proxy call
}
}
1 | class must implement InvocationHandler |
2 | raw target object to invoke |
3 | invoke() is provided reflection information for call |
250.5. InvocationHandler invoke() Method
The invoke()
method performs any necessary advice before or after the proxied call
and uses standard method reflection to invoke the target method.
You should recall the Method
class from the earlier discussion on Java Reflection.
The response or thrown exception can be directly returned or thrown from this method.
@Override
public Object invoke(Object proxy, Method method, Object[] args)
throws Throwable {
//do work ...
log.info("invoke calling: {}({})", method.getName(), args);
Object result = method.invoke(target, args);
//do work ...
log.info("invoke {} returned: {}", method.getName(), result);
return result;
}
Must invoke raw target instance — not the proxy
Calling the supplied proxy instance versus the raw target instance would
result in a circular loop. We must somehow have a reference to the raw target
to be able to directly invoke that instance.
250.6. Calling Proxied Object
The following is an example of the proxied object being called using its implemented interface.
GrillDTO createdGrill = grillServiceProxy.createItem(
GrillDTO.grillBuilder().name("Broil King").build());
log.info("created grill: {}", createdGrill);
The following shows that the call was made to the target object,
work was able to be performed before and after the call within the
InvocationHandler
, and the result was passed back as the result
of the proxy.
invoke calling: createItem([GrillDTO(super=ItemDTO(id=0, name=Broil King))]) (1)
invoke createItem returned: GrillDTO(super=ItemDTO(id=1, name=Broil King)) (2)
created grill: GrillDTO(super=ItemDTO(id=1, name=Broil King)) (3)
1 | work performed within the InvocationHandler advice prior to calling target |
2 | work performed within the InvocationHandler advice after calling target |
3 | target method’s response returned to proxy caller |
JDK Dynamic Proxies are definitely a level up from constructing and calling Method directly as we did with straight Java Reflection. They are the proxy type of choice within Spring but have the limitation that they can only be used to proxy interface-based objects, not classes that lack interfaces.
If we need to proxy a class that does not implement an interface, CGLIB is an option.
251. CGLIB
Code Generation Library (CGLIB) is a byte instrumentation library that allows the manipulation or creation of classes at runtime. [51]
Where JDK Dynamic Proxies implement a proxy behind an interface, CGLIB dynamically implements a sub-class of the class proxied.
This library has been fully integrated into spring-core
, so there is nothing
additional to add to begin using it directly (and indirectly when we get to Spring AOP).
251.1. Creating CGLIB Proxy
The following code snippet shows a CGLIB proxy being constructed for a ChairsServiceImpl
class that implements no interfaces.
Take note that there is no separate target instance — our generated proxy class will
be a subclass of ChairsServiceImpl
and it will be part of the target instance.
The real target will be in the base class of the instance.
We register an instance of MethodInterceptor
to handle
the custom advice and optionally complete the call. This is a class that we write when authoring
CGLIB proxies.
...
import info.ejava.examples.svc.aop.items.aspects.MyMethodInterceptor;
import info.ejava.examples.svc.aop.items.services.ChairsServiceImpl;
import org.springframework.cglib.proxy.Enhancer;
...
Enhancer enhancer = new Enhancer();
enhancer.setSuperclass(ChairsServiceImpl.class); (1)
enhancer.setCallback(new MyMethodInterceptor()); (2)
ChairsServiceImpl chairsServiceProxy = (ChairsServiceImpl)enhancer.create(); (3)
log.info("created proxy: {}", chairsServiceProxy.getClass());
log.info("proxy implements interfaces: {}",
ClassUtils.getAllInterfaces(chairsServiceProxy.getClass()));
1 | create CGLIB proxy as sub-class of target class |
2 | provide instance that will handle adding proxy advice behavior and invoking base class |
3 | instantiate CGLIB proxy — this is our target object |
The following output shows that the proxy class is of a CGLIB proxy type and
implements no known interface other than the CGLIB Factory
interface.
Note that we were able to successfully cast this proxy to the ChairsServiceImpl
type — the assigned base class of the dynamically built proxy class.
created proxy: class info.ejava.examples.svc.aop.items.services.ChairsServiceImpl$$EnhancerByCGLIB$$a4035db5
proxy implements interfaces: [interface org.springframework.cglib.proxy.Factory] (1)
1 | Factory interface implemented by CGLIB proxy implementation class |
251.2. MethodInterceptor Class
To intelligently process CGLIB callbacks, we need to supply an advice class that implements
MethodInterceptor
. This gives us access to the proxy instance being invoked,
the reflection Method
reference, call arguments, and a new type of parameter — MethodProxy
, which is a reference to the target method implementation
in the base class.
...
import org.springframework.cglib.proxy.MethodInterceptor;
import org.springframework.cglib.proxy.MethodProxy;
import java.lang.reflect.Method;
public class MyMethodInterceptor implements MethodInterceptor {
@Override
public Object intercept(Object proxy, Method method, Object[] args,
MethodProxy methodProxy) (1)
throws Throwable {
//proxy call
}
}
1 | additional method used to invoke target object implementation in base class |
251.3. MethodInterceptor intercept() Method
The details of the intercept()
method are much like the other proxy techniques
we have looked at and will look at in the future. The method has a chance to
do work before and after calling the target method, optionally calls the target method,
and returns the result. The main difference is that this proxy is operating within
a subclass of the target object.
import org.springframework.cglib.proxy.MethodProxy;
import java.lang.reflect.Method;
...
@Override
public Object intercept(Object proxy, Method method, Object[] args,
MethodProxy methodProxy) throws Throwable {
//do work ...
log.info("invoke calling: {}({})", method.getName(), args);
Object result = methodProxy.invokeSuper(proxy, args); (1)
//do work ...
log.info("invoke {} returned: {}", method.getName(), result);
//return result
return result;
}
1 | invoking target object implementation in base class |
251.4. Calling CGLIB Proxied Object
The net result is that we are still able to reach the target object’s method and also have the additional capability implemented around the call of that method.
ChairDTO createdChair = chairsServiceProxy.createItem(
ChairDTO.chairBuilder().name("Recliner").build());
log.info("created chair: {}", createdChair);
invoke calling: createItem([ChairDTO(super=ItemDTO(id=0, name=Recliner))])
invoke createItem returned: ChairDTO(super=ItemDTO(id=1, name=Recliner))
created chair: ChairDTO(super=ItemDTO(id=1, name=Recliner))
252. Interpose
OK — all that dynamic method calling was interesting, but what sets all that up? Why do we see proxies sometimes and not other times in our Spring application? We will get to the setup in a moment, but lets first address when we can expect this type of behavior to be magically set up for us and when not. What occurs automatically is primarily a matter of "interpose". Interpose is a term used when we have a chance to insert a proxy in between the caller and target object. The following diagram depicts three scenarios: buddy methods of the same class, calling a method of a manually instantiated class, and calling a method of an injected object.
-
Buddy Method: For ClassA with m1() and m2() in the same class, Spring will normally not attempt to interpose a proxy in between those two methods (e.g., @PreAuthorize, @Cacheable). It is a straight Java call between methods of a class. That means no matter what annotations and constraints we define for m2(), they will not be honored unless they are also on m1(). There is at least one exception for buddy methods, for @Configuration(proxyBeanMethods=true) — where a CGLIB proxy class will intercept calls between @Bean methods to prevent direct buddy method calls from instantiating independent POJO instances per call (i.e., not singleton components). -
Manually Instantiated: For ClassB where m2() has been moved to a separate class but manually instantiated, no interpose takes place. This is a straight Java call between methods of two different classes. That also means that no matter what annotations are defined for m2(), they will not be honored unless they are also in place on m1(). It does not matter that ClassC is annotated as a @Component since ClassB.m1() manually instantiated it versus obtaining it from the Spring Context. -
Injected: For ClassD, an instance of ClassC is injected. That means that the injected object has a chance to be a proxy class (either JDK Dynamic Proxy or CGLIB Proxy) to enforce the constraints defined on ClassC.m2().
Keep this in mind as you work with various Spring configurations and review the following sections.
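As a concrete illustration of the interpose rules above, here is a minimal hypothetical sketch; ClassA, ClassC, and ClassD mirror the diagram, and @Transactional stands in for any advice-backed annotation.
import lombok.RequiredArgsConstructor;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ClassA {
    public void m1() {
        m2(); //straight Java buddy call -- bypasses the proxy, no advice applied
    }

    @Transactional //only honored when m2() is reached through the Spring proxy
    public void m2() { /* ... */ }
}

@Service
public class ClassC {
    @Transactional
    public void m2() { /* ... */ }
}

@Service
@RequiredArgsConstructor
public class ClassD {
    private final ClassC classC; //injected -- may be a JDK Dynamic or CGLIB proxy

    public void m1() {
        classC.m2(); //interpose possible: advice on ClassC.m2() is applied
    }
}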
253. Spring AOP
Spring Aspect Oriented Programming (AOP) provides a framework where we can define
cross-cutting behavior to injected @Components
using one or more of the available
proxy capabilities behind the scenes. Spring AOP Proxy uses JDK Dynamic Proxy to
proxy beans with interfaces and CGLIB to proxy bean classes lacking interfaces.
Spring AOP is a very capable but scaled back and simplified implementation of AspectJ. All the capabilities of AspectJ are allowed within Spring. However, the features integrated into Spring AOP itself are limited to method proxies formed at runtime. The compile-time byte manipulation offered by AspectJ is not part of Spring AOP.
253.1. AOP Definitions
The following represent some core definitions to AOP. Advice, AOP proxy, target object and (conceptually) the join point should look familiar to you. The biggest new concept here is the pointcut predicate that is used to locate the join point and how that is all modularized through a concept called aspect.
Figure 118. AOP Key Terms
Join Point is a point in the program (e.g., calling a method or throwing exception) in which we want to inject some code. For Spring AOP — this is always an event related to a method. AspectJ offers more types of join points.
Pointcut is a predicate rule that matches against a join point (i.e., a method begin, success, exception, or finally) and associates advice (i.e., more code) to execute at that point in the program. Spring uses the AspectJ pointcut language.
Advice is an action to be taken at a join point. This can be before, after (success, exception, or always), or around a method call. Advice chains are formed much the same as Filter chains of the web tier.
AOP proxy is an object created by the AOP framework to implement advice against join points that match the pointcut predicate rule.
Aspect is a modularization of a concern that cuts across multiple classes/methods (e.g., timing measurement, security auditing, transaction boundaries). An aspect is made up of one or more advice action(s) with an assigned pointcut predicate.
Target object is an object being advised by one or more aspects. Spring uses proxies to implement advised (target) objects.
Introduction is declaring additional methods or fields on behalf of a type for an advised object, allowing us to add an additional interface and implementation.
Weaving is the linking aspects to objects. Spring AOP does this at runtime. AspectJ offers compile-time capabilities.
253.2. Enabling Spring AOP
To use Spring AOP, we must first add a dependency on spring-boot-starter-aop
.
That adds a dependency on spring-aop
and aspectj-weaver
.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
We enable Spring AOP within our Spring Boot application by adding the @EnableAspectJAutoProxy annotation to a @Configuration class or to the @SpringBootApplication class.
...
import org.springframework.context.annotation.EnableAspectJAutoProxy;
...
@Configuration
@EnableAspectJAutoProxy
public class ...
253.3. Aspect Class
Starting at the top — we have the Aspect
class. This is a special @Component
that defines
the pointcut predicates to match and advice (before, after success, after throws, after finally,
and around) to execute for join points.
...
import org.aspectj.lang.annotation.Aspect;
@Component (1)
@Aspect (2)
public class ItemsAspect {
//pointcuts
//advice
}
1 | annotated @Component to be processed by the application context |
2 | annotated as @Aspect to have pointcuts and advice inspected |
253.4. Pointcut
In Spring AOP — a pointcut is a predicate rule that identifies the method join points to match against for Spring beans (only). To help reduce complexity of definition, when using annotations — pointcut predicate rules are expressed in two parts:
-
pointcut expression that determines exactly which method executions we are interested in
-
signature with name and parameters
The signature is a method that returns void. The method name and parameters will be usable in later advice declarations. Although the abstract example below does not show any parameters, they will become quite useful when we begin injecting typed parameters.
import org.aspectj.lang.annotation.Pointcut;
...
@Pointcut(/* pointcut expression*/) (1)
public void serviceMethod(/* pointcut parameters */) {} //pointcut signature (2)
1 | pointcut expression defines predicate matching rule(s) |
2 | pointcut signature defines a name and parameter types for the pointcut expression |
253.5. Pointcut Expression
The Spring AOP pointcut expressions use the AspectJ pointcut language, supporting the following designators:
execution | match method execution join points |
within | match methods below a package or type |
@within | match methods of a type that has been annotated with a given annotation |
this | match the proxy for a given type — useful when injecting typed advice arguments |
target | match the target for a given type — useful when injecting typed advice arguments |
@target | match methods of a type that has been annotated with a specific annotation |
@annotation | match methods that have been annotated with a given annotation |
args | match methods that accept arguments matching this criteria |
@args | match methods that accept arguments annotated with a given annotation |
bean | Spring AOP extension to match Spring bean(s) based on a name or wildcard name expression |
Don’t use pointcut contextual designators for matching
Spring AOP Documentation recommends we use within and/or execution as our first choice of performant predicate matching and add contextual designators (args, @annotation, this, target, etc.) when needed for additional work versus using contextual designators alone for matching.
253.6. Example Pointcut Definition
The following example will match against any method in the services package, taking any number of arguments and returning any return type.
//execution(<return type> <package>.<class>.<method>(params))
@Pointcut("execution(* info.ejava.examples.svc.aop.items.services.*.*(..))") //expression
public void serviceMethod() {} //signature
253.7. Combining Pointcut Expressions
We can combine pointcut definitions into compound definitions by referencing them
and joining with a boolean ("&&" or "||") expression. The example below adds
an additional condition to serviceMethod()
that restricts matches to methods
accepting a single parameter of type GrillDTO
.
@Pointcut("args(info.ejava.examples.svc.aop.items.dto.GrillDTO)") //expression
public void grillArgs() {} //signature
@Pointcut("serviceMethod() && grillArgs()") //expression
public void serviceMethodWithGrillArgs() {} //signature
253.8. Advice
The code that will act on the join point is specified in a method of the
@Aspect
class and annotated with one of the advice annotations. The following
is an example of advice that executes before a join point.
...
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
@Component
@Aspect
@Slf4j
public class ItemsAspect {
...
@Before("serviceMethodWithGrillArgs()")
public void beforeGrillServiceMethod() {
log.info("beforeGrillServiceMethod");
}
The following table contains a list of the available advice types:
@Before |
runs prior to calling join point |
@AfterReturning |
runs after successful return from join point |
@AfterThrowing |
runs after exception from join point |
@After |
runs after join point no matter — i.e., finally |
@Around |
runs around join point. Advice must call join point and return result. |
An example of each is towards the end of these lecture notes. For now, lets go into detail on some of the things we have covered.
254. Pointcut Expression Examples
Pointcut expressions can be very expressive and can take some time to fully understand. The following examples should provide a head start in understanding the purpose of each and how they can be used. Other examples are available in the Spring AOP page.
254.1. execution Pointcut Expression
The execution expression allows for the definition of several pattern elements that can identify the point of a method call. The full format is as follows. [52]
execution(modifiers-pattern? ret-type-pattern declaring-type-pattern?name-pattern(param-pattern) throws-pattern?)
However, only the return type, name, and parameter definitions are required.
execution(ret-type-pattern name-pattern(param-pattern))
The specific patterns include:
-
modifiers-pattern - OPTIONAL access definition (e.g., public, protected)
-
ret-type-pattern - MANDATORY type pattern for return type
Example Return Type Patterns
execution(info.ejava.examples.svc.aop.items.dto.GrillDTO *(..)) (1)
execution(*..GrillDTO *(..)) (2)
1 | matches methods that return an explicit type |
2 | matches methods that return GrillDTO type from any package |
-
declaring-type-pattern - OPTIONAL type pattern for package and class
Example Declaring Type (package and class) Patterns
execution(* info.ejava.examples.svc.aop.items.services.GrillsServiceImpl.*(..)) (1)
execution(* *..GrillsServiceImpl.*(..)) (2)
execution(* info.ejava.examples.svc..Grills*.*(..)) (3)
1 | matches methods within an explicit class |
2 | matches methods within a GrillsServiceImpl class from any package |
3 | matches methods from any class below …svc that starts with the letters Grills |
-
name-pattern - MANDATORY pattern for method name
Example Name (method) Patterns
execution(* createItem(..)) (1)
execution(* *..GrillsServiceImpl.createItem(..)) (2)
execution(* create*(..)) (3)
1 | matches any method called createItem of any class of any package |
2 | matches any method called createItem within class GrillsServiceImpl of any package |
3 | matches any method of any class of any package that starts with the letters create |
-
param-pattern - MANDATORY pattern to match method arguments. () will match a method with no arguments. (*) will match a method with a single parameter. (T,*) will match a method with two parameters with the first parameter of type T. (..) will match a method with 0 or more parameters.
Example noargs () Patterns
execution(void info.ejava.examples.svc.aop.items.services.GrillsServiceImpl.deleteItems()) (1)
execution(* *..GrillsServiceImpl.*()) (2)
execution(* *..GrillsServiceImpl.delete*()) (3)
1 | matches an explicit method that takes no arguments |
2 | matches any method within a GrillsServiceImpl class of any package that takes no arguments |
3 | matches any method from the GrillsServiceImpl class of any package, taking no arguments, where the method name starts with delete |
Example Single Argument Patterns
execution(* info.ejava.examples.svc.aop.items.services.GrillsServiceImpl.createItem(*)) (1)
execution(* createItem(info.ejava.examples.svc.aop.items.dto.GrillDTO)) (2)
execution(* *(*..GrillDTO)) (3)
1 | matches an explicit method that accepts any single argument |
2 | matches any method called createItem that accepts a single parameter of a specific type |
3 | matches any method that accepts a single parameter of GrillDTO from any package |
Example Multiple Argument Patterns
execution(* info.ejava.examples.svc.aop.items.services.GrillsServiceImpl.updateItem(*,*)) (1)
execution(* updateItem(int,*)) (2)
execution(* updateItem(int,*..GrillDTO)) (3)
1 | matches an explicit method that accepts two arguments of any type |
2 | matches any method called updateItem that accepts two arguments of type int and any second type |
3 | matches any method called updateItem that accepts two arguments of type int and GrillDTO from any package |
254.2. within Pointcut Expression
The within pointcut expression is similar to supplying an execution
expression
with just the declaring type pattern specified.
within(info.ejava.examples.svc.aop.items..*) (1)
within(*..ItemsService+) (2)
within(*..BedsServiceImpl) (3)
1 | match all methods in package info.ejava.examples.svc.aop.items and its subpackages |
2 | match all methods in classes that implement ItemsService interface |
3 | match all methods in BedsServiceImpl class |
254.3. target and this Pointcut Expressions
The target
and this
pointcut designators are very close in concept
to within
when used in the following way. The difference will show up
when we later use them to inject typed arguments into the advice.
These are considered "contextual" designators and are primarily placed in
the predicate to pull out members of the call for injection.
target(info.ejava.examples.svc.aop.items.services.BedsServiceImpl) (1)
this(info.ejava.examples.svc.aop.items.services.BedsServiceImpl) (2)
1 | matches methods where the target object — the object being proxied — is of type BedsServiceImpl |
2 | matches methods where the proxy object — the object implementing the proxy — is of type BedsServiceImpl |
@target(org.springframework.stereotype.Service) (1)
@annotation(org.springframework.core.annotation.Order) (2)
1 | matches all methods in class annotated with @Service |
2 | matches all methods having annotation @Order |
255. Advice Parameters
Our advice methods can accept two types of parameters:
-
typed using context designators
-
dynamic using
JoinPoint
Context designators like args
, @annotation
, target
, and this
allow us to assign a logical name to a specific part of a method call so
that can be injected into our advice method.
Dynamic injection involves a single JointPoint
object that can answer
the contextual details of the call.
Do not use context designators alone as predicates to locate join points
The Spring AOP documentation recommends using within and execution designators to identify a pointcut and contextual designators like args to bind aspects of the call to input parameters. That guidance is not fully followed in the following context examples. We easily could have made the non-contextual designators more explicit.
255.1. Typed Advice Parameters
We can use the args
expression in the pointcut to identify criteria for
parameters to the method and to specifically access one or more of them.
The left side of the following pointcut expression matches on all executions of
methods called createGrill()
taking any number of arguments. The right side of the
pointcut expression matches on methods with a single argument. When we match that
with the createGrill
signature — the single argument must be of the type GrillDTO
@Pointcut("execution(* createItem(..)) && args(grillDTO)") (1) (2)
public void createGrill(GrillDTO grillDTO) {} (3)
@Before("createGrill(grill)") (4)
public void beforeCreateGrillAdvice(GrillDTO grill) { (5)
log.info("beforeCreateGrillAdvice: {}", grill);
}
1 | left hand side of pointcut expression matches execution of createItem methods with any parameters |
2 | right hand side of pointcut expression matches methods with a single argument and maps that to name grillDTO |
3 | pointcut signature maps grillDTO to a Java type — the names within the pointcut must match |
4 | advice expression references createGrill pointcut and maps first parameter to name grill |
5 | advice method signature maps name grill to a Java type — the names within the advice must match
but do not need to match the names of the pointcut |
The following is logged before the createGrill method is called.
beforeCreateGrillAdvice: GrillDTO(super=ItemDTO(id=0, name=weber))
255.2. Multiple, Typed Advice Parameters
We can use the args
designator to specify multiple arguments as well. The right hand side
of the pointcut expression matches methods that accept two parameters. The pointcut method
signature maps these to parameters to Java types. The example advice references the pointcut
but happens to use different parameter names. The names used match the parameters used in the
advice method signature.
@Pointcut("execution(* updateItem(..)) && args(grillId, updatedGrill)")
public void updateGrill(int grillId, GrillDTO updatedGrill) {}
@Before("updateGrill(id, grill)")
public void beforeUpdateGrillAdvice(int id, GrillDTO grill) {
log.info("beforeUpdateGrillAdvice: {}, {}", id, grill);
}
The following is logged before the updateGrill method is called.
beforeUpdateGrillAdvice: 1, GrillDTO(super=ItemDTO(id=0, name=green egg))
255.3. Annotation Parameters
We can target annotated classes and methods and make the value of the annotation
available to the advice using the pointcut signature mapping. In the example
below, we want to match on all methods below the items
package that have
an @Order
annotation and pass that annotation as a parameter to the advice.
import org.springframework.core.annotation.Order;
...
@Pointcut("@annotation(order)") (1)
public void orderAnnotationValue(Order order) {} (2)
@Before("within(info.ejava.examples.svc.aop.items..*) && orderAnnotationValue(order)")
public void beforeOrderAnnotation(Order order) { (3)
log.info("before@OrderAnnotation: order={}", order.value()); (4)
}
1 | we are targeting methods with an annotation and mapping that to the name order |
2 | the name order is being mapped to the type org.springframework.core.annotation.Order |
3 | the @Order annotation instance is being passed into advice |
4 | the value for the @Order annotation can be accessed |
I have annotated one of the candidate methods with the @Order
annotation
and assigned a value of 100
.
import org.springframework.core.annotation.Order;
...
@Service
public class BedsServiceImpl extends ItemsServiceImpl<BedDTO> {
@Override
@Order(100)
public BedDTO createItem(BedDTO item) {
In the output below — we see that the annotation was passed into the
advice and provided with the value 100
.
before@OrderAnnotation: order=100
Annotations can pass contextual values to advice
Think how a feature like this — where an annotation on a method with attribute values — can be of use with security role annotations.
255.4. Target and Proxy Parameters
We can map the target and proxy references into the advice method using
the target()
and this()
designators. In the example below, the
target
name is mapped to the ItemsService<BedsDTO>
interface
and the proxy
name is mapped to a vanilla java.lang.Object
.
The target
type mapping constrains this to calls to the BedsServiceImpl
.
@Before("target(target) && this(proxy)")
public void beforeTarget(ItemsService<BedDTO> target, Object proxy) {
log.info("beforeTarget: target={}, proxy={}",target.getClass(),proxy.getClass());
}
The advice prints the name of each class. The output below shows that the target is of the target implementation type (i.e., no proxy layer) and the proxy is of a CGLIB proxy type (i.e., it is the proxy to the target).
beforeTarget:
target=class info.ejava.examples.svc.aop.items.services.BedsServiceImpl,
proxy=class info.ejava.examples.svc.aop.items.services.BedsServiceImpl$$EnhancerBySpringCGLIB$$a38982b5
255.5. Dynamic Parameters
If we have generic pointcuts and do not know ahead of time which parameters we will
get and in what order, we can inject a
JoinPoint
parameter as the first argument
to the advice.
This object has many methods that provide dynamic access
to the context of the method — including parameters.
The example below logs the classname, method, and array of parameters in the call.
@Before("execution(* *..Grills*.*(..))")
public void beforeGrillsMethodsUnknown(JoinPoint jp) {
log.info("beforeGrillsMethodsUnknown: {}.{}, {}",
jp.getTarget().getClass().getSimpleName(),
jp.getSignature().getName(),
jp.getArgs());
}
255.6. Dynamic Parameters Output
The following output shows two sets of calls: createItem
and updateItem
. Each
were intercepted at the controller and service level.
beforeGrillsMethodsUnknown: GrillsController.createItem,
[GrillDTO(super=ItemDTO(id=0, name=weber))]
beforeGrillsMethodsUnknown: GrillsServiceImpl.createItem,
[GrillDTO(super=ItemDTO(id=0, name=weber))]
beforeGrillsMethodsUnknown: GrillsController.updateItem,
[1, GrillDTO(super=ItemDTO(id=0, name=green egg))]
beforeGrillsMethodsUnknown: GrillsServiceImpl.updateItem,
[1, GrillDTO(super=ItemDTO(id=0, name=green egg))]
256. Advice Types
We have five advice types:
-
@Before
-
@AfterReturning
-
@AfterThrowing
-
@After
-
@Around
For the first four — using JoinPoint
is optional. The last type (@Around
)
is required to inject ProceedingJoinPoint
— a subclass of JoinPoint
— in order to
delegate to the target and handle the result. Lets take a look at each in order to
have a complete set of examples.
To demonstrate, I am going to define an advice of each type that will use the same pointcut below.
@Pointcut("execution(* *..MowersServiceImpl.updateItem(*,*)) && args(id,mowerUpdate)")(1)
public void mowerUpdate(int id, MowerDTO mowerUpdate) {} (2)
1 | matches all updateItem methods calls in the MowersServiceImpl class taking two arguments |
2 | arguments will be mapped to type int and MowerDTO |
There will be two matching calls:
-
the first will be successful
-
the second will throw a NotFound RuntimeException.
256.1. @Before
The Before advice will be called prior to invoking the join point method. It has access to the input parameters and can change the contents of them. This advice does not have access to the result.
@Before("mowerUpdate(id, mowerUpdate)")
public void beforeMowerUpdate(JoinPoint jp, int id, MowerDTO mowerUpdate) {
log.info("beforeMowerUpdate: {}, {}", id, mowerUpdate);
}
The before advice only has access to the input parameters prior to making the call. It can modify the parameters, but not swap them around. It has no insight into what the result will be.
beforeMowerUpdate: 1, MowerDTO(super=ItemDTO(id=0, name=bush hog))
Since the before advice is called prior to the join point, it is oblivious that this call ended in an exception.
beforeMowerUpdate: 2, MowerDTO(super=ItemDTO(id=0, name=john deer))
256.2. @AfterReturning
After returning advice will get called when a join point successfully returns without throwing an exception. We have access to the result through an annotation field and can map that to an input parameter.
@AfterReturning(value = "mowerUpdate(id, mowerUpdate)",
returning = "result")
public void afterReturningMowerUpdate(JoinPoint jp, int id, MowerDTO mowerUpdate, MowerDTO result) {
log.info("afterReturningMowerUpdate: {}, {} => {}", id, mowerUpdate, result);
}
The @AfterReturning advice is called only after the successful call and not in the exception case. We have access to the input parameters and the result. The state of the result can be modified before returning to the caller — although a completely different result object cannot be substituted. The input parameters have already been processed by this point.
afterReturningMowerUpdate: 1, MowerDTO(super=ItemDTO(id=1, name=bush hog))
=> MowerDTO(super=ItemDTO(id=1, name=bush hog))
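The following minimal sketch (again assuming MowerDTO name accessors; not part of the lecture example) demonstrates that constraint. Mutating the returned instance is visible to the caller, but a different object cannot be substituted from this advice type.
@AfterReturning(value = "mowerUpdate(id, mowerUpdate)", returning = "result")
public void auditMowerUpdate(int id, MowerDTO mowerUpdate, MowerDTO result) {
    //state changes to the returned instance are visible to the caller
    result.setName(result.getName() + " (audited)");
}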
256.3. @AfterThrowing
The @AfterThrowing advice is called only when an exception is thrown. Like its successful sibling, we can map the resulting exception to an input variable to make it accessible to the advice.
@AfterThrowing(value = "mowerUpdate(id, mowerUpdate)", throwing = "ex")
public void afterThrowingMowerUpdate(JoinPoint jp, int id, MowerDTO mowerUpdate, ClientErrorException.NotFoundException ex) {
log.info("afterThrowingMowerUpdate: {}, {} => {}", id,mowerUpdate,ex.toString());
}
The @AfterThrowing
advice has access to the input parameters and the exception.
The exception will still be thrown after the advice is complete.
I am not aware of any ability to squelch the exception and return a normal result here. Look to @Around to give you that capability at a minimum; a hypothetical sketch follows the output below.
afterThrowingMowerUpdate: 2, MowerDTO(super=ItemDTO(id=0, name=john deer))
=> info.ejava.examples.common.exceptions.ClientErrorException$NotFoundException: item[2] not found
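For completeness, the following is a hypothetical sketch (not part of the lecture example) of how @Around could squelch the exception and return a fallback result to the caller instead.
@Around("mowerUpdate(id, mowerUpdate)")
public Object fallbackMowerUpdate(ProceedingJoinPoint pjp, int id, MowerDTO mowerUpdate) throws Throwable {
    try {
        return pjp.proceed(pjp.getArgs());
    } catch (ClientErrorException.NotFoundException ex) {
        log.warn("squelching {}, returning original update as fallback", ex.toString());
        return mowerUpdate; //caller receives a normal result instead of the exception
    }
}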
256.4. @After
@After
is called after a successful return or exception thrown. It represents logic
that would commonly appear in a finally
block to close out resources.
@After("mowerUpdate(id, mowerUpdate)")
public void afterMowerUpdate(JoinPoint jp, int id, MowerDTO mowerUpdate) {
log.info("afterReturningMowerUpdate: {}, {}", id, mowerUpdate);
}
The @After advice is always called once the join point finishes executing.
afterMowerUpdate: 1, MowerDTO(super=ItemDTO(id=1, name=bush hog))
afterMowerUpdate: 2, MowerDTO(super=ItemDTO(id=0, name=john deer))
256.5. @Around
@Around is the most capable advice but possibly the most expensive one to execute. It has full control over the input and return values and whether the call is made at all.
The example below logs the various paths through the advice.
@Around("mowerUpdate(id, mowerUpdate)")
public Object aroundMowerUpdate(ProceedingJoinPoint pjp, int id, MowerDTO mowerUpdate) throws Throwable {
Object result = null;
try {
log.info("entering aroundMowerUpdate: {}, {}", id, mowerUpdate);
result = pjp.proceed(pjp.getArgs());
log.info("returning after successful aroundMowerUpdate: {}, {} => {}", id, mowerUpdate, result);
return result;
} catch (Throwable ex) {
log.info("returning after aroundMowerUpdate excdeption: {}, {} => {}", id, mowerUpdate, ex.toString());
result = ex;
throw ex;
} finally {
log.info("returning after aroundMowerUpdate: {}, {} => {}",
id, mowerUpdate, (result==null ? null :result.toString()));
}
}
The @Around
advice example will log activity prior to calling the join point, after
successful return from join point, and finally after all advice complete.
entering aroundMowerUpdate: 1, MowerDTO(super=ItemDTO(id=0, name=bush hog))
returning after successful aroundMowerUpdate: 1, MowerDTO(super=ItemDTO(id=1, name=bush hog))
=> MowerDTO(super=ItemDTO(id=1, name=bush hog))
returning after aroundMowerUpdate: 1, MowerDTO(super=ItemDTO(id=1, name=bush hog))
=> MowerDTO(super=ItemDTO(id=1, name=bush hog))
The @Around
advice example will log activity prior to calling the join point, after
an exception from the join point, and finally after all advice complete.
entering aroundMowerUpdate: 2, MowerDTO(super=ItemDTO(id=0, name=john deer))
returning after aroundMowerUpdate exception: 2, MowerDTO(super=ItemDTO(id=0, name=john deer))
=> info.ejava.examples.common.exceptions.ClientErrorException$NotFoundException: item[2] not found
returning after aroundMowerUpdate: 2, MowerDTO(super=ItemDTO(id=0, name=john deer))
=> info.ejava.examples.common.exceptions.ClientErrorException$NotFoundException: item[2] not found
257. Other Features
We have covered a lot of capability in this chapter and likely all you will need. However, there are a few other topics left unaddressed that might be of interest in certain circumstances.
-
Ordering - useful when we declare multiple advice for the same join point and need one to run before the other (see the sketch following this list)
-
Introductions - a way to add additional state and behavior to a join point/target instance
-
Programmatic Spring AOP proxy creation - a way to create Spring AOP proxies on the fly versus relying on injection. This is useful for data value objects that are typically manually created to represent a certain piece of information.
-
Schema Based AOP Definitions - Spring also offers a means to express AOP behavior using XML. The two approaches are very close in capability — so if you need the ability to flexibly edit aspects in production without changing the Java code — this is an attractive option.
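As a brief sketch of the first item (ordering), Spring honors the @Order annotation (or the Ordered interface) on the @Aspect class; the lower the value, the earlier its before advice runs for a shared join point. The aspect names below are hypothetical.
@Aspect
@Order(1) //lower value => this before advice runs first
@Component
public class AuditAspect {
    @Before("execution(* *..MowersServiceImpl.*(..))")
    public void audit(JoinPoint jp) { /* runs before TimingAspect.time */ }
}

@Aspect
@Order(2)
@Component
public class TimingAspect {
    @Before("execution(* *..MowersServiceImpl.*(..))")
    public void time(JoinPoint jp) { /* runs second */ }
}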
258. Summary
In this module we learned:
-
how we can decouple potentially cross-cutting logic from business code using different levels of dynamic invocation technology
-
to obtain and invoke a method reference using Java Reflection
-
to encapsulate advice within proxy classes using interfaces and JDK Dynamic Proxies
-
to encapsulate advice within proxy classes using classes and CGLIB dynamically written sub-classes
-
to integrate Spring AOP into our project
-
to identify method join points using AspectJ language
-
to implement different types of advice (before, after (completion, exception, finally), and around)
-
to inject contextual objects as parameters to advice
After learning this material, you should be able to envision the implementation techniques used by Spring to add framework capabilities to our custom business objects. Those interfaces we implement and annotations we assign are likely the target of many Spring AOP aspects, adding advice in a configurable way.
Heroku Deployments
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
259. Introduction
To date we have been worrying about the internals of our applications, how to configure them, test them, interface with them, and how to secure them. We need others to begin seeing our progress as we continue to fill in the details to make our applications useful.
In this lecture — we will address deployment to a cloud provider. We will take a hands-on look at deploying to Heroku — a cloud platform provider that makes deploying Spring Boot and Docker-based applications part of their key business model without getting into more complex hosting frameworks.
After over 10 years of availability, Heroku has announced that their free deployments will terminate Nov 28, 2022. Obviously, this impacts the specific deployment aspects provided in this lecture. However, it does not impact the notion of what is deployable to alternate platforms when identified. |
259.1. Goals
You will learn:
-
to deploy an application under development to a cloud provider to make it accessible to Internet users
-
to deploy incremental and iterative application changes
259.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
create a new Heroku application with a choice of names
-
deploy a Spring Boot application to Heroku using the Heroku Maven Plugin
-
interact with your developed application on the Internet
-
make incremental and iterative changes
260. Heroku Background
According to their website, Heroku is a cloud provider that provides the ability to "build, deliver, monitor, and scale apps". They provide a fast way to go from "idea to URL" by bypassing company managed infrastructure. [53]
There are many cloud providers but not many are in our sweet spot of offering a platform for Spring Boot and Docker applications without the complexity of bare metal OS or a Kubernetes cluster. They also offer these basic deployments for no initial cost for non-commercial applications — such as proof of concepts and personal projects that stay within a 512MB memory limit.
There is a lot to Heroku that will not be covered here. However, this lecture will provide a good overview of how to achieve a successful deployment of a Spring Boot application. In a follow-on lecture we will come back to Heroku to deploy Docker images and see the advantages of doing so. The Heroku web site provides a number of additional resources.
261. Setup Heroku
You will need to set up an account with Heroku in order to use their cloud deployment environment. This is a free account and stays free until we purposely choose otherwise. If we exceed free constraints — our deployment simply will not run. There will be no surprise bill.
-
visit the Heroku Web Site
-
select [Sign Up For Free]
-
create a free account and complete the activation
-
I would suggest skipping 2-factor authentication for simplicity for class use. You can always activate it later.
-
Salesforce bought Heroku and now has some extra terms to agree to
-
install the command line interface (CLI) for your platform. It will be necessary to work at the shell level quite a bit
-
refer to the Heroku CLI reference as necessary
262. Heroku Login
Once we have an account and the CLI installed — we need to login using the CLI. This will redirect us to the browser where we can complete the login.
$ heroku login
heroku: Press any key to open up the browser to login or q to exit:
Opening browser to https://cli-auth.heroku.com/auth/cli/browser/f944d777-93c7-40af-b772-0a1c5629c609
Logging in... done
Logged in as ...
263. Create Heroku App
At this point you are ready to perform a one-time (per deployment app) process that will reserve an app-name for you on herokuapp.com.
When working with Heroku — think of app-name as a deployment target with an Internet-accessible URL that shares the same name.
For example, my app-name of ejava-boot
is accessible using https://ejava-boot.herokuapp.com
.
I can deploy one of many Spring Boot applications to that app-name (one at a time).
I can also deploy the same Spring Boot application to multiple Heroku app-names (e.g., integration and production)
Let jim have ejava : )
Please use your own naming constructs.
I am kind of fond of the ejava- naming prefix.
|
$ heroku create [app-name] (1)
Creating β¬’ [app-name]... done
https://app-name.herokuapp.com/ | https://git.heroku.com/app-name.git
1 | if app-name not supplied, a random app-name will be generated |
Heroku also uses Git repositories for deployment
Heroku creates a Git repository for the app-name that can
also be leveraged as a deployment interface. I will not be covering
that option.
|
You can create more than one heroku app and the app can be renamed with the following apps:rename
command.
$ heroku apps:rename --app oldname newname
Visit the Heroku apps page to locate technical details related to your apps.
Heroku will try to determine the resources required for the application when it is deployed the first time. Sometimes we have to give it details (e.g., provision DB)
264. Create Spring Boot Application
For this demonstration, I have created a simple Spring Boot web application (docker-hello-example) that will be part of a series of lectures in this topic area. Don’t worry about the "Docker" naming for now. We will limit the discussion of this application to only the Spring Boot portions during this lecture.
264.1. Example Source Tree
The following structure shows the simplicity of the web application.
docker-hello-example/
|-- pom.xml
`-- src/main/java/info.ejava.examples.svc.docker
| `-- hello
| |-- DockerHelloExampleApp.java
| `-- controllers
| |-- ExceptionAdvice.java
| `-- HelloController.java
`-- resources
`-- application.properties
264.1.1. HelloController
The supplied controller is a familiar "hello" example, with optional authentication.
The GET method will return a hello to the name supplied in the name
query parameter.
If authenticated, the controller will also issue the caller’s associated username.
@RestController
public class HelloController {
@GetMapping(path="/api/hello",
produces = {MediaType.TEXT_PLAIN_VALUE})
public String hello(
@RequestParam("name")String name,
@AuthenticationPrincipal UserDetails user) {
String username = user==null ? null : user.getUsername();
String greeting = "hello, " + name;
return username==null ? greeting : greeting + " (from " + username + ")";
}
}
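The source tree also lists an ExceptionAdvice class that is not shown in this lecture. The following is a minimal sketch of what such a controller advice commonly looks like; the specific exception and message handling here are assumptions and not the actual class contents.
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.MissingServletRequestParameterException;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class ExceptionAdvice {
    //maps a missing ?name= query parameter to a client-friendly 400/text response
    @ExceptionHandler(MissingServletRequestParameterException.class)
    public ResponseEntity<String> handle(MissingServletRequestParameterException ex) {
        return ResponseEntity.badRequest()
                .contentType(MediaType.TEXT_PLAIN)
                .body("required parameter missing: " + ex.getParameterName());
    }
}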
264.2. Starting Example
We can start the web application using the Spring Boot plugin run
goal.
$ mvn spring-boot:run
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (2.7.0)
...
Tomcat started on port(s): 8080 (http) with context path ''
Started DockerHelloExampleApp in 1.972 seconds (JVM running for 2.401)
264.3. Client Access
Once started, we can access the HelloController
running on localhost and the assigned 8080
port.
$ curl http://localhost:8080/api/hello?name=jim
hello, jim
Security is enabled, so we can also access the same endpoint with credentials and get authentication feedback.
$ curl http://localhost:8080/api/hello?name=jim -u "user:password"
hello, jim (from user)
264.4. Local Unit Integration Test
The example also includes a set of unit integration tests that perform the same sort of functionality that we demonstrated with curl a moment ago.
docker-hello-example/
`-- src/test/java/info/ejava/examples/svc
| `-- docker
| `-- hello
| |-- ClientTestConfiguration.java
| `-- HelloLocalNTest.java
`-- resources
`-- application-test.properties
$ mvn clean test
10:12:54.692 main INFO i.e.e.svc.docker.hello.HelloLocalNTest#init:38 baseUrl=http://localhost:51319
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.079 s - in info.ejava.examples.svc.docker.hello.HelloLocalNTest
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
265. Maven Heroku Deployment
When ready for application deployment, Heroku provides two primary styles of deployment for a normal Maven application:
-
git repository
-
Maven plugin
The Git repository approach requires that your deployment follow a pre-defined structure from the root — which is not flexible enough for a class demonstration tree with nested application modules. If you go that route, it may also require a separate Procfile to address startup.
The Heroku Maven plugin encapsulates everything we need to define our application startup and has no restriction on root repository structure.
265.1. Spring Boot Maven Plugin
The heroku-maven-plugin
will deploy our Spring Boot executable JAR.
We, of course, need to make sure our heroku-maven-plugin
and spring-boot-maven-plugin
configurations are consistent.
The ejava-build-parent
defines a classifier value, which gets used to separate the Spring Boot executable JAR from the standard Java library JAR.
<properties>
<spring-boot.classifier>bootexec</spring-boot.classifier>
</properties>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<version>${springboot.version}</version>
<configuration>
<classifier>${spring-boot.classifier}</classifier> (1)
</configuration>
<executions>
<execution>
<id>package</id>
<phase>package</phase>
<goals>
<goal>repackage</goal>
</goals>
</execution>
</executions>
</plugin>
1 | used in naming the built Spring Boot executable JAR |
265.1.1. Child Project Spring Boot Maven Plugin Declaration
The child module declares the spring-boot-maven-plugin
, picking up the pre-configured repackage
goal.
<plugin> <!-- builds a Spring Boot Executable JAR -->
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
The Spring Boot executable JAR has the bootexec
classifier name appended to the version.
$ mvn clean package
...
target/
|...
|-- [ 31M] docker-hello-example-6.0.1-SNAPSHOT-bootexec.jar (2)
|-- [9.7K] docker-hello-example-6.0.1-SNAPSHOT.jar (1)
1 | standard Java library JAR |
2 | Spring Boot Executable JAR to be deployed to Heroku |
265.2. Heroku Maven Plugin
The following snippets show the Heroku Maven Plugin configuration used in this example.
Documentation details are available on GitHub.
It has been parameterized to be able to work with most applications and is defined in the pluginDependencies section of the ejava-build-parent
parent pom.xml.
<properties>
<java.source.version>17</java.source.version>
<java.target.version>17</java.target.version>
<heroku-maven-plugin.version>3.0.4</heroku-maven-plugin.version>
</properties>
<plugin>
<groupId>com.heroku.sdk</groupId>
<artifactId>heroku-maven-plugin</artifactId>
<version>${heroku-maven-plugin.version}</version>
<configuration>
<jdkVersion>${java.target.version}</jdkVersion>
<includeTarget>false</includeTarget> (1)
<includes> (2)
<include>target/${project.build.finalName}-${spring-boot.classifier}.jar</include>
</includes>
<processTypes> (3)
<web>java $JAVA_OPTS -jar target/${project.build.finalName}-${spring-boot.classifier}.jar --server.port=$PORT $JAR_OPTS
</web>
</processTypes>
</configuration>
</plugin>
1 | don’t deploy entire contents of target directory |
2 | identify specific artifacts to deploy; Spring Boot executable JAR — accounting for classifier |
3 | takes on the role of a Procfile by supplying the launch command |
You will see mention of the $PORT parameter in the Heroku Procfile documentation.
This is a value we need to set our server port to when deployed.
We can easily do that with the --server.port
property.
$JAR_OPTS
is an example of being able to define other properties to be expanded — even though we don’t have a reason at this time.
Any variables in the command line can be supplied/overridden with the configVars element.
For example, we could use that property to set the Spring profile(s).
<configVars>
<JAR_OPTS>--spring.profiles.active=authorities,authorization</JAR_OPTS>
</configVars>
265.2.1. Child Project Heroku Maven Plugin Declaration
The child module declares the heroku-maven-plugin
, picking up the pre-configured plugin.
<plugin>
<groupId>com.heroku.sdk</groupId>
<artifactId>heroku-maven-plugin</artifactId>
</plugin>
265.3. Deployment appName
The deployment will require an app-name. Heroku recommends creating a profile for each of the deployment environments (e.g., development, integration, and production) and supplying the appName in those profiles. However, I am showing just a single deployment — so I set the appName separately through a property in my settings.xml.
<properties>
<heroku.appName>ejava-boot</heroku.appName> (1)
</properties>
1 | the Heroku Maven Plugin can have its appName set using a Maven property or element. |
265.4. Example settings.xml Profile
The following shows an example of setting our heroku.appName
Maven property using $HOME/.m2/settings.xml
.
The upper profiles
portion is used to define the profile.
The lower activeProfiles
portion is used to statically declare the profile to always be active.
<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
<profiles>
<profile> (1)
<id>ejava</id>
<properties>
<heroku.appName>ejava-boot</heroku.appName>
</properties>
</profile>
</profiles>
<activeProfiles> (2)
<activeProfile>ejava</activeProfile>
</activeProfiles>
</settings>
1 | defines a group of Maven properties to be activated with a Maven profile |
2 | profiles can be statically defined to be activated |
The alternative to activeProfiles
is to use either an activation in the pom.xml or on the command line.
$ mvn (command) -Pejava
265.5. Using Profiles
If we went the profile route, it could look something like the following with dev
being unique per developer and stage
having a more consistent name across the team.
<profiles>
<profile>
<id>dev</id>
<properties> (1)
<heroku.appName>${my.dev.name}</heroku.appName>
</properties>
</profile>
<profile>
<id>stage</id>
<properties> (2)
<heroku.appName>our-stage-name</heroku.appName>
</properties>
</profile>
</profiles>
1 | variable expansion based on individual settings.xml values when -Pdev profile set |
2 | well-known-name for staging environment when -Pstage profile set |
265.6. Maven Heroku Deploy Goal
The following shows the example output for the heroku:deploy
Maven goal.
$ mvn heroku:deploy
...
[INFO] --- heroku-maven-plugin:3.0.3:deploy (default-cli) @ docker-hello-example ---
[INFO] -----> Packaging application...
[INFO] - including: target/docker-hello-example-6.0.1-SNAPSHOT-bootexec.jar
[INFO] - including: pom.xml
[INFO] -----> Creating build...
[INFO] - file: /var/folders/zm/cskr47zn0yjd0zwkn870y5sc0000gn/T/heroku-deploy10792228069435401014source-blob.tgz
[INFO] - size: 22MB
[INFO] -----> Uploading build...
[INFO] - success
[INFO] -----> Deploying...
[INFO] remote:
[INFO] remote: -----> heroku-maven-plugin app detected
[INFO] remote: -----> Installing JDK 11... done
[INFO] remote: -----> Discovering process types
[INFO] remote: Procfile declares types -> web
[INFO] remote:
[INFO] remote: -----> Compressing...
[INFO] remote: Done: 81.6M
[INFO] remote: -----> Launching...
[INFO] remote: Released v3
[INFO] remote: https://ejava-boot.herokuapp.com/ deployed to Heroku
[INFO] remote:
[INFO] -----> Done
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 35.516 s
265.7. Tail Logs
We can gain some insight into the application health by tailing the logs.
$ heroku logs --app ejava-boot --tail
Starting process with command `--server.port\=\$\{PORT:-8080\}`
...
Tomcat started on port(s): 54644 (http) with context path ''
Started DockerHelloExampleApp in 9.194 seconds (JVM running for 9.964)
265.8. Access Site
We can access the deployed application at this point using HTTPS.
$ curl -v https://ejava-boot.herokuapp.com/api/hello?name=jim
hello, jim
Notice that we deployed an HTTP application and must access the site using HTTPS. Heroku is providing the TLS termination without any additional work on our part.
* Server certificate:
*  subject: CN=*.herokuapp.com
*  start date: Jun  1 00:00:00 2021 GMT
*  expire date: Jun 30 23:59:59 2022 GMT
*  subjectAltName: host "ejava-boot.herokuapp.com" matched cert's "*.herokuapp.com"
*  issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
*  SSL certificate verify ok.
265.9. Access Via Swagger
We can also access the site via Swagger with a minor amount of configuration.
To configure Swagger to ignore the injected @AuthenticationPrincipal
parameter — we need to annotate it as hidden, using a Swagger annotation.
import io.swagger.v3.oas.annotations.Parameter;
...
public String hello(
@RequestParam("name")String name,
@Parameter(hidden = true) //for swagger
@AuthenticationPrincipal UserDetails user) {
266. Remote IT Test
We have seen many times that there are different levels of testing that include:
-
unit tests (with Mocks)
-
unit integration tests (horizontal and vertical; with Spring context)
-
integration tests (heavyweight process; failsafe)
No one type of test is going to be the best in all cases. In this particular case we are going to assume that all necessary unit (core functionality) and unit integration (Spring context integration) tests have been completed and we want to evaluate our application in an environment that resembles production deployment.
266.1. JUnit IT Test Case
To demonstrate the remote test, I have created a single HelloHerokuIT JUnit test and customized the @Configuration so that it can express remote server aspects.
266.1.1. Test Case Definition
The failsafe integration test case looks like most unit integration test cases by naming @Configuration
class(es), active profiles, and following a file naming convention (IT
).
The @Configuration
is used to define beans for the IT test to act as a client of the remote server.
The heroku profile contains properties defining the identity of the remote server.
@SpringBootTest(classes=ClientTestConfiguration.class, (1)
webEnvironment = SpringBootTest.WebEnvironment.NONE) (2)
@ActiveProfiles({"test","heroku"}) (3)
public class HelloHerokuIT { (4)
1 | @Configuration defines beans for client testing |
2 | no server active within JUnit IT test JVM |
3 | activate property-specific profiles |
4 | failsafe test case class name ends with IT |
266.1.2. Injected Components and Setup
This specific test injects 2 users (anonymous and authenticated), the username of the authenticated user, and the baseUrl of the remote application. The baseUrl is used to define a template for the specific call being executed.
@Autowired
private RestTemplate anonymousUser;
@Autowired
private RestTemplate authnUser;
@Autowired
private String authnUsername;
private UriComponentsBuilder helloUrl;
@BeforeEach
void init(@Autowired URI baseUrl) {
log.info("baseUrl={}", baseUrl);
helloUrl = UriComponentsBuilder.fromUri(baseUrl).path("api/hello")
.queryParam("name","{name}"); (1)
}
1 | helloUrl is a baseUrl + /api/hello?name={name} template |
266.2. IT Properties
The integration test case pulls production properties from src/main
and test properties from src/test
.
266.2.1. Application Properties
The application is using a single user with its username and password statically defined in application.properties
.
These values will be necessary to authenticate with the server — even when remote.
spring.security.user.name=user
spring.security.user.password=password
Do Not Store Credentials in JAR
Do not store credentials in a resource file within the application.
Resource files are generally checked into CM repositories and part of JARs published to artifact repositories.
A resource file is used here to simplify the class example.
A realistic solution would point the application at a protected directory or source of properties at runtime.
|
266.2.2. IT Test Properties
The application-heroku.properties
file contains 3 non-default properties for the ServerConfig
.
scheme
is hardcoded to https
, but the host
and port
are defined with ${placeholder}
variables that will be filled in with Maven properties using the maven-resources-plugin
.
-
We do this for
host
, so that theheroku.appName
can be pulled from environment-specific properties
We do this for
port
, to be certain thatserver.http.port
is set within thepom.xml
because theejava-build-parent
configures failsafe to pass the value of that property asit.server.port
.
it.server.scheme=https (1)
it.server.host=${heroku.appName}.herokuapp.com (2)
it.server.port=${server.http.port} (3) (4)
1 | using HTTPS protocol |
2 | Maven resources plugin configured to filter value during compile |
3 | Maven filtered version of property used directly within IDE |
4 | runtime failsafe configuration will provide value override |
266.2.3. Maven Property Filtering
Maven copies resource files from the source tree to the target tree by default using the maven-resources-plugin
.
This plugin supports file filtering when copying files from the src/main
and src/test
areas.
This is so common, that the definition can be expressed outside the boundaries of the plugin.
The snippet below shows the setup of filtering a single file from src/test/resources
and uses elements testResource/testResources
.
Filtering a file from src/main
(not used here) would use elements resources/resource
.
The filtering is set up in two related definitions: what we are filtering (filtering=true) and everything else (filtering=false). If we accidentally leave out the filtering=false definition, then only the filtered files will get copied. We could have simply filtered everything, but that can accidentally destroy binary files (like images and truststores) if they happen to be placed in that path. It is safer to be explicit about what must be filtered.
<build> (1)
<testResources> <!-- used to replace ${variables} in property files -->
<testResource> (2)
<directory>src/test/resources</directory>
<includes> <!-- replace ${heroku.appName} -->
<include>application-heroku.properties</include>
</includes>
<filtering>true</filtering>
</testResource>
<testResource> (3)
<directory>src/test/resources</directory>
<excludes>
<exclude>application-heroku.properties</exclude>
</excludes>
<filtering>false</filtering>
</testResource>
</testResources>
....
1 | the maven-resources-plugin is configured here to filter a specific file in src/test |
2 | application-heroku.properties will be filtered when copied |
3 | all other files will be copied but not filtered |
Maven Resource Filtering can Harm Some Files
Maven resource filtering can damage binary files and naively constructed property files (that are meant to be evaluated at runtime versus build time).
It is safer to enumerate what needs to be filtered than to blindly filter all resources.
|
266.2.4. Property Value Sources
The source for the Maven properties can come from many places.
The example sets a default within the pom.xml.
We expect the heroku.appName
to be environment-specific, so if you deploy the example using Maven — you will need to add -Dheroku.appName=your-app-name
to the command line or through your local settings.xml
file.
<properties> (1)
<heroku.appName>ejava-boot</heroku.appName>
<server.http.port>443</server.http.port>
</properties>
1 | default values - can be overridden by command and settings.xml values |
266.2.5. Maven Process Resource Phases
The following snippet shows the two resource phases being executed.
Our testResources
are copied and filtered in the second phase.
$ mvn clean process-test-resources
[INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ docker-hello-example ---
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ docker-hello-example ---
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ docker-hello-example ---
[INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ docker-hello-example ---
$ cat target/test-classes/application-heroku.properties
it.server.scheme=https
it.server.host=ejava-boot.herokuapp.com
it.server.port=443
The following snippet shows the results of the property filtering using a custom value for heroku.appName
$ mvn clean process-test-resources -Dheroku.appName=other-name (1)
$ cat target/test-classes/application-heroku.properties
it.server.scheme=https
it.server.host=other-name.herokuapp.com (2)
it.server.port=443
1 | custom Maven property supplied on command-line |
2 | supplied value expanded during resource filtering |
266.3. Configuration
The @Configuration
class sets up 2 RestTemplate @Bean factories: anonymousUser and authnUser.
Everything else is mostly there to support the setup of the HTTPS connection.
This same @Configuration
is used for both the unit and failsafe integration tests.
The ServerConfig
is injected during the failsafe IT test (using application-heroku.properties
) and instantiated locally during the unit integration test (using @LocalPort
and default values).
@Configuration(proxyBeanMethods = false)
@EnableConfigurationProperties //used to set it.server properties
@EnableAutoConfiguration
public class ClientTestConfiguration {
266.3.1. Authentication
Credentials (from application.properties
) are injected into the @Configuration
class using @Value
injection.
The username for the credentials is made available as a @Bean
to evaluate test results.
@Value("${spring.security.user.name}") (1)
private String username;
@Value("${spring.security.user.password}")
private String password;
@Bean
public String authnUsername() { return username; } (2)
1 | default values coming from application.properties |
2 | username exposed only to support evaluating authentication results |
266.3.2. Server Configuration (Client Properties)
The remote server configuration is derived from properties available at runtime and scoped under the "it.server" prefix.
The definitions within the ServerConfig
instance can be used to form the baseUrl for the remote server.
@Bean
@ConfigurationProperties(prefix = "it.server")
public ServerConfig itServerConfig() {
return new ServerConfig();
}
//use for IT tests
@Bean (1)
public URI baseUrl(ServerConfig serverConfig) {
URI baseUrl = serverConfig.build().getBaseUrl();
return baseUrl;
}
1 | baseUrl resolves to https://ejava-boot.herokuapp.com:443 |
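For orientation, the following is a rough sketch of what a ServerConfig-style properties holder could look like. The actual class ships with the course's shared ejava utility modules; the fields and defaulting logic shown here are inferred from how the class is used in this lecture.
import java.net.URI;

public class ServerConfig {
    private String scheme = "http"; //bound from it.server.scheme
    private String host = "localhost"; //bound from it.server.host
    private int port; //bound from it.server.port
    private char[] trustStorePassword;
    //setters required for @ConfigurationProperties binding omitted for brevity

    public boolean isHttps() { return "https".equalsIgnoreCase(scheme); }
    public char[] getTrustStorePassword() { return trustStorePassword; }
    public ServerConfig build() {
        if (port == 0) { port = isHttps() ? 443 : 8080; } //apply port defaults
        return this;
    }
    public URI getBaseUrl() {
        return URI.create(String.format("%s://%s:%d", scheme, host, port));
    }
}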
266.3.3. anonymousUser
An injectable RestTemplate is exposed with no credentials as "anonymousUser".
As with most of our tests, the BufferingClientHttpRequestFactory
has been added to support multiple reads required by the RestTemplateLoggingFilter
(which provides debug logging).
The ClientHttpRequestFactory
was made injectable to support HTTP/HTTPS connections.
@Bean
public RestTemplate anonymousUser(RestTemplateBuilder builder,
ClientHttpRequestFactory requestFactory) { (1)
return builder.requestFactory(
//used to read the streams twice (3)
()->new BufferingClientHttpRequestFactory(requestFactory))
.interceptors(new RestTemplateLoggingFilter()) (2)
.build();
}
1 | requestFactory will determine whether HTTP or HTTPS connection created |
2 | RestTemplateLoggingFilter provides HTTP debug statements |
3 | BufferingClientHttpRequestFactory caches responses, allowing it to be read multiple times |
266.3.4. authnUser
An injectable RestTemplate is exposed with valid credentials as "authnUser".
This is identical to anonymousUser
except credentials are provided through a BasicAuthenticationInterceptor
.
@Bean
public RestTemplate authnUser(RestTemplateBuilder builder,
ClientHttpRequestFactory requestFactory) {
return builder.requestFactory(
//used to read the streams twice
()->new BufferingClientHttpRequestFactory(requestFactory))
.interceptors(
new BasicAuthenticationInterceptor(username, password), (1)
new RestTemplateLoggingFilter())
.build();
}
1 | valid credentials added |
266.3.5. ClientHttpRequestFactory
The builder requires a requestFactory
and we have already shown that it will be wrapped in a BufferingClientHttpRequestFactory
to support debug logging.
However, the core communication is implemented by the org.apache.http.client.HttpClient class.
import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import javax.net.ssl.SSLContext;
...
@Bean
public ClientHttpRequestFactory httpsRequestFactory(
ServerConfig serverConfig, (1)
SSLContext sslContext) { (2)
HttpClient httpsClient = HttpClientBuilder.create()
.setSSLContext(serverConfig.isHttps() ? sslContext : null)
.build();
return new HttpComponentsClientHttpRequestFactory(httpsClient);
}
1 | ServerConfig provided to determine whether HTTP or HTTPS required |
2 | SSLContext provided for when HTTPS is required |
266.3.6. SSL Context
The SSLContext is provided by the org.apache.http.ssl.SSLContextBuilder
class.
In this particular instance, we expect the deployment environment to use commercial, trusted certs.
This will eliminate the need to load a custom truststore.
import org.apache.http.ssl.SSLContextBuilder;
import javax.net.ssl.SSLContext;
...
@Bean
public SSLContext sslContext(ServerConfig serverConfig) {
try {
URL trustStoreUrl = null;
//using trusted certs, no need for customized truststore
//...
SSLContextBuilder builder = SSLContextBuilder.create()
.setProtocol("TLSv1.2");
if (trustStoreUrl!=null) {
builder.loadTrustMaterial(trustStoreUrl, serverConfig.getTrustStorePassword());
}
return builder.build();
} catch (Exception ex) {
throw new IllegalStateException("unable to establish SSL context", ex);
}
}
266.4. JUnit IT Test
The following shows two sanity tests for our deployed application.
They both use a base URL of https://ejava-boot.herokuapp.com/api/hello?name={name}
and supply the request-specific name
property through the UriComponentsBuilder.build(args)
method.
266.5. Simple Communications Test
When successful, the simple communications test will return a 200/OK with the text "hello, jim".
@Test
void can_contact_server() {
//given
String name="jim";
URI url = helloUrl.build(name);
RequestEntity<Void> request = RequestEntity.get(url).build();
//when
ResponseEntity<String> response = anonymousUser.exchange(request, String.class);
//then
then(response.getStatusCode()).isEqualTo(HttpStatus.OK);
then(response.getBody()).isEqualTo("hello, " + name); (1)
}
1 | "hello, jim" |
266.6. Authentication Test
When successful, the authentication test will return a 200/OK with the text "hello, jim (from user)".
The name for "user" will be the username injected from the application.properties
file.
@Test
void can_authenticate_with_server() {
//given
String name="jim";
URI url = helloUrl.build(name);
RequestEntity<Void> request = RequestEntity.get(url).build();
//when
ResponseEntity<String> response = authnUser.exchange(request, String.class);
//then
then(response.getStatusCode()).isEqualTo(HttpStatus.OK);
then(response.getBody()).isEqualTo("hello, " +name+ " (from " +authnUsername+ ")");(1)
}
1 | "hello, jim (from user)" |
266.7. Automation
The IT tests have been disabled to avoid attempts to automatically deploy the application in every build location. Automation can be enabled at two levels: test and deployment.
266.7.1. Enable IT Test
We can enable the IT tests alone by adding -DskipITs=value
, where value
is anything but true
, false
, or blank.
-
skipITs (blank) and skipITs=true will cause failsafe to not run. This is a standard failsafe behavior.
-
skipITs=false will cause the application to be re-deployed to Heroku. This is part of our custom pom.xml definition that will be shown in a moment.
$ mvn verify -DitOnly -DskipITs=not_true (1) (2) (3)
...
GET https://ejava-boot.herokuapp.com:443/api/hello?name=jim, returned OK/200
hello, jim
...
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
1 | verify goal completes the IT test phases |
2 | itOnly - defined by ejava-build-parent to disable surefire tests |
3 | skipITs - controls whether IT tests are performed |
skipITs can save Time and Build-time Dependencies
Setting skipITs=true can save time and build-time dependencies when all that is desired is a resulting artifact produced by mvn install.
|
266.7.2. Enable Heroku Deployment
The pom also conditionally adds the heroku:deploy goal to the pre-integration-test phase if skipITs=false is explicitly set. This is helpful if changes have been made. However, know that a full upload and IT test execution takes a significant amount of time. Therefore, it is not something one would use in a rapid code, compile, test cycle.
<profiles>
<profile> <!-- deploys a Spring Boot Executable JAR -->
<id>heroku-it-deploy</id>
<activation>
<property> (1)
<name>skipITs</name>
<value>false</value>
</property>
</activation>
<properties> (2)
<spring-boot.repackage.skip>false</spring-boot.repackage.skip>
</properties>
<build>
<plugins>
<plugin> (3)
<groupId>com.heroku.sdk</groupId>
<artifactId>heroku-maven-plugin</artifactId>
<executions>
<execution>
<id>deploy</id>
<phase>pre-integration-test</phase>
<goals>
<goal>deploy</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
1 | only fires if skipITs has the value false |
2 | make sure that JAR is a Spring Boot executable JAR |
3 | add deploy step in the pre-integration-test phase |
$ mvn verify -DitOnly -DskipITs=false
...
[INFO] --- spring-boot-maven-plugin:2.4.2:repackage (package) @ docker-hello-example ---
[INFO] Replacing main artifact with repackaged archive
[INFO] <<< heroku-maven-plugin:3.0.3:deploy (deploy) < package @ docker-hello-example <<<
[INFO] --- heroku-maven-plugin:3.0.3:deploy (deploy) @ docker-hello-example ---
[INFO] jakarta.el-3.0.3.jar already exists in destination.
...
[INFO] -----> Done
[INFO] --- maven-failsafe-plugin:3.0.0-M5:integration-test (integration-test) @ docker-hello-example ---
...
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
[INFO] --- maven-failsafe-plugin:3.0.0-M5:verify (verify) @ docker-hello-example ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
267. Summary
In this module we learned:
-
to deploy an application under development to the Heroku cloud provider to make it accessible to Internet users
-
using naked Spring Boot form
-
to deploy incremental and iterative changes to the application
-
how to interact with your developed application on the Internet
Docker Images
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
268. Introduction
We have seen that we already have many of the tools we need to develop, test, and deploy a functional application. However, there comes a point where things get complicated.
-
What if everything is not a Spring Boot application and requires a unique environment?
-
What if you end up with dozens of applications and many versions?
-
Will everyone on your team be able to understand how to instantiate them?
-
Let's take a user-level peek at the Docker container in order to create a more standardized look for all our applications.
268.1. Goals
You will learn:
-
the purpose of an application container
-
to identify some open standards in the Docker ecosystem
-
to build Docker images using different techniques
-
to build a layered Docker image
268.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
build a basic Docker image with an executable JAR using a Dockerfile and docker commands
-
build a basic Docker image with the Spring Boot Maven Plugin and buildpack
-
build a layered Docker image with the Spring Boot Maven Plugin and buildpack
-
build a layered Docker image using a Dockerfile and docker commands
-
run a docker image hosting a Spring Boot application
269. Containers
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
"What is a Container" A standardized unit of software
269.1. Container Deployments
The following diagrams represent three common application deployment strategies: native, virtual machine, and container.
Figure 120. Native Deployment
Figure 121. VM Deployment
Figure 122. Container Deployment
-
native - has the performance advantage of running on bare metal but the disadvantage of having full deployment details exposed and the vulnerability of directly sharing the same host operating system with other processes.
-
virtual machine - (e.g., VMWare, VirtualBox) has the advantage of isolation from other processes and potential encapsulation of installation details but the disadvantage of a separate and distinct guest operating systems running on the same host with limited sharing of resources.
-
container - has the advantage of isolation from other processes, encapsulation of installation details, and runs in a lightweight container runtime that efficiently shares the resources of the host OS with each container.
270. Docker Ecosystem
Docker is an ecosystem of tooling that covers a lot of topics. Two of which are the container image and runtime. The specifications of both of these have been initiated by Docker — the company — and transitioned to the Open Container Initiative (OCI) — a standards body — that maintains the definition of the image and runtime specs and certifications.
This has allowed independent toolsets (for building Docker images) and runtimes (for running Docker images under different runtime and security conditions). For example, the following is a sample of the alternative builders and runtimes available.
270.1. Container Builders
Docker — the company — offers a Docker image builder. However, the builder requires a daemon with a root-level installation. Several alternative tools simply implement an image builder.
I use kaniko on a daily basis to build images within a CI/CD build pipeline. Since the jobs within the pipeline all run within Docker images, it helps avoid having to setup Docker within Docker and running the images in privileged mode.
270.2. Container Runtimes
Docker — the company — offers a container runtime. However, this container runtime has a complex lifecycle that includes daemons and extra processes. Several alternative runtimes simply run an image.
271. Docker Images
A Docker image is a tar file of layered, intermediate levels of the application. A layer within a Docker image contains a tar file of the assigned artifacts for that layer. If two or more Docker images share the same base layer — there is no need to repeat the base layer in that repository. If we change the upper levels of a Docker image, there is no need to rebuild the lower levels. These aspects will be demonstrated within this lecture and optimized by the tooling available within Spring Boot.
272. Basic Docker Image
We can build a basic Docker image from a normal executable JAR created from the Spring Boot Maven Plugin.
To prove that — we will return to the docker-hello-example used in the previous Heroku deployment lecture.
Example Requires Docker Installed
Implementing the first example will require docker — the product — to be installed.
Please see the development environment Docker setup for references.
|
The following shows us starting with a typical example web application that listens to port 8080 when built and launched.
The build happens to automatically invoke the spring-boot:repackage
goal.
However, if that is not the case, just run mvn spring-boot:repackage
to build the Spring Boot executable JAR.
$ mvn clean package (1)
...
target/
|-- [ 31M] docker-hello-example-6.0.1-SNAPSHOT-bootexec.jar
|-- [9.7K] docker-hello-example-6.0.1-SNAPSHOT.jar
$ java -jar target/docker-hello-example-6.0.1-SNAPSHOT-bootexec.jar (2)
...
Tomcat started on port(s): 8080 (http) with context path ''
Started DockerHelloExampleApp in 3.058 seconds (JVM running for 3.691)
1 | building the executable Spring Boot JAR |
2 | running the application |
272.1. Basic Dockerfile
We can build a basic Docker image manually by adding a Dockerfile and issuing a Docker command to build it.
The basic Dockerfile below extends a base OpenJDK 17 image from the global Docker repository, adds the executable JAR, and registers the default commands to use when running the image.
It happens to have the name Dockerfile.execjar
, which will be referenced by a later command.
FROM openjdk:17.0.2 (1)
COPY target/*-bootexec.jar application.jar (2)
ENTRYPOINT ["java", "-jar", "application.jar"] (3)
1 | building off a base OpenJDK 17 image |
2 | copying executable JAR into the image |
3 | establishing default command to run the executable JAR |
272.2. Basic Docker Image Build Output
The Docker build command processes the Dockerfile
and produces an image. We supply the Dockerfile,
the directory (.
) of the source files referenced by the Dockerfile,
and an image name and tag.
$ docker build . -f Dockerfile.execjar -t docker-hello-example:execjar (1) (2) (3) (4)
...
=> [1/2] FROM docker.io/library/openjdk:17.0 ... 5.3s
=> [2/2] COPY target/*-bootexec.jar application.jar 0.8s
...
Step 1/3 : FROM openjdk:17.0.2
Step 2/3 : COPY target/*-bootexec.jar application.jar
Step 3/3 : ENTRYPOINT ["java", "-jar", "application.jar"]
Successfully built eda93db54671
Successfully tagged docker-hello-example:execjar
1 | build - command to build Docker image |
2 | . - current directory is default source |
3 | -f - path to Dockerfile, if not Dockerfile in current directory |
4 | name:tag - name and tag of image to create |
Dockerfile is default name for Dockerfile
Default Docker file name is Dockerfile .
This example will use multiple Dockerfiles, so the explicit -f naming has been used.
|
272.3. Local Docker Registry
Once the build is complete, the image is available in our local repository with the name and tag we assigned.
$ docker images | egrep 'docker-hello-example|REPO'
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-hello-example execjar eda93db54671 12 minutes ago 504MB
-
REPOSITORY - names the primary name of the Docker image
-
TAG - primarily used to identify versions and variants of repository name.
latest
is the default tag -
IMAGE ID - is a hex string value that identifies the image. The repository:tag label just happens to point to that version right now, but will advance in a future change/build.
-
SIZE - is the total size if exported. Since Docker images are layered, multiple images sharing the same base image will consume much less space than reported here while staged in a repository
272.4. Running Docker Image
We can run the image with the docker run
command.
The following example shows
running the docker-hello-image
with tag execjar
,
exposing port 8080 within the image as port 9090 on localhost (-p 9090:8080
),
running in interactive mode (-it
; optional here, but important when using as interactive shell), and
removing the runtime image when complete (--rm
).
$ docker run --rm -it -p 9090:8080 docker-hello-example:execjar (1) (2) (3) (4)
. ____ _ __ _ _ (5)
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (2.7.0)
...
Tomcat started on port(s): 8080 (http) with context path ''
Started DockerHelloExampleApp in 4.049 seconds (JVM running for 4.784)
1 | run - run a command in a new Docker image |
2 | --rm - remove the image instance when complete |
3 | -it - allocate a pseudo-TTY (-t) for an interactive (-i) shell |
4 | -p - map external port 9090 to 8080 of the internal process |
5 | Spring Boot App launched with no arguments |
272.5. Docker Run Command with Arguments
Arguments can also be passed into the image. The example below passes in a standard Spring Boot property to turn off printing of the startup banner.
$ docker run --rm -it -p 9090:8080 docker-hello-example:execjar --spring.main.banner-mode=off
... (1)
Tomcat started on port(s): 8080 (http) with context path ''
Started DockerHelloExampleApp in 4.049 seconds (JVM running for 4.784)
1 | spring.main.banner-mode property passed to Spring Boot App and disabled banner printing |
272.6. Listing Running Containers
We can verify the process is running using the Docker ps
command.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8078f6369a59 docker-hello-example:execjar "java -jar applicatiβ¦" 4 minutes ago Up 4 minutes 0.0.0.0:9090->8080/tcp practical_agnesi
-
CONTAINER ID - hex string we can use to refer to this running (or later terminated) instance
-
IMAGE - REPO:TAG executed
-
COMMAND - command executed upon entry
-
CREATED - when started
-
STATUS - run status. Use
docker ps -a
to locate all images and not just running images -
PORTS - lists ports exposed within image and what they are mapped to externally on the host
-
NAMES - textual name alias for instance. Can be used interchangeably with containerId. Can be explicitly set with
--name foo
option prior to the image parameter, but must be unique
272.7. Using the Docker Image
We can call our Spring Boot process within the image using the mapped 9090 port.
$ curl http://localhost:9090/api/hello?name=jim
hello, jim
272.8. Docker Image is Layered
The Docker image is a TAR file that is made up of layers
$ docker save docker-hello-example:execjar > image.tar
$ tar tf image.tar
27dcc15ccaaac941791ba5826356a254e70c85d4c9c8954e9c4eb2873506a4c8/
27dcc15ccaaac941791ba5826356a254e70c85d4c9c8954e9c4eb2873506a4c8/VERSION
27dcc15ccaaac941791ba5826356a254e70c85d4c9c8954e9c4eb2873506a4c8/json
27dcc15ccaaac941791ba5826356a254e70c85d4c9c8954e9c4eb2873506a4c8/layer.tar
304740117a5a0c15c8ea43b7291479207b357b9fc08cc47a5e4a357f5e9a1768/
304740117a5a0c15c8ea43b7291479207b357b9fc08cc47a5e4a357f5e9a1768/VERSION
304740117a5a0c15c8ea43b7291479207b357b9fc08cc47a5e4a357f5e9a1768/json
304740117a5a0c15c8ea43b7291479207b357b9fc08cc47a5e4a357f5e9a1768/layer.tar
...
a3651512f2a9241ae11ad8498df67b4f943ea4943f4fae8f88bcb0b81168803d/
a3651512f2a9241ae11ad8498df67b4f943ea4943f4fae8f88bcb0b81168803d/VERSION
a3651512f2a9241ae11ad8498df67b4f943ea4943f4fae8f88bcb0b81168803d/json
a3651512f2a9241ae11ad8498df67b4f943ea4943f4fae8f88bcb0b81168803d/layer.tar
...
manifest.json
repositories
This specific example has seven (7) layers.
$ tar tf image.tar | grep layer.tar | wc -l
7
272.9. Application Layer
If we untar the Docker image and poke around, we can locate the layer that contains our executable JAR file. All 25M of it in one place.
$ tar tf ./a3651512f2a9241ae11ad8498df67b4f943ea4943f4fae8f88bcb0b81168803d/layer.tar
application.jar (1)
$ ls -lh ./a3651512f2a9241ae11ad8498df67b4f943ea4943f4fae8f88bcb0b81168803d/layer.tar
25M ./a3651512f2a9241ae11ad8498df67b4f943ea4943f4fae8f88bcb0b81168803d/layer.tar
1 | one of the layers contains our application layer and is made up of a single Spring Boot executable JAR |
There are a few things to note about what we uncovered in this section:
-
the Docker image is not a closed, binary representation. It is an openly accessible layer of files as defined by the OCI Image Format Specification.
-
our application is currently implemented as a single 25MB layer with a single Spring Boot executable JAR. Our code was likely only a few KBytes of that 25MB.
Hold onto both of those points when covering the next topic.
272.10. Spring Boot Plugin
Starting with Spring Boot 2.3 and its enhanced support for cloud technologies, the Spring Boot Maven Plugin now provides support for building a Docker image using buildpack — not Docker and no Dockerfile.
$ mvn spring-boot:help
...
spring-boot:build-image
Package an application into an OCI image using a buildpack.
Buildpack is an approach to building Docker images based on strict layering concepts that Docker has always prescribed. The main difference with buildpack is that the layers are more autonomous — backed by a segment of industry — allowing for higher level application layers to be quickly rebased on top of patched operating system layers without fully rebuilding the image.
Joe Kutner from Heroku stated at a Spring One Platform conference that they were able to patch 10M applications overnight when a serious bug was corrected in a base layer. This was due to being able to rebase the application specific layers with a new base image using buildpack technology and without having to rebuild the images. [54]
272.11. Building Docker Image using Buildpack
If we look at the portions of the generated output, we will see
-
15 candidate buildpacks being downloaded
-
one of the 5 used buildpacks is specific to spring-boot
-
various layers are generated and reused to build the image
-
our application still ends up in a single layer
-
the image is generated, by default using the Maven artifactId as the image name and version number as the tag
$ mvn clean package spring-boot:build-image -DskipTests
...
[INFO] --- spring-boot-maven-plugin:2.7.0:build-image (default-cli) @ docker-hello-example ---
[INFO] Building image 'docker.io/library/docker-hello-example:6.0.1-SNAPSHOT'
[INFO]
[INFO] > Pulling builder image 'gcr.io/paketo-buildpacks/builder:base-platform-api-0.3' 6%
...
[INFO] > Pulling builder image 'gcr.io/paketo-buildpacks/builder:base-platform-api-0.3' 100%
[INFO] > Pulled builder image 'gcr.io/paketo-buildpacks/builder@sha256:6d625fe00a2b5c4841eccb6863ab3d8b6f83c3138875f48ba69502abc593a62e'
[INFO] > Pulling run image 'gcr.io/paketo-buildpacks/run:base-cnb' 100%
[INFO] > Pulled run image 'gcr.io/paketo-buildpacks/run@sha256:087a6a98ec8846e2b8d75ae1d563b0a2e0306dd04055c63e04dc6172f6ff6b9d'
[INFO] > Executing lifecycle version v0.8.1
[INFO] > Using build cache volume 'pack-cache-2432a78c0232.build'
[INFO]
[INFO] > Running creator
[INFO] [creator] ===> DETECTING
[INFO] [creator] 5 of 16 buildpacks participating
...
[INFO] [creator] paketo-buildpacks/spring-boot 2.4.1
...
[INFO] [creator] ===> EXPORTING
[INFO] [creator] Reusing layer 'launcher'
[INFO] [creator] Adding layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[INFO] [creator] Reusing layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
...
[INFO] [creator] Adding 1/1 app layer(s)
[INFO] [creator] Adding layer 'config'
[INFO] [creator] *** Images (10a764b20812):
[INFO] [creator] docker.io/library/docker-hello-example:6.0.1-SNAPSHOT
[INFO]
[INFO] Successfully built image 'docker.io/library/docker-hello-example:6.0.1-SNAPSHOT'
272.12. Buildpack Image in Local Docker Repository
The newly built image is now installed into the local Docker registry. It is using the Maven GAV artifactId for the repository and version for the tag.
$ docker images | egrep 'docker-hello-example|IMAGE'
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-hello-example execjar eda93db54671 40 minutes ago 315MB (1)
docker-hello-example 6.0.1-SNAPSHOT 10a764b20812 41 years ago 279MB (1)
1 | NOTE: sizes were from a later build using newer versions of Spring Boot |
One odd thing is the timestamp used (41 years ago) for the created date with the buildpack image. It refers to the UNIX epoch (new java.util.Date(0) UTC) — buildpacks intentionally pin the creation timestamp to a constant value so that image builds are reproducible. |
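If the default GAV-derived name is not desired, the plugin allows the image name to be overridden in the pom. A minimal sketch — the repository prefix shown is illustrative:
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <image>
            <name>myrepo/${project.artifactId}:${project.version}</name>
        </image>
    </configuration>
</plugin>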
272.13. Buildpack Image Execution
Notice that when we run the newly built image that was built with buildpack, we get slightly different behavior at the beginning, where some base-level memory tuning takes place.
$ docker run --rm -it -p 9090:8080 docker-hello-example:6.0.1-SNAPSHOT
Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=87032K -XX:ReservedCodeCacheSize=240M -Xss1M -Xmx449543K (Head Room: 0%, Loaded Class Count: 12952, Thread Count: 250, Total Memory: 1.0G)
Adding 127 container CA certificates to JVM truststore
Spring Cloud Bindings Boot Auto-Configuration Enabled
...
Tomcat started on port(s): 8080 (http) with context path ''
Started DockerHelloExampleApp in 3.589 seconds (JVM running for 4.3)
The following shows we are able to call the new running image.
$ curl http://localhost:9090/api/hello?name=jim
hello, jim
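As an aside — since the memory calculator keyed off an unset container memory limit, we could constrain the container with Docker's -m option and watch the calculated JVM values shrink accordingly. A sketch; the exact calculated output will differ:
$ docker run --rm -it -m 512m -p 9090:8080 docker-hello-example:6.0.1-SNAPSHOT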
272.14. Inspecting Buildpack Image
If we save off the newly built image and briefly inspect it, we will see that it contains the same TAR-based layering scheme — but with 21 versus 7 layers in this specific example.
$ docker save docker-hello-example:6.0.1-SNAPSHOT > image.tar
$ tar tf image.tar | grep layer.tar | wc -l
21
If we untar the image and poke around, we can eventually locate our application and notice that it happens to be in exploded form versus executable JAR form. We can see our code and dependency libraries separately.
$ tar tf 6e2b5eb3b4b11627cce2ca7c8aeb7de68a7a54b56b15ea4d43e4a14d2b1f0b9a/layer.tar
...
/workspace/BOOT-INF/classes/info/ejava/examples/svc/docker/hello/DockerHelloExampleApp.class
/workspace/BOOT-INF/classes/info/ejava/examples/svc/docker/hello/controllers/ExceptionAdvice.class
/workspace/BOOT-INF/classes/info/ejava/examples/svc/docker/hello/controllers/HelloController.class
...
/workspace/BOOT-INF/lib/classgraph-4.8.69.jar
/workspace/BOOT-INF/lib/commons-lang3-3.10.jar
/workspace/BOOT-INF/lib/ejava-dto-util-6.0.1-SNAPSHOT.jar
/workspace/BOOT-INF/lib/ejava-util-6.0.1-SNAPSHOT.jar
/workspace/BOOT-INF/lib/ejava-web-util-6.0.1-SNAPSHOT.jar
As a reminder, when we built the Docker image with a Dockerfile and vanilla docker commands — we ended up with an application layer containing a single Spring Boot executable JAR (with a few KBytes of our code and 24.9MB of dependency artifacts).
$ tar tf ./a3651512f2a9241ae11ad8498df67b4f943ea4943f4fae8f88bcb0b81168803d/layer.tar
application.jar
$ ls -lh ./a3651512f2a9241ae11ad8498df67b4f943ea4943f4fae8f88bcb0b81168803d/layer.tar
25M ./a3651512f2a9241ae11ad8498df67b4f943ea4943f4fae8f88bcb0b81168803d/layer.tar
273. Layers
Dockerfile layers are an important concept when it comes to efficiency of storage and distribution. Any images built on common base images or intermediate commands that produce the same result do not have to be replicated within a repository. For example, 100 images all extending from the same OpenJDK 17 image do not need to have the OpenJDK 17 portions repeated.
To make it easier to view and analyze the layers of the Dockerfile — we can use a simple inspection tool called dive. This shows us how the image is constructed, where we may have wasted space, and potentially how to optimize. Since these images are brand new and based off production base images — we will not see much wasted space at this time. However, it will help us better understand the Docker image and how cloud features added to Spring Boot can help us.
Dive Not Required
There is no need to install the dive tool to learn about layers and how Spring Boot provides support for layers.
All necessary information to understand the topic is contained in the following material.
|
$ dive [imageId or name:tag]
With the image displayed, I find it helpful to:
-
hit [CNTL]+L if "Show Layer Changes" is not yet selected
-
hit [TAB] to switch to "Current Layer Contents" pane on the right
-
hit [CNTL]+U,R,M, and B to turn off all display except "Added"
-
hit [TAB] to switch back to "Layers" pane on the left
In the "Layers" pane we can scroll up and down the layers to see which files where added because of which ordered command in the Dockerfile. If all the layers look the same, make sure you are only displaying the "Added" artifacts.
Dive within Docker
Or — of course — you could run dive within Docker to inspect a Docker image.
This requires that you map the image’s Docker socket to the host machine’s Docker socket with the docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock wagoodman/dive [imageId or name:tag] |
273.1. Analyzing Basic Docker Image
In this first example, we are looking at the layers of the basic Dockerfile. Notice:
-
a majority of the size was the result of extending the OpenJDK image. That space represents content that a Docker image repository does not have to replicate.
-
the last layer contains the 26MB executable JAR. Because that layer contains our custom application, it is content a Docker image repository has to replicate.
$ dive docker-hello-example:execjar
273.2. Analyzing Basic Buildpack Image
If we look at the Docker image built with buildpack, through the Maven plugin, we will see the same 26MB exploded as separate files towards the end of the image. From a layering perspective — the exploded structure has not saved us anything.
$ dive docker-hello-example:6.0.1-SNAPSHOT
However, now that we have it exploded — we will have the option to break it into further layers.
274. Adding Fine-grain Layering
Having all 26MB of our Spring Boot application in a single layer can be wasteful — especially if we push new images to a repository many times during development. We end up with 26MB.version1, 26MB.version2, etc., when each push more than likely changes only a few application class files; a wholesale change in library dependencies is much less common.
274.1. Configure Layer-ready Executable JAR
The Spring Boot plugin and buildpack provide support for creating
finer-grain layers from the executable JAR by enabling the layers
plugin configuration property.
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<layers>
<enabled>true</enabled>
</layers>
</configuration>
</plugin>
274.2. Building and Inspecting Layer-ready Executable JAR
If we rebuild the executable JAR with the layered option, an extra wrapper is added to the executable JAR file that can be activated with the -Djarmode=layertools option to the java -jar command. This option takes one of two primary arguments: list or extract (a help command is also available).
$ mvn clean package spring-boot:repackage -Dlayered=true -DskipTests (1)
$ java -Djarmode=layertools -jar target/docker-hello-example-6.0.1-SNAPSHOT-bootexec.jar
Usage:
java -Djarmode=layertools -jar docker-hello-example-6.0.1-SNAPSHOT-bootexec.jar
Available commands:
list List layers from the jar that can be extracted
extract Extracts layers from the jar for image creation
help Help about any command
1 | -Dlayered=true activates layering within the Maven pom.xml |
274.3. Default Executable JAR Layers
Spring Boot automatically configures four (4) layers by default: (released) dependencies, spring-boot-loader, snapshot-dependencies, and application. These layers are ordered from most stable (dependencies) to least stable (application). We have the ability to change the layers — but I won’t go into that here.
$ java -Djarmode=layertools -jar target/docker-hello-example-6.0.1-SNAPSHOT-bootexec.jar list
dependencies
spring-boot-loader
snapshot-dependencies
application
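The extract command unpacks those four layers into separate directories — which is exactly what the layered Dockerfile in a later section relies on. For example (output abbreviated):
$ mkdir -p target/extracted && cd target/extracted
$ java -Djarmode=layertools -jar ../docker-hello-example-6.0.1-SNAPSHOT-bootexec.jar extract
$ ls
application  dependencies  snapshot-dependencies  spring-boot-loader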
275. Layered Buildpack Image
With the layers
configuration property enabled, the next build will result
in a layered image posted to the local Docker repository.
$ mvn package spring-boot:build-image -Dlayered=true -DskipTests
...
Successfully built image 'docker.io/library/docker-hello-example:6.0.1-SNAPSHOT'
275.1. Dependency Layer
The dependency layer contains all the released dependencies. This happens to make up most of the 26MB we had for the executable JAR. This 26MB does not need to be replicated in the image repository if consistent with follow-on publications of our image.
275.2. Snapshot Layer
The snapshot layer contains dependency artifacts that have not been released. This is an indication that the artifact is slightly more stable than our application code but not as stable as the released dependencies.
275.3. Application Layer
The application layer contains the code for the local module — which should be the most volatile. Notice that in this example, the application code is 12KB out of the total 26MB for the executable JAR. If we change our application code and redeploy the image somewhere — only this small portion of the code needs to change.
275.4. Review: Single Layer Application
If you remember … before we added multiple layers, all the stable library JARs and semi-stable SNAPSHOT dependencies were in the same layer as our potentially changing application code. We now have them in separate layers.
276. Layered Docker Image
Since buildpack may not be for everyone, Spring Boot provides a means
for standard Docker users to create layered images with a standard
Dockerfile and standard docker
commands. The following example is based
on the
Example Dockerfile on the Spring Boot features page.
276.1. Example Layered Dockerfile
The Dockerfile is written in two parts: builder and image construction. The first, builder half of the file copies in the executable JAR and extracts the layer directories into a temporary portion of the image.
The second, construction half builds the final image by extending off what could be an independent parent image and the products of the builder phase. Notice how the four (4) layers are copied in separately - forming distinct boundaries.
FROM openjdk:17.0.2 as builder (1)
WORKDIR application
ARG JAR_FILE=target/*-bootexec.jar
COPY ${JAR_FILE} application.jar
RUN java -Djarmode=layertools -jar application.jar extract
FROM openjdk:17.0.2 (2)
WORKDIR application
COPY --from=builder application/dependencies/ ./
COPY --from=builder application/spring-boot-loader/ ./
COPY --from=builder application/snapshot-dependencies/ ./
COPY --from=builder application/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
1 | commands used to setup building the image |
2 | commands used to build the image |
$ docker build . -f Dockerfile.layered -t docker-hello-example:layered
Sending build context to Docker daemon 26.1MB
276.2. Example Build
The following shows the output of building our example using
the docker build
command and the Dockerfile above.
Notice:
-
that it copies in the executable JAR and extracts the layers into the temporary image.
-
how it is building separate, distinct layers by using separate COPY commands for each layer directory.
=> [stage-1 2/6] WORKDIR application
=> [builder 3/4] COPY target/*-bootexec.jar application.jar
=> [builder 4/4] RUN java -Djarmode=layertools -jar application.jar extract
=> [stage-1 3/6] COPY --from=builder application/dependencies/ ./
=> [stage-1 4/6] COPY --from=builder application/spring-boot-loader/ ./
=> [stage-1 5/6] COPY --from=builder application/snapshot-dependencies/ ./
=> [stage-1 6/6] COPY --from=builder application/application/ ./
=> => naming to docker.io/library/docker-hello-example:layered
276.3. Dependency Layer
The dependency layer — like with the buildpack version — contains 26MB of the released JARs. This makes up the bulk of what was in our executable JAR.
276.4. Snapshot Layer
The snapshot layer contains dependencies that have not yet been released. These are believed to be more stable than our application code but less stable than the released dependencies.
276.5. Application Layer
The application layer contains our custom application code. This content is thought to be the most volatile and is placed in the top-most layer.
277. Summary
In this module we learned:
-
Docker is an ecosystem of concepts, tools, and standards
-
Docker — the company — provides an implementation of those concepts, tools, and standards
-
Docker images can be created using different tools and technologies
-
the docker build command uses a Dockerfile
-
buildpack uses knowledgeable inspection of the codebase
-
Docker images have ordered layers — from common operating system to custom application
-
buildpack layers are rigorous enough that they can be rebased upon freshly patched images — making hundreds to millions of image patches feasible within a short amount of time
-
intelligent separation of code into layers and proper ordering can lead to storage and complexity savings
-
Spring Boot provides a means to separate the executable JAR into layers that match certain criteria
Heroku Docker Deployments
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
278. Introduction
With a basic introduction to Docker under our belt, I would like to return to the Heroku deployment topic to identify the downside of deploying full applications — whether they be
-
naked Spring Boot executable JAR
-
Spring Boot executable JAR wrapped in a Docker image
— and show the benefit of using a layered application.
This is a follow-on lecture
It is assumed that you have already covered the Heroku deployment and Docker lectures, have a Heroku account, already deployed a Spring Boot application, and interacted with that application via the Internet. If not, you will need to go back to that lecture and review the basics of getting started with the Heroku account. If you do not have Docker — the product — installed, you should still be able to follow along to pick up the concepts. |
278.1. Goals
You will learn:
-
to deploy a Docker-based image to a cloud provider to make it accessible to Internet users
278.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
make a Heroku-deployable Docker image that accepts environment variable(s)
-
deploy a Docker image to Heroku using docker repository commands
-
deploy a Docker image to Heroku using CLI commands
279. Heroku Docker Notes
The following are Heroku references for Spring Boot and Docker deployments.
Of particular note — the Docker image built by the Maven Spring Boot Plugin (using buildpack) uses an internal memory calculator that initially mandates 1GB of memory. This exceeds the free 512MB Heroku limit. Deploying this version of the application will immediately fail until we locate a way to change that value. However, we can successfully deploy the standard Dockerfile version — which lacks an explicit, up-front memory requirement.
We will also need to do some property expression gymnastics that will be straightforward to implement using the standard Dockerfile approach.
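One avenue worth exploring — stated here as an assumption, not a verified recipe — is passing standard JVM options through the JAVA_TOOL_OPTIONS environment variable the JVM will honor. For example, with hypothetical values:
$ heroku config:set JAVA_TOOL_OPTIONS="-Xmx256m" --app ejava-docker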
280. Heroku Login
With the Heroku CLI installed — we need to login. This will redirect us to the browser where we can complete the login.
$ heroku login
heroku: Press any key to open up the browser to login or q to exit:
Opening browser to https://cli-auth.heroku.com/auth/cli/browser/f944d777-93c7-40af-b772-0a1c5629c609
Logging in... done
Logged in as ...
280.1. Heroku Container Login
Heroku requires an additional login step to work with containers. With the initial login complete — no additional credentials will be asked for, but this step appears to be required.
$ heroku container
7.60.0
$ heroku container:login
Login Succeeded
280.2. Create Heroku App
At this point you are ready to again perform a one-time (per deployment app) process that will reserve an app-name for you on herokuapp.com. We know that this name is used to reference our application and form a URL to access it on the Internet.
$ heroku create [app-name] (1)
Creating β¬’ [app-name]... done
https://app-name.herokuapp.com/ | https://git.heroku.com/app-name.git
1 | if app-name not supplied, a random app-name will be generated |
Heroku also uses Git repositories for deployment
Heroku creates a Git repository for the app-name that can
also be leveraged as a deployment interface. I will not be covering
that option.
|
You can create more than one heroku app and the app can be renamed
with the following apps:rename
command.
$ heroku apps:rename --app oldname newname
Visit the Heroku apps page to locate technical details related to your apps.
281. Adjust Dockerfile
Heroku requires the application accept a $PORT
environment variable
to identify the listen port at startup. We know from our lessons
in configuration, we can accomplish that by supplying a Spring Boot
property on the command line.
java -jar (app).jar --server.port=$PORT
Since we are launched using a Dockerfile and the parameter will require
a shell evaluation, we can accomplish this by
using the Dockerfile CMD
command below — which will feed the ENTRYPOINT
command its resulting values when expressed this way.
[55]
I have also added a default value of 8080 when the $PORT
variable has not
been supplied (i.e., in local environment).
ENV PORT=8080 (1)
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"] (2)
CMD ["--server.port=${PORT}"] (3)
1 | default value used if PORT is not supplied |
2 | ENTRYPOINT always executes no matter if a parameter is supplied |
3 | CMD expresses a default when no parameter(s) are supplied |
281.1. Test Dockerfile server.port
We can test the configuration locally using the following commands.
281.1.1. Testing $PORT CMD With Environment Variable
In this iteration, we are simulating the Heroku container supplying a PORT
environment variable with a value.
This value will be used by the Spring Boot application running within the Docker image.
The PORT
value is also mapped to an external 9090
value so we can call the server from the outside.
$ mvn package spring-boot:repackage -Dlayered=true (1)
$ docker build . -f Dockerfile.layered -t docker-hello-example:layered
$ docker run --rm -p 9090:5000 -e PORT=5000 docker-hello-example:layered (2)
...
Tomcat started on port(s): 5000 (http) with context path '' (3)
Started DockerHelloExampleApp in 3.623 seconds (JVM running for 4.392)
1 | example sets layered false by default and toggles with layered property |
2 | -e option defines PORT environment variable to value 5000 |
3 | --server.port=${PORT} sees that value and server listens on port 5000 |
We can use the following to test.
$ curl http://localhost:9090/api/hello?name=jim
hello, jim
281.1.2. Testing without Environment Variable
In this iteration, we are simulating local development independent of the Heroku container by not supplying a PORT
environment variable and using the default from the Docker CMD
setting.
Like before, this value will be used by the Spring Boot application running within the Docker image and that value will again be mapped to external port 9090
value so we can call the server from the outside.
$ docker run --rm -p 9090:8080 docker-hello-example:layered (1)
Tomcat started on port(s): 8080 (http) with context path '' (2)
Started DockerHelloExampleApp in 4.414 seconds (JVM running for 5.177)
1 | no PORT environment variable is expressed |
2 | server uses assigned ENV default of 8080 |
We can again use the following to test.
$ curl http://localhost:9090/api/hello?name=jim
hello, jim
282. Deploy Docker Image
I will demonstrate two primary ways to deploy a Docker image to Heroku:
-
using
docker push
command to deploy a tagged image to the Heroku Docker repository -
using the
heroku container:push
command to build and upload an image
Both require a follow-on heroku container:release
command to complete
the deployment.
282.1. Deploying Tagged Image
One way to deploy a Docker image to Heroku is to create a Docker tag associated with the target Heroku repository and then push that image to the Docker repository. The tag has the following format
registry.heroku.com/[app-name]/web (1)
1 | registry.heroku.com is the actual address of the Heroku Docker repository |
My examples will use the app-name ejava-docker.
282.1.1. Tagging the Image
There are at least two ways to tag the image:
-
WAY 1: tag the Docker image during the build
Tag Docker Image During Build
$ docker build . -f Dockerfile.layered -t registry.heroku.com/ejava-docker/web
...
Successfully tagged registry.heroku.com/ejava-docker/web:latest
-
WAY 2: tag an existing Docker image
Tag Existing Docker Image
$ docker build . -f Dockerfile.layered -t docker-hello-example:layered
$ docker tag docker-hello-example:layered registry.heroku.com/ejava-docker/web
In either case, we will end up with a tag in the repository that will look like the following.
$ docker images | grep heroku
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.heroku.com/ejava-docker/web latest 72fe4327f05f 15 minutes ago 293MB
282.1.2. Deploying the Image
The last step in deploying the tagged image is to invoke docker push
using the full name of the tag.
$ docker push registry.heroku.com/ejava-docker/web
The push refers to repository [registry.heroku.com/ejava-docker/web]
3e974fa6054f: Pushed
...
7ef368776582: Layer already exists
latest: digest: sha256:37c99a899b26f2cfb192cd42f930120b11bb56408eb3e4590dfe78b957f2acf1 size: 2621
282.2. Push using Heroku CLI
The other alternative is to use heroku container:push
to build and push
the Docker image without going through the local repository.
$ heroku container:push web --app ejava-docker
container:push requires Dockerfile to be named Dockerfile — no file references
The heroku container:push command requires the Dockerfile be called Dockerfile and in the current directory. The command does not allow us to reference a unique filename (e.g., Dockerfile.layered ).
I used a soft link to get around that (i.e., ln -s Dockerfile.layered Dockerfile ).
The container:push documentation does imply that files normally referenced locally by the Dockerfile can be in a referenced location — possibly allowing the Dockerfile to be placed in a unique location versus having a unique name.
|
283. Complete Deployment
A successfully pushed image will not be made immediately available. We
must follow through with a release
command.
283.1. Release Pushed Image to Users
The following command finishes the deployment — making the updated image accessible to users.
$ heroku container:release web --app ejava-docker
Releasing images web to ejava-docker... done
283.2. Tail Logs
We can gain some insight into the application health by tailing the logs.
$ heroku logs --app ejava-docker --tail
Starting process with command `--server.port\=\$\{PORT:-8080\}`
...
Tomcat started on port(s): 54644 (http) with context path ''
Started DockerHelloExampleApp in 9.194 seconds (JVM running for 9.964)
283.3. Access Site
We can access the deployed application at this point but will be required to use HTTPS. Notice, however, HTTPS is fully setup with a trusted certificate.
$ curl -v https://ejava-docker.herokuapp.com/api/hello?name=jim
* Trying 52.73.83.132...
* TCP_NODELAY set
* Connected to ejava-docker.herokuapp.com (52.73.83.132) port 443 (#0)
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: C=US; ST=California; L=San Francisco; O=Heroku, Inc.; CN=*.herokuapp.com
* start date: Jun 15 00:00:00 2020 GMT
* expire date: Jul 7 12:00:00 2021 GMT
* subjectAltName: host "ejava-docker.herokuapp.com" matched cert's "*.herokuapp.com"
* issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=DigiCert SHA2 High Assurance Server CA
* SSL certificate verify ok.
> GET /api/hello?name=jim HTTP/1.1
> Host: ejava-docker.herokuapp.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200
< Server: Cowboy
Hello, jim
284. Summary
In this module we learned:
-
to deploy an application under development to the Heroku cloud provider to make it accessible to Internet users
-
using Docker form
-
to deploy incremental and iterative changes to the application
Docker Compose
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
285. Introduction
In a few previous lectures we have used raw Docker API command line calls to perform the desired goals. At some early point these will become unwieldy and we will be searching for a way to wrap these commands. Years ago, I resorted to Ant and the exec command to wrap and chain my high-level goals. In this lecture we will learn about something far more native and capable for managing Docker containers — docker-compose.
285.1. Goals
You will learn:
-
how to implement a network of services for development and testing using Docker Compose
-
how to operate a Docker Compose network lifecycle and how to interact with the running instances
285.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
identify the purpose of Docker Compose for implementing a network of virtualized services
-
create a Docker Compose file that defines a network of services and their dependencies
-
custom configure a Docker Compose network for different uses
-
perform Docker Compose lifecycle commands to build, start, and stop a network of services
-
execute ad-hoc commands inside running images
-
instantiate back-end services for use with the follow-on database lectures
286. Development and Integration Testing with Real Resources
To date, we have primarily worked with a single Web application. In the follow-on lectures we will soon need to add back-end database resources.
We can test with mocks and in-memory versions of some resources. However, there will come a day when we are going to need a running copy of the real thing or possibly a specific version. |
Figure 126. Need to Integrate with Specific Real Services
|
We have already gone through the work to package our API service in a Docker image and the Docker community has built a plethora of offerings ready for easy download. Among them are Docker images for the resources we plan to eventually use. It would seem that we have a path forward. |
Figure 127. Virtualize Services with Docker
|
286.1. Managing Images
You know from our initial Docker lectures that we can easily download the images
and run them individually (given some instructions) with the docker run
command.
Knowing that — we could try doing the following and almost get it to work.
$ docker run --rm -p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME=admin \
-e MONGO_INITDB_ROOT_PASSWORD=secret mongo:4.4.0-bionic (1)
$ docker run --rm -p 5432:5432 \
-e POSTGRES_PASSWORD=secret postgres:12.3-alpine (2)
$ docker run --rm -p 9080:8080 \
-e MONGODB_URI=... \ (3)
-e DATABASE_URL=... \
docker-hello-example:6.0.1-SNAPSHOT
1 | using the mongo container from Dockerhub |
2 | using the postgres container from Dockerhub |
3 | using our example Spring Boot Web application; it does not yet use the databases |
However, this begins to get complicated when:
-
we start integrating the API image with the individual resources through networking
-
we want to make the test easily repeatable
-
we want multiple instances of the test running concurrently on the same machine without interference with one another
Let's not mess with manual Docker commands for too long! There are better ways to do this with Docker Compose.
287. Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. With Docker Compose, we can:
-
define our network of applications in a single YAML file
-
start/stop applications according to defined dependencies
-
run commands inside of running images
-
treat the running applications as normal, running Docker images
287.1. Docker Compose is Local to One Machine
Docker Compose runs everything local. It is a modest but necessary step above Docker but far simpler than any of the distributed environments that logically come after it (e.g., Docker Swarm, Kubernetes). If you are familiar with Kubernetes and MiniKube, then you can think of Docker Compose as a very simple/poor man's Helm Chart. "Poor" in that it only runs on a single machine. "Simple" because you only need to define details of each service and not have to worry about distributed aspects or load balancing that might come in a more distributed solution.
With Docker Compose, there
-
are one or more configuration files
-
is the opportunity to apply environment variables and extensions
-
are commands to build and control lifecycle actions of the network
Let’s start with the Docker Compose configuration file.
288. Docker Compose Configuration File
The
Docker Compose (configuration) file is based on
YAML — which uses a concise way to express information
based on indentation and firm symbol rules. Assuming we have a simple network of three (3)
services, we can limit our definition to a file version
and individual services
.
version: '3.8'
services:
mongo:
...
postgres:
...
api:
...
-
version - informs the docker-compose binary what features could be present within the file. I have shown a recent version of 3.8, but our use of the file will be very basic and could likely be set to 3 or as low as 2.
-
services - lists the individual nodes and their details. Each node is represented by a Docker image and we will look at a few examples next.
Refer to the Compose File Reference for more details.
288.1. mongo Service Definition
The mongo
service defines our instance of MongoDB.
mongo:
image: mongo:4.4.0-bionic
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: secret
# ports: (1)
# - "27017" (2)
# - "27017:27017" (3)
# - "37001:27017" (4)
# - "127.0.0.1:37001:27017" (5)
1 | not assigning port# here |
2 | 27017 internal, random external |
3 | 27017 both internal and external |
4 | 37001 external and 27017 internal |
5 | 37001 exposed only on 127.0.0.1 external and 27017 internal |
-
image - identifies the name and tag of the Docker image. This will be automatically downloaded if not already available locally
-
environment - defines specific environment variables to be made available when running the image.
-
VAR: X
passes in variable VAR with value X.
-
VAR
by itself passes in variable VAR with whatever the value of VAR has been assigned to be in the environment (i.e., environment variable or from environment file).
-
-
ports - maps a container port to a host port with the syntax
"host interface:host port#:container port#"
-
host port#:container port#
by itself will map to all host interfaces
-
"container port#"
by itself will be mapped to a random host port#
-
no ports defined means the container ports that do exist are only accessible within the network of services defined within the file
-
288.2. postgres Service Definition
The postgres
service defines our instance of Postgres.
postgres:
image: postgres:12.3-alpine
# ports: (1)
# - "5432:5432"
environment:
POSTGRES_PASSWORD: secret
-
the default username and database name is
postgres
-
assigning a custom password of
secret
Mapping Port to Specific Host Port Restricts Concurrency to one Instance
Mapping a container port# to a fixed host port# makes the service easily accessible
from the host via a well-known port# but restricts the number of instances that can be
run concurrently to one. This is typically what you might do with development resources.
We will cover how to do both easily — shortly.
|
288.3. api Service Definition
The api
service defines our API server with the Votes and Elections Services.
This service will become a client of the other two services.
api:
build:
context: .
dockerfile: Dockerfile.layered
image: docker-hello-example:layered
ports:
- "${API_PORT:-8080}:8080"
depends_on:
- mongo
- postgres
environment:
- spring.profiles.active=integration
- MONGODB_URI=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin
- DATABASE_URL=postgres://postgres:secret@postgres:5432/postgres
-
build - identifies a source Dockerfile that can build the image for this service
-
context - defines the path to the Dockerfile
-
dockerfile - defines the specific name of the Dockerfile (optional in this case)
-
-
image - identifies the name and tag used for the built image
-
ports - using a ${variable:-default} reference so that we have option to expose the container port# 8080 to a dynamically assigned host port# during testing. If
API_PORT
is not resolved to a value, the default 8080 value will be used.
-
depends_on - establishes a dependency between the images. This triggers a start of dependencies when starting this service. It also adds a hostname to this image’s environment. Therefore, the
api
server can reach the other services using hostnames mongo and postgres. You will see an example of that when you look closely at the URLs in the later examples.
-
environment - environment variables passed to Docker image.
-
using
spring.profiles.active
to instruct API to useintegration
profile -
API is not yet using the databases, but these URLs are consistent with what will be encountered when deployed to Heroku.
-
if only the environment variable name is supplied, it’s value will not be defined here and the value from external sources will be passed at runtime
-
288.4. Build/Download Images
We can trigger the build or download of necessary images using the docker-compose build
command or simply by starting api
service the first time.
$ docker-compose build
postgres uses an image, skipping
mongo uses an image, skipping
Building api
[+] Building 0.2s (13/13) FINISHED
 => => naming to docker.io/library/docker-hello-example:layered
...
After the first start, a re-build is only performed using the build command or when the --build option is supplied.
288.5. Default Port Assignments
If we start the services …
$ export API_PORT=1234 && docker-compose up -d (1)
Creating network "docker-hello-example_default" with the default driver
Creating docker-hello-example_mongo_1    ... done
Creating docker-hello-example_postgres_1 ... done
Creating docker-hello-example_api_1      ... done
1 | up starts service and -d runs the container in the background as a daemon |
You will notice that no ports were assigned to the unassigned mongo
and postgres
services.
However, the internal port# shown in the output is still available to the other hosts within that Docker network.
If we don’t need mongo
or postgres
accessible to the host’s network — we are good.
The api
service was assigned a variable (value 1234
) port# — which is accessible to the host’s network.
$ docker-compose ps
             Name                 State                   Ports
----------------------------------------------------------------------------------
docker-hello-example_api_1        Up      0.0.0.0:1234->8080/tcp,:::1234->8080/tcp
docker-hello-example_mongo_1      Up      27017/tcp
docker-hello-example_postgres_1   Up      5432/tcp

$ curl http://localhost:1234/api/hello?name=jim
hello, jim
288.6. Compose Override Files
Docker Compose files can be layered from base (shown above) to specialized. The following example shows the previous definitions being extended to include mapped host port# mappings. We might add this override in the development environment to make it easy to access the service ports on the host’s local network using well-known port numbers.
version: '3.8'
services:
mongo:
ports:
- "27017:27017"
postgres:
ports:
- "5432:5432"
Notice how the container port# is now mapped according to how the override file has specified.
$ unset API_PORT
$ docker-compose down
$ docker-compose up -d
$ docker-compose ps
Name State Ports
--------------------------------------------------------------------------------------
docker-hello-example_api_1 Up 0.0.0.0:8080->8080/tcp,:::8080->8080/tcp
docker-hello-example_mongo_1 Up 0.0.0.0:27017->27017/tcp,:::27017->27017/tcp
docker-hello-example_postgres_1 Up 0.0.0.0:5432->5432/tcp,:::5432->5432/tcp
Override Limitations May Cause Compose File Refactoring
There is a limit to what you can override versus augment. Single values can replace single
values. However, lists of values can only contribute to a larger list. That means we cannot
create a base file with ports mapped and then a build system override with the port mappings
taken away.
|
288.7. Compose Override File Naming
Docker Compose looks for a specially named file of docker-compose.override.yml
in the
local directory next to the local docker-compose.yml
file.
$ ls docker-compose.*
docker-compose.override.yml docker-compose.yml
$ docker-compose up (1)
1 | Docker Compose automatically applies overrides from docker-compose.override.yml in this case |
288.8. Multiple Compose Files
Docker Compose will accept a series of explicit -f file
specifications that are processed from
left to right. This allows you to name your own override files.
$ docker-compose -f docker-compose.yml -f development.yml up (1)
$ docker-compose -f docker-compose.yml -f integration.yml up
$ docker-compose -f docker-compose.yml -f production.yml up
1 | starting network in foreground with two configuration files, with the left-most file being specialized by the right-most file |
288.9. Environment Files
Docker Compose will look for variables to be defined in the following locations in the following order:
-
as an environment variable
-
in an environment file
-
when the variable is named and set to a value in the Compose file
Docker Compose will use .env
as its default environment file. A file like this
would normally not be checked into CM since it might have real credentials, etc.
$ cat .gitignore
...
.env
API_PORT=9090
You can also explicitly name an environment file to use. The following is
explicitly applying the alt-env
environment file — thus bypassing the
.env
file.
$ cat alt-env
API_PORT=9999
$ docker-compose --env-file alt-env up -d (1)
$ docker ps
IMAGE PORTS NAMES
dockercompose-votes-api:latest 0.0.0.0:9999->8080/tcp
...
1 | starting network in background with an alternate environment file mapping API port to 9999 |
289. Docker Compose Commands
289.1. Build Source Images
With the docker-compose.yml
file defined — we can use that to control the build
of our source images. Notice in the example below that it is building the same
image we built in the previous lecture.
$ docker-compose build
postgres uses an image, skipping
mongo uses an image, skipping
Building api
[+] Building 0.2s (13/13) FINISHED
=> => naming to docker.io/library/docker-hello-example:layered
289.2. Start Services in Foreground
We can start all the services in the foreground using the up
command.
The command will block and continually tail the output of each container.
$ docker-compose up
docker-hello-example_mongo_1 is up-to-date
docker-hello-example_postgres_1 is up-to-date
Recreating docker-hello-example_api_1 ... done
Attaching to docker-hello-example_mongo_1, docker-hello-example_postgres_1, docker-hello-example_api_1
We can trigger a new build with the --build
option. If there is no image present,
a build will be triggered automatically but will not be automatically reissued on
subsequent commands without supplying the --build
option.
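For example, the following forces a rebuild of the api image before starting the network.
$ docker-compose up -d --build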
289.3. Project Name
Docker Compose names all of our running services using a project name prefix. The default
project name is the parent directory name. Notice below how the parent directory name
docker-hello-example
was used in each of the running service names.
$ pwd
.../svc-container/docker-hello-example
$ docker-compose up
docker-hello-example_mongo_1 is up-to-date
docker-hello-example_postgres_1 is up-to-date
Recreating docker-hello-example_api_1 ... done
We can explicitly set the project name using the -p
option. This can be helpful
if the parent directory happens to be something generic — like target
or src/test/resources
.
$ docker-compose -p foo up (1)
Creating network "foo_default" with the default driver
Creating foo_postgres_1 ... done (2)
Creating foo_mongo_1    ... done
Creating foo_api_1      ... done
Attaching to foo_postgres_1, foo_mongo_1, foo_api_1
1 | manually setting project name to foo |
2 | network and services all have prefix of foo |
289.4. Start Services in Background
We can start the processes in the background by adding the -d
option.
$ docker-compose up -d
Creating network "docker-hello-example_default" with the default driver
Creating docker-hello-example_postgres_1 ... done
Creating docker-hello-example_mongo_1    ... done
Creating docker-hello-example_api_1      ... done
$ (1)
1 | -d option starts all services in the background and returns us to our shell prompt |
289.5. Access Service Logs
With the services running in the background, we can access the logs using the docker-compose logs
command.
$ docker-compose logs api (1)
$ docker-compose logs -f api mongo (2)
$ docker-compose logs --tail 10 (3)
1 | returns all logs for the api service |
2 | tails the current logs for the api and mongo services. |
3 | returns the latest 10 messages in each log |
289.6. Stop Running Services
If the services were started in the foreground, we can simply stop them with the <ctl>+C
command. If they were started in the background or in a separate shell, we can stop them
by executing the down
command in the docker-compose.yml
directory.
$ docker-compose down
Stopping docker-hello-example_api_1 ... done
Stopping docker-hello-example_mongo_1 ... done
Stopping docker-hello-example_postgres_1 ... done
Removing docker-hello-example_api_1 ... done
Removing docker-hello-example_mongo_1 ... done
Removing docker-hello-example_postgres_1 ... done
Removing network docker-hello-example_default
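Named volumes (none are defined in this example) survive a down. Adding the -v option removes them — along with anonymous volumes — for a truly clean slate.
$ docker-compose down -v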
290. Docker Cleanup
Docker Compose will mostly clean up after itself. The only exceptions are the older versions of the API image and the builder images that went into creating the final API images. Using my example settings, these all end up being named and tagged as <none> in the images repository.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-hello-example layered 9c45ff5ac1cf 17 hours ago 316MB
registry.heroku.com/ejava-docker/web latest 9c45ff5ac1cf 17 hours ago 316MB
docker-hello-example execjar 669de355e620 46 hours ago 315MB
dockercompose-votes-api latest da94f637c3f4 5 days ago 340MB
<none> <none> d64b4b57e27d 5 days ago 397MB
<none> <none> c5aa926e7423 7 days ago 340MB
<none> <none> 87e7aabb6049 7 days ago 397MB
<none> <none> 478ea5b821b5 10 days ago 340MB
<none> <none> e1a5add0b963 10 days ago 397MB
<none> <none> 4e68464bb63b 11 days ago 340MB
<none> <none> b09b4a95a686 11 days ago 397MB
...
<none> <none> ee27d8f79886 4 months ago 396MB
adoptopenjdk 14-jre-hotspot 157bb71cd724 5 months ago 283MB
mongo 4.4.0-bionic 409c3f937574 12 months ago 493MB
postgres 12.3-alpine 17150f4321a3 14 months ago 157MB
<none> <none> b08caee4cd1b 41 years ago 279MB
docker-hello-example 6.0.1-SNAPSHOT a855dabfe552 41 years ago 279MB
Docker Images are Actually Smaller than Provided SIZE
Even though Docker displays each of these images as >300MB, they may share some base layers and are — by themselves — much smaller. The value presented is the space taken up if all other images are removed or if this image were exported to its own TAR file.
|
290.1. Docker Image Prune
The following command will clear out any docker images that are not named/tagged and not part of another image.
$ docker image prune
WARNING! This will remove all dangling images.
Are you sure you want to continue? [y/N] y
Deleted Images:
deleted: sha256:ebc8dcf8cec15db809f4389efce84afc1f49b33cd77cfe19066a1da35f4e1b34
...
deleted: sha256:e4af263912d468386f3a46538745bfe1d66d698136c33e5d5f773e35d7f05d48
Total reclaimed space: 664.8MB
290.2. Docker System Prune
The following command performs the same type of cleanup as the image prune command and performs additional cleanup on many other Docker areas deemed to be "trash".
$ docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N] y
Deleted Networks:
testcontainers-votes-spock-it_default
Deleted Images:
deleted: sha256:e035b45628fe431901b2b84e2b80ae06f5603d5f531a03ae6abd044768eec6cf
...
deleted: sha256:c7560d6b795df126ac2ea532a0cc2bad92045e73d1a151c2369345f9cd0a285f
Total reclaimed space: 443.3MB
290.3. Image Repository State After Pruning
After pruning the images — we have just the named/tagged image(s).
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-hello-example layered 9c45ff5ac1cf 17 hours ago 316MB
registry.heroku.com/ejava-docker/web latest 9c45ff5ac1cf 17 hours ago 316MB
docker-hello-example execjar 669de355e620 46 hours ago 315MB
mongo 4.4.0-bionic 409c3f937574 12 months ago 493MB
postgres 12.3-alpine 17150f4321a3 14 months ago 157MB
docker-hello-example 6.0.1-SNAPSHOT a855dabfe552 41 years ago 279MB
291. Summary
In this module we learned:
-
the purpose of Docker Compose and how it is used to define a network of services operating within a virtualized Docker environment
-
to create a Docker Compose file that defines a network of services and their dependencies
-
to custom configure a Docker Compose network for different uses
-
perform Docker Compose lifecycle commands
-
execute ad-hoc commands inside running images
Why We Covered Docker and Docker Compose
The Docker and Docker Compose lectures have been included in this course because of the high probability that your future deployment environments for your Web applications will involve them — and to provide a more capable and easy-to-use environment in which to learn, develop, and debug.
|
Where are You?
This lecture leaves you at a point where your Web application and database instances are alive but not yet communicating.
The URLs/URIs shown in this example are consistent with what you will encounter in Heroku when deploying.
However, we have much to do before then.
|
Where are You Going?
In the following series of lectures we will dive into the persistence tier, do some local development with the resources we have just setup, and then return to this topic once we are ready to re-deploy with a database-ready Web application.
|
Assignment 4: Deployments
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
This assignment contains two options (4a-App Deploy and 4b-Docker Deploy). Completing both is not a requirement. You are to implement one or the other. If you attempt both — please be explicit as to which one you have selected.
Both options will result in a deployment to Heroku and an IT test against that instance. You must deploy the application using your well-known-application name and leave it deployed during the grading period. The application will not be deployed during a build in the grading environment.
Because of the choice of deployments, the various paths that can be taken within each option, and the fact that this assignment primarily deploys what you have already created — there is no additional support or starter modules supplied for this assignment. Everything you need should be supplied by
-
your assignment 3 solution,
-
the IT test for assignment 3,
-
the deployment details covered in the Heroku and Docker lectures
-
the docker-hello-example module
Include Details Relevant to a Single Deployment Solution
Please make every attempt to follow one solution path and turn in only those details required to implement either the Spring Boot JAR or Docker deployment.
|
292. Assignment 4a: Application Deployment Option
292.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of deploying a Spring Boot executable JAR to Heroku. You will:
-
create a new Heroku application with a choice of names
-
deploy a Spring Boot application to Heroku using the Heroku Maven Plugin or Git commands
-
interact with your developed application on the Internet
292.2. Overview
In this portion of the assignment you will be deploying your assignment 3 solution to Heroku as a Spring Boot JAR, making it accessible to the Internet, and be able to update with incremental changes.
292.3. Requirements
-
Create an application name on Heroku. This may be a random name, a provided name, or a random name renamed later to a provided name.
-
Deploy your application as a Spring Boot JAR using the Heroku Maven Plugin. The profile(s) activated should use HTTP — not HTTPS added in the last assignment.
-
Provide a failsafe, IT integration test that demonstrates functionality of the deployed application on Heroku. This can be the same IT test submitted in the previous assignment adjusted to use a remote URL.
-
Turn in a source tree with complete Maven modules that will build the web application. Deployment should not be a default goal in what you turn in.
292.3.1. Grading
Your solution will be evaluated on:
-
create a new Heroku application with a choice of names
-
whether you have provided the URL with application name of your deployed solution
-
-
deploy a Spring Boot application to Heroku using the Heroku Maven Plugin.
-
whether your solution for assignment 3 is now deployed to Heroku and functional after a normal warm-up period
-
-
interact with your developed application on the Internet
-
whether your integration test demonstrates basic application functionality in its deployed state
-
292.3.2. Additional Details
-
Setup your Heroku account and client interface according to the course lecture and referenced Heroku reference pages.
-
Your Heroku deployment and integration test can be integrated for your development purposes, but what you turn in must
-
assume the application is already deployed by default
-
be pre-wired with the remote Heroku URL to your application
-
be able to automatically run your IT test as part of the Maven module build.
-
293. Assignment 4b: Docker Deployment Option
293.1. Docker Image
293.1.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of building a Docker Image. You will:
-
build a layered Spring Boot Docker image using a Dockerfile and docker commands
-
make a Heroku-deployable Docker image that accepts environment variable(s)
293.1.2. Overview
In this portion of the assignment you will be building a Docker image with your Spring Boot application organized in layers.
293.1.3. Requirements
-
Create a layered Docker image using a Dockerfile multi-stage build that will extend a JDK image and complete the image with your Spring Boot Application broken into separate layers.
-
The Spring Boot application should use HTTP and not HTTPS within the container.
-
-
Configure the Docker image to map the
server.port
Web server property to thePORT
environment variable when supplied.-
assign a default value when not supplied
-
-
Turn in a source tree with complete Maven modules that will build the web application.
293.1.4. Grading
Your solution will be evaluated on:
-
build a layered Spring Boot Docker image using a Dockerfile and docker commands
-
whether you have a multi-stage Dockerfile
-
whether the Dockerfile successfully builds a layered version of your application using standard docker commands
-
-
make a Heroku-deployable Docker image that accepts environment variable(s)
-
whether you successfully map an optional
PORT
environment variable to theserver.port
property for the Web server.
-
293.1.5. Additional Details
-
You may optionally choose to build the Dockerfile using the Spotify Dockerfile Maven Plugin
293.2. Heroku Docker Deploy
293.2.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of provisioning a site and deploying a Docker image to Heroku. You will:
-
deploy a Docker image to Heroku
293.2.2. Overview
In this portion of the assignment you will be deploying your assignment 3 solution to Heroku as a Docker image.
293.2.3. Requirements
-
Create an application name on Heroku. This may be a random name, a provided name, or a random name renamed later to a provided name.
-
Deploy your application as a Docker image using the Heroku CLI or other means. The profile(s) activated should use HTTP — not HTTPS added in the last assignment.
-
Provide an integration test that demonstrates functionality of the deployed application on Heroku. This can be the same tests submitted in the previous assignment adjusted to use a remote URL.
-
Turn in a source tree with complete Maven modules that will build the web application. Deployment should not be a default goal in what you turn in.
293.2.4. Grading
Your solution will be evaluated on:
-
deploy a Docker image to Heroku
-
whether you have provided the URL with application name of your deployed solution
-
whether your solution for assignment 3 is now deployed to Heroku, within a Docker image, and functional after a normal warm-up period
-
whether your integration test demonstrates basic application functionality in its deployed state
-
293.2.5. Additional Details
-
Setup your Heroku account and client interface according to the course lecture and referenced Heroku reference pages.
-
Your Heroku deployment and integration test can be integrated for your development purposes, but what you turn in must
-
assume the application is already deployed by default
-
be pre-wired with the remote Heroku URL to your application
-
be able to automatically run your IT test as part of the Maven module build.
-
RDBMS
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
294. Introduction
This lecture will introduce working with relational databases with Spring Boot. It includes the creation and migration of schema, SQL commands, and low-level application interaction with JDBC.
294.1. Goals
The student will learn:
-
to identify key parts of a RDBMS schema
-
to instantiate and migrate a database schema
-
to automate database schema migration
-
to interact with database tables and rows using SQL
-
to identify key aspects of Java Database Connectivity (JDBC) API
294.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
define a database schema that maps a single class to a single table
-
implement a primary key for each row of a table
-
define constraints for rows in a table
-
define an index for a table
-
automate database schema migration with the Flyway tool
-
manipulate table rows using SQL commands
-
identify key aspects of a JDBC call
295. Schema Concepts
Relational databases are based on a set of explicitly defined tables, columns, constraints, sequences, and indexes.
The overall structure of these definitions is called the schema.
Our first example will be a single table with a few columns.
295.1. RDBMS Tables/Columns
A table is identified by a name and contains a flat set of fields called columns.
It is common for the table name to have an optional scoping prefix in the event that the database is shared (e.g., during testing or a minimal deployment).
In this example, the song
table is prefixed by a reposongs_
name that identifies which course example this table belongs to.
Table "public.reposongs_song" (1)
Column |
----------+
id | (2)
title | (3)
artist |
released |
1 | table named reposongs_song , part of the reposongs schema |
2 | column named id |
3 | column named title |
295.2. Column Data
Individual tables represent a specific type of object and their columns hold the data.
Each row of the song
table will always have an id
, title
, artist
, and released
column.
id | title | artist | released
----+-----------------------------+------------------------+------------
1 | Noli Me Tangere | Orbital | 2002-07-06
2 | Moab Is My Washpot | Led Zeppelin | 2005-03-26
3 | Arms and the Man | Parliament Funkadelic | 2019-03-11
295.3. Column Types
Each column is assigned a type that constrains the type and size of value they can hold.
Table "public.reposongs_song"
Column | Type |
----------+------------------------+
id | integer | (1)
title | character varying(255) | (2)
artist | character varying(255) |
released | date | (3)
1 | id column has type integer |
2 | title column has type varchar that is less than or equal to 255 characters |
3 | released column has type date |
295.4. Example Column Types
The following lists several common example column data types. A more complete list of column types can be found on the w3schools web site. Some column types can be vendor-specific.
Category | Example Type |
---|---|
Character Data | CHAR(n), VARCHAR(n), TEXT |
Boolean/Numeric Data | BOOLEAN, SMALLINT, INTEGER, BIGINT, DECIMAL(p,s) |
Temporal Data | DATE, TIME, TIMESTAMP |
Character field maximum size is vendor-specific
The maximum size of a char/varchar column is vendor-specific, ranging from 4000 characters to much larger values. |
295.5. Constraints
Column values are constrained by their defined type and can be additionally constrained to be required (not null
), unique (e.g., primary key), a valid reference to an existing row (foreign key), and various other constraints that will be part of the total schema definition.
The following example shows a required column and a unique primary key constraint.
postgres=# \d reposongs_song
Table "public.reposongs_song"
Column | Type | Nullable |
----------+------------------------+----------+
id | integer | not null |(1)
title | character varying(255) | |
artist | character varying(255) | |
released | date | |
Indexes:
"song_pk" PRIMARY KEY, btree (id) (2)
1 | column id is required |
2 | column id constrained to hold a unique (primary) key for each row |
295.6. Primary Key
A primary key is used to uniquely identify a specific row within a table and can also be the target of incoming references (foreign keys). There are two origins of a primary key: natural and surrogate. Natural primary keys are derived directly from the business properties of the object. Surrogate primary keys are externally generated and added to the business properties.
The following identifies the two primary key origins and lists a few advantages and disadvantages.
Primary Key Origins | Natural PKs | Surrogate PKs |
---|---|---|
Description | derived directly from business properties of object | externally generated and added to object |
Example | business field(s) (e.g., title + artist) | generated value (e.g., sequence number, UUID) |
Advantages | no separate generation mechanism required; values are meaningful to the business | simple, stable, and uniform across tables |
Disadvantages | business values can change over time; may require compound keys | requires a generation mechanism; value has no business meaning |
For this example, I am using a surrogate primary key that could have been based on either a UUID or sequence number.
295.7. UUID
A UUID is a globally unique 128 bit value written in hexadecimal, broken up into five groups using dashes, resulting in a 36 character string.
$ uuidgen | awk '{print tolower($0)}'
594075a4-5578-459f-9091-e7734d4f58ce
There are different versions of the algorithm, but each targets the same structure and a negligible chance of duplication. [56] This provides not only a unique value for the table row, but also a unique value across all tables, services, and domains.
The following lists a few advantages and disadvantages for using UUIDs as a primary key.
UUID Advantages | UUID Disadvantages |
---|---|
unique across all tables, databases, and services; can be generated by the client without a database call | larger to store and index (128 bits/36 characters); values are not naturally ordered; harder to read and type |
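For reference, a UUID can also be generated directly in Java using the standard library:
import java.util.UUID;
...
String id = UUID.randomUUID().toString(); //random (version 4) UUID, e.g., "594075a4-5578-459f-9091-e7734d4f58ce"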
295.8. Database Sequence
A database sequence is a numeric value guaranteed to be unique by the database. Support for sequences and the syntax used to work with them varies per database. The following shows an example of creating, incrementing, and dropping a sequence in postgres.
postgres=# create sequence seq_a start 1 increment 1; (1)
CREATE SEQUENCE
postgres=# select nextval('seq_a'); (2)
nextval
---------
1
(1 row)
postgres=# select nextval('seq_a');
nextval
---------
2
(1 row)
postgres=# drop sequence seq_a;
DROP SEQUENCE
1 | can define starting point and increment for sequence |
2 | obtain next value of sequence using a database query |
Database Sequences do not dictate how the unique value is used
The caller can use the value directly as the primary key for one or more tables, or for anything else.
The caller may also use the returned value to self-generate IDs on its own (e.g., a page/window of IDs).
That is where the increment option (next section) comes in.
295.8.1. Database Sequence with Increment
We can use the increment option to reduce the number of database calls needed to generate primary key values, giving the caller the ability to self-generate values within an increment window.
postgres=# create sequence seq_b start 1 increment 100; (1)
CREATE SEQUENCE
postgres=# select nextval('seq_b');
nextval
---------
1 (1)
(1 row)
postgres=# select nextval('seq_b');
nextval
---------
101 (1)
(1 row)
1 | increment leaves a window of values that can be self-generated by caller |
The database client calls nextval whenever it starts or runs out of a window of IDs. This can cause gaps in the sequence of IDs.
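The following is a minimal sketch (hypothetical, not part of the course examples) of how a client might self-generate IDs within the allocated window, calling nextval only when the window is exhausted:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class IdWindow {
    private static final int WINDOW_SIZE = 100; //must match the sequence increment
    private long next = -1;
    private long end = -1;

    public synchronized long nextId(Connection conn) throws SQLException {
        if (next < 0 || next >= end) { //window exhausted, fetch a new base value
            try (PreparedStatement stmt = conn.prepareStatement("select nextval('seq_b')");
                 ResultSet rs = stmt.executeQuery()) {
                rs.next();
                next = rs.getLong(1);
                end = next + WINDOW_SIZE;
            }
        }
        return next++; //IDs base..base+99 are handed out locally
    }
}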
296. Example POJO
We will be using an example Song class to demonstrate some database schema and interaction concepts.
Initially, I will only show the POJO portions of the class required to implement a business object and manually map this to the database.
Later, I will add some JPA mapping constructs to automate the database mapping.
The class is a read-only value class with only constructors and getters. We cannot use the Lombok @Value annotation because JPA (part of a follow-on example) will require us to define a no-argument constructor, and attributes cannot be final.
package info.ejava.examples.db.repo.jpa.songs.bo;
...
@Getter (1)
@ToString
@Builder
@AllArgsConstructor
@NoArgsConstructor
public class Song {
private int id; (2)
private String title;
private String artist;
private LocalDate released;
}
1 | each property will have a getter method, but the only way to set values is through the constructor/builder |
2 | a surrogate value will be used as the primary key |
POJOs can be read/write
There is no underlying requirement to use a read-only POJO with JPA or any other mapping.
However, doing so is more consistent with DDD read-only entity concepts, where changes are made through explicit save/update calls to the repository versus subtle side-effects of calling an entity setter.
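As a quick usage illustration, a new instance is populated through the Lombok-generated builder (example values taken from later sections):
import java.time.LocalDate;
...
Song song = Song.builder()
        .id(1)
        .title("Sledgehammer")
        .artist("Peter Gabriel")
        .released(LocalDate.of(1986, 5, 18))
        .build();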
297. Schema
To map this class to the database, we will need the following constructs:
-
a table
-
a sequence to generate unique values for primary keys
-
an integer column to hold id
-
2 varchar columns to hold title and artist
-
a date column to hold released
These constructs are defined by the schema, which is instantiated using specific commands.
Most core schema creation commands are vendor neutral.
Some schema creation commands (e.g., IF EXISTS) and options are vendor-specific.
297.1. Schema Creation
Schema can be
-
authored by hand,
-
auto-generated, or
-
a mixture of the two.
We will have the tooling necessary to implement auto-generation once we get to JPA, but we are not there yet. For now, we will start by creating a complete schema definition by hand.
297.2. Example Schema
The following example defines a sequence and a table in our database ready for use with postgres.
drop sequence IF EXISTS hibernate_sequence; (1)
drop table IF EXISTS reposongs_song;
create sequence hibernate_sequence start 1 increment 1; (2)
create table reposongs_song (
id int not null,
title varchar(255),
artist varchar(255),
released date,
constraint song_pk primary key (id)
);
comment on table reposongs_song is 'song database'; (3)
comment on column reposongs_song.id is 'song primary key';
comment on column reposongs_song.title is 'official song name';
comment on column reposongs_song.artist is 'who recorded song';
comment on column reposongs_song.released is 'date song released';
create index idx_song_title on reposongs_song(title);
1 | remove any existing residue |
2 | create new DB table(s) and sequence |
3 | add descriptive comments |
298. Schema Command Line Population
To instantiate the schema, we have the option to use the command line interface (CLI).
The following example connects to a database running within docker-compose.
The psql CLI is executed on the same machine as the database, thus saving us the requirement of supplying the password. The contents of the schema file are supplied via stdin.
$ docker-compose up -d postgres
Creating ejava_postgres_1 ... done
$ docker-compose exec -T postgres psql -U postgres \ (1) (2)
< .../src/main/resources/db/migration/V1.0.0_0__initial_schema.sql (3)
DROP SEQUENCE
DROP TABLE
NOTICE: sequence "hibernate_sequence" does not exist, skipping
NOTICE: table "reposongs_song" does not exist, skipping
CREATE SEQUENCE
CREATE TABLE
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
1 | running psql CLI command on postgres image |
2 | -T disables docker-compose pseudo-tty allocation |
3 | reference to schema file on host |
Pass file using stdin
The file is passed in through stdin using the "<" character. Do not miss adding the "<" character.
The following schema commands add an index to the title field.
$ docker-compose exec -T postgres psql -U postgres \
< .../src/main/resources/db/migration/V1.0.0_1__initial_indexes.sql
CREATE INDEX
298.1. Schema Result
We can log back into the database to take a look at the resulting schema.
The following executes the psql CLI in the postgres image.
$ docker-compose exec postgres psql -U postgres
psql (12.3)
Type "help" for help.
postgres=#
298.2. List Tables
The following lists the tables created in the postgres database.
postgres=# \d+
List of relations
Schema | Name | Type | Owner | Size | Description
--------+--------------------+----------+----------+------------+---------------
public | hibernate_sequence | sequence | postgres | 8192 bytes |
public | reposongs_song | table | postgres | 8192 bytes | song database
(2 rows)
298.3. Describe Song Table
postgres=# \d reposongs_song
Table "public.reposongs_song"
Column | Type | Collation | Nullable | Default
----------+------------------------+-----------+----------+---------
id | integer | | not null |
title | character varying(255) | | |
artist | character varying(255) | | |
released | date | | |
Indexes:
"song_pk" PRIMARY KEY, btree (id)
"idx_song_title" btree (title)
299. RDBMS Project
Although it is common to execute schema commands interactively during initial development, sooner or later they should end up in source file(s) that document the baseline schema and automate reaching that baseline state. Spring Boot provides direct support for automating schema migration, whether for test environments or actual production migration. This automation is critical to modern dynamic deployment environments. Let's begin filling in some project-level details of our example.
299.1. RDBMS Project Dependencies
To get our project prepared to communicate with the database, we are going to need a RDBMS-based spring-data starter and at least one database dependency.
The following dependency example readies our project for JPA (a layer well above RDBMS) and to be able to use either the postgres or h2 database.
-
h2 is an easy and efficient in-memory database choice on which to base unit testing. Other in-memory choices include the HSQLDB and Derby databases.
-
postgres is one of many choices we could use for a production-ready database
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId> (1)
</dependency>
(2)
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>runtime</scope>
</dependency>
<!-- schema management --> (3)
<dependency>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-core</artifactId>
<scope>runtime</scope>
</dependency>
1 | brings in all dependencies required to access database using JPA (including APIs and Hibernate implementation) |
2 | defines two database clients we have the option of using — h2 offers an in-memory server |
3 | brings in a schema management tool |
299.2. RDBMS Access Objects
The JPA starter takes care of declaring a few key @Bean instances that can be injected into components.
-
javax.sql.DataSource is part of the standard JDBC API — a very mature and well-supported standard
-
javax.persistence.EntityManager is part of the standard JPA API — a layer above JDBC and also a well-supported standard.
@Autowired
private javax.sql.DataSource dataSource; (1)
@Autowired
private javax.persistence.EntityManager entityManager; (2)
1 | DataSource defines a starting point to interface to database using JDBC |
2 | EntityManager defines a starting point for JPA interaction with the database |
299.3. RDBMS Connection Properties
Spring Boot will make some choices automatically, but since we have defined two database dependencies, we should be explicit.
The default datasource is defined with the spring.datasource prefix.
The URL defines which client to use.
The driver-class-name and dialect can be explicitly defined, but can also be determined internally based on the URL and details reported by the live database.
The following example properties define an in-memory h2 database.
spring.datasource.url=jdbc:h2:mem:songs
#spring.datasource.driver-class-name=org.h2.Driver (1)
1 | Spring Boot can automatically determine driver-class-name from provided URL |
The following example properties define a postgres client. Since this is a server, we have other properties — like username and password — that have to be supplied.
spring.datasource.url=jdbc:postgresql://localhost:5432/postgres
spring.datasource.username=postgres
spring.datasource.password=secret
#spring.datasource.driver-class-name=org.postgresql.Driver
#spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
Driver can be derived from JDBC URL
In a normal Java application, JDBC drivers automatically register with the JDBC DriverManager at startup. When a client requests a connection to a specific JDBC URL, the JDBC DriverManager interrogates each driver, looking for support for the provided JDBC URL.
300. Schema Migration
The schema of a project rarely stays constant and commonly has to migrate from version to version. No matter what can be automated during development, we need to preserve existing data in production and formal integration environments. Spring Boot has a default integration with Flyway in order to provide ordered migration from version to version. Some of its features (e.g., undo) require a commercial license, but its open-source offering implements forward migrations for free.
300.1. Flyway Automated Schema Migration
"Flyway is an open-source database migration tool". [57] It comes pre-integrated with Spring Boot once we add the Maven module dependency. Flyway executes provided SQL migration scripts against the database and maintains the state of the migration for future sessions.
300.2. Flyway Schema Source
By default, schema files [58]
-
are searched for in the classpath:db/migration directory
-
can be overridden using the spring.flyway.locations property
-
locations can be from the classpath and filesystem
-
location expressions support {vendor} placeholder expansion (e.g., spring.flyway.locations=classpath:db/migration/common,classpath:db/migration/{vendor})
-
follow a naming pattern of V<version>__<name/comment>.sql (double underscore between version and name/comment) with version being a period (".") or single underscore ("_") separated set of version digits (e.g., V1.0.0_0, V1_0_0_0)
The following example shows a set of schema migration files located in the default, vendor neutral location.
target/classes/
|-- application-postgres.properties
|-- application.properties
`-- db
`-- migration
|-- V1.0.0_0__initial_schema.sql
|-- V1.0.0_1__initial_indexes.sql
`-- V1.1.0_0__add_artist.sql
300.3. Flyway Automatic Schema Population
Spring Boot will automatically trigger a migration of the files when the application starts.
The following example launches the application and activates the postgres profile with the client setup to communicate with the remote postgres database.
The --db.populate=false property turns off application-level population of the database. That is part of a later example.
java -jar target/jpa-song-example-6.0.1-SNAPSHOT-bootexec.jar --spring.profiles.active=postgres --db.populate=false
300.4. Database Server Profiles
By default, the example application will use an in-memory database.
#h2
spring.datasource.url=jdbc:h2:mem:users
To use the postgres database, we need to fill in the properties within the selected profile.
#postgres
spring.datasource.url=jdbc:postgresql://localhost:5432/postgres
spring.datasource.username=postgres
spring.datasource.password=secret
300.5. Dirty Database Detection
If Flyway detects a non-empty schema and no Flyway table(s), it will immediately throw an exception and the program terminates.
FlywayException: Found non-empty schema(s) "public" but no schema history table.
Use baseline() or set baselineOnMigrate to true to initialize the schema history table.
Keeping this simple, we can drop the existing schema.
postgres=# drop table reposongs_song;
DROP TABLE
postgres=# drop sequence hibernate_sequence;
DROP SEQUENCE
300.6. Flyway Migration
With everything correctly in place, flyway will execute the migration.
The following output is from the console log showing the activity of Flyway migrating the schema of the database.
VersionPrinter : Flyway Community Edition 7.1.1 by Redgate
DatabaseType : Database: jdbc:postgresql://localhost:5432/postgres (PostgreSQL 12.3)
DbValidate : Successfully validated 3 migrations (execution time 00:00.026s)
JdbcTableSchemaHistory : Creating Schema History table "public"."flyway_schema_history" ...
DbMigrate : Current version of schema "public": << Empty Schema >>
DbMigrate : Migrating schema "public" to version "1.0.0.0 - initial schema"
DefaultSqlScriptExecutor : DB: sequence "hibernate_sequence" does not exist, skipping
DefaultSqlScriptExecutor : DB: table "reposongs_song" does not exist, skipping
DbMigrate : Migrating schema "public" to version "1.0.0.1 - initial indexes"
DbMigrate : Migrating schema "public" to version "1.1.0.0 - add artist"
DbMigrate : Successfully applied 3 migrations to schema "public" (execution time 00:00.190s)
301. SQL CRUD Commands
All RDBMS-based interactions are based on Structured Query Language (SQL) and its set of Data Manipulation Language (DML) commands. It will help our understanding of what the higher-level frameworks are providing if we take a look at a few raw examples.
SQL Commands are case-insensitive
All SQL commands are case-insensitive. Using upper or lower case in these examples is a matter of personal/project choice.
301.1. H2 Console Access
When H2 is active, we can enable the H2 console user interface using the following property.
spring.h2.console.enabled=true
Once the application is up and running, the following URL provides access to the H2 console.
http://localhost:8080/h2-console
301.2. Postgres CLI Access
With postgres activated, we can access the postgres server using the psql
CLI.
$ docker-compose exec postgres psql -U postgres
psql (12.3)
Type "help" for help.
postgres=#
301.3. Next Value for Sequence
We created a sequence in our schema to manage unique IDs. We can obtain the next value for that sequence using a SQL command. Unfortunately, the syntax for obtaining the next value of a sequence is vendor-specific. The following shows examples for postgres and h2.
select nextval('hibernate_sequence');
nextval
---------
6
call next value for hibernate_sequence;
---
1
301.4. SQL ROW INSERT
We add data to a table using the INSERT command.
insert into reposongs_song(id, title, artist, released) values (6,'Don''t Worry Be Happy','Bobby McFerrin', '1988-08-05');
Use two single-quote characters to embed a single-quote
The single-quote character is used to delineate a string in SQL commands.
Use two single-quote characters to express a single-quote character within a command (e.g., 'Don''t').
301.5. SQL SELECT
We output row data from the table using the SELECT command.
# select * from reposongs_song;
 id |        title         |     artist     |  released
----+----------------------+----------------+------------
  6 | Don't Worry Be Happy | Bobby McFerrin | 1988-08-05
  7 | Sledgehammer         | Peter Gabriel  | 1986-05-18
The previous example output all columns and rows for the table in a non-deterministic order. We can control the columns output, the column order, and the row order for better management. The next example outputs specific columns and orders rows in ascending order by the released date.
# select released, title, artist from reposongs_song order by released ASC;
  released  |        title         |     artist
------------+----------------------+----------------
 1986-05-18 | Sledgehammer         | Peter Gabriel
 1988-08-05 | Don't Worry Be Happy | Bobby McFerrin
301.6. SQL ROW UPDATE
We can change column data of one or more rows using the UPDATE command.
The following example shows a row with a value that needs to be changed.
# insert into reposongs_song(id, title, artist, released) values (8,'October','Earth Wind and Fire', '1978-11-18');
The following snippet shows updating the title column for the specific row.
# update reposongs_song set title='September' where id=8;
The following snippet uses the SELECT command to show the results of our change.
# select * from reposongs_song where id=8;
 id |   title   |       artist        |  released
----+-----------+---------------------+------------
  8 | September | Earth Wind and Fire | 1978-11-18
301.7. SQL ROW DELETE
We can remove one or more rows with the DELETE command. The following example removes a specific row matching the provided ID.
# delete from reposongs_song where id=8;
DELETE 1
# select * from reposongs_song;
 id |        title         |     artist     |  released
----+----------------------+----------------+------------
  6 | Don't Worry Be Happy | Bobby McFerrin | 1988-08-05
  7 | Sledgehammer         | Peter Gabriel  | 1986-05-18
301.8. RDBMS Transaction
Transactions are an important and integral part of relational databases. The transactionality of a database is expressed in its "ACID" properties [59]:
-
Atomicity - all or nothing; everything in the transaction acts as a single unit
-
Consistency - moves from one valid state to another
-
Isolation - the degree of visibility/independence between concurrent transactions
-
Durability - the changes of a committed transaction persist
By default, most interactions with the database are considered individual transactions with an auto-commit after each one. Auto-commit can be disabled so that multiple commands can be part of the same, single transaction.
301.8.1. BEGIN Transaction Example
The following shows an example of disabling auto-commit in postgres by issuing the BEGIN command. Every change from this point until the COMMIT or ROLLBACK is temporary and isolated from other concurrent transactions (to the level of isolation supported by the database and configured by the connection).
# BEGIN; (1)
BEGIN
# insert into reposongs_song(id, title, artist, released)
values (7,'Sledgehammer','Peter Gabriel', '1986-05-18');
INSERT 0 1
# select * from reposongs_song;
 id |        title         |     artist     |  released
----+----------------------+----------------+------------
  6 | Don't Worry Be Happy | Bobby McFerrin | 1988-08-05
  7 | Sledgehammer         | Peter Gabriel  | 1986-05-18 (2)
(2 rows)
1 | new transaction started when BEGIN command issued |
2 | commands within a transaction will be able to see uncommitted changes from the same transaction |
301.8.2. ROLLBACK Transaction Example
The following shows how the previous command(s) in the current transaction can be rolled back — as if they never executed. The transaction ends once we issue COMMIT or ROLLBACK.
# ROLLBACK; (1)
ROLLBACK
# select * from reposongs_song; (2)
id | title | artist | released
----+----------------------+----------------+------------
6 | Don't Worry Be Happy | Bobby McFerrin | 1988-08-05
1 | transaction ends once COMMIT or ROLLBACK command issued |
2 | commands outside of a transaction will not be able to see uncommitted or rolled-back changes of another transaction |
302. JDBC
With database schema in place and some basic SQL under our belt, it is time to move on to programmatically interacting with the database. Our next stop is a foundational aspect of any Java database interaction: the Java Database Connectivity (JDBC) API. JDBC is a standard Java API for communicating with tabular databases. [60] We hopefully will never need to write this code in our applications, but it eventually gets called by any database mapping layers we may use — therefore it is good to know some of the foundation.
302.1. JDBC DataSource
The javax.sql.DataSource is the starting point for interacting with the database.
Assuming we have Flyway schema migrations working at startup, we already know we have our database properties set up properly.
It is now our chance to inject a DataSource and do some work.
The following snippet shows an example of an injected DataSource. That DataSource is being used to obtain the URL used to connect to the database.
Most JDBC commands declare a checked exception (SQLException) that must be caught or declared to be thrown.
@Component
@RequiredArgsConstructor
public class JdbcSongDAO {
private final javax.sql.DataSource dataSource; (1)
@PostConstruct
public void init() {
try {
String url = dataSource.getConnection().getMetaData().getURL();
... (2)
} catch (SQLException ex) { (3)
throw new IllegalStateException(ex);
}
}
1 | DataSource injected using constructor injection |
2 | DataSource used to obtain a connection and metadata for the URL |
3 | All/most JDBC commands declare throwing a SQLException that must be explicitly handled |
302.2. Obtain Connection and Statement
We obtain a java.sql.Connection from the DataSource and a Statement from the connection.
Connections and statements must be closed when complete, and we can automate that with a Java try-with-resources statement.
A PreparedStatement can be used to assemble the statement up front and be reused in a loop if appropriate.
public void create(Song song) throws SQLException {
String sql = //insert/select/delete/update ... (1)
try(Connection conn = dataSource.getConnection(); (2)
PreparedStatement statement = conn.prepareStatement(sql)) {
//statement.executeUpdate(); (3)
//statement.executeQuery();
}
}
1 | action-specific SQL will be supplied to the PreparedStatement |
2 | try-with-resources construct automatically closes objects declared at this scope |
3 | Statement used to query and modify database |
302.3. JDBC Create Example
public void create(Song song) throws SQLException {
String sql = "insert into REPOSONGS_SONG(id, title, artist, released) values(?,?,?,?)";(1)
try(Connection conn = dataSource.getConnection();
PreparedStatement statement = conn.prepareStatement(sql)) {
int id = nextId(conn); //get next ID from database (2)
log.info("{}, params={}", sql, List.of(id, song.getTitle(), song.getArtist(), song.getReleased()));
statement.setInt(1, id); (3)
statement.setString(2, song.getTitle());
statement.setString(3, song.getArtist());
statement.setDate(4, Date.valueOf(song.getReleased()));
statement.executeUpdate();
setId(song, id); //inject ID into supplied instance (4)
}
}
1 | SQL commands have ? placeholders for parameters |
2 | leveraging a helper method (based on a query statement) to obtain next sequence value |
3 | filling in the individual variables of the SQL template |
4 | leveraging a helper method (based on Java reflection) to set the generated ID of the instance before returning |
Use Variables over String Literal Values
Repeated SQL commands should always use parameters over literal values. Identical SQL templates allow database parsers to recognize a repeated command and leverage earlier query plans. Unique SQL strings require the database to always parse the command and come up with a new plan.
302.4. Set ID Example
The following snippet shows the helper method used earlier to set the ID of an existing instance.
We need the helper because id is declared private. id is declared private and without a setter because it should never change. Persistence is one of the exceptions to "should never change".
private void setId(Song song, int id) {
try {
Field f = Song.class.getDeclaredField("id"); (1)
f.setAccessible(true); (2)
f.set(song, id); (3)
} catch (NoSuchFieldException | IllegalAccessException ex) {
throw new IllegalStateException("unable to set Song.id", ex);
}
}
1 | using Java reflection to locate the id field of the Song class |
2 | must set to accessible since id is private — otherwise an IllegalAccessException |
3 | setting the value of the id field |
302.5. JDBC Select Example
The following snippet shows an example of using a JDBC select. In this case, we are querying the database and representing the returned rows as instances of the Song POJO.
public Song findById(int id) throws SQLException {
String sql = "select title, artist, released from REPOSONGS_SONG where id=?"; (1)
try(Connection conn = dataSource.getConnection();
PreparedStatement statement = conn.prepareStatement(sql)) {
statement.setInt(1, id); (2)
try (ResultSet rs = statement.executeQuery()) { (3)
if (rs.next()) { (4)
Date releaseDate = rs.getDate(3); (5)
return Song.builder()
.id(id)
.title(rs.getString(1))
.artist(rs.getString(2))
.released(releaseDate == null ? null : releaseDate.toLocalDate())
.build();
} else {
throw new NoSuchElementException(String.format("song[%d] not found",id));
}
}
}
}
1 | provide a SQL template with ? placeholders for runtime variables |
2 | fill in variable placeholders |
3 | execute query and process results in one or more ResultSet — which must be closed when complete |
4 | must test ResultSet before obtaining first and each subsequent row |
5 | obtain values from the ResultSet — numerical order is based on SELECT clause |
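For completeness, a JDBC delete follows the same pattern as the insert. The following is a minimal sketch (assumed, not shown in the course example):
public void delete(Song song) throws SQLException {
    String sql = "delete from REPOSONGS_SONG where id=?"; //same ? placeholder pattern as create
    try (Connection conn = dataSource.getConnection();
         PreparedStatement statement = conn.prepareStatement(sql)) {
        statement.setInt(1, song.getId());
        statement.executeUpdate(); //returns the number of rows deleted
    }
}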
302.6. nextId
The nextId() call from create() is another query on the surface, but it is incrementing a sequence at the database level to supply the value.
private int nextId(Connection conn) throws SQLException {
String sql = dialect.getNextvalSql();
try(PreparedStatement call = conn.prepareStatement(sql)) {
try (ResultSet rs = call.executeQuery()) {
if (rs.next()) {
Long id = rs.getLong(1);
return id.intValue();
} else {
throw new IllegalStateException("no sequence result returned from call");
}
}
}
}
302.7. Dialect
Sequence syntax (and support for sequences) is often DB-specific. Therefore, if we are working at the SQL or JDBC level, we need to use the proper dialect for our target database. The following snippet shows two dialect choices for getting the next value of a sequence.
private Dialect dialect;
enum Dialect {
H2("call next value for hibernate_sequence"),
POSTGRES("select nextval('hibernate_sequence')");
private String nextvalSql;
private Dialect(String nextvalSql) {
this.nextvalSql = nextvalSql;
}
String getNextvalSql() { return nextvalSql; }
}
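JDBC also exposes the BEGIN/COMMIT/ROLLBACK transaction control demonstrated earlier. The following is a minimal sketch, assuming the same injected dataSource, that groups multiple statements into a single transaction by disabling auto-commit:
try (Connection conn = dataSource.getConnection()) {
    conn.setAutoCommit(false); //take manual control of the transaction boundary
    try {
        //...execute one or more statements using conn
        conn.commit(); //make all changes permanent as a single unit
    } catch (SQLException ex) {
        conn.rollback(); //discard all changes as a single unit
        throw ex;
    }
}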
303. Summary
In this module we learned:
-
to define a relational database schema for a table, columns, sequence, and index
-
to define a primary key, table constraints, and an index
-
to automate the creation and migration of the database schema
-
to interact with database tables and columns with SQL
-
underlying JDBC API interactions
Java Persistence API (JPA)
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
304. Introduction
This lecture covers implementing object/relational mapping (ORM) to an RDBMS using the Java Persistence API (JPA). It will directly build on the concepts covered in the previous RDBMS lecture and show the productivity gained by using an ORM to map Java classes to the database.
304.1. Goals
The student will learn:
-
to identify the underlying JPA constructs that are the basis of Spring Data JPA Repositories
-
to implement a JPA application with basic CRUD capabilities
-
to understand the significance of transactions when interacting with JPA
304.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
declare project dependencies required for using JPA
-
define a DataSource to interface with the RDBMS
-
define a PersistenceContext containing an @Entity class
-
inject an EntityManager to perform actions on a PersistenceUnit and database
-
map a simple @Entity class to the database using JPA mapping annotations
-
perform basic database CRUD operations on an @Entity
-
define transaction scopes
305. Java Persistence API
The Java Persistence API (JPA) is an object/relational mapping (ORM) layer that sits between the application code and JDBC and is the basis for Spring Data JPA Repositories.
JPA permits the application to primarily interact with plain old Java object (POJO) business classes and a few standard persistence interfaces from JPA to fully manage our objects in the database.
JPA works off convention and is customized by annotations, primarily on the POJO, called an Entity.
JPA offers a rich set of capabilities that would take us many chapters and weeks to cover.
I will just cover the very basic setup and @Entity mapping at this point.
305.1. JPA Standard and Providers
The JPA standard was originally part of Java EE and is now managed by the Eclipse Foundation within Jakarta. It was released just after Java 5, the first version of Java to support annotations. It replaced the older, heavyweight Entity Bean standard — which was ill-suited for the job of realistic O/R mapping — and progressed on a path that was in line with Hibernate. There are several persistence providers of the API:
-
EclipseLink is now the reference implementation
-
Hibernate was one of the original implementations and the default implementation within Spring Boot
-
OpenJPA from the Apache Software Foundation
305.2. JPA Dependencies
Access to JPA requires declaring a dependency on the JPA interface (jakarta.persistence-api) and a provider implementation (e.g., hibernate-core).
This is automatically added to the project by declaring a dependency on the spring-boot-starter-data-jpa module.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
The following shows a subset of the dependencies brought into the application by declaring a dependency on the JPA starter.
+- org.springframework.boot:spring-boot-starter-data-jpa:jar:2.7.0:compile
| +- org.springframework.boot:spring-boot-starter-aop:jar:2.7.0:compile
| +- org.springframework.boot:spring-boot-starter-jdbc:jar:2.7.0:compile
| | \- org.springframework:spring-jdbc:jar:5.3.20:compile
| +- jakarta.transaction:jakarta.transaction-api:jar:1.3.3:compile
| +- jakarta.persistence:jakarta.persistence-api:jar:2.2.3:compile (1)
| +- org.hibernate:hibernate-core:jar:5.6.9.Final:compile (2)
1 | the JPA API module is required to compile standard JPA constructs |
2 | a JPA provider module is required to access extensions and for runtime implementation of the standard JPA constructs |
From these dependencies we have the ability to define and inject various JPA beans.
305.3. Enabling JPA AutoConfiguration
JPA has its own defined bootstrapping constructs that involve settings in persistence.xml and entity mappings in orm.xml configuration files.
These files define the overall persistence unit and include information to connect to the database and any custom entity mapping overrides.
Spring Boot JPA automatically configures a default persistence unit and other related beans when the @EnableJpaRepositories annotation is provided.
@EntityScan is used to identify packages for @Entity classes to include in the persistence unit.
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
@SpringBootApplication
@EnableJpaRepositories (1)
// Class<?>[] basePackageClasses() default {};
// String repositoryImplementationPostfix() default "Impl";
// ...(many more configurations)
@EntityScan (2)
// Class<?>[] basePackageClasses() default {};
public class JPASongsApp {
1 | triggers and configures scanning for JPA Repositories |
2 | triggers and configures scanning for JPA Entities |
By default, this configuration will scan packages below the class annotated with the @EntityScan annotation. We can override that default using the attributes of the @EntityScan annotation.
305.4. Configuring JPA DataSource
Spring Boot provides convenient ways to supply property-based configurations through its standard property handling, making the connection areas of persistence.xml unnecessary (but still usable).
The following examples show how our DataSource definitions from the JDBC/SQL example can be used for JPA as well.
H2 In-Memory Example Properties
spring.datasource.url=jdbc:h2:mem:songs
Postgres Client Example Properties
spring.datasource.url=jdbc:postgresql://localhost:5432/postgres
spring.datasource.username=postgres
spring.datasource.password=secret
305.5. Automatic Schema Generation
JPA provides the capability to automatically generate schema from the Persistence Unit definitions. This can be configured to write to a file to be used to kickstart schema authoring. However, the most convenient use for schema generation is at runtime during development.
Spring Boot will automatically enable runtime schema generation for in-memory database URLs. We can also explicitly enable runtime schema generation using the following hibernate property.
spring.jpa.hibernate.ddl-auto=create
305.6. Schema Generation to File
The JPA provider can be configured to generate schema to a file. This can be used directly by tools like Flyway or simply to kickstart manual schema authoring.
The following configuration snippet instructs the JPA provider to generate drop and create commands into the same drop_create.sql file based on the metadata discovered within the PersistenceContext.
Hibernate has additional features to allow for formatting and line termination specification.
spring.jpa.properties.javax.persistence.schema-generation.scripts.action=drop-and-create
spring.jpa.properties.javax.persistence.schema-generation.create-source=metadata
spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=target/generated-sources/ddl/drop_create.sql
spring.jpa.properties.javax.persistence.schema-generation.scripts.drop-target=target/generated-sources/ddl/drop_create.sql
spring.jpa.properties.hibernate.hbm2ddl.delimiter=; (1)
spring.jpa.properties.hibernate.format_sql=true (2)
1 | adds ";" character to terminate every command — making it SQL script-ready |
2 | adds new lines to make more human-readable |
action can have values of none, create, drop-and-create, and drop [61]
create/drop-source can have values of metadata, script, metadata-then-script, or script-then-metadata.
metadata will come from the class defaults and annotations.
script will come from a location referenced by create/drop-script-source
Generate Schema to Debug Complex Mappings
Generating schema from @Entity class metadata is a good way to debug odd persistence behavior.
Even if normally ignored, the generated schema can identify incorrect and accidental definitions that may cause unwanted behavior.
305.7. Other Useful Properties
It is useful to see database SQL commands coming from the JPA/Hibernate layer during early stages of development or learning. The following properties will print the JPA SQL commands and values that were mapped to the SQL substitution variables.
spring.jpa.show-sql=true (1)
logging.level.org.hibernate.type=trace (2)
1 | prints JPA SQL commands |
2 | prints SQL parameter values |
The following cleaned up output shows the result of the activated debug. We can see the individual SQL commands issued to the database as well as the parameter values used in the call and extracted from the response.
Hibernate: call next value for hibernate_sequence
Hibernate: insert into reposongs_song (artist, released, title, id) values (?, ?, ?, ?)
binding parameter [1] as [VARCHAR] - [Rage Against The Machine]
binding parameter [2] as [DATE] - [2020-05-12]
binding parameter [3] as [VARCHAR] - [Recalled to Life]
binding parameter [4] as [INTEGER] - [1]
305.8. Configuring JPA Entity Scan
Spring Boot JPA will automatically scan for @Entity classes.
We can provide a specification of external packages to scan using the @EntityScan annotation.
The following shows an example of using a String package specification of a root package to scan for @Entity classes.
import org.springframework.boot.autoconfigure.domain.EntityScan;
...
@EntityScan(value={"info.ejava.examples.db.repo.jpa.songs.bo"})
The following example instead uses a Java class to express a package to scan.
We are using a specific @Entity class in this case, but some may define an interface simply to help mark the package and use that instead.
The advantage of using a Java class/interface is that it will work better when refactoring.
import info.ejava.examples.db.repo.jpa.songs.bo.Song;
...
@EntityScan(basePackageClasses = {Song.class})
305.9. JPA Persistence Unit
The JPA Persistence Unit represents the overall definition of a group of Entities and how we interact with the database.
A defined Persistence Unit can be injected into the application using an EntityManagerFactory.
From this injected class, clients can gain access to metadata and initiate a Persistence Context.
import javax.persistence.EntityManagerFactory;
...
@Autowired
private EntityManagerFactory emf;
305.10. JPA Persistence Context
A Persistence Context is a usage instance of a Persistence Unit and is represented by an EntityManager.
An @Entity with the same identity is represented by a single instance within a Persistence Context.
import javax.persistence.EntityManager;
...
@Autowired
private EntityManager em;
Injected EntityManagers reference the same Persistence Context when called within the same thread. That means that a Song loaded by one client with ID=1 will be available to sibling code when using ID=1.
Use/Inject EntityManagers
Normal application code that creates, gets, updates, and deletes entities should inject and use an EntityManager (versus an EntityManagerFactory).
306. JPA Entity
A JPA @Entity is a class that is mapped to the database and primarily represents a row in a table.
The following snippet is the example Song class we have already manually mapped to the REPOSONGS_SONG database table using manually written schema and JDBC/SQL commands in a previous lecture.
To make the class an @Entity, we must:
-
annotate the class with @Entity
-
provide a no-argument constructor
-
identify one or more columns to represent the primary key using the @Id annotation
-
override any convention defaults with further annotations
@javax.persistence.Entity (1)
@Getter
@AllArgsConstructor
@NoArgsConstructor (2)
public class Song {
@javax.persistence.Id (3) (4)
private int id;
@Setter
private String title;
@Setter
private String artist;
@Setter
private java.time.LocalDate released;
}
1 | class must be annotated with @Entity |
2 | class must have a no-argument constructor |
3 | class must have one or more fields designated as the primary key |
4 | annotations can be on the field or property and the choice for @Id determines the default |
Primary Key property is not modifiable
This Java class is not providing a setter for the field mapped to the primary key in the database. The primary key will be generated by the persistence provider at runtime and assigned to the field. The field cannot be modified while the instance is managed by the provider. The all-args constructor can be used to instantiate a new object with a specific primary key.
306.1. JPA @Entity Defaults
By convention and supplied annotations, the class as shown above would:
-
have the entity name "Song" (important when expressing queries; e.g., select s from Song s)
-
be mapped to the SONG table to match the entity name
-
have columns id integer, title varchar, artist varchar, and released date
-
use id as its primary key and manage that using a provider-default mechanism
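For example, it is the entity name, not the table name, that appears in JPAQL queries (a small sketch):
import java.util.List;
...
List<Song> songs = em.createQuery("select s from Song s", Song.class) //entity name "Song"
        .getResultList();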
306.2. JPA Overrides
Many/all of the convention defaults can be customized by further annotations. We commonly need to:
-
supply a table name that matches our intended schema (i.e., select * from REPOSONGS_SONG vs select * from SONG)
-
select which primary key mechanism is appropriate for our use
-
supply column names that match our intended schema
-
identify which properties are optional, part of the initial INSERT, and UPDATE-able
-
supply other parameters useful for schema generation (e.g., String length)
@Entity
@Table(name="REPOSONGS_SONG") (1)
@NoArgsConstructor
...
public class Song {
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE) (2)
@Column(name = "ID") (3)
private int id;
@Column(name="TITLE", length=255, nullable=true, insertable=true, updatable=true)(4)
private String title;
private String artist;
private LocalDate released;
}
1 | overriding the default table name SONG with REPOSONGS_SONG |
2 | overriding the default primary key mechanism with SEQUENCE .
The default sequence name is hibernate_sequence for the Hibernate JPA provider. |
3 | re-asserting the default convention column name ID for the id field |
4 | re-asserting many of the default convention column mappings |
Schema generation properties not used at runtime
Properties like length are primarily used for schema generation and are not enforced by the provider at runtime.
307. Basic JPA CRUD Commands
JPA provides an API for implementing persistence to the database through manipulation of @Entity instances and calls to the EntityManager.
307.1. EntityManager persist()
We create a new object in the database by calling persist() on the EntityManager and passing in an @Entity instance that represents something new.
This will:
-
assign a primary key if configured to do so
-
add the instance to the Persistence Context
-
make the @Entity instance managed from that point forward
The following snippet shows a partial DAO implementation using JPA.
@Component
@RequiredArgsConstructor
public class JpaSongDAO {
private final EntityManager em;
public void create(Song song) {
em.persist(song);
}
...
A database INSERT SQL command will be queued to the database as a result of a successful call, and the @Entity instance will be in a managed state.
Hibernate: call next value for hibernate_sequence
Hibernate: insert into reposongs_song (artist, released, title, id) values (?, ?, ?, ?)
In the managed state, any changes to the @Entity will result in a future UPDATE SQL command.
Updates are issued during the next JPA session "flush".
JPA session flushes can be triggered manually or automatically prior to or no later than the next commit.
307.2. EntityManager find() By Identity
JPA supplies a means to get the full @Entity using its primary key.
public Song findById(int id) {
return em.find(Song.class, id);
}
If the instance is not yet loaded into the Persistence Context, SELECT SQL command(s) will be issued to the database to obtain the persisted state.
The following snippet shows the SQL generated by Hibernate to fetch the state from the database to realize the @Entity instance within the JVM.
Hibernate: select
song0_.id as id1_0_0_,
song0_.artist as artist2_0_0_,
song0_.released as released3_0_0_,
song0_.title as title4_0_0_
from reposongs_song song0_
where song0_.id=?
From that point forward, the state will be returned from the Persistence Context without the need to get the state from the database.
307.3. EntityManager query
JPA provides many types of queries
-
JPA Query Language (JPAQL) - a very SQL-like String syntax expressed in terms of @Entity classes and relationship constructs
-
Criteria Language - a type-safe, Java-centric syntax that avoids String parsing and makes dynamic query building more efficient than query string concatenation and parsing
-
Native SQL - the same SQL we would have provided to JDBC
The following snippet shows an example of executing a JPAQL Query.
public boolean existsById(int id) {
return em.createQuery("select count(s) from Song s where s.id=:id",(1)
Number.class) (2)
.setParameter("id", id) (3)
.getSingleResult() (4)
.longValue()==1L; (5)
}
1 | JPAQL String based on @Entity constructs |
2 | query call syntax allows us to define the expected return type |
3 | query variables can be set by name or position |
4 | one (mandatory) or many results can be returned from query |
5 | entity exists if the count of rows matching the PK is 1; otherwise it should be 0 |
The following shows how our JPAQL snippet mapped to the raw SQL issued to the database.
Notice that our Song @Entity reference was mapped to the REPOSONGS_SONG database table.
Hibernate: select
count(song0_.id) as col_0_0_
from reposongs_song song0_
where song0_.id=?
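The Criteria Language alternative listed above is not shown in the course snippet. The following is a minimal sketch of the same existence check expressed with the Criteria API:
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;
...
public boolean existsById(int id) {
    CriteriaBuilder cb = em.getCriteriaBuilder();
    CriteriaQuery<Long> qdef = cb.createQuery(Long.class);
    Root<Song> s = qdef.from(Song.class); //"from Song s"
    qdef.select(cb.count(s)) //"select count(s)"
        .where(cb.equal(s.get("id"), id)); //"where s.id=:id"
    return em.createQuery(qdef).getSingleResult() == 1L;
}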
307.4. EntityManager flush()
Not every change to an @Entity and call to an EntityManager results in an immediate 1:1 call to the database.
Some of these calls manipulate an in-memory cache in the JVM and may get issued in a group of other commands at some point in the future.
We normally want to allow the EntityManager to cache these calls as much as possible.
However, there are times (e.g., prior to making a raw SQL query) where we want to make sure the database has the current state of the cache.
The following snippet shows an example of flushing the contents of the cache after changing the state of a managed @Entity instance.
Song s = ... //obtain a reference to a managed instance
s.setTitle("...");
em.flush(); //optional!!! will eventually happen at some point
Whether it was explicitly issued or triggered internally by the JPA provider, the following snippet shows the resulting UPDATE SQL call to change the state of the database to match the Persistence Context.
Hibernate: update reposongs_song
set artist=?, released=?, title=? (1)
where id=?
1 | all fields designated as updatable=true are included in the UPDATE |
307.5. EntityManager remove()
JPA provides a means to delete an @Entity from the database.
However, we must first have the managed @Entity instance loaded in the Persistence Context to use this capability.
The reason for this is that a JPA delete can optionally involve cascading actions to remove other related entities as well.
The following snippet shows how a managed @Entity instance can be used to initiate the removal from the database.
public void delete(Song song) {
em.remove(song);
}
The following snippet shows how the remove command was mapped to a SQL DELETE command.
Hibernate: delete from reposongs_song where id=?
307.6. EntityManager clear() and detach()
There are two commands that will remove entities from the Persistence Context. They have their purpose, but know that they are rarely used and can be dangerous to call.
-
clear() - will remove all entities
-
detach() - will remove a specific @Entity
I only bring these up because you may come across class examples where I am calling flush() and clear() in the middle of a demonstration.
This is purposely mimicking a fresh Persistence Context within the scope of a single transaction.
em.clear();
em.detach(song);
Calling clear() or detach() will evict all managed entities or a targeted managed @Entity from the Persistence Context — losing any in-progress and future modifications.
In the case of returning redacted @Entities, this may be exactly what you want (you don’t want the redactions to remove data from the database).
Use clear() and detach() with Caution
Calling clear() or detach() discards any pending, unflushed changes to the evicted entities.
308. Transactions
All commands require some type of transaction when interacting with the database. The transaction can be activated and terminated at varying levels of scope, integrating one or more commands into a single transaction.
308.1. Transactions Required for Explicit Changes/Actions
The injected EntityManager is the target of our application calls, and the transaction gets associated with that object.
The following snippet shows the provider throwing a TransactionRequiredException when persist() is called on the injected EntityManager with no active transaction.
@Autowired
private EntityManager em;
...
@Test
void transaction_missing() {
//given - an instance
Song song = mapper.map(dtoFactory.make());
//when - persist is called without a tx, an exception is thrown
em.persist(song); (1)
}
1 | TransactionRequiredException exception thrown |
javax.persistence.TransactionRequiredException: No EntityManager with actual transaction available for current thread - cannot reliably process 'persist' call
308.2. Activating Transactions
Although you will find transaction methods on the EntityManager, these are only meant for individually managed instances created directly from the EntityManagerFactory.
Transactions for an injected EntityManager are managed by the container and triggered by the presence of a @Transactional annotation on a called bean method within the call stack.
This next example annotates the calling @Test method with the @Transactional annotation to cause a transaction to be active for the three (3) contained EntityManager calls.
import org.springframework.transaction.annotation.Transactional;
...
@Test
@Transactional (1)
void transaction_present_in_caller() {
//given - an instance
Song song = mapper.map(dtoFactory.make());
//when - persist called within caller transaction, no exception thrown
em.persist(song); (2)
em.flush(); //force DB interaction (2)
//then
then(em.find(Song.class, song.getId())).isNotNull(); (2)
} (3)
1 | @Transactional triggers an Aspect to activate a transaction for the Persistence Context operating within the current thread |
2 | the same transaction is used on all three (3) EntityManager calls |
3 | the end of the method will trigger the transaction-initiating Aspect to commit (or rollback) the transaction it activated |
308.3. Conceptual Transaction Handling
Logically speaking, the transaction handling done on behalf of @Transactional is similar to the snippet shown below.
However, as complicated as that is — it does not begin to address nested calls.
Also note that a thrown RuntimeException triggers a rollback and anything else triggers a commit.
tx = em.getTransaction();
try {
tx.begin();
//call code (2)
} catch (RuntimeException ex) {
tx.setRollbackOnly(); (1)
} catch (Exception ex) { (2)
} finally {
if (tx.getRollbackOnly()) {
tx.rollback();
} else {
tx.commit();
}
}
1 | RuntimeException , by default, triggers a rollback |
2 | Normal returns and checked exceptions, by default, trigger a commit |
308.4. Activating Transactions in @Components
We can alternatively push the demarcation of the transaction boundary down to the @Component methods.
The snippet below shows a DAO @Component that designates each of its methods as @Transactional.
This has the benefit of knowing that each of the calls to EntityManager methods will have the required transaction in place — whether it is the right one is a later topic.
@Component
@RequiredArgsConstructor
@Transactional (1)
public class JpaSongDAO {
private final EntityManager em;
public void create(Song song) {
em.persist(song);
}
public Song findById(int id) {
return em.find(Song.class, id);
}
public void delete(Song song) {
em.remove(song);
}
1 | each method will be assigned a transaction |
308.5. Calling @Transactional @Component Methods
The following example shows the calling code invoking methods of the DAO @Component in independent transactions.
The code works because there really is no dependency between the INSERT and SELECT to be part of the same transaction, as long as the INSERT commits before the SELECT transaction starts.
@Test
void transaction_present_in_component() {
//given - an instance
Song song = mapper.map(dtoFactory.make());
//when - persist called within component transaction, no exception thrown
jpaDao.create(song); (1)
//then
then(jpaDao.findById(song.getId())).isNotNull(); (2)
}
1 | INSERT is completed in separate transaction |
2 | SELECT completes in follow-on transaction |
308.6. @Transactional @Component Methods SQL
The following shows the SQL triggered by the snippet above with the different transactions annotated.
(1)
Hibernate: insert into reposongs_song (artist, released, title, id) values (?, ?, ?, ?)
(2)
Hibernate: select
song0_.id as id1_0_0_,
song0_.artist as artist2_0_0_,
song0_.released as released3_0_0_,
song0_.title as title4_0_0_
from reposongs_song song0_
where song0_.id=?
1 | transaction 1 |
2 | transaction 2 |
308.7. Unmanaged @Entity
However, we do not always get that lucky — for individual, sequential transactions to play well together. JPA entities follow the notion of managed and unmanaged/detached state.
-
Managed entities are actively being tracked by a Persistence Context
-
Unmanaged/Detached entities have either never been or no longer associated with a Persistence Context
The following snippet shows an example of where a follow-on method fails because the EntityManager requires that the @Entity be currently managed. However, the end of the create() transaction made it detached.
@Test
void transaction_common_needed() {
//given a persisted instance
Song song = mapper.map(dtoFactory.make());
jpaDao.create(song); //song is detached at this point (1)
//when - removing detached entity we get an exception
jpaDao.delete(song); (2)
1 | the first transaction starts and ends at this call |
2 | the EntityManager.remove operates in a separate transaction with a detached @Entity from the previous transaction |
The following text shows the error message thrown by the EntityManager.remove call when a detached entity is passed in to be deleted.
java.lang.IllegalArgumentException: Removing a detached instance info.ejava.examples.db.repo.jpa.songs.bo.Song#1
308.8. Shared Transaction
We can get things to work better if we encapsulate methods behind a @Service method defining good transaction boundaries.
Lacking a more robust application, the snippet below adds the @Transactional to the @Test method to have it shared by the three (3) DAO @Component calls — making the @Transactional annotations on the DAO meaningless.
@Test
@Transactional (1)
void transaction_common_present() {
//given a persisted instance
Song song = mapper.map(dtoFactory.make());
jpaDao.create(song); //song is detached at this point (2)
//when - removing managed entity, it works
jpaDao.delete(song); (2)
//then
then(jpaDao.findById(song.getId())).isNull(); (2)
}
1 | @Transactional at the calling method level is shared across all lower-level calls |
2 | Each DAO call is executed in the same transaction and the @Entity can still be managed across all calls |
308.9. @Transactional Attributes
There are several attributes that can be set on the @Transactional
annotation.
A few of the more common properties to set include
-
propagation - defaults to REQUIRED, proactively activating a transaction if not already present
-
SUPPORTS - lazily initiates a transaction, but fully supported if already active
-
MANDATORY - error if called without an active transaction
-
REQUIRES_NEW - proactively creates a new transaction separate from the caller’s transaction
-
NOT_SUPPORTED - nothing within the called method will honor transaction semantics
-
NEVER - do not call with an active transaction
-
NESTED - may not be supported, but permits nested transactions to complete before returning to calling transaction
-
isolation - isolation level to assign to the JDBC Connection
-
readOnly - defaults to false, hints to JPA provider that entities can be immediately detached
-
rollback definitions - when to implement non-standard rollback rules
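To illustrate the attribute syntax, the following is a sketch using standard Spring annotations (the class and method names are hypothetical):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AuditService {
    // commits independently of any caller's transaction, at an explicit isolation level
    @Transactional(propagation = Propagation.REQUIRES_NEW, isolation = Isolation.READ_COMMITTED)
    public void recordEvent(String event) {
        //... write the audit row
    }

    // read-only hint allows the JPA provider to skip dirty-checking overhead
    @Transactional(readOnly = true)
    public long countEvents() {
        return 0; //... query only
    }
}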
309. Summary
In this module we learned:
-
to configure a JPA project to include project dependencies and required application properties
-
to define a PersistenceContext and where to scan for @Entity classes
-
requirements for an @Entity class
-
default mapping conventions for @Entity mappings
-
optional mapping annotations for @Entity mappings
-
to perform basic CRUD operations with the database
Spring Data JPA Repository
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
310. Introduction
JDBC/SQL provided a lot of capability to interface with the database, but with a significant amount of code required.
JPA simplified the mapping, but as you observed with the JPA DAO implementation — there was still a modest amount of boilerplate code.
Spring Data JPA Repository leverages the capabilities and power of JPA to map @Entity
classes to the database but also further eliminates much of the boilerplate code remaining with JPA.
310.1. Goals
The student will learn:
-
to manage objects in the database using the Spring Data Repository
-
to leverage different types of built-in repository features
-
to extend the repository with custom features when necessary
310.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
declare a
JpaRepository
for an existing JPA@Entity
-
perform simple CRUD methods using provided repository methods
-
add paging and sorting to query methods
-
implement queries based on POJO examples and configured matchers
-
implement queries based on predicates derived from repository interface methods
-
implement a custom extension of the repository for complex or compound database access
311. Spring Data JPA Repository
Spring Data JPA provides repository support for JPA-based mappings.
[62]
We start off by writing no mapping code — just interfaces associated with our @Entity
and primary key type — and have Spring Data JPA implement the desired code.
The Spring Data JPA interfaces are layered — offering useful tools for interacting with the database.
Our primary @Entity
types will have a repository interface declared that inherit from JpaRepository
and any custom interfaces we optionally define.
312. Spring Data Repository Interfaces
As we go through these interfaces and methods, please remember that all of the method implementations of these interfaces (except for custom) will be provided for us.
Interface | Purpose |
---|---|
Repository<T,ID> | marker interface capturing the @Entity type and primary key type |
CrudRepository<T,ID> | depicts many of the CRUD capabilities we demonstrated with the JPA DAO in the previous JPA lecture |
PagingAndSortingRepository<T,ID> | Spring Data provides some nice end-to-end support for sorting and paging. This interface adds sorting and paging to the findAll() methods |
QueryByExampleExecutor<T> | provides query-by-example methods that use prototype @Entity instances and matchers to form predicates |
JpaRepository<T,ID> | brings together the CRUD, paging/sorting, and query-by-example capabilities |
SongsRepositoryCustom/ SongsRepositoryCustomImpl | we can write our own extensions for complex or compound calls — while taking advantage of an injected EntityManager |
SongsRepository | our repository inherits from the repository hierarchy and adds additional methods that are automatically implemented by Spring Data JPA |
313. SongsRepository
All we need to create a functional repository is an @Entity
class and a primary key type.
From our work to date, we know that our @Entity
is the Song class and the primary key is the primitive int
type.
313.1. Song @Entity
@Entity
@NoArgsConstructor
public class Song {
@Id //be sure this is javax.persistence.Id
private int id;
Use Correct @Id
There are many @Id annotation classes.
Be sure to use the correct one for the technology you are currently mapping.
In this case, use javax.persistence.Id.
|
313.2. SongsRepository
We declare our repository at whatever level of Repository
is appropriate for our use.
It would be common to simply declare it as extending JpaRepository
.
public interface SongsRepository extends JpaRepository<Song, Integer> {}(1) (2)
1 | Song is the repository type |
2 | Integer is used for the primary key type for an int |
Consider Using Non-Primitive Primary Key Types
Although these lecture notes provide ways to mitigate issues with generated primary keys using a primitive data type, you will find that Spring Data JPA works more easily with nullable object types. |
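For example, here is a sketch of the same generated key declared as a nullable Integer, where null unambiguously means "not yet persisted":
@Id @GeneratedValue
private Integer id; // remains null until the provider assigns a value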
Repositories and Dynamic Interface Proxies
Having covered the lectures on Dynamic Interface Proxies — and having seen the amount of boilerplate code that exists for persistence — you should be able to imagine how the repositories could be implemented with no up-front, compile-time knowledge of the @Entity type.
|
314. Configuration
Assuming your repository and entity classes are in a package below the class annotated with @SpringBootApplication
— all that is needed is the @EnableJpaRepositories
to enable the necessary auto-configuration to instantiate the repository.
@SpringBootApplication
@EnableJpaRepositories
public class JPASongsApp {
If, however, your repository or entities are not located in the default packages scanned, their packages can be scanned with configuration options to the @EnableJpaRepositories
and @EntityScan
annotations.
@EnableJpaRepositories(basePackageClasses = {SongsRepository.class}) (1) (2)
@EntityScan(basePackageClasses = {Song.class}) (1) (3)
1 | the Java class provided here is used to identify the base Java package |
2 | where to scan for repository interfaces |
3 | where to scan for @Entity classes |
314.1. Injection
With the repository interface declared and the JPA repository support enabled, we can then successfully inject the repository into our application.
@Autowired
private SongsRepository songsRepo;
315. CrudRepository
Lets start looking at the capability of our repository — starting with the declared methods of the CrudRepository
interface.
public interface CrudRepository<T, ID> extends Repository<T, ID> {
<S extends T> S save(S var1);
<S extends T> Iterable<S> saveAll(Iterable<S> var1);
Optional<T> findById(ID var1);
boolean existsById(ID var1);
Iterable<T> findAll();
Iterable<T> findAllById(Iterable<ID> var1);
long count();
void deleteById(ID var1);
void delete(T var1);
void deleteAll(Iterable<? extends T> var1);
void deleteAll();
}
315.1. CrudRepository save() New
We can use the CrudRepository.save()
method to either create or update our @Entity
instance in the database.
In this specific example, we call save()
with a new object. The JPA provider can tell this is a new object because the generated primary key value is currently unassigned.
An object type has a default value of null in Java.
Our primitive int
type has a default value of 0 in Java.
//given an entity instance
Song song = mapper.map(dtoFactory.make());
assertThat(song.getId()).isZero(); (1)
//when persisting
songsRepo.save(song);
//then entity is persisted
then(song.getId()).isNotZero(); (2)
1 | default value for generated primary key using primitive type interpreted as unassigned |
2 | primary key assigned by provider |
The following shows the SQL that is generated by the JPA provider to add the new object to the database.
call next value for hibernate_sequence
insert into reposongs_song (artist, released, title, id) values (?, ?, ?, ?)
315.2. CrudRepository save() Update Existing
The CrudRepository.save()
method is an "upsert".
-
if the
@Entity
is new, the repository will callEntityManager.persist
as you saw in the previous example -
if the
@Entity
exists, the repository will callEntityManager.merge
to update the database
//given an entity instance
Song song = mapper.map(dtoFactory.make());
songsRepo.save(song);
songsRepo.flush(); //for demo only (1)
Song updatedSong = Song.builder()
.id(song.getId()) (3)
.title("new title")
.artist(song.getArtist())
.released(song.getReleased())
.build(); (2)
//when persisting update
songsRepo.save(updatedSong);
//then entity is persisted
then(songsRepo.findOne(Example.of(updatedSong))).isPresent(); (4)
1 | making sure @Entity has been saved |
2 | a new, unmanaged @Entity instance is created for a fresh update of database |
3 | new, unmanaged @Entity instance has an assigned, non-default primary key value |
4 | object’s new state is found in database |
315.3. CrudRepository save()/Update Resulting SQL
The following snippet shows the SQL executed by the repository/EntityManager during the save()
— where it must first determine if the object exists in the database before calling SQL INSERT
or UPDATE
.
select ... (1)
from reposongs_song song0_
where song0_.id=?
binding parameter [1] as [INTEGER] - [1]
extracted value ([artist2_0_0_] : [VARCHAR]) - [The Beach Boys]
extracted value ([released3_0_0_] : [DATE]) - [2010-06-07]
extracted value ([title4_0_0_] : [VARCHAR]) - [If I Forget Thee Jerusalem]
update reposongs_song set artist=?, released=?, title=? where id=? (2)
binding parameter [1] as [VARCHAR] - [The Beach Boys]
binding parameter [2] as [DATE] - [2010-06-07]
binding parameter [3] as [VARCHAR] - [new title]
binding parameter [4] as [INTEGER] - [1]
1 | EntityManager.merge() performs SELECT to determine if assigned primary key exists and loads that state |
2 | EntityManager.merge() performs UPDATE to modify state of existing @Entity in database |
315.4. New Entity?
We just saw where the same method (save()
) was used to either create or update the object in the database.
This works differently depending on how the repository can determine whether the @Entity
instance passed to it is new or not.
-
for auto-assigned primary keys, the
@Entity
instance is considered new if@Version
(not used in our example) and@Id
are not assigned — as long as the@Id
type is non-primitive. -
for manually-assigned and primitive
@Id
types,@Entity
can implement thePersistable<ID>
interface to assist the repository in knowing when the@Entity
is new.
public interface Persistable<ID> {
@Nullable
ID getId();
boolean isNew();
}
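The following is a sketch of the Persistable approach for a manually assigned key. The entity and its state-tracking field are illustrative, not part of the course examples:
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PostLoad;
import javax.persistence.PostPersist;
import javax.persistence.Transient;
import org.springframework.data.domain.Persistable;

@Entity
public class CatalogEntry implements Persistable<String> {
    @Id
    private String id; // manually assigned, so the repository cannot infer newness from it

    @Transient
    private boolean isNew = true; // in-memory flag only, never persisted

    @Override
    public String getId() { return id; }

    @Override
    public boolean isNew() { return isNew; } // true means the repository calls persist(), not merge()

    @PostPersist @PostLoad
    void markNotNew() { this.isNew = false; } // once stored or loaded, no longer new
}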
315.5. CrudRepository existsById()
Spring Data JPA adds a convenience method that can check whether the @Entity
exists in the database without loading the entire object or writing a custom query.
The following snippet demonstrates how we can check for the existence of a given ID.
//given a persisted entity instance
Song pojoSong = mapper.map(dtoFactory.make());
songsRepo.save(pojoSong);
//when - determining if entity exists
boolean exists = songsRepo.existsById(pojoSong.getId());
//then
then(exists).isTrue();
The following shows the SQL produced from the existsById()
call.
select count(*) as col_0_0_ from reposongs_song song0_ where song0_.id=? (1)
1 | count(*) avoids having to return all column values |
315.6. CrudRepository findById()
If we need the full object, we can always invoke the findById()
method, which should be a thin wrapper above EntityManager.find()
, except that the return type is a Java Optional<T>
versus the @Entity
type (T
).
//when - finding the existing entity
Optional<Song> result = songsRepo.findById(pojoSong.getId());
//then
then(result).isPresent(); (1)
1 | findById() always returns a non-null Optional<T> object |
315.6.1. CrudRepository findById() Found Example
The Optional<T>
can be safely tested for existence using isPresent()
.
If isPresent()
returns true
, then get()
can be called to obtain the targeted @Entity
.
//given
then(result.isPresent()).isTrue();
//when - obtaining the instance
Song dbSong = result.get();
//then - instance provided
then(dbSong).isNotNull();
315.6.2. CrudRepository findById() Not Found Example
If isPresent()
returns false
, then get()
will throw a NoSuchElementException
if called.
This gives your code some flexibility for how you wish to handle a target @Entity
not being found.
//given
then(result).isNotPresent();
//then - the optional is asserted during the get()
assertThatThrownBy(() -> result.get())
.isInstanceOf(NoSuchElementException.class);
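Optional also supports a functional style that avoids the explicit isPresent()/get() pair, for example:
Song dbSong = songsRepo.findById(song.getId())
        .orElseThrow(() -> new NoSuchElementException("song not found"));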
315.7. CrudRepository delete()
The repository also offers a wrapper around EntityManager.delete()
where an instance is required.
Whether the instance existed or not, a successful call will always result in the @Entity
no longer being in the database.
//when - deleting an existing instance
songsRepo.delete(existingSong);
//then - instance will be removed from DB
then(songsRepo.existsById(existingSong.getId())).isFalse();
315.7.1. CrudRepository delete() Not Loaded
However, if the instance passed to the delete()
method is not in its current Persistence Context, then it will load it before deleting so that it has all information required to implement any JPA delete cascade events.
select ... from reposongs_song song0_ where song0_.id=? (1)
delete from reposongs_song where id=?
1 | @Entity loaded as part of implementing a delete |
JPA Supports Cascade Actions
JPA relationships can be configured to perform an action (e.g., delete) to both sides of the relationship when one side is acted upon (e.g., deleted).
This could allow a parent |
315.7.2. CrudRepository delete() Not Exist
If the instance did not exist, the delete()
call silently returns.
//when - deleting a non-existing instance
songsRepo.delete(doesNotExist); (1)
1 | no exception thrown for not exist |
select ... as title4_0_0_ from reposongs_song song0_ where song0_.id=? (1)
1 | no @Entity was found/loaded as a result of this call |
315.8. CrudRepository deleteById()
Spring Data JPA also offers a convenience deleteById()
method taking only the primary key.
//when - deleting an existing instance
songsRepo.deleteById(existingSong.getId());
However, since this is JPA under the hood and JPA may have cascade actions defined, the @Entity
is still retrieved if it is not currently loaded in the Persistence Context.
select ... from reposongs_song song0_ where song0_.id=?
delete from reposongs_song where id=?
deleteById will Throw Exception
Calling deleteById for a non-existent @Entity will throw an EmptyResultDataAccessException.
|
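If a missing row should instead be treated as a no-op, the exception can be caught. A sketch (the doesNotExistId variable is hypothetical):
try {
    songsRepo.deleteById(doesNotExistId);
} catch (EmptyResultDataAccessException ex) { // org.springframework.dao
    // row was already gone; treat the delete as idempotent
}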
315.9. Other CrudRepository Methods
That was a quick tour of the CrudRepository<T,ID>
interface methods.
The following snippet shows the methods not covered.
Most provide convenience methods around the entire repository.
<S extends T> Iterable<S> saveAll(Iterable<S> var1);
Iterable<T> findAll();
Iterable<T> findAllById(Iterable<ID> var1);
long count();
void deleteAll(Iterable<? extends T> var1);
void deleteAll();
316. PagingAndSortingRepository
Before we get too deep into queries, it is good to know that Spring Data has first-class support for sorting and paging.
-
sorting - determines the order which matching results are returned
-
paging - breaks up results into chunks that are easier to handle than entire database collections
Here is a look at the declared methods of the PagingAndSortingRepository<T,ID>
interface.
This defines extra parameters for the CrudRepository.findAll()
methods.
public interface PagingAndSortingRepository<T, ID> extends CrudRepository<T, ID> {
Iterable<T> findAll(Sort var1);
Page<T> findAll(Pageable var1);
}
We will see paging and sorting options come up in many other query types as well.
Use Paging and Sorting for Collection Queries
All queries that return a collection should seriously consider adding paging and sorting parameters. Small test databases can become significantly populated production databases over time and cause eventual failure if paging and sorting are not applied to unbounded collection query return methods. |
316.1. Sorting
Sorting can be performed on one or more properties and in ascending and descending order.
The following snippet shows an example of calling the findAll()
method and having it return
-
Song
entities in descending order according torelease
date -
Song
entities in ascending order according toid
value whenrelease
dates are equal
//when
List<Song> byReleased = songsRepository.findAll(
Sort.by("released").descending().and(Sort.by("id").ascending())); (1) (2)
//then
LocalDate previous = null;
for (Song s: byReleased) {
if (previous!=null) {
then(previous).isAfterOrEqualTo(s.getReleased()); //DESC order
}
previous=s.getReleased();
}
1 | results can be sorted by one or more properties |
2 | order of sorting can be ascending or descending |
The following snippet shows how the SQL was impacted by the Sort.by()
parameter.
select ...
from reposongs_song song0_
order by song0_.released desc, song0_.id asc (1)
1 | Sort.by() added the extra SQL order by clause |
316.2. Paging
Paging permits the caller to designate how many instances are to be returned in a call and the offset to start that group (called a page or slice) of instances.
The snippet below shows an example of using one of the factory methods of Pageable
to create a PageRequest
definition using page size (limit), offset, and sorting criteria.
If many pages will be traversed — it is advised to sort by a property that will produce a stable sort over time during table modifications.
//given
int offset = 0;
int pageSize = 3;
Pageable pageable = PageRequest.of(offset/pageSize, pageSize, Sort.by("released"));(1) (2)
//when
Page<Song> songPage = songsRepository.findAll(pageable);
1 | using PageRequest factory method to create Pageable from provided page information |
2 | parameters are pageNumber, pageSize, and Sort |
Use Stable Sort over Large Collections
Try to use a property for sort (at least by default) that will produce a stable sort when paging through a large collection to avoid repeated or missing objects from follow-on pages because of new changes to the table. |
316.3. Page Result
The page result is represented by a container object of type Page<T>
, which extends Slice<T>
.
I will describe the difference next, but the PagingAndSortingRepository<T,ID>
interface always returns a Page<T>
, which will provide both the slice of content and overall table totals.
Figure 132. Page<T> Extends Slice<T>
Page Issues Extra Count Query
Of course the total number of elements available in the database does not come for free.
An extra query is performed to get the count.
If that attribute is not needed, use a Slice return using a derived query.
|
316.4. Slice Properties
The Slice<T>
base interface represents properties about the content returned.
//then
Slice songSlice = songPage; (1)
then(songSlice).isNotNull();
then(songSlice.isEmpty()).isFalse();
then(songSlice.getNumber()).isEqualTo(0); (2)
then(songSlice.getSize()).isEqualTo(pageSize);
then(songSlice.getNumberOfElements()).isEqualTo(pageSize);
List<Song> songsList = songSlice.getContent();
then(songsList).hasSize(pageSize);
1 | Page<T> extends Slice<T> |
2 | slice increment — first slice is 0 |
316.5. Page Properties
The Page<T>
derived interface represents properties about the entire collection/table.
The snippet below shows an example of the total number of elements in the table being made available to the caller.
then(songPage.getTotalElements()).isEqualTo(savedSongs.size()); //unique to Page
The Page<T>
content and number of elements is made available through the following set of SQL queries.
select ... from reposongs_song song0_ order by song0_.released asc limit ? (1)
select count(song0_.id) as col_0_0_ from reposongs_song song0_ (2)
1 | SELECT used to load page of entities (aka the Slice information) |
2 | SELECT COUNT(*) used to return total matches in the database — returned or not because of Pageable limits (aka the Page portion of the information) |
316.6. Stateful Pageable Creation
In the above example, we created a Pageable
from stateless parameters — passing in pageNumber, pageSize, and sorting specifications.
Pageable pageable = PageRequest.of(offset / pageSize, pageSize, Sort.by("released"));(1)
1 | parameters are pageNumber, pageSize, and Sort |
We can also use the original Pageable
to generate the next or other relative page specifications.
Pageable next = pageable.next();
Pageable previous = pageable.previousOrFirst();
Pageable first = pageable.first();
316.7. Page Iteration
The next Pageable
can be used to advance through the complete set of query results, using the previous Pageable
and testing the returned Slice
.
for (int i=1; songSlice.hasNext(); i++) { (1)
pageable = pageable.next(); (2)
songSlice = songsRepository.findAll(pageable);
songsList = songSlice.getContent();
then(songSlice).isNotNull();
then(songSlice.getNumber()).isEqualTo(i);
then(songSlice.getSize()).isEqualTo(pageSize);
then(songSlice.getNumberOfElements()).isLessThanOrEqualTo(pageSize);
then(((Page)songSlice).getTotalElements()).isEqualTo(savedSongs.size());//unique to Page
}
then(songSlice.hasNext()).isFalse();
then(songSlice.getNumber()).isEqualTo(songsRepository.count() / pageSize);
1 | Slice.hasNext() will indicate when previous Slice represented the end of the results |
2 | next Pageable obtained from previous Pageable |
The following snippet shows an example of the SQL issued to the database.
The offset
parameter is added to the SQL query once we get beyond the first page.
select ... from reposongs_song song0_ order by song0_.released asc limit ? offset ? (1)
select count(song0_.id) as col_0_0_ from reposongs_song song0_
1 | deeper into paging causes offset (in addition to limit ) to be added to query |
317. Query By Example
Not all queries will be as simple as findAll()
.
We now need to start looking at queries that can return a subset of results based on them matching a set of predicates.
The QueryByExampleExecutor<T>
parent interface to JpaRepository<T,ID>
provides a set of variants of the collection-based methods that accept an "example" from which a set of predicates is derived.
public interface QueryByExampleExecutor<T> {
<S extends T> Optional<S> findOne(Example<S> var1);
<S extends T> Iterable<S> findAll(Example<S> var1);
<S extends T> Iterable<S> findAll(Example<S> var1, Sort var2);
<S extends T> Page<S> findAll(Example<S> var1, Pageable var2);
<S extends T> long count(Example<S> var1);
<S extends T> boolean exists(Example<S> var1);
}
317.1. Example Object
An Example
is an interface with the ability to hold onto a probe and matcher.
317.1.1. Probe Object
The probe is an instance of the repository @Entity
type.
The following snippet is an example of creating a probe that represents the fields we are looking to match.
//given
Song savedSong = savedSongs.get(0);
Song probe = Song.builder()
.title(savedSong.getTitle())
.artist(savedSong.getArtist())
.build(); (1)
1 | probe will carry values for title and artist to match |
317.1.2. ExampleMatcher Object
The matcher defaults to an exact match of all non-null properties in the probe. There are many definitions we can supply to customize the matcher.
-
ExampleMatcher.matchingAny()
- forms an OR relationship between all predicates -
ExampleMatcher.matchingAll()
- forms an AND relationship between all predicates
The matcher can be broken down into specific fields, offering a fair number of options for String-based predicates but very limited options for non-String fields.
The following snippet shows an example of the default ExampleMatcher
.
ExampleMatcher matcher = ExampleMatcher.matching(); (1)
1 | default matcher is matchingAll |
317.2. findAll By Example
We can supply an Example
instance to the findAll()
method to conduct our query.
The following snippet shows an example of using a probe with a default matcher.
It is intended to locate all songs matching the artist
and title
we specified in the probe.
//when
List<Song> foundSongs = songsRepository.findAll(
Example.of(probe),//default matcher is matchingAll() and non-null
Sort.by("id"));
However, there is a problem.
Our Example
instance with supplied probe and default matcher did not locate any matches.
//then - not found
then(foundSongs).isEmpty();
317.3. Primitive Types are Non-Null
The reason for the no-match is that the primary key value is being added to the query and we did not explicitly supply that value in our probe.
select ...
from reposongs_song song0_
where song0_.id=0 (1)
and song0_.artist=? and song0_.title=?
order by song0_.id asc
1 | song0_.id=0 test for unassigned primary key, prevents match being found |
The id
field is a primitive int
type that cannot be null and defaults to a 0 value.
That, and the fact that the default matcher is a "match all" (using AND
) keeps our example from matching anything.
@Entity
public class Song {
@Id @GeneratedValue
private int id; (1)
1 | id can never be null and defaults to 0, unassigned value |
317.4. matchingAny ExampleMatcher
One option we could take would be to switch from the default matchingAll
matcher to a matchingAny
matcher.
The following snippet shows an example of how we can specify the override.
//when
List<Song> foundSongs = songsRepository.findAll(
Example.of(probe, ExampleMatcher.matchingAny()),(1)
Sort.by("id"));
1 | using matchingAny versus default matchingAll |
This causes some matches to occur, but it likely is not what we want.
-
the
id
predicate is still being supplied -
the overall condition does not require the
artist
ANDtitle
to match.
select ...
from reposongs_song song0_
where song0_.id=0 or song0_.artist=? or song0_.title=? (1)
order by song0_.id asc
1 | matching any ("or") of the non-null probe values |
317.5. Ignoring Properties
What we want to do is use a matchingAll
matcher and have the non-null primitive id
field ignored.
The following snippet shows an example matcher configured to ignore the primary key.
ExampleMatcher ignoreId = ExampleMatcher.matchingAll().withIgnorePaths("id");(1)
//when
List<Song> foundSongs = songsRepository.findAll(
Example.of(probe, ignoreId), (2)
Sort.by("id"));
//then
then(foundSongs).isNotEmpty();
then(foundSongs.get(0).getId()).isEqualTo(savedSong.getId());
1 | id primary key is being excluded from predicates |
2 | non-null and non-id fields of probe are used for AND matching |
The following snippet shows the SQL produced.
This SQL matches only the title
and artist
fields, without a reference to the id
field.
select ...
from reposongs_song song0_
where song0_.title=? and song0_.artist=? (1) (2)
order by song0_.id asc
1 | the primitive int id field is being ignored |
2 | both title and artist fields must match |
317.6. Contains ExampleMatcher
We have some options on what we can do with the String matches.
The following snippet provides an example of testing whether title
contains the text in the probe while performing an exact match of the artist
and ignoring the id
field.
Song probe = Song.builder()
.title(savedSong.getTitle().substring(2))
.artist(savedSong.getArtist())
.build();
ExampleMatcher matcher = ExampleMatcher
.matching()
.withIgnorePaths("id")
.withMatcher("title", ExampleMatcher.GenericPropertyMatchers.contains());
317.6.1. Using Contains ExampleMatcher
The following snippet shows that the Example
successfully matched on the Song
we were interested in.
//when
List<Song> foundSongs = songsRepository.findAll(Example.of(probe,matcher), Sort.by("id"));
//then
then(foundSongs).isNotEmpty();
then(foundSongs.get(0).getId()).isEqualTo(savedSong.getId());
The following SQL shows what was performed by our Example
.
Both title
and artist
are required to match.
The match for title
is implemented as a "contains"/LIKE
.
//binding parameter [1] as [VARCHAR] - [Earth Wind and Fire]
//binding parameter [2] as [VARCHAR] - [% a God Unknown%] (1)
//binding parameter [3] as [CHAR] - [\]
select ...
from reposongs_song song0_
where song0_.artist=? and (song0_.title like ? escape ?) (2)
order by song0_.id asc
1 | title parameter supplied with % characters around the probe value |
2 | title predicate uses a LIKE |
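The same probe/matcher pair can also drive the count() and exists() variants declared on QueryByExampleExecutor, for instance:
long matches = songsRepository.count(Example.of(probe, matcher));
boolean found = songsRepository.exists(Example.of(probe, matcher));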
318. Derived Queries
For fairly straightforward queries, Spring Data JPA can derive the required commands from a method signature declared in the repository interface. This provides a more self-documenting version of similar queries we could have formed with query-by-example.
The following snippet shows a few example queries added to our repository interface to address specific queries needed in our application.
public interface SongsRepository extends JpaRepository<Song, Integer> {
Optional<Song> getByTitle(String title); (1)
List<Song> findByTitleNullAndReleasedAfter(LocalDate date); (2)
List<Song> findByTitleStartingWith(String string, Sort sort); (3)
Slice<Song> findByTitleStartingWith(String string, Pageable pageable); (4)
Page<Song> findPageByTitleStartingWith(String string, Pageable pageable); (5)
1 | query by an exact match of title |
2 | query by a match of two fields |
3 | query using sort |
4 | query with paging support |
5 | query with paging support and table total |
Let’s look at a complete example first.
318.1. Single Field Exact Match Example
In the following example, we have created a query method getByTitle
that accepts the exact match title value and an Optional
return value.
Optional<Song> getByTitle(String title); (1)
We use the declared interface method in a normal manner and Spring Data JPA takes care of the implementation.
//when
Optional<Song> result = songsRepository.getByTitle(song.getTitle());
//then
then(result.isPresent()).isTrue();
The resulting SQL is the same as if we implemented it using query-by-example or JPA query language.
select ...
from reposongs_song song0_
where song0_.title=?
318.2. Query Keywords
Spring Data has several keywords, followed by By
, that it looks for starting the interface method name.
Those with multiple terms can be used interchangeably.
Meaning | Keywords |
---|---|
Query | find, get, query, read, stream |
Count | count |
Exists | exists |
Delete | delete, remove |
318.3. Other Keywords
-
Distinct (e.g.,
findDistinctByTitle
) -
Is, Equals (e.g.,
findByTitle
,findByTitleIs
,findByTitleEquals
) -
Not (e.g.,
findByTitleNot
,findByTitleIsNot
,findByTitleNotEquals
) -
IsNull, IsNotNull (e.g.,
findByTitle(null)
,findByTitleIsNull()
,findByTitleIsNotNull()
) -
StartingWith, EndingWith, Containing (e.g.,
findByTitleStartingWith
,findByTitleEndingWith
,findByTitleContaining
) -
LessThan, LessThanEqual, GreaterThan, GreaterThanEqual, Between (e.g.,
findByIdLessThan
,findByIdBetween(lo,hi)
) -
Before, After (e.g.,
findByReleasedAfter
) -
In (e.g.,
findByTitleIn(collection)
) -
OrderBy (e.g.,
findByTitleContainingOrderByTitle
)
The list is significant, but not meant to be exhaustive. Perform a web search for your specific needs (e.g., "Spring Data Derived Query …") if what is needed is not found here.
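To make the keyword grammar concrete, here are a few hypothetical declarations (not from the course repository) built from the Song fields used in this lecture:
public interface SongsRepository extends JpaRepository<Song, Integer> {
    List<Song> findDistinctByArtist(String artist);                    // Distinct
    List<Song> findByTitleIsNotNull();                                 // IsNotNull
    List<Song> findByIdBetween(int lo, int hi);                        // Between
    List<Song> findByReleasedBefore(LocalDate date);                   // Before
    List<Song> findByArtistInOrderByReleasedDesc(Set<String> artists); // In + OrderBy
}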
318.4. Multiple Fields
We can define queries using one or more fields using And
and Or
.
The following example defines an interface method that will test two fields: title
and released
.
title
will be tested for null and released
must be after a certain date.
List<Song> findByTitleNullAndReleasedAfter(LocalDate date);
The following snippet shows an example of how we can call/use the repository method. We are using a simple collection return without sorting or paging.
//when
List<Song> foundSongs = songsRepository.findByTitleNullAndReleasedAfter(firstSong.getReleased());
//then
Set<Integer> foundIds = foundSongs.stream()
.map(s->s.getId())
.collect(Collectors.toSet());
then(foundIds).isEqualTo(expectedIds);
The resulting SQL shows that a query is performed looking for null title
and released
after the LocalDate provided.
select ...
from reposongs_song song0_
where (song0_.title is null) and song0_.released>?
318.5. Collection Response Query Example
We can perform queries with various types of additional arguments and return types. The following shows an example of a query that accepts a sorting order and returns a simple collection with all objects found.
List<Song> findByTitleStartingWith(String string, Sort sort);
The following snippet shows an example of how to form the Sort
and call the query method derived from our interface declaration.
//when
Sort sort = Sort.by("id").ascending();
List<Song> songs = songsRepository.findByTitleStartingWith(startingWith, sort);
//then
then(songs.size()).isEqualTo(expectedCount);
The following shows the resulting SQL — which now contains a sort clause based on our provided definition.
select ...
from reposongs_song song0_
where song0_.title like ? escape ?
order by song0_.id asc
318.6. Slice Response Query Example
Derived queries can also be declared to accept a Pageable
definition and return a Slice
.
The following example shows a similar interface method declaration to what we had prior — except we have wrapped the Sort
within a Pageable
and requested a Slice
, which will contain only those items that match the predicate and comply with the paging constraints.
Slice<Song> findByTitleStartingWith(String string, Pageable pageable);
The following snippet shows an example of forming the PageRequest
, making the call, and inspecting the returned Slice
.
//when
PageRequest pageable=PageRequest.of(0, 1, Sort.by("id").ascending());
Slice<Song> songsSlice=songsRepository.findByTitleStartingWith(startingWith, pageable);
//then
then(songsSlice.getNumberOfElements()).isEqualTo(pageable.getPageSize());
The following resulting SQL shows how paging limits were placed in the query. If we had asked for a page beyond 0, an offset would have also been provided.
select ...
from reposongs_song song0_
where song0_.title like ? escape ?
order by song0_.id asc limit ?
318.7. Page Response Query Example
We can alternatively declare a Page
return type if we also need to know information about all available matches in the table.
The following shows an example of returning a Page
.
The only reason Page
shows up in the method name is to form a different method signature than its sibling examples.
Page
is not required to be in the method name.
Page<Song> findPageByTitleStartingWith(String string, Pageable pageable);
The following snippet shows how we can form a PageRequest
to pass to the derived query method and accept a Page
in response with additional table information.
//when
PageRequest pageable = PageRequest.of(0, 1, Sort.by("id").ascending());
Page<Song> songsPage = songsRepository.findPageByTitleStartingWith(startingWith, pageable);
//then
then(songsPage.getNumberOfElements()).isEqualTo(pageable.getPageSize());
then(songsPage.getTotalElements()).isEqualTo(expectedCount); (1)
1 | an extra property is available to tell us the total number of matches relative to the entire table — that may not have been reported on the current page |
The following shows the resulting SQL of the Page
response.
Note that two queries were performed.
One provided all the data required for the parent Slice
and the second query provided the table totals that were not bounded by the page limits.
select ...
from reposongs_song song0_
where song0_.title like ? escape ?
order by song0_.id asc
limit ? (1)
select count(song0_.id) as col_0_0_
from reposongs_song song0_
where song0_.title like ? escape ? (2)
1 | first query provides Slice data within Pageable limits (offset omitted for first page) |
2 | second query provides the table-level count for the Page, unconstrained by page size limits |
319. JPA-QL Named Queries
Query-by-example and derived queries are targeted at flexible, but mostly simple queries. Often there is a need to write more complex queries.
If you remember in JPA, we can write JPA-QL and native SQL queries to implement our database query access.
We can also register them as a @NamedQuery
associated with the @Entity
class.
This allows for more complex queries as well as the use of queries defined in a JPA orm.xml
source file (without having to recompile).
The following snippet shows a @NamedQuery
called Song.findByArtistGESize
that implements a query of the Song
entity’s table to return Song
instances that have artist names longer than a particular size.
@Entity
@Table(name="REPOSONGS_SONG")
@NamedQuery(name="Song.findByArtistGESize",
query="select s from Song s where length(s.artist) >= :length")
public class Song {
The following snippet shows an example of using that @NamedQuery
with the JPA EntityManager
.
TypedQuery<Song> query = entityManager
.createNamedQuery("Song.findByArtistGESize", Song.class)
.setParameter("length", minLength);
List<Song> jpaFoundSongs = query.getResultList();
319.1. Mapping @NamedQueries to Repository Methods
That same tool is still available to us with repositories.
If we name the query [prefix].[suffix]
, where prefix
is the @Entity.name
of the object’s returned and suffix
matches the name of the repository interface method — we can have them automatically called by our repository.
The following snippet shows a repository interface method that will have its query defined by the @NamedQuery
defined on the @Entity
class.
Note that we map repository method parameters to the @NamedQuery
parameter using the @Param
annotation.
//see @NamedQuery(name="Song.findByArtistGESize" in Song class
List<Song> findByArtistGESize(@Param("length") int length); (1) (2)
1 | interface method name matches the @NamedQuery.name suffix |
2 | @Param maps method parameter to @NamedQuery parameter |
The following snippet shows the resulting SQL generated from the JPA-QL/@NamedQuery
select ... from reposongs_song song0_ where length(song0_.artist)>=?
320. @Query Annotation Queries
Spring Data JPA provides an option for the query to be expressed on the repository method versus the @Entity
class.
The following snippet shows an example of a similar query we did for artist
length — except in this case we are querying against title
length.
@Query("select s from Song s where length(s.title) >= :length")
List<Song> findByTitleGESize(@Param("length") int length);
We get the expected resulting SQL.
select ...
from reposongs_song song0_
where length(song0_.title)>=?
Named Queries can be supplied in property file
Named queries can also be expressed in a property file — versus being placed directly onto the method. Property files can provide a more convenient source for expressing more complex queries.
The default location is META-INF/jpa-named-queries.properties. |
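A sketch of such a property file entry, reusing the title-length query shown in the next section purely for illustration:
# META-INF/jpa-named-queries.properties
Song.findByTitleGESize=select s from Song s where length(s.title) >= :length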
320.1. @Query Annotation Native Queries
Although I did not demonstrate it, the @NamedQuery
can also be expressed in native SQL.
In most cases with native SQL queries, the returned information is just data.
We can also directly express the repository interface method as a native SQL query and have it return straight data.
The following snippet shows a repository interface method implemented as native SQL that will return only the title
columns based on size.
@Query(value="select s.title from REPOSONGS_SONG s where length(s.title) >= :length", nativeQuery=true)
List<String> getTitlesGESizeNative(@Param("length") int length);
The following output shows the resulting SQL. We can tell this was from a native SQL query because the SQL does not contain mangled names used by JPA generated SQL.
select s.title (1)
from REPOSONGS_SONG s
where length(s.title) >= ?
1 | native SQL query gets expressed exactly as we supplied it |
321. JpaRepository Methods
Many of the methods and capabilities of the JpaRepository<T,ID>
are available at the higher level interfaces.
The JpaRepository<T,ID>
itself declares four types of additional methods
-
flush-based methods
-
batch-based deletes
-
reference-based accessors
-
return type extensions
public interface JpaRepository<T,ID> extends PagingAndSortingRepository<T, ID>, QueryByExampleExecutor<T> {
void flush();
<S extends T> S saveAndFlush(S entity);
void deleteInBatch(Iterable<T> entities);
void deleteAllInBatch();
T getOne(ID id);
...
}
321.1. JpaRepository Type Extensions
The methods in the JpaRepository<T,ID>
interface not discussed here mostly just extend existing parent methods with more concrete return types (e.g., List
versus Iterable
).
public interface CrudRepository<T,ID> extends Repository<T, ID> {
Iterable<T> findAll();
...
public interface JpaRepository<T,ID> extends PagingAndSortingRepository<T, ID>, QueryByExampleExecutor<T> {
@Override
List<T> findAll(); (1)
...
1 | List<T> extends Iterable<T> |
321.2. JpaRepository flush()
As we know with JPA, many commands are cached within the local Persistence Context and issued to the database at some point in time in the future.
That point in time is either the end of the transaction or some event within the scope of the transaction (e.g., issue a JPA query).
flush()
commands can be used to immediately force queued commands to the database.
We would need to do this prior to issuing a native SQL command if we want our latest changes to be included with that command.
In the following example, a transaction is held open during the entire method because of the @Transactional
declaration.
saveAll()
just adds the objects to the Persistence Context and caches their insert commands.
The flush()
command finally forces the SQL INSERT
commands to be issued.
@Test
@Transactional
void flush() {
//given
List<Song> songs = dtoFactory.listBuilder().songs(5,5).stream()
.map(s->mapper.map(s))
.collect(Collectors.toList());
songsRepository.saveAll(songs); (1)
//when
songsRepository.flush(); (2)
}
1 | instances are added to the Persistence Context cache |
2 | instances are explicitly flushed to the database |
The pre-flush actions are only to assign the primary key value.
Hibernate: call next value for hibernate_sequence
Hibernate: call next value for hibernate_sequence
Hibernate: call next value for hibernate_sequence
Hibernate: call next value for hibernate_sequence
Hibernate: call next value for hibernate_sequence
The post-flush actions insert the rows into the database.
Hibernate: insert into reposongs_song (artist, released, title, id) values (?,?,?,?)
Hibernate: insert into reposongs_song (artist, released, title, id) values (?,?,?,?)
Hibernate: insert into reposongs_song (artist, released, title, id) values (?,?,?,?)
Hibernate: insert into reposongs_song (artist, released, title, id) values (?,?,?,?)
Hibernate: insert into reposongs_song (artist, released, title, id) values (?,?,?,?)
Call flush() Before Issuing Native SQL Queries
You do not need to call flush() before JPA-QL queries because the provider automatically flushes pending changes when needed. You do need to call it before issuing native SQL queries that must observe your latest changes. |
321.3. JpaRepository deleteInBatch
The standard deleteAll(collection)
will issue deletes one SQL statement at a time as shown in the comments of the following snippet.
songsRepository.deleteAll(savedSongs);
//delete from reposongs_song where id=? (1)
//delete from reposongs_song where id=?
//delete from reposongs_song where id=?
1 | SQL DELETE commands are issued one at a time for each ID |
The JpaRepository.deleteInBatch(collection)
will issue a single DELETE SQL statement with all IDs expressed in the where clause.
songsRepository.deleteInBatch(savedSongs);
//delete from reposongs_song where id=? or id=? or id=? (1)
1 | one SQL DELETE command is issued for all IDs |
321.4. JPA References
JPA has the notion of references that represent a promise to an @Entity
in the database.
This is normally done to make loading targeted objects from the database faster, leaving related objects to be accessed only on demand.
In the following examples, the code is demonstrating how it can form a reference to a persisted object in the database — without going through the overhead of realizing that object.
321.4.1. Reference Exists
In this first example, the referenced object exists and the transaction stays open from the time the reference is created — until the reference was resolved.
@Test
@Transactional
void ref_session() {
...
//when - obtaining a reference with a session
Song dbSongRef = songsRepository.getOne(song.getId()); (1)
//then
then(dbSongRef).isNotNull();
then(dbSongRef.getId()).isEqualTo(song.getId()); (2)
then(dbSongRef.getTitle()).isEqualTo(song.getTitle()); (3)
}
1 | returns only a reference to the @Entity — without loading from database |
2 | still only dealing with the unresolved reference up and to this point |
3 | actual object resolved from database at this point |
321.4.2. Reference Session Inactive
The following example shows that a reference can only be resolved during its initial transaction. We are able to perform some light commands that can be answered directly from the reference, but as soon as we attempt to access data that would require querying the database — it fails.
import org.hibernate.LazyInitializationException;
...
@Test
void ref_no_session() {
...
//when - obtaining a reference without a session
Song dbSongRef = songsRepository.getOne(song.getId()); (1)
//then - get a reference with basics
then(dbSongRef).isNotNull();
then(dbSongRef.getId()).isEqualTo(song.getId()); (2)
assertThatThrownBy(
() -> dbSongRef.getTitle()) (3)
.isInstanceOf(LazyInitializationException.class);
}
1 | returns only a reference to the @Entity from original transaction |
2 | still only dealing with the unresolved reference up and to this point |
3 | actual object resolution attempted at this point — fails |
321.4.3. Bogus Reference
The following example shows that the reference is never attempted to be resolved until something is needed from the object it represents — beyond its primary key.
import javax.persistence.EntityNotFoundException;
...
@Test
@Transactional
void ref_not_exist() {
//given
int doesNotExist=1234;
//when
Song dbSongRef = songsRepository.getOne(doesNotExist); (1)
//then - get a reference with basics
then(dbSongRef).isNotNull();
then(dbSongRef.getId()).isEqualTo(doesNotExist); (2)
assertThatThrownBy(
() -> dbSongRef.getTitle()) (3)
.isInstanceOf(EntityNotFoundException.class);
}
1 | returns only a reference to the @Entity with an ID not in database |
2 | still only dealing with the unresolved reference up and to this point |
3 | actual object resolution attempted at this point — fails |
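A common use of references is wiring a relationship without a query. The following sketch assumes a hypothetical Review entity and reviewsRepository that are not part of the course examples:
// no SELECT is issued for the Song; only its primary key is needed for the FK column
Song songRef = songsRepository.getOne(songId);

Review review = new Review();   // hypothetical related @Entity
review.setSong(songRef);        // FK value taken from the reference's ID
reviewsRepository.save(review); // single INSERT into the review table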
322. Custom Queries
Sooner or later, a repository action requires some complexity that is beyond the ability to leverage a single query-by-example, derived query, or even JPA-QL. We may need to implement some custom logic or may want to encapsulate multiple calls within a single method.
322.1. Custom Query Interface
The following example shows how we can extend the repository interface to implement custom calls using the JPA EntityManager
and the other repository methods. Our custom implementation will return a random Song
from the database.
public interface SongsRepositoryCustom {
Optional<Song> random();
}
322.2. Repository Extends Custom Query Interface
We then declare the repository to extend the additional custom query interface — making the new method(s) available to callers of the repository.
public interface SongsRepository extends JpaRepository<Song, Integer>, SongsRepositoryCustom { (1)
...
1 | added additional SongRepositoryCustom interface for SongRepository to extend |
322.3. Custom Query Method Implementation
Of course, the new interface will need an implementation. This will require at least two lower-level database calls
-
determine how many objects there are in the database
-
return a random instance for one of those values
The following snippet shows a portion of the custom method implementation. Note that two additional helper methods are required. We will address them in a moment. By default, this class must have the same name as the interface, followed by "Impl".
public class SongsRepositoryCustomImpl implements SongsRepositoryCustom {
private final SecureRandom random = new SecureRandom();
...
@Override
public Optional<Song> random() {
Optional randomSong = Optional.empty();
int count = (int) songsRepository.count(); (1)
if (count!=0) {
int offset = random.nextInt(count);
List<Song> songs = songs(offset, 1); (2)
randomSong = songs.isEmpty() ? Optional.empty():Optional.of(songs.get(0));
}
return randomSong;
}
}
1 | leverages CrudRepository.count() helper method |
2 | leverages a local, private helper method to access specific Song |
322.4. Repository Implementation Postfix
If you have an alternate suffix pattern other than "Impl" in your application, you can set that value in an attribute of the @EnableJpaRepositories
annotation.
The following shows a declaration that sets the suffix to its normal default value (i.e., we did not have to do this).
If we changed this postfix value from "Impl" to "Xxx", then we would need to change SongsRepositoryCustomImpl
to SongsRepositoryCustomXxx
.
@EnableJpaRepositories(repositoryImplementationPostfix="Impl") (1)
1 | Impl is the default value. Configure this attribute to use non-Impl postfix |
322.5. Helper Methods
The custom random()
method makes use of two helper methods.
One is in the CrudRepository
interface and the other directly uses the EntityManager
to issue a query.
public interface CrudRepository<T, ID> extends Repository<T, ID> {
long count();
protected List<Song> songs(int offset, int limit) {
return em.createNamedQuery("Song.songs")
.setFirstResult(offset)
.setMaxResults(limit)
.getResultList();
}
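The Song.songs query referenced above would be registered on the @Entity like the earlier @NamedQuery example. A plausible declaration is sketched below (the actual query text is not shown in these notes):
@NamedQuery(name="Song.songs", query="select s from Song s order by s.id")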
We will need to inject some additional resources in order to make these calls:
-
SongsRepository
-
EntityManager
322.6. Naive Injections
We could have attempted to inject a SongsRepository
and EntityManager
straight into the Impl
class.
@RequiredArgsConstructor
public class SongsRepositoryCustomImpl implements SongsRepositoryCustom {
private final EntityManager em;
private final SongsRepository songsRepository;
However,
-
injecting the
EntityManager
would functionally work, but would not necessarily be part of the same Persistence Context and transaction as the rest of the repository -
eagerly injecting the
SongsRepository
in theImpl
class will not work because theImpl
class is now part of theSongsRepository
implementation. We have a recursion problem to resolve there.
322.7. Required Injections
We need to instead
-
inject a
JpaContext
and obtain theEntityManager
from that context -
use
@Autowired @Lazy
and a non-final attribute for theSongsRepository
injection to indicate that this instance can be initialized without access to the injected bean
import org.springframework.data.jpa.repository.JpaContext;
...
public class SongsRepositoryCustomImpl implements SongsRepositoryCustom {
private final EntityManager em; (1)
@Autowired @Lazy (2)
private SongsRepository songsRepository;
public SongsRepositoryCustomImpl(JpaContext jpaContext) { (1)
em=jpaContext.getEntityManagerByManagedType(Song.class);
}
1 | EntityManager obtained from injected JpaContext |
2 | SongsRepository lazily injected to mitigate the recursive dependency between the Impl class and the full repository instance |
322.8. Calling Custom Query
With all that in place, we can then call our custom random()
method and obtain a sample Song
to work with from the database.
//when
Optional<Song> randomSong = songsRepository.random();
//then
then(randomSong.isPresent()).isTrue();
The following shows the resulting SQL
select count(song0_.id) as col_0_0_
from reposongs_song song0_
select ...
from reposongs_song song0_
limit ? offset ?
323. Summary
In this module we learned:
-
that Spring Data JPA eliminates the need to write boilerplate JPA code
-
to perform basic CRUD management for
@Entity
classes using a repository -
to implement query-by-example
-
that unbounded collections can grow over time and cause our applications to eventually fail
-
that paging and sorting can easily be used with repositories
-
to implement query methods derived from a query DSL
-
to implement custom repository extensions
323.1. Comparing Query Types
Of the query types,
-
derived queries and query-by-example are simpler but have their limits
-
derived queries are more expressive
-
query-by-example can be built flexibly at runtime
-
nothing is free — so anything that requires translation between source and JPA form may incur extra initialization and/or processing time
-
JPA-QL and native SQL
-
have virtually no limit to what they can express
-
cannot be dynamically defined for a repository like query-by-example. You would need to use the EntityManager directly to do that.
-
have loose coupling between the repository method name and the actual function of the executed query
-
can be resolved in an external source file that would allow for query changes without recompiling
JPA Repository End-to-End Application
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
324. Introduction
This lecture takes what you have learned in establishing an RDBMS data tier using Spring Data JPA and shows it integrated into an end-to-end application with API CRUD calls and finder calls using paging. It is assumed that you already know about API topics like Data Transfer Objects (DTOs), JSON and XML content, marshalling/unmarshalling using Jackson and JAXB, web APIs/controllers, and clients. This lecture will put them all together.
324.1. Goals
The student will learn:
-
to integrate a Spring Data JPA Repository into an end-to-end application, accessed through an API
-
to make a clear distinction between Data Transfer Objects (DTOs) and Business Objects (BOs)
-
to identify data type architectural decisions required for a multi-tiered application
-
to understand the need for paging when working with potentially unbounded collections and remote clients
-
to set up proper transaction and other container feature boundaries using annotations and injection
324.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
implement a BO tier of classes that will be mapped to the database
-
implement a DTO tier of classes that will exchange state with external clients
-
implement a service tier that completes useful actions
-
identify the controller/service layer interface decisions when it comes to using DTO and BO classes
-
determine the correct transaction propagation property for a service tier method
-
implement a mapping tier between BO and DTO objects
-
implement paging requests through the API
-
implement page responses through the API
325. BO/DTO Component Architecture
325.1. Business Object(s)/@Entities
For our Songs application — I have kept the data model simple and limited it to a single business object (BO) @Entity
class mapped to the database using JPA and accessed through a Spring Data JPA repository.
Figure 133. BO Class Mapped to DB as JPA @Entity
The business objects are the focal point of information where we implement our business decisions.
The primary focus of our BO classes is to map business implementation concepts to the database.
The following snippet shows some of the required properties of a JPA @Entity
class.
@Entity
@Table(name="REPOSONGS_SONG")
@NoArgsConstructor
...
public class Song {
@Id @GeneratedValue(strategy = GenerationType.SEQUENCE)
private int id;
...
325.2. Data Transfer Object(s) (DTOs)
The Data Transfer Objects are the focal point of interfacing with external clients. They represent state at a point in time. For external web APIs, they are commonly mapped to both JSON and XML.
For the API, we have the decision of whether to reuse BO classes as DTOs or implement a separate set of classes for that purpose. Even though some applications start out simple, there will come a point where database technology or mappings will need to change at a different pace than API technology or mappings.
Figure 134. DTO
For that reason, I created a separate SongDTO class to represent a sample DTO.
It has a near 1:1 mapping with the Song BO class.
The primary focus of our DTO classes is to map business interface concepts to a portable exchange format.
The following snippet shows some of the annotations required to map the SongDTO
class to XML using Jackson and JAXB.
Jackson JSON requires very few annotations in the simple cases.
@JacksonXmlRootElement(localName = "song", namespace = "urn:ejava.db-repo.songs")
@XmlRootElement(name = "song", namespace = "urn:ejava.db-repo.songs") (2)
@NoArgsConstructor
...
public class SongDTO { (1)
@JacksonXmlProperty(isAttribute = true)
@XmlAttribute
private int id;
1 | Jackson JSON requires very little to no annotations for simple mappings |
2 | XML mappings require more detailed definition to be complete |
325.3. BO/DTO Mapping
With separate BO and DTO classes, there is a need for mapping between the two.
Figure 135. BO to DTO Mapping
We have several options on how to organize this role.
325.3.1. BO/DTO Self Mapping
Figure 136. BO to DTO Self Mapping
325.3.2. BO/DTO Method Self Mapping
Figure 137. BO to DTO Method Self Mapping
325.3.3. BO/DTO Helper Method Mapping
Figure 138. BO/DTO Helper Method Mapping
325.3.4. BO/DTO Helper Class Mapping
Figure 139. BO/DTO Helper Class Mapping
325.3.5. BO/DTO Helper Class Mapping Implementations
Mapping helper classes can be implemented by:
-
brute force implementation
-
Benefit: likely the fastest performance and technically simplest to understand
-
Drawback: tedious setter/getter code
-
off-the-shelf mapper libraries (e.g. Dozer, Orika, MapStruct, ModelMapper, JMapper) [65] [66]
-
Benefit: declarative language and inferred DIY mapping options
-
Drawbacks:
-
relies on reflection and other generalizations for mapping which add to overhead
-
non-trivial mappings can be complex to understand
-
-
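For comparison with the brute-force SongsMapper shown later in these notes, the following is a minimal sketch of the declarative style using MapStruct. The DeclarativeSongMapper name is hypothetical and not part of the course example.
import org.mapstruct.Mapper;

@Mapper(componentModel = "spring") (1)
public interface DeclarativeSongMapper {
    SongDTO map(Song bo); (2)
    Song map(SongDTO dto);
}
1 | generated implementation is registered as an injectable Spring bean |
2 | MapStruct generates the field-by-field setter/getter code at compile time |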
326. Service Architecture
Services, with the aid of BOs, implement the meat of the business logic. The example service class declaration for this application (SongsServiceImpl) is covered in the Service Tier section below.
326.1. Injected Service Boundaries
Container features like @Transactional, @PreAuthorize, @Async, etc. are only implemented at component boundaries.
When a @Component dependency is injected, the container has the opportunity to add features using "interpose".
As a part of interpose, the container implements a proxy that adds the desired feature to the target component's methods.
Therefore, it is important to arrange a component boundary wherever you need to start a new characteristic provided by the container. The following is a more detailed explanation of what not to do and what to do.
326.1.1. Buddy Method Boundary
The methods within a component class are not typically subject to container interpose. Therefore a call from m1() to m2() within the same component class is a straight Java call.
Figure 141. Buddy Method Boundary
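The following is a minimal sketch of the pitfall, using a hypothetical component that is not part of the course example.
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class BuddyCallExample {
    public void m1() {
        m2(); (1)
    }

    @Transactional
    public void m2() {
        //expects a transaction, but only gets one when reached through the proxy
    }
}
1 | plain Java call within the same instance bypasses the container proxy, so @Transactional is not applied |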
326.1.2. Self Instantiated Method Boundary
Container interpose is only performed when the container has a chance to decorate the called component. Therefore, a call to a method of a component class that is self-instantiated will not have container interpose applied — no matter how the called method is annotated.
Figure 142. Self Instantiated Method Boundary
326.1.3. Container Injected Method Boundary
Components injected by the container are subject to container interpose and will have declared characteristics applied.
Figure 143. Container Injected Method Boundary
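A minimal sketch of the correct arrangement, again using hypothetical classes: the helper is injected by the container, so calls to it pass through the proxy and pick up the declared characteristics.
import lombok.RequiredArgsConstructor;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@RequiredArgsConstructor
public class FrontingService {
    private final TransactionalHelper helper; (1)

    public void m1() {
        helper.m2(); (2)
    }
}

@Component
class TransactionalHelper {
    @Transactional
    public void m2() { /* runs with an active transaction */ }
}
1 | helper @Component injected by the container, wrapped in a proxy |
2 | the call crosses a component boundary, so @Transactional on m2() is applied |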
326.2. Compound Services
With @Component boundaries and interpose constraints understood, in more complex transaction, security, or threading solutions the logical @Service may get broken up into one or more physical helper @Component classes.
Figure 144. Single Service Expressed as Multiple Components
Each physical helper @Component implements a portion of the overall service. To external users of the logical @Service, this decomposition is an internal implementation detail.
327. BO/DTO Interface Options
With the core roles of BOs and DTOs understood, we next have a decision to make about where to use them within our application between the API and service classes.
Figure 145. BO/DTO Interface Decisions
327.1. API Maps DTO/BO
It is natural to think of the @Service as working with pure implementation (BO) classes. This leaves the mapping job to the @RestController and all clients of the @Service.
Figure 146. API Maps DTO to BO for Service Interface
327.2. @Service Maps DTO/BO
Alternatively, we can have the @Service fully encapsulate the implementation details and work with DTOs in its interface. This places the job of DTO/BO translation on the @Service, and the @RestController and all @Service clients work with DTOs.
Figure 147. Service Maps DTO in Service Interface to BO
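The decision shows up directly in the service method signatures. A minimal sketch of the two options using the Song types from this application:
Song createSong(Song song); (1)
SongDTO createSong(SongDTO songDTO); (2)
1 | option 1: the @RestController maps and the @Service interface exposes BO types |
2 | option 2: the @Service maps and its interface exposes DTO types (the approach taken in this application) |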
327.3. Layered Service Mapping Approach
The latter DTO interface/mapping approach just introduced maps closely to the Domain Driven Design (DDD) "Application Layer". However, one could also implement a layering of services, where layered services permit a level of trust between inner components.
328. Implementation Details
With architectural decisions understood, let's take a look at some of the key details of the end-to-end application.
328.1. Song BO
We have already covered the Song
BO @Entity
class in a lot of detail during the JDBC, JPA, and Spring Data JPA lectures.
The following lists most of the key business aspects and implementation details of the class.
package info.ejava.examples.db.repo.jpa.songs.bo;
...
@Entity
@Table(name="REPOSONGS_SONG")
@Getter
@ToString
@Builder
@With
@AllArgsConstructor
@NoArgsConstructor
...
public class Song {
@Id @GeneratedValue(strategy = GenerationType.SEQUENCE)
@Column(name="ID", nullable=false, insertable=true, updatable=false)
private int id;
@Setter
@Column(name="TITLE", length=255, nullable=true, insertable=true, updatable=true)
private String title;
@Setter
private String artist;
@Setter
private LocalDate released;
}
328.2. SongDTO
The SongDTO class has been mapped to Jackson JSON and Jackson and JAXB XML. The details of Jackson and JAXB mapping were covered in the API Content lectures. Jackson JSON required no special annotations to map this class. Jackson and JAXB XML primarily needed some annotations related to namespaces and attribute mapping. JAXB also required annotations for mapping the LocalDate field.
The following lists the annotations required to marshal/unmarshal the SongDTO class using Jackson and JAXB.
package info.ejava.examples.db.repo.jpa.songs.dto;
...
@JacksonXmlRootElement(localName = "song", namespace = "urn:ejava.db-repo.songs")
@XmlRootElement(name = "song", namespace = "urn:ejava.db-repo.songs")
@XmlAccessorType(XmlAccessType.FIELD)
@Data @Builder
@NoArgsConstructor @AllArgsConstructor
public class SongDTO {
@JacksonXmlProperty(isAttribute = true)
@XmlAttribute
private int id;
private String title;
private String artist;
@XmlJavaTypeAdapter(LocalDateJaxbAdapter.class) (1)
private LocalDate released;
...
}
1 | JAXB requires an adapter for the newer LocalDate java class |
328.2.1. LocalDateJaxbAdapter
Jackson is configured to marshal LocalDate out of the box using the ISO_LOCAL_DATE format for both JSON and XML.
"released" : "2013-01-30" //Jackson JSON
<released xmlns="">2013-01-30</released> //Jackson XML
JAXB does not have a default format and requires the class be mapped to/from a string using an XmlAdapter
.
@XmlJavaTypeAdapter(LocalDateJaxbAdapter.class)
private LocalDate released;
public static class LocalDateJaxbAdapter extends XmlAdapter<String, LocalDate> {
@Override
public LocalDate unmarshal(String text) {
return LocalDate.parse(text, DateTimeFormatter.ISO_LOCAL_DATE);
}
@Override
public String marshal(LocalDate timestamp) {
return DateTimeFormatter.ISO_LOCAL_DATE.format(timestamp);
}
}
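The following is a minimal sketch, not from the course code, of exercising the adapter through a raw JAXB marshal of a songDTO instance.
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import java.io.StringWriter;

JAXBContext ctx = JAXBContext.newInstance(SongDTO.class);
Marshaller marshaller = ctx.createMarshaller();
StringWriter writer = new StringWriter();
marshaller.marshal(songDTO, writer); (1)
String xml = writer.toString();
1 | the released field passes through LocalDateJaxbAdapter during the marshal |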
328.3. Song JSON Rendering
The following snippet provides example JSON of a Song
DTO payload.
{
"id" : 1,
"title" : "Tender Is the Night",
"artist" : "No Doubt",
"released" : "2003-11-16"
}
328.4. Song XML Rendering
The following snippets provide example XML of Song
DTO payloads.
They are technically equivalent from an XML Schema standpoint but use some alternate XML syntax to achieve the same technical goals.
<song xmlns="urn:ejava.db-repo.songs" id="2">
<title xmlns="">The Mirror Crack'd from Side to Side</title>
<artist xmlns="">Earth Wind and Fire</artist>
<released xmlns="">2018-01-01</released>
</song>
<ns2:song xmlns:ns2="urn:ejava.db-repo.songs" id="1">
<title>Brandy of the Damned</title>
<artist>Orbital</artist>
<released>2015-11-10</released>
</ns2:song>
328.5. Pageable/PageableDTO
I placed a high value on paging for unbounded collections when covering the repository finder methods. The value of paging comes especially into play when dealing with external users. That means we will need a way to represent Page, Pageable, and Sort in requests and responses as part of the DTO solution.
You will notice that I made a few decisions on how to implement this interface:
-
I am assuming that both sides of the interface using the DTO classes are using Spring Data. The DTO classes have a direct dependency on their non-DTO siblings.
-
I am using the Page, Pageable, and Sort DTOs to directly self-map to/from Spring Data types. This makes the client and service code much simpler.
Pageable pageable = PageableDTO.of(pageNumber, pageSize, sortString).toPageable(); (1)
Page<SongDTO> result = ...
SongsPageDTO resultDTO = new SongsPageDTO(result); (1)
1 | using self-mapping between paging DTOs and Spring Data (Pageable and Page) types |
-
I chose to use the Spring Data types (Pageable and Page) in the @Service interface when expressing paging and performed the Spring Data/DTO mappings in the @RestController. The @Service still takes DTO business types and maps DTO business types to/from BOs. I did this so that I did not eliminate any pre-existing library integration with Spring Data paging types.
Page<SongDTO> getSongs(Pageable pageable); (1)
1 | using Spring Data (Pageable and Page) and business DTO (SongDTO) types in the @Service interface |
I will be going through the architecture and wiring in these lecture notes. The actual DTO code is surprisingly complex to render in the different formats and libraries. These topics were covered in detail in the API content lectures. I also chose to implement the PageableDTO and sort as immutable, which added some interesting mapping challenges worth inspecting.
328.5.1. PageableDTO Request
Requests require an expression for Pageable. The most straightforward way to accomplish this is through query parameters. The example snippet below shows pageNumber, pageSize, and sort expressed as simple string values as part of the URI. We have to write code to express and parse that data.
/api/songs/example?pageNumber=0&pageSize=5&sort=released:DESC,id:ASC (1) (2)
1 | pageNumber and pageSize are direct properties used by PageRequest |
2 | sort contains a comma separated list of order compressed into a single string |
Integer pageNumber and pageSize are straightforward to represent as numeric values in the query. Sort requires a minor amount of work. Spring Data Sort is an ordered list of "property and direction". I have chosen to express property and direction using a ":" separated string and to concatenate the ordering using a ",". This allows the query string to be expressed in the URI without special characters.
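The following is one plausible way to parse that expression into a Spring Data Sort; the actual PageableDTO code may differ, and the SortParser name is hypothetical.
import java.util.ArrayList;
import java.util.List;
import org.springframework.data.domain.Sort;

public class SortParser {
    public static Sort parse(String sortString) {
        List<Sort.Order> orders = new ArrayList<>();
        for (String token : sortString.split(",")) { (1)
            String[] parts = token.split(":"); (2)
            orders.add(new Sort.Order(Sort.Direction.fromString(parts[1]), parts[0]));
        }
        return Sort.by(orders);
    }
}
1 | "released:DESC,id:ASC" splits into individual order tokens |
2 | each token splits into a property and a direction |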
328.5.2. PageableDTO Client-side Request Mapping
Since I expect code using the PageableDTO to also be using Spring Data, I chose to use self-mapping between the PageableDTO and Spring Data Pageable.
The following snippet shows how to map Pageable
to PageableDTO and the PageableDTO properties to URI query parameters.
PageRequest pageable = PageRequest.of(0, 5,
Sort.by(Sort.Order.desc("released"), Sort.Order.asc("id")));
PageableDTO pageSpec = PageableDTO.of(pageable); (1)
URI uri=UriComponentsBuilder
.fromUri(serverConfig.getBaseUrl())
.path(SongsController.SONGS_PATH).path("/example")
.queryParams(pageSpec.getQueryParams()) (2)
.build().toUri();
1 | using PageableDTO to self map from Pageable |
2 | using PageableDTO to self map to URI query parameters |
328.5.3. PageableDTO Server-side Request Mapping
The following snippet shows how the individual page request properties can be used to build a local instance of PageableDTO in the @RestController
.
Once the PageableDTO is built, we can use that to self map to a Spring Data Pageable
to be used when calling the @Service
.
public ResponseEntity<SongsPageDTO> findSongsByExample(
@RequestParam(value="pageNumber",defaultValue="0",required=false) Integer pageNumber,
@RequestParam(value="pageSize",required=false) Integer pageSize,
@RequestParam(value="sort",required=false) String sortString,
@RequestBody SongDTO probe) {
Pageable pageable = PageableDTO.of(pageNumber, pageSize, sortString) (1)
.toPageable(); (2)
1 | building PageableDTO from page request properties |
2 | using PageableDTO to self map to Spring Data Pageable |
328.5.4. Pageable Response
Responses require an expression for Pageable to indicate the pageable properties about the content returned. This must be expressed in the payload, so we need a JSON and XML expression for this. The snippets below show the JSON and XML DTO renderings of our Pageable properties.
"pageable" : {
"pageNumber" : 1,
"pageSize" : 25,
"sort" : "title:ASC,artist:ASC"
}
<pageable xmlns="urn:ejava.common.dto" pageNumber="1" pageSize="25" sort="title:ASC,artist:ASC"/>
328.6. Page/PageDTO
Pageable is part of the overall Page<T>, which adds contents. Therefore, we also need a way to return a page of content to the caller.
328.6.1. PageDTO Rendering
JSON is very lenient and could have been implemented with a generic PageDTO<T> class.
{"content":[ (1)
{"id":10, (2)
"title":"Blue Remembered Earth",
"artist":"Coldplay",
"released":"2009-03-18"}],
"totalElements":10, (1)
"pageable":{"pageNumber":3,"pageSize":3,"sort":null} (1)
}
1 | content , totalElements , and pageable are part of reusable PageDTO |
2 | song within content array is part of concrete Songs domain |
However, XML, with its use of unique namespaces, requires a subclass to provide the type-specific values for the content and overall page.
<songsPage xmlns="urn:ejava.db-repo.songs" totalElements="10"> (1)
<wstxns1:content xmlns:wstxns1="urn:ejava.common.dto">
<song id="10"> (2)
<title xmlns="">Blue Remembered Earth</title>
<artist xmlns="">Coldplay</artist>
<released xmlns="">2009-03-18</released>
</song>
</wstxns1:content>
<pageable xmlns="urn:ejava.common.dto" pageNumber="3" pageSize="3"/>
</songsPage>
1 | totalElements mapped to XML as an (optional) attribute |
2 | songsPage and song are in concrete domain urn:ejava.db-repo.songs namespace |
328.6.2. SongsPageDTO Subclass Mapping
The SongsPageDTO
subclass provides the type-specific mapping for the content and overall page.
The generic portions are handled by the base class.
@JacksonXmlRootElement(localName = "songsPage", namespace = "urn:ejava.db-repo.songs") (1)
@XmlRootElement(name = "songsPage", namespace = "urn:ejava.db-repo.songs") (1)
@XmlType(name = "SongsPage", namespace = "urn:ejava.db-repo.songs")
@XmlAccessorType(XmlAccessType.NONE)
@NoArgsConstructor
public class SongsPageDTO extends PageDTO<SongDTO> {
@JsonProperty
@JacksonXmlElementWrapper(localName = "content", namespace = "urn:ejava.common.dto")(2)
@JacksonXmlProperty(localName = "song", namespace = "urn:ejava.db-repo.songs") (3)
@XmlElementWrapper(name="content", namespace = "urn:ejava.common.dto") (2)
@XmlElement(name="song", namespace = "urn:ejava.db-repo.songs") (3)
public List<SongDTO> getContent() {
return super.getContent();
}
public SongsPageDTO(List<SongDTO> content, Long totalElements, PageableDTO pageableDTO) {
super(content, totalElements, pageableDTO);
}
public SongsPageDTO(Page<SongDTO> page) {
this(page.getContent(), page.getTotalElements(),
PageableDTO.fromPageable(page.getPageable()));
}
}
1 | Each type-specific mapping must have its own XML naming |
2 | "Wrapper" is the outer element for the individual members of collection and part of generic framework |
3 | "Property/Element" is the individual members of collection and interface/type specific |
328.6.3. PageDTO Server-side Rendering Response Mapping
The @RestController
can use the concrete DTO class (SongsPageDTO in this case) to self-map from a Spring Data Page<T>
to a DTO suitable for marshaling back to the API client.
Page<SongDTO> result=songsService.findSongsMatchingAll(probe, pageable);
SongsPageDTO resultDTO = new SongsPageDTO(result); (1)
ResponseEntity<SongsPageDTO> response = ResponseEntity.ok(resultDTO);
1 | using SongsPageDTO to self-map Spring Data Page<T> to DTO |
328.6.4. PageDTO Client-side Rendering Response Mapping
The PageDTO<T>
class can be used to self-map to a Spring Data Page<T>
.
Pageable, if needed, can be obtained from the Page<T>
or through the pageDTO.getPageable()
DTO result.
SongsPageDTO pageDTO = request.exchange()
.expectStatus().isOk()
.returnResult(SongsPageDTO.class)
.getResponseBody().blockFirst();
Page<SongDTO> page = pageDTO.toPage(); (1)
Pageable pageable = ... (2)
1 | using PageDTO<T> to self-map to a Spring Data Page<T> |
2 | can use page.getPageable() or pageDTO.getPageable().toPageable() to obtain Pageable |
329. SongMapper
The SongMapper
@Component
class is used to map between SongDTO
and Song
BO instances.
It leverages Lombok builder methods, but it is pretty much a simple/brute-force mapping.
329.1. Example Map: SongDTO to Song BO
The following snippet is an example of mapping a SongDTO
to a Song
BO.
@Component
public class SongsMapper {
public Song map(SongDTO dto) {
Song bo = null;
if (dto!=null) {
bo = Song.builder()
.id(dto.getId())
.artist(dto.getArtist())
.title(dto.getTitle())
.released(dto.getReleased())
.build();
}
return bo;
}
...
329.2. Example Map: Song BO to SongDTO
The following snippet is an example of mapping a Song
BO to a SongDTO
.
...
public SongDTO map(Song bo) {
SongDTO dto = null;
if (bo!=null) {
dto = SongDTO.builder()
.id(bo.getId())
.artist(bo.getArtist())
.title(bo.getTitle())
.released(bo.getReleased())
.build();
}
return dto;
}
...
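The service's pageable finder methods also need to map a Page<Song> of BOs to a Page<SongDTO> (used by findSongsMatchingAll() below). A minimal sketch, assuming Spring Data's Page.map is leveraged:
public Page<SongDTO> map(Page<Song> bos) {
    return bos.map(this::map); (1)
}
1 | Page.map applies the BO-to-DTO element mapping while retaining the paging metadata |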
330. Service Tier
The SongsService @Service
encapsulates the implementation of our management of Songs.
330.1. SongsService Interface
The SongsService
interface defines a portion of pure CRUD methods and a series of finder methods.
To be consistent with DDD encapsulation, the @Service
interface is using DTO classes.
Since the @Service
is an injectable component, I chose to use straight Spring Data pageable types to possibly integrate with libraries that inherently work with Spring Data types.
public interface SongsService {
SongDTO createSong(SongDTO songDTO); (1)
SongDTO getSong(int id);
void updateSong(int id, SongDTO songDTO);
void deleteSong(int id);
void deleteAllSongs();
Page<SongDTO> findReleasedAfter(LocalDate exclusive, Pageable pageable);(2)
Page<SongDTO> findSongsMatchingAll(SongDTO probe, Pageable pageable);
}
1 | chose to use DTOs for business data (SongDTO ) in @Service interface |
2 | chose to use Spring Data types (Page and Pageable ) in pageable @Service finder methods |
330.2. SongsServiceImpl Class
The SongsServiceImpl
implementation class is implemented using the SongsRepository
and SongsMapper
.
@RequiredArgsConstructor (1) (2)
@Service
public class SongsServiceImpl implements SongsService {
private final SongsMapper mapper;
private final SongsRepository songsRepo;
1 | Creates a constructor for all final attributes |
2 | Single constructors are automatically used for Autowiring |
I will demonstrate two types of methods here: one that requires an active transaction and another that supports, but does not require, a transaction.
330.3. createSong()
The createSong()
method
-
accepts a
SongDTO
, creates a new song, and returns the created song as aSongDTO
, with the generated ID. -
declares a
@Transactional
annotation to be associated with a Persistence Context and propagationREQUIRED
in order to enforce that a database transaction be active from this point forward. -
calls the mapper to map from/to a
SongDTO
to/from aSong
BO -
uses the
SongsRepository
to interact with the database
@Transactional(propagation = Propagation.REQUIRED) (1) (2) (3)
public SongDTO createSong(SongDTO songDTO) {
Song songBO = mapper.map(songDTO); (4)
//manage instance
songsRepo.save(songBO); (5)
return mapper.map(songBO); (6)
}
1 | @Transactional associates a Persistence Context with the thread of the call |
2 | propagation used to control activation and scope of transaction |
3 | REQUIRED triggers the transaction to start no later than this method |
4 | mapper converting DTO input argument to BO instance |
5 | BO instance saved to database and updated with primary key |
6 | mapper converting BO entity to DTO instance for return from service |
330.4. findSongsMatchingAll()
The findSongsMatchingAll()
method
-
accepts a
SongDTO
as a probe andPageable
to adjust the search and results -
declares a
@Transactional
annotation to be associated with a Persistence Context and propagationSUPPORTS
to indicate that no database changes will be performed by this method. -
calls the mapper to map from/to a
SongDTO
to/from aSong
BO -
uses the
SongsRepository
to interact with the database
@Transactional(propagation = Propagation.SUPPORTS) (1) (2) (3)
public Page<SongDTO> findSongsMatchingAll(SongDTO probeDTO, Pageable pageable) {
Song probe = mapper.map(probeDTO); (4)
ExampleMatcher matcher = ExampleMatcher.matchingAll().withIgnorePaths("id"); (5)
Page<Song> songs = songsRepo.findAll(Example.of(probe, matcher), pageable); (6)
return mapper.map(songs); (7)
}
1 | @Transactional associates a Persistence Context with the thread of the call |
2 | propagation used to control activation and scope of transaction |
3 | SUPPORTS allows any active transaction to be inherited by this method but does not proactively start one |
4 | mapper converting DTO input argument to BO instance to create probe for match |
5 | building matching rules to include an ignore of id property |
6 | finder method invoked with matching and paging arguments to return page of BOs |
7 | mapper converting page of BOs to page of DTOs |
331. RestController API
The @RestController
provides an HTTP Facade for our @Service
.
@RestController
@Slf4j
@RequiredArgsConstructor
public class SongsController {
public static final String SONGS_PATH="api/songs";
public static final String SONG_PATH= SONGS_PATH + "/{id}";
public static final String RANDOM_SONG_PATH= SONGS_PATH + "/random";
private final SongsService songsService; (1)
1 | @Service injected into class using constructor injection |
I will demonstrate two of the operations available.
331.1. createSong()
The createSong()
operation
-
is called using
POST /api/songs
method and URI -
passed a SongDTO containing the fields to use, marshaled in JSON or XML
-
calls the
@Service
to handle the details of creating the Song -
returns the created song using a SongDTO
@RequestMapping(path=SONGS_PATH,
method=RequestMethod.POST,
consumes={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE},
produces={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
public ResponseEntity<SongDTO> createSong(@RequestBody SongDTO songDTO) {
SongDTO result = songsService.createSong(songDTO); (1)
URI uri = ServletUriComponentsBuilder.fromCurrentRequestUri()
.replacePath(SONG_PATH)
.build(result.getId()); (2)
ResponseEntity<SongDTO> response = ResponseEntity.created(uri).body(result);
return response; (3)
}
1 | DTO from HTTP Request supplied to and result DTO returned from @Service method |
2 | URI of created instance calculated for Location response header |
3 | DTO marshalled back to caller with HTTP Response |
331.2. findSongsByExample()
The findSongsByExample()
operation
-
is called using "POST /api/songs/example" method and URI
-
passed a SongDTO containing the properties to search for using JSON or XML
-
calls the
@Service
to handle the details of finding the songs after mapping thePageable
from query parameters -
converts the
Page<SongDTO>
into aSongsPageDTO
to address marshaling concerns relative to XML -
returns the page as a
SongsPageDTO
@RequestMapping(path=SONGS_PATH + "/example",
method=RequestMethod.POST,
consumes={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE},
produces={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
public ResponseEntity<SongsPageDTO> findSongsByExample(
@RequestParam(value="pageNumber",defaultValue="0",required=false) Integer pageNumber,
@RequestParam(value="pageSize",required=false) Integer pageSize,
@RequestParam(value="sort",required=false) String sortString,
@RequestBody SongDTO probe) {
Pageable pageable=PageableDTO.of(pageNumber, pageSize, sortString).toPageable();(1)
Page<SongDTO> result=songsService.findSongsMatchingAll(probe, pageable); (2)
SongsPageDTO resultDTO = new SongsPageDTO(result); (3)
ResponseEntity<SongsPageDTO> response = ResponseEntity.ok(resultDTO);
return response;
}
1 | PageableDTO constructed from page request query parameters |
2 | @Service accepts DTO arguments for call and returns DTO constructs mixed with Spring Data paging types |
3 | type-specific SongsPageDTO marshalled back to caller to support type-specific XML namespaces |
331.3. WebClient Example
The following snippet shows an example of using a WebClient to request a page of finder results from the API. WebClient is part of the Spring WebFlux libraries, which implement reactive streams. The use of WebClient here is purely for example and not a requirement of anything created. However, using WebClient did force my hand in adding JAXB to the DTO mappings since Jackson XML is not yet supported by WebFlux. RestTemplate does support both Jackson and JAXB XML mapping, which would have made mapping simpler.
@Autowired
private WebClient webClient;
...
UriComponentsBuilder findByExampleUriBuilder = UriComponentsBuilder
.fromUri(serverConfig.getBaseUrl())
.path(SongsController.SONGS_PATH).path("/example");
...
//given
MediaType mediaType = ...
PageRequest pageable = PageRequest.of(0, 5, Sort.by(Sort.Order.desc("released")));
PageableDTO pageSpec = PageableDTO.of(pageable); (1)
SongDTO allSongsProbe = SongDTO.builder().build(); (2)
URI uri = findByExampleUriBuilder.queryParams(pageSpec.getQueryParams()) (3)
.build().toUri();
WebClient.RequestHeadersSpec<?> request = webClient.post()
.uri(uri)
.contentType(mediaType)
.body(Mono.just(allSongsProbe), SongDTO.class)
.accept(mediaType);
//when
ResponseEntity<SongsPageDTO> response = request
.retrieve()
.toEntity(SongsPageDTO.class).block();
//then
then(response.getStatusCode().is2xxSuccessful()).isTrue();
SongsPageDTO page = response.getBody();
1 | limiting query results to the first page, ordered by "released", with a page size of 5 |
2 | create a "match everything" probe |
3 | pageable properties added as query parameters |
WebClient/WebFlux does not yet support Jackson XML
WebClient and WebFlux do not yet support Jackson XML.
This is what primarily forced the example to leverage JAXB for XML.
WebClient/WebFlux automatically makes the decision/transition under the covers.
332. Summary
In this module we learned:
-
to integrate a Spring Data JPA Repository into an end-to-end application, accessed through an API
-
to implement a service tier that completes useful actions
-
to make a clear distinction between DTOs and BOs
-
to identify data type architectural decisions required for DTO and BO types
-
to setup proper transaction and other container feature boundaries using annotations and injection
-
to implement paging requests through the API
-
to implement page responses through the API
MongoDB with Mongo Shell
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
333. Introduction
This lecture will introduce working with MongoDB database using the Mongo shell.
333.1. Goals
The student will learn:
-
basic concepts behind the Mongo NoSQL database
-
to create a database and collection
-
to perform basic CRUD operations with database collection and documents using Mongo shell
333.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
identify the purpose of a MongoDB collection, structure of a MongoDB document, and types of example document fields
-
access a MongoDB database using the Mongo shell
-
perform basic CRUD actions on documents
-
perform paging commands
-
leverage the aggregation pipeline for more complex commands
334. Mongo Concepts
Mongo is a document-oriented database. This type of database enforces very few rules when it comes to schema. About the only rules that exist are:
-
a primary key field, called
_id
must exist -
no document can be larger than 16MB
GridFS API Supports Unlimited Size Documents.
MongoDB supports unlimited size documents using the GridFS API. GridFS is basically a logical document abstraction over a collection of related individual physical documents, called "chunks", each abiding by the standard document-size limits.
334.1. Mongo Terms
The table below lists a few key terms associated with MongoDB.
Mongo Term | Peer RDBMS Term | Description |
---|---|---|
Database | Database | a group of document collections that fall under the same file and administrative management |
Collection | Table | a set of documents with indexes and rules about how the documents are managed |
Document | Row | a collection of fields stored in binary JSON (BSON) format. RDBMS tables must have a defined schema and all rows must match that schema. |
Field | Column | a JSON property that can be a single value or nested document. An RDBMS column will have a single type based on the schema and cannot be nested. |
Server | Server | a running instance that can perform actions on the database. A server contains more than one database. |
Mongos | (varies) | an intermediate process used when data is spread over multiple servers. Since we will be using only a single server, we will not need a mongos. |
334.2. Mongo Documents
Mongo Documents are stored in a binary JSON format called "BSON". There are many native types that can be represented in BSON. Among them include "string", "boolean", "date", "ObjectId", "array", etc.
Documents/fields can be flat or nested.
{
"field1": value1, (1)
"field2": value2,
"field3": { (2)
"field31": value31,
"field32": value32
},
"field4": [ value41, value42, value43 ], (3)
"field5": [ (4)
{ "field511": value511, "field512": value512 },
{ "field521": value521}
{ "field513": value513, "field513": value513, "field514": value514 }
]
}
1 | example field with value of BSON type |
2 | example nested document within a field |
3 | example field with type array — with values of BSON type |
4 | example field with type array — with values of nested documents |
The follow-on interaction examples will use a flat document structure to keep things simple to start with.
335. MongoDB Server
To start our look into using Mongo commands, let's instantiate a MongoDB, connect with the Mongo shell, and execute a few commands.
335.1. Starting Docker-Compose MongoDB
One simple option we have to instantiate a MongoDB is to use Docker Compose.
The following snippet shows an example of launching MongoDB from the docker-compose.yml script in the example directory.
$ docker-compose up -d mongodb
Creating ejava_mongodb_1 ... done
$ docker ps --format "{{.Image}}\t{{.Ports}}\t{{.Names}}"
mongo:4.4.0-bionic 0.0.0.0:27017->27017/tcp mongo-book-example_mongodb_1 (1)
1 | mongodb container is running and the server's 27017 port is also mapped to the host |
This specific MongoDB server is configured to use authentication and has an admin account pre-configured to use credentials admin/secret
.
335.2. Connecting using Host’s Mongo Shell
If we have Mongo shell installed locally, we can connect to MongoDB using the default mapping to localhost.
$ which mongo
/usr/local/bin/mongo (1)
$ mongo -u admin -p secret (2) (3)
MongoDB shell version v4.4.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
1 | mongo shell happens to be installed locally |
2 | password can be securely prompted by leaving off command line |
3 | URL defaults to mongodb://127.0.0.1:27017 |
335.3. Connecting using Guest’s Mongo Shell
If we do not have Mongo shell installed locally, we can connect to MongoDB by executing the command in the MongoDB image.
$ docker-compose exec mongodb mongo -u admin -p secret (1) (2)
MongoDB shell version v4.4.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
1 | runs the mongo shell command within the mongodb Docker image |
2 | URL defaults to mongodb://127.0.0.1:27017 |
335.4. Switch to test Database
We start off with three default databases meant primarily for server use.
> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
We can switch to a database to make it the default database for follow-on commands even before it exists.
> use test (1)
switched to db test
> show collections
>
1 | makes the test database the default database for follow-on commands |
Mongo will create a new/missing database on-demand when the first document is inserted.
335.5. Database Command Help
We can get a list of all commands available to us for a collection using the db.<collection>.help()
command.
The collection does not have to exist yet.
> db.books.help() (1)
DBCollection help
...
db.books.insertOne( obj, <optional params> ) - insert a document, optional parameters are: w, wtimeout, j
db.books.insert(obj)
1 | command to list all possible commands for a collection |
336. Basic CRUD Commands
336.1. Insert Document
We can create a new document in the database, stored in a named collection.
The following snippet shows the syntax for inserting a single, new book in the books
collection.
All fields are optional at this point and the _id
field will be automatically generated by the server when we do not provide one.
> db.books.insertOne({title:"GWW", author:"MM", published:ISODate("1936-06-30")})
{
"acknowledged" : true,
"insertedId" : ObjectId("606c82da9ef76345a2bf0b7f") (1)
}
1 | insertOne command returns the _id assigned |
MongoDB creates the collection, if it does not exist.
> show collections
books
336.2. Primary Keys
MongoDB requires that all documents contain a primary key with the name _id
and will generate one of type ObjectID
if not provided.
You have the option of using a business value from the document or a self-generated unique ID, but it has to be stored in the _id
field.
The following snippet shows an example of an insert
using a supplied, numeric primary key.
> db.books.insert({_id:17, title:"GWW", author:"MM", published:ISODate("1936-06-30")})
WriteResult({ "nInserted" : 1 })
> db.books.find({_id:17})
{ "_id" : 17, "title" : "GWW", "author" : "MM", "published" : ISODate("1936-06-30T00:00:00Z") }
336.3. Document Index
All collections are required to have an index on the _id
field.
This index is generated automatically.
> db.books.getIndexes()
[
{ "v" : 2, "key" : { "_id" : 1 }, "name" : "_id_" } (1)
]
1 | index on _id field in books collection |
336.4. Create Index
We can create an index on one or more other fields using the createIndex() command.
The following example creates a non-unique, ascending index on the title
field.
By making it sparse, only documents with a title field are included in the index.
> db.books.createIndex({title:1}, {unique:false, sparse:true})
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
336.5. Find All Documents
We can find all documents by passing in a JSON document that matches the fields we are looking for.
We can find all documents in the collection by passing in an empty query ({}
).
Output can be made more readable by adding .pretty()
.
> db.books.find({}) (1)
{ "_id" : ObjectId("606c82da9ef76345a2bf0b7f"), "title" : "GWW", "author" : "MM", "published" : ISODate("1936-06-30T00:00:00Z") }
> db.books.find({}).pretty() (2)
{
"_id" : ObjectId("606c82da9ef76345a2bf0b7f"),
"title" : "GWW",
"author" : "MM",
"published" : ISODate("1936-06-30T00:00:00Z")
}
1 | empty query criteria matches all documents in the collection |
2 | adding .pretty() expands the output |
336.6. Return Only Specific Fields
We can limit the fields returned by using a "projection" expression: 1 means to include and 0 means to exclude. _id is automatically included and must be explicitly excluded. All other fields are automatically excluded and must be explicitly included.
> db.books.find({}, {title:1, published:1, _id:0}) (1)
{ "title" : "GWW", "published" : ISODate("1936-06-30T00:00:00Z") }
1 | find all documents and only include the title and published date |
336.7. Get Document by Id
We can obtain a document by searching on any number of its fields.
The following snippet locates a document by the primary key _id
field.
> db.books.find({_id:ObjectId("606c82da9ef76345a2bf0b7f")})
{ "_id" : ObjectId("606c82da9ef76345a2bf0b7f"), "title" : "GWW", "author" : "MM", "published" : ISODate("1936-06-30T00:00:00Z") }
336.8. Replace Document
We can replace the entire document by providing a filter and replacement document.
The snippet below filters on the _id
field and replaces the document with a version that modifies the title
field.
> db.books.replaceOne(
{ "_id" : ObjectId("606c82da9ef76345a2bf0b7f")},
{"title" : "Gone WW", "author" : "MM", "published" : ISODate("1936-06-30T00:00:00Z") })
{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 } (1)
1 | result document indicates a single match was found and modified |
The following snippet shows a difference in the results when a match is not found for the filter.
> db.books.replaceOne({ "_id" : "badId"}, {"title" : "Gone WW"})
{ "acknowledged" : true, "matchedCount" : 0, "modifiedCount" : 0 } (1)
1 | matchCount and modifiedCount result in 0 when filter does not match anything |
The following snippet shows the result of replacing the document.
> db.books.findOne({_id:ObjectId("606c82da9ef76345a2bf0b7f")})
{
"_id" : ObjectId("606c82da9ef76345a2bf0b7f"),
"title" : "Gone WW",
"author" : "MM",
"published" : ISODate("1936-06-30T00:00:00Z")
}
336.9. Save/Upsert a Document
We will receive an error if we issue an insert
a second time using an _id
that already exists.
> db.books.insert({_id:ObjectId("606c82da9ef76345a2bf0b7f"), title:"Gone WW", author:"MMitchell", published:ISODate("1936-06-30")})
WriteResult({
"nInserted" : 0,
"writeError" : {
"code" : 11000,
"errmsg" : "E11000 duplicate key error collection: test.books index: _id_ dup key: { _id: ObjectId('606c82da9ef76345a2bf0b7f') }",
}
})
We will be able to insert a new document or update an existing one using the save
command.
This very useful command performs an "upsert".
> db.books.save({_id:ObjectId("606c82da9ef76345a2bf0b7f"), title:"Gone WW", author:"MMitchell", published:ISODate("1936-06-30")}) (1)
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
1 | save command performs an upsert |
336.10. Update Field
We can update specific fields in a document using one of the update commands. This is very useful when modifying large documents or when two concurrent threads are looking to increment a value in the document.
> filter={ "_id" : ObjectId("606c82da9ef76345a2bf0b7f")} (1)
> command={$set:{"title" : "Gone WW"} }
> db.books.updateOne( filter, command )
{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 0 }
1 | using shell to store value in variable used in command |
> db.books.findOne({_id:ObjectId("606c82da9ef76345a2bf0b7f")})
{
"_id" : ObjectId("606c82da9ef76345a2bf0b7f"),
"title" : "Gone WW",
"author" : "MM",
"published" : ISODate("1936-06-30T00:00:00Z")
}
336.11. Delete a Document
We can delete a document using the delete
command and a filter.
> db.books.deleteOne({_id:ObjectId("606c82da9ef76345a2bf0b7f")})
{ "acknowledged" : true, "deletedCount" : 1 }
337. Paging Commands
As with most find() implementations, we need to take care to provide a limit to the number of documents returned. The Mongo shell has a built-in default limit. We can control what the database is asked to do using a few paging commands.
337.1. Sample Documents
This example has a small collection of 10 documents.
> db.books.count({})
10
The following lists the primary key, title, and author. There are no sorting or limit constraints placed on this output.
> db.books.find({}, {title:1, author:1})
{ "_id" : ObjectId("607c77169fca586207a97242"), "title" : "123Pale Kings and Princes", "author" : "Lanny Miller" }
{ "_id" : ObjectId("607c77169fca586207a97243"), "title" : "123Bury My Heart at Wounded Knee", "author" : "Ilona Leffler" }
{ "_id" : ObjectId("607c77169fca586207a97244"), "title" : "123Carrion Comfort", "author" : "Darci Jacobs" }
{ "_id" : ObjectId("607c77169fca586207a97245"), "title" : "123Antic Hay", "author" : "Dorcas Harris Jr." }
{ "_id" : ObjectId("607c77169fca586207a97246"), "title" : "123Where Angels Fear to Tread", "author" : "Latashia Gerhold" }
{ "_id" : ObjectId("607c77169fca586207a97247"), "title" : "123Tiger! Tiger!", "author" : "Miguel Gulgowski DVM" }
{ "_id" : ObjectId("607c77169fca586207a97248"), "title" : "123Waiting for the Barbarians", "author" : "Curtis Willms II" }
{ "_id" : ObjectId("607c77169fca586207a97249"), "title" : "123A Time of Gifts", "author" : "Babette Grimes" }
{ "_id" : ObjectId("607c77169fca586207a9724a"), "title" : "123Blood's a Rover", "author" : "Daryl O'Kon" }
{ "_id" : ObjectId("607c77169fca586207a9724b"), "title" : "123Precious Bane", "author" : "Jarred Jast" }
337.2. limit()
We can limit the output provided by the database by adding the limit()
command and supplying the maximum number of documents to return.
> db.books.find({}, {title:1, author:1}).limit(3) (1) (2) (3)
{ "_id" : ObjectId("607c77169fca586207a97242"), "title" : "123Pale Kings and Princes", "author" : "Lanny Miller" }
{ "_id" : ObjectId("607c77169fca586207a97243"), "title" : "123Bury My Heart at Wounded Knee", "author" : "Ilona Leffler" }
{ "_id" : ObjectId("607c77169fca586207a97244"), "title" : "123Carrion Comfort", "author" : "Darci Jacobs" }
1 | find all documents matching {} filter |
2 | return projection of _id (default), title, and author |
3 | limit results to first 3 documents |
337.3. sort()/skip()/limit()
We can page through the data by adding the skip()
command.
It is common that skip()
is accompanied by sort()
so that the follow-on commands are using the same criteria.
The following snippet shows the first few documents after sorting by author.
> db.books.find({}, {author:1}).sort({author:1}).skip(0).limit(3) (1)
{ "_id" : ObjectId("607c77169fca586207a97249"), "author" : "Babette Grimes" }
{ "_id" : ObjectId("607c77169fca586207a97248"), "author" : "Curtis Willms II" }
{ "_id" : ObjectId("607c77169fca586207a97244"), "author" : "Darci Jacobs" }
1 | return first page of limit() size, after sorting by author |
The following snippet shows the second page of documents sorted by author.
> db.books.find({}, {author:1}).sort({author:1}).skip(3).limit(3) (1)
{ "_id" : ObjectId("607c77169fca586207a9724a"), "author" : "Daryl O'Kon" }
{ "_id" : ObjectId("607c77169fca586207a97245"), "author" : "Dorcas Harris Jr." }
{ "_id" : ObjectId("607c77169fca586207a97243"), "author" : "Ilona Leffler" }
1 | return second page of limit() size, sorted by author |
The following snippet shows the last page of documents sorted by author. In this case, fewer documents than the limit are available.
> db.books.find({}, {author:1}).sort({author:1}).skip(9).limit(3) (1)
{ "_id" : ObjectId("607c77169fca586207a97247"), "author" : "Miguel Gulgowski DVM" }
1 | return last page sorted by author |
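For comparison, the following is a minimal sketch of the same second-page query expressed through Spring Data's MongoTemplate Java API, which is covered in a later lecture.
import java.util.List;
import org.bson.Document;
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.query.Query;

Query query = new Query()
        .with(Sort.by(Sort.Direction.ASC, "author")) (1)
        .skip(3)
        .limit(3); (2)
List<Document> secondPage = mongoTemplate.find(query, Document.class, "books");
1 | equivalent to sort({author:1}) |
2 | equivalent to skip(3) and limit(3) |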
338. Aggregation Pipelines
There are times when we need to perform multiple commands and reshape documents. It may be more efficient and better encapsulated to do this within the database versus issuing multiple commands to the database. MongoDB provides a feature called the Aggregation Pipeline that performs a sequence of commands called stages.
The intent of introducing the Aggregation topic is for those cases where one needs extra functionality without making multiple trips to the database and back to the client. The examples here will be very basic.
338.1. Common Commands
Some of these commands are common to db.<collection>.find()
:
$match, $sort, $project, $skip, and $limit
The primary difference between aggregate’s use of these common commands and find()
is that find()
can only operate against the documents in the collection. aggregate()
can work against the documents in the collection and any intermediate reshaping of the results along the pipeline.
Downstream Pipeline Stages do not use Collection Indexes
Only initial aggregation pipeline stage commands, operating against the database collection, can take advantage of indexes.
338.2. Unique Commands
Some commands unique to aggregation include:
-
group - similar to SQL’s "group by", allowing us to locate distinct, common values across multiple documents and perform a group operation (like
sum
) on their remaining fields -
lookup - similar functionality to SQL’s JOIN, where values in the results are used to locate additional information from other collections for the result document before returning to the client
-
…(see Aggregate Pipeline Stages documentation)
338.3. Simple Match Example
The following example implements functionality we could have implemented with db.books.find()
.
It uses 5 stages:
-
$match
- to select documents with title field containing the letterT
-
$sort
- to order documents byauthor
field in descending order -
$project
- return only the_id
(default) andauthor
fields -
$skip
- to skip over 0 documents -
$limit
- to limit output to 2 documents
> db.books.aggregate([
{$match: {title:/T/}},
{$sort: {author:-1}},
{$project:{author:1}},
{$skip:0},
{$limit:2} ])
{ "_id" : ObjectId("607c77169fca586207a97247"), "author" : "Miguel Gulgowski DVM" }
{ "_id" : ObjectId("607c77169fca586207a97246"), "author" : "Latashia Gerhold" }
338.4. Count Matches
This example implements a count of matching fields on the database.
The functionality could have been achieved with db.books.count()
, but it gives us a chance to show a few things that can be leveraged in more complex scenarios.
-
$match
- to select documents with title field containing the letterT
-
$group
- to re-organize/re-structure the documents in the pipeline to gather them under a new, primary key and to perform an aggregate function on their remaining fields. In this case we are assigning all documents thenull
primary key and incrementing a new field calledcount
in the result document.
> db.books.aggregate([
{$match:{ title:/T/}},
{$group: {_id:null, count:{ $sum:1}}} ]) (1)
{ "_id" : null, "count" : 3 } /(2)
1 | create a new document with field count and increment value by 1 for each occurrence |
2 | the resulting document is re-shaped by pipeline |
The following example assigns the primary key (_id) field to the author field instead, causing each result document to represent a distinct author that just happens to have only 1 instance each.
> db.books.aggregate([
{$match:{ title:/T/}},
{$group: {_id:"$author", count:{ $sum:1}}} ]) (1)
{ "_id" : "Miguel Gulgowski DVM", "count" : 1 }
{ "_id" : "Latashia Gerhold", "count" : 1 }
{ "_id" : "Babette Grimes", "count" : 1 }
1 | assign primary key to author field |
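For comparison, the following is a minimal sketch of the same pipeline expressed through Spring Data's MongoTemplate aggregation API, covered in a later lecture.
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
import org.bson.Document;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;
import org.springframework.data.mongodb.core.query.Criteria;

Aggregation pipeline = newAggregation(
        match(Criteria.where("title").regex("T")), (1)
        group("author").count().as("count")); (2)
AggregationResults<Document> results =
        mongoTemplate.aggregate(pipeline, "books", Document.class);
1 | equivalent to {$match:{title:/T/}} |
2 | equivalent to {$group:{_id:"$author", count:{$sum:1}}} |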
339. Helpful Commands
This section contains a set of helpful Mongo shell commands. It will be populated over time.
339.1. Default Database
We can invoke the Mongo shell with credentials and be immediately assigned a named, default database by:
-
authenticating as usual
-
supplying the database to execute against
-
supplying the database to authenticate against (commonly
admin
)
The following snippet shows an example of authenticating as admin
and starting with test
as the default database for follow-on commands.
$ docker-compose exec mongodb mongo test -u admin -p secret --authenticationDatabase admin
...
> db.getName()
test
> show collections
books
339.2. Command-Line Script
We can invoke the Mongo shell with a specific command to execute by using the --eval
command line parameter.
The following snippet shows an example of listing the contents of the books
collection in the test
database.
$ docker-compose exec mongodb mongo test -u admin -p secret --authenticationDatabase admin --eval 'db.books.find({},{author:1})'
MongoDB shell version v4.4.0
connecting to: mongodb://127.0.0.1:27017/test?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("47e146a5-49c0-4fe4-be67-cc8e72ea0ed9") }
MongoDB server version: 4.4.0
{ "_id" : ObjectId("607c77169fca586207a97242"), "author" : "Lanny Miller" }
{ "_id" : ObjectId("607c77169fca586207a97243"), "author" : "Ilona Leffler" }
{ "_id" : ObjectId("607c77169fca586207a97244"), "author" : "Darci Jacobs" }
{ "_id" : ObjectId("607c77169fca586207a97245"), "author" : "Dorcas Harris Jr." }
{ "_id" : ObjectId("607c77169fca586207a97246"), "author" : "Latashia Gerhold" }
{ "_id" : ObjectId("607c77169fca586207a97247"), "author" : "Miguel Gulgowski DVM" }
{ "_id" : ObjectId("607c77169fca586207a97248"), "author" : "Curtis Willms II" }
{ "_id" : ObjectId("607c77169fca586207a97249"), "author" : "Babette Grimes" }
{ "_id" : ObjectId("607c77169fca586207a9724a"), "author" : "Daryl O'Kon" }
{ "_id" : ObjectId("607c77169fca586207a9724b"), "author" : "Jarred Jast" }
340. Summary
In this module we learned:
-
to identify a MongoDB collection, document, and fields
-
to create a database and collection
-
to access a MongoDB database using the Mongo shell
-
to perform basic CRUD actions on documents to manipulate a MongoDB collection
-
to perform paging commands to control returned results
-
to leverage the aggregation pipeline for more complex commands
MongoTemplate
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
341. Introduction
There are at least three (3) different APIs for interacting with MongoDB using Java — the last two from Spring are closely related.
-
MongoClient - the core API from Mongo
-
MongoOperations (interface)/MongoTemplate (implementation class) - a command-based API around MongoClient from Spring, integrated into Spring Boot
-
Spring Data MongoDB Repository - a repository-based API from Spring Data that is consistent with Spring Data JPA
This lecture covers implementing interactions with a MongoDB using the MongoOperations API, implemented using MongoTemplate. Even if one intends to use the repository-based API, the MongoOperations API will still be necessary to implement various edge cases — like individual field changes versus whole document replacements.
341.1. Goals
The student will learn:
-
to setup a MongoDB Maven project with references to embedded test and independent development and operational instances
-
to map a POJO class to a MongoDB collection
-
to implement MongoDB commands using a Spring command-level MongoOperations/MongoTemplate Java API
341.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
declare project dependencies required for using Spring’s MongoOperations/MongoTemplate API
-
implement basic unit testing using a (seemingly) embedded MongoDB
-
define a connection to a MongoDB
-
switch between the embedded test MongoDB and stand-alone MongoDB for interactive development inspection
-
define a
@Document
class to map to MongoDB collection -
inject a MongoOperations/MongoTemplate instance to perform actions on a database
-
perform whole-document CRUD operations on a
@Document
class using the Java API -
perform surgical field operations using the Java API
-
perform queries with paging properties
-
perform Aggregation pipeline operations using the Java API
342. Mongo Project
Except for the possibility of indexes and defining specialized collection features, there is not the same schema rigor required to bootstrap a Mongo project or collection before use. Our primary tasks will be to:
-
declare a few, required dependencies
-
setup project for integration testing with an embedded MongoDB instance to be able to run tests with zero administration
-
conveniently switch between an embedded and stand-alone MongoDB instance to be able to inspect the database using the Mongo shell during development
342.1. Mongo Project Dependencies
The following snippet shows a dependency declaration for MongoDB APIs.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-mongodb</artifactId> (1)
</dependency>
1 | brings in all dependencies required to access database using Spring Data MongoDB |
That dependency primarily brings in dependencies that are general to Spring Data and specific to MongoDB.
[INFO] +- org.springframework.boot:spring-boot-starter-data-mongodb:jar:2.7.0:compile
[INFO] | +- org.mongodb:mongodb-driver-sync:jar:4.6.0:compile
[INFO] | | +- org.mongodb:bson:jar:4.6.0:compile
[INFO] | | \- org.mongodb:mongodb-driver-core:jar:4.6.0:compile
[INFO] | | \- org.mongodb:bson-record-codec:jar:4.6.0:runtime
[INFO] | \- org.springframework.data:spring-data-mongodb:jar:3.4.0:compile
[INFO] | +- org.springframework:spring-tx:jar:5.3.20:compile
[INFO] | \- org.springframework.data:spring-data-commons:jar:2.7.0:compile
That is enough to cover integration with an external MongoDB during operational end-to-end scenarios. Next we need to address the integration test environment.
342.2. Mongo Project Integration Testing Options
MongoDB is written in C++. That means that we cannot simply instantiate MongoDB within our integration test JVM. We have at least three options:
-
Fongo in-memory MongoDB implementation
-
Flapdoodle embedded MongoDB and Auto Configuration or referenced Maven plugins:
-
Testcontainers Docker wrapper
Each should be able to do the job for what we want to do here. However,
-
Although Fongo is an in-memory solution, it is not MongoDB and edge cases may not work the same as a real MongoDB instance.
-
Flapdoodle calls itself "embedded". However, the term embedded is meant to mean "within the scope of the test" and not "within the process itself". The download and management of the server is what is embedded. The Spring Boot Documentation discusses Flapdoodle and the Spring Boot Embedded Mongo AutoConfiguration seamlessly integrates Flapdoodle with a few options. Full control of the configuration can be performed using the referenced Maven plugins or writing your own
@Configuration
beans that invoke the Flapdoodle API directly. -
Testcontainers provides full control over the versions and configuration of MongoDB instances using Docker. The following article points out some drawbacks to using Flapdoodle and how leveraging Testcontainers solved their issues. [67]
342.3. Flapdoodle Test Dependencies
This lecture will use the Flapdoodle Embedded Mongo setup. The following Maven dependency will bring in the Flapdoodle libraries and trigger the Spring Boot Embedded MongoDB Auto Configuration.
<dependency>
<groupId>de.flapdoodle.embed</groupId>
<artifactId>de.flapdoodle.embed.mongo</artifactId>
<scope>test</scope>
</dependency>
A test instance of MongoDB is downloaded and managed through a test library called Flapdoodle Embedded Mongo. It is called "embedded", but unlike H2 and other embedded Java RDBMS implementations — the only thing that is embedded about this capability is the logical management feel. The library downloads a MongoDB instance (cached), starts, and stops the instance as part of running the test. Spring Data MongoDB includes a starter that will activate Flapdoodle when running a unit integration test and it detects the library on the classpath. We can bypass the use of Flapdoodle and use an externally managed MongoDB instance by turning off the Flapdoodle starter.
342.4. MongoDB Access Objects
There are two primary beans of interest when we connect and interact with MongoDB: MongoClient and MongoOperations/MongoTemplate.
-
MongoClient is a client provided by Mongo that provides the direct connection to the database and mimics the behavior of the Mongo Shell using Java. AutoConfiguration will automatically instantiate this, but can be customized using
MongoClients
factory class. -
MongoOperations is an interface provided by Spring Data that defines a type-mapped way to use the client
-
MongoTemplate is the implementation class for MongoOperations — also provided by Spring Data. AutoConfiguration will automatically instantiate this using the
MongoClient
and a specific database name.
Embedded MongoDB Auto Configuration Instantiates MongoClient to Reference Flapdoodle Instance
By default, the Spring Boot Embedded MongoDB Auto Configuration class will instantiate a MongoDB instance using Flapdoodle and instantiate a MongoClient that references that instance.
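The following is a minimal sketch of constructing these two beans by hand; auto-configuration normally does this for us, and the URI matches the spring.data.mongodb.uri property shown in the next section.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.springframework.data.mongodb.core.MongoTemplate;

MongoClient mongoClient = MongoClients.create(
        "mongodb://admin:secret@localhost:27017/test?authSource=admin"); (1)
MongoTemplate mongoTemplate = new MongoTemplate(mongoClient, "test"); (2)
1 | MongoClients factory builds the client from a connection URI |
2 | MongoTemplate wraps the client plus a specific database name |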
342.5. MongoDB Connection Properties
To communicate with an explicit MongoDB server, we need to supply various properties or combine them into a single spring.data.mongodb.uri property.
The following example property file lists the individual properties commented out and the combined properties expressed as a URL.
These will be used to automatically instantiate an injectable MongoClient
and MongoTemplate
instance.
#spring.data.mongodb.host=localhost
#spring.data.mongodb.port=27017
#spring.data.mongodb.database=test
#spring.data.mongodb.authentication-database=admin
#spring.data.mongodb.username=admin
#spring.data.mongodb.password=secret
spring.data.mongodb.uri=mongodb://admin:secret@localhost:27017/test?authSource=admin
342.6. Injecting MongoTemplate
The MongoDB starter takes care of declaring key MongoClient
and MongoTemplate
@Bean
instances that can be injected into components.
Generally, injection of the MongoClient
will not be necessary.
@Autowired
private MongoTemplate mongoTemplate; (1)
1 | MongoTemplate defines a starting point to interface to MongoDB in a Spring application |
Alternatively, we can inject using the interface of MongoTemplate.
@Autowired
private MongoOperations mongoOps; (1)
1 | MongoOperations is the interface for MongoTemplate |
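A minimal usage sketch follows, assuming a hypothetical Book @Document class mapped to the books collection used in the Mongo shell lecture.
Book book = new Book(); (1)
book.setTitle("GWW");
mongoTemplate.insert(book); (2)
Book found = mongoTemplate.findById(book.getId(), Book.class); (3)
1 | Book is a hypothetical @Document class, not part of this lecture's code |
2 | whole-document insert; the id field is assigned during the insert |
3 | whole-document read by primary key |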
342.7. Disabling Embedded MongoDB
By default, Spring Boot will automatically use the Embedded MongoDB and Flapdoodle test instance for our MongoDB. For development, we may want to work against a live MongoDB instance so that we can interactively inspect the database using the Mongo shell. The only way to prevent using Embedded MongoDB during testing is to disable the starter.
The following snippet shows the command-line system property that will disable EmbeddedMongoAutoConfiguration
from activating. That will leave only the standard MongoAutoConfiguration
to execute and setup MongoClient using spring.data.mongodb
properties.
-Dspring.autoconfigure.exclude=\
org.springframework.boot.autoconfigure.mongo.embedded.EmbeddedMongoAutoConfiguration
To make things simpler, I added a conditional @Configuration
class that would automatically trigger the exclusion of the EmbeddedMongoAutoConfiguration
when the spring.data.mongodb.uri
was present.
@Configuration
@ConditionalOnProperty(prefix="spring.data.mongodb",name="uri",matchIfMissing=false)(1)
@EnableAutoConfiguration(exclude = EmbeddedMongoAutoConfiguration.class) (2)
public class DisableEmbeddedMongoConfiguration {
}
1 | class is activated on the condition that property spring.data.mongodb.uri be present |
2 | when activated, class definition will disable EmbeddedMongoAutoConfiguration |
342.8. @ActiveProfiles
With the ability to turn on/off the EmbeddedMongo and MongoDB configurations, it would be nice to make this work seamlessly with profiles.
We know that we can define an @ActiveProfiles
for integration tests to use, but this is very static.
It cannot be changed during normal build time using a command-line option.
@SpringBootTest(classes= NTestConfiguration.class) (1)
@ActiveProfiles(profiles="mongodb") (2)
public class MongoOpsBooksNTest {
1 | defines various injectable instances for testing |
2 | statically defines which profile will be currently active |
What we can do is take advantage of the resolver
option of @ActiveProfiles
.
Anything we list in profiles
is the default.
Anything that is returned from the resolver
is what is used.
The resolver
is an instance of ActiveProfilesResolver
.
@SpringBootTest(classes= NTestConfiguration.class)
@ActiveProfiles(profiles={ ... }, resolver = ...class) (1) (2)
public class MongoOpsBooksNTest {
1 | profiles list the default profile(s) to use |
2 | resolver implements ActiveProfilesResolver and determines what profiles to use at runtime |
342.9. TestProfileResolver
I implemented a simple class based on an internet example from Amit Kumar.
[68]
The class will inspect the spring.profiles.active
if present and return an array of strings containing those profiles.
If the property does not exist, then the default options of the test class are used.
-Dspring.profiles.active=mongodb,foo,bar
The following snippet shows how that is performed.
//Ref: https://www.allprogrammingtutorials.com/tutorials/overriding-active-profile-boot-integration-tests.php
import org.springframework.test.context.ActiveProfilesResolver;
import org.springframework.test.context.support.DefaultActiveProfilesResolver;

public class TestProfileResolver implements ActiveProfilesResolver {
    private final String PROFILE_KEY = "spring.profiles.active";
    private final DefaultActiveProfilesResolver defaultResolver = new DefaultActiveProfilesResolver();
@Override
public String[] resolve(Class<?> testClass) {
return System.getProperties().containsKey(PROFILE_KEY) ?
//return profiles expressed in property as array of strings
System.getProperty(PROFILE_KEY).split("\\s*,\\s*") : (1)
//return profile(s) expressed in the class' annotation
defaultResolver.resolve(testClass);
}
}
1 | regexp splits string at the comma (',') character and an unlimited number of contiguous whitespace characters on either side |
342.10. Using TestProfileResolver
The following snippet shows how TestProfileResolver
can be used by an integration test.
-
The test uses no profile by default — activating Embedded MongoDB.
-
If the mongodb profile is specified using a system property or temporarily inserted into the source — then that profile will be used.
-
Since my mongodb profile declares spring.data.mongodb.uri, Embedded MongoDB is deactivated.
@SpringBootTest(classes= NTestConfiguration.class) (1)
@ActiveProfiles(resolver = TestProfileResolver.class) (2)
//@ActiveProfiles(profiles="mongodb", resolver = TestProfileResolver.class) (3)
public class MongoOpsBooksNTest {
1 | defines various injectable instances for testing |
2 | defines which profile will be currently active |
3 | defines which profile will be currently active, with mongodb being the default profile |
342.11. Inject MongoTemplate
In case you got a bit lost in that testing detour, we are now at a point where we can begin interacting with our chosen MongoDB instance using an injected MongoOperations
(the interface) or MongoTemplate
(the implementation class).
@Autowired
private MongoTemplate mongoTemplate;
I wanted to show you how to use the running MongoDB when we write the integration tests using MongoTemplate
so that you can inspect the live DB instance with the Mongo shell while the database is undergoing changes.
Refer to the previous MongoDB lecture for information on how to connect the DB with the Mongo shell.
343. Example POJO
We will be using an example Book
class to demonstrate some database mapping and interaction concepts.
The class properties happen to be mutable, and the class provides an all-args constructor to support a builder as well as with() modifiers that chain modifications by creating new instances.
These are not specific requirements of Spring Data Mongo.
Spring Data Mongo is designed to work with many different POJO designs.
package info.ejava.examples.db.mongo.books.bo;
...
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Field;
@Document(collection = "books")
@Getter
@Builder
@With
@AllArgsConstructor
public class Book {
@Id
private String id;
@Setter
@Field(name="title")
private String title;
@Setter
private String author;
@Setter
private LocalDate published;
}
343.1. Property Mapping
343.1.1. Collection Mapping
Spring Data Mongo will map instances of the class to a collection
-
by the same name as the class (e.g.,
book
, by default) -
by the
collection
name supplied in the@Document
annotation
@Document(collection = "books") (2) public class Book { (1)
1 | instances are, by default, mapped to "book" collection |
2 | @Documentation.collection annotation property overrides default collection name |
MongoTemplate also provides the ability to independently provide the collection name during the command — which makes the class mapping even less important.
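For illustration, the following sketch shows the command-time form using the lecture's Book type; the explicit "books" name is an assumption matching the @Document mapping.
//sketch: supplying the collection name at command time instead of relying on class mapping
mongoTemplate.insert(book, "books"); //insert into an explicitly named collection
List<Book> allBooks = mongoTemplate.findAll(Book.class, "books"); //query the named collection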
343.1.2. Primary Key Mapping
The MongoDB _id
field will be mapped to a field that either
-
is called
id
-
is annotated with
@Id
-
is mapped to field
_id
using@Field
annotation
import org.springframework.data.annotation.Id;
@Id (1)
private String id; (1) (2)
1 | property is both named id and annotated with @Id to map to _id field |
2 | String id type can be mapped to auto-generated MongoDB _id field |
Only properties of type String, BigInteger, or ObjectId can have auto-generated _id values mapped to them.
343.2. Field Mapping
Class properties will be mapped, by default, to a field of the same name.
The @Field
annotation can be used to customize that behavior.
import org.springframework.data.mongodb.core.mapping.Field;
@Field(name="title") (1)
private String titleXYZ;
1 | maps Java property titleXYZ to MongoDB document field title |
We can annotate a property with @Transient
to prevent a property from being stored in the database.
import org.springframework.data.annotation.Transient;
@Transient (1)
private String dontStoreMe;
1 | @Transient excludes the Java property from being mapped to the database |
343.3. Instantiation
Spring Data Mongo leverages constructors in the following order
-
No argument constructor
-
Multiple argument constructor annotated with
@PersistenceConstructor
-
A solo, multiple-argument constructor (preferably an all-args constructor)
Given our example, the all-args constructor will be used.
343.4. Property Population
For properties not yet set by the constructor, Spring Data Mongo will set fields using the following order
-
use
setter()
if supplied -
use
with()
if supplied, to construct a copy with the new value -
directly modify the field using reflection
344. Command Types
MongoTemplate offers different types of command interactions
- Whole Document
-
complete document passed in as argument and/or returned as result
- By Id
-
command performed on document matching provided ID
- Filter
-
command performed on documents matching filter
- Field Modifications
-
command makes field level changes to database documents
- Paging
-
options to finder commands to limit results returned
- Aggregation Pipeline
-
sequential array of commands to be performed on the database
These are not the only categories one could use to describe the massive command set, but they will be enough to work with for a while. Inspect the MongoTemplate Javadoc for more options and detail.
345. Whole Document Operations
The MongoTemplate
instance already contains a reference to a specific database and the @Document
annotation of the POJO has the collection name — so the commands know exactly which collection to work with.
Commands also offer options to express the collection as a string at command-time to add flexibility to mapping approaches.
345.1. insert()
MongoTemplate
offers an explicit insert()
that will always attempt to insert a new document without checking if the ID already exists.
If the created document has a generated ID not yet assigned — then this should always successfully add a new document.
One thing to note about class mapping is that MongoTemplate
adds an additional field to the document during insert.
This field is added to support polymorphic instantiation of result classes.
{ "_id" : ObjectId("608b3021bd49095dd4994c9d"),
"title" : "Vile Bodies",
"author" : "Ernesto Rodriguez",
"published" : ISODate("2015-03-10T04:00:00Z"),
"_class" : "info.ejava.examples.db.mongo.books.bo.Book" } (1)
1 | MongoTemplate adds extra _class field to help dynamically instantiate query results |
This behavior can be turned off by configuring your own instance of MongoTemplate, as shown in the following example.
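The following is a minimal sketch of one common approach, assuming Spring Data MongoDB 3.x bean types (earlier versions used MongoDbFactory); the bean method and its wiring are assumptions, not the lecture's code. Registering a converter with a null type key suppresses the _class field.
import org.springframework.context.annotation.Bean;
import org.springframework.data.mongodb.MongoDatabaseFactory;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.convert.DefaultDbRefResolver;
import org.springframework.data.mongodb.core.convert.DefaultMongoTypeMapper;
import org.springframework.data.mongodb.core.convert.MappingMongoConverter;
import org.springframework.data.mongodb.core.mapping.MongoMappingContext;

@Bean
public MongoTemplate mongoTemplate(MongoDatabaseFactory dbFactory, MongoMappingContext mappingContext) {
    MappingMongoConverter converter =
            new MappingMongoConverter(new DefaultDbRefResolver(dbFactory), mappingContext);
    converter.setTypeMapper(new DefaultMongoTypeMapper(null)); //null type key => no _class field written
    return new MongoTemplate(dbFactory, converter);
}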
345.1.1. insert() Successful
The following snippet shows an example of a transient book instance being successfully inserted into the database collection using the insert
command.
//given an entity instance
Book book = ...
//when persisting
mongoTemplate.insert(book); (1) (2)
//then documented is persisted
then(book.getId()).isNotNull();
then(mongoTemplate.findById(book.getId(), Book.class)).isNotNull();
1 | transient document assigned an ID and inserted into database collection |
2 | database referenced by MongoTemplate and collection identified in Book @Document.collection annotation |
345.1.2. insert() Duplicate Fail
If the created document is given an assigned ID value, then the call will fail with a DuplicateKeyException
exception if the ID already exists.
import org.springframework.dao.DuplicateKeyException;
...
//given a persisted instance
Book book = ...
mongoTemplate.insert(book);
//when persisting an instance by the same ID
Assertions.assertThrows(DuplicateKeyException.class,
()->mongoTemplate.insert(book)); (1)
1 | document with ID matching database ID cannot be inserted |
345.2. save()/Upsert
The save()
command is an "upsert" (Update or Insert) command and likely the simplest form of "upsert" provided by MongoTemplate (there are more).
It can be used to insert a document if new or replace if already exists - based only on the evaluation of the ID.
345.2.1. Save New
The following snippet shows a new transient document being saved to the database collection.
We know that it is new because the ID is unassigned and generated at save()
time.
//given a document not yet saved to DB
Book transientBook = ...
assertThat(transientBook.getId()).isNull();
//when - updating
mongoTemplate.save(transientBook);
//then - db has new state
then(transientBook.getId()).isNotNull();
Book dbBook = mongoTemplate.findById(transientBook.getId(), Book.class);
then(dbBook.getTitle()).isEqualTo(transientBook.getTitle());
then(dbBook.getAuthor()).isEqualTo(transientBook.getAuthor());
then(dbBook.getPublished()).isEqualTo(transientBook.getPublished());
345.2.2. Replace Existing
The following snippet shows a new document instance with the same ID as a document in the database, but with different values.
In this case, save()
performs an update/(whole document replacement).
//given a persisted instance
Book originalBook = ...
mongoTemplate.insert(originalBook);
Book updatedBook = mapper.map(dtoFactory.make()).withId(originalBook.getId());
assertThat(updatedBook.getTitle()).isNotEqualTo(originalBook.getTitle());
//when - updating
mongoTemplate.save(updatedBook);
//then - db has new state
Book dbBook = mongoTemplate.findById(updatedBook.getId(), Book.class);
then(dbBook.getTitle()).isEqualTo(updatedBook.getTitle());
then(dbBook.getAuthor()).isEqualTo(updatedBook.getAuthor());
then(dbBook.getPublished()).isEqualTo(updatedBook.getPublished());
345.3. remove()
remove()
is another command that accepts a document as its primary input.
It returns some metrics about what was found and removed.
The snippet below shows the successful removal of an existing document.
The DeleteResult
response document provides feedback of what occurred.
//given a persisted instance
Book book = ...
mongoTemplate.save(book);
//when - deleting
DeleteResult result = mongoTemplate.remove(book);
long count = result.getDeletedCount();
//then - no longer in DB
then(count).isEqualTo(1);
then(mongoTemplate.findById(book.getId(), Book.class)).isNull();
346. Operations By ID
There are very few commands that operate on an explicit ID.
findById
is the only example.
I wanted to highlight the fact that most commands use a flexible query filter and we will show examples of that next.
346.1. findById()
findById()
will return the complete document associated with the supplied ID.
The following snippet shows an example of the document being found.
//given a persisted instance
Book book = ...
//when finding
Book dbBook = mongoTemplate.findById(book.getId(), Book.class); (1)
//then document is found
then(dbBook.getId()).isEqualTo(book.getId());
then(dbBook.getTitle()).isEqualTo(book.getTitle());
then(dbBook.getAuthor()).isEqualTo(book.getAuthor());
then(dbBook.getPublished()).isEqualTo(book.getPublished());
1 | Book class is supplied to identify the collection and the type of response object to populate |
When no document is found, findById() does not throw an exception — it simply returns null.
//given a persisted instance
String missingId = "12345";
//when finding
Book dbBook = mongoTemplate.findById(missingId, Book.class);
//then
then(dbBook).isNull();
347. Operations By Query Filter
Many commands accept a Query
object used to filter which documents in the collection the command applies to.
The Query
can express:
-
criteria
-
targeted types
-
paging
We will stick to just the simple criteria here.
Criteria filter = Criteria.where("field1").is("value1")
.and("field2").not().is("value2");
If we specify the collection name (e.g., "books") in the command versus the type (e.g., Book
class), we lack the field/type mapping information.
That means we must explicitly name the field and use the type known by the MongoDB collection.
Query.query(Criteria.where("id").is(id)); //Book.class (1)
Query.query(Criteria.where("_id").is(new ObjectId(id))); //"books" (2)
1 | can use property values when supplying mapped class in full command |
2 | must supply field and explicit mapping type when supplying collection name in full command |
347.1. exists() By Criteria
exists()
accepts a Query
and returns a simple true or false.
The query can be as simple or complex as necessary.
The following snippet looks for documents with a matching ID.
//given a persisted instance
Book book = ...
mongoTemplate.save(book);
//when testing exists
Query filter = Query.query(Criteria.where("id").is(id));
boolean exists = mongoTemplate.exists(filter,Book.class);
//then document exists
then(exists).isTrue();
MongoTemplate was smart enough to translate the "id" property to the _id
field and the String value to an ObjectId
when building the criteria with a mapped class.
{ "_id" : { "$oid" : "608ae2939f024c640c3b1d4b"}}
347.2. delete()
delete()
is another command that can operate on a criteria filter.
//given a persisted instance
Book book = ...
mongoTemplate.save(book);
//when - deleting
Query filter = Query.query(Criteria.where("id").is(book.getId()));
DeleteResult result = mongoTemplate.remove(filter, Book.class);
//then - no longer in DB
then(result.getDeletedCount()).isEqualTo(1);
then(mongoTemplate.exists(filter, Book.class)).isFalse();
348. Field Modification Operations
For cases with large documents — where it would be an unnecessary expense to retrieve the entire document and then to write it back with changes — MongoTemplate can issue individual field commands. This is also useful in concurrent modifications where one wants to upsert a document (and have only a single instance) but also update an existing document with fresh information (e.g., increment a counter, set a processing timestamp).
348.1. update() Field(s)
The update()
command can be used to perform actions on individual fields.
The following example changes the title of the first document that matches the provided criteria.
Update commands can express more than simple field sets, including incrementing, renaming, and moving fields, as well as manipulating arrays.
//given a persisted instance
Book originalBook = ...
mongoTemplate.save(originalBook);
String newTitle = "X" + originalBook.getTitle();
//when - updating
Query filter = Query.query(Criteria.where("_id").is(new ObjectId(id)));(1)
Update update = new Update(); (2)
update.set("title", newTitle); (3)
UpdateResult result = mongoTemplate.updateFirst(filter, update, "books"); (4)
//{ "_id" : { "$oid" : "60858ca8a3b90c12d3bb15b2"}} ,
//{ "$set" : { "title" : "XTo Sail Beyond the Sunset"}}
long found = result.getMatchedCount();
//then - db has new state
then(found).isEqualTo(1);
Book dbBook = mongoTemplate.findById(originalBook.getId(), Book.class);
then(dbBook.getTitle()).isEqualTo(newTitle);
then(dbBook.getAuthor()).isEqualTo(originalBook.getAuthor());
then(dbBook.getPublished()).isEqualTo(originalBook.getPublished());
1 | identifies a criteria for update |
2 | individual commands to apply to the database document |
3 | document found will have its title changed |
4 | must use explicit _id field and ObjectId value when using ("books") collection name versus Book class |
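As a hedged illustration of those richer field operations, the following sketch uses hypothetical copiesSold and tags fields that are not part of the lecture's Book document.
Update update = new Update()
        .inc("copiesSold", 1)      //increment a numeric field (hypothetical field)
        .rename("title", "name")   //rename a field within matching documents
        .push("tags", "classic");  //append a value to an array field (hypothetical field)
mongoTemplate.updateFirst(filter, update, Book.class); //filter as in the example above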
348.2. upsert() Fields
If the document was not found and we want to be in a state where one will exist with the desired title, we could use an upsert()
instead of an update()
.
UpdateResult result = mongoTemplate.upsert(filter, update, "books"); (1)
1 | upsert guarantees us that we will have a document in the books collection with the intended modifications |
349. Paging
In conjunction with find
commands, we soon need to add paging instructions in order to sort and slice the results into page-sized bites.
MongoTemplate
offers two primary ways to express paging
-
Query configuration
-
Pageable command parameter
349.1. skip()/limit()
We can express offset and limit on the Query
object using skip()
and limit()
builder methods.
Query query = new Query().skip(offset).limit(limit);
In the example below, a findOne()
with skip()
is performed to locate a single, random document.
private final SecureRandom random = new SecureRandom();
public Optional<Book> random() {
    Optional<Book> randomBook = Optional.empty();
    long count = mongoTemplate.count(new Query(), "books");
    if (count!=0) {
        int offset = random.nextInt((int)count);
        Book book = mongoTemplate.findOne(new Query().skip(offset), Book.class); (1) (2)
        randomBook = book==null ? Optional.empty() : Optional.of(book);
    }
    return randomBook;
}
1 | skip() is eliminating offset documents from the results |
2 | findOne() is reducing the results to a single (first) document |
We could have also expressed the command with find()
and limit(1)
.
mongoTemplate.find(new Query().skip(offset).limit(1), Book.class);
349.2. Sort
With offset and limit, we often need to express sort — which can get complex.
Spring Data defines a Sort
class that can express a sequence of properties to sort in ascending and/or descending order.
That too can be assigned to the Query
instance.
public List<Book> find(List<String> order, int offset, int limit) {
Query query = new Query();
query.with( Sort.by(order.toArray(new String[0]))); (1)
query.skip(offset); (2)
query.limit(limit); (3)
return mongoTemplate.find(query, Book.class);
}
1 | Query accepts a standard Sort type to implement ordering |
2 | Query accepts a skip to perform an offset into the results |
3 | Query accepts a limit to restrict the number of returned results. |
349.3. Pageable
Spring Data provides a Pageable
type that can express sort, offset, and limit — using Sort, pageSize, and pageNumber.
That too can be assigned to the Query
instance.
int pageNo=1;
int pageSize=3;
Pageable pageable = PageRequest.of(pageNo, pageSize,
Sort.by(Sort.Direction.DESC, "published"));
public List<Book> find(Pageable pageable) {
return mongoTemplate.find(new Query().with(pageable), Book.class); (1)
}
1 | Query accepts a Pageable to permit flexible ordering, offset, and limit |
350. Aggregation
Most queries can be performed using the database find()
commands.
However, as we have seen in the MongoDB lecture — some complex queries require different stages and command types to handle selections, projections, grouping, etc.
For those cases, Mongo provides the Aggregation Pipeline — which can be accessed through the MongoTemplate
.
The following snippet shows a query that locates all documents that contain an author
field and match a regular expression.
//given
int minLength = ...
Set<String> ids = savedBooks.stream() ... //return IDs of docs matching criteria
String expression = String.format("^.{%d,}$", minLength);
//when pipeline executed
Aggregation pipeline = Aggregation.newAggregation(
Aggregation.match(Criteria.where("author").regex(expression)),
Aggregation.match(Criteria.where("author").exists(true))
);
AggregationResults<Book> result = mongoTemplate.aggregate(pipeline,"books",Book.class);
List<Book> foundBooks = result.getMappedResults();
//then expected IDs found
Set<String> foundIds = foundBooks.stream()
.map(s->s.getId()).collect(Collectors.toSet());
then(foundIds).isEqualTo(ids);
Mongo BasicDocument Issue with $exists Command
The Aggregation Pipeline was required in this case because a normal Query criteria cannot contain two expressions for the same field:
org.springframework.data.mongodb.InvalidMongoDbApiUsageException: Due to limitations of the com.mongodb.BasicDocument, you can't add a second 'author' expression specified as 'author : Document{{$exists=true}}'. Criteria already contains 'author : ^.{22,}$'. This provides a good example of how to divide up the commands into independent queries using Aggregation Pipeline. |
351. ACID Transactions
Before we leave the accessing MongoDB through the MongoTemplate
Java API topic, I wanted to lightly cover ACID transactions.
-
Atomicity
-
Consistency
-
Isolation
-
Durability
351.1. Atomicity
MongoDB has made a lot of great strides in scale and performance by providing flexible document structures. Individual caller commands to change a document represent separate, atomic transactions. Documents can be as large or small as one desires, and document atomicity should be taken into account when forming the document schema.
However, as of MongoDB 4.0, MongoDB supports multi-document atomic transactions if absolutely necessary. The following online resource provides some background on how to accomplish this. [69]
MongoDB Multi-Document Transactions are not the Normal Path
Just because you can implement multi-document atomic transactions and references between documents, don’t bring an RDBMS mindset to designing document schema. Try to make a single document represent the state that must remain consistent. |
MongoDB documentation warns against casual use of multi-document transactions, so they should not be a first choice.
351.2. Consistency
Since MongoDB does not support a fixed schema or enforcement of foreign references between documents, there is very little for the database to keep consistent. The primary consistency rules the database must enforce are any unique indexes — requiring that specific fields be unique within the collection.
351.3. Isolation
Within the context of a single document change — MongoDB [70]
-
will always prevent a reader from seeing partial changes to a document.
-
will provide a reader a complete version of a document that may have been inserted/updated after a
find()
was initiated but before it was returned to the caller (i.e., can receive a document that no longer matches the original query) -
may miss including documents that satisfy a query after the query was initiated but before the results are returned to the caller
351.4. Durability
The durability of a Mongo transaction is a function of the number of nodes within a cluster that acknowledge a change before returning the call to the client.
UNACKNOWLEDGED
is fast but extremely unreliable.
Other settings, including MAJORITY
, at least guarantee that one or more nodes in the cluster have written the change.
These are expressed using the MongoDB WriteConcern class.
MongoTemplate
allows us to set the WriteConcern for follow-on MongoTemplate
commands.
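For example (a sketch, not from the lecture), the write concern for follow-on commands can be raised to MAJORITY:
import com.mongodb.WriteConcern;
...
mongoTemplate.setWriteConcern(WriteConcern.MAJORITY); //acknowledge only after a majority of nodes persist the write
mongoTemplate.save(book); //subsequent commands use the configured write concern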
Durability is a more advanced topic and requires coverage of system administration and cluster setup — which is well beyond the scope of this lecture.
My point of bringing this and other ACID topics up here is to only point out that the MongoTemplate
offers access to these additional features.
352. Summary
In this module we learned to:
-
setup a MongoDB Maven project
-
inject a MongoOperations/MongoTemplate instance to perform actions on a database
-
instantiate a (seemingly) embedded MongoDB connection for integration tests
-
instantiate a stand-alone MongoDB connection for interactive development and production deployment
-
switch between the embedded test MongoDB and stand-alone MongoDB for interactive development inspection
-
map a
@Document
class to a MongoDB collection -
implement MongoDB commands using a Spring command-level MongoOperations/MongoTemplate Java API
-
perform whole-document CRUD operations on a
@Document
class using the Java API -
perform surgical field operations using the Java API
-
perform queries with paging properties
-
perform Aggregation pipeline operations using the Java API
Spring Data MongoDB Repository
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
353. Introduction
MongoTemplate provided a lot of capability to interface with the database, but with a significant amount of code required. Spring Data MongoDB Repository eliminates much of the boilerplate code for the most common operations and allows us access to MongoTemplate for the harder edge-cases.
Due to the common Spring Data framework between the two libraries and the resulting similarity between Spring Data JPA and Spring Data MongoDB repositories, this lecture is about 95% the same as the Spring Data JPA lecture. Although it is presumed that the Spring Data JPA lecture precedes this lecture — it was written so that was not a requirement. However, if you have already mastered Spring Data JPA Repositories, you should be able to quickly breeze through this material because of the significant similarities in concepts and APIs. |
353.1. Goals
The student will learn:
-
to manage objects in the database using the Spring Data MongoDB Repository
-
to leverage different types of built-in repository features
-
to extend the repository with custom features when necessary
353.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
declare a
MongoRepository
for an existing@Document
-
perform simple CRUD methods using provided repository methods
-
add paging and sorting to query methods
-
implement queries based on POJO examples and configured matchers
-
implement queries based on predicates derived from repository interface methods
-
implement a custom extension of the repository for complex or compound database access
354. Spring Data MongoDB Repository
Spring Data MongoDB provides repository support for @Document
-based mappings.
[71]
We start off by writing no mapping code — just interfaces associated with our @Document
and primary key type — and have Spring Data MongoDB implement the desired code.
The Spring Data MongoDB interfaces are layered — offering useful tools for interacting with the database.
Our primary @Document
types will have a repository interface declared that inherit from MongoRepository
and any custom interfaces we optionally define.
355. Spring Data MongoDB Repository Interfaces
As we go through these interfaces and methods, please remember that all of the method implementations of these interfaces (except for custom) will be provided for us.
Interface | Purpose
---|---
Repository<T,ID> | marker interface capturing the @Document type and primary key type
CrudRepository<T,ID> | depicts many of the CRUD capabilities we demonstrated with the MongoOps DAO in the previous MongoTemplate lecture
PagingAndSortingRepository<T,ID> | Spring Data MongoDB provides some nice end-to-end support for sorting and paging. This interface adds sorting and paging to the findAll() methods
QueryByExampleExecutor<T> | provides query-by-example methods that use prototype probe instances and matchers
MongoRepository<T,ID> | brings together the CrudRepository, PagingAndSortingRepository, and QueryByExampleExecutor interfaces
BooksRepositoryCustom/ BooksRepositoryCustomImpl | we can write our own extensions for complex or compound calls — while taking advantage of an injected MongoTemplate
BooksRepository | our repository inherits from the repository hierarchy and adds additional methods that are automatically implemented by Spring Data MongoDB
@Document is not Technically Required
Technically, the @Document annotation is not required for a class to be used with a repository. Without it, Spring Data MongoDB maps the class to a default collection named after the class; the annotation is needed to customize the collection name. |
356. BooksRepository
All we need to create a functional repository is a @Document
class and a primary key type.
From our work to date, we know that our @Document
is the Book class and the primary key is the String
type. This type works well with MongoDB auto-generated IDs.
356.1. Book @Document
@Document(collection = "books")
public class Book {
@Id
private String id;
Multiple @Id Annotations, Use Spring Data’s @Id Annotation
There are multiple @Id annotations on the classpath; use the one from Spring Data: import org.springframework.data.annotation.Id; |
356.2. BooksRepository
We declare our repository to extend MongoRepository
.
public interface BooksRepository extends MongoRepository<Book, String> {}(1) (2)
1 | Book is the repository type |
2 | String is used for the primary key type |
Consider Using Non-Primitive Primary Key Types
You will find that Spring Data MongoDB works more easily with nullable object types. |
357. Configuration
Assuming your repository classes are in a package below the class annotated with @SpringBootApplication
— not much else is needed. Adding @EnableMongoRepositories
is necessary when working with more complex classpaths.
@SpringBootApplication
@EnableMongoRepositories
public class MongoDBBooksApp {
If your repository is not located in the default packages scanned, their packages can be scanned with configuration options to the @EnableMongoRepositories
annotation.
@EnableMongoRepositories(basePackageClasses = {BooksRepository.class}) (1) (2)
1 | the Java class provided here is used to identify the base Java package |
2 | where to scan for repository interfaces |
357.1. Injection
With the repository interface declared and the Mongo repository support enabled, we can then successfully inject the repository into our application.
@Autowired
private BooksRepository booksRepository;
358. CrudRepository
Let's start looking at the capability of our repository — beginning with the declared methods of the CrudRepository
interface.
public interface CrudRepository<T, ID> extends Repository<T, ID> {
<S extends T> S save(S var1);
<S extends T> Iterable<S> saveAll(Iterable<S> var1);
Optional<T> findById(ID var1);
boolean existsById(ID var1);
Iterable<T> findAll();
Iterable<T> findAllById(Iterable<ID> var1);
long count();
void deleteById(ID var1);
void delete(T var1);
void deleteAll(Iterable<? extends T> var1);
void deleteAll();
}
358.1. CrudRepository save() New
We can use the CrudRepository.save()
method to either create or update our @Document
instance in the database.
It has a direct correlation to MongoTemplate’s save()
method so there is not much extra functionality added by the repository layer.
In this specific example, we call save()
with an object with an unassigned primary key.
The primary key will be generated by the database when inserted and assigned to the object by the time the command completes.
//given a transient document instance
Book book = ...
assertThat(book.getId()).isNull(); (1)
//when persisting
booksRepo.save(book);
//then document is persisted
then(book.getId()).isNotNull(); (2)
1 | document not yet assigned a generated primary key |
2 | primary key assigned by database |
358.2. CrudRepository save() Update Existing
The CrudRepository.save()
method is an "upsert" method.
-
if the
@Document
is new it will be inserted -
if a
@Document
exists with the currently assigned primary key, the original contents will be replaced
//given a persisted document instance
Book book = ...
booksRepo.save(book); (1)
Book updatedBook = book.withTitle("new title"); (2)
//when persisting update
booksRepo.save(updatedBook);
//then new document state is persisted
then(booksRepo.findOne(Example.of(updatedBook))).isPresent(); (3)
1 | object inserted into database — resulting in primary key assigned |
2 | a separate instance with the same ID has modified title |
3 | object’s new state is found in database |
358.3. CrudRepository save()/Update Resulting Mongo Command
Watching the low-level Mongo commands, we can see that Mongo’s built-in upsert
capability allows the client to perform the action without a separate query.
update{"q":{"_id":{"$oid":"606cbfc0932e084392422bb6"}},
"u":{"_id":{"$oid":"606cbfc0932e084392422bb6"},"title":"new title","author":...},
"multi":false,
"upsert":true}
358.4. CrudRepository existsById()
The repository adds a convenience method that can check whether the @Document
exists in the database without already having an instance or writing a criteria query.
The following snippet demonstrates how we can check for the existence of a given ID.
//given a persisted document instance
Book pojoBook = ...
booksRepo.save(pojoBook);
//when - determining if document exists
boolean exists = booksRepo.existsById(pojoBook.getId());
//then
then(exists).isTrue();
The resulting Mongo command issued a query for the ID, limited the results to a single document, and used a projection so that only the primary key is returned.
query: { _id: ObjectId('606cc5d742931870e951e08e') }
sort: {}
projection: {} (1)
collation: { locale: \"simple\" }
limit: 1"}}
1 | projection: {} returns only the primary key |
358.5. CrudRepository findById()
If we need the full object, we can always invoke the findById()
method, which should be a thin wrapper above MongoTemplate.find()
, except that the return type is a Java Optional<T>
versus the @Document
type (T
).
//given a persisted document instance
Book pojoBook = ...
booksRepo.save(pojoBook);
//when - finding the existing document
Optional<Book> result = booksRepo.findById(pojoBook.getId()); (1)
//then
then(result.isPresent()).isTrue();
1 | findById() always returns a non-null Optional<T> object |
358.5.1. CrudRepository findById() Found Example
The Optional<T>
can be safely tested for existence using isPresent()
.
If isPresent()
returns true
, then get()
can be called to obtain the targeted @Document
.
//given
then(result).isPresent();
//when - obtaining the instance
Book dbBook = result.get();
//then - instance provided
then(dbBook).isNotNull();
//then - database copy matches initial POJO
then(dbBook.getAuthor()).isEqualTo(pojoBook.getAuthor());
then(dbBook.getTitle()).isEqualTo(pojoBook.getTitle());
then(pojoBook.getPublished()).isEqualTo(dbBook.getPublished());
358.5.2. CrudRepository findById() Not Found Example
If isPresent()
returns false
, then get()
will throw a NoSuchElementException
if called.
This gives your code some flexibility for how you wish to handle a target @Document
not being found.
//then - the optional can be benignly tested
then(result).isNotPresent();
//then - the optional is asserted during the get()
assertThatThrownBy(() -> result.get())
.isInstanceOf(NoSuchElementException.class);
358.6. CrudRepository delete()
The repository also offers a wrapper around MongoTemplate.remove()
that accepts an instance.
Whether the instance existed or not, a successful call will always result in the @Document
no longer in the database.
//when - deleting an existing instance
booksRepo.delete(existingBook);
//then - instance will be removed from DB
then(booksRepo.existsById(existingBook.getId())).isFalse();
358.6.1. CrudRepository delete() Not Exist
If the instance did not exist, the delete()
call silently returns.
//when - deleting a non-existing instance
booksRepo.delete(doesNotExist);
358.7. CrudRepository deleteById()
The repository also offers a convenience deleteById()
method taking only the primary key.
//when - deleting an existing instance
booksRepo.deleteById(existingBook.getId());
358.8. Other CrudRepository Methods
That was a quick tour of the CrudRepository<T,ID>
interface methods.
The following snippet shows the methods not covered.
Most provide convenience methods around the entire repository.
<S extends T> Iterable<S> saveAll(Iterable<S> var1);
Iterable<T> findAll();
Iterable<T> findAllById(Iterable<ID> var1);
long count();
void deleteAll(Iterable<? extends T> var1);
void deleteAll();
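A brief sketch of how a few of these convenience methods might be exercised (the setup of the book list is assumed):
List<Book> books = ...; //assumed set of Book instances
booksRepo.saveAll(books);                 //upsert the entire set
long total = booksRepo.count();           //count all documents in the collection
Iterable<Book> some = booksRepo.findAllById(List.of(books.get(0).getId())); //fetch by IDs
booksRepo.deleteAll(books);               //remove the provided instances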
359. PagingAndSortingRepository
Before we get too deep into queries, it is good to know that Spring Data MongoDB has first-class support for sorting and paging.
-
sorting - determines the order which matching results are returned
-
paging - breaks up results into chunks that are easier to handle than entire database collections
Here is a look at the declared methods of the PagingAndSortingRepository<T,ID>
interface.
This defines extra parameters for the CrudRepository.findAll()
methods.
public interface PagingAndSortingRepository<T, ID> extends CrudRepository<T, ID> {
Iterable<T> findAll(Sort var1);
Page<T> findAll(Pageable var1);
}
We will see paging and sorting option come up in many other query types as well.
Use Paging and Sorting for Collection Queries
All queries that return a collection should seriously consider adding paging and sorting parameters. Small test databases can become significantly populated production databases over time and cause eventual failure if paging and sorting is not applied to unbounded collection query return methods. |
359.1. Sorting
Sorting can be performed on one or more properties and in ascending and/or descending order.
The following snippet shows an example of calling the findAll()
method and having it return
-
Book
entities in descending order according topublished
date -
Book
entities in ascending order according toid
value whenpublished
dates are equal
//when
List<Book> byPublished = booksRepository.findAll(
Sort.by("published").descending().and(Sort.by("id").ascending()));(1) (2)
//then
LocalDate previous = null;
for (Book s: byPublished) {
if (previous!=null) {
then(previous).isAfterOrEqualTo(s.getPublished()); //DESC order
}
previous=s.getPublished();
}
1 | results can be sorted by one or more properties |
2 | order of sorting can be ascending or descending |
The following snippet shows how the Mongo command was impacted by the Sort.by()
parameter.
query: {}
sort: { published: -1, _id: 1 } (1)
projection: {}"
1 | Sort.by() added the extra sort parameters to Mongo command |
359.2. Paging
Paging permits the caller to designate how many instances are to be returned in a call and the offset to start that group (called a page or slice) of instances.
The snippet below shows an example of using one of the factory methods of Pageable
to create a PageRequest
definition using page size (limit), offset, and sorting criteria.
If many pages will be traversed — it is advised to sort by a property that will produce a stable sort over time during table modifications.
//given
int offset = 0;
int pageSize = 3;
Pageable pageable = PageRequest.of(offset/pageSize, pageSize, Sort.by("published"));(1)
//when
Page<Book> bookPage = booksRepository.findAll(pageable);
1 | using PageRequest factory method to create Pageable from provided page information |
Use Stable Sort over Large Collections
Try to use a property for sort (at least by default) that will produce a stable sort when paging through a large collection to avoid repeated or missing objects from follow-on pages because of new changes to the table. |
359.3. Page Result
The page result is represented by a container object of type Page<T>
, which extends Slice<T>
.
I will describe the difference next, but the PagingAndSortingRepository<T,ID>
interface always returns a Page<T>
, which will provide the properties depicted below.
Figure 150. Page<T> Extends Slice<T>
359.4. Slice Properties
The Slice<T>
base interface represents properties about the content returned.
//then
Slice bookSlice = bookPage; (1)
then(bookSlice).isNotNull();
then(bookSlice.isEmpty()).isFalse();
then(bookSlice.getNumber()).isEqualTo(0); (2)
then(bookSlice.getSize()).isEqualTo(pageSize);
then(bookSlice.getNumberOfElements()).isEqualTo(pageSize);
List<Book> booksList = bookSlice.getContent();
then(booksList).hasSize(pageSize);
1 | Page<T> extends Slice<T> |
2 | slice increment — first slice is 0 |
359.5. Page Properties
The Page<T>
derived interface represents properties about the entire collection/table.
The snippet below shows an example of the total number of elements in the table being made available to the caller.
then(bookPage.getTotalElements()).isEqualTo(savedBooks.size());
359.6. Stateful Pageable Creation
In the above example, we created a Pageable
from stateless parameters.
We can also use the original Pageable
to generate the next or other relative page specifications.
Pageable pageable = PageRequest.of(offset / pageSize, pageSize, Sort.by("published"));
...
Pageable next = pageable.next();
Pageable previous = pageable.previousOrFirst();
Pageable first = pageable.first();
359.7. Page Iteration
The next Pageable
can be used to advance through the complete set of query results, using the previous Pageable
and testing the returned Slice
.
for (int i=1; bookSlice.hasNext(); i++) { (1)
pageable = pageable.next(); (2)
bookSlice = booksRepository.findAll(pageable);
booksList = bookSlice.getContent();
then(bookSlice).isNotNull();
then(bookSlice.getNumber()).isEqualTo(i);
then(bookSlice.getSize()).isEqualTo(pageSize);
then(bookSlice.getNumberOfElements()).isLessThanOrEqualTo(pageSize);
then(((Page)bookSlice).getTotalElements()).isEqualTo(savedBooks.size());//unique to Page
}
then(bookSlice.hasNext()).isFalse();
then(bookSlice.getNumber()).isEqualTo(booksRepository.count() / pageSize);
1 | Slice.hasNext() will indicate when previous Slice represented the end of the results |
2 | next Pageable obtained from previous Pageable |
360. Query By Example
Not all queries will be as simple as findAll()
.
We now need to start looking at queries that can return a subset of results based on them matching a set of predicates.
The QueryByExampleExecutor<T>
parent interface to MongoRepository<T,ID>
provides a set of variants to the collection-based results that accepts an "example" to base a set of predicates off of.
public interface QueryByExampleExecutor<T> {
<S extends T> Optional<S> findOne(Example<S> var1);
<S extends T> Iterable<S> findAll(Example<S> var1);
<S extends T> Iterable<S> findAll(Example<S> var1, Sort var2);
<S extends T> Page<S> findAll(Example<S> var1, Pageable var2);
<S extends T> long count(Example<S> var1);
<S extends T> boolean exists(Example<S> var1);
}
360.1. Example Object
An Example
is an interface with the ability to hold onto a probe and matcher.
360.1.1. Probe Object
The probe is an instance of the repository @Document
type.
The following snippet is an example of creating a probe that represents the fields we are looking to match.
//given
Book savedBook = savedBooks.get(0);
Book probe = Book.builder()
.title(savedBook.getTitle())
.author(savedBook.getAuthor())
.build(); (1)
1 | probe will carry values for title and author to match |
360.1.2. ExampleMatcher Object
The matcher defaults to an exact match of all non-null properties in the probe. There are many definitions we can supply to customize the matcher.
-
ExampleMatcher.matchingAny()
- forms an OR relationship between all predicates -
ExampleMatcher.matchingAll()
- forms an AND relationship between all predicates
The matcher can be broken down to specific fields, offering a fair number of options for String-based predicates but very limited options for non-String fields.
The following snippet shows an example of the default ExampleMatcher
.
ExampleMatcher matcher = ExampleMatcher.matching(); (1)
1 | default matcher is matchingAll |
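As a sketch of the alternative, a matchingAny matcher ORs the probe's non-null fields instead of requiring all of them to match:
ExampleMatcher anyMatcher = ExampleMatcher.matchingAny()
        .withIgnorePaths("id"); //exclude the primary key from the predicates
List<Book> matches = booksRepository.findAll(Example.of(probe, anyMatcher), Sort.by("id"));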
360.2. findAll By Example
We can supply an Example
instance to the findAll()
method to conduct our query.
The following snippet shows an example of using a probe with a default matcher.
It is intended to locate all books matching the author
and title
we specified in the probe.
//when
List<Book> foundBooks = booksRepository.findAll(
Example.of(probe),//default matcher is matchingAll() and non-null
Sort.by("id"));
The default matcher ends up working perfectly with our @Document
class because a nullable primary key was used — keeping the primary key from being added to the criteria.
360.3. Ignoring Properties
If we encounter any built-in types that cannot be null — we can configure a match to explicitly ignore certain fields.
The following snippet shows an example matcher configured to ignore the primary key.
ExampleMatcher ignoreId = ExampleMatcher.matchingAll().withIgnorePaths("id");(1)
//when
List<Book> foundBooks = booksRepository.findAll(
Example.of(probe, ignoreId), (2)
Sort.by("id"));
//then
then(foundBooks).isNotEmpty();
then(foundBooks.get(0).getId()).isEqualTo(savedBook.getId());
1 | id primary key is being excluded from predicates |
2 | non-null and non-id fields of probe are used for AND matching |
360.4. Contains ExampleMatcher
We have some options on what we can do with the String matches.
The following snippet provides an example of testing whether title
contains the text in the probe while performing an exact match of the author
and ignoring the id
field.
Book probe = Book.builder()
.title(savedBook.getTitle().substring(2))
.author(savedBook.getAuthor())
.build();
ExampleMatcher matcher = ExampleMatcher
.matching()
.withIgnorePaths("id")
.withMatcher("title", ExampleMatcher.GenericPropertyMatchers.contains());
360.4.1. Using Contains ExampleMatcher
The following snippet shows that the Example
successfully matched on the Book
we were interested in.
//when
List<Book> foundBooks=booksRepository.findAll(Example.of(probe,matcher), Sort.by("id"));
//then
then(foundBooks).isNotEmpty();
then(foundBooks.get(0).getId()).isEqualTo(savedBook.getId());
361. Derived Queries
For fairly straightforward queries, Spring Data MongoDB can derive the required commands from a method signature declared in the repository interface. This provides a more self-documenting version of similar queries we could have formed with query-by-example.
The following snippet shows a few example queries added to our repository interface to address specific queries needed in our application.
public interface BooksRepository extends MongoRepository<Book, String> {
Optional<Book> getByTitle(String title); (1)
List<Book> findByTitleNullAndPublishedAfter(LocalDate date); (2)
List<Book> findByTitleStartingWith(String string, Sort sort); (3)
Slice<Book> findByTitleStartingWith(String string, Pageable pageable); (4)
Page<Book> findPageByTitleStartingWith(String string, Pageable pageable); (5)
1 | query by an exact match of title |
2 | query by a match of two fields (title and published) |
3 | query using sort |
4 | query with paging support |
5 | query with paging support and table total |
Let’s look at a complete example first.
361.1. Single Field Exact Match Example
In the following example, we have created a query method getByTitle
that accepts the exact match title value and an Optional
return value.
Optional<Book> getByTitle(String title); (1)
We use the declared interface method in a normal manner and Spring Data MongoDB takes care of the implementation.
//when
Optional<Book> result = booksRepository.getByTitle(book.getTitle());
//then
then(result.isPresent()).isTrue();
The result is essentially the same as if we implemented it using query-by-example or more directly through the MongoTemplate.
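For comparison, a sketch of roughly the same lookup expressed directly through MongoTemplate:
Book dbBook = mongoTemplate.findOne(
        Query.query(Criteria.where("title").is(book.getTitle())), Book.class);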
361.2. Query Keywords
Spring Data MongoDB has several keywords, followed by By
, that it looks for starting the interface method name.
Those with multiple terms can be used interchangeably.
Meaning | Keywords
---|---
Query | find…By, read…By, get…By, query…By, stream…By
Count | count…By
Exists | exists…By
Delete | delete…By, remove…By
361.3. Other Keywords
-
Distinct (e.g.,
findDistinctByTitle
) -
Is, Equals (e.g.,
findByTitle
,findByTitleIs
,findByTitleEquals
) -
Not (e.g.,
findByTitleNot
,findByTitleIsNot
,findByTitleNotEquals
) -
IsNull, IsNotNull (e.g.,
findByTitle(null)
,findByTitleIsNull()
,findByTitleIsNotNull()
) -
StartingWith, EndingWith, Containing (e.g.,
findByTitleStartingWith
,findByTitleEndingWith
,findByTitleContaining
) -
LessThan, LessThanEqual, GreaterThan, GreaterThanEqual, Between (e.g.,
findByIdLessThan
,findByIdBetween(lo,hi)
) -
Before, After (e.g.,
findByPublishedAfter
) -
In (e.g.,
findByTitleIn(collection)
) -
OrderBy (e.g.,
findByTitleContainingOrderByTitle
)
The list is significant, but not meant to be exhaustive. Perform a web search for your specific needs (e.g., "Spring Data Derived Query …") if what is needed is not found here.
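The following sketch shows how several of these keywords might appear in a repository interface. The method declarations are illustrative only and are not part of the lecture's BooksRepository.
public interface BooksRepositoryExamples extends MongoRepository<Book, String> {
    long countByAuthor(String author);                             //Count keyword
    boolean existsByTitle(String title);                           //Exists keyword
    List<Book> findByPublishedBetween(LocalDate lo, LocalDate hi); //Between keyword
    List<Book> findByTitleIn(Collection<String> titles);           //In keyword
    List<Book> findByAuthorOrderByPublishedDesc(String author);    //OrderBy keyword
}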
361.4. Multiple Fields
We can define queries using one or more fields using And
and Or
.
The following example defines an interface method that will test two fields: title
and published
.
title
will be tested for null and published
must be after a certain date.
List<Book> findByTitleNullAndPublishedAfter(LocalDate date);
The following snippet shows an example of how we can call/use the repository method. We are using a simple collection return without sorting or paging.
//when
List<Book> foundBooks = booksRepository.findByTitleNullAndPublishedAfter(firstBook.getPublished());
//then
Set<String> foundIds = foundBooks.stream().map(s->s.getId()).collect(Collectors.toSet());
then(foundIds).isEqualTo(expectedIds);
361.5. Collection Response Query Example
We can perform queries with various types of additional arguments and return types. The following shows an example of a query that accepts a sorting order and returns a simple collection with all objects found.
List<Book> findByTitleStartingWith(String string, Sort sort);
The following snippet shows an example of how to form the Sort
and call the query method derived from our interface declaration.
//when
Sort sort = Sort.by("id").ascending();
List<Book> books = booksRepository.findByTitleStartingWith(startingWith, sort);
//then
then(books.size()).isEqualTo(expectedCount);
361.6. Slice Response Query Example
Derived queries can also be declared to accept a Pageable
definition and return a Slice
.
The following example shows a similar interface method declaration to what we had prior — except we have wrapped the Sort
within a Pageable
and requested a Slice
, which will contain only those items that match the predicate and comply with the paging constraints.
Slice<Book> findByTitleStartingWith(String string, Pageable pageable);
The following snippet shows an example of forming the PageRequest
, making the call, and inspecting the returned Slice
.
//when
PageRequest pageable=PageRequest.of(0, 1, Sort.by("id").ascending());(1) (2)
Slice<Book> booksSlice=booksRepository.findByTitleStartingWith(startingWith,pageable);
//then
then(booksSlice.getNumberOfElements()).isEqualTo(pageable.getPageSize());
1 | pageNumber is 0 |
2 | pageSize is 1 |
361.7. Page Response Query Example
We can alternatively declare a Page
return type if we also need to know information about all available matches in the table.
The following shows an example of returning a Page
.
The only reason Page
shows up in the method name is to form a different method signature than its sibling examples.
Page
is not required to be in the method name.
Page<Book> findPageByTitleStartingWith(String string, Pageable pageable); (1)
1 | the Page return type (versus Slice ) triggers an extra query performed to supply totalElements Page property |
The following snippet shows how we can form a PageRequest
to pass to the derived query method and accept a Page
in response with additional table information.
//when
PageRequest pageable = PageRequest.of(0, 1, Sort.by("id").ascending());
Page<Book> booksPage = booksRepository.findPageByTitleStartingWith(startingWith, pageable);
//then
then(booksPage.getNumberOfElements()).isEqualTo(pageable.getPageSize());
then(booksPage.getTotalElements()).isEqualTo(expectedCount); (1)
1 | an extra property is available to tell us the total number of matches relative to the entire table — that may not have been reported on the current page |
362. @Query Annotation Queries
Spring Data MongoDB provides an option for the query to be expressed on the repository method.
The following example will locate a book published between the provided dates — inclusive.
The default derived query implemented it exclusive of the two dates.
The @Query
annotation takes precedence over the default derived query.
This shows how easy it is to define a customized version of the query.
@Query("{ 'published': { $gte: ?0, $lte: ?1 } }") (1)
List<Book> findByPublishedBetween(LocalDate starting, LocalDate ending);
1 | ?0 is the first parameter (starting ) and ?1 is the second parameter (ending ) |
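A sketch of calling the customized method (the dates shown are arbitrary):
List<Book> inRange = booksRepository.findByPublishedBetween(
        LocalDate.of(2000, 1, 1), LocalDate.of(2010, 12, 31)); //both endpoints now included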
The following snippet shows an example of implementing a query using a regular expression completed by the input parameters.
It locates all books with titles
greater-than or equal to the length
parameter.
It also declares that only the title
field of the Book
instances needs to be returned — making the result smaller.
@Query(value="{ 'title': /^.{?0,}$/ }", fields="{'_id':0, 'title':1}") (1) (2)
List<Book> getTitlesGESizeAsBook(int length);
1 | value expresses which Books should match |
2 | fields expresses which fields should be returned and populated in the instance |
Named Queries can be supplied in property file
Named queries can also be expressed in a property file — versus being placed directly onto the method. Property files can provide a more convenient source for expressing more complex queries.
The default location is META-INF/mongo-named-queries.properties, which can be customized with the @EnableMongoRepositories.namedQueriesLocation attribute. |
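A sketch of what a property file entry might look like, using a hypothetical method name; the key follows the <Domain>.<method> convention.
# META-INF/mongo-named-queries.properties (hypothetical entry)
Book.findByTheTitle={ 'title': ?0 }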
362.1. @Query Annotation Attributes
The matches in the query can be used for more than just find
.
We can alternately apply count
, exists
, or delete
and include information for fields
projected, sort
, and collation
.
Attribute | Default | Description | Example |
---|---|---|---|
String fields |
"" |
projected fields |
fields = "{ title : 1 }" |
boolean count |
false |
count() action performed on query matches |
|
boolean exists |
false |
exists() action performed on query matches |
|
boolean delete |
false |
delete() action performed on query matches |
|
String sort |
"" |
sort expression for query results |
sort = "{ published : -1 }" |
String collation |
"" |
collation information
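A sketch of the non-find actions declared through these attributes (the method names are illustrative):
@Query(value="{ 'author': ?0 }", count=true)
long countForAuthor(String author);       //returns the number of matches

@Query(value="{ 'title': ?0 }", exists=true)
boolean titleExists(String title);        //returns whether any match exists

@Query(value="{ 'published': { $lt: ?0 } }", delete=true)
long deleteOlderThan(LocalDate date);     //deletes matches, returning the count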
363. MongoRepository Methods
Many of the methods and capabilities of the MongoRepository<T,ID>
are available at the higher level interfaces.
The MongoRepository<T,ID>
itself declares two types of additional methods
-
insert/upsert state-specific optimizations
-
return type extensions
<S extends T> S insert(S entity); (1)
<S extends T> List<S> insert(Iterable<S> entities);
<S extends T> List<S> saveAll(Iterable<S> entities); (2)
List<T> findAll();
List<T> findAll(Sort sort);
<S extends T> List<S> findAll(Example<S> example);
<S extends T> List<S> findAll(Example<S> example, Sort sort);
1 | insert is specific to MongoRepository and assumes the document is new |
2 | List<T> is a sub-type of Iterable<T> and provides a richer set of inspection methods for the returned result |
364. Custom Queries
Sooner or later, a repository action requires some complexity that is beyond the ability to leverage a single query-by-example or derived query. We may need to implement some custom logic or may want to encapsulate multiple calls within a single method.
364.1. Custom Query Interface
The following example shows how we can extend the repository interface to implement custom calls using the MongoTemplate and the other repository methods. Our custom implementation will return a random Book
from the database.
public interface BooksRepositoryCustom {
Optional<Book> random();
}
364.2. Repository Extends Custom Query Interface
We then declare the repository to extend the additional custom query interface — making the new method(s) available to callers of the repository.
public interface BooksRepository extends MongoRepository<Book, String>, BooksRepositoryCustom { (1)
...
1 | added additional BookRepositoryCustom interface for BookRepository to extend |
364.3. Custom Query Method Implementation
Of course, the new interface will need an implementation. This will require at least two lower-level database calls
-
determine how many objects there are in the database
-
return an instance for one of those random values
The following snippet shows a portion of the custom method implementation. Note that two additional helper methods are required. We will address them in a moment. By default, this class must have the same name as the interface, followed by "Impl".
public class BooksRepositoryCustomImpl implements BooksRepositoryCustom {
private final SecureRandom random = new SecureRandom();
...
@Override
public Optional<Book> random() {
Optional<Book> randomBook = Optional.empty();
int count = (int) booksRepository.count(); (1)
if (count!=0) {
int offset = random.nextInt(count);
List<Book> books = books(offset, 1); (2)
randomBook=books.isEmpty() ? Optional.empty() : Optional.of(books.get(0));
}
return randomBook;
}
1 | leverages CrudRepository.count() helper method |
2 | leverages a local, private helper method to access specific Book |
364.4. Repository Implementation Postfix
If you have an alternate suffix pattern other than "Impl" in your application, you can set that value in an attribute of the @EnableMongoRepositories
annotation.
The following shows a declaration that sets the suffix to its normal default value (i.e., we did not have to do this).
If we changed this value from "Impl" to "Xxx", then we would need to change BooksRepositoryCustomImpl to BooksRepositoryCustomXxx.
@EnableMongoRepositories(repositoryImplementationPostfix="Impl")(1)
1 | Impl is the default value. Configure this attribute to use non-Impl postfix |
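A hypothetical configuration class sketch showing the attribute in context follows.
@Configuration
@EnableMongoRepositories(
        basePackageClasses = BooksRepository.class, //scan the repository's package
        repositoryImplementationPostfix = "Xxx") //custom suffix replacing the default "Impl"
public class RepositoryConfig { //hypothetical configuration class name
}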
364.5. Helper Methods
The custom random()
method makes use of two helper methods.
One is in the CrudRepository
interface and the other directly uses the MongoTemplate
to issue a query.
public interface CrudRepository<T, ID> extends Repository<T, ID> {
    long count();
    ...
}
private List<Book> books(int offset, int limit) {
    return mongoTemplate.find(new Query().skip(offset).limit(limit), Book.class);
}
We will need to inject some additional resources in order to make these calls:
-
BooksRepository
-
MongoTemplate
364.6. Naive Injections
Since we are not using sessions or transactions with Mongo, a simple/naive injection will work fine. We do not have to worry about injecting a specific instance. However, we will run into a circular dependency issue with the BooksRepository.
@RequiredArgsConstructor
public class BooksRepositoryCustomImpl implements BooksRepositoryCustom {
private final MongoTemplate mongoTemplate; (1)
private final BooksRepository booksRepository; (2)
1 | any MongoTemplate instance referencing the correct database and collection is fine |
2 | eager/mandatory injection of self needs to be delayed |
364.7. Required Injections
We instead need to:
-
use @Autowired @Lazy and a non-final attribute for the BooksRepository injection to indicate that this instance can be initialized without access to the injected bean
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
...
public class BooksRepositoryCustomImpl implements BooksRepositoryCustom {
private final MongoTemplate mongoTemplate;
@Autowired @Lazy (1)
private BooksRepository booksRepository;
1 | BooksRepository lazily injected to mitigate the recursive dependency between the Impl class and the full repository instance |
364.8. Calling Custom Query
With all that in place, we can then call our custom random()
method and obtain a sample Book
to work with from the database.
//when
Optional<Book> randomBook = booksRepository.random();
//then
then(randomBook.isPresent()).isTrue();
364.9. Implementing Aggregation
MongoTemplate has more power than what can be expressed with MongoRepository. As seen with the random() implementation, we have the option of combining operations and dropping down to the MongoTemplate for a portion of the implementation.
That can also include use of the Aggregation Pipeline, GridFS, Geolocation, etc.
The following custom implementation is declared in the Custom interface, extended by the BooksRepository.
public interface BooksRepositoryCustom {
...
List<Book> findByAuthorGESize(int length);
The snippet below shows the example leveraging the Aggregation Pipeline for its implementation and returning a normal List<Book>
collection.
@Override
public List<Book> findByAuthorGESize(int length) {
String expression = String.format("^.{%d,}$", length);
Aggregation pipeline = Aggregation.newAggregation(
Aggregation.match(Criteria.where("author").regex(expression)),
Aggregation.match(Criteria.where("author").exists(true))
);
AggregationResults<Book> result =
mongoTemplate.aggregate(pipeline, "books", Book.class);
return result.getMappedResults();
}
That allows us unlimited behavior in the data access layer and the ability to encapsulate the capability into a single data access component.
365. Summary
In this module we learned:
-
that Spring Data MongoDB eliminates the need to write boilerplate MongoTemplate code
-
to perform basic CRUD management for
@Document
classes using a repository
-
to implement query-by-example
-
that unbounded collections can grow over time and cause our applications to eventually fail
-
that paging and sorting can easily be used with repositories
-
to implement query methods derived from a query DSL
-
to implement custom repository extensions
Mongo Repository End-to-End Application
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
366. Introduction
This lecture takes what you have learned in establishing a MongoDB data tier using Spring Data MongoDB and shows that integrated into an end-to-end application with API CRUD calls and finder calls using paging. It is assumed that you already know about API topics like Data Transfer Objects (DTOs), JSON and XML content, marshalling/unmarshalling using Jackson and JAXB, web APIs/controllers, and clients. This lecture will put them all together.
Due to the common component technologies between the Spring Data JPA and Spring Data MongoDB end-to-end solution, this lecture is about 95% the same as the Spring Data JPA End-to-End Application lecture. Although it is presumed that the Spring Data JPA End-to-End Application lecture precedes this lecture — it was written so that was not a requirement. However, if you have already mastered the Spring Data JPA End-to-End Application topics, you should be able to quickly breeze through this material because of the significant similarities in concepts and APIs. |
366.1. Goals
The student will learn:
-
to integrate a Spring Data MongoDB Repository into an end-to-end application, accessed through an API
-
to make a clear distinction between Data Transfer Objects (DTOs) and Business Objects (BOs)
-
to identify data type architectural decisions required for a multi-tiered application
-
to understand the need for paging when working with potentially unbounded collections and remote clients
366.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
implement a BO tier of classes that will be mapped to the database
-
implement a DTO tier of classes that will exchange state with external clients
-
implement a service tier that completes useful actions
-
identify the controller/service layer interface decisions when it comes to using DTO and BO classes
-
implement a mapping tier between BO and DTO objects
-
implement paging requests through the API
-
implement page responses through the API
367. BO/DTO Component Architecture
367.1. Business Object(s)/@Documents
For our Books application — I have kept the data model simple and kept it limited to a single business object (BO) @Document
class mapped to the database using Spring Data MongoDB annotations and accessed through a Spring Data MongoDB repository.
Figure 151. BO Class Mapped to DB as Spring Data MongoDB @Document
The business objects are the focal point of information where we implement our business decisions. |
The primary focus of our BO classes is to map business implementation concepts to the database.
The following snippet shows some of the optional mapping properties of a Spring Data MongoDB @Document
class.
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Field;
@Document(collection = "books") (1)
...
public class Book {
@Id (2)
private String id;
@Field(name="title") (3)
private String title;
private String author;
private LocalDate published;
...
1 | @Document.collection used to define the DB collection to use — otherwise uses name of class |
2 | @Id used to map the document primary key field to a class property |
3 | @Field used to custom map a class property to a document field — the example is performing what the default would have done |
367.2. Data Transfer Object(s) (DTOs)
The Data Transfer Objects are the focal point of interfacing with external clients. They represent state at a point in time. For external web APIs, they are commonly mapped to both JSON and XML.
For the API, we have the decision of whether to reuse BO classes as DTOs or implement a separate set of classes for that purpose. Even though some applications start out simple, there will come a point where database technology or mappings will need to change at a different pace than API technology or mappings.
Figure 152. DTO
For that reason, I created a separate BookDTO class. |
The primary focus of our DTO classes is to map business interface concepts to a portable exchange format.
367.3. BookDTO Class
The following snippet shows some of the annotations required to map the BookDTO
class to XML using Jackson and JAXB.
Jackson JSON requires very few annotations in the simple cases.
@JacksonXmlRootElement(localName = "book", namespace = "urn:ejava.db-repo.books")
@XmlRootElement(name = "book", namespace = "urn:ejava.db-repo.books") (2)
@XmlAccessorType(XmlAccessType.FIELD)
@NoArgsConstructor
...
public class BookDTO { (1)
@JacksonXmlProperty(isAttribute = true)
@XmlAttribute
private String id;
private String title;
private String author;
@XmlJavaTypeAdapter(LocalDateJaxbAdapter.class) (3)
private LocalDate published;
...
1 | Jackson JSON requires very little to no annotations for simple mappings |
2 | XML mappings require more detailed definition to be complete |
3 | JAXB requires a custom mapping definition for java.time types |
367.4. BO/DTO Mapping
With separate BO and DTO classes, there is a need for mapping between the two.
Figure 153. BO to DTO Mapping
We have several options on how to organize this role.
367.4.1. BO/DTO Self Mapping
Figure 154. BO to DTO Self Mapping
367.4.2. BO/DTO Method Self Mapping
Figure 155. BO to DTO Method Self Mapping
367.4.3. BO/DTO Helper Method Mapping
Figure 156. BO/DTO Helper Method Mapping
367.4.4. BO/DTO Helper Class Mapping
Figure 157. BO/DTO Helper Class Mapping
367.4.5. BO/DTO Helper Class Mapping Implementations
Mapping helper classes can be implemented by:
-
brute force implementation
-
Benefit: likely the fastest performance and technically simplest to understand
-
Drawback: tedious setter/getter code
-
-
off-the-shelf mapper libraries (e.g. Dozer, Orika, MapStruct, ModelMapper, JMapper) [74] [75]
-
Benefit: declarative language and inferred DIY mapping options
-
Drawbacks:
-
relies on reflection and other generalizations for mapping which add to overhead
-
non-trivial mappings can be complex to understand
-
-
368. Service Architecture
Services — with the aid of BOs — implement the meat of the business logic.
Example Service Class Declaration
368.1. Injected Service Boundaries
Container features like @Secured, @Async, etc. are only implemented at component boundaries.
When a @Component
dependency is injected, the container has the opportunity to add features using "interpose".
As a part of interpose, the container implements a proxy to add the desired feature around the target component method.
Therefore it is important to arrange a component boundary wherever you need to start a new characteristic provided by the container. The following is a more detailed explanation of what not to do and what to do.
368.1.1. Buddy Method Boundary
The methods within a component class are not typically subject to container interpose. Therefore a call from m1() to m2() within the same component class is a straight Java call.
Figure 159. Buddy Method Boundary
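The following hypothetical sketch shows the issue. Because m1() reaches m2() through a straight Java call, the container proxy is bypassed and the @Async characteristic is never applied.
@Component
public class ExampleComponent { //hypothetical component for illustration
    public void m1() {
        m2(); //straight Java call; bypasses the container proxy, @Async NOT applied
    }
    @Async
    public void m2() {
        //only asynchronous when reached through a container-injected reference
    }
}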
368.1.2. Self Instantiated Method Boundary
Container interpose is only performed when the container has a chance to decorate the called component. Therefore, a call to a method of a component class that is self-instantiated will not have container interpose applied — no matter how the called method is annotated.
Figure 160. Self Instantiated Method Boundary
368.1.3. Container Injected Method Boundary
Components injected by the container are subject to container interpose and will have declared characteristics applied.
Figure 161. Container Injected Method Boundary
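By contrast, a hypothetical caller with ExampleComponent injected goes through the container proxy, so declared characteristics are applied.
@Component
@RequiredArgsConstructor
public class CallerComponent { //hypothetical caller component
    private final ExampleComponent example; //container-injected proxy
    public void doWork() {
        example.m2(); //call crosses a component boundary; @Async IS applied
    }
}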
368.2. Compound Services
With @Component boundaries and interpose constraints understood, in more complex security or threading solutions the logical @Service may get broken up into one or more physical helper @Component classes.
Figure 162. Single Service Expressed as Multiple Components
369. BO/DTO Interface Options
With the core roles of BOs and DTOs understood, we next have a decision to make about where to use them within our application between the API and service classes.
Figure 163. BO/DTO Interface Decisions
369.1. API Maps DTO/BO
It is natural to think of the @Service as working with pure implementation (BO) classes. This leaves the mapping job to the @RestController and all clients of the @Service.
Figure 164. API Maps DTO to BO for Service Interface
369.2. @Service Maps DTO/BO
Alternatively, we can have the @Service fully encapsulate the implementation details and work with DTOs in its interface. This places the job of DTO/BO translation on the @Service, and the @RestController and all @Service clients work with DTOs.
Figure 165. Service Maps DTO in Service Interface to BO
369.3. Layered Service Mapping Approach
The latter DTO interface/mapping approach just introduced maps closely to the Domain Driven Design (DDD) "Application Layer". However, one could also implement a layering of services.
Layered Services Permit a Level of Trust between Inner Components
370. Implementation Details
With architectural decisions understood, let's take a look at some of the key details of the end-to-end application.
370.1. Book BO
We have already covered the Book
BO @Document
class in a lot of detail during the MongoTemplate lecture.
The following lists most of the key business aspects and implementation details of the class.
package info.ejava.examples.db.mongo.books.bo;
...
@Document(collection = "books")
@Builder
@With
@ToString
@EqualsAndHashCode
@Getter
@AllArgsConstructor
public class Book {
@Id
private String id;
@Setter
@Field(name="title")
private String title;
@Setter
private String author;
@Setter
private LocalDate published;
}
370.2. BookDTO
The BookDTO class has been mapped to Jackson JSON and Jackson and JAXB XML. The details of Jackson and JAXB mapping were covered in the API Content lectures. Jackson JSON required no special annotations to map this class. Jackson and JAXB XML primarily needed some annotations related to namespaces and attribute mapping. JAXB also required annotations for mapping the LocalDate field.
The following lists the annotations required to marshal/unmarshal the BookDTO class using Jackson and JAXB.
package info.ejava.examples.db.mongo.books.dto;
...
@JacksonXmlRootElement(localName = "book", namespace = "urn:ejava.db-repo.books")
@XmlRootElement(name = "book", namespace = "urn:ejava.db-repo.books")
@XmlAccessorType(XmlAccessType.FIELD)
@Data @Builder
@NoArgsConstructor @AllArgsConstructor
public class BookDTO {
@JacksonXmlProperty(isAttribute = true)
@XmlAttribute
private String id;
private String title;
private String author;
@XmlJavaTypeAdapter(LocalDateJaxbAdapter.class) (1)
private LocalDate published;
...
}
1 | JAXB requires an adapter for the newer LocalDate java class |
370.2.1. LocalDateJaxbAdapter
Jackson is configured to marshal LocalDate out of the box using the ISO_LOCAL_DATE format for both JSON and XML.
"published" : "2013-01-30" //Jackson JSON
<published xmlns="">2013-01-30</published> //Jackson XML
JAXB does not have a default format and requires the class be mapped to/from a string using an XmlAdapter.
@XmlJavaTypeAdapter(LocalDateJaxbAdapter.class)
private LocalDate published;
public static class LocalDateJaxbAdapter extends XmlAdapter<String, LocalDate> {
@Override
public LocalDate unmarshal(String text) {
return LocalDate.parse(text, DateTimeFormatter.ISO_LOCAL_DATE);
}
@Override
public String marshal(LocalDate timestamp) {
return DateTimeFormatter.ISO_LOCAL_DATE.format(timestamp);
}
}
370.3. Book JSON Rendering
The following snippet provides example JSON of a Book
DTO payload.
{
"id":"609b316de7366e0451a7bcb0",
"title":"Tirra Lirra by the River",
"author":"Mr. Arlen Swift",
"published":"2020-07-26"
}
370.4. Book XML Rendering
The following snippets provide example XML of Book
DTO payloads.
They are technically equivalent from an XML Schema standpoint, but use some alternate syntax XML to achieve the same technical goals.
<book xmlns="urn:ejava.db-repo.books" id="609b32b38065452555d612b8">
<title xmlns="">To a God Unknown</title>
<author xmlns="">Rudolf Harris</author>
<published xmlns="">2019-11-22</published>
</book>
<ns2:book xmlns:ns2="urn:ejava.db-repo.books" id="609b32b38065452555d61222">
<title>The Mermaids Singing</title>
<author>Olen Rolfson IV</author>
<published>2020-10-14</published>
</ns2:book>
370.5. Pageable/PageableDTO
I placed a high value on paging for unbounded collections when covering repository find methods. The value of paging comes especially into play when dealing with external users. That means we will need a way to represent Page, Pageable, and Sort in requests and responses as a part of the DTO solution.
You will notice that I made a few decisions on how to implement this interface:
-
I am assuming that both sides of the interface using the DTO classes are using Spring Data. The DTO classes have a direct dependency on their non-DTO siblings.
-
I am using the Page, Pageable, and Sort DTOs to directly self-map to/from Spring Data types. This makes the client and service code much simpler.
Pageable pageable = PageableDTO.of(pageNumber, pageSize, sortString).toPageable(); (1)
Page<BookDTO> result = ...
BooksPageDTO resultDTO = new BooksPageDTO(result); (1)
1 | using self-mapping between paging DTOs and Spring Data (Pageable and Page) types |
-
I chose to use the Spring Data types in the @Service interface and performed the Spring Data to DTO mapping in the @RestController. I did this so that I did not eliminate any pre-existing library integration with Spring Data paging types.
Page<BookDTO> getBooks(Pageable pageable); (1)
1 | using Spring Data (Pageable and Page) and business DTO (BookDTO) types in @Service interface |
I will be going through the architecture and wiring in these lecture notes. The actual DTO code is surprisingly complex to render in the different formats and libraries. These topics were covered in detail in the API content lectures. I also chose to implement the PageableDTO and sort as immutable — which added some interesting mapping challenges worth inspecting.
370.5.1. PageableDTO Request
Requests require an expression for Pageable. The most straightforward way to accomplish this is through query parameters. The example snippet below shows pageNumber, pageSize, and sort expressed as simple string values as part of the URI. We have to write code to express and parse that data.
/api/books/example?pageNumber=0&pageSize=5&sort=published:DESC,id:ASC (1) (2)
1 | pageNumber and pageSize are direct properties used by PageRequest |
2 | sort contains a comma separated list of order compressed into a single string |
Integer pageNumber and pageSize are straightforward to represent as numeric values in the query.
Sort requires a minor amount of work.
Spring Data Sort is an ordered list of property and direction.
I have chosen to express property and direction using a ":" separated string and concatenate the ordering using a ",".
This allows the query string to be expressed in the URI without special characters.
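Parsing that compressed expression back into a Spring Data Sort takes only a few lines. The following is a minimal sketch assuming the property:direction syntax described above; the parseSort helper name is hypothetical.
//hypothetical helper; parses "published:DESC,id:ASC" into a Spring Data Sort
static Sort parseSort(String sortString) {
    List<Sort.Order> orders = new ArrayList<>();
    for (String token : sortString.split(",")) {
        String[] parts = token.split(":"); //[property, optional direction]
        Sort.Direction direction = parts.length > 1 ?
                Sort.Direction.valueOf(parts[1].toUpperCase()) : Sort.Direction.ASC;
        orders.add(new Sort.Order(direction, parts[0]));
    }
    return Sort.by(orders);
}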
370.5.2. PageableDTO Client-side Request Mapping
Since I expect code using the PageableDTO to also be using Spring Data, I chose to use self-mapping between the PageableDTO
and Spring Data Pageable
.
The following snippet shows how to map Pageable
to PageableDTO
and the PageableDTO
properties to URI query parameters.
PageRequest pageable = PageRequest.of(0, 5,
Sort.by(Sort.Order.desc("published"), Sort.Order.asc("id")));
PageableDTO pageSpec = PageableDTO.of(pageable); (1)
URI uri=UriComponentsBuilder
.fromUri(serverConfig.getBaseUrl())
.path(BooksController.BOOKS_PATH).path("/example")
.queryParams(pageSpec.getQueryParams()) (2)
.build().toUri();
1 | using PageableDTO to self map from Pageable |
2 | using PageableDTO to self map to URI query parameters |
370.5.3. PageableDTO Server-side Request Mapping
The following snippet shows how the individual page request properties can be used to build a local instance of PageableDTO
in the @RestController
.
Once the PageableDTO
is built, we can use that to self map to a Spring Data Pageable
to be used when calling the @Service
.
public ResponseEntity<BooksPageDTO> findBooksByExample(
@RequestParam(value="pageNumber",defaultValue="0",required=false) Integer pageNumber,
@RequestParam(value="pageSize",required=false) Integer pageSize,
@RequestParam(value="sort",required=false) String sortString,
@RequestBody BookDTO probe) {
Pageable pageable = PageableDTO.of(pageNumber, pageSize, sortString) (1)
.toPageable(); (2)
1 | building PageableDTO from page request properties |
2 | using PageableDTO to self map to Spring Data Pageable |
370.5.4. Pageable Response
Responses require an expression for Pageable
to indicate the pageable properties about the content returned.
This must be expressed in the payload, so we need a JSON and XML expression for this.
The snippets below show the JSON and XML DTO renderings of our Pageable
properties.
"pageable" : {
"pageNumber" : 1,
"pageSize" : 25,
"sort" : "title:ASC,author:ASC"
}
<pageable xmlns="urn:ejava.common.dto" pageNumber="1" pageSize="25" sort="title:ASC,author:ASC"/>
370.6. Page/PageDTO
Pageable
is part of the overall Page<T>
, with contents.
Therefore, we also need a way to return a page of content to the caller.
370.6.1. PageDTO Rendering
JSON is very lenient and could have been implemented with a generic PageDTO<T>
class.
{"content":[ (1)
{"id":"609cffbc881de53b82657f17", (2)
"title":"An Instant In The Wind",
"author":"Clifford Blick",
"published":"2003-04-09"}],
"totalElements":10, (1)
"pageable":{"pageNumber":3,"pageSize":3,"sort":null}} (1)
1 | content , totalElements , and pageable are part of reusable PageDTO |
2 | book within content array is part of concrete Books domain |
However, XML, with its use of unique namespaces, requires a sub-class to provide the type-specific values for content and overall page.
<booksPage xmlns="urn:ejava.db-repo.books" totalElements="10"> (1)
<wstxns1:content xmlns:wstxns1="urn:ejava.common.dto">
<book id="609cffbc881de53b82657f17"> (2)
<title xmlns="">An Instant In The Wind</title>
<author xmlns="">Clifford Blick</author>
<published xmlns="">2003-04-09</published>
</book>
</wstxns1:content>
<pageable xmlns="urn:ejava.common.dto" pageNumber="3" pageSize="3"/>
</booksPage>
1 | totalElements mapped to XML as an (optional) attribute |
2 | booksPage and book are in concrete domain urn:ejava.db-repo.books namespace |
370.6.2. BooksPageDTO Subclass Mapping
The BooksPageDTO
subclass provides the type-specific mapping for the content and overall page.
The generic portions are handled by the base class.
@JacksonXmlRootElement(localName="booksPage", namespace="urn:ejava.db-repo.books")
@XmlRootElement(name="booksPage", namespace="urn:ejava.db-repo.books")
@XmlType(name="BooksPage", namespace="urn:ejava.db-repo.books")
@XmlAccessorType(XmlAccessType.NONE)
@NoArgsConstructor
public class BooksPageDTO extends PageDTO<BookDTO> {
@JsonProperty
@JacksonXmlElementWrapper(localName="content", namespace="urn:ejava.common.dto")
@JacksonXmlProperty(localName="book", namespace="urn:ejava.db-repo.books")
@XmlElementWrapper(name="content", namespace="urn:ejava.common.dto")
@XmlElement(name="book", namespace="urn:ejava.db-repo.books")
public List<BookDTO> getContent() {
return super.getContent();
}
public BooksPageDTO(List<BookDTO> content, Long totalElements,
PageableDTO pageableDTO) {
super(content, totalElements, pageableDTO);
}
public BooksPageDTO(Page<BookDTO> page) {
this(page.getContent(), page.getTotalElements(),
PageableDTO.fromPageable(page.getPageable()));
}
}
370.6.3. PageDTO Server-side Rendering Response Mapping
The @RestController
can use the concrete DTO class (BookPageDTO
in this case) to self-map from a Spring Data Page<T>
to a DTO suitable for marshaling back to the API client.
Page<BookDTO> result=booksService.findBooksMatchingAll(probe, pageable);
BooksPageDTO resultDTO = new BooksPageDTO(result); (1)
ResponseEntity<BooksPageDTO> response = ResponseEntity.ok(resultDTO);
1 | using BooksPageDTO to self-map Spring Data Page<T> to DTO |
370.6.4. PageDTO Client-side Rendering Response Mapping
The PageDTO<T> class can be used to self-map to a Spring Data Page<T>. Pageable, if needed, can be obtained from the Page<T> or through the pageDTO.getPageable() DTO result.
BooksPageDTO pageDTO = request.exchange()
.expectStatus().isOk()
.returnResult(BooksPageDTO.class)
.getResponseBody().blockFirst();
Page<BookDTO> page = pageDTO.toPage(); (1)
Pageable pageable = ... (2)
1 | using PageDTO<T> to self-map to a Spring Data Page<T> |
2 | can use page.getPageable() or pageDTO.getPageable().toPageable() to obtain Pageable |
371. BookMapper
The BookMapper
@Component
class is used to map between BookDTO
and Book
BO instances.
It leverages Lombok builder methods — but is pretty much a simple/brute force mapping.
371.1. Example Map: BookDTO to Book BO
The following snippet is an example of mapping a BookDTO
to a Book
BO.
@Component
public class BooksMapper {
public Book map(BookDTO dto) {
Book bo = null;
if (dto!=null) {
bo = Book.builder()
.id(dto.getId())
.author(dto.getAuthor())
.title(dto.getTitle())
.published(dto.getPublished())
.build();
}
return bo;
}
...
371.2. Example Map: Book BO to BookDTO
The following snippet is an example of mapping a Book
BO to a BookDTO
.
...
public BookDTO map(Book bo) {
BookDTO dto = null;
if (bo!=null) {
dto = BookDTO.builder()
.id(bo.getId())
.author(bo.getAuthor())
.title(bo.getTitle())
.published(bo.getPublished())
.build();
}
return dto;
}
...
372. Service Tier
The BooksService @Service
encapsulates the implementation of our management of Books.
372.1. BooksService Interface
The BooksService
interface defines a portion of pure CRUD methods and a series of finder methods.
To be consistent with DDD encapsulation, the @Service
interface is using DTO classes.
Since the @Service
is an injectable component, I chose to use straight Spring Data pageable types to possibly integrate with libraries that inherently work with Spring Data types.
public interface BooksService {
BookDTO createBook(BookDTO bookDTO); (1)
BookDTO getBook(String id);
void updateBook(String id, BookDTO bookDTO);
void deleteBook(String id);
void deleteAllBooks();
Page<BookDTO> findPublishedAfter(LocalDate exclusive, Pageable pageable);(2)
Page<BookDTO> findBooksMatchingAll(BookDTO probe, Pageable pageable);
}
1 | chose to use DTOs in @Service interface |
2 | chose to use Spring Data types in pageable @Service finder methods |
372.2. BooksServiceImpl Class
The BooksServiceImpl
implementation class is implemented using the BooksRepository
and BooksMapper
.
@RequiredArgsConstructor (1) (2)
@Service
public class BooksServiceImpl implements BooksService {
private final BooksMapper mapper;
private final BooksRepository booksRepo;
1 | Creates a constructor for all final attributes |
2 | Single constructors are automatically used for Autowiring |
I will demonstrate two methods here — one that creates a book and one that finds books. There is no need for any type of formal transaction here because we are representing the boundary of consistency within a single document.
MongoDB 4.x Does Support Multi-document Transactions
Multi-document transactions are now supported within MongoDB (as of version 4.x) and Spring Data MongoDB. When using declared transactions with Spring Data MongoDB, this looks identical to transactions implemented with Spring Data JPA. The programmatic interface is fairly intuitive as well. However, it is not considered an early best practice. Therefore, I will defer that topic to a more advanced coverage of MongoDB interactions. |
372.3. createBook()
The createBook()
method
-
accepts a
BookDTO
, creates a new book, and returns the created book as aBookDTO
, with the generated ID. -
calls the mapper to map from/to a BooksDTO to/from a
Book
BO -
uses the
BooksRepository
to interact with the database
public BookDTO createBook(BookDTO bookDTO) {
Book bookBO = mapper.map(bookDTO); (1)
//insert instance
booksRepo.save(bookBO); (2)
return mapper.map(bookBO); (3)
}
1 | mapper converting DTO input argument to BO instance |
2 | BO instance saved to database and updated with primary key |
3 | mapper converting BO entity to DTO instance for return from service |
372.4. findBooksMatchingAll()
The findBooksMatchingAll()
method
-
accepts a
BookDTO
as a probe andPageable
to adjust the search and results -
calls the mapper to map from/to a BooksDTO to/from a
Book
BO -
uses the
BooksRepository
to interact with the database
public Page<BookDTO> findBooksMatchingAll(BookDTO probeDTO, Pageable pageable) {
Book probe = mapper.map(probeDTO); (1)
ExampleMatcher matcher = ExampleMatcher.matchingAll(); (2)
Page<Book> books = booksRepo.findAll(Example.of(probe, matcher), pageable); (3)
return mapper.map(books); (4)
}
1 | mapper converting DTO input argument to BO instance to create probe for match |
2 | building matching rules to AND all supplied non-null properties |
3 | finder method invoked with matching and paging arguments to return page of BOs |
4 | mapper converting page of BOs to page of DTOs |
373. RestController API
The @RestController
provides an HTTP Facade for our @Service
.
@RestController
@Slf4j
@RequiredArgsConstructor
public class BooksController {
public static final String BOOKS_PATH="api/books";
public static final String BOOK_PATH= BOOKS_PATH + "/{id}";
public static final String RANDOM_BOOK_PATH= BOOKS_PATH + "/random";
private final BooksService booksService; (1)
1 | @Service injected into class using constructor injection |
I will demonstrate two of the operations available.
373.1. createBook()
The createBook()
operation
-
is called using
POST /api/books
method and URI -
passed a BookDTO, containing the fields to use marshaled in JSON or XML
-
calls the
@Service
to handle the details of creating the Book -
returns the created book using a BookDTO
@RequestMapping(path=BOOKS_PATH,
method=RequestMethod.POST,
consumes={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE},
produces={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
public ResponseEntity<BookDTO> createBook(@RequestBody BookDTO bookDTO) {
BookDTO result = booksService.createBook(bookDTO); (1)
URI uri = ServletUriComponentsBuilder.fromCurrentRequestUri()
.replacePath(BOOK_PATH)
.build(result.getId()); (2)
ResponseEntity<BookDTO> response = ResponseEntity.created(uri).body(result);
return response; (3)
}
1 | DTO from HTTP Request supplied to and result DTO returned from @Service method |
2 | URI of created instance calculated for Location response header |
3 | DTO marshalled back to caller with HTTP Response |
373.2. findBooksByExample()
The findBooksByExample()
operation
-
is called using "POST /api/books/example" method and URI
-
passed a BookDTO containing the properties to search for using JSON or XML
-
calls the
@Service
to handle the details of finding the books after mapping thePageable
from query parameters -
converts the
Page<BookDTO>
into aBooksPageDTO
to address marshaling concerns relative to XML. -
returns the page as a
BooksPageDTO
@RequestMapping(path=BOOKS_PATH + "/example",
method=RequestMethod.POST,
consumes={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE},
produces={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
public ResponseEntity<BooksPageDTO> findBooksByExample(
@RequestParam(value="pageNumber",defaultValue="0",required=false) Integer pageNumber,
@RequestParam(value="pageSize",required=false) Integer pageSize,
@RequestParam(value="sort",required=false) String sortString,
@RequestBody BookDTO probe) {
Pageable pageable=PageableDTO.of(pageNumber, pageSize, sortString).toPageable();(1)
Page<BookDTO> result=booksService.findBooksMatchingAll(probe, pageable); (2)
BooksPageDTO resultDTO = new BooksPageDTO(result); (3)
ResponseEntity<BooksPageDTO> response = ResponseEntity.ok(resultDTO);
return response;
}
1 | PageableDTO constructed from page request query parameters |
2 | @Service accepts DTO arguments for call and returns DTO constructs mixed with Spring Data paging types |
3 | type-specific BooksPageDTO marshalled back to caller to support type-specific XML namespaces |
373.3. WebClient Example
The following snippet shows an example of using a WebClient to request a page of finder results from the API. WebClient is part of the Spring WebFlux libraries — which implement reactive streams. The use of WebClient here is purely for example and not a requirement of anything created. However, using WebClient did force my hand to add JAXB to the DTO mappings since Jackson XML is not yet supported by WebFlux. RestTemplate does support both Jackson and JAXB XML mapping - which would have made mapping simpler.
@Autowired
private WebClient webClient;
...
UriComponentsBuilder findByExampleUriBuilder = UriComponentsBuilder
.fromUri(serverConfig.getBaseUrl())
.path(BooksController.BOOKS_PATH).path("/example");
...
//given
MediaType mediaType = ...
PageRequest pageable = PageRequest.of(0, 5, Sort.by(Sort.Order.desc("published")));
PageableDTO pageSpec = PageableDTO.of(pageable); (1)
BookDTO allBooksProbe = BookDTO.builder().build(); (2)
URI uri = findByExampleUriBuilder.queryParams(pageSpec.getQueryParams()) (3)
.build().toUri();
WebClient.RequestHeadersSpec<?> request = webClient.post()
.uri(uri)
.contentType(mediaType)
.body(Mono.just(allBooksProbe), BookDTO.class)
.accept(mediaType);
//when
ResponseEntity<BooksPageDTO> response = request
.retrieve()
.toEntity(BooksPageDTO.class).block();
//then
then(response.getStatusCode().is2xxSuccessful()).isTrue();
BooksPageDTO page = response.getBody();
1 | limiting query results to the first page, ordered by "published" descending, with a page size of 5 |
2 | create a "match everything" probe |
3 | pageable properties added as query parameters |
WebClient/WebFlux does not yet support Jackson XML
WebClient and WebFlux do not yet support Jackson XML.
This is what primarily forced the example to leverage JAXB for XML.
WebClient/WebFlux automatically makes the decision/transition under the covers once an |
374. Summary
In this module we learned:
-
to integrate a Spring Data MongoDB Repository into an end-to-end application, accessed through an API
-
implement a service tier that completes useful actions
-
to make a clear distinction between DTOs and BOs
-
to identify data type architectural decisions required for DTO and BO types
-
to setup proper container feature boundaries using annotations and injection
-
implement paging requests through the API
-
implement page responses through the API
Heroku Database Deployments
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
375. Introduction
This lecture contains several "how to" aspects of building and deploying a Docker image to Heroku with Postgres or Mongo database dependencies.
375.1. Goals
You will learn:
-
how to build a Docker image as part of the build process
-
how to provision Postgres and Mongo internet-based resources for use with Internet deployments
-
how to deploy an application to the Internet to use provisioned Internet resources
375.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
provision a Postgres Internet-accessible database
-
provision a Mongo Internet-accessible database
-
map Heroku environment variables to Spring Boot properties using a shell script
-
build a Docker image as part of the build process
376. Production Properties
We will want to use real database instances for remote deployment and we will get to that in a moment. For right now, let's take a look at some of the Spring Boot properties we need defined in order to properly make use of a live database.
376.1. Postgres Production Properties
We will need the following RDBMS properties individually enumerated for Postgres at runtime.
-
spring.datasource.url
-
spring.datasource.username
-
spring.datasource.password
The remaining properties can be pre-set with a properties configuration embedded within the application.
##rdbms
#spring.datasource.url=... (1)
#spring.datasource.username=...
#spring.datasource.password=...
spring.jpa.show-sql=false
spring.jpa.hibernate.ddl-auto=validate
spring.flyway.enabled=true
1 | datasource properties will be supplied at runtime |
376.2. Mongo Production Properties
We will need the Mongo URL and luckily that and the user credentials can be expressed in a single URL construct.
-
spring.data.mongodb.uri
There are no other mandatory properties to be set beyond the URL.
#mongo
#spring.data.mongodb.uri=mongodb://... (1)
1 | mongodb.uri — with credentials — will be supplied at runtime |
377. Parsing Runtime Properties
The Postgres URL will be provided to us by Heroku using the DATABASE_URL property, as shown below. They provide a means to separate the URL into variables, but that feature was not available for Docker deployments at the time I investigated. We can easily do that ourselves.
A logically equivalent Mongo URL will be made available from the Mongo resource provider. Luckily we can pass that single value in as the Mongo URL and be done.
DATABASE_URL=postgres://postgres:secret@postgres:5432/postgres
MONGODB_URI=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin
377.1. Environment Variable Script
Earlier — when PORT was the only thing we had to worry about — I showed a way to do that with the Dockerfile CMD
option.
ENV PORT=8080
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
CMD ["--server.port=${PORT}"]
We could have expanded that same approach if we could get the DATABASE_URL broken down into URL and credentials. With that option not available, we can delegate to a script.
The following snippet shows the skeleton of the run_env.sh
script we will put in place to address all types of environment variables we will see in our environments.
The shell will launch whatever command was passed to it ("$@") and append the OPTIONS
that it was able to construct from environment variables.
We will place this in the src/docker
directory to be picked up by the Dockerfile.
The resulting script was based upon the much more complicated example referenced in the script comments below.
#!/bin/bash
OPTIONS=""
#ref: https://raw.githubusercontent.com/heroku/heroku-buildpack-jvm-common/main/opt/jdbc.sh
if [[ -n "${DATABASE_URL:-}" ]]; then
# ...
fi
if [[ -n "${MONGODB_URI:-}" ]]; then
# ...
fi
if [[ -n "${PORT:-}" ]]; then
# ...
fi
exec "$@" ${OPTIONS}
377.2. Script Output
The following snippet shows an example args
print of what is passed into the Spring Boot application
from the run_env.sh script.
args [--spring.datasource.url=jdbc:postgresql://postgres:5432/postgres,
--spring.datasource.username=postgres, --spring.datasource.password=secret,
--spring.data.mongodb.uri=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin]
Review: Remember that our environment will look like the following.
DATABASE_URL=postgres://postgres:secret@postgres:5432/postgres
MONGODB_URI=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin
Lets break down the details.
377.3. Heroku DataSource Property
The following script will break out the URL, username, and password and turn them into Spring Boot properties on the command line.
if [[ -n "${DATABASE_URL:-}" ]]; then
pattern="^postgres://(.+):(.+)@(.+)$" (1)
if [[ "${DATABASE_URL}" =~ $pattern ]]; then (2)
JDBC_DATABASE_USERNAME="${BASH_REMATCH[1]}"
JDBC_DATABASE_PASSWORD="${BASH_REMATCH[2]}"
JDBC_DATABASE_URL="jdbc:postgresql://${BASH_REMATCH[3]}"
OPTIONS="${OPTIONS} --spring.datasource.url=${JDBC_DATABASE_URL} "
OPTIONS="${OPTIONS} --spring.datasource.username=${JDBC_DATABASE_USERNAME}"
OPTIONS="${OPTIONS} --spring.datasource.password=${JDBC_DATABASE_PASSWORD}"
else
OPTIONS="${OPTIONS} --no.match=${DATABASE_URL}" (3)
fi
fi
1 | regular expression defining three (3) extraction variables |
2 | if the regular expression finds a match, we will pull that apart and assemble the properties |
3 | if no match is found, --no.match is populated with the DATABASE_URL to be printed for debug reasons |
377.4. Testing DATABASE_URL
You can test the script so far by invoking it with the environment variable set.
(export DATABASE_URL=postgres://postgres:secret@postgres:5432/postgres && bash ./src/docker/run_env.sh echo)
--spring.datasource.url=jdbc:postgresql://postgres:5432/postgres --spring.datasource.username=postgres --spring.datasource.password=secret
Of course, that same test could be done with a Docker image.
docker run --rm \
  -e DATABASE_URL=postgres://postgres:secret@postgres:5432/postgres \ (1)
  -v `pwd`/src/docker/run_env.sh:/tmp/run_env.sh \ (2)
  openjdk:17.0.2 \
  /tmp/run_env.sh echo (3)
1 | setting the environment variable |
2 | mounting the file in the /tmp directory |
3 | running script and passing in echo as executable to call |
377.5. MongoDB Properties
The Mongo URL we get from Atlas can be passed in as a single property.
If Postgres were this straightforward, we could have stuck with the CMD option.
if [[ -n "${MONGODB_URI:-}" ]]; then OPTIONS="${OPTIONS} --spring.data.mongodb.uri=${MONGODB_URI}" fi
(export MONGODB_URI=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin && bash ./src/docker/run_env.sh echo)
--spring.data.mongodb.uri=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin
377.6. PORT Property
We need to continue supporting the PORT
environment variable and will add a block for that.
if [[ -n "${PORT:-}" ]]; then OPTIONS="${OPTIONS} --server.port=${PORT}" fi
(export DATABASE_URL=postgres://postgres:secret@postgres:5432/postgres && \
 export MONGODB_URI=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin && \
 export PORT=7777 && \
 bash ./src/docker/run_env.sh echo)
--spring.datasource.url=jdbc:postgresql://postgres:5432/postgres --spring.datasource.username=postgres --spring.datasource.password=secret --spring.data.mongodb.uri=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin --server.port=7777
378. Docker Image
With the embedded properties set, we are now ready to build a Docker image. We will use a Maven plugin to build the image using Docker since the memory requirement for the default Spring Boot Docker image exceeds the Heroku Memory limit for free deployments.
378.1. Dockerfile
The following shows the Dockerfile being used. It is 99% of what can be found in the Spring Boot Maven Plugin Documentation except for:
-
a tweak on the ARG JAR_FILE command to add our bootexec classifier. Note that our local Maven pom.xml JAR_FILE declaration will take care of this as well.
-
src/docker/run_env.sh script added to search for environment variables and break them down into Spring Boot properties
FROM openjdk:17.0.2 as builder
WORKDIR application
ARG JAR_FILE=target/*-bootexec.jar (1)
COPY ${JAR_FILE} application.jar
RUN java -Djarmode=layertools -jar application.jar extract
FROM openjdk:17.0.2
WORKDIR application
COPY --from=builder application/dependencies/ ./
COPY --from=builder application/spring-boot-loader/ ./
COPY --from=builder application/snapshot-dependencies/ ./
COPY --from=builder application/application/ ./
COPY src/docker/run_env.sh ./ (2)
RUN chmod +x ./run_env.sh
ENTRYPOINT ["./run_env.sh", "java","org.springframework.boot.loader.JarLauncher"]
1 | Spring Boot executable JAR has bootexec Maven classifier suffix added |
2 | added a filter script to break certain environment variables into separate properties |
378.2. Spotify Docker Build Maven Plugin
At this point with a Dockerfile in hand, we have the option of building the image with straight docker build
or docker-compose build
.
We can also use the Spotify Docker Maven Plugin to automate the build of the Docker image as part of the module build.
The plugin is forming an explicit path to the JAR file and using the JAR_FILE variable to pass that into the Dockerfile.
Note that by supplying the JAR_FILE
reference here, we can build images without worrying about the wildcard glob in the Dockerfile locating too many matches.
<plugin>
<groupId>com.spotify</groupId>
<artifactId>dockerfile-maven-plugin</artifactId>
<configuration>
<repository>${project.artifactId}</repository>
<tag>${project.version}</tag>
<buildArgs>
<JAR_FILE>target/${project.build.finalName}-${spring-boot.classifier}.jar</JAR_FILE> (1)
</buildArgs>
</configuration>
<executions>
<execution>
<goals>
<goal>build</goal>
</goals>
</execution>
</executions>
</plugin>
1 | JAR_FILE is passed in as a build argument to Docker |
[INFO] Successfully built dfe2383f7f68
[INFO] Successfully tagged xxx:6.0.1-SNAPSHOT
[INFO]
[INFO] Detected build of image with id dfe2383f7f68
...
[INFO] Successfully built dockercompose-votes-svc:6.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
379. Heroku Deployment
The following are the basic steps taken to deploy the Docker image to Heroku.
379.1. Provision MongoDB
MongoDB offers a Mongo database service on the Internet called Atlas. They offer free accounts and the ability to setup and operate database instances at no cost.
-
Create account using email address
-
Create a new project
-
Create a new (free) cluster within that project
-
Create username/password for DB access
-
Set up an Internet IP whitelist (can be wildcard/all) of where to accept connections from. I normally set that to everywhere — at least until I locate the Heroku IP address.
-
Obtain a URL to connect to. It will look something like the following:
mongodb+srv://(username):(password)@(host)/(dbname)?retryWrites=true&w=majority
379.2. Provision Application
Refer back to the Heroku lecture for details, but essentially
-
create a new application
-
set the MONGODB_URI environment variable for that application
-
set the
SPRING_PROFILES_ACTIVE
environment variable toproduction
$ heroku create [app-name]
$ heroku config:set MONGODB_URI=mongodb+srv://(username):(password)@(host)/votes_db... --app (app-name)
$ heroku config:set SPRING_PROFILES_ACTIVE=production
379.3. Provision Postgres
We can provision Postgres directly on Heroku itself.
$ heroku addons:create heroku-postgresql:hobby-dev
Creating heroku-postgresql:hobby-dev on ⬒ xxx... free
Database has been created and is available
 ! This database is empty. If upgrading, you can transfer
 ! data from another database with pg:copy
Created postgresql-shallow-xxxxx as DATABASE_URL
Use heroku addons:docs heroku-postgresql to view documentation
After the provision, we can see that a compound DATABASE_URL was provided.
$ heroku config --app app-name
=== app-name Config Vars
DATABASE_URL: postgres://(username):(password)@(host):(port)/(database)
MONGODB_URI: mongodb+srv://(username):(password)@(host)/votes_db?...
SPRING_PROFILES_ACTIVE: production
379.4. Deploy Application
$ docker tag (artifactId):(tag) registry.heroku.com/(app-name)/web
$ heroku container:login
Login Succeeded
$ docker push registry.heroku.com/(app-name)/web
The push refers to repository [registry.heroku.com/(app-name)/web]
6f38c0466979: Pushed
69a39355b3ac: Pushed
ea12a8cf9f94: Pushed
d2451ff7adf4: Layer already exists
...
7ef368776582: Layer already exists
latest: digest: sha256:21197b193a6657dd5e6f10d6751f08faa416a292a17693ac776b211520d84d19 size: 3035
379.5. Release the Application
Invoke the Heroku release command to make the changes visible to the Internet.
$ heroku container:release web --app (app-name)
Releasing images web to (app-name)... done
Tail the Heroku log to verify the application starts and the production profile is active.
$ heroku logs --app (app-name) --tail
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (2.7.0)
The following profiles are active: production (1)
1 | make sure the application is running the correct profile |
380. Summary
In this module we learned:
-
how to provision internet-based MongoDB and Postgres resources
-
how to deploy an application to the Internet to use provisioned Postgres and Mongo database resources
-
how to build a Docker image as part of the build process
Assignment 5: DB
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
This assignment is broken up into three mandatory sections and an optional BONUS section for those that need extra credit.
The first two mandatory sections functionally work with Spring Data Repositories outside the scope of the HomeSales workflow. You will create a "service" class that is a peer to your HomeSales Service implementation — but this new class is there solely as a technology demonstration and wrapper for the provided JUnit tests. You will work with both JPA and Mongo Repositories as a part of these first two sections.
In the third mandatory section — you will select one of the two technologies, update the end-to-end thread with a Spring Data Repository, and add in some Pageable and Page aspects for unbounded collection query/results.
In the fourth, optional BONUS section — you may switch technology selections and implement Homes or Buyers using a Spring Data Repository.
381. Assignment 5a: Spring Data JPA
381.1. Database Schema
381.1.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of preparing a relational database for use with an application. You will:
-
define a database schema that maps a single class to a single table
-
implement a primary key for each row of a table
-
define constraints for rows in a table
-
define an index for a table
-
define a DataSource to interface with the RDBMS
-
automate database schema migration with the Flyway tool
381.1.2. Overview
In this portion of the assignment you will be defining, instantiating, and performing minor population of a database schema for HomeSale. We will use a single, flat database design.
I have shown the creation of a sequence despite choosing to use a varchar(36)
primary key for the table.
Please keep the sequence in your schema as sequences are commonly needed in RDBMS solutions.
Use char(36) to allow consistency with Mongo portion of assignment
Use the char-based primary key to make the JPA and Mongo portions of the assignment as similar as possible.
We will use a UUID for the JPA portion, but any unique String fitting into 36 characters will work.
|
Postgres access with Docker/Docker Compose
If you have Docker/Docker Compose, you can instantiate a Postgres instance using the scripts in the ejava-springboot/env directory.
You can also get client access using the following command.
|
You can switch between in-memory H2 (default) and Postgres once you have your property files setup either by manual change of the source code or using runtime properties with the
|
381.1.3. Requirements
-
Configure database properties so that you are able to work with both in-memory and external database. In-memory will be good for automated testing. Postgres will be good for interactive access to the database while developing.
-
make the default database in-memory
You can set the default database to h2 and activate the console by setting the following properties.
application.properties
#default test database
spring.datasource.url=jdbc:h2:mem:homesales
spring.h2.console.enabled=true
You can turn on verbose JPA/SQL-related DEBUG logging using the following properties.
application.properties
spring.jpa.show-sql=true
logging.level.org.hibernate.type=trace
-
provide a "postgres" Spring profile option to use Postgres DB instead of in-memory
You can switch to an alternate database by overriding the URL in a Spring profile. Add a postgres profile in the src/main tree to optionally connect to an external Postgres server versus the in-memory H2 server. Include any necessary credentials. The following example assumes you will be connecting to the postgres DB launched by the class docker-compose.
application-postgres.properties
spring.datasource.url=jdbc:postgresql://localhost:5432/postgres
spring.datasource.username=postgres
spring.datasource.password=secret
This is only a class assignment. Do not store credentials in files checked into CM or packaged within your Spring Boot executable JAR in a real environment. Make them available via a file location at runtime when outside of a classroom. -
define the location for your schema migrations for flyway to automate.
spring.flyway.locations=classpath:db/migration/common,classpath:db/migration/{vendor}
-
-
Create a set of SQL migrations below
src/main/resources/db/migration
that will define the database schemaRefer to the JPA songs example for a schema example. However, that example assumes that all schema is vendor-neutral and does not use vendor-specific sibling files. -
create SQL migration file(s) to define the base HomeSale schema. This can be hand-generated or metadata-generated once the
@Entity
class is later defined-
define a sequence called
hibernate_sequence
-
define a HomeSale table with the necessary columns to store a flattened HomeSale object
-
use the
id
field as a primary key. Make this a char-based column type of at least 36 characters (varchar(36)
) to host a UUID string -
define column constraints for size and not-null
-
-
account for when the table(s)/sequence(s) already exist by defining a DROP before creating
drop sequence IF EXISTS hibernate_sequence;
-
-
Create a separate SQL migration file to add indexes
-
define a non-unique index on home_id
-
define a non-unique index on buyer_id
-
-
Create a SQL migration file to add one row in the HomeSale table
CURRENT_DATE
can be used to generate a value forlist_date
andsale_date
You may optionally arrange your population to mimic the lifecycle of a HomeSale by first inserting the initial listing and following up with a SQL update to later add the purchase information. This would allow you to test any not-null constraints against the expected lifecycle on a row. You can manually test schema files by launching the Postgres client and reading the SQL file in from stdin
docker-compose exec -T postgres psql -U postgres < (path to file)
-
Place vendor-neutral SQL in a common directory and vendor-specific SQL in a {vendor} directory as defined in your flyway properties. The example below shows a possible layout.
src/main/resources/
`-- db
    `-- migration
        |-- common
        |-- h2
        `-- postgres
I am not anticipating any vendor-specific schema population, but it is a good practice if you use multiple database vendors between development and production.
-
-
Configure the application to establish a connection to the database and establish a DataSource
-
declare a dependency on
spring-boot-starter-data-jpa
-
declare a dependency on the
h2
database driver for default testing -
declare a dependency on the
postgresql
database driver for optional production-ready testing -
declare the database driver dependencies as
scope=runtime
See jpa-song-example
pom.xml for more details on declaring these dependencies.
-
-
Configure Flyway so that it automatically populates the database schema
-
declare a dependency on the
flyway-core
schema migration library -
declare the Flyway dependency as scope=runtime
See jpa-song-example
pom.xml for more details on declaring this plugin
-
-
Enable (and pass) the provided MyJpa5a_SchemaTest that extends Jpa5a_SchemaTest. This test will verify connectivity to the database and the presence of the HomeSale table.
-
supply necessary
@SpringBootTest
test configurations unique to your environment -
supply an implementation of the
DbTestHelper
to be injected into all tests
-
-
Package the JUnit test case such that it executes with Maven as a surefire test
381.1.4. Grading
Your solution will be evaluated on:
-
define a database schema that maps a single class to a single table
-
whether you have expressed your database schema in one or more files
-
-
implement a primary key for each row of a table
-
whether you have identified the primary key for the table
-
-
define constraints for rows in a table
-
whether you have defined size and nullable constraints for columns
-
-
define an index for a table
-
whether you have defined an index for any database columns
-
-
automate database schema migration with the Flyway tool
-
whether you have successfully populated the database schema from a set of files
-
-
define a DataSource to interface with the RDBMS
-
whether a DataSource was successfully injected into the JUnit class
-
381.1.5. Additional Details
-
This and the following RDBMS/JPA and MongoDB tests are all client-side DB interaction tests. Calls from JUnit are directed at the service class. The provided starter example supplies an alternate
@SpringBootConfiguration
test configuration to bypass the extra dependencies defined by the full
@SpringBootApplication
server class — which can cause conflicts. The
@SpringBootConfiguration
class is latched by the "assignment-tests" profile to keep it from being accidentally used by the later API tests.
@SpringBootConfiguration
@EnableAutoConfiguration
@Profile("assignment-tests") (1)
public class DbAssignmentTestConfiguration {

@SpringBootTest(classes={DbAssignmentTestConfiguration.class,
        JpaAssignmentDBConfiguration.class,
        DbClientTestConfiguration.class})
@ActiveProfiles(profiles={"assignment-tests","test"}, resolver = TestProfileResolver.class) (2)
//@ActiveProfiles(profiles={"assignment-tests","test", "postgres"})
@Slf4j
public class MyJpa5a_SchemaTest extends Jpa5a_SchemaTest {
1 profile prevents @SpringBootConfiguration from being used as a @Configuration for other tests
2 assignment-tests profile is activated for these service/DB-level tests only
-
The following
starter
configuration files are used by the tests in this section:-
DbAssignmentTestConfiguration
- discussed above. Provides a@SpringBootConfiguration
class that removes the@SpringBootApplication
dependencies from view. -
DbClientTestConfiguration
- this defines the@Bean
factories for theDbTestHelper
and any supporting components. -
JpaAssignmentDBConfiguration
- this defines server-side beans used in this DB-centric portion of the assignment. It provides@Bean
factories that will get replaced when running the application and performing the end-to-end tests.
-
381.2. Entity/BO Class
381.2.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of defining a JPA @Entity
class and performing basic CRUD actions.
You will:
-
define a PersistenceContext containing an
@Entity
class -
inject an EntityManager to perform actions on a Persistence Unit and database
-
map a simple
@Entity
class to the database using JPA mapping annotations -
perform basic database CRUD operations on an
@Entity
-
define transaction scopes
-
implement a mapping tier between BO and DTO objects
381.2.2. Overview
In this portion of the assignment you will be creating an @Entity/Business Object for a HomeSale, mapping that to a table, and performing CRUD actions with an EntityManager.
Your work will be focused in the following areas:
-
creating a business object (BO)/
@Entity
class to map to the database schema you have already completed -
creating a mapper class that will map properties to/from the DTO and BO instances
-
creating a test helper class, implementing
DbTestHelper
that will assist the provided JUnit tests to interact and inspect your persistence implementation. -
implementing a
JpaAssignmentService
component that will perform specific interactions with the database
The interfaces for the DbTestHelper
and JpaAssignmentService
are located in the support module containing the tests.
The DbTestHelper
interface extends the ApiTestHelper
interface you have previously implemented; your helper will simply extend it with the additional functionality.
The BO and mapper classes will be used throughout this overall assignment, including the end-to-end.
The testHelper class will be used for all provided JUnit tests.
The JpaAssignmentService
will only be used during the JPA-specific sections of this assignment.
It is a sibling to your HomeSalesService
component(s) for the purpose of doing one-off database assignment tasks.
It will not be used in the Mongo portions of the assignment or the end-to-end.
381.2.3. Requirements
-
Create a Business Object (BO)/
@Entity
class that represents the HomeSale and will be mapped to the database. ASaleBO
"marker" interface has been provided for your BO class to implement. It has no properties. All interactions with this object by the JUnit test will be through calls to the testHelper and mapper classes. You must complete the following details:-
identify the class as a JPA
@Entity
-
identify a String primary key field with JPA
@Id
-
supply a default constructor
-
supply other constructs as desired to help use and interact with this business object
The BO class will map to a single, flat database row. Keep that in mind when accounting for the Home address. The properties/structure of the BO class do not have to be 1:1 with the properties/structure of the DTO class. -
supply a lifecycle event handler that will assign the string representation of a UUID to the
id
field if null when persisted.
@Entity @PrePersist Lifecycle Callback to assign Primary Key
@PrePersist
void prePersist() {
    if (id == null) {
        id = UUID.randomUUID().toString();
    }
}
If your Entity class is not within the default scan path, you can manually register the package path using the @EntityScan.basePackageClasses
annotation property. This should be done within a
@Configuration
class in the
src/main
portion of your code. The JUnit test will make the condition and successful correction obvious.
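A skeletal sketch of the BO class is shown below. Everything other than the @Entity, @Id, and default constructor requirements is an assumption; your flattened property names and types may differ.
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import java.time.LocalDate;

@Entity
@Table(name = "homesale") //assumed to match the migration's table name
public class HomeSaleBO implements SaleBO {
    @Id
    @Column(length = 36) //varchar(36) column holding the UUID string
    private String id;
    private String homeId;     //illustrative flattened properties
    private String buyerId;
    private LocalDate listDate;
    private LocalDate saleDate;

    public HomeSaleBO() {} //required default constructor
    //getters/setters and the @PrePersist callback shown above
}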
-
-
Create a mapper class that will map to/from HomeSale BO and DTO. A templated
SalesMapper
interface has been provided for this. You must complete the details.-
map from BO to DTO
-
map from DTO to BO
Remember — the structure of the BO and DTO classes does not have to match. Encapsulate any mapping details between the two within this mapper class implementation. The following code snippet shows an example implementation of the templated mapper interface.
public class HomeSaleMapper implements SalesMapper<HomeSaleDTO, HomeSaleBO> {
    public HomeSaleBO map(HomeSaleDTO dto) { ... }
    public HomeSaleDTO map(HomeSaleBO bo) { ... }
}
-
-
Implement the
mapAndPersist
method in yourJpaAssignmentService
. It must perform the following:-
accept a HomeSale DTO
-
map the DTO to a HomeSale BO (using your mapper)
-
persist the BO
-
map the persisted BO to a DTO (will have a primary key assigned)
-
return the resulting DTO
The BO must be persisted. The returned DTO must match the input DTO and express the primary key assigned by the database.
Be sure to address @Transactional
details when modifying the database. -
-
Implement the
queryByAgeRange
method in theJpaAssignmentService
using a@NamedQuery
. It must perform the following:-
query the database using a JPA
@NamedQuery
with JPA query syntax to locateHomeSale BO
objects within a saleAge min/max range, inclusive. You may use the following query to start with and add ordering to complete (a sketch tying these pieces together appears after this list):
select h from HomeSaleBO h where h.saleAge between :min and :max
-
min
andmax
are variable integer values passed in at runtime -
order the results by
id
ascending -
name the query "<EntityName>.findBySaleAgeRange"
EntityName defaults to the Java SimpleName for the class. Make sure all uses of EntityName (i.e., JPAQL queries and JPA @NamedQuery
name prefixes) match.
-
-
map the BO list returned from the query to a list of DTOs (using your mapper)
-
return the list of DTOs
-
-
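Tying the queryByAgeRange requirements above together, the following is a hedged sketch. The @NamedQuery placement on the entity and the em/mapper field names are assumptions based on the earlier sections.
import java.util.List;
import java.util.stream.Collectors;

//declared on the HomeSaleBO entity class; name must be "<EntityName>.findBySaleAgeRange"
@NamedQuery(name = "HomeSaleBO.findBySaleAgeRange",
    query = "select h from HomeSaleBO h " +
            "where h.saleAge between :min and :max " +
            "order by h.id ASC")

//service method sketch, assuming an injected EntityManager (em) and mapper
public List<HomeSaleDTO> queryByAgeRange(int min, int max) {
    List<HomeSaleBO> bos = em.createNamedQuery(
                "HomeSaleBO.findBySaleAgeRange", HomeSaleBO.class)
            .setParameter("min", min)
            .setParameter("max", max)
            .getResultList();
    return bos.stream().map(mapper::map).collect(Collectors.toList());
}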
Enable (and pass) the provided MyJpa5b_EntityTest that extends Jpa5b_EntityTest. This test will perform checks of the above functionality using:
-
DbTestHelper
-
mapper
-
your DTO and BO classes
-
a functional JPA environment
-
-
Package the JUnit test case such that it executes with Maven as a surefire test
381.2.4. Grading
Your solution will be evaluated on:
-
inject an EntityManager to perform actions on a Persistence Unit and database
-
whether an EntityManager was successfully injected into the JUnit test
-
whether an EntityManager was successfully injected into your
JpaAssignmentService
implementation
-
-
map a simple
@Entity
class to the database using JPA mapping annotations-
whether a new HomeSale BO class was created for mapping to the database
-
-
implement a mapping tier between BO and DTO objects
-
whether the mapper was able to successfully map all fields between BO to DTO
-
whether the mapper was able to successfully map all fields between DTO to BO
-
-
perform basic database CRUD operations on an
@Entity
-
whether the HomeSale BO was successfully persisted to the database
-
whether a named JPA-QL query was used to locate the entity in the database
-
-
define transaction scopes
-
whether the test method was declared to use a single transaction for all steps of the test method
-
381.3. JPA Repository
381.3.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of defining a JPA Repository. You will:
-
declare a
JpaRepository
for an existing JPA@Entity
-
perform simple CRUD methods using provided repository methods
-
add paging and sorting to query methods
-
implement queries based on predicates derived from repository interface methods
-
implement queries based on POJO examples and configured matchers
-
implement queries based on
@NamedQuery
or@Query
specification
381.3.2. Overview
In this portion of the assignment you will define a JPA Repository to perform basic CRUD and query actions.
Your work will be focused in the following areas:
-
creating a JPA Spring Data Repository for persisting HomeSale BO objects
-
implementing repository queries within your
JpaAssignmentService
component
381.3.3. Requirements
-
define a HomeSale JPARepository that can support basic CRUD and complete the queries defined below.
-
enable JpaRepository use with the
@EnableJpaRepositories
annotation on a@Configuration
class. Spring Data Repositories are primarily interfaces and the implementation is written for you at runtime using proxies and declarative configuration information. If your Repository class is not within the default scan path, you can manually register the package path using the @EnableJpaRepositories.basePackageClasses
annotation property. This should be done within the
src/main
portion of your code. The JUnit test will make the condition and successful correction obvious. -
inject the JPA Repository class into your
JpaAssignmentService
component. This will be enough to tell you whether the Repository is properly defined and registered with the Spring context. -
implement the
findByHomeIdByDerivedQuery
method details which must-
accept a homeId and a Pageable specification with pageNumber, pageSize, and sort specification
-
return a Page of matching BOs that comply with the input criteria
-
this query must use the Spring Data Derived Query technique
-
-
implement the
findByExample
method details which must-
accept a HomeSale BO probe instance and a Pageable specification with pageNumber, pageSize, and sort specification
-
return a Page of matching BOs that comply with the input criteria
-
this query must use the Spring Data Query by Example technique
Override the default ExampleMatcher to ignore any fields declared with built-in data types that cannot be null.
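For example, the call might look something like the following sketch. The ignored saleAge path is an assumption for a primitive field that can never be null in the probe; Example, ExampleMatcher, and Page come from org.springframework.data.domain.
ExampleMatcher matcher = ExampleMatcher.matching().withIgnorePaths("saleAge");
Page<HomeSaleBO> matches = homeSaleRepository.findAll(
        Example.of(probe, matcher), pageable); //probe carries the values to match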
-
-
implement the
findByAgeRangeByAnnotatedQuery
method details which must-
accept a minimum and maximum age and a Pageable specification with pageNumber and pageSize
-
return a Page of matching BOs that comply with the input criteria and ordered by
id
-
this query must use the Spring Data Named Query technique and leverage the "HomeSaleBO.findBySaleAgeRange"
@NamedQuery
created in the previous section. Named Queries do not support adding Sort criteria from the Pageable parameter. An "order by" for
id
must be expressed within the
@NamedQuery
.
... order by h.id ASC
There is no technical relationship between the name of the service method you are implementing and the repository method defined on the JPA Spring Data Repository. The name of the service method is mangled to describe "how" you must implement it — not what the name of the repository method should be.
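A hedged sketch of a repository interface supporting the three query styles above follows. The interface name and the derived-query method are illustrative; only the named-query method name is fixed by the convention described above.
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.repository.query.Param;

public interface HomeSaleRepository extends JpaRepository<HomeSaleBO, String> {
    //derived query; implementation generated from the method name
    Page<HomeSaleBO> findByHomeId(String homeId, Pageable pageable);

    //resolves to the "HomeSaleBO.findBySaleAgeRange" @NamedQuery by naming convention
    Page<HomeSaleBO> findBySaleAgeRange(@Param("min") int min,
                                        @Param("max") int max,
                                        Pageable pageable);
}
Query by Example requires no extra declaration here because JpaRepository already extends QueryByExampleExecutor.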
-
-
Enable (and pass) the provided MyJpa5c_RepositoryTest that extends Jpa5c_RepositoryTest. This test will populate the database with content and issue query requests to your
JpaAssignmentService
implementation. -
Package the JUnit test case such that it executes with Maven as a surefire test
381.3.4. Grading
Your solution will be evaluated on:
-
declare a
JpaRepository
for an existing JPA@Entity
-
whether a
JPARepository
was defined and injected into the assignment service helper
-
-
perform simple CRUD methods using provided repository methods
-
whether the database was populated with test instances
-
-
add paging and sorting to query methods
-
whether the query methods were implemented with pageable specifications
-
-
implement queries based on predicates derived from repository interface methods
-
whether a derived query based on method signature was successfully performed
-
-
implement queries based on POJO examples and configured matchers
-
whether a query by example query was successfully performed
-
-
implement queries based on
@NamedQuery
or@Query
specification-
whether a query using a
@NamedQuery
or@Query
source was successfully performed
-
382. Assignment 5b: Spring Data Mongo
382.1. Mongo Client Connection
382.1.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of setting up a project to work with Mongo. You will:
-
declare project dependencies required for using Spring’s MongoOperations/MongoTemplate API
-
define a connection to a MongoDB
-
inject a MongoOperations/MongoTemplate instance to perform actions on a database
382.1.2. Overview
In this portion of the assignment you will be adding required dependencies and configuration properties necessary to communicate with the Flapdoodle test database and an external MongoDB instance.
MongoDB access with Docker/Docker Compose
If you have Docker/Docker Compose, you can instantiate a MongoDB instance using the docker-compose scripts in the ejava-springboot root directory.
$ docker-compose up -d mongodb
Creating ejava_mongodb_1 ... done
You can also get client access using the following command.
$ docker-compose exec mongodb mongo -u admin -p secret
...
Welcome to the MongoDB shell.
>
You can switch between Flapdoodle and Mongo in your tests once you have your property files set up.
382.1.3. Requirements
-
Configure database properties so that you are able to work with both the Flapdoodle test database and a Mongo instance. Flapdoodle will be good for automated testing. MongoDB will be good for interactive access to the database while developing. Spring Boot will automatically configure tests for Flapdoodle if it is in the classpath and in the absence of a Mongo database URI.
You can turn on verbose MongoDB-related DEBUG logging using the following properties
application.properties
logging.level.org.springframework.data.mongodb=DEBUG
-
provide a
mongodb
profile option to use an external MongoDB server instead of the Flapdoodle test instance
application-mongodb.properties
#spring.data.mongodb.host=localhost
#spring.data.mongodb.port=27017
#spring.data.mongodb.database=test
#spring.data.mongodb.authentication-database=admin
#spring.data.mongodb.username=admin
#spring.data.mongodb.password=secret
spring.data.mongodb.uri=mongodb://admin:secret@localhost:27017/test?authSource=admin
Configure via Individual Properties or Compound URL
Spring Data Mongo has the capability to set individual configuration properties or one compound URL.
This is only a class assignment. Do not store credentials in files checked into CM or packaged within your Spring Boot executable JAR in a real environment. Make them available via a file location at runtime when outside of a classroom.
Flapdoodle will be Default Database during Testing
Flapdoodle will be the default during testing unless deactivated by the presence of the
spring.data.mongodb
connection properties.
-
-
Configure the application to establish a connection to the database and establish a MongoOperations (the interface)/MongoTemplate (the commonly referenced implementation class)
-
declare a dependency on
spring-boot-starter-data-mongodb
-
declare a dependency on the
de.flapdoodle.embed.mongo
database driver for default testing withscope=test
See mongo-book-example
pom.xml for more details on declaring these dependencies.
-
-
Enable (and pass) the provided
MyMongo5a_ClientTest
that extendsMongo5a_ClientTest
. This test will verify connectivity to the database. -
Package the JUnit test case such that it executes with Maven as a surefire test
382.1.4. Grading
Your solution will be evaluated on:
-
declare project dependencies required for using Spring’s MongoOperations/MongoTemplate API
-
whether required Maven dependencies were declared to operate and test the application with Mongo
-
-
define a connection to a MongoDB
-
whether a URL to the database was defined when the
mongodb
profile was activated
-
-
inject a MongoOperations/MongoTemplate instance to perform actions on a database
-
whether a MongoOperations client could be injected
-
whether the MongoOperations client could successfully communicate with the database
-
382.1.5. Additional Details
-
As with the RDBMS/JPA tests, these MongoDB tests are all client-side DB interaction tests. Calls from JUnit are directed at the service class. The provided starter example supplies an alternate
@SpringBootConfiguration
test configuration to bypass the extra dependencies defined by the full
@SpringBootApplication
server class — which can cause conflicts. The
@SpringBootConfiguration
class is latched by the "assignment-tests" profile to keep it from being accidentally used by the later API tests.
@SpringBootConfiguration
@EnableAutoConfiguration
@Profile("assignment-tests") (1)
public class DbAssignmentTestConfiguration {

@SpringBootTest(classes={DbAssignmentTestConfiguration.class,
        MongoAssignmentDBConfiguration.class,
        DbClientTestConfiguration.class})
@ActiveProfiles(profiles={"assignment-tests","test"}, resolver = TestProfileResolver.class) (2)
//@ActiveProfiles(profiles={"assignment-tests","test", "mongodb"})
public class MyMongo5a_ClientTest extends Mongo5a_ClientTest {
1 profile prevents @SpringBootConfiguration from being used as a @Configuration for other tests
2 assignment-tests profile is activated for these service/DB-level tests only
382.2. Mongo Document
382.2.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of defining a Spring Data Mongo @Document class and performing basic CRUD actions. You will:
-
implement basic unit testing using a (seemingly) embedded MongoDB
-
define a
@Document
class to map to MongoDB collection -
perform whole-document CRUD operations on a
@Document
class using the Java API -
perform queries with paging properties
382.2.2. Overview
In this portion of the assignment you will be
creating a @Document
/Business Object for a HomeSale, mapping that to a collection, and performing CRUD actions with a MongoOperations/MongoTemplate.
Reuse BO and Mapper classes
It is designed and expected that you will be able to re-use the same HomeSale BO and Mapper classes from the JPA portion of the assignment.
You should not need to create new ones.
The BO class will need a few Spring Data Mongo annotations but the mapper created for the JPA portion should be 100% reusable here as well.
382.2.3. Requirements
-
Create (reuse) a Business Object (BO) class that represents the HomeSale and will be mapped to the database. A SaleBO "marker" interface has been provided for your BO class to implement. It has no properties. All interactions with this object by the JUnit test will be through calls to the testHelper and mapper classes. You must complete the following details:
-
identify the class as a Spring Data Mongo
@Document
-
identify a String primary key field with Spring Data Mongo
@Id
This is a different @Id
annotation than the JPA@Id
annotation. -
supply a default constructor
-
-
Reuse the mapper class from the earlier JPA Entity portion of this assignment.
-
Implement the
mapAndPersist
method in yourMongoAssignmentService
. It must perform the following:-
accept a HomeSale DTO
-
map the DTO to a HomeSale BO (using your mapper)
-
persist the BO to the database
-
map the persisted BO to a DTO (will have a primary key assigned)
-
return the resulting DTO
The BO must be persisted. The returned DTO must match the input DTO and express the primary key assigned by the database.
-
-
Implement the
queryByAgeRange
method in yourMongoAssignmentService
. It must perform the following:-
query the database to locate matching
HomeSale BO
documents within a saleAge min/max range, inclusive. You may use the injected MongoOperations client find command, a query, and HomeSaleBO.class as a request parameter. You may make use of the following query (a sketch tying these pieces together appears after this list):
Query.query(Criteria.where("saleAge").gte(min).lte(max))
-
min and max are variable integer values passed in at runtime
-
order the results by
id
ascending
-
-
map the BO list returned from the query to a list of DTOs (using your mapper)
-
return the list of DTOs
-
-
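Tying the pieces above together, a hedged sketch of the Mongo version of the service method follows, assuming an injected MongoOperations (mongoOps) and the mapper reused from the JPA portion.
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import java.util.List;
import java.util.stream.Collectors;

public List<HomeSaleDTO> queryByAgeRange(int min, int max) {
    Query query = Query.query(Criteria.where("saleAge").gte(min).lte(max))
            .with(Sort.by(Sort.Direction.ASC, "id")); //order results by id ascending
    List<HomeSaleBO> bos = mongoOps.find(query, HomeSaleBO.class);
    return bos.stream().map(mapper::map).collect(Collectors.toList());
}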
Enable (and pass) the provided MyMongo5b_DocumentTest that extends Mongo5b_DocumentTest. This test will perform checks of the above functionality using:
-
DbTestHelper
-
mapper
-
your DTO and BO classes
-
a functional MongoDB environment
-
-
Package the JUnit test case such that it executes with Maven as a surefire test
382.2.4. Grading
Your solution will be evaluated on:
-
define a
@Document
class to map to MongoDB collection-
whether the BO class was properly mapped to the database, including document and primary key
-
-
perform whole-document CRUD operations on a
@Document
class using the Java API-
whether a successful insert and query of the database was performed with the injected MongoOperations / MongoTemplate
-
382.3. Mongo Repository
382.3.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of defining a Mongo Repository. You will:
-
declare a
MongoRepository
for an existing@Document
-
implement queries based on predicates derived from repository interface methods
-
implement queries based on POJO examples and configured matchers
-
implement queries based on annotations with JSON query expressions on interface methods
-
add paging and sorting to query methods
382.3.2. Overview
In this portion of the assignment you will define a Mongo Repository to perform basic CRUD and query actions.
Your work will be focused in the following areas:
-
creating a Mongo Spring Data Repository for persisting HomeSale BO objects
-
implementing repository queries within your
MongoAssignmentService
component
382.3.3. Requirements
-
define a HomeSale MongoRepository that can support basic CRUD and complete the queries defined below.
-
enable MongoRepository use with the
@EnableMongoRepositories
annotation on a@Configuration
class. Spring Data Repositories are primarily interfaces and the implementation is written for you using proxies and declarative configuration information. If your Repository class is not within the default scan path, you can manually register the package path using the @EnableMongoRepositories.basePackageClasses
annotation property. This should be done within a
@Configuration
class in the
src/main
portion of your code. The JUnit test will make the condition and successful correction obvious. -
inject the Mongo Repository class into the
MongoAssignmentService
. This will be enough to tell you whether the Repository is properly defined and registered with the Spring context. -
implement the
findByHomeIdByDerivedQuery
method details which must-
accept a homeId and a Pageable specification with pageNumber, pageSize, and sort specification
-
return a Page of matching BOs that comply with the input criteria
-
this query must use the Spring Data Derived Query technique
-
-
implement the
findByExample
method details which must-
accept a HomeSale BO probe instance and a Pageable specification with pageNumber, pageSize, and sort specification
-
return a Page of matching BOs that comply with the input criteria
-
this query must use the Spring Data Query by Example technique
Override the default ExampleMatcher to ignore any fields declared with built-in data types that cannot be null.
-
-
implement the
findByAgeRangeByAnnotatedQuery
method details which must-
accept a minimum and maximum age and a Pageable specification with pageNumber and pageSize
-
return a Page of matching BOs that comply with the input criteria and ordered by
id
-
this query must use the Spring Data JSON-based Query Methods technique and annotate the repository method with a
@Query
definition.You may use the following JSON query expression for this query. Mongo JSON query expressions only support positional arguments and are zero-relative.
value="{ saleAge : {$gte:?0, $lte: ?1} }" (1)
1 min is position 0 and max is position 1 in the method signature
Annotated Queries do not support adding Sort criteria from the Pageable parameter. You may use the following sort expression in the annotation:
sort="{id:1}"
There is no technical relationship between the name of the service method you are implementing and the repository method defined on the Mongo Spring Data Repository. The name of the service method is mangled to describe "how" you must implement it — not what the name of the repository method should be.
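A hedged sketch of the annotated repository method follows; the interface name is illustrative.
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;

public interface HomeSaleMongoRepository extends MongoRepository<HomeSaleBO, String> {
    //JSON-based query; ?0 binds min and ?1 binds max; sort is fixed in the annotation
    @Query(value = "{ saleAge : {$gte:?0, $lte: ?1} }", sort = "{id:1}")
    Page<HomeSaleBO> findBySaleAgeRange(int min, int max, Pageable pageable);
}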
-
-
Enable (and pass) the provided
MyMongo5c_RepositoryTest
that extendsMongo5c_RepositoryTest
. This test will populate the database with content and issue query requests to yourMongoAssignmentService
implementation. -
Package the JUnit test case such that it executes with Maven as a surefire test
382.3.4. Grading
Your solution will be evaluated on:
-
declare a
MongoRepository
for an existing@Document
-
whether a MongoRepository was declared and successfully integrated into the test case
-
-
implement queries based on predicates derived from repository interface methods
-
whether a dynamic query was implemented via the expression of the repository interface method name
-
-
implement queries based on POJO examples and configured matchers
-
whether a query was successfully implemented using an example with a probe document and matching rules
-
-
implement queries based on annotations with JSON query expressions on interface methods
-
whether a query was successfully implemented using annotated repository methods containing a JSON query and sort documents
-
-
add paging and sorting to query methods
-
whether queries were performed with sorting and paging
-
383. Assignment 5c: Spring Data Application
383.1. API/Service/DB End-to-End
383.1.1. Purpose
In this portion of the assignment, you will demonstrate your knowledge of integrating a Spring Data Repository into an end-to-end application, accessed through an API. You will:
-
implement a service tier that completes useful actions
-
implement controller/service layer interactions relative to DTO and BO classes
-
determine the correct transaction propagation property for a service tier method
-
implement paging requests through the API
-
implement page responses through the API
383.1.2. Overview
In this portion of the assignment you will be taking elements of the application that you have worked on either together or independently and integrating them into an end-to-end application from the API, thru services, security, and the repository, to the database and back.
The fundamental scope of the assignment is to implement the existing HomeSales use cases (including security layer(s)), updated to use a database and to address the impacts of database interaction on eventual size and scale. You will:
-
choose either an RDBMS/JPA or Mongo target solution
-
replace your existing HomeSale Service and Repository implementation classes with implementations that are based upon your selected repository technology
For those that
-
augmented your assignment2/API solution in-place to meet the requirements of each follow-on assignment — you will continue that pattern by replacing the core service logic with mapper/BO/repository logic.
-
layered your solution along assignment boundaries, you will override the assignment2 service and repository components with a new service implementation based on the mapper/BO/repository logic. The support modules show an example of doing this.
-
-
update the controller and service interfaces to address paging
It is, again, your option whether to
-
simply add the new paging endpoint to your existing controller and API client class
-
subclass the controller and API class to add the functionality
The Homes/Buyers support modules are provided in a layered approach to help identify what is new with each level and to keep what you are basing your solutions on consistent. It is much harder to implement the layered approach, but it offers some challenges and experience in integrating multiple components.
-
-
leave the Homes and Buyers implementation as in-memory repositories
There are two optional support modules supplied:
The tests within each module work, but extensive testing with HomeSales has not been performed. It is anticipated that you will continue to use the in-memory Homes and Buyers that you have been using to date. However, it is your option to use those modules in any way.
Continued use of in-memory Homes and Buyers is expected
The homebuyers-support-svcjpa and homebuyers-support-svcmongo modules are provided as examples of how the flow can be implemented.
It is not a requirement that you change from the in-memory versions to complete this assignment.
383.1.3. Requirements
-
Select a database implementation choice (JpaRepository or MongoRepository).
This is a choice to move forward with. The option you don’t select will still be part of your dependencies, source tree, and completed unit integration tests. -
Update/Replace your legacy HomeSale Service and Repository components with a service and repository based on Spring Data Repository.
-
all CRUD calls will be handled by the Repository — no need for DataSource, EntityManager or MongoOperations/MongoTemplate
-
all queries must accept Pageable and return Page
By following the rule early in assignment 2, you made this transition extremely easy on yourself. -
the service should
-
accept DTO types as input and map to BOs for interaction with the Repository
This should be consistent with your original service interface. The only change should be the conversion of DTO to BO and back. -
map BO types to DTOs as output
This should also be consistent with your original service interface. The Page class has an easy way to map from Page<T1> to Page<T2>. When you combine that with your mapper, it can be a single line of code.
dtos = bos.map(bo->map(bo));
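Put together, a replacement service query method might reduce to a sketch like the following (the repository and mapper names are assumed from earlier sections):
public Page<HomeSaleDTO> findByHomeId(String homeId, Pageable pageable) {
    Page<HomeSaleBO> bos = homeSaleRepository.findByHomeId(homeId, pageable);
    return bos.map(mapper::map); //Page.map() converts Page<BO> to Page<DTO>
}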
-
-
-
Add the capability to your controller to accept full paging parameters. For those augmenting in-place, you may simply modify your existing finder methods to accept/return the additional information. For those adopting the layered approach, you may add an additional URI to accept/return the additional information.
Example new Paged URI for Layered Approach
public interface HomesPageableAPI extends HomesAPI {
    public static final String HOMES_PAGED_PATH = "/api/homes/paged";
There is a convenience method within
PageableDTO
(from
ejava-dto-util
) that will convert pageNumber, pageSize, and sort to a Spring Data
Pageable
.
@GetMapping(path = HomesPageableAPI.HOMES_PAGED_PATH, ...)
public ResponseEntity<HomePageDTO> getHomesPage(
        @RequestParam(value = "pageNumber", required = false) Integer pageNumber,
        @RequestParam(value = "pageSize", required = false) Integer pageSize,
        @RequestParam(value = "sort", required = false) String sort) {
    Pageable pageable = PageableDTO.of(pageNumber, pageSize, sort).toPageable();
-
Add the capability to your controller to return paging information with the contents. The current pageNumber, pageSize, and sort that relate to the supplied data must be returned with the contents.
You may use the
PageDTO<T>
class (from
ejava-dto-util
) to automate/encapsulate much of this. The primary requirement is to convey the information. The option is yours whether to use this library, demonstrated in the class JPA and Mongo examples as well as the Homes and Buyers examples (from
homebuyers-support-pageable-svc
).
public class HomePageDTO extends PageDTO<HomeDTO> {
    protected HomePageDTO() {}
    public HomePageDTO(Page<HomeDTO> page) {
        super(page);
    }
}
Page<HomeDTO> result = service.getHomes(pageable);
HomePageDTO resultDTO = new HomePageDTO(result);
return ResponseEntity.ok(resultDTO);
-
Add the capability to your API calls to provide and process the additional page information.
There is a convenience method within
PageableDTO
(from
ejava-dto-util
) that will serialize the pageNumber, pageSize, and sort of Spring Data’s Pageable into query parameters.
PageableDTO pageableDTO = PageableDTO.fromPageable(pageable); (1)
URI url = UriComponentsBuilder
        .fromUri(homesUrl)
        .queryParams(pageableDTO.getQueryParams()) (2)
        .build().toUri();
1 create DTO abstraction from Spring Data’s local Pageable abstraction
2 transfer DTO representation into query parameters
-
Write a JUnit Integration Test Case that will
-
populate the database with multiple HomeSales with different and similar properties
-
query for HomeSales based on criteria that will match some of the HomeSales and return a page of contents that is less than the total matches in the database (i.e., make the pageSize smaller than the total number of matches)
-
page through the results until the end of data is encountered
Again — the DTO Paging framework in common and the JPA Songs and Mongo Books examples should make this less heroic than it may sound.
The requirement is not that you integrate with the provided DTO Paging framework. The requirement is that you implement end-to-end paging and the provided framework can take a lot of the API burden off of you. You may implement page specification and page results in a unique manner as long as it is end-to-end. -
-
Package the JUnit test case such that it executes with Maven as a surefire test
383.1.4. Grading
Your solution will be evaluated on:
-
implement a service tier that completes useful actions
-
whether you successfully implemented a query for HomeSales for a specific homeId
-
whether the service tier implemented the required query with Pageable inputs and a Page response
-
whether this was demonstrated thru a JUnit test
-
-
implement controller/service layer interactions when it comes to using DTO and BO classes
-
whether the controller worked exclusively with DTO business classes and implemented a thin API facade
-
whether the service layer mapped DTO and BO business classes and encapsulated the details of the service
-
-
determine the correct transaction propagation property for a service tier method
-
depending on the technology you select and the use case you have implemented — whether the state of the database can ever reach an inconsistent state
-
-
implement paging requests through the API
-
whether your controller implemented a means to express Pageable request parameters for queries
-
-
implement page responses through the API
-
whether your controller supported returning a page-worth of results for query results
-
383.1.5. Additional Details
-
This is a bit in the weeds, but it is an issue that came up when following the layered approach, not paying attention to component scan paths when placing classes into Java packages, and not wanting to change code that you have built upon. The provided assignment3
@SpringBootApplication
and
SecurityConfiguration
classes are siblings of the same Java package and the
basePackageClasses
property of the database assignment is looking only for packages — not classes — and will accidentally bring in the
HomeSalesSecurityApp
class with a normal reference.
src/main/java/info/ejava_student/starter/assignment3/
|-- aop
`-- security
    |-- HomeSalesSecurityApp.java
    |-- SecurityConfiguration.java (1)
    |-- homesales
    `-- identity
1 naming SecurityConfiguration
in the
basePackageClasses
causes the entire package from that point down to be included.
By using
@ComponentScan
, we can define the scan path in more detail and supply a list of filters for what to exclude.
Excluding Classes from Scan Path
@SpringBootApplication
@ComponentScan(
    basePackageClasses={
        ...
        SecurityConfiguration.class, //SecurityFilterChain
        ...
    },
    excludeFilters = {
        @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE,
            classes = { HomeSalesSecurityApp.class })
    })
384. Assignment 5d: Bonus
If you feel you got a slow start to the semester and are now finally getting on a roll, the following is an optional BONUS assignment for your consideration. If you complete this bonus assignment successfully and need a ~half-grade boost to get you into the next tier, you may want to read on.
384.1. Homes/Buyers Alternate Repository
384.1.1. Purpose
In this portion of the optional BONUS assignment, you will further demonstrate your knowledge of integrating a Spring Data Repository into an end-to-end application, accessed through an API, using the opposite technology from the one you chose for HomeSales.
You will also demonstrate the ability to weave a new component implementation into the Spring context to replace an existing in-memory Repository.
384.1.2. Homes/Buyers Alternate Repository
In this portion of the optional BONUS assignment you will be replacing the in-memory Homes and Buyers Services/Repositories with a repository-backed implementation.
You may use the same technology choice from your end-to-end solution to map all three applications.
You will do so for HomeSales as well as Homes and Buyers and include security and functional capabilities.
You went through the JPA and Mongo setup as part of your Spring Data Assignment. There are further JPA and Mongo end-to-end examples in the homebuyers-support-db-svcjpa
and homebuyers-support-db-svcmongo
support modules that can be leveraged.
Use what you can out of homebuyers-support-db-svcjpa and homebuyers-support-db-svcmongo modules, but when mapping to the alternate database type — you will clearly need to fully replace that portion of the code.
It is expected that you will be able to fully use the security and controller support layers and then insert your mapping of the Homes or Buyers to the database.
384.1.3. Requirements
-
Implement a repository and service class mapping Homes to the database of choice. You will need to implement the HomeBO class and mapping between the DTO and BO type.
Portions of the JPA thread for Homes are provided in the homebuyers-support-db-svcjpa
example. -
Implement a repository and service class mapping Buyers to the database of choice. You will need to implement the BuyerBO class and mapping between the DTO and BO type.
Portions of the Mongo thread for Buyers are provided in the homebuyers-support-db-svcmongo
example. -
Provide a JUnit unit integration test that will demonstrate a successful registration — from Home/Buyer creation to HomeSale completion, using your database mappings for all services.
-
Turn in a source tree with complete Maven modules that will build the web application.
384.1.4. Grading
Your solution will be evaluated on:
-
whether you were able to implement persistence using a Spring Data Repository for Homes and Buyers.
-
whether you were able to integrate the database repositories for Homes and Buyers into the service and security logic for Homes and Buyers as part of the end-to-end scenario.
Bean Validation
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
385. Introduction
Well-designed software components follow a contract of what is required of inputs and outputs; constraints; or pre-conditions and post-conditions. Validation of inputs and outputs needs to be performed at component boundaries. These conditions need to be well-advertised, but ideally the checking of these conditions should not overwhelm the functional aspects of the code.
public PersonPocDTO createPOC(PersonPocDTO personDTO) {
if (personDTO==null) {
throw new BadRequestException("createPOC.person: must not be null");
} else if (StringUtils.isNotBlank(personDTO.getId())) {
throw new InvalidInputException("createPOC.person.id: must be null");
... (1)
1 | business logic possibly overwhelmed by validation concerns and actual checks |
This lecture will introduce working with the Bean Validation API to implement declarative and expressive validation.
@Validated(PocValidationGroups.CreatePlusDefault.class)
public PersonPocDTO createPOC(
@NotNull
@Valid PersonPocDTO personDTO); (1)
1 | conditions well-advertised and isolated from target business logic |
385.1. Goals
The student will learn:
-
to add declarative pre-conditions and post-conditions to components using the Bean Validation API
-
to define declarative validation constraints
-
to implement custom validation constraints
-
to enable injected call validation for components
-
to identify patterns/anti-patterns for validation
385.2. Objectives
At the conclusion of this lecture and related exercises, the student will be able to:
-
add Bean Validation to their project
-
add declarative data validation constraints to types and method parameters
-
configure a
ValidatorFactory
and obtain aValidator
-
programmatically validate an object
-
programmatically validate parameters to and response from a method call
-
inspect constraint violations
-
enable Spring/AOP validation for components
-
implement a custom validation constraint
-
implement a cross-parameter validation constraint
-
configure Web API constraint violation responses
-
configure Web API parameter validation
-
configure JPA validation
-
configure Spring Data Mongo Validation
-
identify some patterns/anti-patterns for using validation
386. Background
Bean Validation is a standard that originally came out of Java EE/SE as JSR-303 (1.0) in the 2009 timeframe and was later updated with JSR-349 (1.1) in 2013 and JSR-380 (2.0) in 2017. It was meant to simplify validation — reducing the chance of error and the clutter of validation within the business code that required validation. The standard is not specific to any particular tier (e.g., UI, Web, Service, DB) but has been integrated into several of the individual frameworks. [76]
Implementations include:
Hibernate Validator is the original and current reference implementation and is used within Spring Boot today.
387. Dependencies
To get started with validation in Spring Boot — we add a dependency on spring-boot-starter-validation
.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
That will bring in the validation reference implementation from Hibernate and an Expression Language (EL) implementation used when interpolating constraint messages.
[INFO] +- org.springframework.boot:spring-boot-starter-validation:jar:2.7.0:compile
[INFO] | +- org.apache.tomcat.embed:tomcat-embed-el:jar:9.0.63:compile (3)
[INFO] | \- org.hibernate.validator:hibernate-validator:jar:6.2.3.Final:compile (2)
[INFO] | +- jakarta.validation:jakarta.validation-api:jar:2.0.2:compile (1)
[INFO] | +- org.jboss.logging:jboss-logging:jar:3.4.3.Final:compile
[INFO] | \- com.fasterxml:classmate:jar:1.5.1:compile
1 | overall Bean Validation API |
2 | Bean Validation API reference implementation from Hibernate |
3 | Expression Language (EL) implementation used for constraint message interpolation |
388. Declarative Constraints
At the core of the Bean Validation API are declarative constraint annotations we can place directly into the source code.
388.1. Data Constraints
The following snippet shows a class with a property that is required to be not-null when valid.
import javax.validation.constraints.NotNull;
...
class AClass {
@NotNull
private String aValue;
...
Constraints do not Actively Prevent Invalid State
The constraint does not actively prevent the property from being set to an invalid value. Unlike with the Lombok annotations, no class code is written as a result of the validation annotations. The constraint will identify whether the property is currently valid when validated. The validating caller can decide what to do with the invalid state. |
388.2. Common Built-in Constraints
You can find a list of built-in constraints in the Bean Validation Spec and the Hibernate Validator documentation. A few common ones include:
@NotNull, @NotEmpty, @NotBlank
@Size, @Min, @Max
@Pattern, @Email
@Past, @PastOrPresent, @Future, @FutureOrPresent
Additional constraints are provided by:
-
Hibernate Additional Constraints (e.g., @CreditCardNumber)
-
Java Bean Validation Extension (JBVExt) (e.g. @Alpha, @IsDate, @IPv4)
We will take a look at how to create a custom constraint later in this lecture.
388.3. Method Constraints
We can provide pre-condition and post-condition constraints on methods and constructors.
The following snippet shows a method that requires a non-null and valid parameter and will return a non-null result.
Constraints for input are placed on the individual input parameters.
Constraints on the output (as well as cross-parameter constraints) are placed on the method.
The @Validated
annotation is added to components to trigger Spring to enable validation for injected components.
import javax.validation.Valid;
import org.springframework.validation.annotation.Validated;
...
@Component
@Validated (3)
public class AService {
@NotNull (2)
public String aMethod(@NotNull @Valid AClass aParameter) { (1)
return ...;
}
1 | method requires a non-null parameter with valid content |
2 | the result of the method is required to be non-null |
3 | @Validated triggers Spring’s use of the Bean Validation API to validate the call |
Null Properties Are Considered @Valid Unless Explicitly Constrained with @NotNull
It is a best practice to consider a null value as valid unless explicitly constrained with @NotNull .
We will eventually show all this integrated within Spring, but first we want to make sure we understand the plumbing and what Spring is doing under the covers.
389. Programmatic Validation
To work with the Bean Validation API directly, our initial goal is to obtain a standard javax.validation.Validator
instance.
import javax.validation.Validator;
...
Validator validator;
This can be obtained manually or through injection.
389.1. Manual Validator Instantiation
We can obtain a Validator using one of the Validation
builder methods to return a ValidatorFactory
.
The following snippet shows the builder providing an instance of the default factory provider, with the default configuration.
We will come back to the configure()
method later.
If we have no configuration changes, we can simplify with a call to buildDefaultValidatorFactory()
.
The Validator
instance is obtained from the ValidatorFactory
.
Both the factory and validator instances are thread-safe.
import javax.validation.Validation;
...
ValidatorFactory myValidatorFactory = Validation.byDefaultProvider()
.configure()
//configuration commands
.buildValidatorFactory(); (1)
//ValidatorFactory myValidatorFactory = Validation.buildDefaultValidatorFactory();
Validator myValidator = myValidatorFactory.getValidator(); (1)
1 | factory and validator instances are thread-safe, initialized during bean construction, and used during instance methods |
389.2. Inject Validator Instance
With the validation starter dependency comes a default Validator. For components, we can simply have it injected.
@Autowired
private Validator validator;
389.3. Customizing Injected Instance
If we want the best of both worlds and have some customizations to make, we can define a @Bean
factory to replace the AutoConfigure and return our version of the Validator
instead.
@Bean (1)
public Validator validator() {
return Validation.byDefaultProvider()
.configure()
//configuration commands
.buildValidatorFactory()
.getValidator();
}
1 | A custom Validator @Bean within the application will override the default provided by Spring Boot |
389.4. Review: Class with Constraint
The following validation example(s) will use the following class with a non-null constraint on one of its properties.
@Getter
public class AClass {
@NotNull
private String aValue;
...
389.5. Validate Object
The most straightforward use of the validation programmatic API is to validate individual objects.
The object to be validated is supplied and a Set<ConstraintViolation>
is returned.
No exceptions are thrown by the Bean Validation API itself for constraint violations.
Exceptions are only thrown for invalid use of the API and to report violations within frameworks like
Contexts and Dependency Injection (CDI) or Spring Boot.
The following snippet shows an example of using the validator to validate an object with at least one constraint error.
//given - @NotNull aProperty set to null by ctor
AClass invalidAClass = new AClass();
//when - checking constraints
Set<ConstraintViolation<AClass>> violations = myValidator.validate(invalidAClass);(1)
violations.stream().forEach(v-> log.info("field name={}, value={}, violated={}",
v.getPropertyPath(), v.getInvalidValue(), v.getMessage()));
//then - there will be at least one violation
then(violations).isNotEmpty(); (2)
1 | programmatic call to validate object |
2 | non-empty return set means violations were found |
The result of the validation is a Set<ConstraintViolation>
.
Each constraint violation identifies the:
property path of the field in error (getPropertyPath)
invalid value (getInvalidValue)
violated constraint message (getMessage)
The following shows the output of the example.
field name=aValue, value=null, violated=must not be null
Specific Property Validation
We can also validate a value against the definition of a specific property.
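For example, here is a short sketch using the standard validateValue and validateProperty calls, reusing the myValidator and invalidAClass instances from above:
//check a candidate value against AClass.aValue's constraints without an instance
Set<ConstraintViolation<AClass>> propViolations =
        myValidator.validateValue(AClass.class, "aValue", null);
//check a single property of an existing instance
propViolations = myValidator.validateProperty(invalidAClass, "aValue");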
389.6. Validate Method Calls
We can also validate calls to and results from methods (and constructors too). This is commonly performed by AOP code — rather than anything closely related to the business logic.
The following snippet shows a class with a method that has input and response constraints.
The input parameter must be valid and not null.
The response must also be not null.
A @Valid
constraint on an input argument or response will trigger the object validation — which we just demonstrated — to be performed.
public class AService {
@NotNull
public String aMethod(@NotNull @Valid AClass aParameter) { (1) (2)
return ...
}
1 | @NotNull constrains aParameter to always be non-null |
2 | @Valid triggers validation contents of aParameter |
With those validation rules in place, we can check them for the following sample call.
//given
AService myService = new AService(); (1)
AClass myValue = new AClass();
//when
String result = myService.aMethod(myValue);
1 | Note: Service shown here as POJO. Must be injected for container to intercept and subject to validation |
Please note that the code above is a plain POJO call. Validation is only automatically performed for injected components. We will use this call to describe how to programmatically validate a method call. |
389.7. Identify Method Using Java Reflection
Before we can validate anything, we must identify the descriptors of the call and resolve a Method
reference using Java Reflection.
In the following example snippet we locate the method called aMethod
on the AService
class that accepts one parameter of AClass
type.
Object[] methodParams = new Object[]{ myValue };
Class<?>[] methodParamTypes = new Class<?>[]{ AClass.class };
Method methodToCall = AService.class.getMethod("aMethod", methodParamTypes);
The code above has now resolved a reference to the following method call
((AService)myService).aMethod((AClass)myValue);
389.8. Programmatically Check for Parameter Violations
Without actually making the call, we can check whether the given parameters violate defined method constraints by accessing the ExecutableValidator
from the Validator
object.
Executable
is a generalized java.lang.reflect
type for Method
and Constructor
.
//when
Set<ConstraintViolation<AService>> violations = validator
.forExecutables() (1)
.validateParameters(myService, methodToCall, methodParams);
1 | returns ExecutableValidator |
The following snippet shows the reporting of the validation results when subjecting our myValue
parameter to the defined validation rules of the aMethod()
method.
//then
then(violations).hasSize(1);
ConstraintViolation<?> violation = violations.iterator().next();
then(violation.getPropertyPath().toString()).isEqualTo("aMethod.arg0.aValue");
then(violation.getMessage()).isEqualTo("must not be null");
then(violation.getInvalidValue()).isEqualTo(null);
then(violation.getInvalidValue()).isEqualTo(myValue.getAValue());
389.9. Validate Method Results
We can also validate what is returned against the defined rules of the aMethod()
method using the same service instance and method reflection references from the parameter validation.
Except in this case, methodToCall
has already been called and we are now holding onto the result value.
The following example shows an example of validating a null result against the return rules of the aMethod()
method.
//given
String nullResult = null;
//when
violations = validator.forExecutables()
.validateReturnValue(myService, methodToCall, nullResult);
Since null is not allowed, one violation is reported.
//then
then(violations).hasSize(1);
violation = violations.iterator().next();
then(violation.getPropertyPath().toString()).isEqualTo("aMethod.<return value>");
then(violation.getMessage()).isEqualTo("must not be null");
then(violation.getInvalidValue()).isEqualTo(nullResult);
390. Method Parameter Naming
Validation is able to easily gather meaningful field path information from classes and properties.
When we validated the AClass
instance, we were told the given name of the property in error supplied from reflection.
class AClass {
@NotNull
private String aValue;
...
field name=aValue, value=null, violated=must not be null
However, reflection by default does not provide the given names of parameters — only the position.
public class AService {
@NotNull
public String aMethod(@NotNull @Valid AClass aParameter) {
return ...
}
[ERROR] SelfDeclaredValidatorTest.method_arguments:96
expected: "aMethod.aParameter.aValue"
but was: "aMethod.arg0.aValue"
1 | By default, argument position supplied (arg0 ) — not argument name |
There are two ways to solve this.
390.1. Add -parameters to Java Compiler Command
The first way to solve this would be to add the
-parameters
option to the Java compiler.
The following snippet shows how to do this for the
maven-compiler-plugin
.
Note that this only applies to what is compiled with Maven and not what is actively worked on within the IDE.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<parameters>true</parameters>
</configuration>
</plugin>
The above appears to work fine with Maven compiler plugin 3.10.1, but I encountered issues getting it working with the older 3.8.1 (without an explicit -parameters in compilerArgs).
390.2. Add Custom ParameterNameProvider
Another way to help provide parameter names is to configure the ValidatorFactory
with a ParameterNameProvider
.
ValidatorFactory myValidatorFactory = Validation.byDefaultProvider()
.configure()
.parameterNameProvider(new MyParameterNameProvider()) (1)
.buildValidatorFactory();
1 | configuring ValidatorFactory with custom parameter name provider |
390.3. ParameterNameProvider
The following snippet shows the skeletal structure of a sample ParameterNameProvider
.
It has separate incoming calls for Method
and Constructor
calls and must produce a list of names to use.
This particular example is simply returning the default.
Example work will be supplied next.
import javax.validation.ParameterNameProvider;
import java.lang.reflect.Constructor;
import java.lang.reflect.Executable;
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;
...
public class MyParameterNameProvider implements ParameterNameProvider {
@Override
public List<String> getParameterNames(Constructor<?> ctor) {
return getParameterNames((Executable) ctor);
}
@Override
public List<String> getParameterNames(Method method) {
return getParameterNames((Executable) method);
}
protected List<String> getParameterNames(Executable method) {
List<String> argNames = new ArrayList<>(method.getParameterCount());
for (Parameter p: method.getParameters()) {
//do something to determine Parameter p's desired name
String argName=p.getName(); (1)
argNames.add(argName);
}
return argNames; (2)
}
}
1 | real work to determine the parameter name goes here |
2 | must return a list of parameter names of expected size |
390.4. Named Parameters
The Bean Validation API does not provide a way to annotate parameters with names.
They left that up to us and other Java standards.
In this example, I am making use of javax.inject.Named
to supply a textual name of my choice.
import javax.inject.Named;
...
private static class AService {
@NotNull
public String aMethod(@NotNull @Valid @Named("aParameter") AClass aParameter) {(1)
return ...
}
1 | @Named annotation is providing a name to use for MyParameterNameProvider |
390.5. Determining Parameter Name
Now we can update MyParameterNameProvider
to look for and use the @Named.value
property if provided or default to the name from reflection.
Named named = p.getAnnotation(Named.class);
String argName = named!=null && StringUtils.isNotBlank(named.value()) ?
        named.value() : //@Named.value supplied
        p.getName();    //default reflection name
argNames.add(argName);
The result is a property path that possibly has more meaning.
original: aMethod.arg0.aValue
assisted: aMethod.aParameter.aValue
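To tie this together, the following is a minimal sketch (assuming the AService and AClass examples above are accessible top-level classes) of driving method validation programmatically with the ExecutableValidator API that produced the property paths above.
import java.lang.reflect.Method;
import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.executable.ExecutableValidator;

public class ParameterNameDemo {
    public static void main(String[] args) throws Exception {
        //build a validator wired with the custom parameter name provider
        ExecutableValidator execValidator = Validation.byDefaultProvider()
                .configure()
                .parameterNameProvider(new MyParameterNameProvider())
                .buildValidatorFactory()
                .getValidator()
                .forExecutables();

        AService service = new AService();
        Method method = AService.class.getMethod("aMethod", AClass.class);
        //validate the arguments of aMethod(new AClass()) without invoking the method
        Set<ConstraintViolation<AService>> violations = execValidator
                .validateParameters(service, method, new Object[]{ new AClass() });
        //prints: aMethod.aParameter.aValue: must not be null
        violations.forEach(v -> System.out.println(v.getPropertyPath() + ": " + v.getMessage()));
    }
}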
391. Graphs
Constraint validation has the ability to follow a graph of references annotated with @Valid
.
The following snippet shows an example set of parent classes — each with a reference to equivalent child instances.
The child instance will be invalid in both cases.
Only one of the child references is annotated with @Valid
.
class Child {
@NotNull
String cannotBeNull; (1)
}
class NonTraversingParent {
Child child = new Child(); (2)
}
class TraversingParent {
@Valid (4)
Child child = new Child(); (3)
}
1 | child attribute constrained to not be null |
2 | child instantiated with default instance but not annotated |
3 | child instantiated with default instance |
4 | annotation instructs validator to traverse to the child and validate |
391.1. Graph Non-Traversal
We know from the previous chapter that we can validate any constraints on an object by passing the instance to the validate()
method.
However, validation will stop there if there are no @Valid
annotations on references.
The following snippet shows an example of a parent with an invalid child, but due to the lack of @Valid
annotation, the child state is not evaluated with the parent.
//given
Object nonTraversing = new NonTraversingParent(); (1)
//when
Set<ConstraintViolation<Object>> violations = validator.validate(nonTraversing); (2)
//then
then(violations).isEmpty(); (3)
1 | parent contains an invalid child |
2 | constraint validation does not traverse from parent to child |
3 | child errors are not reported because they were never checked |
391.2. Graph Traversal
Adding the @Valid
annotation to an object reference activates traversal to and validation of the child instance.
This can be continued to grandchildren with follow-on child @Valid
annotations.
The following snippet shows an example of a parent whose validation traverses to the child because of the @Valid
annotation.
import javax.validation.Valid;
...
//given
Object traversing = new TraversingParent(); (1)
//when
Set<ConstraintViolation<Object>> violations = validator.validate(traversing); (2)
//then
String errorMsgs = violations.stream()
.map(v->v.getPropertyPath().toString()+":"+v.getMessage())
.collect(Collectors.joining("\n"));
then(errorMsgs).contains("child.cannotBeNull:must not be null"); (3)
then(violations).hasSize(1);
1 | parent contains an invalid child |
2 | constraint validation traverses the relationship and is performed on parent and child |
3 | child errors reported |
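As a quick sketch of continuing the traversal to grandchildren (class name assumed), each reference along the path must carry @Valid for the walk to continue.
class TraversingGrandParent {
    @Valid //traversal continues: grandparent -> parent -> child
    TraversingParent parent = new TraversingParent();
}
//validating a TraversingGrandParent instance reports path: parent.child.cannotBeNull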
392. Groups
The Bean Validation API supports validation within different contexts using groups.
This allows us to write constraints for specific situations, use them when appropriate, and bypass them when not pertinent.
The earlier examples all used the default javax.validation.groups.Default
group and were evaluated by default because no groups were specified in the call to validate()
.
We can define our own custom groups using interfaces.
392.1. Custom Validation Groups
The following snippet shows an example of three groups.
Create
should only be applied during creation.
CreatePlusDefault
should only be applied during creation but will also apply default validation.
UpdatePlusDefault
can be used to denote constraints unique to updates.
import javax.validation.groups.Default;
...
public interface PocValidationGroups { (3)
public interface Create{}; (1)
public interface CreatePlusDefault extends Create, Default{}; (2)
public interface UpdatePlusDefault extends Default{};
1 | custom group name to be used during create |
2 | groups that extend another group have constraints for that group applied as well |
3 | outer interface not required; used in example to make purpose and source of group obvious |
392.2. Applying Groups
We can assign specific groups to constraints individually.
In the following example,
-
@Null id
will only be validated when validating the Create or CreatePlusDefault groups
-
@Past dob
will be validated for both CreatePlusDefault and Default validation
-
@Size contactPoints and @NotNull contactPoints
will each be validated the same as @Past dob. The default group is Default when left unspecified.
public class PersonPocDTO {
@Null(groups = PocValidationGroups.Create.class, (1)
message = "cannot be specified for create")
private String id;
private String firstName;
private String lastName;
@Past(groups = Default.class) (2)
private LocalDate dob;
@Size(min=1, message = "must have at least one contact point") (3)
private List<@NotNull @Valid ContactPointDTO> contactPoints;
1 | explicitly setting group to Create , which does not include Default |
2 | explicitly setting group to Default |
3 | implicitly setting group to Default |
392.3. Skipping Groups
With use case-specific groups assigned, we can have certain defined constraints ignored.
The following example shows the validation of an object.
It has an assigned id
, which would make it invalid for a create.
However, there are no violations reported because the group for the @Null id
constraint was not validated.
//given
ContactPointDTO invalidForCreate = contactDTOFactory.make(ContactDTOFactory.oneUpId); (1)
//when
Set<ConstraintViolation<ContactPointDTO>> violations = validator.validate(invalidForCreate); (2)
//then
then(violations).hasSize(0);
1 | object contains non-null id , which is invalid for create scenarios |
2 | implicitly validating against the default group. Create group constraints not validated |
Can Redefine Default Group for Type with @GroupSequence
The Bean Validation API makes it possible to redefine the default group for a particular type using a @GroupSequence annotation placed directly on the type. |
392.4. Validating Non-Default Groups
To apply non-default groups to the validation — we can simply add the group interface classes after the object passed to validate()
.
The following snippet shows an example of the CreatePlusDefault
group being applied.
The @Null id
constraint is validated and reported in error because the group it was assigned to was part of the validation call.
//given
...
String expectedError="id:cannot be specified for create";
//when
violations = validator.validate(invalidForCreate, CreatePlusDefault.class); (1)
//then
then(violations).hasSize(1); (2)
then(errors(violations)).contains(expectedError); (3)
1 | validating both the CreatePlusDefault and Default groups |
2 | @Null id violation detected and reported |
3 | errors() is a local helper method written to extract field and text from violation |
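For reference, a sketch of what the errors() helper might look like (the actual implementation is assumed; the tests only rely on it returning field:message strings):
private List<String> errors(Set<? extends ConstraintViolation<?>> violations) {
    return violations.stream()
            .map(v -> v.getPropertyPath().toString() + ":" + v.getMessage())
            .collect(Collectors.toList());
}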
393. Multiple Groups
We have two ways of treating multiple groups
- validate all
-
performed by passing more than one group to the validate() method. Each group is validated in a non-deterministic order.
- short circuit
-
performed by defining a @GroupSequence. Each group is validated in order and the sequence is short-circuited when there is a failure.
393.1. Example Class with Different Groups
The following snippet shows an example of a class with validations that perform at different costs.
-
@Size email
is thought to be simple to validate
-
@Email email
is thought to be a more detailed validation
-
the remaining validations have not been addressed by this classification
public class ContactPointDTO {
@Null (groups = {Create.class},
message = "cannot be specified for create")
private String id;
@NotNull
private String name;
@Size(min=7, max=40, groups= SimplePlusDefault.class) (1)
@Email(groups = DetailedOnly.class) (2)
private String email;
1 | @Size email is thought to be a cheap sanity check, but overly simplistic |
2 | @Email email is thought to be thorough validation, but only worth it for reasonably sane values |
The following snippet shows the groups being used in this example.
public interface Create{};
public interface SimplePlusDefault extends Default {}
public interface DetailedOnly {}
393.2. Validate All Supplied Groups
When groups are passed to validate in a sequence, all groups in that sequence are validated.
The following snippet shows an example with SimplePlusDefault
and DetailedOnly
supplied to validate()
.
Each group will be validated, no matter what the results are.
String nameErr="name:must not be null"; //Default Group
String sizeErr="email:size must be between 7 and 40"; //Simple Group
String formatErr="email:must be a well-formed email address"; //DetailedOnly Group
//when - validating against all groups
Set<ConstraintViolation<ContactPointDTO>> violations = validator.validate(
invalidContact,
SimplePlusDefault.class, DetailedOnly.class);
//then - all groups will have their violations reported
then(errors(violations)).contains(nameErr, sizeErr, formatErr).hasSize(3); (1) (2) (3)
1 | @NotNull name (nameErr ) is part of Default group |
2 | @Size email (sizeErr ) is part of SimplePlusDefault group |
3 | @Email email (formatErr ) is part of DetailedOnly group |
393.3. Short-Circuit Validation
If we instead want to layer validations such that cheap validations come first and more extensive or expensive validations occur only after earlier groups are successful, we can define a @GroupSequence
.
-
Groups earlier in the sequence are performed first.
-
Groups later in the sequence are performed later — but only if all constraints in earlier groups pass. Validation will short-circuit at the individual group level when applying a sequence.
The following snippet shows an example of defining a @GroupSequence
that lists the order of group validation.
@GroupSequence({ SimplePlusDefault.class, DetailedOnly.class }) (1)
public interface DetailOrder {};
1 | defines an order-list of validation groups to apply |
The following example shows how the validation stopped at the SimplePlusDefault
group and did not advance to the DetailedOnly
group.
//when - validating using a @GroupSequence
violations = validator.validate(invalidContact, DetailOrder.class);
//then - validation stops once a group produces a violation
then(errors(violations)).contains(nameErr, sizeErr).hasSize(2); (1)
1 | validation was short-circuited at the group where the first set of errors detected |
393.4. Override Default Group
The @GroupSequence annotation can be directly applied to a type to override the default group when validating instances of that class. |
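A minimal sketch of that technique is below (class and group names hypothetical). Note that the class itself must appear in the sequence to stand in for its own Default constraints.
public interface BasicChecks {}

@GroupSequence({BasicChecks.class, AccountDTO.class}) //redefines the Default group for AccountDTO
public class AccountDTO {
    @NotNull(groups = BasicChecks.class)
    private String owner;

    @Size(min = 10, max = 12) //implicitly part of AccountDTO's own constraints
    private String accountNumber;
}
//validator.validate(account) now runs BasicChecks first and short-circuits on failure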
394. Spring Integration
We saw earlier how we could programmatically validate constraints for Java methods.
This capability was not intended for business code to call — but rather for calls to be intercepted by AOP and constraints applied by that intercepting code.
We can annotate @Component
classes or interfaces with constraints and have Spring perform that validation role for us.
The following snippet shows an example of an interface with a simple aCall
method that accepts an int
parameter that must be greater than 5.
All the information on the method call should be familiar to you by now.
Only the @Validated
annotation is new.
The @Validated
annotation triggers Spring AOP to apply Bean Validation to calls made on the type (interface or class).
import org.springframework.validation.annotation.Validated;
import javax.validation.constraints.Min;
@Validated (2)
public interface ValidatedComponent {
void aCall(@Min(5) int mustBeGE5); (1)
}
1 | interface defines constraint for parameter(s) |
2 | @Validated triggers Spring to perform method validation on component calls |
394.1. Validated Component
The following snippet shows a class implementation of the interface and further declared as a @Component
.
Therefore it can be injected, and its method calls are subject to container interposition using AOP interception.
@Component (1)
public class ValidatedComponentImpl implements ValidatedComponent {
@Override
public void aCall(int mustBeGE5) {
}
}
1 | designates this class implementation to be used for injection |
The component is injected into clients.
@Autowired
private ValidatedComponent component;
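Spring Boot auto-configures the AOP infrastructure behind @Validated when a Bean Validation provider is on the classpath. In a plain (non-Boot) Spring context, a sketch like the following registers it manually:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.validation.beanvalidation.MethodValidationPostProcessor;

@Configuration
public class MethodValidationConfig {
    //registers the interceptor that enforces constraints on @Validated components
    @Bean
    public MethodValidationPostProcessor methodValidationPostProcessor() {
        return new MethodValidationPostProcessor();
    }
}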
394.2. ConstraintViolationException
With the component injected, we can have parameters and results validated against constraints.
The following snippet shows an example component call where a call is made with an invalid parameter.
Spring performs the method validation, throws a javax.validation.ConstraintViolationException
,
and prevents the call.
The Set<ConstraintViolation>
can be obtained from the exception.
At that point we have returned to some familiar territory we covered with programmatic validation.
import javax.validation.ConstraintViolationException;
...
//when
ConstraintViolationException ex = catchThrowableOfType(
() -> component.aCall(1), (1)
ConstraintViolationException.class);
//then
Set<ConstraintViolation<?>> violations = ex.getConstraintViolations();
String errorMsgs = violations.stream()
.map(v->v.getPropertyPath().toString() +":" + v.getMessage())
.collect(Collectors.joining("\n"));
then(errorMsgs).isEqualTo("aCall.mustBeGE5:must be greater than or equal to 5");
1 | Spring intercepts the component call, detects violations, and reports using exception |
394.3. Successful Validation
Of course, if we pass valid parameter(s) to the method —
-
the parameters are validated against the method constraints
-
no exception is thrown
-
the
@Component
method is invoked -
the return object is validated against declared constraints (none in this example)
assertThatNoException().isThrownBy(
()->component.aCall(10) (1)
);
1 | parameter value 10 satisfies the @Min(5) constraint — thus no exception |
394.4. Liskov Substitution Principle
One thing you may have noticed with the selected example is that the interface contained constraints and not the class declaration.
As a matter of fact, if we add any additional constraint beyond what the interface defined — we will get a ConstraintDeclarationException
thrown — preventing the call from completing.
The Bean Validation Specification
describes it
as following the
Liskov Substitution Principle — where anything that is a sub-type of T
can be inserted in place of T
.
Said more specifically to Bean Validation — a sub-type or implementation class method cannot add more restrictive constraints to a call.
@Component
public class ValidatedComponentImpl implements ValidatedComponent {
@Override
public void aCall(@Positive int mustBeGE5) {} //invalid (1)
}
1 | Bean Validation enforces that subtypes cannot be more constraining than their interface or parent type(s) |
javax.validation.ConstraintDeclarationException: HV000151: A method overriding another method must not redefine the parameter constraint configuration, but method ValidatedComponentImpl#aCall(int) redefines the configuration of ValidatedComponent#aCall(int).
394.5. Disabling Parameter Constraint Override
For the Hibernate Validator, the constraint override rule can be turned off during factory configuration. You can find other Hibernate-specific features in the Hibernate Validator Specifics section of the on-line documentation.
The snippet below uses a generic property interface to disable the parameter constraint override rule.
return Validation.byDefaultProvider() (1)
.configure()
.addProperty("hibernate.validator.allow_parameter_constraint_override",
Boolean.TRUE.toString()) (2)
.parameterNameProvider(new MyParameterNameProvider())
.buildValidatorFactory()
.getValidator();
1 | generic factory configuration interface used to initialize factory |
2 | generic property interface used to set custom behavior of Hibernate Validator |
The snippet below uses a Hibernate-specific configurer and custom method to disable the parameter constraint override rule.
return Validation.byProvider(HibernateValidator.class) (1)
.configure()
.allowOverridingMethodAlterParameterConstraint(true) (2)
.parameterNameProvider(new MyParameterNameProvider())
.buildValidatorFactory()
.getValidator();
1 | Hibernate-specific configuration interface used to initialize factory |
2 | Hibernate-specific method used to set custom behavior of Hibernate Validator |
394.6. Spring Validated Group(s)
We saw earlier how we could programmatically validate using explicit validation groups.
Spring uses the @Validated
annotation in a dual role to define that as well.
-
@Validated
on the interface/class triggers validation to occur -
@Validated
on a parameter or method causes the validation to apply the identified group(s)
-
the value attribute of @Validated carries the group(s) for this purpose
//@Validated (1)
public interface PocService {
@NotNull
@Validated(CreatePlusDefault.class) (2)
public PersonPocDTO createPOC(
@NotNull (3)
@Valid PersonPocDTO personDTO); (4)
1 | @Validated at the class/interface/component level triggers validation to be performed |
2 | @Validated at the method level used to apply specific validation groups (CreatePlusDefault ) |
3 | @NotNull at the property level requires personDTO to be supplied |
4 | @Valid at the property level triggered personDTO to be validated |
394.7. Spring Validated Group(s) Example
The following snippet shows an example of a class where the id
property is required to be null when validating against the Create
group.
public class PersonPocDTO {
@Null(groups = Create.class, message = "cannot be specified for create")
private String id; (1)
1 | id must be null only when validating against Create group |
The following snippet shows the constrained method being passed a parameter that is illegal for the Create
constraint group.
A ConstraintViolationException
is thrown with violations.
PersonPocDTO pocWithId = pocFactory.make(oneUpId); (3)
assertThatThrownBy(() -> pocService.createPOC(pocWithId)) (1)
.isInstanceOf(ConstraintViolationException.class)
.hasMessageContaining("createPOC.person.id: cannot be specified for create"); (2)
1 | @Validated on component triggered validation to occur |
2 | @Validated(CreatePlusDefault.class) caused Create and Default rules to be validated |
3 | poc instance created with an id assigned — making it invalid |
395. Custom Validation
Earlier I listed several common, built-in constraints and available library constraints. Hopefully, they provide most or all of what is necessary to meet our validation needs — but there is always going to be that need for custom validation.
The snippet below shows an example of a custom validation being applied to a LocalDate
— that validates the value is of a certain age in years, with an optional timezone offset.
public class ValidatedClass {
@MinAge(age = 16, tzOffsetHours = -4)
private LocalDate dob;
395.1. Constraint Interface Definition
We can start with the interface definition for our custom constraint annotation.
@Documented
@Target({ ElementType.METHOD, FIELD, ANNOTATION_TYPE, PARAMETER, TYPE_USE })
@Retention( RetentionPolicy.RUNTIME )
@Repeatable(value= MinAge.List.class)
@Constraint(validatedBy = {
MinAgeLocalDateValidator.class,
MinAgeDateValidator.class
})
public @interface MinAge {
String message() default "age below minimum({age}) age";
Class<?>[] groups() default {};
Class<? extends Payload>[] payload() default {};
int age() default 0;
int tzOffsetHours() default 0;
@Documented
@Retention(RUNTIME)
@Target({ METHOD, FIELD, ANNOTATION_TYPE, PARAMETER, TYPE_USE })
public @interface List {
MinAge[] value();
}
}
395.2. @Documented Annotation
The @Documented
annotation instructs the Javadoc processing to include the Javadoc for this annotation within the Javadoc output for the classes that use it.
/**
* Defines a minimum age based upon a LocalDate, the current
* LocalDate, and a specified timezone.
*/
@Documented //include this in Javadoc for elements that it is defined
The following images show the impact made to Javadoc for a different @PersonHasName
annotation example.
Not only are the constraints shown for the class but the documentation for the annotations is included in the produced Javadoc.
395.3. @Target Annotation
The @Target
annotation defines locations where the constraint is legally allowed to be applied.
The following table lists examples of the different target types.
ElementType.FIELD |
define validation on a Java attribute within a class |
ElementType.METHOD |
define validation on a return type or cross-parameters of a method |
ElementType.PARAMETER |
define validation on a parameter to a method |
ElementType.TYPE_USE |
define validation within a parameterized type |
ElementType.TYPE |
define validation on an interface or class that likely inspects the state of the type |
ElementType.CONSTRUCTOR |
define validation on the resulting instance after constructor completes |
ElementType.ANNOTATION_TYPE |
allows other annotations to be defined based on this annotation (e.g., composing a new constraint from this one) |
395.4. @Retention
@Retention
is used to determine the lifetime of the annotation.
@Retention(
//SOURCE - annotation discarded by compiler
//CLASS - annotation available in class file but not loaded at runtime - default
RetentionPolicy.RUNTIME //annotation available thru reflection at runtime
)
Bean Validation constraint annotations should always use RUNTIME retention.
395.5. @Repeatable
The @Repeatable
annotation and declaration of an annotation wrapper class is required to supply annotations multiple times on the same target.
This is normally used in conjunction with different validation groups.
The @Repeatable.value
specifies an @interface that contains a value
method that returns an array of the annotation type.
The snippet below provides an example of the @Repeatable
portions of MinAge
.
@Repeatable(value= MinAge.List.class)
public @interface MinAge {
...
@Retention(RUNTIME)
@Target({ METHOD, FIELD, ANNOTATION_TYPE, PARAMETER, TYPE_USE })
public @interface List {
MinAge[] value();
}
}
The following snippet shows the annotation being applied multiple times to the same property — but assigned different groups.
@MinAge(age=18, groups = {VotingGroup.class})
@MinAge(age=65, groups = {RetiringGroup.class})
public LocalDate getConditionalDOB() {
return dob;
}
Repeatable Syntax Use Simplified
The requirement for the wrapper class is based on the Java requirement to have only one annotation type per target. Prior to Java 8, we were also required to explicitly use the wrapper construct in the code. Now it is applied behind the scenes by the compiler.
|
395.6. @Constraint
The @Constraint
annotation is used to identify the class(es) that will implement the constraint.
The annotation is not used for constraints built upon other constraints (e.g., @AdultAge
⇒ @MinAge
).
The annotation can specify multiple classes — one for each unique type the constraint can be applied to.
The following snippet shows two validation classes: one for java.util.Date
and the other for java.time.LocalDate
.
@Constraint(validatedBy = {
MinAgeLocalDateValidator.class, (1)
MinAgeDateValidator.class (2)
})
public @interface MinAge {
1 | validates annotated LocalDate values |
2 | validates annotated Date values |
@MinAge(age=18, groups = {VotingGroup.class})
@MinAge(age=65, groups = {RetiringGroup.class})
public LocalDate getConditionalDOB() { (1)
return dob;
}
@MinAge(age=16, message="found java.util.Date age({age}) violation")
public Date getDobAsDate() { (2)
return Date.from(dob.atStartOfDay().toInstant(ZoneOffset.UTC));
}
1 | constraining type LocalDate |
2 | constraining type Date |
395.6.1. Core Constraint Annotation Properties
The core constraint annotation properties include
- message
-
contains the default error message template to be returned when the constraint is violated. The contents of the message get interpolated to fill in variables and substitute entire text strings. This provides a means for more detailed messages as well as internationalization of messages.
- groups
-
identifies which group(s) to validate this constraint against
- payload
-
used to supply instance-specific metadata to the validator. A common example is to establish a severity structure to instruct the validator how to react.
The following snippet provides an example declaration of core properties for @MinAge
constraint.
public @interface MinAge {
String message() default "age below minimum({age}) age";
Class<?>[] groups() default {};
Class<? extends Payload>[] payload() default {};
...
395.7. @MinAge-specific Properties
Each individual constraint annotation can also define its own unique properties. These values will be expressed in the target code and made available to the constraining code at runtime.
The following example shows the @MinAge
constraint with two additional properties
-
age - defines how old the subject has to be in years to be valid
-
tzOffsetHours - an example property demonstrating we can have as many as we need
public @interface MinAge {
...
int age() default 0;
int tzOffsetHours() default 0;
...
395.8. Constraint Implementation Class
The annotation references zero or more constraint implementation classes — differentiated by the Java type they can process.
@Constraint(validatedBy = {
MinAgeLocalDateValidator.class,
MinAgeDateValidator.class
})
public @interface MinAge {
Each implementation class has two methods it can override.
-
initialize()
accepts the specific annotation instance that will be validated against -
isValid()
accepts the value to be validated and a context for this specific call. The minimal job of this method is to return true or false. It can optionally provide additional or custom details using the context.
395.9. Constraint Implementation Type Examples
The following snippets show the @MinAge
constraint being implemented against two different temporal types: java.time.LocalDate
and java.util.Date
.
We, of course, could have used inheritance to simplify the implementation.
public class MinAgeLocalDateValidator implements ConstraintValidator<MinAge, LocalDate> {
...
@Override
public void initialize(MinAge annotation) { ... }
@Override
public boolean isValid(LocalDate dob, ConstraintValidatorContext context) { ... }
public class MinAgeDateValidator implements ConstraintValidator<MinAge, Date> {
...
@Override
public void initialize(MinAge annotation) { ... }
@Override
public boolean isValid(Date dob, ConstraintValidatorContext context) { ... }
395.10. Constraint Initialization
The initialize() method provides a chance to validate whether the constraint definition is valid on its own.
An invalid constraint definition is reported using a RuntimeException
.
If an exception is thrown during either the initialize()
or isValid()
method, it will be
wrapped in a ValidationException
before being reported to the application.
public class MinAgeLocalDateValidator implements ConstraintValidator<MinAge, LocalDate> {
private int minAge;
private ZoneOffset zoneOffset;
@Override
public void initialize(MinAge annotation) {
if (annotation.age() < 0) {
throw new IllegalArgumentException("age constraint cannot be negative");
}
this.minAge = annotation.age();
if (annotation.tzOffsetHours() > 23 || annotation.tzOffsetHours() < -23) {
throw new IllegalArgumentException("tzOffsetHours must be between -23 and +23");
}
zoneOffset = ZoneOffset.ofHours(annotation.tzOffsetHours());
}
395.11. Constraint Validation
The isValid()
method is required to return a boolean true or false — to indicate whether the value is valid according to the constraint.
It is a best-practice to only validate non-null values and to independently use @NotNull
to enforce a required value.
The following snippet shows a simple evaluation of whether the expressed LocalDate
value is older than the minimum required age
.
@Override
public boolean isValid(LocalDate dob, ConstraintValidatorContext context) {
if (dob!=null) { //assume null is valid and use @NotNull if it should not be
final LocalDate now = LocalDate.now(zoneOffset);
final int currentAge = Period.between(dob, now).getYears();
return currentAge >= minAge;
}
return true;
}
}
Treat Null Values as Valid
Null values should be considered valid and independently constrained by @NotNull where a value is required. |
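The java.util.Date counterpart follows the same pattern. The sketch below assumes MinAgeDateValidator performs the same initialize() bookkeeping of minAge and zoneOffset, and converts to LocalDate so both validators share the same age arithmetic.
@Override
public boolean isValid(Date dob, ConstraintValidatorContext context) {
    if (dob == null) {
        return true; //null treated as valid; pair with @NotNull where required
    }
    LocalDate localDob = dob.toInstant().atOffset(zoneOffset).toLocalDate();
    return Period.between(localDob, LocalDate.now(zoneOffset)).getYears() >= minAge;
}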
395.12. Custom Violation Messages
I won’t go into any detail here, but will point out that the isValid()
method has the opportunity to either augment or replace the constraint violation messages reported.
The following example is from a cross-parameter constraint and is reporting that parameters 1 and 2 are not valid when used together in a method call.
context.buildConstraintViolationWithTemplate(context.getDefaultConstraintMessageTemplate())
.addParameterNode(1)
.addConstraintViolation()
.buildConstraintViolationWithTemplate(context.getDefaultConstraintMessageTemplate())
.addParameterNode(2)
.addConstraintViolation();
//the following removes default-generated message
//context.disableDefaultConstraintViolation(); (1)
1 | make this call to eliminate default message |
The following shows the default constraint message provided in the target code.
@ConsistentNameParameters(message = "name1 and/or name2 must be supplied") (1)
public NamedThing(String id, String name1, String name2, LocalDate dob) {
1 | @ConsistentNameParameters is a cross-parameter validation constraint validating name1 and name2 |
NamedThing.name1:name1 and/or name2 must be supplied (1)
NamedThing.name2:name1 and/or name2 must be supplied (1)
NamedThing.<cross-parameter>:name1 and/or name2 must be supplied (2)
1 | path/message generated by the custom constraint validator |
2 | default path/message generated by validation framework |
396. Cross-Parameter Validation
Custom validation is useful, but oftentimes the customization is necessary when we need to validate two or more parameters used together.
The following snippet shows an example of two parameters — name1 and name2 — with the requirement that at least one be supplied. One or the other can be null — but not both.
class NamedThing {
@ConsistentNameParameters(message = "name1 and/or name2 must be supplied") (1)
public NamedThing(String id, String name1, String name2, LocalDate dob) {
1 | cross-parameter annotation placed on the method |
396.1. Cross-Parameter Annotation
The cross-parameter constraint will likely only apply to a method or constructor, so the number of @Target
types will be more limited.
Other than that — the differences are not yet apparent.
@Documented
@Constraint(validatedBy = ConsistentNameParameters.ConsistentNameParametersValidator.class )
@Target({ElementType.METHOD, ElementType.CONSTRUCTOR})
@Retention(RetentionPolicy.RUNTIME)
public @interface ConsistentNameParameters {
396.2. @SupportedValidationTarget
Because of the ambiguity when annotating a method, we need to apply the @SupportedValidationTarget
annotation to identify whether the validation is for the parameters going into the method or the response from the method.
-
ValidationTarget.PARAMETERS - parameters to method
-
ValidationTarget.ANNOTATED_ELEMENT - returned element from method
@SupportedValidationTarget(ValidationTarget.PARAMETERS) (1)
public class ConsistentNameParametersValidator
implements ConstraintValidator<ConsistentNameParameters, Object[]> { (2)
1 | declaring that we are validating parameters going into method/ctor |
2 | must accept Object[] that will be populated with actual parameters |
@SupportedValidationTarget adds Clarity to Annotation Purpose
Think how the framework would be confused without the @SupportedValidationTarget annotation; it could not tell whether the constraint applies to the incoming parameters or to the return value. |
396.3. Method Call Correctness Validation
Funny - within the work of a validation method, it sometimes needs to validate whether it is being called correctly. Was the constraint annotation applied to a method with the wrong signature? Did — somehow — a parameter of the wrong type end up in an unexpected position?
The snippet below highlights the point that cross-parameter constraint validators are strongly tied to method signatures. They expect the parameters to be validated in a specific position in the array and to be of a specific type.
@Override
public boolean isValid(Object[] values, ConstraintValidatorContext context) { (1)
if (values.length != 4) { (2)
throw new IllegalArgumentException(
String.format("Unexpected method signature, 4 params expected, %d supplied", values.length));
}
for (int i=1; i<3; i++) { //look at positions 1 and 2 (3)
if (values[i]!=null && !(values[i] instanceof String)) {
throw new IllegalArgumentException(
String.format("Illegal method signature, param[%d], String expected, %s supplied", i, values[i].getClass()));
}
}
...
1 | method parameters supplied in Object[] |
2 | not a specific requirement for this validation — but sanity check we have what is expected |
3 | names validated must be of type String |
396.4. Constraint Validation
Once we have the constraint properly declared and call-correctness validated, the implementation will look similar to most other constraint validations. This method is required to return a true or false.
@Override
public boolean isValid(Object[] values, ConstraintValidatorContext context) { (1)
...
String name1= (String) values[1];
String name2= (String) values[2];
return (StringUtils.isNotBlank(name1) || StringUtils.isNotBlank(name2));
}
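A sketch of exercising the constructor constraint programmatically (assuming access to the NamedThing constructor shown earlier and a validator instance):
Constructor<NamedThing> ctor = NamedThing.class.getConstructor(
        String.class, String.class, String.class, LocalDate.class);
Set<ConstraintViolation<NamedThing>> violations = validator.forExecutables()
        .validateConstructorParameters(ctor, new Object[]{"id1", null, null, LocalDate.now()});
//reports NamedThing.<cross-parameter>: name1 and/or name2 must be supplied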
397. Web API Integration
397.1. Vanilla Spring/AOP Validation
From what we have learned in the previous chapters, we know that we should be able to annotate any @Component
class/interface — including a @RestController
— and have constraints validated.
I am going to refer to this as "Vanilla Spring/AOP Validation" because it is not unique to any component type.
The following snippet shows an example of the Web API @RestController
that validates parameters according to Create
and Default
Groups.
@Validated (1)
public class ContactsController {
...
@RequestMapping(path=CONTACTS_PATH,
method= RequestMethod.POST,
consumes={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE},
produces={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
@Validated(PocValidationGroups.CreatePlusDefault.class) (2)
public ResponseEntity<PersonPocDTO> createPOC(
@RequestBody
@Valid (3)
PersonPocDTO personDTO) {
...
1 | triggers validation for component |
2 | configures validator for method constraints |
3 | identifies constraint for parameter |
If we call this with an invalid personDTO
(relative to the Default or Create groups), we would expect to see validation fail and some sort of error response from the Web API.
397.2. ConstraintViolationException Not Handled
As expected — Spring will validate the constraints and throw a ConstraintViolationException
.
However, Spring Boot — out of the box — does not provide built-in exception advice for ConstraintViolationException
.
That will result in the caller receiving a 500/INTERNAL_SERVER_ERROR
status response with the default error reporting message.
It is understandable that would be the default since constraints can technically be validated and reported from all different levels of our application.
The exception could realistically be caused by a real internal server error.
However — the reported status does not always have to be generic and misleading.
> POST http://localhost:64153/api/contacts
{"id":"1","firstName":"Douglass","lastName":"Effertz","dob":[2011,6,14],"contactPoints":[{"id":null,"name":"Cell","email":"penni.kautzer@hotmail.com","phone":"(876) 285-7887 x1055","address":{"street":"69166 Angelo Landing","city":"Jaredshire","state":"IA","zip":"81764-6850"}}]}
< 500 INTERNAL_SERVER_ERROR Internal Server Error (1)
{ "url" : "http://localhost:53298/api/contacts",
"statusCode" : 500,
"statusName" : "INTERNAL_SERVER_ERROR",
"message" : "Unexpected Error",
"description" : "unexpected error executing request: javax.validation.ConstraintViolationException: createPOC.person.id: cannot be specified for create",
"timestamp" : "2021-07-01T14:58:48.777269Z" }
1 | INTERNAL_SERVER_ERROR status is misleading — cause is bad value provided by client |
The violation — at least in this case — was a bad value from the client.
The id
property cannot be assigned when attempting to create a contact.
Ideally — this would get reported as either a 400/BAD_REQUEST
or 422/UNPROCESSABLE_ENTITY
.
Both are 4xx/Client error status and will point to something the client needs to correct.
397.3. ConstraintViolationException Exception Advice
Assuming that the invalid value came from the client, we can map the unhandled ConstraintViolationException
to a 400/BAD_REQUEST
using (in this case) a global @RestControllerAdvice
.
The following snippet shows how we can take some of the code we have seen used in the JUnit tests to report validation details — and use that within an @ExceptionHandler
to extract the details and report as a 400/BAD_REQUEST
to the client.
import info.ejava.examples.common.web.BaseExceptionAdvice;
...
@RestControllerAdvice (1)
public class ExceptionAdvice extends BaseExceptionAdvice { (2)
@ExceptionHandler(ConstraintViolationException.class)
public ResponseEntity<MessageDTO> handle(ConstraintViolationException ex) {
String description = ex.getConstraintViolations().stream()
.map(v->v.getPropertyPath().toString() + ":" + v.getMessage())
.collect(Collectors.joining("\n"));
HttpStatus status = HttpStatus.BAD_REQUEST; (3)
return buildResponse(status, "Validation Error", description, (Instant)null);
}
1 | controller advice being applied globally to all controllers in the application context |
2 | extending a class of exception handlers and helper methods |
3 | hard-wiring the exception to a 400/BAD_REQUEST status |
397.4. ConstraintViolationException Mapping Result
The following snippet shows the Web API response to the client expressed as a 400/BAD_REQUEST
.
{ "url" : "http://localhost:53408/api/contacts",
"statusCode" : 400,
"statusName" : "BAD_REQUEST",
"message" : "Validation Error",
"description" : "createPOC.person.id: cannot be specified for create",
"timestamp" : "2021-07-01T15:10:59.037162Z" }
Converting from a 500/INTERNAL_SERVER_ERROR
to a 400/BAD_REQUEST
is the minimum of what we wanted (at least it is a Client Error status), but we can try to do better.
We understood what was requested — but could not process the payload as provided.
397.5. Controller Constraint Validation
To cause the violation to be mapped to a 422/UNPROCESSABLE_ENTITY
to better indicate the problem, we can activate validation within the controller framework itself versus the vanilla Spring/AOP validation.
The following snippet shows an example of the @RestController
identifying validation and specific validation groups as part of the Web API framework.
The @Validated
annotation is now being used on the Web API parameters.
@RequestMapping(path=CONTACTS_PATH,
method= RequestMethod.POST,
consumes={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE},
produces={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
//@Validated(PocValidationGroups.CreatePlusDefault.class) -- no longer needed (1)
public ResponseEntity<PersonPocDTO> createPOC(
@RequestBody
//@Valid -- replaced by @Validated (1)
@Validated(PocValidationGroups.CreatePlusDefault.class) (2)
PersonPocDTO personDTO) {
1 | vanilla Spring/AOP validation has been disabled |
2 | Web API-specific parameter validation has been enabled |
397.6. MethodArgumentNotValidException
Spring MVC will independently validate the @RequestBody
, @RequestParam
, and @PathVariable
constraints according to internal rules.
Spring will throw an org.springframework.web.bind.MethodArgumentNotValidException
exception when encountering a violation with the request body.
That exception is mapped — by default — to return a very terse 400/BAD_REQUEST
response.
The snippet below shows an example response payload for the default MethodArgumentNotValidException
mapping.
< 400 BAD_REQUEST Bad Request
{"timestamp":"2021-07-01T15:24:44.464+00:00",
"status":400,
"error":"Bad Request",
"message":"",
"path":"/api/contacts"}
By default — we may want to be terse to avoid too much information leakage. However, in this case, let’s improve this.
397.7. MethodArgumentNotValidException Custom Mapping
Of course, we can change the behavior if desired using a custom exception handler.
The following snippet shows an example custom exception handler mapping MethodArgumentNotValidException
to a 422/UNPROCESSABLE_ENTITY
.
@RestControllerAdvice
public class ExceptionAdvice extends BaseExceptionAdvice {
@ExceptionHandler(ConstraintViolationException.class)
public ResponseEntity<MessageDTO> handle(ConstraintViolationException ex) { ... }
@ExceptionHandler(MethodArgumentNotValidException.class)
public ResponseEntity<MessageDTO> handle(MethodArgumentNotValidException ex) { (1)
List<String> fieldMsgs = ex.getFieldErrors().stream() (2)
.map(e -> e.getObjectName()+"."+e.getField()+": "+e.getDefaultMessage())
.collect(Collectors.toList()); //Stream.toList() requires Java 16+; course targets Java 11
List<String> globalMsgs = ex.getGlobalErrors().stream() (3)
.map(e -> e.getObjectName() +": "+ e.getDefaultMessage())
.collect(Collectors.toList());
String description = Stream.concat(fieldMsgs.stream(), globalMsgs.stream())
.collect(Collectors.joining("\n"));
return buildResponse(HttpStatus.UNPROCESSABLE_ENTITY, "Validation Error",
description, (Instant)null);
}
1 | Spring MVC throws MethodArgumentNotValidException for @RequestBody violations |
2 | reports fields of objects in error |
3 | reports overall objects (e.g., cross-parameter violations) in error |
397.7.1. MethodArgumentNotValidException Custom Mapping Response
This results in the client receiving an HTTP status indicating the request was understood but the payload provided was invalid. The description is as terse or verbose as we want it to be.
{ "url" : "http://localhost:53818/api/contacts",
"statusCode" : 422,
"statusName" : "UNPROCESSABLE_ENTITY",
"message" : "Validation Error",
"description" : "personPocDTO.id: cannot be specified for create",
"timestamp" : "2021-07-01T15:38:48.045038Z" }
Can Also Supply Client Value if Permitted
The exception handler has access to the invalid value if security policy allows information like that to be in the response. Note that error messages tend to be placed into logs and logs can end up getting handled at a generic level. For example, you would not want an invalid partial but mostly correct SSN to be part of an error log. |
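If policy does permit it, FieldError exposes the offending value. A sketch extending the handler's field messages above:
List<String> fieldMsgs = ex.getFieldErrors().stream()
        .map(e -> e.getObjectName() + "." + e.getField()
                + ": " + e.getDefaultMessage()
                + ", rejected value=" + e.getRejectedValue()) //mask sensitive values before reporting
        .collect(Collectors.toList());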
397.8. @PathVariable Validation
Note that the Web API maps the @RequestBody
constraint violations independently from the other parameter types.
The following snippet shows an example of validation constraints applied to @PathVariable
.
These are physically in the URI.
@RequestMapping(path= CONTACT_PATH,
method=RequestMethod.GET,
produces={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
public ResponseEntity<PersonPocDTO> getPOC(
@PathVariable(name="id")
@Pattern(regexp = "[0-9]+", message = "must be a number") (1)
String id) {
1 | validation here is thru vanilla Spring/AOP validation |
397.9. @PathVariable Validation Result
The Web API (using vanilla Spring/AOP validation here) throws a ConstraintViolationException
for @PathVariable
and @RequestParam
properties.
We can leverage the custom exception handler we already have in place to do a decent job reporting status.
The following snippet shows an example response that is being mapped to a 400/BAD_REQUEST
using our custom exception handler for ConstraintViolationException
.
400/BAD_REQUEST
seems appropriate because the id
path parameter is invalid garbage (1...34
) in this case.
> HTTP GET http://localhost:53918/api/contacts/1...34, headers={masked}
< BAD_REQUEST/400
{ "url" : "http://localhost:53918/api/contacts/1...34",
"statusCode" : 400,
"statusName" : "BAD_REQUEST",
"message" : "Validation Error",
"description" : "getPOC.id: must be a number",
"timestamp" : "2021-07-01T15:51:34.724036Z" }
Remember — if we did not have that custom exception handler in place for ConstraintViolationException
, the HTTP status would have been a 500/INTERNAL_SERVER_ERROR
.
< 500 INTERNAL_SERVER_ERROR Internal Server Error
{"timestamp":"2021-07-01T19:21:31.345+00:00","status":500,"error":"Internal Server Error","message":"","path":"/api/contacts/1...34"}
397.10. @RequestParam Validation
@RequestParam
validation follows the same pattern as @PathVariable
and gets reported using a ConstraintViolationException
.
@RequestMapping(path= EXAMPLE_CONTACTS_PATH,
method=RequestMethod.POST,
consumes={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE},
produces={MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
public ResponseEntity<PersonsPageDTO> findPocsByExample(
@RequestParam(value = "pageNumber", defaultValue = "0", required = false)
@PositiveOrZero
Integer pageNumber,
@RequestParam(value = "pageSize", required = false)
@Positive
Integer pageSize,
@RequestParam(value = "sort", required = false) String sortString,
@RequestBody PersonPocDTO probe) {
397.11. @RequestParam Validation Violation Response
The following snippet shows an example response for an invalid set of query parameters.
> POST http://localhost:53996/api/contacts/example?pageNumber=-1&pageSize=0
{ ... }
> BAD_REQUEST/400
{ "url" : "http://localhost:53996/api/contacts/example?pageNumber=-1&pageSize=0",(1) (2)
"statusCode" : 400,
"statusName" : "BAD_REQUEST",
"message" : "Validation Error",
"description" : "findPocsByExample.pageNumber: must be greater than or equal to 0\nfindPocsByExample.pageSize: must be greater than 0",
"timestamp" : "2021-07-01T15:55:44.089734Z" }
1 | pageNumber has an invalid negative value |
2 | pageSize has an invalid non-positive value |
397.12. Non-Client Errors
One thing you may notice with the previous examples is that every constraint violation was blamed on the client — whether it was bad server code calling internally or not.
As an example, let's have the API require that value
be non-negative.
A successful validation of that constraint will result in a service method call.
@RequestMapping(path = POSITIVE_OR_ZERO_PATH,
method=RequestMethod.GET,
produces = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
public ResponseEntity<?> positive(
@PositiveOrZero (1)
@RequestParam(name = "value") int value) {
PersonPocDTO resultDTO = contactsService.positiveOrZero(value); (2)
1 | @RequestParam validated |
2 | value from valid request passed to service method |
397.13. Service Method Error
The following snippet shows that the service call makes an obvious error by passing the value
to an internal component requiring the value to not be positive.
public class PocServiceImpl implements PocService {
...
public PersonPocDTO positiveOrZero(int value) {
//obviously an error!!
internalComponent.negativeOrZero(value);
...
The internal component leverages the Bean Validation by placing a @NegativeOrZero
constraint on the value
.
This is obviously going to fail whenever the value is positive.
@Component
@Validated
public class InternalComponent {
public void negativeOrZero(@NegativeOrZero int value) {
397.14. Violation Incorrectly Reported as Client Error
The snippet below shows an example response of the internal error. It is being blamed on the client — when it was actually an internal server error.
> GET http://localhost:54298/api/contacts/positiveOrZero?value=1
< 400 BAD_REQUEST Bad Request
{ "url":"http://localhost:54298/api/contacts/positiveOrZero?value=1",
"statusCode":400,
"statusName":"BAD_REQUEST",
"message":"Validation Error",
"description":"negativeOrZero.value: must be less than or equal to 0",
"timestamp":"2021-07-01T16:23:27.666154Z"}
397.15. Checking Violation Source
One thing we can do to determine the proper HTTP response status is to inspect the source information of the violation.
The following snippet shows an example of inspecting whether the violation was reported by a class annotated with @RestController
.
If from the API, then report the 400/BAD_REQUEST
as usual.
If not, report it as a 500/INTERNAL_SERVER_ERROR
.
If you remember — that was the original default behavior.
@ExceptionHandler(ConstraintViolationException.class)
public ResponseEntity<MessageDTO> handle(ConstraintViolationException ex) {
String description = ...
boolean isFromAPI = ex.getConstraintViolations().stream() (1)
.map(v -> v.getRootBean().getClass().getAnnotation(RestController.class))
.filter(a->a!=null)
.findFirst()
.orElse(null)!=null;
HttpStatus status = isFromAPI ?
HttpStatus.BAD_REQUEST : HttpStatus.INTERNAL_SERVER_ERROR;
return buildResponse(status, "Validation Error", description, (Instant)null);
}
1 | isFromAPI set to true if any of the violations came from component annotated with @RestController |
397.16. Internal Server Error Correctly Reported
The following snippet shows the response to the client when our exception handler detects that none of the violations it is handling came from a class annotated with @RestController
.
{ "url" : "http://localhost:54434/api/contacts/positiveOrZero?value=1",
"statusCode" : 500,
"statusName" : "INTERNAL_SERVER_ERROR",
"message" : "Validation Error",
"description" : "negativeOrZero.value: must be less than or equal to 0",
"timestamp" : "2021-07-01T16:45:50.235724Z" }
Any Source of Constraint Violation May be used to Impact Behavior
There is no magic to using the @RestController annotation here; any identifiable source of a constraint violation may be used to impact the handler's behavior. |
397.17. Service-detected Client Errors
Assuming we do a thorough job validating all client inputs at the @RestController
level, we might be done.
However, what about the case where the client validation is pushed down to the @Service
components?
We would have to adjust our violation source inspection.
The following snippet shows an example of a service validating client requests using the same constraints as before — except this is in a lower-level component.
public interface PocService {
@NotNull
@Validated(PocValidationGroups.CreatePlusDefault.class)
public PersonPocDTO createPOC(
@NotNull
@Valid PersonPocDTO personDTO);
Without any changes, we get violations reported as 400/BAD_REQUEST
status — which as I stated in the beginning was "OK".
< 400 BAD_REQUEST Bad Request
{ "url" : "http://localhost:55168/api/contacts",
"statusCode" : 400,
"statusName" : "BAD_REQUEST",
"message" : "Validation Error",
"description" : "createPOC.person.id: cannot be specified for create",
"timestamp" : "2021-07-01T17:40:12.221497Z" }
I won’t try to improve the HTTP status using source annotations on the validating class. I have already shown how to do that. Let's try another technique.
397.18. Payload
One other option we have is to leverage the payload metadata in each annotation.
Payload
classes are interfaces extending javax.validation.Payload
that identify certain characteristics of the constraint.
public @interface Xxx {
String message() default "...";
Class<?>[] groups() default { };
Class<? extends Payload>[] payload() default { }; (1)
1 | Annotations can carry extra metadata in the payload property |
The snippet below shows an example of a Payload subtype that expresses the violation should be reported as a 500/INTERNAL_SERVER_ERROR
.
public interface InternalError extends Payload {}
This payload information can be placed in constraints that are known to be validated by internal components.
@Component
@Validated
public class InternalComponent {
public void negativeOrZero(@NegativeOrZero(payload = InternalError.class) int value) {
397.19. Exception Handler Checking Payloads
The snippet below shows our generic, global advice factoring in whether the violation came from an annotation with an InternalError
in the payload.
@ExceptionHandler(ConstraintViolationException.class)
public ResponseEntity<MessageDTO> handle(ConstraintViolationException ex) {
String description = ...;
boolean isFromAPI = ...;
boolean isInternalError = isFromAPI ? false : (1)
ex.getConstraintViolations().stream()
.map(v -> v.getConstraintDescriptor().getPayload())
.filter(p-> p.contains(InternalError.class))
.findFirst()
.orElse(null)!=null;
HttpStatus status = isFromAPI || !isInternalError ?
HttpStatus.BAD_REQUEST : HttpStatus.INTERNAL_SERVER_ERROR;
return buildResponse(status, "Validation Error", description, (Instant)null);
}
1 | isInternalError set to true if any violations contain the InternalError payload |
397.20. Internal Violation Exception Handler Results
The following snippet shows an example of a constraint violation where none of the violations were assigned a payload with InternalError
.
The status is returned as 400/BAD_REQUEST
.
> POST http://localhost:55288/api/contacts
< 400/BAD_REQUEST
"url" : "http://localhost:55288/api/contacts",
"statusCode" : 400,
"statusName" : "BAD_REQUEST",
"message" : "Validation Error",
"description" : "createPOC.person.id: cannot be specified for create",
"timestamp" : "2021-07-01T17:56:23.080884Z"
}
The following snippet shows an example of a constraint violation where at least one of the violations was assigned a payload with InternalError
.
The client may not be able to make heads-or-tails out of the error message, but at least they would know it is something on the server-side to be corrected.
> GET http://localhost:57547/api/contacts/positiveOrZero?value=1
< INTERNAL_SERVER_ERROR/500
{ "url" : "http://localhost:57547/api/contacts/positiveOrZero?value=1",
"statusCode" : 500,
"statusName" : "INTERNAL_SERVER_ERROR",
"message" : "Validation Error",
"description" : "negativeOrZero.value: must be less than or equal to 0",
"timestamp" : "2021-07-01T20:25:05.188126Z" }
398. JPA Integration
Bean Validation is integrated into the JPA standard.
This is primarily used to validate entities when they are created, updated, or deleted.
Although not part of the standard, it is also used by some providers to customize generated database schema with additional RDBMS constraints (e.g., @NotNull
, @Size
).
By default, the JPA provider will implement validation of the Default
group for all @Entity
classes during creation or update.
The following is a list of JPA properties that can be used to impact the behavior. They all need to be prefixed with spring.jpa.properties.
when using Spring Boot properties to set the value.
javax.persistence.validation.mode |
ability to control validation at a high level (auto, callback, none) |
javax.persistence.validation.group.pre-persist |
identify group(s) validated prior to inserting new row |
javax.persistence.validation.group.pre-update |
identify group(s) validated prior to updating existing row |
javax.persistence.validation.group.pre-remove |
identify group(s) validated prior to removing existing row |
|
399. Mongo Integration
A basic Bean Validation implementation is integrated into Spring Data Mongo.
It leverages event-specific callbacks from
AbstractMongoEventListener
, which is integrated into the Spring
ApplicationListener
framework.
There are no configuration settings and, after you see the details, you will quickly realize that it is meant to handle the most common case (validate the Default
group on save()
) and leave us to implement the corner cases.
The following snippet shows an example of activating the default MongoRepository validation.
import org.springframework.data.mongodb.core.mapping.event.ValidatingMongoEventListener;
...
@Configuration
public class MyMongoConfiguration {
@Bean
public ValidatingMongoEventListener mongoValidator(Validator validator) {
return new ValidatingMongoEventListener(validator);
}
}
399.1. Validating Saves
To demonstrate validation within the data tier, let's assume that our document class has a constraint for the dob
to be supplied.
@Document(collection = "pocs")
public class PersonPOC {
...
@NotNull
private LocalDate dob;
...
}
When we attempt to save a PersonPOC in the repository without a dob
, the following example shows that the Java source object is validated, a violation is detected, and a ConstraintViolationException
is thrown.
//given
PersonPOC noDobPOC = mapper.map(pocDTOFactory.make().withDob(null));
//when
assertThatThrownBy(() -> contactsRepository.save(noDobPOC))
.isInstanceOf(ConstraintViolationException.class)
.hasMessageContaining("dob: must not be null");
There is nothing more to it than that until we look into the implementation of ValidatingMongoEventListener
.
399.2. ValidatingMongoEventListener
ValidatingMongoEventListener
extends AbstractMongoEventListener
, has a Validator
from injection, and overrides a single event callback called onBeforeSave()
.
package org.springframework.data.mongodb.core.mapping.event;
...
public class ValidatingMongoEventListener extends AbstractMongoEventListener<Object> {
...
private final Validator validator;
@Override
public void onBeforeSave(BeforeSaveEvent<Object> event) {
...
}
}
It does not take much imagination to guess how the rest of this works. I have removed the debug code from the method and provided the remaining details here.
@Override
public void onBeforeSave(BeforeSaveEvent<Object> event) {
Set violations = validator.validate(event.getSource());
if (!violations.isEmpty()) {
throw new ConstraintViolationException(violations);
}
}
The onBeforeSave() callback is invoked after the source Java object has been converted to a form that is ready for storage.
399.3. Other AbstractMongoEventListener Events
There are many reasons — beyond validation (e.g., sub-document ID generation) — we can take advantage of the
AbstractMongoEventListener
callbacks, so it will be good to provide an overview of them now.
-
There are three core events: Save, Load, and Delete
-
Several possible stages to each core event
-
before action performed (e.g., delete)
-
before converting between Java object and Document (e.g., save and load)
-
after converting between Java object and Document (e.g., save and load)
-
after action is complete (e.g., save)
The following table lists the specific events.
onApplicationEvent(MongoMappingEvent<?> event) |
general purpose event handler |
onBeforeConvert(BeforeConvertEvent<E> event) |
callback before Java object converted to Document |
onBeforeSave(BeforeSaveEvent<E> event) |
callback after Java object converted to Document, before save |
onAfterSave(AfterSaveEvent<E> event) |
callback after Document saved to DB |
onAfterLoad(AfterLoadEvent<E> event) |
callback after Document loaded from DB |
onAfterConvert(AfterConvertEvent<E> event) |
callback after Document converted to Java object |
onBeforeDelete(BeforeDeleteEvent<E> event) |
callback before document deleted from DB |
onAfterDelete(AfterDeleteEvent<E> event) |
callback after document deleted from DB |
399.4. MongoMappingEvent
The MongoMappingEvent
itself has three main items.
-
Collection Name — name of the target collection
-
Source — the source or target Java object
-
Document — the source or target BSON data type stored to the database
Our validation would always be against the source
, so we just need a callback that provides us with a read-only value to validate.
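As an example of one of the corner cases mentioned earlier, the following is a minimal sketch of a custom listener that validates additional, caller-supplied groups on save. The class name and group wiring are assumptions for illustration and not part of the lecture source.
import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.validation.Validator;
import org.springframework.data.mongodb.core.mapping.event.AbstractMongoEventListener;
import org.springframework.data.mongodb.core.mapping.event.BeforeSaveEvent;
public class GroupValidatingMongoEventListener extends AbstractMongoEventListener<Object> {
    private final Validator validator;
    private final Class<?>[] groups; //hypothetical use case-specific groups
    public GroupValidatingMongoEventListener(Validator validator, Class<?>... groups) {
        this.validator = validator;
        this.groups = groups;
    }
    @Override
    public void onBeforeSave(BeforeSaveEvent<Object> event) {
        //validate the supplied groups against the source Java object
        Set<ConstraintViolation<Object>> violations =
                validator.validate(event.getSource(), groups);
        if (!violations.isEmpty()) {
            throw new ConstraintViolationException(violations);
        }
    }
}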
400. Patterns / Anti-Patterns
Every piece of software has an interface with some sort of pre-conditions and post-conditions that have some sort of formal or informal constraints. Constraint validation — whether using custom code or the Bean Validation framework — is a decision to be made when forming the layers of a software architecture. The following patterns and anti-patterns list a few concerns to address. The original outline and content provided below are based on Tom Hombergs' Bean Validation Anti-Patterns article.
400.1. Data Tier Validation
The Data tier has long been the keeper of data constraints — especially with RDBMS schema.
-
Should the constraint validations discussed be implemented at that tier?
-
Can validation wait all the way to the point where it is being stored?
-
Should service and other higher levels of code be working with data that has not been validated?
400.1.1. Data Tier Validation Safety Checks
Hombergs' suggestion was to use data tier validation as a safety check, but not as the only layer. [77]
That, of course, makes a lot of sense since the data tier may not need to know what a valid e-mail looks like or (to go a bit further) what type of e-mail addresses we accept. However, the data tier will want to sanity check that required fields exist and may want to go as far as validating format if query implementations require the data to be in a specific form.
400.2. Use case-specific Validation
Re-use is commonly a goal in software development. However, as we saw with validation groups — some data types have use case-specific constraints.
Figure 176. Re-usable Data Class with Use case-Specific Semantics
As more use case-specific constraints pile up on re-usable classes, they can get very cluttered and violate the Single-responsibility principle.
400.2.1. Separate Syntactic from Semantic Validation
Hombergs proposes we
-
use Bean Validation for syntactical validation for re-usable data classes
-
implement query methods in the data classes for semantic state and perform checks against that specific state within the use case-specific code. [77]
One way of implementing use case-specific query methods that leverage Bean Validation constraints and a re-used data type would be to create use case-specific decorators or wrappers.
Lombok's experimental @Delegate
code generation may be of assistance here.
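A minimal sketch of such a wrapper follows. PersonDTO and its getDob() accessor are assumptions for illustration; only the query method is use case-specific.
import java.time.LocalDate;
import lombok.RequiredArgsConstructor;
import lombok.experimental.Delegate;
@RequiredArgsConstructor
public class RegistrationCandidate {
    @Delegate //exposes the re-usable, syntactically validated data type
    private final PersonDTO person; //hypothetical re-usable DTO
    //semantic, use case-specific query method checked by registration code
    public boolean isAdult() {
        LocalDate dob = person.getDob();
        return dob != null && !dob.plusYears(18).isAfter(LocalDate.now());
    }
}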
400.3. Anti: Validation Everywhere
It is likely for us to want to validate at the client interface (Web API) since these are very external inputs. It is also likely for us to want to validate at the service level because our service could be injected into multiple client interfaces. It is then likely that internal components see how easy it is to add validation triggers and add to the mix. At the end of the line — the persistence layer adds a final check.
In some cases, the same information can get validated several times. We have already shown, in the Bean Validation details earlier in this topic, how challenging it can be to determine whether a violation represents a client or an internal issue.
400.3.1. Establish Validation Architecture
Hombergs recommends having a clear validation strategy versus ad-hoc validation everywhere. [77]
I agree with that strategy and like to have a clear dividing line of "once it reaches this point, the data is valid". This is where I like to establish service entry points (validated) and internal components (sanity checked). Entry points check everything about the data. Internal components trust that the data given is valid and only need to verify whether a programming error produced a null or some other common illegal value.
I also believe separating data types into external ("DTOs") and internal ("BOs") helps thin down the concerns. DTO classes would commonly be thorough and allow clients to know exactly what constraints exist. BO classes, used by the business and persistence logic, only accept valid DTOs and should be done with detailed validation by the time the DTOs are mapped to BO classes.
400.3.2. Separating Persistence Concerns/Constraints
Hombergs went on to discuss a third tier of data types — persistence tier data types — separate from BOs as a way of separating persistence concerns away from BO data types. [78] This is part of implementing a Hexagonal Software Architecture where the core application has no dependency on any implementation details of the other tiers. This is more of a plain software architecture topic than specific to validation — but it does highlight how there can be different contexts for the same conceptual type of data processed.
401. Summary
In this module we learned:
-
to add Bean Validation dependencies to the project
-
to add declarative pre-conditions and post-conditions to components using the Bean Validation API
-
to define declarative validation constraints
-
to configure a
ValidatorFactory
and obtain aValidator
-
to programmatically validate an object
-
to programmatically validate parameters to and response from a method call
-
to enable Spring/AOP validation for components
-
to implement custom validation constraints
-
to implement a cross-parameter validation constraint
-
to configure Web API constraint violation responses
-
to configure Web API parameter validation
-
to configure JPA validation
-
to configure Spring Data Mongo Validation
-
to identify some patterns/anti-patterns for using validation
Integration Unit Testing
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
402. Introduction
In the testing lectures I made a specific point to separate the testing concepts of
-
focusing on a single class with stubs and mocks
-
integrating multiple classes through a Spring context
-
having to manage separate processes using the Maven integration test phases and plugins
Having only a single class under test meets most definitions of "unit testing". Having to manage multiple processes satisfies most definitions of "integration testing". Having to integrate multiple classes within a single JVM using a single JUnit test is a bit of a middle ground because it takes less heroics (thanks to modern test frameworks) and can be moderately fast.
I have termed the middle ground "integration unit testing" in an earlier lecture and labeled them with the suffix "NTest" to signify that they should run within the surefire unit test Maven phase and will take more time than a mocked unit test. In this lecture, I am going to expand the scope of "integration unit test" to include simulated resources like databases and JMS servers. This will allow us to write tests that are moderately efficient but more fully test layers of classes and their underlying resources within the context of a thread that is more representative of an end-to-end use case.
Given an application like the following with databases and a JMS server…
Figure 178. Votes Application
402.1. Goals
You will learn:
-
how to integrate MongoDB into a Spring Boot application
-
how to integrate a Relational Database into a Spring Boot application
-
how to integrate a JMS server into a Spring Boot application
-
how to implement an integration unit test using embedded resources
402.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
embed a simulated MongoDB within a JUnit test using Flapdoodle
-
embed an in-memory JMS server within a JUnit test using ActiveMQ
-
embed a relational database within a JUnit test using H2
-
verify an end-to-end test case using a unit integration test
403. Votes and Elections Service
For this example, I have created two moderately parallel services — Votes and Elections — that follow a straightforward controller, service, repository, and database layering.
403.1. Main Application Flows
Figure 179. VotesService
|
The Votes Service accepts a vote (VoteDTO) from a caller and stores that directly in a database (MongoDB). |
Figure 180. ElectionsService
|
The Elections service transforms received votes (VoteDTO) into database entity instances (VoteBO) and stores them in a separate database (Postgres) using Java Persistence API (JPA). The service uses that persisted information to provide election results from aggregated queries of the database. |
The fact that the applications use MongoDB, Postgres Relational DB, and JPA will only be a very small part of the lecture material. However, it will serve as a basic template of how to integrate these resources for much more complicated unit integration tests and deployment scenarios.
403.2. Service Event Integration
The two services are integrated through a set of Aspects, ApplicationEvent, and JMS logic that allow the two services to be decoupled from one another.
Figure 181. Votes Service
|
The Votes Service events layer defines a pointcut on the successful return of the VoterService.castVote() method and advice that publishes the returned vote over JMS. |
Figure 182. Elections Service
|
The Elections Service eventing layer subscribes to the votes JMS topic and relays each received vote to the ElectionsService as an internal application event. |
The fact that the applications use JMS will only be a small part of the lecture material. However, it too will serve as a basic template of how to integrate another very pertinent resource for distributed systems.
404. Physical Architecture
I described five (5) functional services in the previous section: Votes, Elections, MongoDB, Postgres, and ActiveMQ (for JMS).
Figure 183. Physical Architecture
|
I will eventually map them to four (4) physical nodes: api, mongo, postgres, and activemq. Both Votes and Elections have been co-located in the same Spring Boot application because the Internet deployment platform may not have a JMS server available for our use. |
404.1. Integration Unit Test Physical Architecture
For integration unit tests, we will use a single JUnit JVM with the Spring Boot Services and the three resources embedded using the following options:
Figure 184. Integration Unit Testing Physical Architecture
-
Flapdoodle embedded MongoDB used to simulate the real MongoDB instance
-
H2 Database, the in-memory RDBMS we used for user management during the security topics
-
ActiveMQ (Classic) used in embedded mode
405. Mongo Integration
In this section we will go through the steps of adding the necessary MongoDB dependencies to implement a MongoDB repository and simulate that with an in-memory DB during unit integration testing.
405.1. MongoDB Maven Dependencies
As with most starting points with Spring Boot — we can bootstrap our application
to implement a MongoDB repository by forming a dependency on spring-boot-starter-data-mongodb
.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
That brings in a few driver dependencies that will also activate the MongoAutoConfiguration
to establish a default MongoClient
from properties.
[INFO] +- org.springframework.boot:spring-boot-starter-data-mongodb:jar:2.3.2.RELEASE:compile
[INFO] |  +- org.mongodb:mongodb-driver-sync:jar:4.0.5:compile
[INFO] |  |  +- org.mongodb:bson:jar:4.0.5:compile
[INFO] |  |  \- org.mongodb:mongodb-driver-core:jar:4.0.5:compile
[INFO] |  \- org.springframework.data:spring-data-mongodb:jar:3.0.2.RELEASE:compile
405.2. Test MongoDB Maven Dependency
For testing, we add a dependency on de.flapdoodle.embed.mongo
. By setting scope
to test
, we avoid deploying that with our application outside of our module testing.
<dependency>
<groupId>de.flapdoodle.embed</groupId>
<artifactId>de.flapdoodle.embed.mongo</artifactId>
<scope>test</scope>
</dependency>
The flapdoodle
dependency brings in the following artifacts.
[INFO] +- de.flapdoodle.embed:de.flapdoodle.embed.mongo:jar:2.2.0:test
[INFO] |  \- de.flapdoodle.embed:de.flapdoodle.embed.process:jar:2.1.2:test
[INFO] |     +- org.apache.commons:commons-lang3:jar:3.10:compile
[INFO] |     +- net.java.dev.jna:jna:jar:4.0.0:test
[INFO] |     +- net.java.dev.jna:jna-platform:jar:4.0.0:test
[INFO] |     \- org.apache.commons:commons-compress:jar:1.18:test
405.3. MongoDB Properties
The following lists a core set of MongoDB properties we will use no matter
whether we are in test or production. If we implement the most common
scenario of a single database — things get pretty easy to work
through properties. Otherwise we would have to provide our own MongoClient
@Bean
factories to target specific instances.
#mongo
spring.data.mongodb.authentication-database=admin (1)
spring.data.mongodb.database=votes_db (2)
1 | identifies the mongo database with user credentials |
2 | identifies the mongo database for our document collections |
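For reference, such a factory might look like the following minimal sketch. The connection URI is an assumption for illustration only.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
...
@Bean
public MongoClient votesMongoClient() {
    //hypothetical URI targeting a specific instance
    return MongoClients.create("mongodb://admin:secret@localhost:27017/votes_db?authSource=admin");
}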
405.4. MongoDB Repository
Spring Data provides a very nice repository layer that can handle basic
CRUD and query capabilities with a simple interface definition that
extends MongoRepository<T,ID>
. The following shows an example declaration
for a VoteDTO
POJO class that uses a String for a primary key value.
import info.ejava.examples.svc.docker.votes.dto.VoteDTO;
import org.springframework.data.mongodb.repository.MongoRepository;
public interface VoterRepository extends MongoRepository<VoteDTO, String> {
}
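Spring Data can also derive query implementations from method names. The following finder is a hypothetical addition to the interface, not part of the example source.
import java.util.List;
public interface VoterRepository extends MongoRepository<VoteDTO, String> {
    //hypothetical derived query -- implemented by Spring Data from the method name
    List<VoteDTO> findByChoice(String choice);
}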
405.5. VoteDTO MongoDB Document Class
The following shows the MongoDB document class that doubles as a Data Transfer Object (DTO) in the controller and JMS messages.
import lombok.*;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import java.time.Instant;
@Document("votes") (1)
@Data
@NoArgsConstructor
@AllArgsConstructor
@Builder
public class VoteDTO {
@Id
private String id; (2)
private Instant date;
private String source;
private String choice;
}
1 | MongoDB Document class mapped to the votes collection |
2 | VoteDTO.id property mapped to _id field of MongoDB collection |
{
"_id":{"$oid":"5f3204056ac44446600b57ff"},
"date":{"$date":{"$numberLong":"1597113349837"}},
"source":"jim",
"choice":"quisp",
"_class":"info.ejava.examples.svc.docker.votes.dto.VoteDTO"
}
405.6. Sample MongoDB/VoterRepository Calls
The following snippet shows the injection of the repository into the service class and two sample calls. At this point in time, it is only important to notice that our simple repository definition gives us the ability to insert and count documents (and more!!!).
@Service
@RequiredArgsConstructor (1)
public class VoterServiceImpl implements VoterService {
private final VoterRepository voterRepository; (1)
@Override
public VoteDTO castVote(VoteDTO newVote) {
newVote.setId(null);
newVote.setDate(Instant.now());
return voterRepository.insert(newVote); (2)
}
@Override
public long getTotalVotes() {
return voterRepository.count(); (3)
}
1 | using constructor injection to initialize service with repository |
2 | repository inherits ability to insert new documents |
3 | repository inherits ability to get count of documents |
This service is then injected into the controller and accessed through the /api/votes
URI.
At this point we are ready to start looking at the details of how to report the new votes
to the ElectionsService
.
406. ActiveMQ Integration
In this section we will go through the steps of adding the necessary ActiveMQ dependencies to implement a JMS publish/subscribe and simulate that with an in-memory JMS server during unit integration testing.
406.1. ActiveMQ Maven Dependencies
The following lists the dependencies we need to implement the Aspects and JMS capability within the application.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-activemq</artifactId>
</dependency>
The ActiveMQ starter brings in the following dependencies and activates the
ActiveMQAutoConfiguration
class that will set up a JMS connection based on
properties.
[INFO] +- org.springframework.boot:spring-boot-starter-activemq:jar:2.3.2.RELEASE:compile
[INFO] | +- org.springframework:spring-jms:jar:5.2.8.RELEASE:compile
[INFO] | | +- org.springframework:spring-messaging:jar:5.2.8.RELEASE:compile
[INFO] | | \- org.springframework:spring-tx:jar:5.2.8.RELEASE:compile
406.2. ActiveMQ Integration Unit Test Properties
The following lists the core property required in all environments. Without
the pub-sub-domain
property defined, Spring JMS defaults to a point-to-point queue model — which will
not allow our integration tests to observe the traffic flow if we care to.
#activemq
spring.jms.pub-sub-domain=true (1)
1 | tells ActiveMQ to use topics versus queues |
The following lists the properties that are unique to the local integration unit tests.
#activemq
spring.activemq.in-memory=true (1)
spring.activemq.pool.enabled=false
1 | activemq will establish in-memory destinations |
406.3. Service Joinpoint Advice
I used Aspects to keep the Votes Service flow clean of external integration and performed
that by enabling Aspects using the @EnableAspectJAutoProxy
annotation and defining
the following @Aspect
class, joinpoint, and advice.
@Aspect
@Component
@RequiredArgsConstructor
public class VoterAspects {
private final VoterJMS votePublisher;
@Pointcut("within(info.ejava.examples.svc.docker.votes.services.VoterService+)")
public void voterService(){} (1)
@Pointcut("execution(*..VoteDTO castVote(..))")
public void castVote(){} (2)
@AfterReturning(value = "voterService() && castVote()", returning = "vote")
public void afterVoteCast(VoteDTO vote) { (3)
try {
votePublisher.publish(vote);
} catch (IOException ex) {
...
}
}
}
1 | matches all calls implementing the VoterService interface |
2 | matches all calls called castVote that return a VoteDTO |
3 | injects returned VoteDTO from matching calls and calls publish to report event |
406.4. JMS Publish
The publishing of the new vote event using JMS is done within the VoterJMS
class using an injected jmsTemplate
and ObjectMapper
. Essentially, the
method marshals the VoteDTO
object into a JSON text string and publishes that
in a TextMessage
to the "votes" topic.
@Component
@RequiredArgsConstructor
public class VoterJMS {
private final JmsTemplate jmsTemplate; (1)
private final ObjectMapper mapper; (2)
...
public void publish(VoteDTO vote) throws JsonProcessingException {
final String json = mapper.writeValueAsString(vote); (3)
jmsTemplate.send("votes", new MessageCreator() { (4)
@Override
public Message createMessage(Session session) throws JMSException {
return session.createTextMessage(json); (5)
}
});
}
}
1 | inject a jmsTemplate supplied by ActiveMQ starter dependency |
2 | inject ObjectMapper that will marshal objects to JSON |
3 | marshal vote to JSON string |
4 | publish the JMS message to the "votes" topic |
5 | publish vote JSON string using a JMS TextMessage |
406.5. ObjectMapper
The ObjectMapper
that was injected in the VoterJMS
class was built
using a custom factory that configured it to use formatting and write
timestamps in ISO format versus numeric values.
@Bean
public Jackson2ObjectMapperBuilder jacksonBuilder() {
Jackson2ObjectMapperBuilder builder = new Jackson2ObjectMapperBuilder()
.indentOutput(true)
.featuresToDisable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
return builder;
}
@Bean
public ObjectMapper jsonMapper(Jackson2ObjectMapperBuilder builder) {
return builder.createXmlMapper(false).build();
}
406.6. JMS Receive
The JMS receive capability is performed within the same VoterJMS
class to
keep JMS implementation encapsulated. The class implements a method accepting
a JMS TextMessage
annotated with @JmsListener
. At this point we could have
directly called the ElectionsService
but I chose to go another level of indirection
and simply issue an ApplicationEvent
.
@Component
@RequiredArgsConstructor
public class VoterJMS {
...
private final ApplicationEventPublisher eventPublisher;
private final ObjectMapper mapper;
@JmsListener(destination = "votes") (2)
public void receive(TextMessage message) throws JMSException { (1)
String json = message.getText();
try {
VoteDTO vote = mapper.readValue(json, VoteDTO.class); (3)
eventPublisher.publishEvent(new NewVoteEvent(vote)); (4)
} catch (JsonProcessingException ex) {
//...
}
}
}
1 | implements a method receiving a JMS TextMessage |
2 | method annotated with @JmsListener against the votes topic |
3 | JSON string unmarshaled into a VoteDTO instance |
4 | Simple NewVoteEvent POJO created and published internally as an application event |
406.7. EventListener
An EventListener
@Component
is supplied to listen for the application
event and relay that to the ElectionsService
.
import org.springframework.context.event.EventListener;
@Component
@RequiredArgsConstructor
public class ElectionListener {
private final ElectionsService electionService;
@EventListener (2)
public void newVote(NewVoteEvent newVoteEvent) { (1)
electionService.addVote(newVoteEvent.getVote()); (3)
}
}
1 | method accepts NewVoteEvent POJO |
2 | method annotated with @EventListener looking for application events |
3 | method invokes addVote of ElectionsService when NewVoteEvent occurs |
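The NewVoteEvent class is not shown in the lecture. Based on how it is constructed in the JMS receiver and consumed in the listener above, a minimal sketch (assuming Lombok) would be:
import lombok.Getter;
import lombok.RequiredArgsConstructor;
@Getter
@RequiredArgsConstructor
public class NewVoteEvent {
    private final VoteDTO vote; //payload relayed from JMS receiver to ElectionListener
}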
At this point we are ready to look at some of the implementation details of the Elections Service.
407. JPA Integration
In this section we will go through the steps of adding the necessary dependencies to implement a JPA repository and simulate that with an in-memory RDBMS during unit integration testing.
407.1. JPA Core Maven Dependencies
The Elections Service uses a relational database and interfaces with that using
Spring Data and Java Persistence API (JPA). To do that, we need the following
core dependencies defined. The starter sets up the default JDBC DataSource and
JPA layer. The postgresql
dependency provides a JDBC driver for Postgres, one that takes
responsibility for Postgres-formatted JDBC URLs.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<scope>runtime</scope>
</dependency>
There are too many (~20) dependencies to list that come in from the spring-boot-starter-data-jpa
dependency.
You can run mvn dependency:tree
yourself to look, but basically it brings in Hibernate and
connection pooling. The supporting libraries for Hibernate and JPA are quite substantial.
407.2. JPA Test Dependencies
During integration unit testing we add the H2 database dependency to provide another option.
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>test</scope>
</dependency>
407.3. JPA Properties
The test properties include a direct reference to the in-memory H2 JDBC URL. I will explain the use of Flyway next, but this is considered optional for this case because Spring Data will trigger auto-schema population for in-memory databases.
#rdbms
spring.datasource.url=jdbc:h2:mem:users (1)
spring.jpa.show-sql=true (2)
# optional: in-memory DB will automatically get schema generated
spring.flyway.enabled=true (3)
1 | JDBC in-memory H2 URL |
2 | show SQL so we can see what is occurring between service and database |
3 | optionally turn on Flyway migrations |
407.4. Database Schema Migration
Unlike the NoSQL MongoDB, relational databases have a strict schema that defines how data is stored. That must be accounted for in all environments. However — the way we do it can vary:
-
Auto-Generation - the simplest way to configure a development environment is to use JPA/Hibernate auto-generation. This will delegate the job of populating the schema to Hibernate at startup. This is perfect for dynamic development stages where schema is changing constantly. This is unacceptable for production and other environments where we cannot lose all of our data when we restart our application.
-
Manual Schema Manipulation - relational database schema can get more complex than what can be auto-generated, and even auto-generated schema normally passes through the review of human eyes before making it to production. Deployment can be manually intensive and is likely the choice of many production environments where database admins must review, approve, and possibly execute the changes.
Once our schema stabilizes, we can capture the changes to a versioned file and use the Flyway plugin to automate the population of schema. If we do this during integration unit testing, we get a chance to supply a more tested product for production deployment.
407.5. Flyway RDBMS Schema Migration
Flyway is a schema migration library that can do forward (free) and reverse (at a cost) RDBMS schema migrations. We include Flyway by adding the following dependency to the application.
<dependency>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-core</artifactId>
<scope>runtime</scope>
</dependency>
The Flyway test properties include the JDBC URL that we are using for the application and a flag to enable.
spring.datasource.url=jdbc:h2:mem:users (1)
spring.flyway.enabled=true
1 | Flyway makes use of the Spring Boot database URL |
407.6. Flyway RDBMS Schema Migration Files
We feed the Flyway plugin schema migrations that move the database from version N
to version N+1, etc. The default directory for the migrations is in db/migration
of the classpath. The directory is populated with files that are executed in order
according to a name syntax that defaults to V#_#_#__description
(double underscore between last digit of version and first character of description;
the number of digits in the version is not mandatory).
dockercompose-votes-svc/src/main/resources/
`-- db
`-- migration
|-- V1.0.0__initial_schema.sql
`-- V1.0.1__expanding_choice_column.sql
The following is an example of a starting schema (V1_0_0).
create table vote (
id varchar(50) not null,
choice varchar(40),
date timestamp,
source varchar(40),
constraint vote_pkey primary key(id)
);
comment on table vote is 'countable votes for election';
The following is an example of a follow-on migration after it was determined that
the original choice
column size was too small.
alter table vote alter column choice type varchar(60);
407.7. Flyway RDBMS Schema Migration Output
The following is an example Flyway migration occurring during startup.
Database: jdbc:h2:mem:users (H2 1.4)
Successfully validated 2 migrations (execution time 00:00.022s)
Creating Schema History table "PUBLIC"."flyway_schema_history" ...
Current version of schema "PUBLIC": << Empty Schema >>
Migrating schema "PUBLIC" to version 1.0.0 - initial schema
Migrating schema "PUBLIC" to version 1.0.1 - expanding choice column
Successfully applied 2 migrations to schema "PUBLIC" (execution time 00:00.069s)
For our integration unit test — we end up at the same place as auto-generation, except we are taking the opportunity to dry-run and regression test the schema migrations prior to them reaching production.
407.8. JPA Repository
The following shows an example of our JPA/ElectionRepository. Similar to the MongoDB repository — this extension will provide us with many core CRUD and query methods. However, the one aggregate query targeted for this database cannot be automatically supplied without some help. We must provide the JPQL query, which translates into a SQL query, returning the choice, vote count, and latest vote date for that choice.
...
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
public interface ElectionRepository extends JpaRepository<VoteBO, String> {
@Query("select choice, count(id), max(date) from VoteBO group by choice order by count(id) DESC") (1)
public List<Object[]> countVotes(); (2)
}
1 | JPA query language to return choices aggregated with vote count and latest vote for each choice |
2 | a list of arrays — one per result row — with raw DB types is returned to caller |
407.9. Example VoteBO Entity Class
The following shows the example JPA Entity class used by the repository and service. This is a standard JPA definition that defines a table override, primary key, and mapping aspects for each property in the class.
...
import javax.persistence.*;
@Entity (1)
@Table(name="VOTE") (2)
@Data
@NoArgsConstructor
@AllArgsConstructor
@Builder
public class VoteBO {
@Id (3)
@Column(length = 50) (4)
private String id;
@Temporal(TemporalType.TIMESTAMP)
private Date date;
@Column(length = 40)
private String source;
@Column(length = 40)
private String choice;
}
1 | @Entity annotation required by JPA |
2 | overriding default table name (VOTEBO ) |
3 | JPA requires valid Entity classes to have primary key marked by @Id |
4 | column size specifications only used when generating schema — otherwise depends on migration to match |
407.10. Sample JPA/ElectionRepository Calls
The following is an example service class that is injected with the ElectionRepository
and
is able to make a few sample calls. save()
is pretty straightforward but notice that
countVotes()
requires some extra processing. The repository method returns a list of Object[]
values populated with raw values from the database — representing choice, voteCount, and lastDate.
The newest lastDate is used as the date of the election results. The other two values are stored
within a VoteCountDTO
object within ElectionResultsDTO
.
@Service
@RequiredArgsConstructor
public class ElectionsServiceImpl implements ElectionsService {
private final ElectionRepository votesRepository;
@Override
@Transactional(value = Transactional.TxType.REQUIRED)
public void addVote(VoteDTO voteDTO) {
VoteBO vote = map(voteDTO);
votesRepository.save(vote); (1)
}
@Override
public ElectionResultsDTO getVoteCounts() {
List<Object[]> counts = votesRepository.countVotes(); (2)
ElectionResultsDTO electionResults = new ElectionResultsDTO();
//...
return electionResults;
}
1 | save() inserts a new row into the database |
2 | countVotes() returns a list of Object[] with raw values from the DB |
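The elided mapping code might look like the following sketch. The ElectionResultsDTO and VoteCountDTO accessors used here are assumptions inferred from the election results JSON shown later.
//hypothetical row-mapping sketch -- not the lecture source
Date latest = new Date(0); //matches the 1970-01-01 date returned for an empty election
for (Object[] row : counts) {
    String choice = (String) row[0];
    long voteCount = (Long) row[1];
    Date lastDate = (Date) row[2];
    latest = lastDate.after(latest) ? lastDate : latest;
    electionResults.getResults().add(new VoteCountDTO(choice, voteCount)); //assumed ctor
}
electionResults.setDate(latest.toInstant()); //assumed Instant-based property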
408. Unit Integration Test
Stepping outside of the application and looking at the actual unit integration test — we see the majority of the magical meat in the first several lines.
-
@SpringBootTest
is used to define an application context that includes our complete application plus a test configuration that is used to inject necessary test objects that could be configured differently for certain types of tests (e.g., security filter) -
The port number is randomly generated and injected into the constructor to form baseUrls. We will look at a different technique in the Testcontainers lecture that allows for more first-class support for late-binding properties.
@SpringBootTest( classes = {ClientTestConfiguration.class, VotesExampleApp.class},
webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT, (1)
properties = "test=true") (2)
@ActiveProfiles("test") (3)
@DisplayName("votes integration unit test")
public class VotesTemplateNTest {
@Autowired (4)
private RestTemplate restTemplate;
private final URI baseVotesUrl;
private final URI baseElectionsUrl;
public VotesTemplateNTest(@LocalServerPort int port) (1)
throws URISyntaxException {
baseVotesUrl = new URI( (5)
String.format("http://localhost:%d/api/votes", port));
baseElectionsUrl = new URI(
String.format("http://localhost:%d/api/elections", port));
}
...
1 | configuring a local web environment with the random port# injected into constructor |
2 | adding a test=true property that can be used to turn off conditional logic during tests |
3 | activating the test profile and its associated application-test.properties |
4 | restTemplate injected for cases where we may need authentication or other filters added |
5 | constructor forming reusable baseUrls with supplied random port value |
408.1. ClientTestConfiguration
The following shows how the restTemplate
was formed. In this case — it is extremely simple.
However, as you have seen in other cases, we could have required some authentication and logging
filters to the instance and this is the best place to do that when required.
@SpringBootConfiguration()
@EnableAutoConfiguration //needed to setup logging
public class ClientTestConfiguration {
@Bean
public RestTemplate anonymousUser(RestTemplateBuilder builder) {
RestTemplate restTemplate = builder.build();
return restTemplate;
}
}
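For example, had the API required authentication, a variant bean could add BASIC auth to the client. The credentials below are placeholders for illustration.
@Bean
public RestTemplate authnUser(RestTemplateBuilder builder) {
    //hypothetical authenticated client -- adds a BASIC Authorization header
    return builder
            .basicAuthentication("user", "password")
            .build();
}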
408.2. Example Test
The following shows a very basic example of an end-to-end test of the Votes Service. We use the baseUrl to cast a vote and then verify that it was accurately recorded.
@Test
public void cast_vote() {
//given - a vote to cast
Instant before = Instant.now();
URI url = baseVotesUrl;
VoteDTO voteCast = create_vote("voter1","quisp");
RequestEntity<VoteDTO> request = RequestEntity.post(url).body(voteCast);
//when - vote is casted
ResponseEntity<VoteDTO> response = restTemplate.exchange(request, VoteDTO.class);
//then - vote is created
then(response.getStatusCode()).isEqualTo(HttpStatus.CREATED);
VoteDTO recordedVote = response.getBody();
then(recordedVote.getId()).isNotEmpty();
then(recordedVote.getDate()).isAfterOrEqualTo(before);
then(recordedVote.getSource()).isEqualTo(voteCast.getSource());
then(recordedVote.getChoice()).isEqualTo(voteCast.getChoice());
}
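The create_vote() helper is not shown above. Given VoteDTO's @Builder support, a minimal sketch would be:
private VoteDTO create_vote(String source, String choice) {
    //id and date are left unset -- the service assigns them during castVote()
    return VoteDTO.builder()
            .source(source)
            .choice(choice)
            .build();
}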
At this point in the lecture we have completed covering the important aspects of forming an integration unit test with embedded resources in order to implement end-to-end testing on a small scale.
409. Summary
At this point we should have a good handle on how to add external resources (e.g., MongoDB, Postgres, ActiveMQ) to our application and configure our integration unit tests to operate end-to-end using either simulated or in-memory options for the real resource. This gives us the ability to identify more issues early before we go into more manually intensive integration or production. In the following lectures, I will be expanding on this topic to take on several Docker-based approaches to integration testing.
In this module we learned:
-
how to integrate MongoDB into a Spring Boot application
-
and how to integration unit test MongoDB code using Flapdoodle
-
how to integrate an ActiveMQ server into a Spring Boot application
-
and how to integration unit test JMS code using an embedded ActiveMQ server
-
how to integrate Postgres into a Spring Boot application
-
and how to integration unit test relational code using an in-memory H2 database
-
how to implement an integration unit test using embedded resources
Docker Compose Integration Testing
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
410. Introduction
But what if we wanted or needed to use real resources?
Figure 185. Integration Unit Test with In-Memory/Local Resources
Figure 186. How Can We Test with Real Resources
What if we needed to test with a real or specific version of MongoDB, ActiveMQ, Postgres, or some other resource? What if some of those other resources were a supporting microservice? We could implement an integration test — but how can we automate it?
Figure 187. Integration Test with Docker and Docker Compose
410.1. Goals
You will learn:
-
how to implement a network of services for development and testing using Docker Compose
-
how to implement an integration test between real instances running in a virtualized environment
-
how to interact with the running instances during testing
410.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
create a Docker Compose file that defines a network of services and their dependencies
-
execute ad-hoc commands inside running images
-
integrate Docker Compose into a Maven integration test phase
-
author an integration test that uses real resource instances with dynamically assigned ports
411. Integration Testing with Real Resources
We are in a situation where we need to run integration tests against real components. These "real" components can be virtualized, but they primarily need to contain the specific features of the specific versions we are taking advantage of.
Figure 188. Need to Integrate with Specific Real Services
Figure 189. Virtualize Services with Docker
My example uses generic back-end resources as examples of what we need to integrate with. However, in the age of microservices — these examples could easily be lower-level applications offering necessary services for our client application to properly operate.
We need access to these resources in the development environment but would soon need them during automated integration tests running regression tests in the CI server as well.
Let's look to Docker for a solution …
411.1. Managing Images
You know from our initial Docker lectures that we can easily download the images
and run them individually (given some instructions) with the docker run
command.
Knowing that — we could try doing the following and almost get it to work.
$ docker run --rm -p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME=admin \
-e MONGO_INITDB_ROOT_PASSWORD=secret mongo:4.4.0-bionic
$ docker run --rm -p 5432:5432 \
-e POSTGRES_PASSWORD=secret postgres:12.3-alpine
$ docker run --rm -p 61616:61616 -p 8161:8161 \
rmohr/activemq:5.15.9
$ docker run --rm -p 9080:8080 \
-e MONGODB_URI='mongodb://admin:secret@host.docker.internal:27017/votes_db?authSource=admin' \
-e DATABASE_URL='postgres://postgres:secret@host.docker.internal:5432/postgres' \
-e spring.profiles.active=integration dockercompose-votes-api:latest
However, this begins to get complicated when:
-
we start integrating the API image with the individual resources through networking
-
we want to make the test easily repeatable
-
we want multiple instances of the test running concurrently on the same machine without interference with one another
Let's not mess with manual Docker commands for too long! There are better ways to do this with Docker Compose — covered earlier. I will review some of the aspects.
412. Docker Compose Configuration File
The Docker Compose (configuration) file is based on
YAML — which uses a concise way to express information
based on indentation and firm symbol rules. Assuming we have a simple network of four (4)
nodes, we can limit our definition to a version
and services
.
version: '3.8'
services:
mongo:
...
postgres:
...
activemq:
...
api:
...
Refer to the Compose File Reference for more details.
412.1. mongo Service Definition
The mongo
service defines our instance of MongoDB.
mongo:
image: mongo:4.4.0-bionic
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: secret
# ports:
# - "27017:27017"
412.2. postgres Service Definition
The postgres
service defines our instance of Postgres.
postgres:
image: postgres:12.3-alpine
# ports:
# - "5432:5432"
environment:
POSTGRES_PASSWORD: secret
412.3. activemq Service Definition
The activemq
service defines our instance of ActiveMQ.
activemq:
image: rmohr/activemq:5.15.9
# ports:
# - "61616:61616"
# - "8161:8161"
-
port 61616 is used for JMS communication
-
port 8161 is an HTTP server that can be used for HTML status
412.4. api Service Definition
The api
service defines our API server with the Votes and Elections Services.
This service will become a client of the other three services.
api:
build:
context: ../dockercompose-votes-svc
dockerfile: Dockerfile
image: dockercompose-votes-api:latest
ports:
- "${API_PORT}:8080"
depends_on:
- mongo
- postgres
- activemq
environment:
- spring.profiles.active=integration
- MONGODB_URI=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin
- DATABASE_URL=postgres://postgres:secret@postgres:5432/postgres
412.5. Compose Override Files
I left off port definitions from the primary file on purpose.
That will become more evident in the Testcontainers topic in the next lecture when we need dynamically assigned port numbers.
However, for purposes here we need well-known port numbers and can do so easily with an additional configuration file — docker-compose.override.yml
.
Docker Compose files can be layered from base (shown above) to specialized. The following example shows the previous definitions being extended to include mapped host port# mappings. We might add this override in the development environment to make it easy to access the service ports on the host’s local network.
version: '3.8'
services:
mongo:
ports:
- "27017:27017"
postgres:
ports:
- "5432:5432"
activemq:
ports:
- "61616:61616"
- "8161:8161"
When started — notice how the container port# is mapped according to what the override file specified.
$ docker ps
IMAGE PORTS
dockercompose-votes-api:latest 0.0.0.0:9090->8080/tcp
postgres:12.3-alpine 0.0.0.0:5432->5432/tcp, 0.0.0.0:32812->5432/tcp
mongo:4.4.0-bionic 0.0.0.0:27017->27017/tcp, 0.0.0.0:32813->27017/tcp
rmohr/activemq:5.15.9 1883/tcp, 5672/tcp, 0.0.0.0:8161->8161/tcp, 61613-61614/tcp, 0.0.0.0:61616->61616/tcp (1)
1 | notice that only the ports we mapped are exposed |
Override Limitations May Cause Compose File Refactoring
There is a limit to what you can override versus augment. Single values can replace single
values. However, lists of values can only contribute to a larger list. That means we cannot
create a base file with ports mapped and then a build system override with the port mappings
taken away.
|
413. Test Drive
Let's test out our services before demonstrating a few more commands. Everything is up and running and only the API port is exposed to the local host network using port# 9090.
$ docker ps
IMAGE PORTS
dockercompose-votes-api:latest 0.0.0.0:9090->8080/tcp (1)
postgres:12.3-alpine 5432/tcp
mongo:4.4.0-bionic 27017/tcp
rmohr/activemq:5.15.9 1883/tcp, 5672/tcp, 8161/tcp, 61613-61614/tcp, 61616/tcp
1 | only the API has its container port# (8080 ) mapped to a host port# (9090 ) |
413.1. Clean Starting State
We start off with nothing in the Vote or Election databases.
$ curl http://localhost:9090/api/votes/total
0
$ curl http://localhost:9090/api/elections/counts
{
"date" : "1970-01-01T00:00:00Z",
"results" : [ ]
}
413.2. Cast Two Votes
We can then cast votes for different choices and have them added to MongoDB and have a JMS message published.
$ curl -X POST http://localhost:9090/api/votes -H "Content-Type: application/json" -d '{"source":"jim","choice":"quisp"}'
{
"id" : "5f31eed580cfe474aeaa1536",
"date" : "2020-08-11T01:05:25.168505Z",
"source" : "jim",
"choice" : "quisp"
}
$ curl -X POST http://localhost:9090/api/votes -H "Content-Type: application/json" -d '{"source":"jim","choice":"quake"}'
{
"id" : "5f31eee080cfe474aeaa1537",
"date" : "2020-08-11T01:05:36.374043Z",
"source" : "jim",
"choice" : "quake"
}
413.3. Observe Updated State
At this point we can locate some election results in Postgres using API calls.
$ curl http://localhost:9090/api/elections/counts
{
"date" : "2020-08-11T01:05:36.374Z",
"results" : [ {
"choice" : "quake",
"votes" : 1
}, {
"choice" : "quisp",
"votes" : 1
} ]
}
414. Inspect Images
This is a part that I think is really useful and easy. Docker Compose provides an easy interface for running commands within the images.
414.1. Exec Mongo CLI
In the following example, I am running the mongo
command line interface (CLI)
command against the running mongo
service and passing in credentials as
command line arguments. Once inside, I can locate our votes_db
database,
votes
collection, and two documents that represent the votes
I was able to cast earlier.
$ docker-compose exec mongo mongo -u admin -p secret --authenticationDatabase admin (1)
MongoDB shell version v4.4.0
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("1fbd09ab-73e3-459f-b5f5-5d23903f672c") }
MongoDB server version: 4.4.0
> show dbs (2)
admin 0.000GB
config 0.000GB
local 0.000GB
votes_db 0.000GB
> use votes_db
switched to db votes_db
> show collections
votes
> db.votes.find({},{"choice":1}) (3)
{ "_id" : ObjectId("5f31eed580cfe474aeaa1536"), "choice" : "quisp" }
{ "_id" : ObjectId("5f31eee080cfe474aeaa1537"), "choice" : "quake" }
> exit (4)
bye
1 | running mongo CLI command inside running mongo image with command line args expressing credentials |
2 | running CLI commands to inspect database |
3 | listing documents in the votes database |
4 | exiting CLI and returning to host shell |
414.2. Exec Postgres CLI
In the following example, I am running the psql
CLI command against the running postgres
service and passing in credentials as command line arguments. Once inside, I can locate
our Flyway migration and VOTE table and list some of the votes that are in the election.
$ docker-compose exec postgres psql -U postgres (1)
psql (12.3)
Type "help" for help.
postgres=# \d+ (2)
List of relations
Schema | Name | Type | Owner | Size | Description
--------+-----------------------+-------+----------+------------+------------------------------
public | flyway_schema_history | table | postgres | 16 kB |
public | vote | table | postgres | 8192 bytes | countable votes for election
(2 rows)
postgres=# select * from vote; (3)
id | choice | date | source
--------------------------+--------+-------------------------+--------
5f31eed580cfe474aeaa1536 | quisp | 2020-08-11 01:05:25.168 | jim
5f31eee080cfe474aeaa1537 | quake | 2020-08-11 01:05:36.374 | jim
(2 rows)
postgres=# \q (4)
1 | running psql CLI command inside running postgres image with command line args expressing credentials |
2 | running CLI commands to inspect database |
3 | listing table rows in the vote table |
4 | exiting CLI and returning to host shell |
414.3. Exec Impact
With the capability to exec a command inside the running containers, we can gain access to a significant amount of state of our application and databases without having to install any software beyond Docker.
415. Integration Test Setup
At this point we should understand what Docker Compose is and how to configure it for use with our specific integration test. I now want to demonstrate it being used in an automated "integration test" where it will get executed as part of the Maven integration-test phases.
415.1. Integration Properties
We will be launching our API image with the following Docker environment expressed in the Docker Compose file.
environment:
- spring.profiles.active=integration
- MONGODB_URI=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin
- DATABASE_URL=postgres://postgres:secret@postgres:5432/postgres
That will get digested by the run_env.sh script to produce the following.
--spring.datasource.url=jdbc:postgresql://postgres:5432/postgres \
--spring.datasource.username=postgres \
--spring.datasource.password=secret \
--spring.data.mongodb.uri=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin
That will be integrated with the following properties from the integration
profile.
#activemq
spring.activemq.broker-url=tcp://activemq:61616
#rdbms
spring.jpa.show-sql=true
spring.jpa.generate-ddl=false
spring.jpa.hibernate.ddl-auto=validate
spring.flyway.enabled=true
I have chosen to hard-code the integration URL for ActiveMQ into the properties file since we won’t be passing in an ActiveMQ URL in production. The MongoDB and Postgres properties will originate from environment variables versus hard coding them into the integration properties file to better match the production environment and further test the run_env.sh launch script.
415.2. Maven Build Helper Plugin
We will want a random, unused port# assigned when we run the integration tests so that
multiple instances of the test can be run concurrently on the same build server
without colliding. We can leverage the build-helper-maven-plugin
to identify a
port# and have the value assigned to a Maven property.
I am assigning it to a docker.http.port
property that I made up.
<!-- assigns a random port# to property server.http.port -->
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<executions>
<execution>
<id>reserve-network-port</id>
<goals>
<goal>reserve-network-port</goal>
</goals>
<phase>pre-integration-test</phase>
<configuration>
<portNames>
<portName>docker.http.port</portName> (1)
</portNames>
</configuration>
</execution>
</executions>
</plugin>
1 | a dynamically obtained network port# is assigned to the docker.http.port Maven property |
The following is an example output of the build-helper-maven-plugin
during the build.
[INFO] --- build-helper-maven-plugin:3.1.0:reserve-network-port (reserve-network-port) @ dockercompose-votes-it ---
[INFO] Reserved port 60616 for docker.http.port
415.3. Maven Docker Compose Plugin
After generating a random port#, we can start our Docker Compose network.
I am using the docker-compose-maven-plugin
(https://github.com/br4chu/docker-compose-maven-plugin) to perform that role. It automatically
hooks into the pre-integration-test
phase to issue the up
command and the post-integration-test
phase to issue the down
command when we configure it the following way. It also
allows us to name and pass variables into the Docker Compose file.
<plugin>
<groupId>io.brachu</groupId>
<artifactId>docker-compose-maven-plugin</artifactId>
<configuration>
<projectName>${project.artifactId}</projectName>
<file>${project.basedir}/docker-compose.yml</file>
<env>
<API_PORT>${docker.http.port}</API_PORT> (1)
</env>
</configuration>
<executions>
<execution>
<goals>
<goal>up</goal>
<goal>down</goal>
</goals>
</execution>
</executions>
</plugin>
1 | dynamically obtained network port# is assigned to Docker Compose file’s API_PORT variable,
which controls the port mapping of the API server |
415.4. Maven Docker Compose Plugin Output
The following shows example plugin output during the pre-integration-test
phase
that is starting the services prior to running the tests.
[INFO] --- docker-compose-maven-plugin:0.8.0:up (default) @ dockercompose-votes-it ---
Creating network "dockercompose-votes-it_default" with the default driver
...
Creating dockercompose-votes-it_mongo_1 ... done
Creating dockercompose-votes-it_api_1 ... done
The following shows example plugin output during the post-integration-test
phase
that is shutting down the services after running the tests.
[INFO] --- docker-compose-maven-plugin:0.8.0:down (default) @ dockercompose-votes-it ---
Killing dockercompose-votes-it_api_1 ...
Killing dockercompose-votes-it_api_1 ... done
Killing dockercompose-votes-it_postgres_1 ... done
Removing dockercompose-votes-it_mongo_1 ... done
Removing dockercompose-votes-it_postgres_1 ... done
Removing network dockercompose-votes-it_default
415.5. Maven Failsafe Plugin
The following shows the configuration of the maven-failsafe-plugin
. Generically, it
runs in the integration-test
phase, matches/runs the IT
tests, and adds test
classes to the classpath. More specific to Docker Compose — it accepts the dynamically
assigned port# and passes it to JUnit using the it.server.port
property.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<executions>
<execution>
<id>integration-test</id>
<goals>
<goal>integration-test</goal>
</goals>
<configuration>
<includes>
<include>**/*IT.java</include>
</includes>
<systemPropertyVariables>
<it.server.port>${docker.http.port}</it.server.port> (1)
</systemPropertyVariables>
<additionalClasspathElements>
<additionalClasspathElement>${basedir}/target/classes</additionalClasspathElement>
</additionalClasspathElements>
</configuration>
</execution>
</executions>
</plugin>
1 | passing in generated docker.http.port value into it.server.port property |
At this point, both Docker Compose and Failsafe/JUnit have been given the same dynamically assigned port#.
415.6. IT Test Client Configuration
The following shows the IT test configuration class that maps the it.server.port
property to the baseUrl for the tests.
@SpringBootConfiguration()
@EnableAutoConfiguration //needed to setup logging
public class ClientTestConfiguration {
@Value("${it.server.host:localhost}")
private String host;
@Value("${it.server.port:9090}") (1)
private int port;
@Bean
public URI baseUrl() {
return UriComponentsBuilder.newInstance()
.scheme("http")
.host(host)
.port(port)
.build()
.toUri();
}
@Bean
public URI electionsUrl(URI baseUrl) {
return UriComponentsBuilder.fromUri(baseUrl).path("api/elections")
.build().toUri();
}
@Bean
public RestTemplate anonymousUser(RestTemplateBuilder builder) {
RestTemplate restTemplate = builder.build();
return restTemplate;
}
1 | API port# property injected through Failsafe plugin configuration |
415.7. Example Failsafe Output
The following shows the Failsafe and JUnit output that runs during the integration-test phase.
[INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-test) @ dockercompose-votes-it ---
[INFO]
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running info.ejava.examples.svc.docker.votes.ElectionIT
...
...ElectionIT#init:46 votesUrl=http://localhost:60616/api/votes (1)
...ElectionIT#init:47 electionsUrl=http://localhost:60616/api/elections
...
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.372 s - in info.ejava.examples.svc.docker.votes.ElectionIT
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
1 | URLs with dynamic host port# assigned for API |
415.8. IT Test Setup
The following shows the common IT test setup where the various URLs built by the test configuration are injected.
@SpringBootTest(classes={ClientTestConfiguration.class},
webEnvironment = SpringBootTest.WebEnvironment.NONE)
@Slf4j
public class ElectionIT {
@Autowired
private RestTemplate restTemplate;
@Autowired
private URI votesUrl;
@Autowired
private URI electionsUrl;
private static Boolean serviceAvailable;
@PostConstruct
public void init() {
log.info("votesUrl={}", votesUrl);
log.info("electionsUrl={}", electionsUrl);
}
415.9. Wait For Services Startup
We have at least one more job to do before our tests — we have to wait for the API server to finish starting up. We can add that logic to a @BeforeEach and remember the answer from the first attempt in all following attempts.
@BeforeEach
public void serverRunning() {
    List<URI> urls = new ArrayList<>(Arrays.asList(
        UriComponentsBuilder.fromUri(votesUrl).path("/total").build().toUri(),
        UriComponentsBuilder.fromUri(electionsUrl).path("/counts").build().toUri()));
    if (serviceAvailable!=null) {
        assumeTrue(serviceAvailable);
    } else {
        assumeTrue(() -> { (1)
            for (int i=0; i<10; i++) {
                try {
                    for (Iterator<URI> itr = urls.iterator(); itr.hasNext();) {
                        URI url = itr.next();
                        restTemplate.getForObject(url, String.class); (2)
                        itr.remove(); (3)
                    }
                    return serviceAvailable = true; (4)
                } catch (Exception ex) {
                    //...
                }
            }
            return serviceAvailable=false;
        });
    }
}
1 | assumeTrue will not run the tests if it evaluates false |
2 | checking for a non-exception result |
3 | removing criteria once satisfied |
4 | evaluate true if all criteria satisfied |
At this point our tests are the same as most other Web API tests, where we invoke the server using HTTP calls with the assembled URLs.
416. Summary
In this module we learned:
-
to create a Docker Compose file that defines a network of services and their dependencies
-
to integrate Docker Compose into a Maven integration test phase
-
to implement an integration test that uses dynamically assigned ports
-
to execute ad-hoc commands inside running images
Testcontainers
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
417. Introduction
In a previous section we implemented "unit integration tests" with in-memory instances for back-end resources. We later leveraged Docker and Docker Compose to implement "integration tests" with real resources operating in a virtual environment. We self-integrated Docker Compose in that latter step, using several Maven plugins and Maven's integration testing phases.
In this lecture I will demonstrate an easier, more seamless way to integrate Docker Compose into our testing using Testcontainers. This will allow us to drop back into the Maven test phase and implement the integration tests using straightforward unit test constructs.
417.1. Goals
You will learn:
-
how to better integrate Docker and DockerCompose into unit tests
-
how to inject dynamically assigned values into the application context startup
417.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
implement an integration unit test using Docker Compose and Testcontainers library
-
implement a Spring DynamicPropertySource to obtain dynamically assigned port numbers in time for concrete URL injections
-
execute shell commands from a JUnit test into a running Docker container using Testcontainers library
-
establish client connection to back-end resources to inspect state as part of the test
418. Testcontainers Overview
Testcontainers is a Java library that supports running Docker containers within JUnit tests and other test frameworks.
Testcontainers provides a layer of integration that is well aware of the integration challenges that are present when testing with Docker images and can work both outside and inside a Docker container itself.
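As a quick orientation before the full example, the following minimal sketch (not part of the voting example; the class and test names are mine) shows the JUnit 5 integration managing a single generic container.
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import static org.junit.jupiter.api.Assertions.assertTrue;

@Testcontainers
class MongoSmokeTest {
    @Container //started before the tests, stopped after
    static GenericContainer<?> mongo =
            new GenericContainer<>("mongo:4.4.0-bionic").withExposedPorts(27017);

    @Test
    void container_is_running() {
        assertTrue(mongo.isRunning());
        int hostPort = mongo.getMappedPort(27017); //dynamically mapped host port
    }
}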
Spring making changes to support Testcontainers
As a personal observation — by looking at documentation, articles, and the timing of
feature releases — it is my opinion that Spring and Spring Boot are very high
on Testcontainers and have added features to their framework to help make testing
with Testcontainers as seamless as possible.
|
419. Example
This example builds on the previous Docker Compose lecture that uses the
same Votes and Elections services. The main difference is that we will be
directly interfacing with the Docker images using Testcontainers in the test
phase versus starting up the resources at the beginning of the tests and shutting
down at the end.
By having such a direct connection to the containers — we can control what gets reused from test to test. Sharing reused container state between tests can be error prone, but starting up and shutting down containers takes a noticeable amount of time to complete. We want control over when we use each approach without going through extreme heroics.
419.1. Maven Dependencies
The following lists the Testcontainers Maven dependencies. The core library calls are within the testcontainers artifact and JUnit-specific capabilities are within the junit-jupiter artifact. I have declared the junit-jupiter dependency at test scope and testcontainers at compile (default) scope because
-
this is a pure test module — with no packaged implementation code
-
helper methods have been placed in
src/main
-
as the test suite grows larger, this allows the helper code and other test support features to be shared among different testing modules
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>testcontainers</artifactId> (1)
</dependency>
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>junit-jupiter</artifactId> (2)
<scope>test</scope>
</dependency>
1 | core Testcontainers calls will be placed in src/main to begin to form a test helper library |
2 | JUnit-specific calls will be placed in src/test |
419.2. Main Tree
The module’s main tree contains a source copy of the Docker Compose file describing the network of services, a helper class that encapsulates initialization and configuration status of the network, and a JMS listener that can be used to subscribe to the JMS messages between the Votes and Elections services.
src/main/
|-- java
| `-- info
| `-- ejava
| `-- examples
| `-- svc
| `-- docker
| `-- votes
| |-- ClientTestConfiguration.java
| `-- VoterListener.java
`-- resources
`-- docker-compose-votes.yml
419.3. Test Tree
The test tree contains artifacts that pertain to this test only. The JUnit test will rely heavily on the artifacts in the src/main tree, and we should treat them as if they might come in from a library shared by multiple integration unit tests.
src/test/
|-- java
| `-- info
| `-- ejava
| `-- examples
| `-- svc
| `-- docker
| `-- votes
| `-- ElectionCNTest.java
`-- resources
|-- application.properties
`-- junit-platform.properties
420. Example: Main Tree Artifacts
The main tree contains artifacts that are generic to serving up the network for specific tests hosted in the src/test tree. This division has nothing directly to do with Testcontainers — except to show that once we get one of these going, we are going to want more.
420.1. Docker Compose File
Our Docker Compose file is tucked away within the test module since it is primarily meant to support testing. I have purposely removed all external port mapping references because they are not needed — Testcontainers will provide another way to map and locate the host port#. I have also eliminated the build of the image; it should have been built by now based on Maven module dependencies. However, if we create a resolvable source reference to the module, Testcontainers will make sure it is built.
version: '3.8'
services:
mongo:
image: mongo:4.4.0-bionic
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: secret
postgres:
image: postgres:12.3-alpine
environment:
POSTGRES_PASSWORD: secret
activemq:
image: rmohr/activemq:5.15.9
api:
image: dockercompose-votes-api:latest
depends_on:
- mongo
- postgres
- activemq
environment:
- spring.profiles.active=integration
- MONGODB_URI=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin
- DATABASE_URL=postgres://postgres:secret@postgres:5432/postgres
420.2. Docker Compose File Reference
Testcontainers will load one or more layered Docker Compose files — but insists that they each be expressed as a java.io.File. If we assume the code in the src/main tree is always going to be in source form — then we can make a direct reference there. However, assuming that this could be coming from a JAR — I decided to copy the data from the classpath into a referenceable file in the target tree.
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import org.junit.jupiter.api.Assertions;
...
public static File composeFile() {
Path targetPath = Paths.get("target/docker-compose-votes.yml"); (2)
try (InputStream is = ClientTestConfiguration.class (1)
.getResourceAsStream("/docker-compose-votes.yml")) {
Files.copy(is, targetPath, StandardCopyOption.REPLACE_EXISTING);
} catch (IOException ex) {
Assertions.fail("error creating source Docker Compose file", ex);
}
return targetPath.toFile();
}
1 | assuming worse case that the file will be coming in from a test support JAR |
2 | placing referencable file in target path — actual name does not matter |
The following shows the source and target locations of the Docker Compose file written out.
target/
|-- classes/
|   `-- docker-compose-votes.yml (1)
`-- docker-compose-votes.yml (2)
1 | source coming from classpath |
2 | target written as a known file in target directory |
420.3. DockerComposeContainer
Testcontainers provides many containers — including a generic Docker container, image-specific containers, and a Docker Compose container. We are going to leverage our knowledge of Docker Compose and the encapsulation of details of the Docker Compose file here and have Testcontainers directly parse the Docker Compose file.
The example shows us supplying a project name, file reference(s), and then exposing individual container ports from each of the services. Originally — only the API port needed to be exposed. However, because of how simple Testcontainers makes it, I am going to expose the other ports as well. Testcontainers will also conveniently wait for activity on each of the ports when the network is started — before returning control back to our test. This can eliminate the need for "is server ready?" checks.
public static DockerComposeContainer testEnvironment() {
DockerComposeContainer env =
new DockerComposeContainer("testcontainers-votes", composeFile())
.withExposedService("api", 8080) (1)
.withExposedService("activemq", 61616) (2)
.withExposedService("postgres", 5432) (2)
.withExposedService("mongo", 27017) (2)
.withLocalCompose(true); (3)
return env;
}
1 | exposing container ports using random port and will wait for container port to become active |
2 | optionally exposing lower level resource services to demonstrate further capability |
3 | indicates whether this is a host machine that will run the images as children or whether this is running as a Docker image and the images will be tunneled (wormholed) out as sibling containers |
420.4. Obtaining Runtime Port Numbers
At runtime, we can obtain the assigned hostname and port numbers by calling
getServiceHost()
and getServicePort()
with the service name and container port
we exposed earlier.
DockerComposeContainer env = ClientTestConfiguration.testEnvironment(); (1)
...
env.start(); (2)
env.getServicePort("api", 8080)); (3)
env.getServiceHost("mongo", null); (4)
env.getServicePort("mongo", 27017);
env.getServiceHost("activemq", null);
env.getServicePort("activemq", 61616);
env.getServiceHost("postgres", null);
env.getServicePort("postgres", 5432);
1 | Docker Compose file is parsed |
2 | network/services must be started in order to determine mapped host port numbers |
3 | referenced port must have been listed with withExposedService() earlier |
4 | the hostname is available as well, in case it is ever not localhost. The second parameter is not used. |
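A typical use of these lookups is to assemble a concrete base URL. A small sketch, with variable names of my choosing:
//assemble the API base URL from the dynamically mapped host/port
String host = env.getServiceHost("api", null);
int port = env.getServicePort("api", 8080);
URI baseUrl = UriComponentsBuilder.newInstance()
        .scheme("http").host(host).port(port).build().toUri();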
421. Example: Test Tree Artifacts
421.1. Primary NTest Setup
We construct our test as a normal Spring Boot integration unit test (NTest) except we have no core application to include in the Spring context — everything is provided through the test configuration. There is no need for a web server — we will use HTTP calls from the test’s JVM to speak to the remote web server.
Docker images and Docker Compose networks of services take many seconds (~10-15secs)
to completely start up. Thus we want to promote some level of efficiency between tests. We will
instantiate and store the DockerComposeContainer
in a static variable, initialize
and shutdown once per test class, and reuse for each test method within that class.
Since we are sharing the same network for each test method — I am also
demonstrating the ability to control the order of the test methods.
Lastly — we can have the lifecycle of the network integrated with the JUnit test case by adding the @Testcontainers
annotation to the class and the @Container
annotation to the field holding the overall container.
This takes care of automatically starting and stopping the network defined in the env
variable.
import org.testcontainers.containers.DockerComposeContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
...
@Testcontainers (5)
@TestMethodOrder(MethodOrderer.OrderAnnotation.class) (4)
@SpringBootTest(classes={ClientTestConfiguration.class}, (1)
webEnvironment = SpringBootTest.WebEnvironment.NONE) (2)
public class ElectionCNTest {
@Container (5)
private static DockerComposeContainer env = (3)
ClientTestConfiguration.testEnvironment();
@Test @Order(1)
public void vote_counted_in_election() { //...
@Test
@Order(3) (4)
public void test3() { vote_counted_in_election(); }
@Test @Order(2)
public void test2() { vote_counted_in_election(); }
1 | Only test constructs in our application context — no application beans |
2 | we do not need a web server — we are the client of a web server |
3 | sharing same network in all tests within this test case |
4 | controlling order of tests when using shared network |
5 | @Testcontainers and @Container annotations integrate the lifecycle of
the network with the test case |
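For reference, the two annotations are roughly equivalent to managing the lifecycle manually in JUnit callbacks (a sketch; the Spock lecture later in this section does exactly this with setupSpec/cleanupSpec):
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
...
@BeforeAll
static void startNetwork() {
    env.start(); //start the Docker Compose network once per test class
}
@AfterAll
static void stopNetwork() {
    env.stop(); //tear the network down after the last test
}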
421.2. Injecting Dynamically Assigned Port#s
We soon hit a chicken-and-the-egg problem when we attempt to inject the URLs into the test class: the URI beans are built from a port property that is not known until the network has started, but Spring wants to resolve that property when it builds the application context. The fragment below makes the timing problem concrete.
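The property name comes from the configuration class shown elsewhere in this section; the juxtaposition is mine.
//resolved by Spring while building the application context...
@Value("${it.server.port:9090}")
private int port;

//...but the real value only exists after the network has been started
int actualPort = env.getServicePort("api", 8080);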
421.3. DynamicPropertySource
In what seemed like a special favor to Testcontainers — Spring added a
DynamicPropertySource
construct to the framework that allows for a property to
be supplied late in the startup process.
-
after starting the network but prior to injecting any URIs and running a test, Spring invokes the following annotated method in the JUnit test so that it may inject any late properties.
@DynamicPropertySource
private static void properties(DynamicPropertyRegistry registry) { (1)
    ClientTestConfiguration.initProperties(registry, env);
}
1 | method is required to be static |
-
the callback method can then supply the missing property that will allow for the URI injections needed for the tests
public static void initProperties(DynamicPropertyRegistry registry, DockerComposeContainer env) {
    registry.add("it.server.port", ()->env.getServicePort("api", 8080));
    //...
}
Nice!
421.4. Injections Complete prior to Tests
With the injections in place, we can show the URLs with the dynamically assigned port numbers. We also have the opportunity to have the test wait for anything we can think of. Testcontainers waited for the container port to become active. The example below instructs Testcontainers to also wait for our API calls to be available. This eliminates the need for that ugly @BeforeEach call in the last lecture where we needed to wait for the API server to be ready before running the tests.
@BeforeEach
public void init() throws IOException, InterruptedException {
log.info("votesUrl={}", votesUrl); (1)
log.info("electionsUrl={}", votesUrl);
/**
* wait for various events relative to our containers
*/
env.waitingFor("api", Wait.forHttp(votesUrl.toString())); (2)
env.waitingFor("api", Wait.forHttp(electionsUrl.toString()));
1 | logging injected URLs with dynamically assigned host port numbers |
2 | instructing Testcontainers to also wait for the API to come available |
ElectionCNTest#init:73 votesUrl=http://localhost:32989/api/votes
ElectionCNTest#init:74 electionsUrl=http://localhost:32989/api/elections
422. Exec Commands
Testcontainers gives us the ability to execute commands against specific running containers. The following executes the database CLI interfaces, requests a dump of information, and then obtains the results from stdout.
import org.testcontainers.containers.Container.ExecResult;
import org.testcontainers.containers.ContainerState;
...
ContainerState mongo = (ContainerState) env.getContainerByServiceName("mongo_1")
.orElseThrow();
ExecResult result = mongo.execInContainer("mongo",
"-u", "admin", "-p", "secret", "--authenticationDatabase", "admin",
"--eval", "db.getSiblingDB('votes_db').votes.find()");
log.info("voter votes = {}", result.getStdout());
ContainerState postgres = (ContainerState)env.getContainerByServiceName("postgres_1")
.orElseThrow();
result = postgres.execInContainer("psql",
"-U", "postgres",
"-c", "select * from vote");
log.info("election votes = {}", result.getStdout());
That is a bit unwieldy, but it demonstrates what we can do from a shell perspective. We will improve on this in a moment by using the API.
422.1. Exec MongoDB Command Output
The following shows the stdout obtained from the MongoDB container after executing the login and
query of the votes
collection.
ElectionCNTest#init:105 voter votes = MongoDB shell version v4.4.0
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("5f903fe7-b43c-4ce8-b6ae-7ef53fcbf434") }
MongoDB server version: 4.4.0
{ "_id" : ObjectId("5f357fef01737362e202a96d"), "date" : ISODate("2020-08-13T18:01:19.872Z"), "source" : "b67e012e-3e2f-4a66-b24b-b64d06d9b4c2", "choice" : "quisp-de5fd4f2-8ab8-4997-852e-2bfb97862c87", "_class" : "info.ejava.examples.svc.docker.votes.dto.VoteDTO" }
{ "_id" : ObjectId("5f357ff001737362e202a96e"), "date" : ISODate("2020-08-13T18:01:20.515Z"), "source" : "af366d9b-53cb-4487-8f21-e634eca08d67", "choice" : "quake-784f3df6-c6c4-4c3b-8d45-58636b335096", "_class" : "info.ejava.examples.svc.docker.votes.dto.VoteDTO" }
...
422.2. Exec Postgres Command Output
The following shows the stdout from the Postgres container after executing the login and query
of the VOTE
table.
ElectionCNTest#init:99 election votes =
            id            |                   choice                   |          date           |                source
--------------------------+--------------------------------------------+-------------------------+--------------------------------------
 5f357fef01737362e202a96d | quisp-de5fd4f2-8ab8-4997-852e-2bfb97862c87 | 2020-08-13 18:01:19.872 | b67e012e-3e2f-4a66-b24b-b64d06d9b4c2
 5f357ff001737362e202a96e | quake-784f3df6-c6c4-4c3b-8d45-58636b335096 | 2020-08-13 18:01:20.515 | af366d9b-53cb-4487-8f21-e634eca08d67
...
(6 rows)
423. Connect to Resources
Executing a command against a running service may be useful for interactive work.
In fact, we could create a breakpoint in the test and then manually go out to inspect the back-end resources (using docker ps
to locate the container and docker exec
to run a shell within the container) if we have access to the host network.
However, it can be clumsy to make any sense of the stdout result when writing an automated test. If we actually need to get state from the resource — it will be much simpler to use a first-class resource API to obtain results.
Let’s do that now.
423.1. Maven Dependencies
To add resource clients for our three back-end resources we just need to add the following familiar dependencies. We first introduced them in the API module’s dependencies in an earlier lecture.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-activemq</artifactId>
</dependency>
423.2. Injected Clients
Resource Clients to be Injected | Required URL properties |
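Based on the client calls used later in this lecture, the injections presumably look like the following sketch:
@Autowired
private MongoClient mongoClient;   //requires spring.data.mongodb.uri
@Autowired
private JdbcTemplate jdbcTemplate; //requires spring.datasource.url
@Autowired
private VoterListener listener;    //JMS subscription requires spring.activemq.broker-url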
423.3. URL Templates
The URLs can be built using the following hard-coded helper methods as long as we know the host and port number of each service.
public static String mongoUrl(String host, int port) {
return String.format("mongodb://admin:secret@%s:%d/votes_db?authSource=admin", host, port);
}
public static String jmsUrl(String host, int port) {
return String.format("tcp://%s:%s", host, port);
}
public static String jdbcUrl(String host, int port) {
return String.format("jdbc:postgresql://%s:%d/postgres", host, port);
}
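For illustration, with assumed dynamically mapped port values the helpers would produce:
String mongo = mongoUrl("localhost", 51234);
//mongodb://admin:secret@localhost:51234/votes_db?authSource=admin
String jms = jmsUrl("localhost", 51235);
//tcp://localhost:51235
String jdbc = jdbcUrl("localhost", 51236);
//jdbc:postgresql://localhost:51236/postgres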
423.4. Providing Dynamic Resource URL Declarations
The host and port numbers can be supplied from the network — just like we did with the API. Therefore, we can expand the dynamic property definition to include the three other properties.
public static void initProperties(DynamicPropertyRegistry registry, DockerComposeContainer env) {
registry.add("it.server.port", ()->env.getServicePort("api", 8080));
registry.add("spring.data.mongodb.uri",()-> mongoUrl( (2)
env.getServiceHost("mongo", null),
env.getServicePort("mongo", 27017))); (1)
registry.add("spring.activemq.broker-url", ()->jmsUrl(
env.getServiceHost("activemq", null),
env.getServicePort("activemq", 61616)));
registry.add("spring.datasource.url",()->jdbcUrl(
env.getServiceHost("postgres", null),
env.getServicePort("postgres", 5432)));
}
1 | dynamically assigned host port numbers are made available from running network |
2 | properties are provided to Spring late in the startup process — but in time to inject before the tests |
423.5. Application Properties
The dynamically created URL properties will be joined with the following hard-coded application properties to complete the connection information.
#activemq
spring.jms.pub-sub-domain=true
#postgres
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.username=postgres
spring.datasource.password=secret
423.6. JMS Listener
To obtain the published JMS messages — we add the following component with a JMS listener method. This will print a debug message and increment a counter.
//...
import org.springframework.jms.annotation.JmsListener;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.TextMessage;
@Component
@Slf4j
public class VoterListener {
@Getter
private AtomicInteger msgCount=new AtomicInteger(0);
@JmsListener(destination = "votes")
public void receive(Message msg) throws JMSException {
log.info("jmsMsg={}, {}", msgCount.incrementAndGet(), ((TextMessage) msg).getText());
}
}
We must add the JMS listener class to the Spring application context of the test.
The following example shows that being explicitly done in the @SpringBootTest.classes
annotation.
@SpringBootTest(classes={ClientTestConfiguration.class, VoterListener.class}, (1)
webEnvironment = SpringBootTest.WebEnvironment.NONE)
//...
public class ElectionCNTest {
1 | adding VoterListener component class to Spring context |
423.7. Obtain Client Status
The following shows a set of calls to the client interfaces to show the basic capability to communicate with the network services. This gives us the ability to add debugging or more detailed test verification.
@BeforeEach
public void init() throws IOException, InterruptedException {
...
/**
* connect directly to exposed port# of images to obtain sample status
*/
log.info("mongo client vote count={}", (1)
mongoClient.getDatabase("votes_db").getCollection("votes").countDocuments());
log.info("activemq msg={}", listener.getMsgCount().get()); (2)
log.info("postgres client vote count={}", (3)
jdbcTemplate.queryForObject("select count (*) from vote", Long.class));
1 | getting the count of vote documents from MongoDB client |
2 | getting number of messages received from JMS listener |
3 | getting the number of vote rows from Postgres client |
423.8. Client Status Output
The following shows an example of the client output in the @BeforeEach
method,
captured after the first test and before the second test.
ElectionCNTest#init:85 mongo client vote count=6
ElectionCNTest#init:87 activemq msg=6
ElectionCNTest#init:88 postgres client vote count=6
Very complete!
424. Summary
In this module we learned:
-
how to more seamlessly integrate Docker and DockerCompose into unit tests using Testcontainers library
-
how to inject dynamically assigned values into the application context to allow them to be injected into components at startup
-
to execute shell commands from a JUnit test into a running container using Testcontainers library
-
to establish client connection to back-end resources from our JUnit JVM operating the unit test
-
in the event that we need this information to verify test success or simply perform some debug of the scenario
-
Although integration tests should never replace unit tests, the capability demonstrated in this lecture shows how we can create very capable end-to-end tests to verify the parts will come together correctly. For example, it was not until I wrote and executed the integration tests in this lecture that I discovered I was accidentally using JMS queuing semantics versus topic semantics between the two services. When I added the extra JMS listener — the Elections Service suddenly started losing messages. Good find!!
Testcontainers with Spock
copyright Β© 2022 jim stafford (jim.stafford@jhu.edu)
425. Introduction
In several other lectures in this section I have individually covered the use of embedded resources, Docker, Docker Compose, and Testcontainers for the purpose of implementing integration tests using JUnit Jupiter.
Figure 190. Target Integration Environment
Integration Unit Test terminology
I use the term "integration test" somewhat loosely but use
the term "integration unit test" to specifically mean a test that
uses the Spring context under the control of a simple unit test
capable of being run inside of an IDE (without assistance) and
executed during the
Maven test phase. I use the term "unit test"
to mean the same thing except with stubs or mocks and the lack
of the overhead (and value) of the Spring context.
|
425.1. Goals
You will learn:
-
to identify the capability of Docker Compose to define and implement a network of virtualized services running in Docker
-
to identify the capability of Testcontainers to seamlessly integrate Docker and Docker Compose into unit test frameworks including Spock
-
to author end-to-end, integration unit tests using Spock, Testcontainers, Docker Compose, and Docker
-
to implement inspections of running Docker images
-
to implement inspections of virtualized services during tests
-
to instantiate virtualized services for use in development
425.2. Objectives
At the conclusion of this lecture and related exercises, you will be able to:
-
define a simple network of Docker-based services within Docker Compose
-
control the lifecycle of a Docker Compose network from the command line
-
implement a Docker Compose override file
-
control the lifecycle of a Docker Compose network using Testcontainers
-
implement an integration unit test within Spock, using Testcontainers and Docker Compose
-
implement a hierarchy of test classes to promote reuse
426. Background
426.1. Application Background
The application we are implementing and looking to test is a set of voting services with back-end resources. Users cast votes using the Votes Service and obtain election results using the Elections Service. Cast votes are stored in MongoDB and election results are stored and queried in Postgres. The two services stay in sync through a JMS topic hosted on ActiveMQ. Because of deployment constraints unrelated to testing — the two services have been hosted in the same JVM.
Figure 191. Voting and Election Services
426.2. Integration Testing Approach
The target of this lecture is the implementation of end-to-end integration tests. Integration tests do not replace fine-grain unit tests. In fact there are people with strong opinions that believe any attention given to integration tests takes away from the critical role of unit tests when it comes to thorough testing. I will agree there is some truth to that — we should not get too distracted by this integration verification playground to the point that we end up placing tests that could be verified in pure, fast unit tests — inside of larger, slower integration tests. However, there has to be a point in the process where we need to verify some amount of useful end-to-end threads of our application in an automated manner — especially in today’s world of microservices where critical supporting services have been broken out. Without the integration test — there is nothing that proves everything comes together during dynamic operation. Without the automation — there is no solid chance of regression testing.
Figure 192. In-Memory/Simulated Integration Testing Environment
One way to begin addressing automated integration testing with back-end resources is through the use of in-memory configurations and simulation of dependencies — local to the unit test JVM. This addresses some of the integration need when it is something like a database or JMS server, but will miss the mark completely when we need particular versions of a full-fledged application service.
Figure 193. Virtualized Integration Testing Environment
426.3. Docker Compose
A network of services can be complex and managing many individual Docker images is clumsy. It would be best if we took advantage of a Docker network/service management layer called Docker Compose.
Docker Compose uses a YAML file to define the network and services, and can even build service images from included "source" build information. With that in place, we can issue a build, start, and stop of the services as well as execute commands within the running containers. All of this must be on the same machine.
Because Docker Compose is limited to a single machine and is primarily just a thin coordination layer around Docker — it is MUCH simpler to use than Kubernetes or MiniKube. For those familiar with Kubernetes — I like to refer to it as a "poor man’s Helm Chart".
At a minimum, Docker Compose provides a convenient wrapper where we can place environment and runtime options for individual containers. These containers could be simple databases or JMS servers — eliminating the need to install software on the local development machine. The tool really begins to shine when we need to define dependencies and communication paths between services.
426.4. Testcontainers
Testcontainers provides a seamless integration of Docker and Docker Compose into unit test frameworks — including JUnit 4, JUnit 5, and Spock. Testcontainers manages a library of resource-specific containers that can provide access to properties that are specific to a particular type of image (e.g., databaseUrl for a Postgres container). Testcontainers also provide a generic container and a Docker Compose container — which provide all the necessary basics of running either a single image or a network of images.
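For example, the Postgres-specific container surfaces its connection properties directly. A minimal sketch, assuming the separate org.testcontainers:postgresql artifact is on the classpath:
import org.testcontainers.containers.PostgreSQLContainer;
...
try (PostgreSQLContainer<?> db = new PostgreSQLContainer<>("postgres:12.3-alpine")) {
    db.start();
    String jdbcUrl = db.getJdbcUrl();   //image-specific convenience properties
    String username = db.getUsername();
    String password = db.getPassword();
}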
Testcontainers provides features to
-
parse the Docker Compose file to learn the configuration of the network
-
assign optional variables used by the Docker Compose file
-
expose specific container ports as random host ports
-
identify the host port value of a mapped container port
-
delay the start of tests while built-in and customizable "wait for" checks execute to make sure the network is up and ready for testing
-
execute shell commands against the running containers
-
share a running network between (possibly ordered) tests or restart a dirty network between tests
427. Docker Compose
Before getting into testing, I will cover Docker Compose as a stand-alone capability. Docker Compose is very useful in standing up one or more Docker containers on a single machine, in a development or integration environment, without installing any software beyond Docker and Docker Compose (a simple binary).
427.1. Docker Compose File
Docker Compose uses one or more YAML
Docker Compose files for configuration.
The default primary file name is docker-compose.yml
,
but you can reference any file using the -f
option.
The following is a Docker Compose File that defines a simple network of services.
I reduced the version of the file in the example to 2
versus a current version
of 3.8
since what I am demonstrating has existed for many (>5) years.
I have limited the service definitions to an image spec, environment variables, and dependencies. I have purposely not exposed any container ports at this time to avoid concurrent execution conflicts in the base file. I have also purposely left out any build information for the API image since that should have been built by an earlier module in the Maven dependencies. However, you will see a decoupled way to add port mappings and build information shortly when we get to the Docker Compose Override/Extend topic. For now — this is our core network definition.
version: '2'
services:
mongo:
image: mongo:4.4.0-bionic
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: secret
postgres:
image: postgres:12.3-alpine
environment:
POSTGRES_PASSWORD: secret
activemq:
image: rmohr/activemq:5.15.9
api:
image: dockercompose-votes-api:latest
depends_on: (1)
- mongo
- postgres
- activemq
environment:
- spring.profiles.active=integration
- MONGODB_URI=mongodb://admin:secret@mongo:27017/votes_db?authSource=admin
- DATABASE_URL=postgres://postgres:secret@postgres:5432/postgres
1 | defines a startup dependency as well as a resolvable hostname entry for the dependent service |
427.2. Start Network
We can start the network using the up
command. We can add a -d
option to make
all services run in the background. The runtime container names will have a project
prefix and that value defaults to the name of the parent directory. It can be overridden
using the -p
option.
$ docker-compose -p foo up -d
Creating foo_activemq_1 ... done
Creating foo_postgres_1 ... done
Creating foo_mongo_1 ... done
Creating foo_api_1 ... done
The following shows the runtime Docker image name and port numbers for the running images. They all start with the project prefix "foo". This is important when trying to manage multiple instances of the network. Notice too that none of the ports have been mapped to a host port at this time. However, they are available on the internally defined "foo" network (i.e., accessible from the API service).
$ docker ps (1)
IMAGE PORTS NAMES
dockercompose-votes-api:latest foo_api_1
postgres:12.3-alpine 5432/tcp foo_postgres_1
rmohr/activemq:5.15.9 1883/tcp, 5672/tcp, ... foo_activemq_1
mongo:4.4.0-bionic 27017/tcp foo_mongo_1
1 | no internal container ports are being mapped to localhost ports at this time |
427.3. Access Logs
You can access the logs of all running services or specific services running
in the background using the logs
command and by naming the services desired.
You can also limit the historical size with --tail
option
and follow the log with -f
option.
$ docker-compose -p foo logs --tail 2 -f mongo activemq
Attaching to foo_activemq_1, foo_mongo_1
mongo_1 | {"t":{"$date":"2020-08-15T14:10:20.757+00:00"},"s":"I", ...
mongo_1 | {"t":{"$date":"2020-08-15T14:11:41.580+00:00"},"s":"I", ...
activemq_1 | INFO | No Spring WebApplicationInitializer types detected ...
activemq_1 | INFO | jolokia-agent: Using policy access restrictor classpath:...
427.4. Execute Commands
You can execute commands inside a running container. The following shows an example
of running the Postgres CLI (psql
) against the postgres
container to issue a
SQL command against the VOTE
table. This can be very useful during
test debugging — where you can interactively inspect the state of the databases
during a breakpoint in the automated test.
$ docker-compose -p foo exec postgres psql -U postgres -c "select * from VOTE"
id | choice | date | source (1)
----+--------+------+--------
(0 rows)
1 | executing command that runs inside the running container |
427.5. Shutdown Network
We can shutdown the network using the down
command or <ctl>-C
if it was launched
in the foreground. The project name is required if it is different from the parent
directory name.
$ docker-compose -p foo down
Stopping foo_api_1      ... done
Stopping foo_activemq_1 ... done
Stopping foo_mongo_1    ... done
Stopping foo_postgres_1 ... done
Removing foo_api_1      ... done
Removing foo_activemq_1 ... done
Removing foo_mongo_1    ... done
Removing foo_postgres_1 ... done
Removing network foo_default
427.6. Override/Extend Docker Compose File
If CLI/shell access to the VMs is not enough, we can create an override file to specialize the base file. The following example maps key ports in each Docker container to a host port.
version: '2'
services:
mongo: (1)
ports:
- "27017:27017"
postgres:
ports:
- "5432:5432"
activemq:
ports:
- "61616:61616"
- "8161:8161"
api:
build: (2)
context: ../dockercompose-votes-svc
dockerfile: Dockerfile
ports:
- "${API_PORT}:8080"
1 | extending definitions of services from base file |
2 | adding source module info to be able to rebuild image from this module |
427.7. Using Mapped Host Ports
Mapping container ports to host ports is useful if you want to simply use Docker Compose to manage a development environment or you have a tool — like Mongo Compass — that requires a standard URL.
427.8. Supplying Properties
Properties can be passed into the image by naming the variable. The value is derived from one of the following (in priority order):
-
NAME: value
explicitly supplied in the Docker Compose File -
NAME=value
defined in environment variable -
NAME=value
defined in an environment file
The following are example environment files mapping API_PORT
to either 9999 or 9090. We can
activate an environment file using the --env-file
option or have it automatically applied
when named .env
.
$ cat alt-env (1)
API_PORT=9999
$ cat .env (2)
API_PORT=9090
1 | used when --env-file alt-env supplied |
2 | used by default |
427.9. Specifying an Override File
You can specify an override file by specifying multiple Docker Compose files
in priority order with the -f
option. The following will use docker-compose.yml
as a base and apply the augmentations from development.yml
.
$ docker-compose -p foo -f ./docker-compose.yml -f ./development.yml up -d
Creating network "foo_default" with the default driver
You can have the additional file applied automatically if named docker-compose.override.yml. The example below uses the docker-compose.yml file as the primary and the docker-compose.override.yml file as the override.
$ ls docker-compose*
docker-compose.override.yml docker-compose.yml
$ docker-compose -p foo up -d (1)
1 | using default Docker Compose file with default override file |
427.10. Override File Result
The following shows the new network configuration that shows the impact of the override file. Key communication ports of the back-end resources have been exposed on the localhost network.
$ docker ps (1) (2)
IMAGE PORTS NAMES
dockercompose-votes-api:latest 0.0.0.0:9090->8080/tcp foo_api_1
mongo:4.4.0-bionic 0.0.0.0:27017->27017/tcp foo_mongo_1
rmohr/activemq:5.15.9 1883/tcp, ... 0.0.0.0:61616->61616/tcp foo_activemq_1
postgres:12.3-alpine 0.0.0.0:5432->5432/tcp foo_postgres_1
1 | container ports are now mapped to (fixed) host ports |
2 | API host port used the variable defined in .env file |
Override files cannot reduce or eliminate collections
Override files can replace single elements but can only augment multi-valued
elements. That means one cannot eliminate exposed ports from a base configuration
file. Therefore it is best to keep settings that are only needed in some environments
out of the base file and add them in environment-specific override files instead.
|
428. Testcontainers and Spock
With an understanding of Docker Compose and a few Maven plugins — we could easily see how we could integrate our Docker images into an integration test using the Maven integration-test phases.
However, by using Testcontainers — we can integrate Docker Compose into our unit test framework much more seamlessly and launch tests in an ad-hoc manner right from within the IDE.
428.1. Source Tree
The following shows the structure of the example integration module.
We have already been working with the Docker Compose files at the
root level in the previous section. Those files can be placed within
the src
directories if not being used interactively for developer
commands — to keep the root less polluted.
This is an integration test-only module, so there will be no application
code in the src/main
tree. I took the opportunity to place common network
helper code in the src/main
tree to mimic what might be packaged up into
test module support JAR if we need this type of setup in multiple test modules.
The src/test
tree contains files that are specific to the
integration tests performed. I also went a step further and factored out a
base test class and then copied the initial ElectionCNSpec
test case to demonstrate reuse within a test case and shutdown/startup
in between test cases.
|-- alt-env
|-- docker-compose.override.yml
|-- docker-compose.yml
|-- pom.xml
`-- src
|-- main
| |-- java
| | `-- info
...
| | `-- votes
| | |-- ClientTestConfiguration.java
| | `-- VoterListener.java
| `-- resources
`-- test
|-- groovy
| `-- info
...
| `-- votes
| |-- VotesEnvironmentSpec.groovy
| |-- ElectionCNSpec.groovy
| |-- Election2CNSpec.groovy
| `-- Election3CNSpec.groovy
`-- resources
`-- application.properties
428.2. @SpringBootConfiguration
Configuration is being supplied to the tests by the ClientTestConfiguration
class.
The following shows some traditional @Value
property value injections that could have
also been supplied through a @ConfigurationProperties
class. We want these values
set to the assigned host information at runtime.
@SpringBootConfiguration()
@EnableAutoConfiguration
public class ClientTestConfiguration {
@Value("${it.server.host:localhost}")
private String host; (1)
@Value("${it.server.port:9090}")
private int port; (2)
...
1 | value is commonly localhost |
2 | value is dynamically generated at runtime |
428.3. Traditional @Bean Factories
The configuration class supplies a traditional set of @Bean
factories with base URLs
to the two services. We want the latter two URIs injected into our test.
So far so good.
//public class ClientTestConfiguration { ...
@Bean
public URI baseUrl() {
return UriComponentsBuilder.newInstance()
.scheme("http").host(host).port(port).build().toUri();
}
@Bean
public URI votesUrl(URI baseUrl) {
return UriComponentsBuilder.fromUri(baseUrl).path("api/votes")
.build().toUri();
}
@Bean
public URI electionsUrl(URI baseUrl) {
return UriComponentsBuilder.fromUri(baseUrl).path("api/elections")
.build().toUri();
}
@Bean
public RestTemplate anonymousUser(RestTemplateBuilder builder) {
RestTemplate restTemplate = builder.build();
return restTemplate;
}
428.4. DockerComposeContainer
In order to obtain the assigned port information required by the URI injections,
we first need to define our network container. The following shows a set of static
helper methods that locates the Docker Compose file, instantiates the
Docker Compose network container, assigns it a project name, and exposes
container port 8080
from the API to a random available host port.
During network startup, Testcontainers will also wait for network activity on that port before returning control back to the test.
public static File composeFile() {
File composeFile = new File("./docker-compose.yml"); (1)
Assertions.assertThat(composeFile.exists()).isTrue();
return composeFile;
}
public static DockerComposeContainer testEnvironment() {
DockerComposeContainer env =
new DockerComposeContainer("dockercompose-votes", composeFile())
.withExposedService("api", 8080);
return env;
}
1 | Testcontainers will fail if Docker Compose file reference does not include
an explicit parent directory (i.e., ./ is required) |
Mapped Volumes may require additional settings
Testcontainers automatically detects whether the test is being launched from
within or outside a Docker image (outside in this example). Some additional
tweaks to the Docker Compose file are required only if disk volumes are
being mapped. These tweaks are called forming a "wormhole"
to have Docker spawn sibling containers and share resources.
We are not using volumes and will not be covering the wormhole pattern here.
|
428.5. @SpringBootTest
The following shows an example @SpringBootTest
declaration. The test is
a pure client to the server-side and contains no service web tier. The configuration
is what I just showed you — primarily based on the URIs.
The test uses an optional @Stepwise
orchestration for tests in case there
is an issue sharing the dirty service state that a known sequence can solve.
This should also allow for a lengthy end-to-end scenario to be broken into
ordered steps along test method boundaries.
Here is also where the URIs are being injected — but we need our network started before we can derive the ports for the URIs.
@SpringBootTest(classes = [ClientTestConfiguration.class],
webEnvironment = SpringBootTest.WebEnvironment.NONE)
@Stepwise
@Slf4j
@DirtiesContext
abstract class VotesEnvironmentSpec extends Specification {
@Autowired
protected RestTemplate restTemplate
@Autowired
protected URI votesUrl
@Autowired
protected URI electionsUrl
def setup() {
log.info("votesUrl={}", votesUrl) (1)
log.info("electionsUrl={}", electionsUrl)
}
1 | URI injections — based on dynamic values — must occur before tests |
428.6. Spock Network Management
Testcontainers management within Spock is more manual than with JUnit — mostly because Spock
does not provide first-class framework support for static variables. No problem,
we can find many ways to get this to work. The following shows the network container
being placed in a @Shared
property and started/stopped at the Spec level.
@Shared (1)
protected DockerComposeContainer env = ClientTestConfiguration.testEnvironment()
def setupSpec() {
env.start() (2)
}
def cleanupSpec() {
env.stop() (3)
}
1 | network is instantiated and stored in a @Shared variable accessible to all tests |
2 | test case initialization starts the network |
3 | test case cleanup stops the network |
But what about the dynamically assigned port numbers? We have three ways that can be used to resolve them.
428.7. Set System Property
During setupSpec
, we can set System Properties to be used when forming the Spring Context
for each test.
def setupSpec() {
env.start() (1)
System.setProperty("it.server.port", ""+env.getServicePort("api", 8080));
}
1 | after starting network, dynamically assigned port number obtained and set as a System Property for individual test cases |
In hindsight, this looks like a very concise way to go. However, there were two other options available that might be of interest in case they solve other issues that arise elsewhere.
428.8. ApplicationContextInitializer
A more verbose and likely legacy Spring way of adding the port values is
through a Spring ApplicationContextInitializer
that can get added to the Spring
application context using the @ContextConfiguration
annotation and some static
constructs within the Spock test.
The network container gets initialized — like usual — except a reference to the
container gets assigned to a static variable where the running container can be
inspected for dynamic values during an initialize()
callback.
...
import org.springframework.context.ApplicationContextInitializer
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.boot.test.util.TestPropertyValues;
...
@SpringBootTest(...
@ContextConfiguration(initializers = Initializer.class) (4)
...
abstract class VotesEnvironmentSpec extends Specification {
private static DockerComposeContainer staticEnv (1)
static class Initializer (3)
implements ApplicationContextInitializer<ConfigurableApplicationContext> {
@Override
void initialize(ConfigurableApplicationContext ctx) {
TestPropertyValues values = TestPropertyValues.of(
"it.server.port=" + staticEnv.getServicePort("api", 8080))
values.applyTo(ctx)
}
}
@Shared
protected DockerComposeContainer env = ClientTestConfiguration.testEnvironment()
def setupSpec() {
staticEnv = env (2)
env.start()
...
1 | static variable declared to hold reference to singleton network |
2 | @Shared network assigned to static variable |
3 | Initializer class defined to obtain network information from network
and inject into test properties |
4 | Initializer class registered with Spring application context |
428.9. DynamicPropertySource
A similar, but more concise way to leverage the callback approach is to leverage the
newer Spring @DynamicPropertySource
construct. At a high level — nothing has changed
with the management of the network container. Spring simply eliminated the need to create
the boilerplate class, etc. when supplying properties dynamically.
import org.springframework.test.context.DynamicPropertyRegistry
import org.springframework.test.context.DynamicPropertySource
...
private static DockerComposeContainer staticEnv (1)
@DynamicPropertySource (3)
static void properties(DynamicPropertyRegistry registry) {
registry.add("it.server.port", ()->staticEnv.getServicePort("api", 8080));
}
@Shared
protected DockerComposeContainer env = ClientTestConfiguration.testEnvironment()
def setupSpec() {
staticEnv = env (2)
env.start()
}
1 | static variable declared to hold reference to singleton network |
2 | @Shared network assigned to static variable |
3 | @DynamicPropertySource defined on a static method to obtain
network information from network and inject into test properties |
428.10. Resulting Test Initialization Output
The following shows an example startup prior to executing the first test. You will see Testcontainers start Docker Compose in the background and then wait ~12 seconds for the API port 8080 to become active.
13:52:28.467 DEBUG π³ [docker-compose] - Set env COMPOSE_FILE=
.../dockercompose-votes-example/testcontainers-votes-spock-ntest/./docker-compose.yml
13:52:28.467 INFO π³ [docker-compose] - Local Docker Compose is running command: up -d
13:52:28.472 DEBUG org.testcontainers.shaded.org.zeroturnaround.exec.ProcessExecutor -
Executing [docker-compose, up, -d]
...
13:52:28.996 INFO π³ [docker-compose] - Creating network "dockercompose-votesdkakfi_default" with the default driver
INFO π³ [docker-compose] - Creating dockercompose-votesdkakfi_mongo_1 ...
INFO π³ [docker-compose] - Creating dockercompose-votesdkakfi_postgres_1 ...
INFO π³ [docker-compose] - Creating dockercompose-votesdkakfi_activemq_1 ...
INFO π³ [docker-compose] - Creating dockercompose-votesdkakfi_activemq_1 ... done
INFO π³ [docker-compose] - Creating dockercompose-votesdkakfi_mongo_1 ... done
INFO π³ [docker-compose] - Creating dockercompose-votesdkakfi_postgres_1 ... done
INFO π³ [docker-compose] - Creating dockercompose-votesdkakfi_api_1 ...
INFO π³ [docker-compose] - Creating dockercompose-votesdkakfi_api_1 ... done
13:52:30.803 DEBUG org.testcontainers.shaded.org.zeroturnaround.exec.WaitForProcess - Process...
13:52:30.804 INFO π³ [docker-compose] - Docker Compose has finished running
... (waiting for containers to start)
13:52:45.100 DEBUG org.springframework.test.context.support.DependencyInjectionTestExecutionListener -
:: Spring Boot :: (v2.3.2.RELEASE)
...
At this point, we are ready to use normal restTemplate or WebClient calls to
test our interface to the overall application.
13:52:48.031 VotesEnvironmentSpec votesUrl=http://localhost:32838/api/votes
13:52:48.032 VotesEnvironmentSpec electionsUrl=http://localhost:32838/api/elections
429. Additional Waiting
Testcontainers will wait for the exposed port to become active. We can add additional wait tests to be sure the network is in a ready state to be tested. The following adds a check for the two URLs to return a successful response.
def setup() {
/**
* wait for various events relative to our containers
*/
env.waitingFor("api", Wait.forHttp(votesUrl.toString())) (1)
env.waitingFor("api", Wait.forHttp(electionsUrl.toString()))
1 | test setup holding up start of test for two API URL calls to be successful |
430. Executing Commands
If useful, we can also invoke commands within the running network containers at points in the test. The following shows a CLI command invoked against each database container that will output the current state at this point in the test.
/**
* run sample commands directly against containers
*/
ContainerState mongo = (ContainerState) env.getContainerByServiceName("mongo_1")
.orElseThrow()
ExecResult result = mongo.execInContainer("mongo", (1)
"-u", "admin", "-p", "secret", "--authenticationDatabase", "admin",
"--eval", "db.getSiblingDB('votes_db').votes.find()");
log.info("voter votes = {}", result.getStdout()) (2)
ContainerState postgres = (ContainerState) env.getContainerByServiceName("postgres_1")
.orElseThrow()
result = postgres.execInContainer("psql",
"-U", "postgres",
"-c", "select * from vote");
log.info("election votes = {}", result.getStdout())
1 | executing shell command inside running container in network |
2 | obtaining results in stdout |
430.1. Example Command Output
The following shows the standard output obtained from the two containers after running the CLI query commands.
14:32:15.075 ElectionCNSpec#setup:67 voter votes = MongoDB shell version v4.4.0
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("a824b7b8-634a-426b-8d21-24c5680864f6") }
MongoDB server version: 4.4.0
{ "_id" : ObjectId("5f382a2c62cb0d4f36d96cfa"),
"date" : ISODate("2020-08-15T18:32:12.706Z"),
"source" : "684c586f...",
"choice" : "quisp-82...",
"_class" : "info.ejava.examples.svc.docker.votes.dto.VoteDTO" }
{ "_id" : ObjectId("5f382a2d62cb0d4f36d96cfb"),
"date" : ISODate("2020-08-15T18:32:13.511Z"),
"source" : "df3a973a...",
"choice" : "quake-5e...",
"_class" : "info.ejava.examples.svc.docker.votes.dto.VoteDTO" }
...
14:32:15.263 main INFO i.e.e.svc.docker.votes.ElectionCNSpec#setup:73 election votes =
id | choice | date | source
-------------------------+-------------+-------------------------+------------
5f382a2c62cb0d4f36d96cfa | quisp-82... | 2020-08-15 18:32:12.706 | 684c586f...
5f382a2d62cb0d4f36d96cfb | quake-5e... | 2020-08-15 18:32:13.511 | df3a973a...
...
(6 rows)
431. Client Connections
Although it is an interesting and potentially useful feature to be able to execute a random shell command against a running container under test — it can be very clumsy to interpret the output when there is another way. We can — instead — establish a resource client to any of the services we need additional state from.
The following will show adding resource client capabilities that were originally added to the API server. If necessary, we can use this low-level access to trigger specific test conditions or evaluate something performed.
431.1. Maven Dependencies
The following familiar Maven dependencies can be added to the pom.xml to add the resources necessary to establish a client connection to each of the three back-end resources.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-activemq</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
</dependency>
431.2. Hard Coded Application Properties
We can simply add the following hard-coded resource properties to a property file since this is static information necessary to complete the connections.
#activemq
spring.jms.pub-sub-domain=true
#postgres
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.username=postgres
spring.datasource.password=secret
However, we will still need the following properties, which consist of dynamically assigned values.
spring.data.mongodb.uri
spring.activemq.broker-url
spring.datasource.url
431.3. Dynamic URL Helper Methods
The following helper methods are used to form a valid URL String once the hostname and port number are known.
public static String mongoUrl(String host, int port) {
return String.format("mongodb://admin:secret@%s:%d/votes_db?authSource=admin", host, port);
}
public static String jmsUrl(String host, int port) {
return String.format("tcp://%s:%s", host, port);
}
public static String jdbcUrl(String host, int port) {
return String.format("jdbc:postgresql://%s:%d/postgres", host, port);
431.4. Adding Dynamic Properties
The hostname and port number(s) can be obtained from the running network
and supplied to the Spring context using one of the three techniques shown
earlier (System.setProperty
, ConfigurableApplicationContext
, or DynamicPropertyRegistry
).
The following shows the DynamicPropertyRegistry
technique.
public static void initProperties( (1)
DynamicPropertyRegistry registry, DockerComposeContainer env) {
registry.add("it.server.port", ()->env.getServicePort("api", 8080));
registry.add("spring.data.mongodb.uri",()-> mongoUrl(
env.getServiceHost("mongo", null),
env.getServicePort("mongo", 27017)
));
registry.add("spring.activemq.broker-url", ()->jmsUrl(
env.getServiceHost("activemq", null),
env.getServicePort("activemq", 61616)
));
registry.add("spring.datasource.url",()->jdbcUrl(
env.getServiceHost("postgres", null),
env.getServicePort("postgres", 5432)
));
}
1 | helper method called from @DynamicPropertySource callback in unit test |
431.5. Adding JMS Listener
We can add a class to subscribe and listen to the votes
topic
by declaring a @Component
with a method accepting a JMS TextMessage
and annotated
with @JmsListener
. The following example just prints debug messages of the
events and counts the number of messages received.
...
import org.springframework.jms.annotation.JmsListener;
import javax.jms.TextMessage;
@Component
@Slf4j
public class VoterListener {
@Getter
private AtomicInteger msgCount=new AtomicInteger(0);
@JmsListener(destination = "votes")
public void receive(TextMessage msg) throws JMSException {
log.info("jmsMsg={}, {}", msgCount.incrementAndGet(), msg.getText());
}
}
We also need to add the JMS Listener @Component
to the Spring application context
using the @SpringBootTest.classes
property
@SpringBootTest(classes = [ClientTestConfiguration.class, VoterListener.class],
431.6. Injecting Resource Clients
The following shows injections for the resource clients. I have already
shown the details behind the VoterListener. That is ultimately supported
by the JMS AutoConfiguration and the spring.activemq.broker-url
property.
The MongoClient
and JdbcTemplate
are directly provided by the Mongo and JPA
AutoConfiguration and the spring.data.mongodb.uri
and spring.datasource.url
properties.
@Autowired
protected MongoClient mongoClient
@Autowired
protected VoterListener listener
@Autowired
protected JdbcTemplate jdbcTemplate
431.7. Resource Client Calls
The following shows an example set of calls that simply obtain document/message/row counts. However, with that capability demonstrated, much more is easily possible.
/**
 * connect directly to the exposed port# of the images to obtain sample status
 */
log.info("mongo client vote count={}",
    mongoClient.getDatabase("votes_db").getCollection("votes").countDocuments())
log.info("activemq msg={}", listener.getMsgCount().get())
log.info("postgres client vote count={}",
    jdbcTemplate.queryForObject("select count(*) from vote", Long.class))
The following shows the output from the example resource client calls.
ElectionCNSpec#setup:54 mongo client vote count=18
ElectionCNSpec#setup:55 activemq msg=18
ElectionCNSpec#setup:57 postgres client vote count=18
432. Test Hierarchy
Much of what I have covered can easily go into a helper class or test base class, and potentially into a separate test dependency library if the amount of integration testing significantly increases and must be broken out.
432.1. Network Helper Class
The following summarizes the helper class that can encapsulate the integration between Testcontainers and Docker Compose. This class is not tied to any one test framework.
public class ClientTestConfiguration { (1)
public static File composeFile() { ...
public static DockerComposeContainer testEnvironment() { ...
public static void initProperties(DynamicPropertyRegistry registry, DockerComposeContainer env) { ...
public static void initProperties(DockerComposeContainer env) { ...
public static void initProperties(ConfigurableApplicationContext ctx, DockerComposeContainer env) { ...
public static String mongoUrl(String host, int port) { ...
public static String jmsUrl(String host, int port) { ...
public static String jdbcUrl(String host, int port) { ...
1 | Helper class can encapsulate details of network without ties to actual test framework |
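For context, a minimal sketch of the two elided factory methods might look like the following. The compose file path is an assumption, and the service names and ports mirror the initProperties() example shown earlier.
import java.io.File;
import org.testcontainers.containers.DockerComposeContainer;

public static File composeFile() {
    return new File("src/test/resources/docker-compose.yml"); // path is an assumption
}

public static DockerComposeContainer testEnvironment() {
    // expose the same service ports referenced by initProperties()
    return new DockerComposeContainer(composeFile())
            .withExposedService("api", 8080)
            .withExposedService("mongo", 27017)
            .withExposedService("activemq", 61616)
            .withExposedService("postgres", 5432);
}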
432.2. Integration Spec Base Class
The following summarizes the base class that encapsulates starting/stopping the network and any helper methods used by tests. This class is specific to operating tests within Spock.
abstract class VotesEnvironmentSpec extends Specification { (1)
def setupSpec() {
configureEnv(env)
...
void configureEnv(DockerComposeContainer env) {} (2)
def cleanupSpec() { ...
def setup() { ...
public ElectionResultsDTO wait_for_results(Instant resultTime) { ...
public ElectionResultsDTO get_election_counts() { ...
1 | test base class integrates helper methods in with test framework |
2 | extra environment setup call added to allow a subclass to configure the network before it is started |
432.3. Specialized Integration Test Classes
The specific test cases can inherit all of the setup and focus on their individual tests. Note that the example I provided uses the same running network within a test case class (i.e., all test methods in a test class share the same network state). Separate test cases use fresh network state (i.e., the network is shut down, removed, and restarted between test classes).
class ElectionCNSpec extends VotesEnvironmentSpec { (1)
@Override
def void configureEnv(DockerComposeContainer dc) { ...
def cleanup() { ...
def setup() { ...
def "vote counted in election"() { ...
def "test 2"() { ...
def "test 3"() { ...
1 | concrete test cases provide specific tests and extra configuration, setup, and cleanup specific to the tests |
class Election2CNSpec extends VotesEnvironmentSpec {
def "vote counted in election"() { ...
def "test 2"() { ...
def "test 3"() { ...
432.4. Test Execution Results
The following image shows the completion results of the integration tests. One thing to note with Spock is that it only seems to attribute time to a test's setup/execution/cleanup and not to the test class's setupSpec and cleanupSpec. ActiveMQ is very slow to shut down, and there is easily 10-20 seconds between test cases that is not depicted in the timing results.
433. Summary
This lecture summarized how Docker Compose and Testcontainers can be integrated into Spock to implement unit integration tests. The net result is a seamless test environment that can verify that a network of components (each further covered by unit tests) integrates to successfully satisfy one or more end-to-end scenarios. For example, it was not until integration testing that I realized my JMS communication was using a queue versus a topic.
In this module we learned:
-
to identify the capability of Docker Compose to define and implement a network of virtualized services running in Docker
-
to identify the capability of Testcontainers to seamlessly integrate Docker and Docker Compose into unit test frameworks including Spock
-
to author end-to-end, unit integration tests using Spock, Testcontainers, Docker Compose, and Docker
-
to implement inspections of running Docker images
-
to implement inspections of virtualized services during tests
-
to instantiate virtualized services for use in development
-
to implement a hierarchy of test classes to promote reuse