Laboratory Training 2
Advanced Features of Working with Files
1 Training Tasks
1.1 Individual Task
Create a new Maven project and transfer to this project the previously created classes that represent entities of the individual tasks of laboratory trainings # 2 and # 3 of the "Fundamentals of Java Programming" course. Create derived classes that override the implementation of all methods related to processing sequences through the use of Stream API tools. If the class representing the second entity has no sequence processing, the base class can be used.
The program must demonstrate:
- reproduction of the implementation of individual tasks of laboratory trainings # 2 and # 3 (without using the Set interface) of the course "Fundamentals of Java Programming";
- using Stream API tools for processing and outputting sequences;
- outputting data to a text file using Stream API with subsequent reading;
- serialization of objects into an XML file and a JSON file and corresponding deserialization using the XStream library;
- recording events related to program execution in the system log;
- testing individual classes using JUnit.
Note: localization and translation of texts can be carried out at the wish of the student.
1.2 List of Files of all Subdirectories
Enter the name of a specific folder. Display the names of all files in this directory, as well as all files in subdirectories, their subdirectories, and so on. Implement two approaches:
- search by means of the java.io.File class using a recursive function;
- search by means of the java.nio.file package.
Display both results sequentially on the screen. If the folder does not exist, display an error message.
1.3 Search for all Divisors
Using the Stream API tools, implement a search for all divisors of a positive integer. Create a separate static function that accepts an integer and returns an array of integers. Inside the function, create a stream of type IntStream. Apply the range() function and a filter. Do not use explicit loops. Provide class testing using JUnit.
1.4 Working with Text Files using the Stream API
Use the Files.lines() function to read strings from a text file, sort them by increasing length, and output the strings that contain the letter "a" to another file.
1.5 Creating Files and Reading Data About the Student and the Academic Group from Files
Create the classes "Student" and "Academic group" (with array of students as the field). Create objects. Provide file creation and reading from files using the following approaches:
- using Stream API tools for working with text files;
- serialization and deserialization in XML and JSON (by means of XStream).
1.6 Working with the org.json Library (Additional Task)
Complete task 1.5 using tools for working with JSON files of the org.json library.
2 Instructions
2.1 Testing in Java. Using JUnit
2.1.1 Overview
Testing is one of the most important components of the software development process. Software testing is performed in order to obtain information about the quality of the software product. There are many approaches and techniques for testing and verifying software.
The paradigm of test-driven development (development through testing) defines a software development technique based on the use of tests to drive the writing of code and to verify it. Code development is reduced to repeating the test-code-test cycle with subsequent refactoring.
The level of testing at which the smallest testable component, such as a single class or function, is verified is called unit testing. The corresponding testing technology assumes that tests are developed in advance, before writing the real code, and the development of the code of a unit (class) is completed when its code passes all the tests.
2.1.2 Java Tools for Diagnosing Runtime Errors
Many modern programming languages, including Java, include assertions in their syntax. The assert keyword has been available in Java since JDK 1.4 (Java 2). Assertion checking can be turned on or off. If assertion checking is enabled, assert works as follows: an expression of type boolean is evaluated and, if the result is true, the program continues; otherwise a java.lang.AssertionError exception is thrown. Suppose that, according to the logic of the program, the variable c must always be positive. Execution of the following fragment will not lead to any consequences (exceptions, emergency termination of the program, etc.):
int a = 10; int b = 1; int c = a - b; assert c > 0;
If, due to an incorrect implementation of the algorithm, the variable c nevertheless receives a negative value, the execution of such a fragment leads to the throwing of an exception and an abnormal termination of the program, if this exception is not handled:
int a = 10; int b = 11; int c = a - b; assert c > 0; // exception is thrown
After the asserted expression, you can put a colon followed by a message string. Example:
int a = 10; int b = 11; int c = a - b; assert c > 0 : "c cannot be negative";
In this case, the corresponding string is the exception message string.
Assert execution is usually disabled in integrated development environments. To enable assert execution in the IntelliJ IDEA environment, use the Run | Edit Configurations menu function. In the Run/Debug Configurations window, enter -ea in the VM Options input line.
In these examples, the values checked with assert are not entered from the keyboard, but are defined in the program, to demonstrate the correct use of assert: searching for logical errors, rather than checking the correctness of user input. Exceptions, conditional statements, etc. should be used to verify the correctness of entered data; using assertions for input validation is not allowed, because in the future the program will be started without the -ea option and all assertions will be ignored. The expression specified in the assertion should not include actions that are important in terms of program functionality. For example, if an assertion check is the only place in the program from which a very important function is called,
public static void main(String[] args) {
    //...
    assert f() : "failed";
    //...
}

public static boolean f() {
    // Very important calculations
    return true;
}
then after disabling assertions the function will not be called at all.
2.1.3 Basics of Using JUnit
In contrast to the use of diagnostic assertions, which test algorithms "from the inside", unit testing verifies a particular unit as a whole, testing its functionality "from the outside".
The most common unit testing tool for Java software is JUnit, an open-source unit testing library. JUnit allows you to:
- create tests for individual classes;
- create test suites;
- create a series of tests on repeating sets of objects.
Currently, JUnit 5 is the relevant version, but JUnit 4 is still very widespread.
To create a test, you need to create the class to be tested, as well as a public class for testing with a set of methods that implement specific tests. Each test method must be public, void, and have no parameters. The method must be marked with the @Test annotation:
public class MyTestCase {
    ...
    @Test
    public void testXXX() {
        ...
    }
    ...
}
Note: to use @Test and other similar annotations, the import statement import org.junit.jupiter.api.*; (for JUnit 5) or import org.junit.*; (for JUnit 4) should be added.
Within such methods, you can use the following assertion methods:
assertTrue(expression);                   // Fails the test if false
assertFalse(expression);                  // Fails the test if true
assertEquals(expected, actual);           // Fails the test if not equivalent
assertNotNull(new MyObject(params));      // Fails the test if null
assertNull(new MyObject(params));         // Fails the test if not null
assertNotSame(expression1, expression2);  // Fails the test if both references refer to the same object
assertSame(expression1, expression2);     // Fails the test if the objects are different
fail(message);                            // Immediately terminates the test with a failure message
Here MyObject is the class being tested. These methods of the Assertions class (the Assert class in JUnit 4) are accessed using a static import: import static org.junit.jupiter.api.Assertions.*; (for JUnit 5) or import static org.junit.Assert.*; (for JUnit 4). These methods are also overloaded with an additional message parameter of type String, which specifies the message that will be displayed if the test fails.
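For example, a minimal sketch of the overload with a message (in JUnit 5 the message is the last argument; in JUnit 4 it comes first):
assertEquals(4, 2 + 2, "the sum is expected to be 4"); // the message is shown only if the test fails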
IntelliJ IDEA provides built-in JUnit support. Suppose a new project has been created. The project contains a class with two functions (static and non-static) that should be tested:
package ua.inf.iwanoff.java.advanced.second;

public class MathFuncs {
    public static int sum(int a, int b) {
        return a + b;
    }

    public int mult(int a, int b) {
        return a * b;
    }
}
Within the project, we can manually create a folder, for example, tests. Next, we should set Mark Directory as | Test Sources Root with the context menu.
Returning to the MathFuncs class, we select it in the code editor and generate tests through the context menu: Generate... | Test.... In the dialog that opens, we select the version of the JUnit library; the desired option is JUnit5. We can also correct the suggested class name, MathFuncsTest, although in most cases this is not needed. Then we select the names of the methods to be tested, in our case sum() and mult(). The following code will be generated:
package ua.inf.iwanoff.java.advanced.second;

import static org.junit.jupiter.api.Assertions.*;

class MathFuncsTest {
    @org.junit.jupiter.api.Test
    void sum() {
    }

    @org.junit.jupiter.api.Test
    void mult() {
    }
}
IntelliJ IDEA indicates errors in this code (Cannot resolve symbol 'junit'). By clicking Alt+Enter, we get a hint: Add 'JUnit 5.7.0' to classpath. Taking advantage of this prompt, we add the relevant library and get the code without errors.
We can optimize the code by adding imports. We add tests of the MathFuncs class methods into the MathFuncsTest methods. To test mult(), we need to create an object:
package ua.inf.iwanoff.java.advanced.second;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.*;

class MathFuncsTest {
    @Test
    void sum() {
        assertEquals(MathFuncs.sum(4, 5), 9);
    }

    @Test
    void mult() {
        assertEquals(new MathFuncs().mult(3, 4), 12);
    }
}
You can run the tests through the Run menu. Normal completion of the process indicates that there were no errors during verification. If you add code that distorts the computation in the MathFuncs class, for example
public int mult(int a, int b) { return a * b + 1; }
running the tests will result in an AssertionFailedError message. You can see how many tests passed and how many failed.
If some actions need to be taken before running the test methods, for example, to initialize the values of variables, such initialization is done in a separate static method preceded by the @BeforeAll annotation (@BeforeClass in JUnit 4):
@BeforeAll public static void setup(){ ... }
Similarly, methods that perform the actions needed after testing are preceded by the @AfterAll annotation (@AfterClass in JUnit 4). These methods must be public static void.
In our example, we can create an object in advance, as well as add messages after the tests are completed:
package ua.inf.iwanoff.java.advanced.second;

import org.junit.jupiter.api.*;

import static org.junit.jupiter.api.Assertions.*;

class MathFuncsTest {
    private static MathFuncs funcs;

    @BeforeAll
    public static void init() {
        funcs = new MathFuncs();
    }

    @Test
    void sum() {
        assertEquals(MathFuncs.sum(4, 5), 9);
    }

    @Test
    void mult() {
        assertEquals(funcs.mult(3, 4), 12);
    }

    @AfterAll
    public static void done() {
        System.out.println("Tests finished");
    }
}
The @BeforeEach annotation (@Before in JUnit 4) indicates that the method is called before each test method. Accordingly, @AfterEach (@After in JUnit 4) indicates that the method is called after each test method. Methods marked by these annotations should not be static.
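A minimal sketch of these annotations (the Counter class and its methods are hypothetical and used here only for illustration):
import org.junit.jupiter.api.*;

import static org.junit.jupiter.api.Assertions.*;

class CounterTest {
    private Counter counter; // hypothetical class under test

    @BeforeEach
    void createCounter() {
        counter = new Counter(); // a fresh object before every test
    }

    @AfterEach
    void afterTest() {
        System.out.println("Test finished"); // runs after every test
    }

    @Test
    void increment() {
        counter.increment();
        assertEquals(1, counter.getValue());
    }
}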
You can also test methods that return void. Calling such a method involves performing an action (for example, creating a file, changing the value of a field, etc.). It is necessary to check whether such an action took place. For example:
void setValue(int value) {
    this.value = value;
}

...

public void testSetValue() {
    someObject.setValue(123);
    assertEquals(123, someObject.getValue());
}
However, as a rule, testing the simplest access methods (setters and getters) is excessive and is not recommended.
2.2 Use of Build Automation Tools
2.2.1 Overview
Build automation provides for use of special tools for automatic tracking of dependencies between files within the project. Build automation also involves performing typical actions such as, for example,
- source code compiling;
- programs assembly from individual parts;
- preparation of documentation;
- creation of a JAR archive;
- deployment of the program.
Integrated development environments (IDEs) most often take on project build management themselves. However, these tools are usually limited and not compatible across different IDEs. Sometimes the need to transfer a project to another IDE arises. In addition, it would be convenient to describe and fix a sequence of actions over the project artifacts while performing a typical set of tasks of the development process.
An alternative is offered by independent build automation tools. The most popular build tools are Apache Ant, Gradle and Apache Maven. Apache Ant is a Java-based set of tools for automating the software build process, compatible with different platforms. It is an Apache Software Foundation project. The build process is controlled by an XML scenario, the so-called build file (build.xml by default), which follows certain rules.
Actions that can be performed with Ant are described by targets. Targets may depend on each other: if one target must be performed before a certain goal can be achieved, you can make one target depend on another. Targets contain invocations of tasks. Every task is a command that performs some elementary action. There are several predefined tasks designed to describe typical actions: compiling using javac, running a program, creating a JAR, deployment, etc.
The set of Ant tasks can be extended. Ant tasks also include work with the file system (creating directories, copying and deleting files), documentation generation, etc.
Today, Ant has become less popular compared with Gradle and Maven because of its limitations. In addition, compared to Maven, Ant offers an imperative (command) approach to describing a project: the developer must describe the sequence of actions performed during the build rather than the expected result.
The Gradle build automation tool first appeared in 2007 under the Apache License 2.0. In September 2012, a stable release was issued. Gradle uses concepts of Apache Ant and Apache Maven, but instead of XML it uses a language built on the Groovy syntax. Gradle is mainly used in Android development. Gradle is available as a separate download, but can also be found bundled in products such as Android Studio.
2.2.2 Apache Maven
Apache Maven is a build automation tool that uses XML syntax to specify the build options, but compared with Ant it provides a higher level of automation. Maven has been created and published by the Apache Software Foundation since 2004. To determine the build options, a POM (Project Object Model) is used. Unlike Apache Ant, Maven provides a declarative rather than imperative description of a project: the pom.xml project file contains a declarative description of what we want to get, not separate commands.
Like Ant, Maven allows you to start the compiling processes, the creation of JAR files, documentation generation, etc.
The most important function of Maven is the management of dependencies that are present in projects using third-party libraries (which, in turn, use other third-party libraries). Maven also allows you to resolve library version conflicts.
Maven is based on a plugin architecture that allows you to apply plugins for various tasks (compile, test, build, deploy, checkstyle, pmd, scp-transfer) without having to install them in a specific project. There are a large number of plugins developed for different purposes.
Information for a Maven-supported project is contained in the pom.xml file, which specifies the dependencies of the Maven-controlled package on other packages and libraries.
IntelliJ IDEA provides support for Maven projects. To create a new project with Maven support, select Maven on the left side of the New Project window. The JDK version (Project SDK) is determined for the project; suppose it is JDK 11. In addition, the project can be created based on an archetype. For the first Maven project you can do without archetypes.
On the next page of the wizard, we select the Name (for example, HelloMaven), Location and Artifact Coordinates, which include the so-called GAV (groupId, artifactId, version).
- groupId: a reference to the author or organization (subdivision) where the project has been created; the identifier is built according to the rules for constructing package names: an inverted domain name;
- artifactId: the project name; it does not necessarily have to coincide with the name of the IntelliJ IDEA project, but using the same name in this context is desirable; when creating a project, this field is automatically filled in with the project name;
- version: the project version; by default 1.0-SNAPSHOT is set, that is, this is the first version of the project and it is still under development.
For our first project, IntelliJ IDEA automatically creates a pom.xml file with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>ua.inf.iwanoff.java.advanced.second</groupId>
    <artifactId>HelloMaven</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>
</project>
The properties section specifies the JDK version (JDK 11).
Let's take a look at the project structure. This is a typical Maven project structure. At the project level, the src folder is created with the following structure:
src
    main
        java
        resources
    test
        java
The src directory is the root directory of the source code and the code of the test classes. The main directory is the root directory for source code that relates directly to the application (without tests). The test directory contains the source code of the test classes. Packages of the source code are placed in subdirectories of the java directory.
The resources directory contains other resources required for the project. These can be properties used to internationalize programs, GUI markup files, styles, and so on.
After compiling the project, a target directory with compiled classes will be added to the project structure.
The Maven tool window, whose shortcut is usually located on the right, contains a list of Maven commands that provide the life cycle of the project:
- clean – project cleaning and deleting all files created by the previous build;
- validate – checking the correctness of meta-information about the project;
- compile – compiling the project;
- test – testing with JUnit;
- package – creation of a jar, war or ear archive;
- verify – verifying the correctness of the package and compliance with quality requirements;
- install – installation (copying) of the .jar, .war or .ear file into the local repository;
- site – site generation;
- deploy – project publication in a remote repository.
Note: if you use Maven outside of IntelliJ IDEA, these commands are entered at a command prompt, for example: mvn clean; to use Maven without IntelliJ IDEA, it should be downloaded and installed separately.
Some commands require the execution of previous lifecycle commands to complete successfully. For example, package involves performing compile. Previous commands are executed automatically. Execution of commands is accompanied by the output of typical Maven messages in a console window.
In the java folder we can find the corresponding package and a class with the main() method:
package ua.inf.iwanoff.java.advanced.second;

public class Main {
    public static void main(String[] args) {
        System.out.println("Hello world!");
    }
}
Class code can be changed:
package ua.inf.iwanoff.java.advanced.second;

public class Main {
    public static int multiply(int i, int k) {
        return i * k;
    }

    public static void main(String[] args) {
        System.out.println("Hello, Maven!");
        System.out.println("2 * 2 = " + multiply(2, 2));
    }
}
Among Maven commands there is no direct execution of the program. In order to execute the program, IntelliJ IDEA Run menu functions should be used. But the necessary Maven commands that cover certain phases of the lifecycle are automatically performed.
Note: a set of standard Maven commands can be extended with the plugin mechanism.
A very important function of Maven is dependency management. Usually a real project uses numerous libraries, and to connect them the corresponding JAR files must be downloaded. These libraries are based on the use of other libraries, which also need to be loaded. A separate problem arises with library versions and their compatibility.
Maven provides a simple declarative approach to dependency management. It is enough to add information about the required library in the <dependencies> section. For example, to test our project, it is advisable to use JUnit 5. You can, of course, add the necessary dependency manually, but it is better to use IntelliJ IDEA's interactive capabilities. With the pom.xml file window selected, you can press Alt+Insert, then in the Generate list select Dependency, then in the Search For Artifact dialog box type junit and select org.junit.jupiter:junit-jupiter-api:5.8.1. The necessary <dependencies> group will be added:
<dependencies>
    <dependency>
        <groupId>org.junit.jupiter</groupId>
        <artifactId>junit-jupiter-api</artifactId>
        <version>5.8.1</version>
    </dependency>
</dependencies>
At first, this code is marked as erroneous. You need to reload the Maven project: in the Maven tool window, find and press the first button (Reload All Maven Projects). The errors in pom.xml should disappear.
Now you can add a test. Use the Generate... | Test... context menu function. A parallel hierarchy of packages, as well as the necessary class, will be added to the test branch of the project tree.
2.3 Working with the File System
2.3.1 Overview
Java allows you to work not only with files, but also with the file system as a whole. The file system is a set of principles and mechanisms used by the operating system for storing information in the form of files on information media. This term is also used to denote the set of files and directories (folders) that are placed on a logical or physical device.
The typical functions of working with the file system are:
- checking the existence of a file or directory;
- getting a list of files and subdirectories of a specified directory;
- creating files and links to files;
- copying files;
- renaming and moving files;
- managing file attributes;
- deleting files;
- traversing the tree of subdirectories;
- tracking file changes.
Java offers two approaches to working with the file system:
- use of the java.io.File class;
- use of the java.nio.file package.
2.3.2 Using File Class
The java.io package provides the ability to work both with file contents and with the file system as a whole. This functionality is implemented by the File class. To create an object of this class, the (full or relative) path to the file should be passed as a parameter of the constructor. For example:
File dir = new File("C:\\Users"); File currentDir = new File("."); // Project folder (current)
The File class contains methods for obtaining a list of files of the specified folder (list(), listFiles()), obtaining and modifying file attributes (setLastModified(), setReadOnly(), isHidden(), isDirectory(), etc.), creating a new file (createNewFile(), createTempFile()), creating directories (mkdir()), deleting files and folders (delete()) and many more. The work of some of these methods can be demonstrated by the following example:
package ua.inf.iwanoff.java.advanced.second;

import java.io.*;
import java.util.*;

public class FileTest {
    public static void main(String[] args) throws IOException {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter the name of the folder you want to create:");
        String dirName = scanner.next();
        File dir = new File(dirName);
        // Create a new folder:
        if (!dir.mkdir()) {
            System.out.println("Unable to create a folder!");
            return;
        }
        // Create a new file inside a new folder:
        File file = new File(dir + "\\temp.txt");
        file.createNewFile();
        // Display a list of files in the folder:
        System.out.println(Arrays.asList(dir.list()));
        file.delete(); // delete the file
        dir.delete();  // delete the folder
    }
}
The list() function without parameters allows you to obtain an array of strings that contains all files and subdirectories of the folder defined when creating the object of type File. You can see relative filenames (without paths). In the following example, we get a list of files and subdirectories of the folder whose name is entered from the keyboard:
package ua.inf.iwanoff.java.advanced.second;

import java.io.File;
import java.io.FilenameFilter;
import java.util.Scanner;

public class ListOfFiles {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter folder name:");
        String dirName = scanner.next();
        File dir = new File(dirName);
        if (!dir.isDirectory()) {
            System.out.println("Invalid folder name!");
            return;
        }
        String[] list = dir.list();
        for (String name : list) {
            System.out.println(name);
        }
    }
}
Unlike list(), the listFiles() function returns an array of objects of type File. This provides additional features: getting full file names, checking file attributes, working with folders, etc. These additional features are shown in the following example:
File[] list = dir.listFiles();
// Output file data in the default form:
for (File file : list) {
    System.out.println(file);
}
// The full path is displayed:
for (File file : list) {
    System.out.println(file.getCanonicalPath());
}
// Only subdirectories are displayed:
for (File file : list) {
    if (file.isDirectory())
        System.out.println(file.getCanonicalPath());
}
To define a filter mask, you should create an object of a class that implements the FilenameFilter interface. In the following example, we get a list of files and subdirectories whose names begin with the letter 's':
String[] list = dir.list(new FilenameFilter() {
    @Override
    public boolean accept(File dir, String name) {
        return name.toLowerCase().charAt(0) == 's';
    }
});
for (String name : list) {
    System.out.println(name);
}
A similar parameter of type FilenameFilter can be applied to the listFiles() function.
2.3.3 Working with java.nio Package
The java.nio package, which appeared in JDK 1.4, originally included alternative I/O tools. Compared to traditional I/O streams, java.nio provides higher efficiency of I/O operations. This is achieved due to the fact that traditional I/O tools work with data in streams, while java.nio works with data in blocks. The central objects in java.nio are Channel and Buffer. Channels are similar to streams in the java.io package. A buffer is a container object. All data that is transmitted to the channel must first be placed in the buffer. Any data that is read from the channel is read into the buffer. The java.nio means are effective when working with binary files.
Java 7 provides an alternative approach to working with the file system: a set of classes described in the java.nio.file package. The java.nio.file package provides the Path type to represent a path in the file system. Separate components of this path can be represented by a certain collection of intermediate subdirectories and the name of the file (subdirectory) itself. You can get a Path object using the static get() method of the Paths class. The get() method receives the path string:
Path path = Paths.get("c:/Users/Public");
Now you can get information about the path:
System.out.println(path.toString());     // c:\Users\Public
System.out.println(path.getFileName());  // Public
System.out.println(path.getName(0));     // Users
System.out.println(path.getNameCount()); // 2
System.out.println(path.subpath(0, 2));  // Users\Public
System.out.println(path.getParent());    // c:\Users
System.out.println(path.getRoot());      // c:\
After the Path object is created, it can be used as an argument of static functions of the java.nio.file.Files class. To check the presence (or absence) of a file, the exists() and notExists() functions are used respectively:
Path dir = Paths.get("c:/Windows");
System.out.println(Files.exists(dir));    // most likely true
System.out.println(Files.notExists(dir)); // most likely false
The presence of two separate functions is associated with the possibility of obtaining an indefinite result (for example, when access to the file is prohibited).
To make sure that the program can get the necessary access to the file, you can use the isReadable(Path), isWritable(Path) and isExecutable(Path) methods.
Suppose a Path object is created and the path to the file is set. The following code fragment checks whether a specific regular file exists and whether it can be read and executed:
boolean isRegularExecutableFile = Files.isRegularFile(file) & Files.isReadable(file) & Files.isExecutable(file);
To obtain metadata (data on files and directories), the Files class provides a number of static methods:
Methods | Explanation
---|---
size(Path) | Returns the size of the specified file in bytes
isDirectory(Path, LinkOption...) | Returns true if the specified Path indicates a directory
isRegularFile(Path, LinkOption...) | Returns true if the specified Path indicates a regular file
isHidden(Path) | Returns true if the specified Path indicates a hidden file
getLastModifiedTime(Path, LinkOption...) / setLastModifiedTime(Path, FileTime) | Gets / sets the time of the last modification of the specified file
getOwner(Path, LinkOption...) / setOwner(Path, UserPrincipal) | Gets / sets the owner of the file
getAttribute(Path, String, LinkOption...) / setAttribute(Path, String, Object, LinkOption...) | Gets / sets the file attribute value
For various versions of MS Windows, the attribute string should begin with the prefix "dos:".
For example, you can set necessary attributes to some file:
Path file = ...
Files.setAttribute(file, "dos:archive", false);
Files.setAttribute(file, "dos:hidden", true);
Files.setAttribute(file, "dos:readonly", true);
Files.setAttribute(file, "dos:system", true);
You can also read the required attributes using the readAttributes() method. This method requires metadata about the resulting type as its second parameter. This metadata can be obtained from the class field (metadata will be considered later). The most appropriate resulting type is java.nio.file.attribute.BasicFileAttributes. For example, you can get some file data:
package ua.inf.iwanoff.java.advanced.second;

import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Scanner;

public class Attributes {
    public static void main(String[] args) throws Exception {
        System.out.println("Enter file or directory name:");
        Path path = Paths.get(new Scanner(System.in).nextLine());
        BasicFileAttributes attr = Files.readAttributes(path, BasicFileAttributes.class);
        System.out.println("Time of creation: " + attr.creationTime());
        System.out.println("Last access time: " + attr.lastAccessTime());
        System.out.println("Last change time: " + attr.lastModifiedTime());
        System.out.println("Directory: " + attr.isDirectory());
        System.out.println("Regular file: " + attr.isRegularFile());
        System.out.println("Size: " + attr.size());
    }
}
The DosFileAttributes type, derived from BasicFileAttributes, also provides isReadOnly(), isHidden(), isArchive() and isSystem() methods.
In contrast to the java.io tools for working with the file system, the java.nio.file.Files class provides a copy() function for file copying. For example:
Files.copy(Paths.get("c:/autoexec.bat"), Paths.get("c:/Users/autoexec.bat")); Files.copy(Paths.get("c:/autoexec.bat"), Paths.get("c:/Users/autoexec.bat"), StandardCopyOption.REPLACE_EXISTING);
There are also StandardCopyOption.ATOMIC_MOVE and StandardCopyOption.COPY_ATTRIBUTES options. Options can be listed separated by commas.
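A minimal sketch of combining options (the file names are illustrative): the copy replaces an existing target and preserves its attributes:
Files.copy(Paths.get("c:/source.txt"), Paths.get("d:/target.txt"),
        StandardCopyOption.REPLACE_EXISTING, StandardCopyOption.COPY_ATTRIBUTES);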
To move files, use the move() function (with similar options or without them). Renaming is performed by the same function:
Files.move(Paths.get("c:/Users/autoexec.bat"), Paths.get("d:/autoexec.bat")); // moving
Files.move(Paths.get("d:/autoexec.bat"), Paths.get("d:/unnecessary.bat"));    // renaming
A new directory can be created using the createDirectory() function of the Files class. The function parameter has the Path type:
Path dir = Paths.get("c:/NewDir"); Files.createDirectory(dir);
To create a directory several levels deep, when one or more intermediate directories may not yet exist, you can use the createDirectories() method:
Path dir = Paths.get("c:/NewDir/1/2"); Files.createDirectories(dir);
To get the list of files located in a directory, you can use DirectoryStream:
package ua.inf.iwanoff.java.advanced.second;

import java.io.IOException;
import java.nio.file.*;

public class FileListDemo {
    public static void main(String[] args) {
        Path dir = Paths.get("c:/Windows");
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path p : ds) {
                System.out.println(p.getFileName());
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Deleting files and folders is carried out using the delete() and deleteIfExists() functions:
Files.delete(Paths.get("d:/unnecessary.bat")); Files.deleteIfExists(Paths.get("d:/unnecessary.bat"));
To traverse the directory tree, the java.nio.file package provides functions that do not require recursive algorithms. There is a walkFileTree() method of the Files class, which ensures traversal of the subdirectory tree. As parameters, you should specify the initial directory (an object of type Path), as well as an object that implements the generic FileVisitor interface.
Note: there is another method that allows you to set directory bypass options and restrictions on the depth of subdirectories.
To implement the FileVisitor interface, you need to define the preVisitDirectory(), postVisitDirectory(), visitFile() and visitFileFailed() methods.
The result of these functions is an enumeration of the FileVisitResult type. Possible values of this enumeration are CONTINUE, TERMINATE, SKIP_SUBTREE and SKIP_SIBLINGS.
In order not to implement all the FileVisitor interface methods each time, you can use the generic SimpleFileVisitor class instead of implementing the FileVisitor methods. This class provides a default implementation of the interface functions. In this case, you only need to override the necessary functions. The following example searches for all files of the specified directory and its subdirectories:
package ua.inf.iwanoff.java.advanced.second;

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Scanner;

public class FindAllFiles {
    private static class Finder extends SimpleFileVisitor<Path> {
        @Override
        public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
            System.out.println(file);
            return FileVisitResult.CONTINUE;
        }

        @Override
        public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
            System.out.println("----------------" + dir + "----------------");
            return FileVisitResult.CONTINUE;
        }
    }

    public static void main(String[] args) {
        String dirName = new Scanner(System.in).nextLine();
        try {
            Files.walkFileTree(Paths.get(dirName), new Finder()); // Current directory
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
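The overload mentioned in the note above additionally takes a set of FileVisitOption values and a maximum depth. A minimal sketch (reusing the Finder class from the example and assuming an import of java.util.EnumSet) that visits only the starting directory and its direct subdirectories:
Files.walkFileTree(Paths.get(dirName),
        EnumSet.of(FileVisitOption.FOLLOW_LINKS), // follow symbolic links during the walk
        2,                                        // restrict the depth to two levels
        new Finder());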
You can use patterns to search for files (so-called "glob" patterns), actively used in all operating systems. Examples of such patterns: "a*.*" (file names that start with the letter a), "*.txt" (files with the .txt extension), etc. Suppose the pattern string contains some glob pattern. Now you can create a PathMatcher object:
PathMatcher matcher = FileSystems.getDefault().getPathMatcher("glob:" + pattern);
The following program searches for files matching the specified pattern in the specified directory:
package ua.inf.iwanoff.java.advanced.second;

import java.io.IOException;
import java.nio.file.*;
import java.util.Scanner;

public class FindMatched {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        String dirName = scanner.nextLine();
        String pattern = scanner.nextLine();
        Path dir = Paths.get(dirName);
        PathMatcher matcher = FileSystems.getDefault().getPathMatcher("glob:" + pattern);
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path file : ds) {
                if (matcher.matches(file.getFileName())) {
                    System.out.println(file.getFileName());
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Patterns can be combined with directory traversal.
One of the tasks of the file system is to track the state of a specified directory. For example, the program must update the file and subdirectory data of some directory if other processes or threads create, change or delete files and folders, etc. The java.nio.file package provides tools for registering such directories and tracking their status. To track changes, you use the WatchService interface. A suitable implementation can be obtained using the FileSystems.getDefault().newWatchService() method invocation. The StandardWatchEventKinds class provides the necessary constants for possible events.
You must first register the necessary directory, and then in an infinite loop read information about the events related to its changes. The WatchEvent interface provides a description of a possible event. For example, we can offer the following program:
package ua.inf.iwanoff.java.advanced.second;

import java.nio.file.*;
import java.util.Scanner;

import static java.nio.file.StandardWatchEventKinds.*;

public class WatchDir {
    public static void main(String[] args) throws Exception {
        System.out.println("Enter directory name:");
        Path dir = Paths.get(new Scanner(System.in).nextLine());
        // Create an object of WatchService type:
        WatchService watcher = FileSystems.getDefault().newWatchService();
        // Register monitored events:
        WatchKey key = dir.register(watcher, ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY);
        while (true) { // endless loop
            key = watcher.take(); // wait for the next set of events
            for (WatchEvent<?> event : key.pollEvents()) {
                WatchEvent<Path> ev = (WatchEvent<Path>) event;
                System.out.printf("%s: %s\n", ev.kind().name(), dir.resolve(ev.context()));
            }
            key.reset(); // reset the status of the event set
        }
    }
}
The java.nio.file library supports work with symbolic links (symlinks, soft links) and hard links. The createSymbolicLink(new_link, existing_object) method of the Files class creates a symbolic link, and the createLink(new_link, existing_file) method creates a hard link. The isSymbolicLink() method returns true if the obtained argument is a symbolic link. The readSymbolicLink() method allows you to find the object referenced by a symbolic link.
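A minimal sketch of these calls (the file names are illustrative; the calls throw IOException, and creating symbolic links on MS Windows may require administrator privileges):
Path target = Paths.get("d:/data/report.txt");   // existing file
Path symLink = Paths.get("d:/report-link.txt");  // symbolic link to be created
Path hardLink = Paths.get("d:/report-hard.txt"); // hard link to be created
Files.createSymbolicLink(symLink, target);       // new_link, existing_object
Files.createLink(hardLink, target);              // new_link, existing_file
System.out.println(Files.isSymbolicLink(symLink));   // true
System.out.println(Files.readSymbolicLink(symLink)); // d:\data\report.txt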
2.4 Using java.nio Tools for Reading and Writing Data
2.4.1 Overview
Compared to traditional I/O streams, java.nio provides higher efficiency of I/O operations. This is achieved due to the fact that traditional I/O tools work with data in streams, while java.nio works with data in blocks. The central objects in java.nio are Channel and Buffer.
- Channels are used to provide data transfer, for example, between files and buffers. In addition to working with files, channels are also used to work with datagrams and sockets.
- A buffer is a container object. All data transmitted to the channel must first be buffered.
Any data that is read from the channel goes into the buffer. Additional efficiency is provided through the integration of buffer objects with operating system buffers.
The java.nio tools are effective when working with binary files, first of all in multithreading tasks, where special selectors are used.
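A minimal sketch of the channel/buffer interaction described above, assuming a binary file data.bin exists in the project folder (java.nio.channels.FileChannel and java.nio.ByteBuffer are used):
try (FileChannel channel = FileChannel.open(Paths.get("data.bin"), StandardOpenOption.READ)) {
    ByteBuffer buffer = ByteBuffer.allocate(1024); // a fixed-size block of data
    while (channel.read(buffer) != -1) {           // data goes from the channel into the buffer
        buffer.flip();                             // switch the buffer to reading mode
        while (buffer.hasRemaining()) {
            System.out.print((char) buffer.get()); // consume the buffer contents byte by byte
        }
        buffer.clear();                            // prepare the buffer for the next block
    }
} catch (IOException e) {
    e.printStackTrace();
}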
2.4.2 Using the Files Class to Work with Text Files
In addition to complex mechanisms for working with channels, buffers and selectors, the java.nio.file package provides simple means of reading from text files and writing to text files.
The following static functions can be used to read data:
- readString() reads the entire contents of the specified file (a variable of type Path) into a single string;
- readAllLines() reads all lines of the specified file into a list of strings.
So, for example, you can read the contents of the whole file into a string:
Path path = Paths.get("SomeFile.txt"); String s = Files.readString(path);
And this is how you can read all lines of a text file:
Path path = Paths.get("SomeFile.txt"); List<String> lines = Files.readAllLines(path); for (String s: lines) { System.out.println(s); }
The following functions are used for writing:
- writeString() writes a string to the specified file (by default creating a new file or truncating an existing one);
- write() is a more universal function that writes an array of bytes.
An example of using writeString():
Path path = Paths.get("newFile.txt"); String question = "To be or not to be?"; Files.writeString(path, question);
An example of using the write() function:
Path path = Paths.get("newFile.txt"); String question = "To be or not to be?"; Files.write(path, question.getBytes());
Additional options can be specified for reading and writing.
There are also static functions for interacting with java.io streams.
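A minimal sketch of both possibilities (the file names are illustrative): an OpenOption changes the writing behavior, and newBufferedReader() / newBufferedWriter() bridge to the java.io classes:
Path path = Paths.get("newFile.txt");
// Append to the file instead of overwriting it:
Files.writeString(path, System.lineSeparator() + "That is the question.", StandardOpenOption.APPEND);
// Obtain classic java.io readers and writers from the Files class:
try (BufferedReader reader = Files.newBufferedReader(path);
     BufferedWriter writer = Files.newBufferedWriter(Paths.get("copy.txt"))) {
    String line;
    while ((line = reader.readLine()) != null) {
        writer.write(line);
        writer.newLine();
    }
} catch (IOException e) {
    e.printStackTrace();
}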
2.5 Using the Java 8 Stream API
2.5.1 Overview
Streams for working with collections (streams of elements, data streams, the Stream API) are designed for high-level processing of data stored in containers. They should not be confused with input/output streams.
The Stream API was added to the standard starting with Java 8.
The Stream API is used to search, filter, transform, find the minimum and maximum values, as well as other data manipulation. An important advantage of the Stream API is the ability to work reliably and efficiently in a multithreading environment.
Streams should be understood not as a new kind of collections, but as a channel for transmission and processing of data. The stream of elements works with some data source, such as an array or collection. The stream does not store data directly, but performs transferring, filtering, sorting, etc. The actions performed by the stream do not change the source data. For example, sorting data in a stream does not change their order in the source, but creates a separate resulting collection.
You can create sequential and parallel streams of elements. Parallel streams are safe in terms of multithreading. From an available parallel stream you can get a sequential one and vice versa.
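A minimal sketch of switching between the two modes (the list contents are illustrative):
List<Integer> numbers = List.of(1, 2, 3, 4, 5);
Stream<Integer> parallel = numbers.parallelStream(); // parallel stream from a collection
Stream<Integer> sequential = parallel.sequential();  // the same elements, sequential processing
System.out.println(sequential.isParallel());         // false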
To work with streams, the Java 8 java.util.stream package provides a set of interfaces and classes that provide operations on a stream of elements in the style of functional programming. The stream is represented by an object that implements the java.util.stream.Stream interface. In turn, this interface inherits the methods of the general interface java.util.stream.BaseStream.
Stream operations (methods) defined in the BaseStream, Stream, and other derived interfaces are divided into intermediate and terminal. Intermediate operations receive and generate data streams and serve to create so-called pipelines, in which a sequence of actions is performed over a sequence. Terminal operations give the final result and thus "consume" the stream. This means that the stream cannot be reused and, if necessary, must be re-created.
2.5.2 Basic Methods for Working with Streams
The most significant methods of the generic java.util.stream.BaseStream interface are given in the table (S is the type of the stream, T is the type of the element):
Method | Description | Note
---|---|---
S parallel() | returns a parallel stream received from the current one | intermediate operation
S sequential() | returns a sequential stream received from the current one | intermediate operation
boolean isParallel() | returns true if the stream is parallel or false if it is sequential | intermediate operation
S unordered() | returns an unordered data stream obtained from the current one | intermediate operation
Iterator<T> iterator() | returns an iterator for the elements of this stream | terminal operation
Spliterator<T> spliterator() | returns a spliterator (split iterator) for the elements of this stream | terminal operation
The use of stream iterators will be discussed later.
The Stream interface extends the set of methods for working with stream elements. It is also a generic interface and is suitable for working with any reference types. The following are the most commonly used Stream interface methods:
Method | Description | Note
---|---|---
void forEach(Consumer<? super T> action) | executes the code specified by the action for each element of the stream | terminal operation
Stream<T> filter(Predicate<? super T> pred) | returns a stream of elements satisfying the predicate | intermediate operation
Stream<T> sorted() | returns a stream of elements sorted in natural order | intermediate operation
Stream<T> sorted(Comparator<? super T> comparator) | returns a stream of elements sorted in the specified order | intermediate operation
<R> Stream<R> map(Function<? super T, ? extends R> mapper) | applies the given function to the elements of the stream and returns a new stream | intermediate operation
Optional<T> min(Comparator<? super T> comp) | returns the minimum value using the specified comparison | terminal operation
Optional<T> max(Comparator<? super T> comp) | returns the maximum value using the specified comparison | terminal operation
long count() | returns the number of elements in the stream | terminal operation
Stream<T> distinct() | returns a stream of distinct elements | intermediate operation
Optional<T> reduce(BinaryOperator<T> accumulator) | returns the scalar result calculated from the values of the elements | terminal operation
Object[] toArray() | creates and returns an array of stream elements | terminal operation
2.5.3 Creation of Streams
There are several ways to create a stream. You can use the factory methods added to the Collection
interface
(with default implementations), respectively stream()
(for synchronous work) and parallelStream()
(for
asynchronous work):
List<Integer> intList = Arrays.asList(3, 4, 1, 2); Stream<Integer> sequential = intList.stream();
You can create a stream from an array:
Integer[] a = { 1, 2, 3 }; Stream<Integer> fromArray = Arrays.stream(a);
You can create a data source with the specified items. To do this, use the "factory" method of():
Stream<Integer> newStream = Stream.of(4, 5, 6);
Streams of items can be created from input streams (BufferedReader.lines()), filled with random values (Random.ints()), and also obtained from archives, bit sets, etc.
You can get an array from a stream using the toArray() method. The following example creates a stream and then outputs it to the console by creating an array and obtaining its string representation using the static Arrays.toString() method:
Stream<Integer> s = Stream.of(1, -2, 3); Object[] a = s.toArray(); System.out.println(Arrays.toString(a)); // [1, -2, 3]
2.5.4 Iteration over elements
Streams provide iteration over data elements using the forEach() method. The function parameter is the standard Consumer functional interface, which defines a method with a single parameter and a void result type. For example:
fromList.forEach(System.out::println); fromArray.forEach(System.out::println); newStream.forEach(System.out::println);
Intermediate operations are characterized by so-called lazy behavior: they are performed not instantaneously, but as the need arises - when the final operation is working with a new data stream. Lazy behavior increases the efficiency of work with the stream of elements.
Streams provide iterators. The iterator() method of the Stream interface returns an object that implements the java.util.Iterator interface. The iterator can be used explicitly:
s = Stream.of(11, -2, 3);
Iterator<Integer> it = s.iterator();
while (it.hasNext()) {
    System.out.println(it.next());
}
There is also a special type of iterator, a split iterator (represented by the Spliterator interface). It allows you to split a stream into several parts that can be processed in parallel. The forEachRemaining() method provides iteration for a Spliterator. Example:
List<Integer> list = List.of(1, 2, 3, 4, 5, 6, 7, 8);
Spliterator<Integer> spliterator1 = list.spliterator();
Spliterator<Integer> spliterator2 = spliterator1.trySplit();
spliterator1.forEachRemaining(System.out::println);
System.out.println("========");
spliterator2.forEachRemaining(System.out::println);
The result of this program fragment will be as follows:
5
6
7
8
========
1
2
3
4
Now you can work with the two parts of the list separately.
2.5.5 Operations with Streams
The simplest stream operation is filtering. The intermediate filter() operation returns a filtered stream, taking a parameter of Predicate type. The Predicate type is a functional interface that describes a method with a single parameter and a boolean result type. For example, you can filter out only even numbers from the stream s:
s.filter(k -> k % 2 == 0).forEach(System.out::println);
The previous example illustrates the use of lambda expressions when working with streams, as well as a small pipeline that includes one intermediate operation.
The intermediate sorted() operation returns a sorted representation of the stream. Elements are ordered in the natural order (if it is defined). In other cases, the Comparator interface should be implemented, for example, using a lambda expression:
// Sort ascending:
Stream<Integer> s = Stream.of(4, 5, 6, 1, 2, 3);
s.sorted().forEach(System.out::println);
// Sort descending:
s = Stream.of(4, 5, 6, 1, 2, 3);
s.sorted((k1, k2) -> Integer.compare(k2, k1)).forEach(System.out::println);
The last example shows that after each call to the terminal operation, the stream should be recreated.
Most operations are implemented in such a way that actions on individual elements do not depend on other elements. Such operations are called stateless operations. Other operations that require working on all elements at once (for example, sorted()) are called stateful operations.
The intermediate map() operation receives a functional interface that defines a certain function for transforming elements and forming a new stream from the resulting transformed elements. For example, we calculate the squares of numbers:
s = Stream.of(1, 2, 3); s.map(x -> x * x).forEach(System.out::println);
Using the distinct() method, you can get a stream containing only the distinct elements of the collection. For example:
s = Stream.of(1, 1, -2, 3, 3); System.out.println(Arrays.toString(s.distinct().toArray())); // [1, -2, 3]
The terminal operation count() with the resulting type long returns the number of elements in the stream:
s = Stream.of(4, 5, 6, 1, 2, 3); System.out.println(s.count()); // 6
The terminal operations min() and max() return Optional objects with the minimum and maximum value, respectively. A Comparator type parameter is used for comparison. For example:
s = Stream.of(11, -2, 3); System.out.println(s.min(Integer::compare).get()); // -2
Using the terminal reduce() operation, we can calculate a scalar value. The reduce() operation in its simplest form performs the specified action with two operands, the first of which is the result of performing the action on the previous elements, and the second is the current element. In the following example, we find the sum of the elements of the data stream:
s = Stream.of(1, 1, -2, 3, 3); Optional<Integer> sum = s.reduce((s1, s2) -> s1 + s2); sum.ifPresent(System.out::println); // 6
The min(), max(), and reduce() operations get a scalar value from the stream, so they are called reduction operations.
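As a side note (a minimal sketch, not part of the original example), reduce() is also overloaded with an identity value; in that case the result is a plain value rather than an Optional:
Stream<Integer> numbers = Stream.of(1, 1, -2, 3, 3);
int total = numbers.reduce(0, Integer::sum); // 0 is the identity (initial) value
System.out.println(total); // 6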
2.5.6 Using Streams to Work with Primitive Types
There are also streams for working with primitive types: IntStream, LongStream and DoubleStream. Consider working with IntStream and DoubleStream.
The easiest way to create such streams is to use the static function of():
IntStream intStream = IntStream.of(1, 2, 4, 8); DoubleStream doubleStream = DoubleStream.of(1, 1.5, 2);
You can create streams from the corresponding arrays:
int[] intArr = { 10, 11, 12 }; double[] doubleArr = { 10, 10.5, 11, 11.5, 12 }; intStream = Arrays.stream(intArr); doubleStream = Arrays.stream(doubleArr);
The range() method of the IntStream class allows you to create streams by filling them with sequential values. You can also simultaneously define a filter:
intStream = IntStream.range(0, 10).filter(n -> n % 2 == 0); // 0 2 4 6 8
The iterate() method can be used to create an infinite stream. The next element is calculated from the previous one. You can limit the stream using the limit() function. So, for example, you can get consecutive powers of the number 3:
intStream = IntStream.iterate(1, i -> i * 3).limit(6); // 1 3 9 27 81 243
The generate() method also allows you to generate items, but without taking into account the previous ones. For example, you can fill a stream with random numbers:
doubleStream = DoubleStream.generate(() -> (Math.random() * 10000)).limit(20);
Further work is similar to working with streams of the Stream class. For example, you can sort and display only odd values:
intStream = IntStream.of(11, 2, 43, 81, 8, 0, 5, 3); intStream.sorted().filter(n -> n % 2 != 0).forEach(System.out::println);
The resulting streams can be used to create new arrays:
int[] newIntArr = intStream.toArray(); double[] newDoubleArr = doubleStream.toArray();
Note: it is assumed that the intStream and doubleStream streams have not previously been used in terminal operations.
2.5.7 Using Streams to Work with Text Files
Stream API streams are integrated with text file processing and java.nio.file tools.
We will demonstrate the capabilities of reading from text files by reading from a file called source.txt. Suppose such a file is located in the project folder and has the following content:
First
Second
Third
The static lines() method of the Files class is used to read lines from a text file and create a stream. In the following example, all lines of the file source.txt are read and output to the console. It is advisable to place the stream creation in a try-with-resources block:
try (Stream<String> strings = Files.lines(Path.of("source.txt"))) {
    strings.forEach(System.out::println);
} catch (IOException e) {
    throw new RuntimeException(e);
}
Note: if it was necessary to work with only one line, or with a part of lines, only the necessary lines would be read from the file.
The same results can be obtained using the java.io.BufferedReader class:
try (BufferedReader bufferedReader = Files.newBufferedReader(Paths.get("source.txt"))) {
    Stream<String> strings = bufferedReader.lines();
    strings.forEach(System.out::println);
} catch (IOException e) {
    throw new RuntimeException(e);
}
You can also create a list:
List<String> list = Files.readAllLines(Path.of("source.txt")); Stream<String> lines = list.stream();
To write to a file, you can use the Files.write() function:
Stream<String> digits = Stream.of("1", "3", "2", "4"); Files.write(Path.of("digits.txt"), digits.toList());
Example 3.2 demonstrates working with files in conjunction with the Stream API.
2.6 Working with JSON Files
2.6.1 JSON File Format and its Features
JSON is a lightweight data exchange format, which is mainly used to exchange data between computers at different levels of interaction. The name JSON is short for JavaScript Object Notation. Although the JSON syntax is the syntax for JavaScript objects, JSON files can be used separately from JavaScript. Currently, work with this format is supported by many programming languages.
JSON can be seen as a lightweight and modern alternative to XML. XML documents and JSON files have many features in common:
- files are always textual;
- the format does not require additional explanations for the person;
- data provides for the possibility of hierarchical representation.
But unlike XML documents, JSON files are shorter, easier to read, and offer some additional features.
Suppose the following XML document was previously created:
<students>
    <student>
        <firstName>Frodo</firstName>
        <lastName>Baggins</lastName>
    </student>
    <student>
        <firstName>Samwise</firstName>
        <lastName>Gamgee</lastName>
    </student>
</students>
The corresponding JSON file will look like this (students.json
):
{"students":[ { "firstName":"Frodo", "lastName":"Baggins" }, { "firstName":"Samwise", "lastName":"Gamgee" } ]}
This file contains the main elements of JSON syntax:
- the "students" array, the elements of which are enclosed in square brackets;
- objects located in braces;
- strings.
In addition to strings, values can be numbers, boolean values (false
and true
)
and null
.
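For example, a single JSON object that combines these value types might look like this (a hypothetical fragment, not part of students.json):
{
  "firstName": "Frodo",
  "age": 50,
  "graduated": false,
  "scholarship": null,
  "grades": [ 90, 85.5, 100 ]
}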
2.6.2 Using the org.json Library to Work with JSON Files
The org.json library was introduced in late 2010 and was originally implemented by Douglas Crockford, the author of JSON. Therefore, this library can be considered as a reference implementation for JSON in Java.
The easiest way to connect the org.json library to the project is to add the necessary dependency to the
pom.xml
file:
<dependencies>
    <dependency>
        <groupId>org.json</groupId>
        <artifactId>json</artifactId>
        <version>20230227</version>
    </dependency>
</dependencies>
Note: the current version of the library may change.
According to the types of values in the JSON file, the library defines the types JSONObject
and JSONArray
. There
are different ways to create a JSONObject object
. You can create it manually, for example:
JSONObject someObject = new JSONObject()
        .put("number", 10)
        .put("object", new JSONObject()
                .put("greetings", "Hello"))
        .put("array", new JSONArray()
                .put(12.95)
                .put("Some text"));
You can also read it from a string:
JSONObject theSame = new JSONObject(new JSONTokener("""
        {
          "number": 10,
          "object": {
            "greetings": "Hello"
          },
          "array": [
            12.95,
            "Some text"
          ]
        }
        """));
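Once such an object exists, individual values can be retrieved with typed getters; a minimal sketch using the someObject created above:
int number = someObject.getInt("number");                                     // 10
String greetings = someObject.getJSONObject("object").getString("greetings"); // "Hello"
JSONArray array = someObject.getJSONArray("array");
double firstElement = array.getDouble(0);                                     // 12.95
System.out.println(number + " " + greetings + " " + firstElement);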
To read data from an existing JSON file, you can use a static readAllBytes()
function of
the Files
class. After creating JSONObject
, you can divide it into separate objects and
arrays. You can use the FileWriter
class to write data to a new file. We will consider these actions
in an example.
A file called students.json
was previously created:
{"students":[ { "firstName":"Frodo", "lastName":"Baggins" }, { "firstName":"Samwise", "lastName":"Gamgee" } ]}
After reading data from this file, we can add two more students and write to a new JSON file. The program code can be as follows:
package ua.inf.iwanoff.java.advanced.second; import java.io.FileWriter; import java.io.IOException; import java.nio.file.Files; import java.nio.file.Paths; import org.json.JSONArray; import org.json.JSONObject; public class JsonTest { public static void main(String[] args) throws IOException { JSONObject object = new JSONObject(new String(Files.readAllBytes(Paths.get("students.json")))); System.out.println(object.toString(1)); JSONArray students = object.getJSONArray("students"); for (int i = 0; i < students.length(); i++) { JSONObject student = students.getJSONObject(i); System.out.println(" - " + student.getString("firstName")); } students.put(new JSONObject().put("firstName", "Merry").put("lastName", "Brandybuck")); students.put(new JSONObject().put("firstName", "Pippin").put("lastName", "Took")); try (FileWriter file = new FileWriter("newStudents.json")) { file.write(object.toString(1)); } } }
Using the toString(1)
method allows you to get a formatted JSON file:
{"students": [ { "firstName": "Frodo", "lastName": "Baggins" }, { "firstName": "Samwise", "lastName": "Gamgee" }, { "firstName": "Merry", "lastName": "Brandybuck" }, { "firstName": "Pippin", "lastName": "Took" } ]}
There are also other libraries for working with JSON files, e.g. Gson (from Google), Jackson, JSON-P, JSON-B, etc.
2.7 Serialization into XML and JSON Files using XStream Tools
In the course "Fundamentals of Java programming" the technologies of serialization and deserialization of objects were considered – recording and reproduction of the state of objects using sequential streams, in particular, files.
In addition to binary serialization, standard XML serialization tools were considered. The disadvantages of standard means of serialization in XML are:
- restrictions on object types (JavaBeans);
- the ability to serialize only properties that are defined by public setters and getters;
- lack of ability to manage tag format and names.
There are non-standard implementations of XML serialization. One of the most popular libraries is XStream. This
freely distributed library makes it very easy to serialize and deserialize XML files. To work with this library,
it is enough to download the necessary JAR files. But a more convenient and modern approach is to use Maven
to connect the library. The necessary dependency should be added to the pom.xml
file:
<dependencies>
    <dependency>
        <groupId>com.thoughtworks.xstream</groupId>
        <artifactId>xstream</artifactId>
        <version>1.4.20</version>
    </dependency>
</dependencies>
The library also allows you to serialize and deserialize JSON files. Example 3.3 shows the code that allows you to serialize and deserialize data.
2.8 Logging
2.8.1 Overview
Logging is used to record, in a special file (usually a text file), a protocol of events that occur during program execution: for example, tracing of constructors and methods, exception handling, and other messages relevant to debugging.
Logger is an entry point to the logging system. Each logger can be considered as a named channel to which messages are sent for further processing.
An important concept of logging is the log level, which determines the relative importance of messages to be logged. When a message is sent to the logger, the message logging level is compared with the logger logging level. If the logging level of the message is above or equal to the logger logging level, the message is processed; otherwise it is ignored.
2.8.2 Standard Java Tools for Logging
The standard tools of the java.util.logging package provide ways to log events. The logging levels, in increasing order, are: FINEST, FINER, FINE, CONFIG, INFO, WARNING, SEVERE, as well as ALL and OFF, which turn all levels on and off respectively. To create a log, you should use static methods of the java.util.logging.Logger class. For example:
Logger log = Logger.getLogger("MyLog");
log.setLevel(Level.ALL);
The log name is arbitrary. Now you can log messages:
log.log(Level.INFO, "OK"); // output to the console
If you also want to write the messages to a file, you should use the java.util.logging.FileHandler class:
FileHandler fileHandler = new FileHandler("C:/MyFile.log");
log.addHandler(fileHandler);
log.log(Level.INFO, "OK"); // output to the console and into a file
Note: writing to the file requires handling the java.io.IOException exception.
In the following example, a log that receives messages of all levels is created. Simultaneously with the console output, messages are recorded in the specified file:
package ua.inf.iwanoff.java.advanced.second; import java.io.IOException; import java.util.logging.FileHandler; import java.util.logging.Level; import java.util.logging.Logger; public class LogDemo { public static void main(String[] args) throws IOException { Logger log = Logger.getLogger("MyLog"); log.setLevel(Level.ALL); FileHandler fileHandler = new FileHandler("C:/MyFile.log"); log.addHandler(fileHandler); log.log(Level.INFO, "OK"); // output to the console and into a file } }
For the configuration of standard logging tools, a special file of properties (with .properties
extension)
is used. In particular, you can separately set the logging options for output to the console and into a file.
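A minimal sketch of such a properties file (the handler classes are standard java.util.logging handlers; the specific levels and the file name pattern here are arbitrary):
handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler
.level = ALL
java.util.logging.ConsoleHandler.level = INFO
java.util.logging.FileHandler.level = ALL
java.util.logging.FileHandler.pattern = mylog-%u.log
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
Such a file can be applied, for example, by starting the JVM with the -Djava.util.logging.config.file=logging.properties option.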
2.8.3 Using Log4j Library
The standard logging tools (java.util.logging) have drawbacks: they are difficult to set up, not very efficient, limited in logging capabilities, and their configuration is not intuitive enough. These disadvantages stimulated the independent development of alternative logging libraries.
Apache Log4j 2 is a Java logging library that has effectively become an industry standard. It provides significant improvements over its predecessor, Log4j 1, which has not been recommended for use since 2015.
At the time of writing, version 2.20 is current. The Log4j API can be downloaded at https://logging.apache.org/log4j/2.x/.
In order to take advantage of Log4J 2 library capabilities, you can create a new Maven project, e.g. log4j-test
.
You should add such dependencies to the pom.xml
file:
<dependencies> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api</artifactId> <version>2.20.0</version> </dependency> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core</artifactId> <version>2.20.0</version> </dependency> </dependencies>
After reloading the project (Reload All Maven Projects button), you can use Log4J 2.
Now we create a class with main()
function. We create an object of org.apache.logging.log4j.Logger
class.
This object allows recording messages in accordance with the level.
package ua.inf.iwanoff.java.advanced.second; import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; public class HelloLog4J { public static void main(String[] args) { Logger logger = LogManager.getLogger(HelloLog4J.class); logger.fatal("Hello, Log4j!"); } }
The information about the date and time, function and class precedes the text "Hello, Log4j!
".
Logging options are stored in a special configuration file. Since the logging configuration is not yet defined (there is no corresponding file), the default configuration is used, according to which only error and fatal messages are displayed. The fatal level, which is used to output the message, has the highest priority. All messages are shown on the console.
In order to change the logging policy, you must create a configuration file. Its name is log4j2.xml
.
Such a file should be created in the src\main\resources
folder. Its content in the simplest case will
be as follows:
<?xml version="1.0" encoding="UTF-8"?> <Configuration status="INFO"> <Appenders> <Console name="ConsoleAppender" target="SYSTEM_OUT"/> <File name="FileAppender" fileName="hello-app-${date:yyyyMMdd}.log" immediateFlush="false" append="true"/> </Appenders> <Loggers> <Root level="debug"> <AppenderRef ref="ConsoleAppender" /> <AppenderRef ref="FileAppender"/> </Root> </Loggers> </Configuration>
The file contains a group <Appenders>
, which indicates that the output is carried out on the
console and to a file whose name contains the "hello-app
" string and the current date. The <Loggers>
group
contains levels of output. In our case, this is "debug
".
Log4j supports the following output levels, in order of increasing priority:
trace, debug, info, warn, error, fatal
Setting a certain level means that only messages of this or a higher priority are recorded. Therefore, in our case,
the output of fatal
level is also performed.
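The effect of the root level can be checked with a short sketch: with level="debug" in the configuration above, the trace call below is suppressed and the other five messages are recorded:
Logger logger = LogManager.getLogger(HelloLog4J.class);
logger.trace("trace message"); // ignored: below the configured "debug" level
logger.debug("debug message");
logger.info("info message");
logger.warn("warn message");
logger.error("error message");
logger.fatal("fatal message");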
Since the default configuration is no longer used, the information about the date and time, function, and class disappears.
It can be restored by changing the log4j2.xml
file content:
<?xml version="1.0" encoding="UTF-8"?> <Configuration status="INFO"> <Appenders> <Console name="ConsoleAppender" target="SYSTEM_OUT"> <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %logger{36} - %msg%n" /> </Console> <File name="FileAppender" fileName="hello-app-${date:yyyyMMdd}.log" immediateFlush="false" append="true"> <PatternLayout pattern="%d{yyy-MM-dd HH:mm:ss.SSS} [%t] %logger{36} - %msg%n"/> </File> </Appenders> <Loggers> <Root level="debug"> <AppenderRef ref="ConsoleAppender" /> <AppenderRef ref="FileAppender"/> </Root> </Loggers> </Configuration>
In addition to XML format, the configuration file can be created in JSON, YAML, or PROPERTIES formats.
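For comparison, an approximately equivalent configuration in PROPERTIES format could look as follows (a hedged sketch: property-based configuration is supported by Log4j 2, but the exact key names should be checked against the version used):
status = INFO
appender.console.type = Console
appender.console.name = ConsoleAppender
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{HH:mm:ss.SSS} [%t] %logger{36} - %msg%n
appender.file.type = File
appender.file.name = FileAppender
appender.file.fileName = hello-app.log
appender.file.append = true
rootLogger.level = debug
rootLogger.appenderRef.console.ref = ConsoleAppender
rootLogger.appenderRef.file.ref = FileAppender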
3 Sample Programs
3.1 Obtaining a Table of Prime Numbers using Data Streams
The following program allows you to get a table of prime numbers in a given range. To obtain prime numbers, it
is advisable to use IntStream
:
package ua.inf.iwanoff.java.advanced.second; import java.util.stream.IntStream; public class PrimeFinder { private static boolean isPrime(int n) { return n > 1 && IntStream.range(2, n - 1).noneMatch(k -> n % k == 0); } public static void printAllPrimes(int from, int to) { IntStream primes = IntStream.range(from, to + 1).filter(PrimeFinder::isPrime); primes.forEach(System.out::println); } public static void main(String[] args) { printAllPrimes(6, 199); } }
The isPrime()
method checks whether the number n is prime. For numbers greater than 1, a
set of consecutive integers is formed, for each of which it is checked whether n is divisible by this number. In
the printAllPrimes()
method, we form a stream of prime numbers using a filter and output the numbers
using the forEach()
method.
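Following the JUnit approach described above, the primality check could also be covered by a unit test. The sketch below assumes that isPrime() is given package-private (default) visibility instead of private, so that a test class in the same package can call it; it is an illustrative adaptation rather than part of the original listing:
package ua.inf.iwanoff.java.advanced.second;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.*;

class PrimeFinderTest {
    @Test
    void isPrime() {
        assertFalse(PrimeFinder.isPrime(1));
        assertTrue(PrimeFinder.isPrime(2));
        assertTrue(PrimeFinder.isPrime(13));
        assertFalse(PrimeFinder.isPrime(15));
        assertTrue(PrimeFinder.isPrime(199));
    }
}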
3.2 Reading from a File and Sorting Strings
Suppose we need to read strings from a text file, sort them in reverse alphabetical order, and write strings that start with the letter "F" to a new file.
A file with strings (strings.txt
)
can be like this:
First
Second
Third
Fourth
Fifth
The program can be as follows:
package ua.inf.iwanoff.java.advanced.second; import java.io.BufferedReader; import java.io.IOException; import java.nio.file.Files; import java.nio.file.Paths; import java.util.stream.Stream; public class ReadAndSort { public static void main(String[] args) throws IOException { try (BufferedReader reader = Files.newBufferedReader(Paths.get("strings.txt"))) { Stream<String> stream = reader.lines().sorted((s1, s2) -> s2.compareTo(s1)). filter(s -> s.startsWith("F")); Files.write(Paths.get("results.txt"), stream.toList()); } } }
After running of the program, we will get the file called results.txt
:
Fourth
First
Fifth
3.3 Serialization and Deserialization using the XStream Library
Suppose you need to serialize and deserialize data about a line described by two points. We are creating a new
Maven project called LineAndPoints
. We add a dependency on the xstream library to the pom.xml
file.
We will get the following pom.xml
file:
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>ua.inf.iwanoff.java.advanced.second</groupId> <artifactId>LineAndPoints</artifactId> <version>1.0-SNAPSHOT</version> <properties> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> </properties> <dependencies> <dependency> <groupId>com.thoughtworks.xstream</groupId> <artifactId>xstream</artifactId> <version>1.4.20</version> </dependency> </dependencies> </project>
We create Line
and Point
classes. These classes have no parameterless constructors and
public properties, so they cannot be serialized using java.beans.XMLEncoder
and java.beans.XMLDecoder
.
But XStream allows you to serialize them because this library focuses on fields, not properties.
The Point
class:
package ua.inf.iwanoff.java.advanced.second; public class Point { private double x, y; public Point(double x, double y) { this.x = x; this.y = y; } @Override public String toString() { return x + " " + y; } }
The Line
class:
package ua.inf.iwanoff.java.advanced.second; public class Line { private Point first, second; public Line(double firstX, double firstY, double secondX, double secondY) { first = new Point(firstX, firstY); second = new Point(secondX, secondY); } @Override public String toString() { return first + " " + second; } }
The following class can be created to serialize data:
package ua.inf.iwanoff.java.advanced.second; import com.thoughtworks.xstream.XStream; import java.io.FileWriter; import java.io.IOException; import java.io.PrintWriter; public class XMLSerialization { public static void main(String[] args) { XStream xStream = new XStream(); Line line = new Line(1, 2, 3, 4); xStream.alias("line", Line.class); String xml = xStream.toXML(line); try (FileWriter fw = new FileWriter("Line.xml"); PrintWriter out = new PrintWriter(fw)) { out.println(xml); } catch (IOException e) { e.printStackTrace(); } } }
We get an XML file:
<line>
  <first>
    <x>1.0</x>
    <y>2.0</y>
  </first>
  <second>
    <x>3.0</x>
    <y>4.0</y>
  </second>
</line>
Note: if no alias is used, the root tag will be: <ua.inf.iwanoff.java.advanced.second.Line>
We deserialize objects in another program:
package ua.inf.iwanoff.java.advanced.second; import com.thoughtworks.xstream.XStream; import com.thoughtworks.xstream.security.AnyTypePermission; import java.io.File; public class XMLDeserialization { public static void main(String[] args) { XStream xStream = new XStream(); xStream.addPermission(AnyTypePermission.ANY); xStream.alias("line", Line.class); Line newLine = (Line) xStream.fromXML(new File("Line.xml")); System.out.println(newLine); } }
In order to use XStream tools for working with JSON files, we need to add one more dependency to the pom.xml
file:
<dependency>
    <groupId>org.codehaus.jettison</groupId>
    <artifactId>jettison</artifactId>
    <version>1.5.2</version>
</dependency>
The program of serializing into a JSON file will be as follows:
package ua.inf.iwanoff.java.advanced.second; import com.thoughtworks.xstream.XStream; import com.thoughtworks.xstream.io.json.JsonHierarchicalStreamDriver; import java.io.FileWriter; import java.io.IOException; import java.io.PrintWriter; public class JSONSerialization { public static void main(String[] args) { XStream xStream = new XStream(new JsonHierarchicalStreamDriver()); Line line = new Line(1, 2, 3, 4); xStream.alias("line", Line.class); String xml = xStream.toXML(line); try (FileWriter fw = new FileWriter("Line.json"); PrintWriter out = new PrintWriter(fw)) { out.println(xml); } catch (IOException e) { e.printStackTrace(); } } }
The following JSON file will be obtained:
{"line": { "first": { "x": 1.0, "y": 2.0 }, "second": { "x": 3.0, "y": 4.0 } }}
The program to deserialize from a JSON file would be:
package ua.inf.iwanoff.java.advanced.second; import com.thoughtworks.xstream.XStream; import com.thoughtworks.xstream.io.json.JettisonMappedXmlDriver; import com.thoughtworks.xstream.security.AnyTypePermission; import java.io.File; public class JSONDeserialization { public static void main(String[] args) { XStream xStream = new XStream(new JettisonMappedXmlDriver()); xStream.addPermission(AnyTypePermission.ANY); xStream.alias("line", Line.class); Line newLine = (Line) xStream.fromXML(new File("Line.json")); System.out.println(newLine); } }
3.4 "Country" and "Census" Classes
Suppose we want to reimplement a previously created project related to the country and censuses. The base classes that implement the basic functionality and data structures for representing countries and censuses were presented in examples of laboratory trainings # 2 and # 3 of the course "Fundamentals of Java Programming". It is necessary to add derived classes in which to override the implementation of all methods related to the processing of sequences through the use of Stream API tools. In addition to reproducing the implementation of the functionality, the new project should include:
- outputting data to a text file using Stream API with subsequent reading;
- serialization of objects into an XML file and a JSON file and corresponding deserialization using the XStream library;
- recording events related to program execution in the system log;
- testing individual classes using JUnit.
Taking into account the addition of dependencies on external libraries, it is advisable to create a new Maven project into which to transfer previously created classes. You can copy files from one project to another via the clipboard: in the Projects sub-window, select the necessary files and copy them to the clipboard (Copy function of the context menu); in another project, select the required package and insert the files using the Paste function. You can also copy the entire package.
Now we can create a new package: ua.inf.iwanoff.java.advanced.second
. We add CensusWithStreams
class
to the package. It is advisable to create constructors and override the containsWord()
method, implementing
it with the help of streams. For example, the class code could be as follows:
package ua.inf.iwanoff.java.advanced.second; import ua.inf.iwanoff.java.second.Census; import java.util.Arrays; /** * The class is responsible for presenting the census.
* Stream API tools are used to process the sequence of words */ public class CensusWithStreams extends Census { /** * The constructor initializes the object with default values */ public CensusWithStreams() { } /** * The constructor initializes the object with the specified values * * @param year census year * @param population the population * @param comments comment text */ public CensusWithStreams(int year, int population, String comments) { setYear(year); setPopulation(population); setComments(comments); } /** * Checks either the word is contained in the text of the comment * * @param word the word we're looking for in the comment * @return {@code true} if the word is in the comment text * {@code false} otherwise */ @Override public boolean containsWord(String word) { return Arrays.stream(getComments().split("\s")).anyMatch(s -> s.equalsIgnoreCase(word)); } }
It is also possible to define a main() function to test the class, but a better approach is to use
the capabilities of unit testing (JUnit). In the code window, we select the name of the class and using the context
menu Generate... | Test... select the functions for which test methods should be generated. In our case,
this is the containsWord()
method. IntelliJ IDEA automatically generates all necessary parallel packages
of the test branch and creates the class called CensusWithStreamsTest
. It looks like
this:
package ua.inf.iwanoff.java.advanced.second; import org.junit.jupiter.api.Test; import static org.junit.jupiter.api.Assertions.*; class CensusWithStreamsTest { @Test void containsWord() { } }
If errors are highlighted in the generated code, the required dependency should be added to the pom.xml
file.
IntelliJ IDEA can do this: you can choose More actions... | Add Maven dependency... from the options of
fixing the error and select Assertions. The following code will be added to pom.xml
:
<dependencies> <dependency> <groupId>org.junit.jupiter</groupId> <artifactId>junit-jupiter-api</artifactId> <version>5.8.1</version> <scope>test</scope> </dependency> </dependencies>
This code could also be added manually.
Now we can add the necessary testing, which partially reproduces the behavior of the testWord()
method
of the Census
class. The code of the file CensusWithStreamsTest.java
will be as follows:
package ua.inf.iwanoff.java.advanced.second; import org.junit.jupiter.api.Test; import static org.junit.jupiter.api.Assertions.*; class CensusWithStreamsTest { @Test void containsWord() { CensusWithStreams census = new CensusWithStreams(); census.setComments("The first census in independent Ukraine"); assertTrue(census.containsWord("Ukraine")); assertTrue(census.containsWord("FIRST")); assertFalse(census.containsWord("rain")); assertFalse(census.containsWord("censuses")); } }
After completing the tests, we will receive a successful exit code. If the expected results are changed in the code, the tests will throw an exception and the assertion that failed will be underlined in the code.
In the class that is responsible for the country, we can also override all methods through the use of streams. There are two ways to create a new class:
- derived class from
CountryWithArray
; - derived class from
CountryWithList
.
The advantage of the second way is working with an ArrayList, which is more convenient than a regular array and more efficient when adding new elements. The disadvantage of the second way
compared to the first one is the need to provide direct access to the list, which in our case involves adding methods
to the CountryWithList
class that was created earlier. Making changes to the base classes is generally
not desirable, but if you must do so, you should not change the set of public methods of the class.
Choosing the second way, we should keep the set of public methods unchanged. To ensure this, we can add protected
methods to the CountryWithList
class:
protected List<Census> getList() {
    return list;
}

protected void setList(List<Census> list) {
    this.list = list;
}
In addition to the sortByPopulation()
, sortByComments()
and maxYear()
methods
that work with the sequence, the methods for accessing the list should be overridden, since we must ensure that only references of type CensusWithStreams can be put into the list. These methods are setCensus()
, addCensus()
in
two variants and setCensuses()
. It is also advisable to add functions that create a list of strings
from object data and vice versa. The source code of the CountryWithStreams
class will be as follows:
package ua.inf.iwanoff.java.advanced.second; import ua.inf.iwanoff.java.second.Census; import ua.inf.iwanoff.java.third.CountryWithList; import java.util.ArrayList; import java.util.Arrays; import java.util.Comparator; import java.util.List; /** * A class to represent the country in which the census is conducted.
* Stream API tools are used to process the sequences */ public class CountryWithStreams extends CountryWithList { /** * Sets a reference to the new census inside the sequence position * by the indicated index. * * @param i number (index) of the position in the sequence * @param census reference to the new census */ @Override public void setCensus(int i, Census census) { if (census instanceof CensusWithStreams) { super.setCensus(i, census); } else { new RuntimeException(); } } /** * Adds a reference to the new census at the end of the sequence * * @param census reference to the new census * @return {@code true} if the reference was successfully added * {@code false} otherwise */ @Override public boolean addCensus(Census census) { if (census instanceof CensusWithStreams) { return super.addCensus(census); } return false; } /** * Adds a reference to the new census at the end of the sequence. * * @param year census year * @param population the number of the population * @param comments comment text * @return {@code true} if the reference was successfully added * {@code false} otherwise */ @Override public boolean addCensus(int year, int population, String comments) { return super.addCensus(new CensusWithStreams(year, population, comments)); } /** * Rewrites data from an array of censuses to a sequence * * @param censuses an array of censuses */ @Override public void setCensuses(Census[] censuses) { if (Arrays.stream(censuses).anyMatch(c -> c instanceof CensusWithStreams)) { super.setCensuses(censuses); } else { new RuntimeException(); } } /** * Sorts the sequence of censuses by population */ @Override public void sortByPopulation() { setList(getList().stream().sorted().toList()); } /** * Sorts the sequence of censuses alphabetically by comment */ @Override public void sortByComments() { setList(getList().stream().sorted(Comparator.comparing(Census::getComments)).toList()); } /** * Finds and returns the year with the maximum population * * @return year with maximum population */ @Override public int maxYear() { return getList().stream().max(Comparator.comparing(Census::getPopulation)).get().getYear(); } /** * Creates and returns an array of censuses with the specified word in comments * * @param word the word to search for * @return an array of records with the specified word in comments */ @Override public Census[] findWord(String word) { return getList().stream().filter(c -> c.containsWord(word)).toArray(Census[]::new); } /** * Creates and returns a list of stings with data * about the country and about all population censuses * * @return a list of strings with country data */ public List<String> toListOfStrings() { ArrayList<String> list = new ArrayList<>(); list.add(getName() + " " + getArea()); Arrays.stream(getCensuses()).forEach(c -> list.add( c.getYear() + " " + c.getPopulation() + " " + c.getComments())); return list; } /** * Reads data about the country from the list of strings and puts data into the appropriate fields * * @param list a list of strings with country data */ public void fromListOfStrings(List<String> list) { String[] words = list.get(0).split("\s"); setName(words[0]); setArea(Double.parseDouble(words[1])); list.remove(0); list.stream().forEach(s -> { String[] line = s.split("\s"); addCensus(Integer.parseInt(line[0]), Integer.parseInt(line[1]), s.substring(s.indexOf(line[2]))); }); } /** * Creates and returns an object of type CountryWithStreams for testing * @return an object of type CountryWithStreams */ public static CountryWithStreams createCountryWithStreams() { 
CountryWithStreams country = new CountryWithStreams(); country.setName("Ukraine"); country.setArea(603628); country.addCensus(1959, 41869000, "The first postwar census"); country.addCensus(1970, 47126500, "Population increases"); country.addCensus(1979, 49754600, "No comments"); country.addCensus(1989, 51706700, "The last soviet census"); country.addCensus(2001, 48475100, "The first census in the independent Ukraine"); return country; } }
The given code uses a function call: toArray(Census[]::new)
. This ensures that an
array of the required type (references to Census
) is created, rather than an array of references to
Object
, which is returned by the corresponding function without parameters.
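The difference can be shown in a two-line sketch (assuming the same stream as in findWord()):
Object[] asObjects = getList().stream().filter(c -> c.containsWord(word)).toArray();               // array of Object references
Census[] asCensuses = getList().stream().filter(c -> c.containsWord(word)).toArray(Census[]::new); // array of Census references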
We add a test class for the CountryWithStreams class. This is done in the same way as for CensusWithStreams
.
It is advisable to choose methods sortByPopulation()
, sortByComments()
, maxYear()
and findWord()
. We
will get the following code:
package ua.inf.iwanoff.java.advanced.second; import org.junit.jupiter.api.Test; import static org.junit.jupiter.api.Assertions.*; class CountryWithStreamsTest { @Test void sortByPopulation() { } @Test void sortByComments() { } @Test void maxYear() { } @Test void findWord() { } }
Note: the functionality of toListOfStrings()
and fromListOfStrings()
methods
will be checked when working with text files.
Since it is necessary to perform several tests on the object and these tests must be independent, it is advisable
to create the object before executing each test method. For this, we need to add a method with @BeforeEach
annotation.
The corresponding method can also be generated automatically (Generate... | SetUp Method in the context
menu). We create a new country object in the method called setUp()
. We create the appropriate
field of type CountryWithStreams
manually. The object will be used in test methods.
For convenient testing of functions related to searching and sorting, we can create getYears()
function
that retrieves an array of years from an array of censuses. This static function will use a static variable
index to fill specific array items. The variable cannot be local because we are using it in a lambda expression.
We get the following code:
package ua.inf.iwanoff.java.advanced.second; import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; import ua.inf.iwanoff.java.second.Census; import java.util.Arrays; import static org.junit.jupiter.api.Assertions.*; class CountryWithStreamsTest { private CountryWithStreams country; static int index; static int[] getYears(Census[] censuses) { int[] years = new int[censuses.length]; index = 0; Arrays.stream(censuses).forEach(c -> years[index++] = c.getYear()); return years; } @BeforeEach void setUp() { country = new CountryWithStreams(); country.addCensus(1959, 41869000, "The first postwar census"); country.addCensus(1970, 47126500, "Population increases"); country.addCensus(1979, 49754600, "No comments"); country.addCensus(1989, 51706700, "The last soviet census"); country.addCensus(2001, 48475100, "The first census in the independent Ukraine"); } @Test void sortByPopulation() { country.sortByPopulation(); assertArrayEquals(getYears(country.getCensuses()), new int[] { 1959, 1970, 2001, 1979, 1989 }); } @Test void sortByComments() { country.sortByComments(); assertArrayEquals(getYears(country.getCensuses()), new int[] { 1970, 1989, 2001, 1959, 1979 }); } @Test void maxYear() { assertEquals(country.maxYear(), 1989); } @Test void findWord() { assertArrayEquals(getYears(country.findWord("census")), new int[] { 1959, 1979, 1989, 2001 }); } }
Arrays of years corresponding to correct sorting and searching results were manually prepared.
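As a side note, the same helper could be written without the static index field by mapping each census directly to its year (a sketch using mapToInt()):
static int[] getYears(Census[] censuses) {
    return Arrays.stream(censuses).mapToInt(Census::getYear).toArray();
}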
The Program
class will be responsible for storing in a text file, reading from a text file, serializing
and deserializing data (from XML and JSON), as well as writing to the event log in the program. Logging will be
done every time we read or write data to files of various formats. To work with XStream and log4j tools, you
need to add the following dependencies to the pom.xml
file:
<dependency> <groupId>com.thoughtworks.xstream</groupId> <artifactId>xstream</artifactId> <version>1.4.20</version> </dependency> <dependency> <groupId>org.codehaus.jettison</groupId> <artifactId>jettison</artifactId> <version>1.5.2</version> </dependency> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api</artifactId> <version>2.20.0</version> </dependency> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core</artifactId> <version>2.20.0</version> </dependency>
We also need to configure the log4j properties by adding a file log4j2.xml
to the src\main\resources
folder.
In our case, this file will be as follows:
<?xml version="1.0" encoding="UTF-8"?> <Configuration status="INFO"> <Appenders> <Console name="ConsoleAppender" target="SYSTEM_OUT"/> <File name="FileAppender" fileName="country-${date:yyyyMMdd}.log" immediateFlush="false" append="true"/> </Appenders> <Loggers> <Root level="debug"> <AppenderRef ref="FileAppender"/> </Root> </Loggers> </Configuration>
Individual actions related to reading and writing can be implemented as static methods. The code of FileUtils
class
will be as follows:
package ua.inf.iwanoff.java.advanced.second; import com.thoughtworks.xstream.XStream; import com.thoughtworks.xstream.io.json.JettisonMappedXmlDriver; import com.thoughtworks.xstream.security.AnyTypePermission; import org.apache.logging.log4j.Logger; import java.io.File; import java.io.FileWriter; import java.io.IOException; import java.io.PrintWriter; import java.nio.file.Files; import java.nio.file.Path; import java.util.List; /** * The class implements writing and reading data in TXT, XML and JSON formats. * Country and census data are read and written. * At the same time, events are recorded in the system log. */ public class FileUtils { private static Logger logger = null; public static Logger getLogger() { return logger; } public static void setLogger(Logger logger) { FileUtils.logger = logger; } /** * Writes country and census data to the specified file * * @param country the country * @param fileName the name of the file */ public static void writeToTxt(CountryWithStreams country, String fileName) { if (logger != null) { logger.info("Write to text file"); } try { Files.write(Path.of(fileName), country.toListOfStrings()); } catch (IOException e) { if (logger != null) { logger.error(e.toString()); } throw new RuntimeException(e); } } /** * Reads country and census data from the specified file * * @param fileName the name of the file * @return the object that was created */ public static CountryWithStreams readFromTxt(String fileName) { CountryWithStreams country = new CountryWithStreams(); if (logger != null) { logger.info("Read from text file"); } try { List<String> list = Files.readAllLines(Path.of(fileName)); country.fromListOfStrings(list); } catch (IOException e) { if (logger != null) { logger.error(e.toString()); } throw new RuntimeException(e); } return country; } /** * Serializes country and census data into the specified XML file * * @param country the country * @param fileName the name of the file */ public static void serializeToXML(CountryWithStreams country, String fileName) { if (logger != null) { logger.info("Serializing to XML"); } XStream xStream = new XStream(); xStream.alias("country", CountryWithStreams.class); xStream.alias("census", CensusWithStreams.class); String xml = xStream.toXML(country); try (FileWriter fw = new FileWriter(fileName); PrintWriter out = new PrintWriter(fw)) { out.println(xml); } catch (IOException e) { if (logger != null) { logger.error(e.toString()); } throw new RuntimeException(e); } } /** * Deserializes country and census data from the specified XML file * * @param fileName the name of the file * @return the object that was created */ public static CountryWithStreams deserializeFromXML(String fileName) { if (logger != null) { logger.info("Deserializing from XML"); } try { XStream xStream = new XStream(); xStream.addPermission(AnyTypePermission.ANY); xStream.alias("country", CountryWithStreams.class); xStream.alias("census", CensusWithStreams.class); CountryWithStreams country = (CountryWithStreams) xStream.fromXML(new File(fileName)); return country; } catch (Exception e) { if (logger != null) { logger.error(e.toString()); } throw new RuntimeException(e); } } /** * Serializes country and census data into the specified JSON file * * @param country the country * @param fileName the name of the file */ public static void serializeToJSON(CountryWithStreams country, String fileName) { if (logger != null) { logger.info("Serializing to JSON"); } XStream xStream = new XStream(new JettisonMappedXmlDriver()); xStream.alias("country", 
CountryWithStreams.class); xStream.alias("census", CensusWithStreams.class); String xml = xStream.toXML(country); try (FileWriter fw = new FileWriter(fileName); PrintWriter out = new PrintWriter(fw)) { out.println(xml); } catch (IOException e) { if (logger != null) { logger.error(e.toString()); } throw new RuntimeException(e); } } /** * Deserializes country and census data from the specified JSON file * * @param fileName the name of the file * @return the object that was created */ public static CountryWithStreams deserializeFromJSON(String fileName) { if (logger != null) { logger.info("Deserializing from JSON"); } try { XStream xStream = new XStream(new JettisonMappedXmlDriver()); xStream.addPermission(AnyTypePermission.ANY); xStream.alias("country", CountryWithStreams.class); xStream.alias("census", CensusWithStreams.class); CountryWithStreams country = (CountryWithStreams) xStream.fromXML(new File(fileName)); return country; } catch (Exception e) { if (logger != null) { logger.error(e.toString()); } throw new RuntimeException(e); } } }
The code of Program
class, in which all the created functions are demonstrated, will be as follows:
package ua.inf.iwanoff.java.advanced.second; import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; /** * The class demonstrates writing and reading data in TXT, XML and JSON formats. * At the same time, events are recorded in the system log. */ public class Program { /** * Demonstration of the program. * Data in TXT, XML and JSON formats are sequentially written and read. * At the same time, events are recorded in the system log * * @param args command line arguments (not used) */ public static void main(String[] args) { Logger logger = LogManager.getLogger(Program.class); FileUtils.setLogger(logger); logger.info("Program started"); CountryWithStreams country = CountryWithStreams.createCountryWithStreams(); FileUtils.writeToTxt(country, "Country.txt"); country = FileUtils.readFromTxt("Country.txt"); System.out.println(country); FileUtils.serializeToXML(country, "Country.xml"); country = FileUtils.deserializeFromXML("Country.xml"); System.out.println(country); FileUtils.serializeToJSON(country, "Country.json"); country = FileUtils.deserializeFromJSON("Country.json"); System.out.println(country); logger.info("Program finished"); } }
As a result, the country data read from the various sources is output to the console.
New files will appear in the root directory of the project. Text file (Country.txt
):
Ukraine 603628.0
1959 41869000 The first postwar census
1970 47126500 Population increases
1979 49754600 No comments
1989 51706700 The last soviet census
2001 48475100 The first census in the independent Ukraine
The XML file (Country.xml
):
<country> <name>Ukraine</name> <area>603628.0</area> <list> <census> <year>1959</year> <population>41869000</population> <comments>The first postwar census</comments> </census> <census> <year>1970</year> <population>47126500</population> <comments>Population increases</comments> </census> <census> <year>1979</year> <population>49754600</population> <comments>No comments</comments> </census> <census> <year>1989</year> <population>51706700</population> <comments>The last soviet census</comments> </census> <census> <year>2001</year> <population>48475100</population> <comments>The first census in the independent Ukraine</comments> </census> </list> </country>
Unfortunately, the JSON file (Country.json
) generated by the program will be poorly formatted (the
entire content of the file is in one line). But if you open this file in the IntelliJ IDEA environment and apply
code formatting (Code | Reformat Code), its content in the editor window will be as follows:
{ "country": { "name": "Ukraine", "area": 603628, "list": [ { "census": [ { "year": 1959, "population": 41869000, "comments": "The first postwar census" }, { "year": 1970, "population": 47126500, "comments": "Population increases" }, { "year": 1979, "population": 49754600, "comments": "No comments" }, { "year": 1989, "population": 51706700, "comments": "The last soviet census" }, { "year": 2001, "population": 48475100, "comments": "The first census in the independent Ukraine" } ] } ] } }
In addition, a log file with the .log
extension will be created, to which the following text fragment
will be added after each program launch:
Program started
Write to text file
Read from text file
Serializing to XML
Deserializing from XML
Serializing to JSON
Deserializing from JSON
Program finished
Information about exceptions that occurred while working with files will be also logged.
4 Exercises
- Read floating point values from a text file (up to the end of the file), find their sum, and output the sum to another text file. Apply Stream API facilities.
- Read integer values from a text file, replace negative values with their absolute values and positive values with zeros, and output the received values to another text file. Apply Stream API facilities.
- Read integer values from a text file, divide even elements by 2, multiply odd ones by 2, and output the resulting values to another text file. Apply Stream API facilities.
- Create classes "Library" (with an array of books as a field) and "Book". Create objects, serialize and deserialize them in XML and JSON using XStream.
- Create the classes "Faculty" and "Institute" (with an array of faculties as a field). Create objects, serialize and deserialize them in XML and JSON using XStream.
5 Quiz
- What are the standard means of checking assertions in Java?
- What is unit testing?
- What is JUnit?
- How are test methods annotated in JUnit?
- How to make a logical grouping of tests?
- How to use JUnit in a development environment?
- What tasks do build automation tools solve?
- What is the main difference between Apache Maven and Apache Ant?
- What is GAV?
- What is the file structure pom.xml?
- What are the typical tasks of working with the file system?
- What standard Java tools provide the ability to work with the file system? What are the differences between these means?
- What are the ways to get information about files and subdirectories?
- What are the advantages and features of Stream API?
- How to get a stream from a collection?
- How to get a stream from an array?
- What is the difference between intermediate and terminal operations?
- What are the streams for working with primitive types?
- How is data read and written using the Stream API?
- What is the JSON format, and what are its advantages?
- What are the main elements of a JSON file?
- What tools exist to support working with JSON files?
- How is XML serialization performed by means of XStream?
- How is JSON serialization performed by means of XStream?
- What is a logger and logging level?
- What facilities exist to maintain logs?
- What are the advantages of the Log4j library compared to standard logging tools?
- What are output levels (priorities)?