Laboratory Training 3
Advanced Features of Working with Files
1 Training Tasks
1.1 Individual Task
Create a new Maven project and transfer to this project the previously created classes that represent entities of the individual tasks of laboratory trainings # 3 and # 4 of the "Fundamentals of Java Programming" course. Create derived classes in which to override the implementation of all methods related to processing sequences through the use of Stream API tools. If the class representing the second entity has no sequence processing, the base class can be used.
The program must demonstrate:
- reproduction of the implementation of individual tasks of laboratory training # 3 and # 4 of the course "Fundamentals of Java Programming";
- using Stream API tools for processing and outputting sequences;
- outputting data to a text file using Stream API with subsequent reading;
- serialization of objects into an XML file and a JSON file and corresponding deserialization using the XStream library;
- recording events related to program execution in the system log;
- testing individual classes using JUnit.
To present the individual data, you should use the classes that were created during the implementation of the individual assignment of laboratory training #1 of this course.
Note: localization and translation of texts can be carried out at the wish of the student.
1.2 List of Files of All Subdirectories
Enter the name of a specific folder. Display the names of all files in this directory, as well as all files in subdirectories, their subdirectories, and so on. Implement two approaches:
- search by means of the java.io.File class using a recursive function;
- search by means of the java.nio.file package.
Display both results sequentially on the screen. If the folder does not exist, display an error message.
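A minimal sketch of the two approaches is shown below; the class and method names are illustrative, not part of the assignment:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class FileLister {

    // First approach: recursive traversal using java.io.File
    public static void listRecursively(File dir) {
        File[] entries = dir.listFiles();
        if (entries == null) {
            return; // not a directory, or an I/O error occurred
        }
        for (File entry : entries) {
            System.out.println(entry.getPath());
            if (entry.isDirectory()) {
                listRecursively(entry);
            }
        }
    }

    // Second approach: traversal by means of the java.nio.file package
    public static void listWithNio(Path dir) throws IOException {
        try (Stream<Path> paths = Files.walk(dir)) {
            paths.filter(p -> !p.equals(dir))
                 .forEach(System.out::println);
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(args.length > 0 ? args[0] : ".");
        if (!dir.exists() || !dir.isDirectory()) {
            System.err.println("Folder does not exist: " + dir);
            return;
        }
        listRecursively(dir);
        listWithNio(dir.toPath());
    }
}
```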
1.3 Working with Text Files using the Stream API
Use the Files.lines() function to read strings from a text file, sort them by increasing length, and output the strings that contain the letter "a" to another file.
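The processing described above can be sketched with the Stream API as follows (the file names and the helper class are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class LinesFilter {

    // Reads lines, sorts them by increasing length, keeps only lines
    // containing the letter "a", and writes the result to another file
    public static void filterLines(Path in, Path out) throws IOException {
        try (var lines = Files.lines(in)) {
            List<String> result = lines
                    .sorted(Comparator.comparingInt(String::length))
                    .filter(s -> s.contains("a"))
                    .collect(Collectors.toList());
            Files.write(out, result);
        }
    }

    public static void main(String[] args) throws IOException {
        Path in = Path.of(args.length > 0 ? args[0] : "input.txt");
        Path out = Path.of(args.length > 1 ? args[1] : "output.txt");
        if (Files.exists(in)) {
            filterLines(in, out);
        }
    }
}
```

Note that Files.lines() returns a stream that must be closed, which is why it is wrapped in try-with-resources.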
1.4 Creating Files and Reading Data About the Student and the Academic Group from Files
Create the classes "Student" and "Academic group" (with an array of students as a field). Create objects. Provide file creation and reading from files using the following approaches:
- using Stream API tools for working with text files;
- serialization and deserialization in XML and JSON (by means of XStream).
1.5 Working with the org.json Library (Additional Task)
Complete task 1.4 using tools for working with JSON files of the org.json library.
1.6 Use of SAX and DOM Technologies (Additional Task)
Prepare an XML document with data about the students of the academic group. Using SAX technology, read data from the XML document and display it on the console. Using DOM technology, read data from the same XML document, modify it, and write it to a new document.
2 Instructions
2.1 Java Tools for Working with Files
Working with files is one of the most widespread tasks in modern operating systems.
Starting with the first version of the JDK, Java provides tools for working with the contents of files, embodied in the concept of streams and implemented by the corresponding classes of the java.io package:
- tools for working with text files (for example, FileReader, FileWriter, BufferedReader, PrintWriter, etc.);
- tools for working with binary files (e.g. FileInputStream, FileOutputStream, DataInputStream, DataOutputStream, etc.);
- serialization and deserialization tools;
- tools for working with archives.
To these tools, you can add classes from the java.nio.file package, which will be discussed in this laboratory training.
There are also separate tools for working with special file formats – XML, JSON, etc.
Working with the file system should be considered separately from working with file contents.
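For contrast, here is a small sketch of writing and reading a text file with both generations of tools (the file name demo.txt is arbitrary):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileToolsDemo {
    public static void main(String[] args) throws IOException {
        // Classic java.io approach: character streams
        try (PrintWriter out = new PrintWriter("demo.txt")) {
            out.println("first line");
            out.println("second line");
        }
        try (BufferedReader in = new BufferedReader(new FileReader("demo.txt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        // java.nio.file approach: a single call reads the whole file (JDK 11+)
        System.out.println(Files.readString(Path.of("demo.txt")));
    }
}
```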
2.2 Working with XML Documents
2.2.1 Overview
XML (eXtensible Markup Language) is a platform-independent way of structuring information. Because XML separates the content of a document from its structure, it is successfully used for the exchange of information. For example, XML may be used to transfer data between the application and the database or between databases having different formats.
XML documents are always text files. The syntax of XML is similar to the syntax of HTML, which is used for marking up texts published on the Internet. XML can also be applied directly to text markup.
Most often, an XML document begins with a so-called declaration (prolog). In general, it looks as follows:
<?xml version="1.0" [other-attributes] ?>
Among the possible attributes, the encoding="character-set" attribute is the most useful. It specifies the encoding of the text. If you want to use non-Unicode Cyrillic characters, you can define it, for example, in the following way:
<?xml version="1.0" encoding="Windows-1251"?>
The next line may contain information about the document type. The rest of the document contains a set of XML elements.
Elements are delimited by tags. Start tags begin with < followed by the element name. End tags begin with </ followed by the element name. Both start and end tags terminate with >. Everything between the two tags is the content of the element. All start tags must be matched by end tags. All attribute values must be quoted. Each document must contain a single root element in which all other elements are nested.
Unlike HTML, XML allows you to use an unlimited set of tag pairs, each of which represents not what the data it contains should look like, but what it means. XML allows you to create your own set of tags for each class of documents. Thus, it is more accurate to call it not a language, but a meta-language.
Having formally described document structure, you can check its correctness. The presence of markup tags allows both the person and the program to analyze the document. XML documents, in the first place, are intended for software analysis of their contents.
The following XML document stores prime numbers.
<?xml version="1.0" encoding="UTF-8"?>
<Numbers>
    <Number>2</Number>
    <Number>3</Number>
    <Number>5</Number>
    <Number>7</Number>
    <Number>11</Number>
</Numbers>
The Numbers and Number tags are invented by the author. Text indents are used for better readability.
Tags may contain attributes: additional information about the elements placed inside the angle brackets. Attribute values must be enclosed in quotation marks. The following example shows a Message tag with to and from attributes:
<Message to="you" from="me">
    <Text> How to use XML? </Text>
</Message>
The use of end tags is obligatory in XML. Furthermore, you must close inner elements before outer ones. The following code snippet produces an error:
<A> <B> text </A> </B>
And the following fragment is correct:
<A> <B> text </B> </A>
Tags can be empty. Such tags end with a slash character. For example, you can write <Nothing/> instead of <Nothing></Nothing>.
In contrast to HTML tags, XML tags are case-sensitive, therefore <cat> and <CAT> are different tags.
XML-documents can contain comments:
<!-- Here are comments -->
XML recognition programs, the so-called XML parsers, analyze a document only up to the first error, in contrast to the HTML parsers embedded in browsers. Browsers try to display a document even if the code contains errors.
An XML document that conforms to all XML syntax rules is considered to be a well-formed document.
2.2.2 Standard Approaches to Working with XML Documents
There are two standard approaches to working with XML documents in your program:
- the event-based model (Simple API for XML, SAX) processes events associated with particular XML tags while reading the XML data;
- the Document Object Model (DOM) allows creating and processing a collection of objects organized in a hierarchy.
The event-based approach does not allow the developer to change the data in the source document. If part of the data needs to be corrected, the document must be completely rewritten. In contrast, DOM provides an API that allows developers to add or remove nodes in any part of the tree.
Both approaches use the concept of a parser. A parser is a program that analyzes a document and splits it into tokens. The parser can either fire events (as in SAX) or build a data tree (as in DOM).
To implement the standard approaches to working with XML, Java SE provides the Java API for XML Processing (JAXP). JAXP includes tools for validating and parsing XML documents: a DOM implementation with the corresponding programming interface, and a SAX implementation with its own interface. In addition, the Streaming API for XML (StAX) and XSLT (XML Stylesheet Language Transformations) tools are provided.
2.2.3 Using Simple API for XML and StAX
Simple API for XML (SAX, a simple application programming interface for working with XML) provides a sequential mechanism for analyzing an XML document. An analyzer that implements the SAX interface (SAX parser) processes the information from an XML document as a single data stream. This stream is available only in one direction: previously processed data cannot be re-read without re-parsing the document. Most programmers agree that processing XML documents with SAX is generally faster than with DOM, because a SAX stream requires much less memory than the construction of a complete DOM tree.
SAX parsers implement an event-driven approach: the programmer creates event handlers that are called by the parser while it processes an XML document.
The Java SE tools for working with SAX are implemented in the javax.xml.parsers and org.xml.sax packages, as well as in the packages included in them. To create an object of the javax.xml.parsers.SAXParser class, you should use the javax.xml.parsers.SAXParserFactory class, which provides the corresponding factory methods. The SAX parser does not create a representation of the XML document in memory. Instead, it informs clients about the structure of the XML document using a callback mechanism. You can create a class yourself implementing the necessary interfaces, in particular org.xml.sax.ContentHandler. However, the simplest and most recommended way is to use the org.xml.sax.helpers.DefaultHandler class, creating a derived class and overriding the methods that should be called when various events occur during document analysis.
The most often overridden methods are:
- startDocument() and endDocument(): methods called at the beginning and end of the analysis of an XML document;
- startElement() and endElement(): methods called at the beginning and end of the analysis of a document element;
- characters(): method called when retrieving the text content of an XML document element.
The following example illustrates the use of SAX to read a document. Suppose the Hello.xml file in the project directory has the following content:
<?xml version="1.0" encoding="UTF-8" ?>
<Greetings>
    <Hello Text="Hi, this is an attribute!">
        Hi, this is the text!
    </Hello>
</Greetings>
Note. When saving the file, you must specify the UTF-8 encoding.
The code of the program that reads data from XML will be:
package ua.inf.iwanoff.java.advanced.third;

import java.io.IOException;

import javax.xml.parsers.ParserConfigurationException;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

public class HelloSAX extends DefaultHandler {
    @Override
    public void startDocument() {
        System.out.println("Opening document");
    }

    @Override
    public void endDocument() {
        System.out.println("Done");
    }

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException {
        System.out.println("Opening tag: " + qName);
        if (attributes.getLength() > 0) {
            System.out.println("Attributes: ");
            for (int i = 0; i < attributes.getLength(); i++) {
                System.out.println("  " + attributes.getQName(i) + ": " + attributes.getValue(i));
            }
        }
    }

    @Override
    public void endElement(String uri, String localName, String qName) throws SAXException {
        System.out.println("Closing tag: " + qName);
    }

    @Override
    public void characters(char[] ch, int start, int length) throws SAXException {
        String s = new String(ch, start, length).trim();
        if (s.length() > 0) {
            System.out.println(s);
        }
    }

    public static void main(String[] args) {
        SAXParser parser = null;
        try {
            parser = SAXParserFactory.newInstance().newSAXParser();
        } catch (ParserConfigurationException | SAXException e) {
            e.printStackTrace();
        }
        if (parser != null) {
            InputSource input = new InputSource("Hello.xml");
            try {
                parser.parse(input, new HelloSAX());
            } catch (SAXException | IOException e) {
                e.printStackTrace();
            }
        }
    }
}
Since the characters() method is called for each tag, it is advisable to output the content only if the trimmed string is not empty.
StAX was designed as a cross between DOM and SAX interfaces. This programming interface uses a cursor metaphor that represents the entry point within the document. The application moves the cursor forward by reading the information, receiving information from the parser as needed.
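A minimal StAX sketch in the spirit of the SAX example above (the class and method names are illustrative): the application moves the cursor forward with next() and pulls only the events it needs.

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class HelloStAX {

    // Collects the names of all elements by moving the StAX cursor forward
    public static List<String> elementNames(String xml) throws XMLStreamException {
        List<String> names = new ArrayList<>();
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        while (reader.hasNext()) {
            // next() advances the cursor and returns the event type
            if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                names.add(reader.getLocalName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws XMLStreamException {
        String xml = "<Greetings><Hello Text=\"Hi!\">Hi, this is the text!</Hello></Greetings>";
        System.out.println(elementNames(xml)); // element names in document order
    }
}
```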
2.2.4 Using the Document Object Model (DOM)
The DOM is a series of Recommendations produced by the World Wide Web Consortium (W3C). DOM began as a way of identifying and manipulating items on an HTML page (DOM Level 0).
The current DOM Recommendation (DOM Level 3) is an API that defines the objects represented in the XML document, as well as the methods and properties that are used to access and manipulate them.
Beginning with Level 1 DOM, the DOM API contains interfaces that represent all kinds of information that can be found in an XML document. It also includes the methods needed to work with these objects. Some of the most common methods of standard DOM interfaces are listed below.
The Node interface is the primary data type of the DOM. It defines a number of useful methods for obtaining data about nodes and navigating through them:
- getFirstChild() and getLastChild() return the first or last child of this node;
- getNextSibling() and getPreviousSibling() return the next or previous sibling of this node;
- getChildNodes() returns a reference to a list of this node's children of the NodeList type; using the NodeList interface methods, you can get the i-th node (the item(i) method) and the total number of such nodes (the getLength() method);
- getParentNode() returns the parent node;
- getAttributes() returns an associative array of the NamedNodeMap type with the attributes of this node;
- hasChildNodes() returns true if the node has children.
There are a number of methods for modifying an XML document: insertBefore(), replaceChild(), removeChild(), appendChild(), etc.
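A short sketch of navigation and modification with the methods listed above (the document content and class name are illustrative):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class NodeDemo {

    // Parses XML from a string, appends one more Number element,
    // and returns the resulting number of children of the root
    public static int appendNumber(String xml, String value) throws Exception {
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = db.parse(new InputSource(new StringReader(xml)));
        Node root = doc.getDocumentElement();
        // Navigation: iterate over the children through the NodeList interface
        NodeList children = root.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            System.out.println(children.item(i).getTextContent());
        }
        // Modification: append one more element to the tree
        Element extra = doc.createElement("Number");
        extra.setTextContent(value);
        root.appendChild(extra);
        return root.getChildNodes().getLength();
    }

    public static void main(String[] args) throws Exception {
        int count = appendNumber("<Numbers><Number>2</Number><Number>3</Number></Numbers>", "5");
        System.out.println("Children after appending: " + count);
    }
}
```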
In addition to Node, DOM also defines several subinterfaces of the Node interface:
- Element represents an XML element in the source document; the element includes a pair of tags (opening and closing) and all the text between them;
- Attr represents an attribute of an element;
- Text represents the element content;
- Document represents the entire XML document; only one Document object exists for each XML document; having the Document object, you can find the root of the DOM tree using the getDocumentElement() method; from the root you can manipulate the entire tree.
Additional types of nodes are:
- Comment represents a comment in an XML file;
- ProcessingInstruction represents a processing instruction;
- CDATASection represents a CDATA section.
XML parsers require the creation of an instance of a particular class. The disadvantage is that when changing the parser, you need to change the source code. For some parsers, you can use so-called factory classes. Using the static newInstance() method, an instance of the factory is created, which in turn creates an object of a class that extends the abstract DocumentBuilder class. Such an object is the parser we need: it implements the DOM methods required to parse and process the XML file. When creating a parser object, exceptions may be thrown that need to be handled. Then you can create an object of the Document type by loading and parsing data from a file with a given name, for example, fileName:
try {
    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
    DocumentBuilder db = dbf.newDocumentBuilder();
    Document doc = db.parse(fileName);
    . . .
After traversing and modifying the tree, you can save it in another file.
The use of DOM will be considered on the example of the previous file (Hello.xml). The following program outputs the text of the attribute to the console, modifies it, and stores the result in a new XML document:
package ua.inf.iwanoff.java.advanced.third;

import java.io.*;

import org.w3c.dom.*;

import javax.xml.parsers.*;
import javax.xml.transform.*;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

public class HelloDOM {
    public static void main(String[] args) throws Exception {
        Document doc;
        // Create a document builder using the factory method:
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        doc = db.parse(new File("Hello.xml"));
        // Find the root tag:
        Node rootNode = doc.getDocumentElement();
        // View all child tags:
        for (int i = 0; i < rootNode.getChildNodes().getLength(); i++) {
            Node currentNode = rootNode.getChildNodes().item(i);
            if (currentNode.getNodeName().equals("Hello")) {
                // View all attributes:
                for (int j = 0; j < currentNode.getAttributes().getLength(); j++) {
                    if (currentNode.getAttributes().item(j).getNodeName().equals("Text")) {
                        // Found the required attribute. Display the text of the attribute (greeting):
                        System.out.println(currentNode.getAttributes().item(j).getNodeValue());
                        // Changing the contents of the attribute:
                        currentNode.getAttributes().item(j).setNodeValue("Hi, there was DOM here!");
                        // Further search is inappropriate:
                        break;
                    }
                }
                // Change the text:
                System.out.println(currentNode.getTextContent());
                currentNode.setTextContent("\n Hi, here was also DOM!\n");
                break;
            }
        }
        // Create a converter object (in this case, to write to a file).
        // We use the factory method:
        Transformer transformer = TransformerFactory.newInstance().newTransformer();
        // Write to file:
        transformer.transform(new DOMSource(doc),
                new StreamResult(new FileOutputStream(new File("HelloDOM.xml"))));
    }
}
After running the program, you will find the following file (HelloDOM.xml) in the project folder:
<?xml version="1.0" encoding="UTF-8" standalone="no"?><Greetings>
<Hello Text="Hi, there was DOM here!">
 Hi, here was also DOM!
</Hello>
</Greetings>
In the above example, the javax.xml.transform.Transformer class is used to save the modified document to a file. In general, this class is used in the implementation of so-called XSLT transformations. XSLT (eXtensible Stylesheet Language Transformations) is a language for converting XML documents into other XML documents or other formats such as HTML, plain text, etc. The XSLT processor accepts one or more XML source documents, as well as one or more stylesheet modules, and processes them to obtain an output document. A transformation contains a set of template rules: instructions and other directives that guide the XSLT processor when generating the output document.
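A minimal sketch of an XSLT transformation driven by the same Transformer class (the stylesheet and class name are illustrative): every Number element is converted into a line of plain text.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltDemo {

    // A minimal stylesheet: each Number element becomes a line of text
    static final String STYLESHEET =
            "<?xml version=\"1.0\"?>" +
            "<xsl:stylesheet version=\"1.0\" " +
            "xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">" +
            "<xsl:output method=\"text\"/>" +
            "<xsl:template match=\"Number\">" +
            "<xsl:value-of select=\".\"/><xsl:text>&#10;</xsl:text>" +
            "</xsl:template>" +
            "</xsl:stylesheet>";

    public static String transform(String xml) throws Exception {
        // A Transformer created from a stylesheet performs the XSLT transformation
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(STYLESHEET)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transform("<Numbers><Number>2</Number><Number>3</Number></Numbers>"));
    }
}
```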
2.2.5 Use of Document Type Definition and XML Schema
Structured data stored in an XML document needs additional information about the rules of element ordering. The two most commonly used ways of representing the structure are the Document Type Definition (DTD) and XML Schema (XSD).
DTD (Document Type Definition) is a simple set of rules describing the structure of XML documents of a particular type. A DTD is not an XML document itself. DTD is very simple, but it does not describe the types of elements. The DTD directives can be present both in the header of the XML document itself (an internal DTD) and in a separate file (an external DTD). The presence of a DTD is optional.
For example, we have the following XML file:
<?xml version="1.0" encoding="UTF-8"?>
<Pairs>
    <Pair>
        <x>1</x>
        <y>4</y>
    </Pair>
    <Pair>
        <x>2</x>
        <y>2</y>
    </Pair>
    . . .
</Pairs>
The DTD file that describes the structure of this document will look like this:
<?xml version="1.0" encoding="UTF-8"?>
<!ELEMENT Pair (x, y)>
<!ELEMENT x (#PCDATA)>
<!ELEMENT y (#PCDATA)>
<!ELEMENT Pairs (Pair+)>
The plus sign in the last line indicates that the Pairs tag can contain one or more Pair elements inside. In addition, you can also use * (zero or more) and a question mark (zero or one). The absence of a sign means that exactly one element must be present.
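For reference, the same rules can also be placed in the header of the document itself as an internal DTD; in this sketch the ? indicator on y is added purely to illustrate the "zero or one" case, so it differs slightly from the external DTD above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Pairs [
  <!ELEMENT Pairs (Pair+)>
  <!ELEMENT Pair (x, y?)>
  <!ELEMENT x (#PCDATA)>
  <!ELEMENT y (#PCDATA)>
]>
<Pairs>
    <Pair> <x>1</x> <y>4</y> </Pair>
    <Pair> <x>2</x> </Pair>
</Pairs>
```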
XML Schema is an alternative to the DTD method for describing a document structure. The schema is more convenient than DTD in that the description of the document structure is performed in XML itself. In addition, the capabilities of XML Schema significantly exceed those of DTD. For example, in a schema you can specify the types of tags and attributes, define restrictions, and more.
An XML document that is well-formed and also fully conforms to the grammatical rules of its schema (or DTD) is called a valid document.
In order to prevent conflicts of tag names, XML allows you to create so-called namespaces. A namespace defines a prefix associated with a particular schema of the document; the prefix is attached to the tags. A custom namespace is defined using the following construct:
<root xmlns:pref="http://www.someaddress.org/">
In this example, root is the root tag of the XML document, pref is the prefix that denotes the namespace, and "http://www.someaddress.org/" is some address, such as the domain name of the author of the schema. Applications that handle XML documents never check this address. It is only needed to ensure the uniqueness of the namespace.
The schema itself uses the xs namespace.
The use of a document schema can be demonstrated with the following example. Suppose we have the following XML file:
<?xml version="1.0" encoding="Windows-1251" ?>
<Student Name="John" Surname="Smith">
    <Marks>
        <Mark Subject="Mathematics" Value="B"/>
        <Mark Subject="Physics" Value="A"/>
        <Mark Subject="Programming" Value="C"/>
    </Marks>
    <Comments>
        Strange student
    </Comments>
</Student>
Creating a schema file should start with a standard construct:
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">

</xs:schema>
The information about the document structure must be placed between the <xs:schema> and </xs:schema> tags. In order to describe the tags of a document, you add standard tags inside it. For complex tags that contain nested tags or have attributes:
<xs:element name="tag_name">
    <xs:complexType>
        . . .
    </xs:complexType>
</xs:element>
Inside the tag you can place a list of items:
<xs:sequence>
    . . .
</xs:sequence>
The reference to another tag:
<xs:element ref="tag_name"/>
The following element contains data:
<xs:element name="tag_name" type="type_name"/>
The following table contains some standard data types used in schemas:

| Name | Description |
| --- | --- |
| xs:string | a string value that contains a sequence of Unicode characters, including spaces, tabs, LF and CR symbols |
| xs:integer | an integer value |
| xs:boolean | binary logical values: true or false, 1 or 0 |
| xs:float | a 32-bit floating point value |
| xs:double | a 64-bit floating point value |
| xs:anyURI | a Uniform Resource Identifier |
An attribute is described with the following tag:
<xs:attribute name="attribute_name" type="type_name"/>
There is also a large number of additional tag parameters. The maxOccurs parameter specifies the maximum number of occurrences of the element, minOccurs specifies the minimum number of occurrences, unbounded determines an unlimited number of occurrences, required specifies a mandatory entry, mixed specifies an element that has a mixed type, and so on.
We can offer the following schema file for our student (Student.xsd):
<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="Student">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="Marks">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element ref="Mark" maxOccurs="unbounded"/>
                        </xs:sequence>
                    </xs:complexType>
                </xs:element>
                <xs:element name="Comments" type="xs:string"/>
            </xs:sequence>
            <xs:attribute name="Name" type="xs:string" />
            <xs:attribute name="Surname" type="xs:string" />
        </xs:complexType>
    </xs:element>
    <xs:element name="Mark">
        <xs:complexType>
            <xs:attribute name="Subject" type="xs:string" />
            <xs:attribute name="Value" type="xs:string" />
        </xs:complexType>
    </xs:element>
</xs:schema>
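Although not required by the task, a document can be checked against such a schema programmatically with the javax.xml.validation package (a sketch; the isValid() helper and the file names are illustrative):

```java
import java.io.File;
import java.io.IOException;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

public class SchemaCheck {

    // Returns true if the document conforms to the schema
    public static boolean isValid(File xml, File xsd) {
        try {
            SchemaFactory factory =
                    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(xsd);
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(xml));
            return true; // no exception: the document is valid
        } catch (SAXException | IOException e) {
            return false; // validation error or missing file
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid(new File("Student.xml"), new File("Student.xsd")));
    }
}
```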
2.3 Use of Build Automation Tools
2.3.1 Overview
Build automation provides for the use of special tools that automatically track dependencies between files within the project. Build automation also involves performing typical actions such as:
- source code compiling;
- programs assembly from individual parts;
- preparation of documentation;
- creation of a JAR archive;
- deployment of the program.
Integrated development environments (IDEs) most often take over project build management. However, these tools are usually limited and not compatible across different IDEs. Sometimes the need to transfer a project to another IDE occurs. In addition, it is convenient to describe and fix a sequence of actions over the project artifacts for performing a typical set of development tasks.
Independent build automation tools offer an alternative. The most popular build tools are Apache Ant, Gradle, and Apache Maven. Apache Ant is a Java-based set of tools for automating the software build process, compatible with different platforms. It is an Apache Software Foundation project. The build process is managed by an XML scenario, the so-called build file (build.xml by default), which corresponds to certain rules.
Actions that can be performed with Ant are described by targets. Targets may depend on each other: if another target should be performed before a certain target, you can declare a dependency of one target on another. Targets contain invocations of tasks. Every task is a command that performs some elementary action. There are several predefined tasks designed to describe typical actions: compiling with javac, running a program, creating a JAR, deployment, etc.
There is a possibility of extending the set of Ant tasks. Ant tasks also include work with the file system (creating directories, copying and deleting files), documentation generation, etc.
Today, Ant has become less popular compared with Gradle and Maven because of its limitations. In addition, compared to Maven, Ant offers an imperative (command) approach to the project description: the developer must describe the sequence of actions performed during the build rather than the expected result.
The Gradle build automation tool was first released in 2007 under the Apache License 2.0; stable versions have been issued since 2012. Gradle uses concepts of Apache Ant and Apache Maven, but instead of XML it uses a DSL built on the Groovy language syntax. Gradle is widely used in Android development. Gradle is available as a separate download, but it also comes bundled with products such as Android Studio.
2.3.2 Apache Maven
Apache Maven is a build automation tool that uses XML syntax to specify the build options, but compared with Ant it provides a higher level of automation. Maven has been created and published by the Apache Software Foundation since 2004. To determine the build options, a POM (Project Object Model) is used. Unlike Apache Ant, Maven provides a declarative rather than imperative description of a project: the pom.xml project file contains a declarative description of what we want to get, not separate commands.
Like Ant, Maven allows you to start the compiling processes, the creation of JAR files, documentation generation, etc.
The most important function of Maven is the management of dependencies that are present in projects using third-party libraries (which, in turn, use other third-party libraries). Maven also allows you to resolve library version conflicts.
Maven is based on a plugin architecture that allows you to apply plugins for various tasks (compile, test, build, deploy, checkstyle, pmd, scp-transfer) without having to install them in a specific project. There is a large number of plugins developed for different purposes.
Information for Maven project support is contained in a pom.xml file, which specifies the dependencies of the Maven-controlled package on other packages and libraries.
IntelliJ IDEA has built-in support for Maven projects. To create a new project with Maven support in the New Project window, select Maven on the left side. For the project, the JDK version (Project SDK) is determined; suppose it is JDK 11. In addition, the project can be created based on an archetype. For the first Maven project, you can do without archetypes.
On the next page of the wizard, we select the Name (for example, HelloMaven), Location and Artifact Coordinates, which include the so-called GAV (groupId, artifactId, version).
- groupId: a reference to the author or organization (subdivision) where the project has been created; the corresponding identifier is built by the rules for constructing package names: an inverted domain name;
- artifactId: the project name; it does not necessarily have to coincide with the name of the IntelliJ IDEA project, but using the same name is desirable; when creating a project, this field is automatically filled with the project name;
- version: the project version; by default, 1.0-SNAPSHOT is suggested, that is, the first version of the project, which is still under development.
For our first project, IntelliJ IDEA automatically creates a pom.xml file with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>ua.inf.iwanoff.java.advanced.second</groupId>
    <artifactId>HelloMaven</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>
</project>
The properties section specifies the JDK version (JDK 11).
Let's take a look at the project structure. This is a typical Maven project structure. At the project level, the src folder is created with this structure:
src
    main
        java
        resources
    test
        java
The src directory is the root directory of the source code and the code of test classes. The main directory is the root directory for source code that concerns the application directly (without tests). The test directory contains the source code of test classes. Packages of the source code are allocated in subdirectories of the java directory.
The resources directory contains other resources required for the project. This can be properties used to internationalize programs, GUI markup files, styles, and so on.
After compiling the project, a target directory with compiled classes will be added to the project structure.
The Maven tool window, whose shortcut is usually located on the right, contains a list of Maven commands that provide the life cycle of the project:
- clean – project cleaning and deleting all files created by the previous build;
- validate – checking the correctness of meta-information about the project;
- compile – compiling the project;
- test – testing with JUnit;
- package – creation of a jar, war or ear archive;
- verify – verifying the correctness of the package and compliance with quality requirements;
- install – installation (copying) of .jar, .war or .ear files into the local repository;
- site – site generation;
- deploy – project publication in a remote repository.
Note: if you use Maven outside of IntelliJ IDEA, these commands are entered at a command prompt, for example: mvn clean. To use Maven without IntelliJ IDEA, it should be downloaded and installed separately.
Some commands require previous lifecycle phases for their successful execution. For example, package
involves performing compile. The previous phases are executed automatically. Execution of commands produces
typical Maven messages in a console window.
In the java folder, we can find the appropriate package and a class with the main() method:
package ua.inf.iwanoff.java.advanced.third;

public class Main {
    public static void main(String[] args) {
        System.out.println("Hello world!");
    }
}
Class code can be changed:
package ua.inf.iwanoff.java.advanced.third;

public class Main {
    public static int multiply(int i, int k) {
        return i * k;
    }

    public static void main(String[] args) {
        System.out.println("Hello, Maven!");
        System.out.println("2 * 2 = " + multiply(2, 2));
    }
}
There is no Maven command that directly executes the program. To execute the program, the IntelliJ IDEA Run menu functions should be used; the Maven goals that cover the necessary lifecycle phases are performed automatically.
Note: a set of standard Maven commands can be extended with the plugin mechanism.
A very important function of Maven is dependency management. Usually a real project uses numerous libraries, and to connect them, JAR files must be downloaded. These libraries in turn are based on other libraries, which also need to be loaded. A separate problem arises with library versions and their compatibility.
Maven provides a simple declarative approach to dependency management. It is enough to add information about the
required library in the <dependencies>
section. For example, to test our project, it is advisable
to use JUnit 5. You can, of course, add the necessary dependency manually, but it is better to use IntelliJ IDEA
interactive capabilities. By selecting the pom.xml
file window, you can click Alt+Insert,
then in the Generate list you should select Dependency, then in the Search For Artifact dialog
box type junit
and select org.junit.jupiter:junit-jupiter-api:5.8.1
. The necessary <dependencies> group will be added:

<dependencies>
    <dependency>
        <groupId>org.junit.jupiter</groupId>
        <artifactId>junit-jupiter-api</artifactId>
        <version>5.8.1</version>
    </dependency>
</dependencies>
At first, this data is marked as an error: the Maven project needs to be reloaded. In the Maven tool
window, find and press the first button (Reload All Maven Projects). The errors in pom.xml
should disappear.
Now you can add a test. Use the Generate... | Test... context menu function. A parallel hierarchy of packages,
as well as the necessary class, will be added to the test branch of the project tree.
2.4 Working with the File System
2.4.1 Overview
Java allows you to work not only with files, but also with the file system as a whole. A file system is a set of principles and mechanisms used by the operating system for storing information in the form of files on information media. The term is also used to denote the set of files and directories (folders) placed on a logical or physical device.
The typical functions of working with the file system are:
- checking the existence of a file or directory;
- getting the list of files and subdirectories of a specified directory;
- creating files and links to files;
- copying files;
- renaming and moving files;
- managing file attributes;
- deleting files;
- traversing the tree of subdirectories;
- tracking file changes.
Java offers two approaches to working with the file system:
- using the java.io.File class;
- using the java.nio.file package.
2.4.2 Using File Class
The java.io package provides the ability to work both with file contents and with the file system
as a whole. This functionality is implemented by the File class. To create an object of this class, the (full or
relative) path to the file should be passed as a parameter of the constructor. For example:
File dir = new File("C:\\Users");
File currentDir = new File("."); // Project folder (current)
The File class contains methods for obtaining the list of files of the specified folder (list(), listFiles()),
obtaining and modifying file attributes (setLastModified(), setReadOnly(), isHidden(), isDirectory(), etc.),
creating a new file (createNewFile(), createTempFile()), creating directories (mkdir()),
deleting files and folders (delete()), and many more. The work of some of these methods is demonstrated by the following example:
package ua.inf.iwanoff.java.advanced.third;

import java.io.*;
import java.util.*;

public class FileTest {
    public static void main(String[] args) throws IOException {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter the name of the folder you want to create: ");
        String dirName = scanner.next();
        File dir = new File(dirName);
        // Create a new folder:
        if (!dir.mkdir()) {
            System.out.println("Unable to create a folder!");
            return;
        }
        // Create a new file inside a new folder:
        File file = new File(dir + "\\temp.txt");
        file.createNewFile();
        // Display a list of files in the folder:
        System.out.println(Arrays.asList(dir.list()));
        file.delete(); // delete the file
        dir.delete();  // delete the folder
    }
}
The list() function without parameters allows you to obtain an array of strings that contains all
files and subdirectories of the folder defined when creating the object of type File. It returns relative
file names (without paths). In the following example, we get the list of files and subdirectories of the folder whose
name is entered from the keyboard:
package ua.inf.iwanoff.java.advanced.third;

import java.io.File;
import java.util.Scanner;

public class ListOfFiles {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter folder name: ");
        String dirName = scanner.next();
        File dir = new File(dirName);
        if (!dir.isDirectory()) {
            System.out.println("Invalid folder name!");
            return;
        }
        String[] list = dir.list();
        for (String name : list) {
            System.out.println(name);
        }
    }
}
Unlike list()
, the listFiles()
function returns an array of objects of type File
.
This provides additional features: getting full file names, checking file attributes, working with folders, etc.
These additional features will be shown in the following example:
File[] list = dir.listFiles();
// Outputs file data in the default form:
for (File file : list) {
    System.out.println(file);
}
// The full path is displayed:
for (File file : list) {
    System.out.println(file.getCanonicalPath());
}
// Only subdirectories are displayed:
for (File file : list) {
    if (file.isDirectory())
        System.out.println(file.getCanonicalPath());
}
To determine the filter mask, you should create an object of a class that implements the FilenameFilter
interface.
In the following example, we get a list of files and subdirectories whose names begin with the letter 's'
:
String[] list = dir.list(new FilenameFilter() {
    @Override
    public boolean accept(File dir, String name) {
        return name.toLowerCase().charAt(0) == 's';
    }
});
for (String name : list) {
    System.out.println(name);
}
A similar parameter of type FilenameFilter
can be applied to the listFiles()
function.
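Since FilenameFilter declares a single abstract method, the anonymous class shown above can be replaced with a lambda expression. The following sketch (the filter condition and class name are arbitrary assumptions) lists only the files with the .txt extension:

```java
import java.io.File;

public class FilterDemo {
    // Returns the names of .txt files in the given directory,
    // or null if the directory cannot be listed
    public static String[] textFiles(File dir) {
        // FilenameFilter is a functional interface, so a lambda can be used
        return dir.list((d, name) -> name.toLowerCase().endsWith(".txt"));
    }

    public static void main(String[] args) {
        String[] names = textFiles(new File(".")); // current (project) directory
        if (names != null) {
            for (String name : names) {
                System.out.println(name);
            }
        }
    }
}
```

The same lambda can be passed to listFiles() to obtain File objects instead of names.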
2.4.3 Working with java.nio Package
The java.nio
package that appeared in JDK 1.4 originally included alternative I/O tools. Compared
to traditional I/O streams, java.nio
provides a higher efficiency of I/O operations. This is achieved
due to the fact that traditional I/O tools work with data in streams, while java.nio
works with data
in blocks. Central objects in java.nio
are Channel
and Buffer
. Channels are
similar to streams in the java.io
package. Buffer is a container object. All data that is transmitted
to the channel must first be placed in the buffer. Any data that is read from the channel is read into the buffer.
The java.nio
means are effective when working with binary files.
The Java 7 version provides an alternative approach to working with the file system: a set of classes described
in the java.nio.file package. This package provides the Path interface
to represent a path in the file system. Separate components of this path can be represented by a certain collection
of intermediate subdirectories and the name of the file (subdirectory) itself. You can obtain a Path
object using the static get() method of the Paths class, which accepts
the path string:
Path path = Paths.get("c:/Users/Public");
Now you can get information about the path:
System.out.println(path.toString());     // c:\Users\Public
System.out.println(path.getFileName());  // Public
System.out.println(path.getName(0));     // Users
System.out.println(path.getNameCount()); // 2
System.out.println(path.subpath(0, 2));  // Users\Public
System.out.println(path.getParent());    // c:\Users
System.out.println(path.getRoot());      // c:\
After a Path object is created, it can be used as an argument of the static functions of the java.nio.file.Files class.
To check the presence (or absence) of a file, the exists() and notExists() functions
are used, respectively:
Path dir = Paths.get("c:/Windows");
System.out.println(Files.exists(dir));    // most likely true
System.out.println(Files.notExists(dir)); // most likely false
Two separate functions exist because the result can also be indeterminate (when access to the file is prohibited).
To make sure that the program can get the necessary access to the file, you can use isReadable(Path)
, isWritable(Path)
and isExecutable(Path)
methods.
Suppose a Path object is created and the path to the file is set. The following code fragment
checks whether a specific file exists as a regular file and whether it can be read and executed:

boolean isRegularExecutableFile = Files.isRegularFile(file)
        && Files.isReadable(file) && Files.isExecutable(file);
To obtain metadata (data on files and directories), the Files
class provides a number of static methods:
| Methods | Explanation |
|---|---|
| size(Path) | Returns the size of the specified file in bytes |
| isDirectory(Path, LinkOption...) | Returns true if the specified Path indicates a directory |
| isRegularFile(Path, LinkOption...) | Returns true if the specified Path indicates a regular file |
| isHidden(Path) | Returns true if the specified Path indicates a hidden file |
| getLastModifiedTime(Path, LinkOption...) / setLastModifiedTime(Path, FileTime) | Gets / sets the time of the last modification of the specified file |
| getOwner(Path, LinkOption...) / setOwner(Path, UserPrincipal) | Gets / sets the owner of the file |
| getAttribute(Path, String, LinkOption...) / setAttribute(Path, String, Object, LinkOption...) | Gets / sets the file attribute value |
For various versions of MS Windows, the attribute string should begin with the "dos:" prefix.
For example, you can set necessary attributes to some file:
Path file = ... Files.setAttribute(file, "dos:archive", false); Files.setAttribute(file, "dos:hidden", true); Files.setAttribute(file, "dos:readonly", true); Files.setAttribute(file, "dos:system", true);
You can also read the required attributes using the readAttributes() method. This method takes metadata
about the resulting type, obtained from its class field, as the second parameter (such metadata
will be considered later). The most appropriate resulting type is java.nio.file.attribute.BasicFileAttributes.
For example, you can get some file data:
package ua.inf.iwanoff.java.advanced.third;

import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Scanner;

public class Attributes {
    public static void main(String[] args) throws Exception {
        System.out.println("Enter file or directory name:");
        Path path = Paths.get(new Scanner(System.in).nextLine());
        BasicFileAttributes attr = Files.readAttributes(path, BasicFileAttributes.class);
        System.out.println("Time of creation: " + attr.creationTime());
        System.out.println("Last access time: " + attr.lastAccessTime());
        System.out.println("Last change time: " + attr.lastModifiedTime());
        System.out.println("Directory: " + attr.isDirectory());
        System.out.println("Regular file: " + attr.isRegularFile());
        System.out.println("Size: " + attr.size());
    }
}
The DosFileAttributes interface, derived from BasicFileAttributes, also provides the isReadOnly(), isHidden(), isArchive(), and isSystem() methods.
In contrast to the java.io tools for working with the file system, the java.nio.file.Files class
provides a copy() function for copying files. For example:
Files.copy(Paths.get("c:/autoexec.bat"), Paths.get("c:/Users/autoexec.bat"));
Files.copy(Paths.get("c:/autoexec.bat"), Paths.get("c:/Users/autoexec.bat"),
        StandardCopyOption.REPLACE_EXISTING);
There are also StandardCopyOption.ATOMIC_MOVE
and StandardCopyOption.COPY_ATTRIBUTES
options.
Options can be listed separated by commas.
To move files, use the move() function (with or without similar options). Renaming is performed
by the same function:
Files.move(Paths.get("c:/Users/autoexec.bat"), Paths.get("d:/autoexec.bat"));  // moving
Files.move(Paths.get("d:/autoexec.bat"), Paths.get("d:/unnecessary.bat"));     // renaming
A new directory can be created using the createDirectory() function of the Files class.
The function parameter is of type Path:
Path dir = Paths.get("c:/NewDir");
Files.createDirectory(dir);
To create a directory of several levels in depth, when one or more intermediate directories may not yet exist,
you can use the createDirectories()
method:
Path dir = Paths.get("c:/NewDir/1/2");
Files.createDirectories(dir);
To get the list of files located in a directory, you can use the DirectoryStream interface:
package ua.inf.iwanoff.java.advanced.third;

import java.io.IOException;
import java.nio.file.*;

public class FileListDemo {
    public static void main(String[] args) {
        Path dir = Paths.get("c:/Windows");
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path p : ds) {
                System.out.println(p.getFileName());
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Deleting files and folders is carried out using the delete()
and deleteIfExists()
functions:
Files.delete(Paths.get("d:/unnecessary.bat"));
Files.deleteIfExists(Paths.get("d:/unnecessary.bat"));
To traverse the directory tree, the java.nio.file package provides functions that do not require recursive
algorithms. The walkFileTree() method of the Files class ensures the traversal
of the subdirectory tree. As parameters, you should specify the initial directory (an object of type Path),
as well as an object that implements the generic FileVisitor interface.
Note: there is an overloaded method that additionally allows you to set traversal options and a restriction on the depth of subdirectories.
To implement the FileVisitor
interface, you need to define preVisitDirectory()
, postVisitDirectory()
, visitFile()
and visitFileFailed()
methods.
These functions return values of the FileVisitResult enumeration. The possible values of this
enumeration are CONTINUE, TERMINATE, SKIP_SUBTREE, and SKIP_SIBLINGS.
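For example, returning SKIP_SUBTREE from preVisitDirectory() excludes a directory and everything below it from the traversal. The following sketch is a hypothetical illustration (the class name, helper method, and the skipped directory name are assumptions):

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.List;

public class SkipDemo {
    // Collects all files under root, skipping directories with the given name
    public static List<Path> filesExcept(Path root, String skippedDir) throws IOException {
        List<Path> result = new ArrayList<>();
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) {
                // Do not descend into directories with the skipped name
                if (dir.getFileName() != null && dir.getFileName().toString().equals(skippedDir)) {
                    return FileVisitResult.SKIP_SUBTREE;
                }
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                result.add(file);
                return FileVisitResult.CONTINUE;
            }
        });
        return result;
    }

    public static void main(String[] args) throws IOException {
        filesExcept(Path.of("."), ".git").forEach(System.out::println);
    }
}
```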
To avoid implementing all the FileVisitor interface methods each time, you can extend the generic SimpleFileVisitor class
instead. This class provides a default implementation of
the interface functions, so you only need to override the necessary ones. The following example searches
for all files of the specified directory and its subdirectories:
package ua.inf.iwanoff.java.advanced.third;

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Scanner;

public class FindAllFiles {
    private static class Finder extends SimpleFileVisitor<Path> {
        @Override
        public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
            System.out.println(file);
            return FileVisitResult.CONTINUE;
        }

        @Override
        public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
            System.out.println("----------------" + dir + "----------------");
            return FileVisitResult.CONTINUE;
        }
    }

    public static void main(String[] args) {
        String dirName = new Scanner(System.in).nextLine();
        try {
            Files.walkFileTree(Paths.get(dirName), new Finder());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
You can use so-called "glob" patterns, actively used in all operating systems, to search for files.
Examples of such patterns: "a*.*" (file names starting with the letter a), "*.txt" (files
with the .txt extension), etc. Suppose the pattern string contains some glob pattern. Now you create
a PathMatcher object:
PathMatcher matcher = FileSystems.getDefault().getPathMatcher("glob:" + pattern);
The following program searches the specified directory for files matching the specified pattern:
package ua.inf.iwanoff.java.advanced.third;

import java.io.IOException;
import java.nio.file.*;
import java.util.Scanner;

public class FindMatched {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        String dirName = scanner.nextLine();
        String pattern = scanner.nextLine();
        Path dir = Paths.get(dirName);
        PathMatcher matcher = FileSystems.getDefault().getPathMatcher("glob:" + pattern);
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path file : ds) {
                if (matcher.matches(file.getFileName())) {
                    System.out.println(file.getFileName());
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Patterns can be combined with directory traversal.
One of the tasks of working with the file system is to track the state of a specified directory. For example, the program
must update the data about the files and subdirectories of some directory if other processes or threads create, change, or delete
files and folders. The java.nio.file package provides tools for registering such directories
and tracking their status. To track changes, you use the WatchService interface; a suitable implementation
can be obtained using the FileSystems.getDefault().newWatchService() method invocation. The StandardWatchEventKinds class
provides the necessary constants for possible events.
You must first register the necessary directory and then, in an infinite loop, read information about the events
related to its changes. The WatchEvent interface provides a description of a possible event. For example,
we can offer the following program:
package ua.inf.iwanoff.java.advanced.third;

import java.nio.file.*;
import java.util.Scanner;

import static java.nio.file.StandardWatchEventKinds.*;

public class WatchDir {
    public static void main(String[] args) throws Exception {
        System.out.println("Enter directory name:");
        Path dir = Paths.get(new Scanner(System.in).nextLine());
        // Create an object of WatchService type:
        WatchService watcher = FileSystems.getDefault().newWatchService();
        // Register monitored events:
        WatchKey key = dir.register(watcher, ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY);
        while (true) { // endless loop
            key = watcher.take(); // wait for the next set of events
            for (WatchEvent<?> event : key.pollEvents()) {
                WatchEvent<Path> ev = (WatchEvent<Path>) event;
                System.out.printf("%s: %s\n", ev.kind().name(), dir.resolve(ev.context()));
            }
            key.reset(); // reset the status of the event set
        }
    }
}
The java.nio.file library supports work with symbolic links (symlinks, soft links) and hard
links. The createSymbolicLink(new_link, existing_object) method of the Files class
creates a symbolic link, and the createLink(new_link, existing_file) method creates a hard link. The isSymbolicLink() method
returns true if its argument is a symbolic link. The readSymbolicLink() method
allows you to find the object referenced by a symbolic link.
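A minimal sketch of these methods is shown below. Note that creating symbolic links may require elevated privileges (in particular on Windows), so the operation is wrapped in a try block; the file names are arbitrary:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LinkDemo {
    public static void main(String[] args) throws IOException {
        Path target = Files.createTempFile("target", ".txt");
        Path link = target.resolveSibling("link-" + target.getFileName());
        try {
            // May throw on file systems or accounts that do not allow symlinks
            Files.createSymbolicLink(link, target);
            System.out.println(Files.isSymbolicLink(link));   // expected: true
            System.out.println(Files.readSymbolicLink(link)); // the target path
            Files.delete(link);
        } catch (UnsupportedOperationException | IOException e) {
            System.out.println("Symbolic links are not supported here: " + e);
        } finally {
            Files.delete(target);
        }
    }
}
```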
2.5 Using java.nio Tools for Reading and Writing Data
2.5.1 Overview
Compared to traditional I/O streams, java.nio provides higher efficiency of I/O operations. This is
achieved due to the fact that traditional I/O tools work with data in streams, while java.nio works
with data in blocks. The central objects in java.nio are Channel and Buffer.
- Channels are used to provide data transfer, for example, between files and buffers. In addition to working with files, channels are also used to work with datagrams and sockets.
- A buffer is a container object. All data transmitted to the channel must first be buffered.
Any data that is read from the channel goes into the buffer. Additional efficiency is provided through the integration of buffer objects with operating system buffers.
The java.nio tools are effective when working with binary files, first of all in multithreaded tasks,
where special selectors are used.
2.5.2 Using the Files Class to Work with Text Files
In addition to complex mechanisms for working with channels, buffers and selectors, the package java.nio.file
provides
simple means of reading from text files and writing to text files.
The following static functions can be used to read data:
- readString() reads the entire contents of the specified file (a variable of type Path) into a string;
- readAllLines() reads all lines from the specified file into a list.
So, for example, you can read the entire contents of a file into a string:

Path path = Paths.get("SomeFile.txt");
String s = Files.readString(path);
And this is how you can read all lines of a text file:
Path path = Paths.get("SomeFile.txt");
List<String> lines = Files.readAllLines(path);
for (String s : lines) {
    System.out.println(s);
}
The following functions are used for writing:
- writeString() writes a string to the specified file;
- write() is a more universal function that writes an array of bytes.
An example of using writeString()
:
Path path = Paths.get("newFile.txt");
String question = "To be or not to be?";
Files.writeString(path, question);
An example of using the function write()
:
Path path = Paths.get("newFile.txt");
String question = "To be or not to be?";
Files.write(path, question.getBytes());
Additional options can be specified for reading and writing.
There are also static functions for interacting with java.io
streams.
2.6 Using Streams to Work with Text Files
Stream API streams are integrated with text file processing through the java.nio.file tools.
We will demonstrate the capabilities of reading from text files by reading from a file called source.txt
.
Suppose such a file is located in the project folder and has the following content:
First
Second
Third
The static lines() method of the Files class is used to read lines from a text file and create
a stream. In the following example, all lines of the file source.txt are read and output to the console. It is advisable
to place the stream creation in a try-with-resources block:
try (Stream<String> strings = Files.lines(Path.of("source.txt"))) {
    strings.forEach(System.out::println);
} catch (IOException e) {
    throw new RuntimeException(e);
}
Note: if it were necessary to work with only one line, or with a part of the lines, only the necessary lines would be read from the file.
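This laziness can be demonstrated with the limit() operation: only the requested number of lines is actually consumed from the file. A small sketch (the class and helper method names are assumptions):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

public class LazyLines {
    // Returns at most n first lines of the file; the rest of the stream is never consumed
    public static List<String> firstLines(Path path, long n) throws IOException {
        try (Stream<String> lines = Files.lines(path)) {
            return lines.limit(n).toList();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(firstLines(Path.of("source.txt"), 2));
    }
}
```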
The same results can be obtained using the java.io.BufferedReader
class:
try (BufferedReader bufferedReader = Files.newBufferedReader(Paths.get("source.txt"))) {
    Stream<String> strings = bufferedReader.lines();
    strings.forEach(System.out::println);
} catch (IOException e) {
    throw new RuntimeException(e);
}
You can also create a list:
List<String> list = Files.readAllLines(Path.of("source.txt"));
Stream<String> lines = list.stream();
To write to a file, you can use the Files.write()
function:
Stream<String> digits = Stream.of("1", "3", "2", "4");
Files.write(Path.of("digits.txt"), digits.toList());
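Reading, intermediate operations, and writing can be combined in one pipeline. The following sketch (class name and file names are arbitrary assumptions) sorts the lines of a file by increasing length and writes only those containing the letter "a" to another file:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Stream;

public class SortAndFilter {
    // Sorts lines by increasing length, then keeps only lines containing "a"
    public static List<String> process(Path source) throws IOException {
        try (Stream<String> lines = Files.lines(source)) {
            return lines.sorted(Comparator.comparingInt(String::length))
                        .filter(s -> s.contains("a"))
                        .toList();
        }
    }

    public static void main(String[] args) throws IOException {
        Files.write(Path.of("target.txt"), process(Path.of("source.txt")));
    }
}
```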
Example 3.2 demonstrates working with files in conjunction with the Stream API.
2.7 Working with JSON Files
2.7.1 JSON File Format and its Features
JSON is a lightweight data exchange format, which is mainly used to exchange data between computers at different levels of interaction. The name JSON is short for JavaScript Object Notation. Although the JSON syntax is the syntax for JavaScript objects, JSON files can be used separately from JavaScript. Currently, work with this format is supported by many programming languages.
JSON can be seen as a lightweight and modern alternative to XML. XML documents and JSON files have many features in common:
- files are always textual;
- the format does not require additional explanations for the person;
- data provides for the possibility of hierarchical representation.
But unlike XML documents, JSON files are shorter, easier to read, and offer some additional features.
Suppose the following XML document was previously created:
<students>
    <student>
        <firstName>Frodo</firstName>
        <lastName>Baggins</lastName>
    </student>
    <student>
        <firstName>Samwise</firstName>
        <lastName>Gamgee</lastName>
    </student>
</students>
The corresponding JSON file will look like this (students.json
):
{"students": [
    {
        "firstName": "Frodo",
        "lastName": "Baggins"
    },
    {
        "firstName": "Samwise",
        "lastName": "Gamgee"
    }
]}
The above file contains the main elements of JSON syntax:
- the "students" array, the elements of which are enclosed in square brackets;
- objects located in braces;
- strings.
In addition to strings, values can be numbers, boolean values (false
and true
)
and null
.
2.7.2 Using the org.json Library to Work with JSON Files
The org.json library was introduced in late 2010 and was originally implemented by Douglas Crockford, the author of JSON. Therefore, this library can be considered as a reference implementation for JSON in Java.
The easiest way to connect the org.json library to the project is to add the necessary dependency to the
pom.xml
file:
<dependencies>
    <dependency>
        <groupId>org.json</groupId>
        <artifactId>json</artifactId>
        <version>20230227</version>
    </dependency>
</dependencies>
Note: the current version of the library may change.
According to the types of values in the JSON file, the library defines the types JSONObject
and JSONArray
. There
are different ways to create an object JSONObject
. You can create it manually, for example:
JSONObject someObject = new JSONObject()
        .put("number", 10)
        .put("object", new JSONObject()
                .put("greetings", "Hello"))
        .put("array", new JSONArray()
                .put(12.95)
                .put("Some text"));
You can read it from a string:

JSONObject theSame = new JSONObject(new JSONTokener("""
        {
          "number": 10,
          "object": {
            "greetings": "Hello"
          },
          "array": [
            12.95,
            "Some text"
          ]
        }
        """));
To read data from an existing JSON file, you can use the static readAllBytes() function of
the Files class. After creating a JSONObject, you can divide it into separate objects and
arrays. You can use the FileWriter class to write data to a new file. We will consider these actions
in an example.
A file called students.json
was previously created:
{"students": [
    {
        "firstName": "Frodo",
        "lastName": "Baggins"
    },
    {
        "firstName": "Samwise",
        "lastName": "Gamgee"
    }
]}
After reading data from this file, we can add two more students and write to a new JSON file. The program code can be as follows:
package ua.inf.iwanoff.java.advanced.third;

import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.json.JSONArray;
import org.json.JSONObject;

public class JsonTest {
    public static void main(String[] args) throws IOException {
        JSONObject object = new JSONObject(new String(Files.readAllBytes(Paths.get("students.json"))));
        System.out.println(object.toString(1));
        JSONArray students = object.getJSONArray("students");
        for (int i = 0; i < students.length(); i++) {
            JSONObject student = students.getJSONObject(i);
            System.out.println(" - " + student.getString("firstName"));
        }
        students.put(new JSONObject().put("firstName", "Merry").put("lastName", "Brandybuck"));
        students.put(new JSONObject().put("firstName", "Pippin").put("lastName", "Took"));
        try (FileWriter file = new FileWriter("newStudents.json")) {
            file.write(object.toString(1));
        }
    }
}
Using the toString(1)
method allows you to get a formatted JSON file:
{"students": [
 {
  "firstName": "Frodo",
  "lastName": "Baggins"
 },
 {
  "firstName": "Samwise",
  "lastName": "Gamgee"
 },
 {
  "firstName": "Merry",
  "lastName": "Brandybuck"
 },
 {
  "firstName": "Pippin",
  "lastName": "Took"
 }
]}
There are also other libraries for working with JSON files, e.g. Gson (from Google), Jackson, JSON-P, JSON-B, etc.
2.8 Serialization into XML and JSON Files using XStream Tools
In the course "Fundamentals of Java programming" the technologies of serialization and deserialization of objects were considered – recording and reproduction of the state of objects using sequential streams, in particular, files.
In addition to binary serialization, standard XML serialization tools were considered. The disadvantages of standard means of serialization in XML are:
- restrictions on object types (JavaBeans);
- the ability to serialize only properties that are defined by public setters and getters;
- lack of ability to manage tag format and names.
There are non-standard implementations of XML serialization. One of the most popular libraries is XStream. This
freely distributed library makes it very easy to serialize and deserialize XML files. To work with this library,
it is enough to download the necessary JAR files. But a more convenient and modern approach is to use Maven
to connect the library. The necessary dependency should be added to the pom.xml
file:
<dependencies>
    <dependency>
        <groupId>com.thoughtworks.xstream</groupId>
        <artifactId>xstream</artifactId>
        <version>1.4.20</version>
    </dependency>
</dependencies>
The library also allows you to serialize and deserialize JSON files. Example 3.3 shows the code that allows you to serialize and deserialize data.
2.9 Logging
2.9.1 Overview
Logging is used to register in a special file (usually a text file) a protocol of the events that occur during program execution: for example, tracing constructors and methods, processing exceptions, and other messages relevant to debugging.
Logger is an entry point to the logging system. Each logger can be considered as a named channel of messages to which they are sent for future processing.
An important concept of logging is the log level, which determines the relative importance of messages to be logged. When a message is sent to the logger, the message logging level is compared with the logger's logging level. If the logging level of the message is above or equal to the logger's logging level, the message is processed; otherwise it is ignored.
2.9.2 Standard Java Tools for Logging
The standard tools of the java.util.logging package provide ways to log events. The logging levels, in increasing
order of importance, are FINEST, FINER, FINE, CONFIG, INFO, WARNING, and SEVERE,
as well as ALL and OFF, which turn all levels on and off, respectively. To create a logger,
you should use the static methods of the java.util.logging.Logger class. For example:
Logger log = Logger.getLogger("MyLog");
log.setLevel(Level.ALL);
The log name is determined arbitrarily. Now you can write data, in particular, the messages:
log.log(Level.INFO, "OK"); // output to the console
If we want to put the messages also to the file, you should use the java.util.logging.FileHandler
class:
FileHandler fileHandler = new FileHandler("C:/MyFile.log");
log.addHandler(fileHandler);
log.log(Level.INFO, "OK"); // output to the console and into a file
Note: writing to the file requires handling java.io.IOException.
In the following example, a logger that accepts messages of all levels is created. Simultaneously with the output to the console, messages are recorded in the specified file:
package ua.inf.iwanoff.java.advanced.third;

import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogDemo {
    public static void main(String[] args) throws IOException {
        Logger log = Logger.getLogger("MyLog");
        log.setLevel(Level.ALL);
        FileHandler fileHandler = new FileHandler("C:/MyFile.log");
        log.addHandler(fileHandler);
        log.log(Level.INFO, "OK"); // output to the console and into a file
    }
}
For the configuration of the standard logging tools, a special properties file (with the .properties extension) is used. In particular, you can separately set the logging options for output to the console and to a file.
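As an illustration, a minimal sketch of such a properties file is shown below. The handler keys are the standard java.util.logging ones; the concrete levels and file name pattern are assumptions chosen for the example:

```properties
# Handlers attached to the root logger
handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler

# Default level for all loggers
.level = INFO

# Console and file can be configured separately
java.util.logging.ConsoleHandler.level = WARNING
java.util.logging.FileHandler.level = ALL
java.util.logging.FileHandler.pattern = %h/mylog%u.log
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
```

Such a file can be picked up at startup with -Djava.util.logging.config.file=path/to/file.properties, or loaded at runtime via LogManager.readConfiguration().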
2.9.3 Using Log4j Library
The standard logging tools (java.util.logging) have drawbacks: they are difficult to set up, have low efficiency and limited logging capabilities, and their configuration is not intuitive enough. These disadvantages stimulated the independent development of alternative logging libraries.
Apache Log4j 2 is a Java logging library that has become a de facto industry standard. It provides significant improvements over its predecessor, Log4j 1. Since 2015, Log4j 1 has not been recommended for use. At the time of writing, version 2.20 is current. The Log4j API can be downloaded at https://logging.apache.org/log4j/2.x/.
In order to take advantage of the Log4j 2 library, you can create a new Maven project, e.g. log4j-test. Add the following dependencies to the pom.xml file:
<dependencies>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-api</artifactId>
        <version>2.20.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.20.0</version>
    </dependency>
</dependencies>
After reloading the project (Reload All Maven Projects button), you can use Log4J 2.
Now we create a class with a main() function and obtain an object of the org.apache.logging.log4j.Logger class. This object allows recording messages in accordance with their level.
package ua.inf.iwanoff.java.advanced.third;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class HelloLog4J {
    public static void main(String[] args) {
        Logger logger = LogManager.getLogger(HelloLog4J.class);
        logger.fatal("Hello, Log4j!");
    }
}
Information about the date and time, the function, and the class precedes the text "Hello, Log4j!".
Logging options are stored in a special configuration file. Since no logging configuration is defined yet (there is no corresponding file), the default configuration is used, according to which only error and fatal messages are displayed. The fatal level, which is used to output the message, has the highest priority. All messages are shown on the console.
In order to change the logging policy, you must create a configuration file. Its name must be log4j2.xml. Such a file should be created in the src\main\resources folder. Its content in the simplest case will be as follows:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
    <Appenders>
        <Console name="ConsoleAppender" target="SYSTEM_OUT"/>
        <File name="FileAppender" fileName="hello-app-${date:yyyyMMdd}.log"
              immediateFlush="false" append="true"/>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="ConsoleAppender" />
            <AppenderRef ref="FileAppender"/>
        </Root>
    </Loggers>
</Configuration>
The file contains an <Appenders> group, which indicates that output is carried out to the console and to a file whose name contains the "hello-app" string and the current date. The <Loggers> group contains the output levels. In our case, this is "debug".
Log4j supports the following output levels, in order of increasing priority: trace, debug, info, warn, error, fatal. Setting a certain level means that only messages of this or a higher priority are recorded. Therefore, in our case, output at the fatal level is also performed.
Since the default configuration is no longer used, the information about the date and time, function, and class has disappeared. It can be restored by changing the log4j2.xml file content:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
    <Appenders>
        <Console name="ConsoleAppender" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %logger{36} - %msg%n" />
        </Console>
        <File name="FileAppender" fileName="hello-app-${date:yyyyMMdd}.log"
              immediateFlush="false" append="true">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %logger{36} - %msg%n"/>
        </File>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="ConsoleAppender" />
            <AppenderRef ref="FileAppender"/>
        </Root>
    </Loggers>
</Configuration>
In addition to XML format, the configuration file can be created in JSON, YAML, or PROPERTIES formats.
3 Sample Programs
3.1 Using DOM Technology
Suppose an XML document with continent data (Continent.xml) is prepared:
<?xml version="1.0" encoding="UTF-8"?>
<ContinentData Name="Europe">
    <CountriesData>
        <CountryData Name="Ukraine" Area="603700" Population="46314736" >
            <CapitalData Name="Kiev" />
        </CountryData>
        <CountryData Name="France" Area="547030" Population="61875822" >
            <CapitalData Name="Moscow" />
        </CountryData>
        <CountryData Name="Germany" Area="357022" Population="82310000" >
            <CapitalData Name="Berlin" />
        </CountryData>
    </CountriesData>
</ContinentData>
Note: the error with the capital of France is intentional.
It is necessary to use DOM tools to read the data, fix the error, and save the result in a new file. The program will look like this:
package ua.inf.iwanoff.java.advanced.third;

import java.io.*;
import org.w3c.dom.*;
import javax.xml.parsers.*;
import javax.xml.transform.*;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

public class ContinentWithDOM {
    public static void main(String[] args) {
        try {
            Document doc;
            DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            doc = db.parse(new File("Continent.xml"));
            Node rootNode = doc.getDocumentElement();
            mainLoop:
            for (int i = 0; i < rootNode.getChildNodes().getLength(); i++) {
                Node countriesNode = rootNode.getChildNodes().item(i);
                if (countriesNode.getNodeName().equals("CountriesData")) {
                    for (int j = 0; j < countriesNode.getChildNodes().getLength(); j++) {
                        Node countryNode = countriesNode.getChildNodes().item(j);
                        if (countryNode.getNodeName().equals("CountryData")) {
                            // Find the attribute by name:
                            if (countryNode.getAttributes().getNamedItem("Name").getNodeValue().equals("France")) {
                                for (int k = 0; k < countryNode.getChildNodes().getLength(); k++) {
                                    Node capitalNode = countryNode.getChildNodes().item(k);
                                    if (capitalNode.getNodeName().equals("CapitalData")) {
                                        capitalNode.getAttributes().getNamedItem("Name").setNodeValue("Paris");
                                        break mainLoop;
                                    }
                                }
                            }
                        }
                    }
                }
            }
            Transformer transformer = TransformerFactory.newInstance().newTransformer();
            transformer.transform(new DOMSource(doc),
                    new StreamResult(new FileOutputStream(new File("CorrectedContinent.xml"))));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
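The nested loops that search for the capital of France could alternatively be expressed with a single XPath expression (the standard javax.xml.xpath API). The following sketch is not part of the lab program: for self-containment it parses the relevant fragment from a string and prints the corrected value instead of writing a file, but the same XPath call would work on a document parsed from Continent.xml:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Attr;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class ContinentWithXPath {
    public static void main(String[] args) throws Exception {
        // Embedded fragment of Continent.xml, so the sketch runs on its own
        String xml = "<ContinentData Name='Europe'><CountriesData>"
                + "<CountryData Name='France' Area='547030' Population='61875822'>"
                + "<CapitalData Name='Moscow'/></CountryData>"
                + "</CountriesData></ContinentData>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new InputSource(new StringReader(xml)));
        // One XPath expression replaces the three nested loops:
        Attr capital = (Attr) XPathFactory.newInstance().newXPath().evaluate(
                "//CountryData[@Name='France']/CapitalData/@Name",
                doc, XPathConstants.NODE);
        capital.setValue("Paris");
        System.out.println(capital.getValue());
    }
}
```

This prints Paris; the document held in doc now contains the corrected attribute and could be written out with a Transformer exactly as in the DOM sample above.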
3.2 Reading from a File and Sorting Strings
Suppose we need to read strings from a text file, sort them in reverse alphabetical order, and write the strings that start with the letter "F" to a new file.
A file with strings (strings.txt) can look like this:
First
Second
Third
Fourth
Fifth
The program can be as follows:
package ua.inf.iwanoff.java.advanced.third;

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class ReadAndSort {
    public static void main(String[] args) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("strings.txt"))) {
            Stream<String> stream = reader.lines()
                    .sorted((s1, s2) -> s2.compareTo(s1))
                    .filter(s -> s.startsWith("F"));
            Files.write(Paths.get("results.txt"), stream.toList());
        }
    }
}
After running the program, we will get a file called results.txt:
Fourth
First
Fifth
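The same task can be solved with Files.lines(), the function mentioned in task 1.3, which opens a stream of lines directly without an explicit reader. The following sketch (class name assumed; it also creates the sample input file first, so that it runs on its own) is a variant of the program above:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Stream;

public class ReadAndSortWithLines {
    public static void main(String[] args) throws IOException {
        // Create the sample input so the sketch is self-contained
        Path input = Path.of("strings.txt");
        Files.write(input, List.of("First", "Second", "Third", "Fourth", "Fifth"));

        // Files.lines() must still be closed: try-with-resources
        // releases the underlying file handle
        try (Stream<String> lines = Files.lines(input)) {
            List<String> result = lines
                    .sorted(Comparator.reverseOrder()) // reverse alphabetical order
                    .filter(s -> s.startsWith("F"))
                    .toList();
            Files.write(Path.of("results.txt"), result);
            result.forEach(System.out::println);
        }
    }
}
```

Comparator.reverseOrder() is equivalent to the lambda (s1, s2) -> s2.compareTo(s1) used above, just more idiomatic.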
3.3 Serialization and Deserialization using the XStream Library
Suppose you need to serialize and deserialize data about a line described by two points. We create a new Maven project called LineAndPoints and add a dependency on the XStream library to the pom.xml file. We will get the following pom.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>ua.inf.iwanoff.java.advanced.third</groupId>
    <artifactId>LineAndPoints</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>
    <dependencies>
        <dependency>
            <groupId>com.thoughtworks.xstream</groupId>
            <artifactId>xstream</artifactId>
            <version>1.4.20</version>
        </dependency>
    </dependencies>
</project>
We create the Line and Point classes. These classes have no parameterless constructors and no public properties, so they cannot be serialized using java.beans.XMLEncoder and java.beans.XMLDecoder. But XStream allows you to serialize them, because this library works with fields rather than properties.
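To illustrate the contrast, here is a sketch of what XMLEncoder does require: a Java-bean variant of Point (the class and property names are invented for this example) with a public parameterless constructor and getter/setter pairs. With that shape, round-tripping through XMLEncoder/XMLDecoder works; without it, as with the Point class below, it does not:

```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class BeanPointDemo {
    // A bean-style point: no-arg constructor plus public getters/setters
    // is exactly what XMLEncoder relies on
    public static class BeanPoint {
        private double x, y;
        public BeanPoint() { }
        public double getX() { return x; }
        public void setX(double x) { this.x = x; }
        public double getY() { return y; }
        public void setY(double y) { this.y = y; }
    }

    public static void main(String[] args) {
        BeanPoint p = new BeanPoint();
        p.setX(1);
        p.setY(2);
        // Serialize to an in-memory buffer instead of a file
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (XMLEncoder encoder = new XMLEncoder(bytes)) {
            encoder.writeObject(p);
        }
        try (XMLDecoder decoder = new XMLDecoder(new ByteArrayInputStream(bytes.toByteArray()))) {
            BeanPoint restored = (BeanPoint) decoder.readObject();
            System.out.println(restored.getX() + " " + restored.getY());
        }
    }
}
```

XStream imposes none of these requirements, which is why the original Point and Line classes can be left unchanged.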
The Point class:
package ua.inf.iwanoff.java.advanced.third;

public class Point {
    private double x, y;

    public Point(double x, double y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public String toString() {
        return x + " " + y;
    }
}
The Line class:
package ua.inf.iwanoff.java.advanced.third;

public class Line {
    private Point first, second;

    public Line(double firstX, double firstY, double secondX, double secondY) {
        first = new Point(firstX, firstY);
        second = new Point(secondX, secondY);
    }

    @Override
    public String toString() {
        return first + " " + second;
    }
}
The following class can be created to serialize data:
package ua.inf.iwanoff.java.advanced.third;

import com.thoughtworks.xstream.XStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class XMLSerialization {
    public static void main(String[] args) {
        XStream xStream = new XStream();
        Line line = new Line(1, 2, 3, 4);
        xStream.alias("line", Line.class);
        String xml = xStream.toXML(line);
        try (FileWriter fw = new FileWriter("Line.xml");
             PrintWriter out = new PrintWriter(fw)) {
            out.println(xml);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
We get an XML file:
<line>
  <first>
    <x>1.0</x>
    <y>2.0</y>
  </first>
  <second>
    <x>3.0</x>
    <y>4.0</y>
  </second>
</line>
Note: if no alias is used, the root tag will be <ua.inf.iwanoff.java.advanced.third.Line>.
We deserialize objects in another program:
package ua.inf.iwanoff.java.advanced.third;

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.security.AnyTypePermission;
import java.io.File;

public class XMLDeserialization {
    public static void main(String[] args) {
        XStream xStream = new XStream();
        xStream.addPermission(AnyTypePermission.ANY);
        xStream.alias("line", Line.class);
        Line newLine = (Line) xStream.fromXML(new File("Line.xml"));
        System.out.println(newLine);
    }
}
In order to use XStream tools for working with JSON files, we need to add one more dependency to the pom.xml file:
<dependency>
    <groupId>org.codehaus.jettison</groupId>
    <artifactId>jettison</artifactId>
    <version>1.5.2</version>
</dependency>
The program for serializing into a JSON file will be as follows:
package ua.inf.iwanoff.java.advanced.third;

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.json.JsonHierarchicalStreamDriver;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class JSONSerialization {
    public static void main(String[] args) {
        XStream xStream = new XStream(new JsonHierarchicalStreamDriver());
        Line line = new Line(1, 2, 3, 4);
        xStream.alias("line", Line.class);
        String json = xStream.toXML(line);
        try (FileWriter fw = new FileWriter("Line.json");
             PrintWriter out = new PrintWriter(fw)) {
            out.println(json);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
The following JSON file will be obtained:
{"line": {
  "first": {
    "x": 1.0,
    "y": 2.0
  },
  "second": {
    "x": 3.0,
    "y": 4.0
  }
}}
The program to deserialize from a JSON file would be:
package ua.inf.iwanoff.java.advanced.third;

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.json.JettisonMappedXmlDriver;
import com.thoughtworks.xstream.security.AnyTypePermission;
import java.io.File;

public class JSONDeserialization {
    public static void main(String[] args) {
        XStream xStream = new XStream(new JettisonMappedXmlDriver());
        xStream.addPermission(AnyTypePermission.ANY);
        xStream.alias("line", Line.class);
        Line newLine = (Line) xStream.fromXML(new File("Line.json"));
        System.out.println(newLine);
    }
}
3.4 "Country" and "Census" Classes
Suppose we want to reimplement a previously created project related to the country and censuses. The base classes that implement the basic functionality and data structures for representing countries and censuses were presented in the examples of laboratory trainings # 3 and # 4 of the course "Fundamentals of Java Programming". It is necessary to add derived classes in which to override the implementation of all methods related to the processing of sequences through the use of Stream API tools. In addition to reproducing the existing functionality, the new project should include:
- outputting data to a text file using Stream API with subsequent reading;
- serialization of objects into an XML file and a JSON file and corresponding deserialization using the XStream library;
- recording events related to program execution in the system log;
- testing individual classes using JUnit.
Taking into account the addition of dependencies on external libraries, it is advisable to create a new Maven project into which to transfer previously created classes. You can copy files from one project to another via the clipboard: in the Projects sub-window, select the necessary files and copy them to the clipboard (Copy function of the context menu); in another project, select the required package and insert the files using the Paste function. You can also copy the entire package.
The FileUtils class will be responsible for storing data in a text file, reading from a text file, serializing and deserializing data (to and from XML and JSON), as well as writing to the program's event log. Logging will be done every time we read or write data in the various formats. To work with the XStream and Log4j tools, you need to add the following dependencies to the pom.xml file:
<dependency>
    <groupId>com.thoughtworks.xstream</groupId>
    <artifactId>xstream</artifactId>
    <version>1.4.20</version>
</dependency>
<dependency>
    <groupId>org.codehaus.jettison</groupId>
    <artifactId>jettison</artifactId>
    <version>1.5.2</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.20.0</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.20.0</version>
</dependency>
We also need to configure the Log4j properties by adding a log4j2.xml file to the src\main\resources folder. In our case, this file will be as follows:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
    <Appenders>
        <Console name="ConsoleAppender" target="SYSTEM_OUT"/>
        <File name="FileAppender" fileName="country-${date:yyyyMMdd}.log"
              immediateFlush="false" append="true"/>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="FileAppender"/>
        </Root>
    </Loggers>
</Configuration>
Individual actions related to reading and writing can be implemented as static methods. The code of the FileUtils class will be as follows:
package ua.inf.iwanoff.java.advanced.third;

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.json.JettisonMappedXmlDriver;
import com.thoughtworks.xstream.security.AnyTypePermission;
import org.apache.logging.log4j.Logger;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

import ua.inf.iwanoff.java.advanced.first.CensusWithStreams;
import ua.inf.iwanoff.java.advanced.first.CountryWithStreams;

/**
 * The class implements writing and reading data in TXT, XML and JSON formats.
 * Country and census data are read and written.
 * At the same time, events are recorded in the system log.
 */
public class FileUtils {
    private static Logger logger = null;

    public static Logger getLogger() {
        return logger;
    }

    public static void setLogger(Logger logger) {
        FileUtils.logger = logger;
    }

    /**
     * Writes country and census data to the specified file
     *
     * @param country  the country
     * @param fileName the name of the file
     */
    public static void writeToTxt(CountryWithStreams country, String fileName) {
        if (logger != null) {
            logger.info("Write to text file");
        }
        try {
            Files.write(Path.of(fileName), country.toListOfStrings());
        } catch (IOException e) {
            if (logger != null) {
                logger.error(e.toString());
            }
            throw new RuntimeException(e);
        }
    }

    /**
     * Reads country and census data from the specified file
     *
     * @param fileName the name of the file
     * @return the object that was created
     */
    public static CountryWithStreams readFromTxt(String fileName) {
        CountryWithStreams country = new CountryWithStreams();
        if (logger != null) {
            logger.info("Read from text file");
        }
        try {
            List<String> list = Files.readAllLines(Path.of(fileName));
            country.fromListOfStrings(list);
        } catch (IOException e) {
            if (logger != null) {
                logger.error(e.toString());
            }
            throw new RuntimeException(e);
        }
        return country;
    }

    /**
     * Serializes country and census data into the specified XML file
     *
     * @param country  the country
     * @param fileName the name of the file
     */
    public static void serializeToXML(CountryWithStreams country, String fileName) {
        if (logger != null) {
            logger.info("Serializing to XML");
        }
        XStream xStream = new XStream();
        xStream.alias("country", CountryWithStreams.class);
        xStream.alias("census", CensusWithStreams.class);
        String xml = xStream.toXML(country);
        try (FileWriter fw = new FileWriter(fileName);
             PrintWriter out = new PrintWriter(fw)) {
            out.println(xml);
        } catch (IOException e) {
            if (logger != null) {
                logger.error(e.toString());
            }
            throw new RuntimeException(e);
        }
    }

    /**
     * Deserializes country and census data from the specified XML file
     *
     * @param fileName the name of the file
     * @return the object that was created
     */
    public static CountryWithStreams deserializeFromXML(String fileName) {
        if (logger != null) {
            logger.info("Deserializing from XML");
        }
        try {
            XStream xStream = new XStream();
            xStream.addPermission(AnyTypePermission.ANY);
            xStream.alias("country", CountryWithStreams.class);
            xStream.alias("census", CensusWithStreams.class);
            return (CountryWithStreams) xStream.fromXML(new File(fileName));
        } catch (Exception e) {
            if (logger != null) {
                logger.error(e.toString());
            }
            throw new RuntimeException(e);
        }
    }

    /**
     * Serializes country and census data into the specified JSON file
     *
     * @param country  the country
     * @param fileName the name of the file
     */
    public static void serializeToJSON(CountryWithStreams country, String fileName) {
        if (logger != null) {
            logger.info("Serializing to JSON");
        }
        XStream xStream = new XStream(new JettisonMappedXmlDriver());
        xStream.alias("country", CountryWithStreams.class);
        xStream.alias("census", CensusWithStreams.class);
        String json = xStream.toXML(country);
        try (FileWriter fw = new FileWriter(fileName);
             PrintWriter out = new PrintWriter(fw)) {
            out.println(json);
        } catch (IOException e) {
            if (logger != null) {
                logger.error(e.toString());
            }
            throw new RuntimeException(e);
        }
    }

    /**
     * Deserializes country and census data from the specified JSON file
     *
     * @param fileName the name of the file
     * @return the object that was created
     */
    public static CountryWithStreams deserializeFromJSON(String fileName) {
        if (logger != null) {
            logger.info("Deserializing from JSON");
        }
        try {
            XStream xStream = new XStream(new JettisonMappedXmlDriver());
            xStream.addPermission(AnyTypePermission.ANY);
            xStream.alias("country", CountryWithStreams.class);
            xStream.alias("census", CensusWithStreams.class);
            return (CountryWithStreams) xStream.fromXML(new File(fileName));
        } catch (Exception e) {
            if (logger != null) {
                logger.error(e.toString());
            }
            throw new RuntimeException(e);
        }
    }
}
The code of the Program class, in which all the created functions are demonstrated, will be as follows:
package ua.inf.iwanoff.java.advanced.third;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import ua.inf.iwanoff.java.advanced.first.CountryWithStreams;

/**
 * The class demonstrates writing and reading data in TXT, XML and JSON formats.
 * At the same time, events are recorded in the system log.
 */
public class Program {
    /**
     * Demonstration of the program.
     * Data in TXT, XML and JSON formats are sequentially written and read.
     * At the same time, events are recorded in the system log
     *
     * @param args command line arguments (not used)
     */
    public static void main(String[] args) {
        Logger logger = LogManager.getLogger(Program.class);
        FileUtils.setLogger(logger);
        logger.info("Program started");
        CountryWithStreams country = CountryWithStreams.createCountryWithStreams();
        FileUtils.writeToTxt(country, "Country.txt");
        country = FileUtils.readFromTxt("Country.txt");
        System.out.println(country);
        FileUtils.serializeToXML(country, "Country.xml");
        country = FileUtils.deserializeFromXML("Country.xml");
        System.out.println(country);
        FileUtils.serializeToJSON(country, "Country.json");
        country = FileUtils.deserializeFromJSON("Country.json");
        System.out.println(country);
        logger.info("Program finished");
    }
}
As a result, the country data read from the various sources will be output to the console. New files will appear in the root directory of the project. The text file (Country.txt):
Ukraine 603628.0
1959 41869000 The first postwar census
1970 47126500 Population increases
1979 49754600 No comments
1989 51706700 The last soviet census
2001 48475100 The first census in the independent Ukraine
The XML file (Country.xml):
<country>
  <name>Ukraine</name>
  <area>603628.0</area>
  <list>
    <census>
      <year>1959</year>
      <population>41869000</population>
      <comments>The first postwar census</comments>
    </census>
    <census>
      <year>1970</year>
      <population>47126500</population>
      <comments>Population increases</comments>
    </census>
    <census>
      <year>1979</year>
      <population>49754600</population>
      <comments>No comments</comments>
    </census>
    <census>
      <year>1989</year>
      <population>51706700</population>
      <comments>The last soviet census</comments>
    </census>
    <census>
      <year>2001</year>
      <population>48475100</population>
      <comments>The first census in the independent Ukraine</comments>
    </census>
  </list>
</country>
Unfortunately, the JSON file (Country.json) generated by the program will be poorly formatted (the entire content of the file is on one line). But if you open this file in the IntelliJ IDEA environment and apply code formatting (Code | Reformat Code), its content in the editor window will be as follows:
{
  "country": {
    "name": "Ukraine",
    "area": 603628,
    "list": [
      {
        "census": [
          {
            "year": 1959,
            "population": 41869000,
            "comments": "The first postwar census"
          },
          {
            "year": 1970,
            "population": 47126500,
            "comments": "Population increases"
          },
          {
            "year": 1979,
            "population": 49754600,
            "comments": "No comments"
          },
          {
            "year": 1989,
            "population": 51706700,
            "comments": "The last soviet census"
          },
          {
            "year": 2001,
            "population": 48475100,
            "comments": "The first census in the independent Ukraine"
          }
        ]
      }
    ]
  }
}
In addition, a log file with the .log extension will be created, to which the following text fragment will be appended after each program launch:
Program started
Write to text file
Read from text file
Serializing to XML
Deserializing from XML
Serializing to JSON
Deserializing from JSON
Program finished
Information about exceptions that occur while working with files will also be logged.
4 Exercises
- Read floating-point values from a text file (up to the end of the file), find their sum, and output it to another text file. Apply Stream API facilities.
- Read integer values from a text file, replace negative values with their absolute values and positive values with zeros, and output the resulting values to another text file. Apply Stream API facilities.
- Read integer values from a text file, divide even elements by 2, multiply odd ones by 2, and output the resulting values to another text file. Apply Stream API facilities.
- Create classes "Library" (with an array of books as a field) and "Book". Create objects, serialize and deserialize them in XML and JSON using XStream.
- Create the classes "Faculty" and "Institute" (with an array of faculties as a field). Create objects, serialize and deserialize them in XML by means of the java.beans package.
- Create the classes "Faculty" and "Institute" (with an array of faculties as a field). Create objects, serialize and deserialize them in XML and JSON using XStream.
5 Quiz
- What are the purposes of XML documents?
- What restrictions are imposed on the structure of the XML document, the syntax and location of the tags?
- What is the difference between SAX and DOM technologies?
- How are XML documents read and written?
- What is XSLT?
- What is the difference between a valid and well-formed XML document?
- What are the differences between a document type definition and a document schema?
- Is a document type definition (DTD) also an XML document?
- Which classes correspond to the Java Beans specification?
- What are the disadvantages and advantages of XML serialization?
- What tasks do build automation tools solve?
- What is the main difference between Apache Maven and Apache Ant?
- What is GAV?
- What is the structure of the pom.xml file?
- What are the typical tasks of working with the file system?
- What standard Java tools provide the ability to work with the file system? What are the differences between these means?
- What are the ways to get information about files and subdirectories?
- How is data read and written using the Stream API?
- What is the JSON format, and what are its advantages?
- What are the main elements of a JSON file?
- What tools exist to support working with JSON files?
- How is XML serialization performed by means of XStream?
- How is JSON serialization performed by means of XStream?
- What is a logger and logging level?
- What facilities exist to maintain logs?
- What are the advantages of the Log4j library compared to standard logging tools?
- What are output levels (priorities)?