Java InputStream and Java Input from User (Best Tutorial 2019)

Java InputStream


Oftentimes, applications need to access input/output devices; in today's operating systems, that usually means file access and network connectivity. In this tutorial, we cover recipes that demonstrate the different Java input/output streams, along with reading Java input from the user. You learn about serialization of files, sending files over the network, file manipulation, and much more.


In previous releases, Java was slow to adopt a good file and network framework in order to maintain universal compatibility.


Standing true to its roots of write once, run anywhere, a lot of the original file input/output and network connectivity needed to be simple and universal. Since the release of Java 7, developers have been able to take advantage of much better input/output APIs.


The file and network input/output have evolved over the years into a much better framework for handling files, network scalability, and ease of use. As of the New Input/Output version 2 API (NIO.2), Java has the capability of monitoring folders, accessing OS-dependent methods, and creating scalable asynchronous network sockets.


This is in addition to the already robust library for handling input and output streams, and serializing (and deserializing) object information.

After reading the recipes in this blog, you will be armed with the capability to develop applications containing sophisticated input and output tasks.




Java input/output streams are the foundation of most Java I/O and include a plethora of ready-made streams for just about any occasion, but they can be confusing to use without some context. A stream (like a river) represents an inflow/outflow of data. Think about it this way.


When you type, you create a stream of characters that the system receives (input stream). When the system produces sounds, it sends them to the speaker (output stream). The system could be receiving keystrokes and sending sound all day long, and thus the streams can be either processing data or waiting for more data.
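
That keyboard input stream is exposed in Java as ``. A minimal sketch (the `EchoInput` class name is mine) wraps it in a reader and waits for a line of user input:

```java

import java.nio.charset.StandardCharsets;

public class EchoInput {

    // Reads a single line of text from the given input stream
    // (pass to read what the user types)
    static String readLine(InputStream in) throws IOException {
        BufferedReader reader =
                new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
        return reader.readLine(); // waits (blocks) until a full line arrives
    }

    public static void main(String[] args) throws IOException {
        System.out.print("Type something and press Enter: ");
        System.out.println("You typed: " + readLine(;
    }
}
```

The helper takes any InputStream, so the same code can read from the keyboard, a file, or a socket.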


When a stream doesn’t receive any data, it waits (nothing else to do, right?). As soon as data comes in, the stream starts processing this data. The stream then stops and waits for the next data item to come. This keeps going until this proverbial river becomes dry (the stream is closed).


Like rivers, streams can be connected to each other (this is the decorator pattern). For the content of this blog, there are mainly two input streams that you care about: the file input stream and the network socket input stream.


These two streams are a source of data for your input/output programs. There are also their corresponding output streams: the file output stream and the network socket output stream (how creative, isn't it?).


Like a plumber, you can hook them together and create something new. For example, you could weld together a file input stream to a network output stream to send the contents of the file through a network socket.


Or you could do the opposite and connect a network input stream (data coming in) to a file output stream (data being written to disk). In Input/output parlance, the input streams are called sources, while the output streams are called sinks.


There are other inputs and output streams that can be glued together. For example, there is a BufferedInputStream, which allows you to read the data in chunks (it’s more efficient than reading it byte by byte), and DataOutputStream allows you to write Java primitives to an output stream (instead of just writing bytes).
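
As a sketch of that plumbing (the file name and the `StreamChain` class are mine), a DataOutputStream can be welded onto a BufferedOutputStream over a file, and the data read back through the mirrored input chain:

```java

import java.nio.file.*;

public class StreamChain {

    // Writes primitives through a buffered, decorated chain and reads them back
    static String roundTrip(Path file) throws IOException {
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(Files.newOutputStream(file)))) {
            out.writeInt(42);      // primitives, not just raw bytes
            out.writeUTF("hello");
        }
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(Files.newInputStream(file)))) {
            return in.readInt() + ":" + in.readUTF();
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Paths.get("chain.bin");
        System.out.println(roundTrip(file));
        Files.deleteIfExists(file); // clean up
    }
}
```

Note how each constructor simply wraps (decorates) the stream below it; swapping the file streams for socket streams requires no other changes.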


One of the most useful pairs is ObjectInputStream and ObjectOutputStream, which allow you to serialize/deserialize objects.


The decorator pattern allows you to keep plugging streams together to get many different effects. The beauty of this design is that you can create a stream that takes any input and produces any output, and can then be combined with any other stream.


Serializing Java Objects


Problem: You need to serialize a class (save the contents of the class) so that you can restore it at a later time.



Java implements a built-in serialization mechanism. You access that mechanism via the ObjectOutputStream class. In the following example, the method saveSettings() uses an ObjectOutputStream to serialize the settings object in preparation for writing the object to disk:

public class Ch_8_1_SerializeExample {
    public static void main(String[] args) {
        Ch_8_1_SerializeExample example = new Ch_8_1_SerializeExample();
        example.start();
    }

    private void start() {
        ProgramSettings settings = new ProgramSettings(new Point(10, 10),
                new Dimension(300, 200),
                "The title of the application");
        saveSettings(settings, "settings.bin");
        ProgramSettings loadedSettings = loadSettings("settings.bin");
        if (loadedSettings != null) {
            System.out.println("Are settings equal? :" + loadedSettings.equals(settings));
        }
    }

    private void saveSettings(ProgramSettings settings, String filename) {
        try (ObjectOutputStream oos = new ObjectOutputStream(
                new FileOutputStream(filename))) {
            oos.writeObject(settings);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private ProgramSettings loadSettings(String filename) {
        try (ObjectInputStream ois = new ObjectInputStream(
                new FileInputStream(filename))) {
            return (ProgramSettings) ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace();
            return null;
        }
    }
}


How It Works

Java Serialization framework

Java supports serialization, which is the capability of taking an object and creating a byte representation that can be used to restore the object at a later time.


By using an internal serialization mechanism, most of the setup to serialize objects is taken care of. Java will transform the properties of an object into a byte stream, which can then be saved to a file or transmitted over the wire.


Note The original Java Serialization framework uses reflection to serialize objects, so it might become a bottleneck if you serialize/deserialize heavily. There are plenty of open source frameworks that offer different trade-offs depending on your needs (speed versus size versus ease of use); see the jvm-serializers wiki (by Eishay Smith) on GitHub for comparisons.


For a class to be serializable, it needs to implement the Serializable interface, which is a marker interface: it doesn't have any methods, but instead tells the serialization mechanism that you have allowed your class to be serialized.


While not evident from the outset, serialization exposes all the internal workings of your class (including protected and private members), so if you want to keep secret the authorization code for a nuclear launch, you might want to make any class that contains such information nonserializable.


It is also necessary that all properties (a.k.a. members, variables, or fields) of the class are serializable (or marked transient, which we will get to in a minute). All primitives—int, long, double, and float (plus their wrapper classes)—and the String class are serializable by design. Other Java classes are serializable on a case-by-case basis.


For example, you can’t serialize any Swing components (like JButton or JSpinner), and you can’t serialize File objects, but you can serialize the Color class (java.awt.Color, to be more precise).


As a design principle, you don't want to serialize your main classes; instead, create classes that contain only the properties you want to serialize. It will save a lot of debugging headaches, because serialization becomes very pervasive.


If you mark a major class as serializable (implements Serializable), and this class contains many other properties, you need to declare those classes as serializable as well.


If your Java class inherits from another class, the parent class should also be serializable. In the case where the parent class is not serializable, the parent’s properties will not be serialized.


If you want to mark a property as nonserializable, you may mark it as transient. Transient properties tell the Java compiler that you are not interested in saving/loading the property value, so it will be ignored.


Some properties are good candidates for being transient, like cached calculations, or a date formatter that you always instantiate to the same value.
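
A sketch of such a transient cached calculation (the `CachedShape` class is hypothetical, not from the recipe); the cache is skipped during serialization and simply recomputed after deserialization:

```java

public class CachedShape implements Serializable {
    private static final long serialVersionUID = 1L;

    private final int width;
    private final int height;
    // Cached calculation: not worth saving, so marked transient.
    // After deserialization it resets to 0 and is recomputed on demand.
    private transient int cachedArea;

    public CachedShape(int width, int height) {
        this.width = width;
        this.height = height;
    }

    public int area() {
        if (cachedArea == 0) {
            cachedArea = width * height;
        }
        return cachedArea;
    }

    // Serializes and deserializes the shape in memory to show the round trip
    static CachedShape roundTrip(CachedShape shape)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(shape);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (CachedShape) ois.readObject();
        }
    }
}
```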


By virtue of the Serialization framework, static properties are not serializable; neither are static classes. The reason is that a static class cannot be instantiated (although a public static inner class can be).


Therefore, if you save and then load the static class at the same time, you will have loaded another copy of the static class, throwing the JVM for a loop.


The Java serialization mechanism works behind the scenes to convert and traverse every object within the class that is marked as Serializable. If an application contains objects within objects, and even perhaps contains cross-referenced objects, the Serialization framework will resolve those objects, and store only one copy of an object.


Each property then gets translated to a byte[] representation. The format of the byte array includes the actual class name (for example, com.somewhere.over.the.rainbow.preferences.UserPreferences), followed by the encoding of the properties (which in turn may encode another object class, with its properties, etc., etc., ad infinitum).


For the curious, if you look at the file generated (even in a text editor), you can see the class name as almost the first part of the file.


Note Serialization is very brittle. By default, the Serialization framework generates a Stream Unique Identifier (SUID) that captures information about what fields are present in the class, what kind they are (public/protected), and what is transient, among other things.


Even a seemingly slight modification of the class (for example, changing an int property to a long) will generate a new SUID. A class that has been saved with a prior SUID cannot be deserialized under the new SUID. This is done to protect the serialization/deserialization mechanism, while also protecting the designers.


You can actually tell the Java class to use a specific SUID. This will allow you to serialize classes, modify them, and then deserialize the original classes while implementing some backward compatibility.
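
Pinning the SUID is done by declaring a static final serialVersionUID field; a sketch (the class name and the value 42 are mine):

```java

public class PinnedSettings implements Serializable {
    // Pinning the SUID: the framework uses this value instead of generating
    // one, so compatible class changes (such as adding a new field) no
    // longer break deserialization of previously saved data.
    private static final long serialVersionUID = 42L;

    private String title;
    private int width;
}
```

You can check the SUID in effect for a class via java.io.ObjectStreamClass.lookup(...).getSerialVersionUID().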


The danger you run into is that the deserialization must be backward-compatible. Renaming or removing fields will generate an exception as the class is being deserialized.


If you are specifying your own serialVersionUID on your Serializable class, be sure to have some unit tests for backward compatibility every time you change the class.


In general, the changes that can be made to a class while keeping it backward-compatible are documented in the versioning section of the Java Object Serialization Specification.


Due to the nature of serialization, don’t expect constructors to be called when an object is deserialized. If you have initialization code in constructors that is required for your object to function properly, you may need to refactor the code out of the constructor to allow proper execution after construction.


The reason is that in the deserialization process, the deserialized objects are “restored” internally (not created) and do not invoke constructors.


Serializing Java Objects More Efficiently



Problem: You want to serialize a class, but want to make the output more efficient, or smaller in size, than the product generated via the built-in serialization mechanism.



By making the object implement the Externalizable interface, you instruct the Java Virtual Machine to use a custom serialization/deserialization mechanism, as provided by the readExternal/writeExternal methods in the following example.

public class ExternalizableProgramSettings implements Externalizable {
    private Point locationOnScreen;
    private Dimension frameSize;
    private Color defaultFontColor;
    private String title;

    // Empty constructor, required for Externalizable implementors
    public ExternalizableProgramSettings() { }

    public void writeExternal(ObjectOutput out) throws IOException {
        // Mirrors readExternal(): fields are written in the same order they are read
        out.writeInt(locationOnScreen.x); out.writeInt(locationOnScreen.y);
        out.writeInt(frameSize.width); out.writeInt(frameSize.height);
        out.writeInt(defaultFontColor.getRGB());
        out.writeUTF(title);
    }

    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        locationOnScreen = new Point(in.readInt(), in.readInt());
        frameSize = new Dimension(in.readInt(), in.readInt());
        defaultFontColor = new Color(in.readInt());
        title = in.readUTF();
    }

    // getters and setters omitted for brevity
}


How It Works

The Java Serialization framework provides the ability for you to specify the implementation for serializing an object. As such, it requires implementing the Externalizable interface in lieu of the Serializable interface.


The Externalizable interface contains two methods: writeExternal(ObjectOutput out) and readExternal(ObjectInput in). By implementing these methods, you are telling the framework how to encode/decode your object.


The writeExternal() method receives an ObjectOutput object as a parameter. This object lets you write your own encoding for the serialization.

ObjectOutput               ObjectInput              Description

writeBoolean(boolean v)    boolean readBoolean()    Reads/writes the boolean primitive.
writeByte(int v)           byte readByte()          Reads/writes a byte; an int is used as a parameter, but only the least-significant byte is written.
writeShort(int v)          short readShort()        Reads/writes two bytes.
writeChar(int v)           char readChar()          Reads/writes two bytes as a char (reverse byte order than writeShort).
writeInt(int v)            int readInt()            Reads/writes an integer.
writeLong(long v)          long readLong()          Reads/writes a long.
writeDouble(double v)      double readDouble()      Reads/writes a double.


One reason you may choose to implement the Externalizable interface instead of the Serializable interface is that Java’s default serialization is very inefficient.


Because the Java Serialization framework needs to ensure that every object (and dependent object) is serialized, it will write even objects that have default values or that might be empty and/or null. 


Implementing the Externalizable interface also provides finer-grained control over how your class is serialized. In our example, the Serializable version created a settings file of 439 bytes, compared with the Externalizable version of only 103 bytes!
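
You can measure this kind of difference yourself by serializing to memory instead of disk; a sketch (the `Plain` class and its size are illustrative, not the recipe's exact numbers):

```java

public class SizeCheck {

    // A small Serializable stand-in for the settings object
    static class Plain implements Serializable {
        private static final long serialVersionUID = 1L;
        int x = 10, y = 20;
        String title = "demo";
    }

    // Serializes the object into a byte array and reports its length
    static int serializedSize(Object o) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(o);
        }
        return bytes.size();
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Serialized size: " + serializedSize(new Plain()) + " bytes");
    }
}
```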


Note Classes that implement the Externalizable interface must contain an empty (no-arg) constructor.


Serializing Java Objects as XML



Although you love the Serialization framework, you want to create something that is at least cross-language-compatible (or human readable). You would like to save and load your objects using XML.



In this example, the XMLEncoder object is used to encode the ProgramSettings object, which contains program settings information, and write it to the settings.xml file. The XMLDecoder then reads the settings.xml file as a stream, decoding the ProgramSettings object.


A FileSystem is used to gain access to the machine’s file system; FileOutputStream is used to write a file to the system; and FileInputStream is used to obtain input bytes from a file within the file system. In this example, these three file objects are used to create new XML files, as well as read them for processing.

FileSystem fileSystem = FileSystems.getDefault();
// Encoding (settings is an existing ProgramSettings instance)
try (FileOutputStream fos = new FileOutputStream("settings.xml");
     XMLEncoder encoder = new XMLEncoder(fos)) {
    encoder.setExceptionListener((Exception e) -> {
        System.out.println("Exception! :" + e.toString());
    });
    encoder.writeObject(settings);
}
// Decoding
try (FileInputStream fis = new FileInputStream("settings.xml");
     XMLDecoder decoder = new XMLDecoder(fis)) {
    ProgramSettings decodedSettings = (ProgramSettings) decoder.readObject();
    System.out.println("Is same? " + settings.equals(decodedSettings));
}
// Print the generated XML
Path file = fileSystem.getPath("settings.xml");
List<String> xmlLines = Files.readAllLines(file, Charset.defaultCharset());
xmlLines.forEach(line -> System.out.println(line));


How It Works


XMLEncoder and XMLDecoder, like the Serialization framework, use reflection to determine which fields are to be written, but instead of writing the fields as binary, they are written as XML. Objects that are to be encoded do not need to be serializable, but they do need to follow the Java Beans specification.


A Java Bean is the name of an object that conforms to the following contract:

The object contains a public empty (no-arg) constructor.

The object contains public getters and setters for each protected/private property that takes the name of get{Property}() and set{Property}().
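
A minimal Java Bean satisfying that contract (the `WindowSettings` class is illustrative), together with an in-memory XMLEncoder/XMLDecoder round trip:

```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;

public class WindowSettings {   // a minimal Java Bean
    private String title = "untitled";
    private int width = 300;

    public WindowSettings() { } // public no-arg constructor, as required

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public int getWidth() { return width; }
    public void setWidth(int width) { this.width = width; }

    // Encodes the bean as XML and decodes it back
    static WindowSettings xmlRoundTrip(WindowSettings settings) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (XMLEncoder encoder = new XMLEncoder(bytes)) {
            encoder.writeObject(settings);
        }
        try (XMLDecoder decoder = new XMLDecoder(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (WindowSettings) decoder.readObject();
        }
    }
}
```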


The XMLEncoder and XMLDecoder will encode/decode only the properties of the Bean that have public accessors (get{property}, set{property}), so any properties that are private and do not have accessors will not be encoded/decoded.


Tip It is a good idea to register an ExceptionListener when encoding/decoding.


The XMLEncoder creates a new instance of the class being serialized (remember that it needs to be a Java Bean, so it must have a public no-arg constructor), and then figures out which properties are accessible (via get{property}, set{property}).


If a property of the newly instantiated class contains the same value as the property of the original class (i.e., has the same default value), the XMLEncoder doesn't write that property.


In other words, if the default value of a property hasn’t changed, the XmlEncoder will not write it out. This provides the flexibility of changing what a “default” value is between versions.


For example, if the default value of a property is 2 when an object is encoded, and later decoded after the default property changed from 2 to 4, the decoded object will contain the new default property of 4 (which might not be correct).


The XMLEncoder also keeps track of references. If an object appears more than once in the object graph being persisted (for example, an object is inside a Map from the main class, but also appears as the DefaultValue property), the XMLEncoder will encode it only once and link up a reference by putting a link in the XML.


The XMLEncoder/XMLDecoder is much more forgiving than the Serialization framework. When decoding, if a property type is changed, or if it was deleted/added/moved/renamed, the decoding will decode “as much as it can” while skipping the properties that it couldn’t decode.


The recommendation is to not persist your main classes (even though the XMLEncoder is more forgiving), but to create special objects that are simple, hold the basic information, and do not perform many tasks by themselves.




Creating a Socket Connection and Sending Serializable Objects Across the Wire

Java’s New Input Output API version 2


You need to open a network connection and send/receive objects from it.



Use Java’s New Input Output API version 2 (NIO.2) to send and receive objects. The following solution utilizes the NIO.2 features of nonblocking sockets (by using Future tasks):

public class Ch_8_4_AsyncChannel {
    private InetSocketAddress hostAddress;

    private void start() throws IOException, ExecutionException,
            TimeoutException, InterruptedException {
        hostAddress = new InetSocketAddress(InetAddress.getByName(""), 2583);
        Thread serverThread = new Thread(() -> serverStart());
        serverThread.start();
        Thread clientThread = new Thread(() -> clientStart());
        clientThread.start();
        serverThread.join();
        clientThread.join();
    }

    private void clientStart() {
        try (AsynchronousSocketChannel clientSocketChannel =
                AsynchronousSocketChannel.open()) {
            Future<Void> connectFuture = clientSocketChannel.connect(hostAddress);
            connectFuture.get(); // Wait until the connection is done
            OutputStream os = Channels.newOutputStream(clientSocketChannel);
            try (ObjectOutputStream oos = new ObjectOutputStream(os)) {
                for (int i = 0; i < 5; i++) {
                    oos.writeObject("Look at me " + i);
                }
                oos.writeObject("EOF"); // tell the server we are done
            }
        } catch (IOException | InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }

    private void serverStart() {
        try {
            AsynchronousServerSocketChannel serverSocketChannel =
            Future<AsynchronousSocketChannel> serverFuture =
                    serverSocketChannel.accept();
            final AsynchronousSocketChannel clientSocket = serverFuture.get();
            if ((clientSocket != null) && (clientSocket.isOpen())) {
                try (InputStream connectionInputStream =
                        Channels.newInputStream(clientSocket)) {
                    ObjectInputStream ois =
                            new ObjectInputStream(connectionInputStream);
                    while (true) {
                        Object object = ois.readObject();
                        if (object.equals("EOF")) {
                            break; // client signaled the end of the stream
                        }
                        System.out.println("Received :" + object);
                    }
                }
            }
        } catch (IOException | InterruptedException | ExecutionException
                | ClassNotFoundException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws IOException, ExecutionException,
            TimeoutException, InterruptedException {
        Ch_8_4_AsyncChannel example = new Ch_8_4_AsyncChannel();
        example.start();
    }
}


How It Works

asynchronous calls

At its basic level, sockets require a type, IP address, and port. While sockets literature has consumed whole blogs, the main idea is pretty straightforward. Like the post office, socket communication relies on addresses.


These addresses are used to deliver data. In this example, we picked the loopback address (, the same computer where the program is running) and chose a random port number (2583).


The advantage of the new NIO.2 is that it is asynchronous in nature. By using asynchronous calls, you can scale your application without creating thousands of threads for each connection.


In our example, we take the asynchronous calls and wait for each connection, effectively making the example single-threaded for clarity's sake, but don't let that stop you from enhancing this example with more asynchronous calls.


For a client to connect, it requires a socket channel. The NIO.2 API allows the creation of asynchronous socket channels. Once a socket channel is created, it will need an address to connect to.


The socketChannel.connect() operation does not block; instead, it returns a Future object (this is different from traditional blocking sockets, where calling connect() will block until a connection is established).


The Future object allows a Java program to continue what it is doing and simply query the status of the submitted task. To take the analogy further, instead of waiting at the front door for your mail to arrive, you go do other stuff, and “check” periodically to see whether the mail has arrived.


Future objects have methods like isDone() and isCancelled() that let you know if the task is done or canceled.


It also has the get() method, which allows you to actually wait for the task to finish. In our example, we use the Future.get() to wait for the client connection to be established.
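
The same Future pattern, outside of sockets, can be sketched with an ExecutorService (the task and its return value are made up for illustration):

```java
import java.util.concurrent.*;

public class FutureDemo {

    // Submits a slow task and retrieves its result via Future.get()
    static int compute() throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> answer = pool.submit(() -> {
            Thread.sleep(100); // simulate slow I/O
            return 42;
        });
        while (!answer.isDone()) {
            Thread.sleep(10);  // "check the mailbox" while doing other work
        }
        int result = answer.get(); // already done, so this does not block
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Result: " + compute());
    }
}
```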


Once the connection is established, we use Channels.newOutputStream() to create an output stream to send information. Using the decorator pattern, we decorate the outputStream with our ObjectOutputStream to finally send objects through the socket.


The server code is a little more elaborate. Server socket connections allow more than one connection to occur, thus they are used to monitor or receive connections instead of initiating a connection. For this reason, the server is usually waiting for a connection asynchronously.


The server begins by establishing the address it listens on ( and accepting connections. The call to serverSocketChannel.accept() returns another Future object that gives you flexibility in how to deal with incoming connections.


In our example, the server connection simply calls Future.get(), which will block (stop the execution of the program) until a connection is accepted.


After the server acquires a socket channel, it creates an input stream by calling Channels.newInputStream(socket) and then wrapping that input stream with an ObjectInputStream.


The server then proceeds to loop and read each object coming from the ObjectInputStream. If the received object equals "EOF", the server stops looping and the connection is closed.


Note Using an ObjectOutputStream and ObjectInputStream to send and receive a lot of objects can lead to memory leaks. ObjectOutputStream keeps a copy of the sent object for efficiency.


If you were to send the same object again, ObjectOutputStream will not send the object itself again; instead, it sends the previously assigned Object ID. This behavior of sending just the Object ID instead of the whole object raises two issues.


The first issue is that objects that are changed in place (mutable) will not get the change reflected in the receiving client when sent through the wire.


The reason is that the object was already sent once, so the ObjectOutputStream believes the object has already been transmitted and sends only the ID, negating any changes made to the object since it was first sent.


To avoid this, don’t make changes to objects that were sent down the wire. This rule also applies to subobjects from the object graph.


The second issue is that because ObjectOutputStream maintains a list of sent objects and their Object IDs, if you send a lot of objects, the dictionary of sent objects grows indefinitely, causing memory starvation in a long-running program.


To alleviate this issue, you can call ObjectOutputStream.reset(), which will clear the dictionary of sent objects. Alternatively, you can invoke ObjectOutputStream.writeUnshared() to not cache the object in the ObjectOutputStream dictionary.
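
The effect of reset() can be observed by writing the same object twice: after a reset, the full bytes are written again instead of a short back-reference (the `ResetDemo` class is mine):

```java

public class ResetDemo {

    // Writes the same object twice, optionally calling reset() in between,
    // and returns the resulting stream size in bytes
    static int sizeWith(boolean reset, Object o) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(o);
            if (reset) {
                oos.reset(); // clears the dictionary of already-sent objects
            }
            oos.writeObject(o); // without reset, only a back-reference is sent
        }
        return bytes.size();
    }

    public static void main(String[] args) throws IOException {
        String payload = "a fairly long payload string";
        System.out.println("without reset: " + sizeWith(false, payload) + " bytes");
        System.out.println("with reset:    " + sizeWith(true, payload) + " bytes");
    }
}
```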


Obtaining the Java Execution Path



Problem: You want to get the path where the Java program is running.

Solution: Invoke the System class’s getProperty method. For example: String path = System.getProperty("user.dir");


How It Works

When a Java program starts, the JVM sets the user.dir system property to the working directory from which the JVM was invoked. The solution example passes the property name "user.dir" to the getProperty method, which returns the value.


Copying a File


Problem: You need to copy a file from one folder to another.


From the default FileSystem, you create the “to” and “from” paths where the files/folders exist and then use the Files.copy static method to copy files between the created paths:

FileSystem fileSystem = FileSystems.getDefault();
Path sourcePath = fileSystem.getPath("file.log");
Path targetPath = fileSystem.getPath("file2.log");
System.out.println("Copy from " + sourcePath.toAbsolutePath().toString()
        + " to " + targetPath.toAbsolutePath().toString());
try {
    Files.copy(sourcePath, targetPath, StandardCopyOption.REPLACE_EXISTING);
} catch (IOException e) {
    e.printStackTrace();
}


How It Works

NIO.2 libraries

In the new NIO.2 libraries, Java works with an abstraction level that allows for more direct manipulation of file attributes belonging to the underlying operating system.


FileSystems.getDefault() gets the usable abstract filesystem that you can perform file operations on. For example, running this example in Windows gets you a WindowsFileSystem; running it in Linux returns a Linux filesystem object; and on OS X, a MacOSXFileSystem is returned.


All filesystems support the basic operations; in addition, each concrete FileSystem provides access to the unique features offered by that operating system.


After getting the default FileSystem object, you can query for file objects. In NIO.2, files, folders, and links are all represented as paths. Once you get a Path, you can perform operations with it.


In this example, Files.copy is called with the source and destination paths. The last parameter refers to the different copy options.


The different copy options are filesystem dependent, so make sure that the one you choose is compatible with the operating system you intend to run the application on.


Moving a File


Problem: You need to move a file from one filesystem location to another.


You use the default FileSystem to create the “to” and “from” paths, and then invoke the Files.move() static method:
FileSystem fileSystem = FileSystems.getDefault();
Path sourcePath = fileSystem.getPath("file.log");
Path targetPath = fileSystem.getPath("file2.log");
System.out.println("Move from " + sourcePath.toAbsolutePath().toString()
        + " to " + targetPath.toAbsolutePath().toString());
try {
    Files.move(sourcePath, targetPath);
} catch (IOException e) {
    e.printStackTrace();
}


How It Works

In the same manner as copying a file, create the source and destination paths. Once you have them, Files.move() takes care of moving the file from one location to another for you. Other methods provided by the Files class include the following:

delete(path): Deletes a file (or a folder, if it's empty).
exists(path): Checks whether a file/folder exists.
isDirectory(path): Checks whether the path created points to a directory.
isExecutable(path): Checks whether the file is an executable.
isHidden(path): Checks whether the file is visible or hidden in the operating system.
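
A quick sketch exercising a few of those query methods against a temporary directory (the `FileChecks` class name is mine):

```java
import java.nio.file.*;

public class FileChecks {

    // Exercises exists/isDirectory/delete against a temporary directory
    static boolean runChecks() throws IOException {
        Path dir = Files.createTempDirectory("demo");
        Path file = Files.createFile(dir.resolve("notes.txt"));
        boolean ok = Files.exists(file)
                && Files.isDirectory(dir)
                && !Files.isDirectory(file);
        Files.delete(file); // delete the file first: delete(dir) needs it empty
        Files.delete(dir);
        return ok && !Files.exists(file);
    }

    public static void main(String[] args) throws IOException {
        System.out.println("All checks passed: " + runChecks());
    }
}
```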


Creating a Directory


Problem You need to create a directory from your Java application.

Solution 1

By using the default FileSystem, you instantiate a path pointing to the new directory; then invoke the Files.createDirectory() static method, which creates the directory specified in the path.

FileSystem fileSystem = FileSystems.getDefault();
Path directory = fileSystem.getPath("./newDirectory");
try {
    Files.createDirectory(directory);
} catch (IOException e) {
    e.printStackTrace();
}



Solution 2

If using a *nix operating system, you can specify the folder's attributes by supplying a set of PosixFilePermissions, which lets you set access at the owner, group, and world levels. For example:

FileSystem fileSystem = FileSystems.getDefault();
Path directory = fileSystem.getPath("./newDirectoryWPermissions");
try {
    Set<PosixFilePermission> perms =
            PosixFilePermissions.fromString("rwxr-x---");
    FileAttribute<Set<PosixFilePermission>> attr =
            PosixFilePermissions.asFileAttribute(perms);
    Files.createDirectory(directory, attr);
} catch (IOException e) {
    e.printStackTrace();
}


How It Works

The Files.createDirectory() method takes a path as a parameter and then creates the directory, as demonstrated in solution 1. By default, the directory created will inherit the default permissions.


If you want to specify permissions on *nix systems, you can pass the POSIX attributes as an extra parameter to the createDirectory() method.


Solution 2 demonstrates the ability to pass a Set of PosixFilePermissions to set up the permissions on the newly created directory.


Iterating Over Files in a Directory



You need to scan files from a directory. There are possibly subdirectories with more files. You want to include those in your scan.



Using NIO.2, create a FileVisitor object and perform the desired implementation within its visitFile method. Next, obtain the default FileSystem object and grab a reference to the Path that you'd like to scan via the getPath() method.


Lastly, invoke the Files.walkFileTree() method, passing the Path and the FileVisitor that you created. The following code demonstrates how to perform these tasks.

FileVisitor<Path> myFileVisitor = new SimpleFileVisitor<Path>() {
    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
            throws IOException {
        System.out.println("Visited File: " + file.toString());
        return FileVisitResult.CONTINUE;
    }
};
FileSystem fileSystem = FileSystems.getDefault();
Path directory = fileSystem.getPath(".");
try {
    Files.walkFileTree(directory, myFileVisitor);
} catch (IOException e) {
    e.printStackTrace();
}


How It Works

Before NIO.2, traversing a directory tree involved recursion, and depending on the implementation, it could be very brittle. The calls to get files within a folder were synchronous and required scanning the whole directory before returning, generating what would appear to be an unresponsive method call to an application user.


With NIO.2, one can specify the folder in which to start traversing, and the NIO.2 calls will handle the recursion details.


The only item that you provide to the NIO.2 API is a class that tells it what to do when a file/folder is found (SimpleFileVisitor implementation). NIO.2 uses a Visitor pattern, so it isn’t required to prescan the entire folder, but instead processes files as they are being iterated over.


The implementation of the SimpleFileVisitor class as an anonymous inner class includes overriding the visitFile(Path file, BasicFileAttributes attrs) method.


When you override this method, you can specify the tasks to perform when a file is encountered. The visitFile method returns a FileVisitResult enum, which tells the FileVisitor which action to take:

CONTINUE: Continues with the traversing of the directory tree.
TERMINATE: Stops the traversing.
SKIP_SUBTREE: Stops going deeper from the current tree level (useful only if this enum is returned on the preVisitDirectory() method).
SKIP_SIBLINGS: Skips the other directories at the same tree level as the current.

The SimpleFileVisitor class, aside from the visitFile() method, also contains the following:

preVisitDirectory: Called before entering a directory to be traversed.
postVisitDirectory: Called after finished traversing a directory.
visitFile: Called as it visits the file, as in the example code.
visitFileFailed: Called if the file cannot be visited; for example, on an input/output error.
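The preVisitDirectory() method pairs naturally with SKIP_SUBTREE. The following sketch (the directory name "target" is a hypothetical filter, not from the original example) skips an entire subtree while counting the files visited elsewhere:

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

public class SkipSubtreeExample {
    public static void main(String[] args) throws IOException {
        final int[] visited = {0};
        Files.walkFileTree(Paths.get("."), new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) {
                // Hypothetical rule: do not descend into directories named "target"
                if (dir.getFileName() != null && "target".equals(dir.getFileName().toString())) {
                    return FileVisitResult.SKIP_SUBTREE;
                }
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                visited[0]++;
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult visitFileFailed(Path file, IOException exc) {
                return FileVisitResult.CONTINUE; // skip unreadable entries instead of aborting
            }
        });
        System.out.println("Files visited: " + visited[0]);
    }
}
```

Overriding visitFileFailed() to CONTINUE, as above, is a common way to keep a scan alive when it crosses files it lacks permission to read.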


Querying (and Setting) File Metadata



You need to get information about a particular file, such as file size, whether it is a directory, and so on. Also, you might want to mark a file as archived in the Windows operating system or grant specific POSIX file permissions in the *nix operating system.



Using Java NIO.2, you can obtain file information by invoking methods on the java.nio.file.Files utility class, passing the path for which you’d like to obtain the metadata.


You can obtain attribute information by calling the Files.getFileAttributeView() method, passing the specific implementation for the attribute view that you would like to use. The following code demonstrates these techniques for obtaining metadata.

Path path = FileSystems.getDefault().getPath("./file2.log");
try {
    // General file attributes, supported by all Java systems
    System.out.println("File Size:" + Files.size(path));
    System.out.println("Is Directory:" + Files.isDirectory(path));
    System.out.println("Is Regular File:" + Files.isRegularFile(path));
    System.out.println("Is Symbolic Link:" + Files.isSymbolicLink(path));
    System.out.println("Is Hidden:" + Files.isHidden(path));
    System.out.println("Last Modified Time:" + Files.getLastModifiedTime(path));
    System.out.println("Owner:" + Files.getOwner(path));

    // Specific attribute views
    DosFileAttributeView view = Files.getFileAttributeView(path, DosFileAttributeView.class);
    System.out.println("DOS File Attributes");
    System.out.println("------------------------------------");
    System.out.println("Archive  :" + view.readAttributes().isArchive());
    System.out.println("Hidden   :" + view.readAttributes().isHidden());
    System.out.println("Read-only:" + view.readAttributes().isReadOnly());
    System.out.println("System   :" + view.readAttributes().isSystem());
} catch (IOException e) {
    e.printStackTrace();
}


How It Works

Java NIO.2 allows much more flexibility in getting and setting file attributes than older input/output techniques. NIO.2 abstracts the different operating system attributes into both a “common” set of attributes and an “OS-specific” set of attributes. The standard attributes are the following:

isDirectory: True if it’s a directory.
isRegularFile: Returns false if the file isn’t considered a regular file, the file doesn’t exist, or it can’t be determined whether it’s a regular file.
isSymbolicLink: True if the link is symbolic (most prevalent in Unix systems).
isHidden: True if the file is considered to be hidden in the operating system.
LastModifiedTime: The time the file was last updated.
Owner: The file’s owner per the operating system.


Also, NIO.2 allows entering the specific attributes of the underlying operating system. To do so, you first need to get a view that represents the operating system’s file attributes (in this example, it is a DosFileAttributeView). Once you get the view, you can query and change the OS-specific attributes.
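As a sketch of the *nix counterpart (the file name example.txt and the permission strings are illustrative, and a POSIX file system is assumed), the PosixFileAttributeView can be queried and modified in the same fashion:

```java
import java.nio.file.*;
import java.nio.file.attribute.*;

public class PosixViewExample {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get("example.txt");   // hypothetical file for demonstration
        if (!Files.exists(path)) {
            Files.createFile(path);
        }
        PosixFileAttributeView view =
                Files.getFileAttributeView(path, PosixFileAttributeView.class);
        if (view == null) {
            System.out.println("POSIX attributes are not supported on this file system");
            return;
        }
        PosixFileAttributes attrs = view.readAttributes();
        System.out.println("Owner : " + attrs.owner().getName());
        System.out.println("Group : " +;
        System.out.println("Perms : " + PosixFilePermissions.toString(attrs.permissions()));
        // Change the permissions through the view
        view.setPermissions(PosixFilePermissions.fromString("rw-r-----"));
    }
}
```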


Note The AttributeView will only work on the operating system for which it is intended (you cannot use the DosFileAttributeView on a *nix machine).


Monitoring a Directory for Content Changes



You need to keep track when a directory’s content has changed (for example, a file was added, changed, or deleted) and act upon those changes.



By using a WatchService, you can subscribe to be notified about events occurring within a folder. 

In the following example, we subscribe for ENTRY_CREATE, ENTRY_MODIFY, and ENTRY_DELETE events:
try {
    System.out.println("Watch Event, press q<Enter> to exit");
    FileSystem fileSystem = FileSystems.getDefault();
    WatchService service = fileSystem.newWatchService();
    Path path = fileSystem.getPath(".");
    System.out.println("Watching :" + path.toAbsolutePath());
    path.register(service, StandardWatchEventKinds.ENTRY_CREATE,
            StandardWatchEventKinds.ENTRY_DELETE, StandardWatchEventKinds.ENTRY_MODIFY);
    boolean shouldContinue = true;
    while (shouldContinue) {
        WatchKey key = service.poll(250, TimeUnit.MILLISECONDS);
        // Code to stop the program
        while (System.in.available() > 0) {
            int readChar = System.in.read();
            if ((readChar == 'q') || (readChar == 'Q')) {
                shouldContinue = false;
            }
        }
        if (key == null) continue;
        key.pollEvents().stream()
                .filter((event) -> !(event.kind() == StandardWatchEventKinds.OVERFLOW))
                .map((event) -> (WatchEvent<Path>) event)
                .forEach((ev) -> {
                    Path filename = ev.context();
                    System.out.println("Event detected :" + filename.toString() + " " + ev.kind());
                });
        boolean valid = key.reset();
        if (!valid) {
            break;
        }
    }
} catch (IOException | InterruptedException e) {
    e.printStackTrace();
}


How It Works


NIO.2 includes a built-in polling mechanism to monitor for changes in the FileSystem. Using a poll mechanism allows you to wait for events and poll for updates at a specified interval. Once an event occurs, you can process and consume it. A consumed event tells the NIO.2 framework that you are ready to handle a new event.


To start monitoring a folder, create a WatchService that you can use to poll for changes. After the WatchService has been created, register the WatchService with a path. A path symbolizes a folder in the file system. When the WatchService is registered with the path, you define the kinds of events you want to monitor.


Table: Types of watch events

Event         Description
OVERFLOW      An event that has overflowed (ignore)
ENTRY_CREATE  A directory or file was created
ENTRY_DELETE  A directory or file has been deleted
ENTRY_MODIFY  A directory or file has been modified


After registering the WatchService with the path, you can then “poll” the WatchService for event occurrences. By calling the watchService.poll() method, you will wait for a file/folder event to occur on that path.


Using watchService.poll(long timeout, TimeUnit timeUnit) will wait until the specified timeout is reached before continuing.


If the watchService receives an event, or if the allowed time has passed, then it will continue execution. If there were no events and the timeout was reached, the WatchKey object returned by the watchService.poll(int timeout) will be null; otherwise, the WatchKey object returned will contain the relevant information for the event that has occurred.
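To illustrate the null return described above, here is a minimal sketch (the temporary directory is created just for the demonstration, so no events ever arrive):

```java
import java.nio.file.*;
import java.util.concurrent.TimeUnit;

public class PollTimeoutExample {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("watchDemo");   // fresh, quiet directory
        WatchService service = dir.getFileSystem().newWatchService();
        dir.register(service, StandardWatchEventKinds.ENTRY_CREATE);
        // poll(timeout, unit) returns null when no event occurs within the window...
        WatchKey key = service.poll(250, TimeUnit.MILLISECONDS);
        System.out.println("Timed poll returned: " + key);
        // ...whereas service.take() would block indefinitely until an event arrives.
        service.close();
    }
}
```

The blocking take() variant is the better fit for a dedicated watcher thread, while the timed poll() shown here suits loops that must also do other work between checks.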


Because many events can occur at the same time (say, for example, moving an entire folder or pasting a bunch of files into a folder), the WatchKey might contain more than one event. You can use the WatchKey to obtain all the events that are associated with that key by calling the watchKey.pollEvents() method.


The watchKey.pollEvents() call will return a list of watch events that can be iterated over. Each WatchEvent contains information on the actual file or folder to which the event refers (for example, an entire subfolder could have been moved or deleted) and the event type (add, edit, delete). Only those events that were registered on the WatchService will be processed.


Once an event has been processed, it is important to call watchKey.reset(). The reset will return a Boolean value indicating whether the WatchKey is still valid.


A WatchKey becomes invalid if it is canceled or if its originating WatchService is closed. If the reset returns false, you should break out of the watch loop.


Reading Property Files



You want to establish some configuration settings for your application, and you want the ability to modify the settings manually or programmatically. Moreover, you wish to enable some of the configurations to be changed on the fly, without the need to recompile and redeploy.



Create a properties file to store the application configurations. Using the Properties object, load properties stored within the properties file for application processing.


Properties can also be updated and modified within the properties file. The following example demonstrates how to read a properties file named properties.conf, load the values for application use, and finally set a property and write it to the file.

File file = new File("properties.conf");
Properties properties = null;
try {
    if (!file.exists()) {
        file.createNewFile();
    }
    properties = new Properties();
    properties.load(new FileInputStream("properties.conf"));
} catch (IOException e) {
    e.printStackTrace();
}
boolean shouldWakeUp = false;
int startCounter = 100;
String shouldWakeUpProperty = properties.getProperty("ShouldWakeup");
shouldWakeUp = (shouldWakeUpProperty == null) ? false : Boolean.parseBoolean(shouldWakeUpProperty.trim());
String startCounterProperty = properties.getProperty("StartCounter");
try {
    startCounter = Integer.parseInt(startCounterProperty);
} catch (Exception e) {
    System.out.println("Couldn't read startCounter, defaulting to " + startCounter);
}
String dateFormatStringProperty = properties.getProperty("DateFormatString", "MMM dd yy");
System.out.println("Should Wake up? " + shouldWakeUp);
System.out.println("Start Counter: " + startCounter);
System.out.println("Date Format String:" + dateFormatStringProperty);
// Setting a property
properties.setProperty("StartCounter", "250");
try { FileOutputStream("properties.conf"), "Properties Description");
} catch (IOException e) {
    e.printStackTrace();
}


How It Works

The Java Properties class helps you manage program properties. It allows you to manage the properties either via external modification (someone editing a property file directly) or internally by using the setProperty() method.


The Properties object can be instantiated either without a file or with a preloaded file. The files that the Properties object read are in the form of [name]=[value] and are textually represented. If you need to store values in other formats, you need to write to and read from a String.


If you are expecting the files to be modified outside the program (the user directly opens a text editor and changes the values), be sure to sanitize the input, such as trimming the values to remove extra spaces and ignoring case where appropriate.


To query the different properties programmatically, you call the getProperty(String) method, passing the String-based name of the property whose value you want to retrieve. The method will return null if the property is not found.


Alternatively, you can invoke the getProperty(String, String) method, which returns the second parameter as a default value if the property is not found in the Properties object. It is a good practice to specify default values in case the file doesn’t have an entry for a particular key.


Upon looking at a generated property file, you will notice that the first two lines indicate the description of the file and the date when it was modified. These two lines start with #, which in Java property files is the equivalent of a comment. The Properties object will skip any line starting with # when processing the file.
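A generated properties.conf might therefore look like the following (all values hypothetical):

```properties
#Properties Description
#Sat Feb 20 06:14:56 CST 2016
StartCounter=250
ShouldWakeup=true
DateFormatString=MMM dd yy
```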


Note If you allow users to modify your configuration files directly, it is important to have validation in place when retrieving properties from the Properties object. One of the most common issues encountered in property values is leading and/or trailing spaces.


If specifying a Boolean or integer property, be sure that the value can be parsed from a String. At a minimum, catch the exception thrown while parsing so that the application survives an unconventional value (and log the offending value).
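A small helper along those lines might look like this (the helper name and property keys are illustrative, not from the original example):

```java
import java.util.Properties;

public class SafePropertyParsing {
    // Trim the raw value and fall back to a default on missing or unparseable input
    static int getIntProperty(Properties props, String key, int defaultValue) {
        String raw = props.getProperty(key);
        if (raw == null) {
            return defaultValue;
        }
        try {
            return Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            System.out.println("Bad value for " + key + ": '" + raw + "', using " + defaultValue);
            return defaultValue;
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("StartCounter", " 250 ");   // note the stray spaces
        props.setProperty("MaxRetries", "many");      // unparseable value
        System.out.println(getIntProperty(props, "StartCounter", 100));  // parses to 250
        System.out.println(getIntProperty(props, "MaxRetries", 3));      // falls back to 3
        System.out.println(getIntProperty(props, "Missing", 7));         // falls back to 7
    }
}
```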


Uncompressing Files


Problem: Your application has the requirement to decompress and extract files from a compressed .zip file.


Using the package, you can open a .zip file and iterate through its entries. While traversing the entries, directories can be created for directory entries.


Similarly, when a file entry is encountered, write the decompressed file into an uncompressed/ directory. The following lines of code demonstrate how to perform the decompression and file iteration technique, as described.

ZipFile file = null;
try {
    file = new ZipFile(""); // name of the archive to extract
    FileSystem fileSystem = FileSystems.getDefault();
    Enumeration<? extends ZipEntry> entries = file.entries();
    String uncompressedDirectory = "uncompressed/";
    Files.createDirectory(fileSystem.getPath(uncompressedDirectory));
    while (entries.hasMoreElements()) {
        ZipEntry entry = entries.nextElement();
        if (entry.isDirectory()) {
            System.out.println("Creating Directory:" + uncompressedDirectory + entry.getName());
            Files.createDirectories(fileSystem.getPath(uncompressedDirectory + entry.getName()));
        } else {
            InputStream is = file.getInputStream(entry);
            System.out.println("File :" + entry.getName());
            BufferedInputStream bis = new BufferedInputStream(is);
            String uncompressedFileName = uncompressedDirectory + entry.getName();
            Path uncompressedFilePath = fileSystem.getPath(uncompressedFileName);
            Files.createFile(uncompressedFilePath);
            try (FileOutputStream fileOutput = new FileOutputStream(uncompressedFileName)) {
                while (bis.available() > 0) {
                    fileOutput.write(;
                }
            }
            System.out.println("Written :" + entry.getName());
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}


How It Works



To work with the contents of a .zip archive, create a ZipFile object. A ZipFile object can be instantiated by passing the name of a .zip archive to the constructor. After creating the object, you gain access to the archive's information.


Each ZipFile object will contain a collection of entries that represent the directories and files contained within the archive, and by iterating through the entries you can obtain information on each of the compressed files.


Each ZipEntry instance will have the compressed and uncompressed size, the name, and the input stream of the uncompressed bytes.


The uncompressed bytes can be read into a byte buffer by generating an InputStream, and later (in our case) written to a file. Using the input stream's available() method, it is possible to determine how many bytes can be read without blocking the process.


Once the determined number of bytes has been read, then those bytes are written to the output file. This process continues until the total number of bytes has been read.


Note Reading the entire file into memory may not be a good idea if the file is extremely large. If you need to work with a large file, it’s best to first write it in an uncompressed format to disk (as in the example) and then open it and load it in chunks.


If the file that you are working on is not large (you can limit the size by checking the getSize() method), you can probably load it in memory.
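A chunked copy along those lines might be sketched as follows (the archive and entry names are invented, and the example builds its own small archive so it can run standalone):

```java
import*;
import java.util.zip.*;

public class ChunkedExtract {
    public static void main(String[] args) throws IOException {
        // Build a small archive first so the example is self-contained
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(""))) {
            zos.putNextEntry(new ZipEntry("data.txt"));
            zos.write("hello chunked world".getBytes("UTF-8"));
            zos.closeEntry();
        }
        // Extract the entry in fixed-size chunks instead of loading it all into memory
        try (ZipFile zip = new ZipFile("")) {
            ZipEntry entry = zip.getEntry("data.txt");
            try (InputStream in = zip.getInputStream(entry);
                 OutputStream out = new FileOutputStream("data.txt")) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = > 0) {
                    out.write(buffer, 0, read);   // write only the bytes actually read
                }
            }
        }
        System.out.println("Extracted data.txt");
    }
}
```

A fixed 8 KB buffer keeps memory use constant regardless of the entry's uncompressed size, which is the point of the note above.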


Managing Operating System Processes


Problem: You would like the ability to identify and control native operating system processes from your Java application.



Utilize the Process API, enhanced in Java 9, to obtain information regarding individual operating system processes or to destroy them. In this example, we will call upon the method to retrieve information about an operating system process.


In particular, we will take a look at the current JVM process that is running, and we’ll start another process from it. Lastly, we’ll interrogate the new process.

import java.lang.ProcessBuilder;
import java.lang.Process;
import java.time.Instant;
import java.time.Duration;

public class Recipe08_14 {

    public static void printProcessDetails(ProcessHandle currentProcess) {
        // Get the instance of process info
        ProcessHandle.Info currentProcessInfo =;
        if (currentProcessInfo.command().orElse("").equals("")) {
            return;
        }
        // Get the process id
        System.out.println("Process id: " +;
        // Get the command pathname of the process
        System.out.println("Command: " + currentProcessInfo.command().orElse(""));
        // Get the arguments of the process
        String[] arguments = currentProcessInfo.arguments().orElse(new String[]{});
        if (arguments.length != 0) {
            System.out.print("Arguments: ");
            for (String arg : arguments) {
                System.out.print(arg + " ");
            }
            System.out.println();
        }
        // Get the start time of the process
        System.out.println("Started at: " +
        // Get the time the process ran for
        System.out.println("Ran for: " + currentProcessInfo.totalCpuDuration()
                .orElse(Duration.ofMillis(0)).toMillis() + "ms");
        // Get the owner of the process
        System.out.println("Owner: " + currentProcessInfo.user().orElse(""));

    public static void main(String[] args) {
        ProcessHandle current = ProcessHandle.current();
        ProcessHandle.Info currentInfo =;
        System.out.println("Command Line Process: " + currentInfo.commandLine());
        System.out.println("Process User: " + currentInfo.user());
        System.out.println("Process Start Time: " + currentInfo.startInstant());
        System.out.println("PID: " +;
        ProcessBuilder pb = new ProcessBuilder("ls");
        try {
            Process process = pb.start();
            process.children().forEach((p) -> printProcessDetails(p));
            ProcessHandle pHandle = process.toHandle();
            System.out.println("Parent of Process: " + pHandle.parent());
        } catch ( e) {
            e.printStackTrace();
        }
    }

Executing this class produces output similar to the following:

Command Line Process: Optional[/Library/Java/JavaVirtualMachines/jdk1.9.0.jdk/Contents/Home/bin/java Recipe0814]
Process User: Optional[Juneau]
Process Start Time: Optional[2016-02-20T06:14:56.064Z]
PID: 10892
Parent of Process: Optional.empty


How It Works


The process API has been enhanced in Java 9 to provide the ability to obtain valuable information about operating system processes. The ProcessHandle interface has been added to the API, providing an info() method that can be used to interrogate a specified process and retrieve more information. A number of other useful utility methods have been added to obtain information about a specified process.


The ProcessHandle.Info object, an informational snapshot of the current process, is returned from calling upon the ProcessHandle info() method.


ProcessHandle.Info can be utilized to return the executable command of a process, the process start time, and several other useful features. The table shows the different methods available to ProcessHandle.Info.

Method              Description

arguments()         Returns an array of Strings of the process arguments.
command()           Returns the executable pathname of the process.
commandLine()       Returns the command line of the process.
startInstant()      Returns the start time of the process.
totalCpuDuration()  Returns the total accumulated CPU time of the process.
user()              Returns the user under which the process is running.


The ProcessHandle interface can be utilized to return information such as the process children, PID (Process ID), parent, and so forth. It can also be used to determine a number of useful bits of information, such as whether the process is still alive.


To utilize the API, call upon the static ProcessHandle.current() method to retrieve a ProcessHandle for the current process, or obtain one from a Process via its toHandle() method. The handle can then be used to execute commands or retrieve information about the process.


If utilized together with the Process and ProcessBuilder classes, the API can be used to spawn, monitor, and terminate operating system processes.
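A brief sketch of spawning and then terminating a process through these classes (the sleep command is assumed to be available, as on most *nix systems):

```java
import java.util.concurrent.TimeUnit;

public class ProcessControlExample {
    public static void main(String[] args) throws Exception {
        // Spawn a long-running child process
        Process process = new ProcessBuilder("sleep", "60").start();
        ProcessHandle handle = process.toHandle();
        System.out.println("Child PID : " +;
        System.out.println("Alive?    : " + handle.isAlive());
        // Request termination, then wait briefly for the child to exit
        handle.destroy();
        process.waitFor(5, TimeUnit.SECONDS);
        System.out.println("Alive after destroy? " + handle.isAlive());
    }
}
```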


Java Modularity


One of the most important new features of Java 9 is the modular system, which came to fruition via Project Jigsaw (also referred to as JSR 376: The Java Platform Module System). The purpose of the project was to construct a system that provides a reliable configuration to replace the classpath mechanism.


It also focused on providing strong encapsulation between different modules. The module system is composed of all modules that constitute the Java Platform, as the platform was reconstructed from the ground up and modularized as part of this project.


Application developers and library creators can also create modules, whether they be single modules that perform a specific task or a number of modules that together form an application.


In this blog, the basic fundamentals for the development and management of modules will be touched upon. Although Java Modularity is a very large topic, this blog is terse, providing just enough information to get started with module development quickly. Those interested in learning more details about Java Modularity should consult more in-depth blogs and documentation.


Constructing a Module



Problem: You wish to create a simple module that will print a message to the command line or via a logger.


Develop a module so that it can be executed via the java executable. Begin by creating a new directory somewhere on your file system; in this case, name it "recipe." Create a new file named, which is the module descriptor. In this file, list the module name as follows:

module org.firstModule {}

Next, create a folder named org within the recipe directory that was created previously, and then a folder named firstModule within the org folder. Now, create the bulk of the module by adding a new file named inside of the org/firstModule folder. Place the following code within the file:

package org.firstModule;

public class Main {

    public static void main(String[] args) {
        System.out.println("This is my first module");
    }
}




How It Works


The simplest modules can be built from two files: the module descriptor and a single Java class file that contains the business logic. The solution to this example follows this pattern to create a very basic module that performs the single task of printing a phrase to the command line.


The module is packaged inside a directory that is entitled the same as the module name. In the example, this directory is named org.firstModule, as it follows the standard module naming convention.


In reality, a module can be named anything, so long as it does not conflict with other module names. However, it is recommended to utilize the inverse-domain-name pattern of packages. This causes the module name to become prefixed with its containing package names.


In this solution, the module descriptor contains the module name, followed by opening and closing braces. In a more complex module, the names of other module dependencies can be placed within the braces, along with the names of packages that this module exports for others to use. The module descriptor should be located at the root of the module directory.


The inclusion of this file indicates to the JVM that this is a module. This directory can be made into a JAR file as I will discuss later in the blog, and this creates a Modular JAR.


The other file that must be created to develop a simple module is the Java class file containing the business logic. This file should be placed inside of the org/firstModule directory, and its package should indicate org.firstModule. In this solution, the main() method of the Main class will be invoked when the module is executed.


Note that any dependencies that the module would require must be listed within the module descriptor. In this simple module, there are no dependencies. After setting up this directory structure and placing these two files into their respective locations, the module development is complete.
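Assuming the sources live under a src directory (as used when compiling in the next recipe), the resulting layout would be:

```
src/
└── org.firstModule/
    ├──            (module descriptor)
    └── org/
        └── firstModule/
            └──
```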


Compiling and Executing a Module


Problem: You’ve developed a basic module. Now you would like to compile the module and execute it.



Make use of the javac utility to compile the module, specifying the -d flag to indicate the folder into which the compiled code will be placed. After the -d option, each of the source files to be compiled must be listed, including the descriptor.


Separate each of the file paths with a space.

The following command compiles the sources and places the result into a directory named mods/org.firstModule.

javac -d mods/org.firstModule src/org.firstModule/ src/org.firstModule/org/firstModule/


Now that the code has been compiled, it is time to execute the module. This can be done with the standard java executable. However, the --module-path option, which is new in Java 9, must be used to indicate the path of the module sources. The -m option is used to specify the Main class of the module.

java --module-path mods -m org.firstModule/org.firstModule.Main


The output from executing the module should be as follows:

This is my first module

If more than one module were going to be compiled, they could be compiled separately using a similar technique to the one described previously, or they could be compiled all at once. The syntax for compiling two modules that contain a dependency is as follows:

javac -d mods --module-source-path src $(find src -name "*.java")


How It Works


As you know, before a Java application can be executed, it must be compiled. Modules are the same way in that they must be compiled before they can be used.


The standard javac utility has been enhanced to accommodate the compilation of modules by simply listing out the fully qualified paths to the file and each subsequent .java file contained within the module. The -d option is used to specify the destination for the compiled sources.


In the solution, the javac utility is invoked and the destination is set to the location mods/org.firstModule. Each of the .java files that constitute the module is listed afterward, separated by a space.


If a particular module included many .java source files, then simply specifying an asterisk (*) wildcard in the path after each package, rather than the individual file names, would suffice to compile each .java file contained within the specified package(s).


javac -d mods/org.firstModule src/org.firstModule/ src/org.firstModule/org/firstModule/*.java


The same java executable that is used to execute most Java applications can be used to execute a module. With the help of some new options, the java executable is able to execute a module with all of the required dependencies.


The --module-path option specifies the path to where the compiled module resides. If there are a number of modules that comprise an application, specify the path to the module that contains the application entry point.


The -m option is used to specify the module containing the application entry point, as well as the entry-point class's fully qualified name. In the solution, the main class resides within the module named org.firstModule, in a package named org.firstModule.


Creating a Module Dependency


Problem: You wish to develop a module that depends upon and utilizes another module.


Develop at least two modules, where one of the modules depends upon the other. Then specify the dependency within the module descriptor.


The module that was developed in the previous recipes will be used in this solution as well, but it will be altered a bit to make use of another module named org.secondModule. This second module will accept a number and then calculate a room rate.


To start, create the module org.secondModule by creating a new directory of that name within the src directory. Next, create a .java file named and place it into that location. The contents of the module descriptor should look as follows:

module org.secondModule {
    exports org.secondModule;
}



The module will be making the sources contained within the org.secondModule package available to other modules that require it. The sources for the module should be placed in a class named, and this file should be placed into the src/org.secondModule/org/secondModule directory. Copy the following code into


package org.secondModule;

import java.math.BigDecimal;

public class Calculator {
    public static BigDecimal calculateRate(BigDecimal days, BigDecimal rate) {
        return days.multiply(rate);
    }
}
The code that was originally used for org.firstModule (Recipes 22-1 and 22-2) should be modified to make use of org.secondModule as follows:
package org.firstModule;

import org.secondModule.Calculator;
import java.math.BigDecimal;

public class Main {
    public static void main(String[] args) {
        System.out.println("This is my first module.");
        System.out.println("The hotel stay will cost " + Calculator.calculateRate(
                BigDecimal.TEN, new BigDecimal(22.95)));
    }
}
The module descriptor for org.firstModule must be modified to require the dependency:
module org.firstModule {
    requires org.secondModule;
}
To compile the modules, specify the javac command, using a wildcard to compile all code within the src directory:

javac -d mods --module-source-path src $(find src -name "*.java")

Lastly, to execute org.firstModule along with its dependency, use the same syntax that was used previously to execute the module. The module system takes care of gathering the required dependencies.


How It Works


A module can contain zero or many dependencies. The readability of a module depends upon what has been exported in the module descriptor of that module. Likewise, a module must require another module in order to read from it.


The module system practices strong encapsulation. A module is always readable to itself, but other modules can only make use of the packages that are exported from it. Furthermore, only public members of those packages are available for use by other modules.


To make one module dependent upon another, a requires declaration must be placed in the module descriptor, specifying the name of the module on which it depends. In the solution, org.firstModule is dependent upon org.secondModule, since its module descriptor declares the requirement.


This means that org.firstModule is able to utilize any public features residing within the org.secondModule package of the org.secondModule module. If there were more packages contained within org.secondModule, they would not be available to org.firstModule unless they were also exported within the module descriptor for org.secondModule.
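As an illustrative sketch (the internal package name is invented), a descriptor that exports only one of its packages would look like this:

```java
module org.secondModule {
    exports org.secondModule;
    // org.secondModule.internal is NOT exported, so other modules cannot read it
}
```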


Utilization of the module descriptor for Java 9 modules trumps the classpath, as it is a much more robust means of declaring dependencies. However, if a Java 9 module were packaged as a JAR, it can be used on older versions of Java by placing the JAR into the classpath, and the module descriptor will be ignored.


Modules can be compiled separately using the javac command, or they can be compiled all at once using the wildcard notation. Execution of a module is the same whether it depends upon zero or many other modules.


Packaging a Module


Problem: Your module has been developed and you wish to package it to make it portable.


Utilize the enhanced jar utility to package modules and also to make executable modules. To package the module that was developed in the previous recipes, navigate to the directory that contains the mods and src directories. From within that directory, execute the following commands via the command line:

mkdir lib

jar --create --file=lib/org.firstModule@1.0.jar --module-version=1.0 --main-class=org.firstModule.Main -C mods/org.firstModule .

This utility will package the module into a JAR file within the lib directory. The JAR file can then be executed with the java executable as follows:

java -p lib -m org.firstModule
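The --main-class option above assumes an entry point class along the lines of the following minimal sketch (illustrative only; in the actual module, this file declares package org.firstModule and lives under that package's directory):

```java
// Minimal stand-in for the module's entry point referenced by
// --main-class=org.firstModule.Main. In the real module this file
// would begin with `package org.firstModule;`.
public class Main {
    public static void main(String[] args) {
        System.out.println("org.firstModule running");
    }
}
```

With the JAR packaged as shown, java -p lib -m org.firstModule resolves the main class from the JAR's metadata, so no class name needs to be supplied at launch.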


Listing Dependencies or Determining JDK-Internal API Use


Problem: You would like to determine whether an existing application relies upon any of the internal JDK APIs that are inaccessible as of Java 9.


Use the jdeps tool to list module dependencies from the command line.

To see the list of dependencies for a given module, specify the --list-deps option as follows:

jdeps --list-deps <<your-jar.jar>>


Invoking this command will initiate output that includes each of the packages that the specified JAR file depends upon. For example, choosing a random JAR file from the GlassFish application server modules directory would produce something similar to the following:

jdeps --list-deps acc-config.jar



unnamed module: acc-config.jar


There are also applications that may make use of JDK-Internal APIs, which are now inaccessible to standard applications starting with Java 9. The jdeps tool can list such dependencies, making it possible to determine whether an application will run on Java 9 without issue. To utilize this functionality, specify the -jdkinternals option as follows:

jdeps -jdkinternals <<your-jar.jar>>
Invoking the jdeps utility to review a JAR that contains dependencies upon JDK-Internal APIs will produce output such as the following:
jdeps -jdkinternals security.jar
security.jar -> java.base
   ... -> JDK internal API (java.base)
   ... -> JDK internal API (java.base)

(jdeps prints one such line for each class that depends upon a JDK internal API; the individual class names are omitted here)


Warning: JDK internal APIs are unsupported and private to JDK implementation that is subject to be removed or changed incompatibly and could break your application. Please modify your code to eliminate dependence on any JDK internal APIs. For the most recent update on JDK internal API replacements, please check:
JDK Internal API          Suggested Replacement
----------------          ---------------------
...                       Use ... @since 1.4


How It Works

The jdeps (Java Dependency Analysis) tool was introduced in Java 8, and it is a command-line tool that is useful for listing static dependencies of JAR files.


Java 9 encapsulates many of the internal JDK APIs, making them inaccessible to standard applications. Prior to Java 9, there were circumstances that required applications to make use of such internal APIs.


Those applications will not run as expected on Java 9, so it is imperative such dependencies are found and resolved before attempting to run older code on Java 9.


The jdeps tool can be very useful for finding whether a JAR depends upon these internal APIs by listing out the dependencies if they exist.


If you wish to list the output in the .dot file format, specify the -dotoutput option along with -jdkinternals, as follows:

jdeps -dotoutput /java_dev/ -jdkinternals security.jar


The jdeps tool can also be helpful for determining JAR dependencies, in general.

The tool contains a --list-deps option to do just that.

Simply put, the --list-deps option lists each of the modules a specified JAR depends upon.
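Since Java 9, JDK command-line tools such as jdeps can also be invoked programmatically through the java.util.spi.ToolProvider SPI, which is handy for build or audit scripts written in Java. A minimal sketch follows; here the tool is only asked for its version, but the same call could pass arguments such as --list-deps your-jar.jar:

```java
import java.util.spi.ToolProvider;

public class JdepsRunner {
    public static void main(String[] args) {
        // Locate the jdeps tool shipped with the JDK (jdk.jdeps module).
        ToolProvider jdeps = ToolProvider.findFirst("jdeps")
                .orElseThrow(() -> new IllegalStateException("jdeps not found"));
        // Run it with arbitrary arguments; -version simply prints the version.
        int exitCode = jdeps.run(System.out, System.err, "-version");
        System.out.println("exit code: " + exitCode);
    }
}
```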


Providing Loose Coupling Between Modules

Problem: You would like to provide loose coupling between modules, such that one module may call upon another module as a service.


Make use of the service architecture that is built into the Java 9 module system. A service consumer declares loose coupling by specifying a "uses" clause in its module descriptor, indicating that the module makes use of a particular service.


The following example could be used for a module that has the task of providing a web service discovery API. In the example, the org.java9recipes.serviceDiscovery module exports its org.java9recipes.serviceDiscovery package and specifies that it uses the org.java9recipes.spi.ServiceRegistry service.

module org.java9recipes.serviceDiscovery {
    exports org.java9recipes.serviceDiscovery;
    uses org.java9recipes.spi.ServiceRegistry;
}


Similarly, a service provider must specify that it is providing an implementation of a particular service. One can do so by including a "provides" clause within the module descriptor.


In this example, the following module descriptor indicates that the service provider module provides the org.java9recipes.spi.ServiceRegistry service with the implementation org.dataregistry.DatabaseRegistry.

module org.dataregistry {
    requires org.java9recipes.serviceDiscovery;
    provides org.java9recipes.spi.ServiceRegistry
        with org.dataregistry.DatabaseRegistry;
}

The corresponding modules can now be compiled and used, and they will enforce loose coupling.
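At run time, the consumer looks up implementations through java.util.ServiceLoader rather than naming the provider class directly. The following is a minimal sketch; the nested ServiceRegistry interface is a stand-in for the real org.java9recipes.spi.ServiceRegistry SPI, and with no provider on the path the loader simply finds nothing:

```java
import java.util.ServiceLoader;

public class DiscoveryClient {
    // Stand-in for org.java9recipes.spi.ServiceRegistry.
    public interface ServiceRegistry {
        String name();
    }

    public static void main(String[] args) {
        // The consumer depends only on the interface; implementations are
        // discovered from whichever modules declare `provides ... with`.
        ServiceLoader<ServiceRegistry> loader =
                ServiceLoader.load(ServiceRegistry.class);
        long count = loader.stream().count();
        System.out.println("providers found: " + count);
    }
}
```

Because the lookup goes through the service interface, swapping in a different provider module requires no change to the consumer's code.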


How It Works



The concept of module services allows for loose coupling to be had between two or more modules. A module that makes use of a provided service is known as a service consumer, whereas a module that provides a service is known as a service provider.


Service consumers do not use any of a service provider's implementation classes; rather, they utilize interfaces. For the loose coupling to work, the module system must be able to easily identify any uses of services among previously resolved modules and, conversely, search for service providers through the set of observable modules.


To make the identification of the use of services easy, we specify the “uses” clause in a module descriptor to indicate that a module will make use of a provided service.


On the flip side, a service provider can easily be found by the module system as we specify the “provides” clause within the module descriptor of a service provider.


Utilizing the module service API, it is very easy for the compiler and runtime to see which modules make use of services, and also which modules provide them. This enables even stronger decoupling, as the compiler along with linking tools can ensure that providers are appropriately compiled and linked to such services.


Linking Modules

Problem: You wish to link a set of modules to create a modular runtime image.



Make use of the jlink tool to link the set of modules, along with their transitive dependencies. In the following excerpt, a runtime image is created from the org.firstModule module.

jlink --module-path $JAVA_HOME/jmods:mods --add-modules org.firstModule --output firstmoduleapp


How It Works

Sometimes it is handy to generate a runtime image of modules to make for easier transportation. The jlink tool provides this functionality, among others. In the solution, a runtime image named firstmoduleapp is created from the module named org.firstModule.


The --module-path option first indicates the path to the JDK's jmods directory, followed by any directories that contain modules to be incorporated into the runtime image. The --add-modules option is used to specify the names of the modules that should be included in the image.
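Like jdeps, jlink is exposed through the java.util.spi.ToolProvider SPI in a full JDK, so a linking step can be scripted from Java itself. A minimal sketch follows; only --version is passed here, but in practice the arguments would mirror the command shown in the solution:

```java
import java.util.spi.ToolProvider;

public class JlinkRunner {
    public static void main(String[] args) {
        // jlink registers itself as a ToolProvider in a full JDK install.
        ToolProvider jlink = ToolProvider.findFirst("jlink")
                .orElseThrow(() -> new IllegalStateException("jlink not found"));
        // Ask the tool for its version; real usage would pass
        // --module-path, --add-modules, and --output instead.
        int exitCode = jlink.run(System.out, System.err, "--version");
        System.out.println("exit code: " + exitCode);
    }
}
```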