You're using an actual logger - that's good. However, for the errors that appear in cleanupDirectory, it's probably better to throw an exception and let the caller decide whether to treat them as fatal or to catch and log them.
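As a sketch of what that looks like (the name `cleanupDirectory` comes from your code, but the signature here is assumed): propagate the `IOException` and make the policy decision at the call site.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CleanupExample {
    // Hypothetical sketch: propagate the failure instead of logging it here,
    // so the caller chooses between catching and failing fast
    static void cleanupDirectory(Path dir) throws IOException {
        Files.delete(dir);  // may throw NoSuchFileException, DirectoryNotEmptyException, ...
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("cleanup-demo");
        cleanupDirectory(tmp);          // succeeds: the directory is empty
        try {
            cleanupDirectory(tmp);      // fails: the directory is already gone
        } catch (IOException e) {
            // the caller - not the helper - decides this is non-fatal
            System.out.println("caller decided: " + e.getClass().getSimpleName());
        }
    }
}
```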
You should move from the old File API to the nio API. Among other reasons, it replaces null-signalling with proper exceptions.
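To illustrate the difference (the path name is made up for the demo): `File.listFiles()` signals failure with `null`, which is easy to ignore silently, while the nio equivalent throws a typed exception that carries the offending path.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class NioVsFile {
    public static void main(String[] args) throws IOException {
        Path missing = Path.of("does-not-exist-12345");

        // Old API: failure is signalled by null, which the compiler won't force you to check
        File[] children = new File(missing.toString()).listFiles();
        System.out.println(children == null);  // null means "couldn't list" - but why?

        // nio API: failure is a typed exception that names the path
        try {
            Files.newDirectoryStream(missing).close();
        } catch (NoSuchFileException e) {
            System.out.println("failed for: " + e.getFile());
        }
    }
}
```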
There's some risk that your recursive implementation will blow the stack: through accident or malice, your program could be handed a very deep directory tree. Files.find() avoids this - it accepts an explicit maximum depth and takes care of the traversal for you, which is safer.
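A minimal demonstration of the depth cap (the tree shape and the limit of 3 are arbitrary for the demo): the traversal simply stops descending past the given depth, no matter how deep the tree really is.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class BoundedFind {
    public static void main(String[] args) throws IOException {
        // Build a tree deeper than the limit: root/a/a/a/a/a
        Path root = Files.createTempDirectory("find-demo");
        Files.createDirectories(root.resolve("a/a/a/a/a"));

        // Iterative traversal, capped at depth 3: no recursion, no stack risk.
        // Matches root itself (depth 0) plus three levels of "a".
        try (Stream<Path> found = Files.find(
                root, 3, (p, attrs) -> attrs.isDirectory())) {
            System.out.println(found.count());
        }
    }
}
```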
As with all operations of this kind, it's highly valuable to be able to perform a dry-run that doesn't actually delete anything, and your current code can't do that. It should. However, adding this functionality is not entirely straightforward for this problem.
There are many different approaches - an iterator, a stream, callbacks, etc. I demonstrate with a recursion-free algorithm that uses a Spliterator with a queue to simplify the work of passing results to the caller. It isn't particularly efficient - it builds up a partial tree in memory as a map of sets - but it does work, and should serve as inspiration for you.
```java
package com.stackexchange;

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Queue;
import java.util.Set;
import java.util.Spliterator;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class Main {
    public static class CleanupProcessor {
        public final Path root;
        public final int maxDepth;

        public CleanupProcessor(String root) {
            this(Path.of(root), 50);
        }

        public CleanupProcessor(Path root, int maxDepth) {
            this.root = root;
            this.maxDepth = maxDepth;
        }

        public void deleteAll() throws IOException {
            paths().forEach(CleanupProcessor::uncheckedDelete);
        }

        private static void uncheckedDelete(Path path) {
            try {
                Files.delete(path);
            } catch (IOException cause) {
                throw new UncheckedIOException(cause);
            }
        }

        public Stream<Path> paths() throws IOException {
            return StreamSupport.stream(new CleanSpliterator(), false);
        }

        private class CleanSpliterator implements Spliterator<Path> {
            private final Map<Path, Set<Path>> tree;
            private final Set<Path> deleted = new HashSet<>();
            private final Queue<Path> emit = new ArrayDeque<>();

            public CleanSpliterator() throws IOException {
                // Skips symlinks
                try (Stream<Path> search = Files.find(
                    root, maxDepth, CleanSpliterator::dirWithNoFiles
                )) {
                    // toMap() makes no mutability guarantees, so ask for a
                    // HashMap explicitly; advance() mutates this map
                    tree = search.collect(Collectors.toMap(
                        Function.identity(), CleanSpliterator::treeInit,
                        (a, b) -> a, HashMap::new
                    ));
                }
            }

            private static boolean dirWithNoFiles(Path path, BasicFileAttributes attrs) {
                if (!attrs.isDirectory())
                    return false;
                // If the directory has any non-subdirectory files, ignore it
                try (
                    DirectoryStream<Path> dir = Files.newDirectoryStream(
                        path, p -> !Files.isDirectory(p)
                    )
                ) {
                    return !dir.iterator().hasNext();
                } catch (IOException ex) {
                    throw new UncheckedIOException(ex);
                }
            }

            private static Set<Path> treeInit(Path path) {
                // Add all subdirectories to the mapped set
                try (
                    DirectoryStream<Path> dir = Files.newDirectoryStream(path, Files::isDirectory)
                ) {
                    return StreamSupport.stream(
                        dir.spliterator(), false
                    ).collect(Collectors.toUnmodifiableSet());
                } catch (IOException ex) {
                    throw new UncheckedIOException(ex);
                }
            }

            private void advance() {
                Set<Path> transfer = new LinkedHashSet<>();
                for (Map.Entry<Path, Set<Path>> kv : tree.entrySet()) {
                    // The set of deleted directories entirely includes the children of
                    // this directory, so schedule it for deletion
                    if (deleted.containsAll(kv.getValue()))
                        transfer.add(kv.getKey());
                }
                // If there are no results, clear the tree to signal to the
                // spliterator that it's done
                if (transfer.isEmpty())
                    tree.clear();
                else {
                    // Queue to emit from the spliterator
                    emit.addAll(transfer);
                    // Add to the deleted set to change the results on the next call
                    deleted.addAll(transfer);
                    // Reduce the iteration load on the next call, or even prevent
                    // the next call if every directory has been emitted
                    for (Path transferred : transfer)
                        tree.remove(transferred);
                }
            }

            @Override
            public int characteristics() {
                return ORDERED | DISTINCT | NONNULL;
            }

            @Override
            public long estimateSize() {
                // Wild guess: 50% of the remaining dirs will be empty
                return tree.size() / 2;
            }

            @Override
            public boolean tryAdvance(Consumer<? super Path> action) {
                if (emit.isEmpty()) {
                    if (tree.isEmpty())
                        return false;
                    advance();
                    if (emit.isEmpty())
                        return false;
                }
                action.accept(emit.remove());
                return true;
            }

            @Override
            public Spliterator<Path> trySplit() {
                // There are ways to parallel-split this algorithm, but they
                // aren't shown here to promote simplicity
                return null;
            }
        }
    }

    public static void main(String[] args) {
        CleanupProcessor proc = new CleanupProcessor(".");
        // Dry-run only
        try {
            proc.paths().forEach(System.out::println);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```
When run on the current project's directory, it produces sensible results for the following directories, which are either empty to begin with or become empty once their subdirectories are deleted:
```
.\src\main\resources
.\src\test\resources
.\build\generated\sources\headers\java\main
.\src\test\java
.\.gradle\8.4\vcsMetadata
.\build\generated\sources\annotationProcessor\java\main
.\build\tmp\compileJava\compileTransaction\backup-dir
.\src\test
.\build\generated\sources\annotationProcessor\java
.\build\generated\sources\headers\java
.\build\generated\sources\headers
.\build\generated\sources\annotationProcessor
.\build\generated\sources
.\build\generated
```