Crawler Log Folder - Leetcode Solution



Difficulty: Easy

Topics: string, stack, array

Problem Statement:

Suppose you are given a folder named "folder" that contains a list of subfolders and files. A crawler program runs to index this folder tree and document its structure, logging the folder and file names to a log file in the following format:

"dir\n\tsubdir1\n\tsubdir2\n\t\tfile.ext"

The newline character '\n' separates consecutive entries. Each leading '\t' character denotes that the file or subfolder following it is nested one level deeper than an entry with one fewer tab; an entry with no tabs is a child of the root folder.

Given the log file logs, return the maximum depth of the folder tree (i.e., the number of levels traversed to reach the deepest entry).


Input: logs = ["dir1\n\tsubdir1\n\tsubdir2\n\t\tfile.ext","dir2\n\tsubdir1\n\tsubdir2\n\t\tsubdir3\n\t\t\tfile.ext"]
Output: 3
Explanation: The directory tree looks like this:

dir1
├── subdir1
└── subdir2
    └── file.ext
dir2
├── subdir1
└── subdir2
    └── subdir3
        └── file.ext

In the log file, the directory name is followed by a newline character '\n' and a tab character '\t', then the files or subdirectories under the directory. The depth of the directory tree is 3 since the deepest directory 'subdir3' is 3 levels deep.
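As a concrete illustration of the log format, here is a minimal parsing sketch (the helper name `parse_log` is ours, not part of the problem statement):

```python
def parse_log(log):
    """Split one log string into (depth, name) pairs, where depth = number of '\t' characters."""
    entries = []
    for entry in log.split("\n"):
        depth = entry.count("\t")        # each '\t' means one level deeper
        name = entry.replace("\t", "")   # strip the indentation to get the bare name
        entries.append((depth, name))
    return entries

print(parse_log("dir\n\tsubdir1\n\tsubdir2\n\t\tfile.ext"))
# → [(0, 'dir'), (1, 'subdir1'), (1, 'subdir2'), (2, 'file.ext')]
```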


To solve the problem, first, we need to parse the logs and create a tree representation of the folder structure.

We can use a stack to keep track of the current directory's depth and a map from each file or folder name to its depth.

We start by initializing the stack with a sentinel root node whose depth is lower than any real entry's (e.g., -1), so the stack is never emptied.

Next, we iterate through each log in the logs array and split it using the '\n' character as the separator, which gives us the directory name followed by its subdirectories and files as individual entries.

For each entry, we count its leading '\t' characters to determine its depth and strip them to obtain the bare name.

An entry's depth is simply the number of '\t' characters preceding it: each additional tab means one level deeper than the entry's parent.

We add the subdirectory or file name to the map with its corresponding depth level.

If the depth of a subdirectory is higher than the current top element of the stack, we push the subdirectory onto the stack.

If the depth is smaller than or equal to the top element's, we pop elements from the stack until the top is the new entry's parent, and then push the new entry.

Finally, we return the maximum depth level from the map.
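The push/pop rule described above can be sketched in isolation (the helper name `update_stack` and the -1 sentinel depth are our choices):

```python
def update_stack(stack, depth, name):
    # Pop entries until the top of the stack is shallower than the new entry,
    # i.e. until the top is the new entry's parent.
    while stack and stack[-1][0] >= depth:
        stack.pop()
    stack.append((depth, name))
    return stack

stack = [(-1, "root")]             # sentinel root so top-level entries always have a parent
update_stack(stack, 0, "dir")
update_stack(stack, 1, "subdir1")
update_stack(stack, 1, "subdir2")  # same depth: pops subdir1, then pushes subdir2
print(stack)
# → [(-1, 'root'), (0, 'dir'), (1, 'subdir2')]
```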

Pseudo-code of the algorithm:

stack = [(-1, "root")]          # sentinel root
tree = {}                       # name -> depth

for log in logs:
    for entry in log.split("\n"):
        depth = entry.count("\t")
        name = entry.replace("\t", "")

        # pop until the top of the stack is this entry's parent
        while stack[-1][0] >= depth:
            stack.pop()

        stack.append((depth, name))
        tree[name] = depth

return max(tree.values())
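Putting the steps together, here is a runnable Python version of the pseudo-code above (the function name `max_folder_depth` is ours):

```python
def max_folder_depth(logs):
    """Return the maximum depth (tab count) of any entry in the logs."""
    stack = [(-1, "root")]  # sentinel root, deeper than nothing
    tree = {}               # name -> depth

    for log in logs:
        for entry in log.split("\n"):
            depth = entry.count("\t")       # each '\t' means one level deeper
            name = entry.replace("\t", "")  # strip indentation to get the bare name

            # Pop until the top of the stack is this entry's parent.
            while stack[-1][0] >= depth:
                stack.pop()

            stack.append((depth, name))
            tree[name] = depth

    return max(tree.values())

logs = ["dir1\n\tsubdir1\n\tsubdir2\n\t\tfile.ext",
        "dir2\n\tsubdir1\n\tsubdir2\n\t\tsubdir3\n\t\t\tfile.ext"]
print(max_folder_depth(logs))  # → 3
```

Note that the map keys entries by name, so entries sharing a name (like the two file.ext files here) overwrite each other; this does not affect the maximum as long as depths are processed in log order.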

Time Complexity:

The time complexity of this algorithm is O(n), where n is the total length of all the log strings: each character is examined a constant number of times during splitting and tab counting.

The algorithm iterates once through each log, and for each, it performs simple operations such as string manipulation and stack operations.

Space Complexity:

The space complexity of this algorithm is also O(n).

We use a stack to keep track of the directory structure, and in the worst case the nesting depth, and therefore the stack's size, grows linearly with the input, i.e., O(n).

We also use a map to store the directory and file names with their corresponding depth levels, which also has a space complexity of O(n).

Crawler Log Folder Solution Code