<p>Given a list <code>paths</code> of directory info strings, each containing a directory path and all the files (with their contents) in that directory, return <em>all the duplicate files in the file system in terms of their paths</em>. You may return the answer in <strong>any order</strong>.</p>
<p>A group of duplicate files consists of at least two files that have the same content.</p>
<p>A single directory info string in the input list has the following format:</p>
<ul>
<li><code>"root/d1/d2/.../dm f1.txt(f1_content) f2.txt(f2_content) ... fn.txt(fn_content)"</code></li>
</ul>
<p>It means there are <code>n</code> files <code>(f1.txt, f2.txt ... fn.txt)</code> with content <code>(f1_content, f2_content ... fn_content)</code> respectively in the directory "<code>root/d1/d2/.../dm</code>". Note that <code>n >= 1</code> and <code>m >= 0</code>. If <code>m = 0</code>, it means the directory is just the root directory.</p>
<p>The output is a list of groups of duplicate file paths. For each group, it contains all the file paths of the files that have the same content. A file path is a string that has the following format:</p>
<ul>
<li><code>"directory_path/file_name.txt"</code></li>
</ul>
<p><strong>Constraints:</strong></p>
<ul>
<li><code>paths[i]</code> consists of English letters, digits, <code>'/'</code>, <code>'.'</code>, <code>'('</code>, <code>')'</code>, and <code>' '</code>.</li>
<li>You may assume no files or directories share the same name in the same directory.</li>
<li>You may assume each given directory info represents a unique directory. A single blank space separates the directory path and file info.</li>
</ul>
<p> </p>
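<p>For reference, a common way to solve this is to parse each directory-info string, group full file paths by content in a hash map, and keep only the groups with at least two paths. The sketch below is not part of the problem statement; it is a minimal Python illustration, and the function name <code>findDuplicate</code> is just illustrative.</p>
<pre>
from collections import defaultdict
from typing import List

def findDuplicate(paths: List[str]) -> List[List[str]]:
    # Map file content -> list of full paths that contain it.
    groups = defaultdict(list)
    for info in paths:
        parts = info.split(" ")
        directory = parts[0]
        for entry in parts[1:]:
            # Each entry looks like "name.txt(content)".
            name, _, rest = entry.partition("(")
            content = rest[:-1]  # drop the trailing ')'
            groups[content].append(directory + "/" + name)
    # A duplicate group needs at least two files with the same content.
    return [g for g in groups.values() if len(g) > 1]
</pre>
<p>For example, <code>findDuplicate(["root/a 1.txt(abcd) 2.txt(efgh)", "root/c 3.txt(abcd)"])</code> would return <code>[["root/a/1.txt", "root/c/3.txt"]]</code>, since only the content <code>abcd</code> appears in more than one file.</p>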
<p><strong>Follow up:</strong></p>
<ul>
<li>Imagine you are given a real file system. How would you search for files: DFS or BFS?</li>
<li>If the file content is very large (GB level), how would you modify your solution?</li>
<li>If you can only read the file 1 KB at a time, how would you modify your solution? (A chunked-hashing sketch follows this list.)</li>
<li>What is the time complexity of your modified solution? Which parts are the most time- and memory-consuming, and how could you optimize them?</li>
<li>How can you make sure the duplicate files you find are not false positives?</li>
</ul>
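<p>One way to approach the large-file and 1 KB-read follow-ups (a sketch under assumed conditions, not a required answer): instead of keying the hash map on raw content, key it on <code>(file size, content hash)</code>, computing the hash incrementally so only one chunk is in memory at a time; to rule out false positives from hash collisions, compare candidate duplicates byte by byte at the end. The helper name <code>file_digest</code> below is hypothetical.</p>
<pre>
import hashlib

CHUNK_SIZE = 1024  # read 1 KB per call, per the follow-up constraint

def file_digest(path: str) -> str:
    # Hash the file incrementally so the whole content never sits in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()
</pre>
<p>Grouping first by file size is cheap (a metadata lookup) and prunes most candidates before any content is read.</p>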