During the course of a workday, I encounter repeated information in data files that contain usernames, order IDs, or other single lines of text. In many of these cases, I need a list of the unique lines in those files. Here is how I accomplish it.
From the command line, use sort to reorder the file's lines, then pipe the result to uniq to remove adjacent duplicate lines.
sort FILE | uniq
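For example, if a hypothetical file named users.txt contained the lines "bob", "alice", "bob", and "alice", then running
sort users.txt | uniq
would print only "alice" and "bob", since sort groups the duplicates together and uniq collapses each group to a single line.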
The above command works fine if you don't mind the order of the lines being changed.
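If you do need to keep the lines in their original order, a common awk idiom removes duplicates without sorting:
awk '!seen[$0]++' FILE
Each line is printed only the first time it appears; "seen" is just an arbitrary array name.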
Another option is "sort -u", which sorts and removes duplicate lines in a single step; the output can then be piped to lpr for printing or redirected to another file.
For example:
sort -u FILE | lpr
or
sort -u FILE >> OTHERFILE
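Note that ">>" appends the output to OTHERFILE. If you want to overwrite the file instead, use a single ">":
sort -u FILE > OTHERFILE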