Viewing Compressed Files from the Web
What do you get when you cross compressed text files, Perl, and the web? An easy way to save disk space, unclutter directories, and view contents of compressed files on the internet.
Files, files, and more files!
If you're like me, you have many text files lying around. System logs, server logs, script logs, security logs, and even some log logs. Plus, you may also have various gzipped files lying around. And, like me, you would rather have a nice web front-end to these files instead of a shell interface, and would rather they took up less space on your hard drive.
To give you an example, I keep all my web server logs, I have logs which monitor my home server's uptime, I have cron logs, and some gzipped software which I keep around to peek at the source files from time to time. All these files can really take up a lot of space as time goes on. How can I keep these files manageable, while freeing up some disk space? Three words: compression, Perl, web!
Compression is not only a useful way to distribute batches of files, but also a handy way to save space on your machine. There are various compression programs out there, but we will be dealing with files which have been passed through one of the most popular compression utilities, gzip. The tar utility will take multiple files and bundle them together, without compressing them. Gzipping the tar file then compresses that single bundle of files. The script discussed in this article will work with both file formats.
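As a concrete example, here is how a batch of files might be bundled and compressed from the shell. The 'logs' directory and file names are made up for illustration:

```shell
# Create a throwaway directory with one sample log file.
mkdir -p logs
echo "sample entry" > logs/access.log

# tar bundles the files together; gzip then compresses the bundle.
tar cf logs.tar logs
gzip -f logs.tar            # leaves logs.tar.gz behind

# The two steps can also be combined: tar czf logs.tar.gz logs
```

The resulting logs.tar.gz is exactly the kind of file the script in this article knows how to open.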
Perl, of course, is a very powerful scripting language. One of the great things about Perl is if there is something you want to do, it is likely someone else already thought of doing it. Dealing with tar/gzipped files is one of those things. A simple search on the CPAN shows the Archive::Tar module is just what we need to accomplish our goal. Since this module does not come with the standard distribution of Perl, you will need to install it. There is also an Archive::Zip module for handling zipped files, but this article will be concentrating on tar/gzipped files.
The web is a great medium for emulating what is on a filesystem. So, it makes the perfect medium to peek into tar/gzipped files without having to interact with the shell manually. The web is also good for those who do not have shell access, or who don't always have their shell handy.
So, with these three elements, we will construct a CGI script, using Perl, which will allow someone to peek into tar and gzip files, as well as the files contained within them. The program will read through specified directories looking for certain file extensions, and display the matching files as hyperlinks to a web client. When a hyperlink is chosen, the contents of the archive will be displayed, with each file being a hyperlink. Then, when a file is clicked on, the contents of that file will be displayed to the web client. You will have a looking glass into the text archives on your disk, while saving disk space.
Viewing File Contents
Let's assume we have some compressed files in various locations on a system we want to be able to view over the web. In essence, we have saved some space by compressing the files, and now we can save some time by making these files browsable from the web.
Perl to the rescue! As you will see, by using a combination of the CGI.pm module and the Archive::Tar module, we will be able to view our compressed files in only 52 lines of Perl. Let's begin.
01: #!/usr/bin/perl -wT
02: use strict;
03: use CGI qw(:standard);
04: use Archive::Tar;
05: use File::Basename;
Lines 1 through 5 begin the script by turning on warnings and taint checking. We also help preserve some sanity by using the strict pragma. We pull in the CGI.pm module, which we will use to read the incoming parameters, as well as to generate some HTML for us. The Archive::Tar module is what will actually deal with the compressed files. On its own, Archive::Tar handles tar files. If you wish to work with gzipped files as well, you will need to install the Compress::Zlib module. Luckily, you do not need to change your code, since Archive::Tar uses Compress::Zlib in the background. Finally, we pull in the File::Basename module, which we will use to get the name of our script when we create hyperlinks back to it.
We now have all the tools we need to begin.
06: my @DIRS = qw{tars /tmp};
07: my $SCRIPT = basename $0;
08: my $tar;
09: my %actions = ('list_archives' => \&list_archives,
                   ''              => \&list_archives,
                   'show_file'     => \&show_file,
                   'show_content'  => \&show_content,
    );
Lines 6 to 9 define our global variables. The @DIRS array holds the directory locations where we wish to look for compressed files. In this case, we are looking in the 'tars' directory (off of the current working directory), as well as the '/tmp' directory. More directories could be added or removed at your whim.
The $SCRIPT variable gets its value from the basename() function of the File::Basename module. By passing it $0, which holds the name and path of the currently running script, we get back just the filename. You will see how we use this later to create hyperlinks. A benefit of doing it this way is that you do not have to hard code a script name into the hyperlinks. Hard coding things like that can make future maintenance more of a pain.
The $tar variable is initialized, with no value. We will be using this variable to hold the name of the file we want to look at, but more on that in a moment.
Finally, we initialize the %actions hash. The hyperlinks we later create will carry an 'action' parameter. This hash maps each allowed action to the subroutine which performs it. If someone requests an action which is not a key in this hash, they will see an error.
10: print header;
Line 10 simply uses the CGI::header() method to print out the standard header.
11: my $action = param('action');
Line 11 utilizes the CGI::param() method to get the value, if any, of the 'action' parameter passed in the URI of the request. The value of this is then put into the $action variable.
12: if (param('tar')) {
13:     $tar = param('tar');
14:     $tar =~ s!\.\.?/!!g;
15: }
Lines 12 to 15 check if there is a 'tar' parameter being passed to the script via the requesting URI. If there is, the value is put into the $tar variable, and then $tar is stripped of all occurrences of ./ and ../. We expect that the 'tar' parameter is going to contain the path and name of a tar (or gzipped) file. But we don't want people to trick the script into opening files outside the directories listed in @DIRS. By stripping out those two sequences, we make sure that someone isn't trying to backtrack up the directory tree and open arbitrary files on the system. There are really two ways to handle this: 'collapse' the directory path, as I am doing here, or reject the bad path altogether. I chose to collapse the path simply to be lazy. The same end is achieved both ways: no directory backtracking.
16: if (exists $actions{$action}) {
17:     $actions{$action}->();
18: } else {
19:     print qq{Sorry, '$action' isn't supported.<BR>};
20: }
Lines 16 through 20 check whether the requested action is one we have defined. If so, line 17 calls the subroutine reference stored as the value of the matching entry in %actions. If we do not support the requested action, line 19 prints an error to the screen.
21: sub list_archives {
22:     print h2("List of files");
23:     for my $dir (@DIRS) {
24:         print qq{Directory: $dir<br>\n};
25:         opendir(DIR, $dir) or print qq{$!<p>\n} and next;
26:         my @files = grep {/\.tar(?:\.gz)?$/} readdir(DIR);
27:         for (@files) {
28:             (my $short_name = $_) =~ s/\.tar.*$//;
29:             print a({-href=>"tar.cgi?action=show_file&tar=$dir/$_"}, $short_name), br;
30:         }
31:         closedir DIR;
32:         print p;
33:     }
34: }
Lines 21 to 34 make up the list_archives() subroutine. This subroutine is called when we are to list all of the .tar* files in the directories contained in @DIRS. We begin by displaying a heading for the page, on line 22. Then, we loop through the @DIRS array, initializing the $dir variable each time with the directory name we are currently looking through. Line 24 prints out the directory name we are about to get a listing for. Line 25 uses the opendir() function to open a handle, DIR, to read in the file contents of the directory. If there is a problem opening the directory, the error, such as "No such file or directory", will be printed, and the next will send the iteration on to the next directory.
Line 26 creates the @files array by grepping through the output of readdir(DIR), which is the list of files in the directory. The grep is looking for all files which end in .tar or .tar.gz. If a filename matches that pattern, it is added to the @files array.
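The readdir/grep filter can be tried on a scratch directory (the directory and file names here are invented for the demo):

```shell
# Set up a directory with a mix of matching and non-matching files.
mkdir -p demo
touch demo/a.tar demo/b.tar.gz demo/notes.txt

# Only names ending in .tar or .tar.gz survive the grep.
perl -le '
  opendir my $dh, "demo" or die $!;
  print for sort grep { /\.tar(?:\.gz)?$/ } readdir $dh;
'
```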
Line 27 now begins to loop through the filenames. Each filename has its extension taken off, for display purposes, and the CGI::a() method, on line 29, is used to create a hyperlink back to this script. The link will pass the action parameter with the action of show_file, as well as a tar parameter, whose value is the location of the requested file. This link allows the user to choose the file to look at (since the extension is removed, it looks like a directory name). If a user didn't know what was under the hood of this CGI, it would appear as if they were traversing a directory listing, as opposed to peering inside compressed files.
Lines 31 and 32 close the directory handle, and print an HTML <p> tag to finish off the subroutine.
35: sub show_file {
36:     print h2("Contents of $tar:");
37:     print qq{Sorry, '$tar' does not exist.} and return unless -e $tar;
38:     my @files = Archive::Tar->list_archive($tar);
39:     for (@files) {
40:         if (/\/$/) {
41:             print qq{$_<br>\n};
42:         } else {
43:             print '<dd>', a({-href => "tar.cgi?action=show_content&file=$_&tar=$tar"}, $_), '</dd>';
44:         }
45:     }
46: }
Line 35 begins the show_file() subroutine. This subroutine is called when a link displayed by the list_archives() subroutine is clicked. It takes the file name and displays a list of the files within the archive. Line 36 simply shows a heading. Line 37 checks for the existence of the tar file we want to peek inside of. If it does not exist, an error message is displayed and we return from the subroutine. This should only occur when someone attempts to alter the URI to have a different file name. Line 38 uses the Archive::Tar->list_archive() class method, with its one argument being the location of the wanted file, to get a list of the files in the tarball.
Lines 39 to 45 loop through the list of files, which was assigned to @files. If a filename ends with a /, then it is a directory. We check for this on line 40, and if it matches, we display only the name of the directory, on line 41. Since we are peeking inside the tar file, and not actually extracting the contents, we cannot use -d to check whether we have a directory or not. That is why we use the pattern match. If there is no trailing /, then we are dealing with a filename. Line 43 prints out a hyperlink with the action show_content, which we will cover in a moment. The link also passes the file parameter, which is the name of the file we want to see, and the tar parameter, which is the tar file. We also slightly indent the filename, simply for aesthetic reasons.
47: sub show_content {
48:     my $file = param('file');
49:     my $tar_obj = Archive::Tar->new($tar);
50:     print qq{<pre>} . $tar_obj->get_content($file) . qq{</pre>};
51: }
Lines 47 to 51 comprise the show_content() subroutine, the final subroutine of the script. This subroutine displays the contents of the file the user selected. First, line 48 gets the name of the file which was sent in the QUERY_STRING. Line 49 then creates a new Archive::Tar object, $tar_obj. The name of the file we will be working with is passed as an argument to new(), which will in turn open the archive and read its contents into memory. Line 50 displays the contents of the requested file. This is done by printing the output of Archive::Tar's get_content() object method between a set of HTML <pre> tags. Since we are expecting only text files for this application, this should display the text nicely in the web client.
52: print p, a({-href=>$SCRIPT},"Home");
Finally, line 52 ends the script by displaying a link back to the original listing of filenames. This will be displayed on every page.