Lock files help prevent multiple instances of a given operation (i.e. system command, script, batch file or application) from running at the same time. This is especially important when you are going to cron something to execute automatically that may take an extended period of time to complete, or when it's a system/bandwidth intensive process. With Linux, creating & utilizing lock files is very simple & straightforward.
To create a unique lock file, run this command:
tr -dc '[:alnum:]' < /dev/urandom | head -c12; echo
This will return a random alpha-numeric value. For example: EU3K1qEPB6SM
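If you generate the name inside a script, it helps to capture it in a variable so the same name can be reused. A minimal sketch (the variable name LOCKNAME is just an illustration, not part of any standard):

```shell
# Generate a random 12-character alphanumeric lock file name and keep
# it in a variable for reuse later in the script.
LOCKNAME=$(tr -dc '[:alnum:]' < /dev/urandom | head -c12)
echo "$LOCKNAME"
```

Note that the character class is quoted ('[:alnum:]') so the shell cannot glob-expand it against files in the current directory.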
To invoke the file locking, simply use the command below when you execute a given command, script, batch file or application.
/usr/bin/flock -w 60 /var/lock/[LOCKFILE] [YOUR_COMMAND]
[LOCKFILE] is the unique lock file name you received from the preceding command.
[YOUR_COMMAND] is the command, script, batch file or application you are attempting to execute.
Please note that you can change the value of the '-w' option to whatever length of time (in seconds) you want to wait. If the lock cannot be acquired within that time frame, execution of the given command will fail.
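You can see the timeout behaviour for yourself: hold the lock in the background, then try to take it with a short wait. The second flock gives up when '-w' expires and exits with a non-zero status (1 by default). /tmp/demo.lock below is just a throwaway path for this sketch:

```shell
# Hold the lock for 5 seconds in the background.
/usr/bin/flock /tmp/demo.lock sleep 5 &
sleep 1                                  # let the background job grab the lock
# Try to acquire the same lock, waiting at most 2 seconds.
/usr/bin/flock -w 2 /tmp/demo.lock true
echo "exit status: $?"                   # non-zero: the lock was not acquired in time
wait
```

This exit status is what you would check in a wrapper script if you need to log or alert on skipped runs.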
So if we are trying to execute 'myscript.pl' & used the example lock file name above, the command would look something like this:
/usr/bin/flock -w 60 /var/lock/EU3K1qEPB6SM /home/someuser/myscript.pl;
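In a crontab, the same pattern might look like the hypothetical entry below. Here '-n' (non-blocking) is used instead of '-w', so a run is skipped immediately if the previous one is still holding the lock, which is usually what you want for recurring jobs:

```
# Run myscript.pl every hour; skip the run if the lock is still held.
0 * * * * /usr/bin/flock -n /var/lock/EU3K1qEPB6SM /home/someuser/myscript.pl
```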
Now that you have the basic idea of how you can invoke lock files, let's explain a little further how you can use this.
The underlying goal of file locking is to create/use a single lock file for a group of related operations (i.e. commands, scripts, batch files or applications) that should not be executed while any other member of that group is currently running. Each group can consist of a single operation or multiple ones that work together in some way.
An example of this is where you have 2 scripts that both modify data within a very large flat file, and more than 1 running instance of either script would result in damage to that file. In this case, you would want to use the same lock file when executing both scripts, because you don't want script A destroying the changes that are actively being applied by script B, and vice versa. This also prevents a 2nd copy of the same script from executing. By using the lock file, you ensure only 1 instance of those scripts is running at any given time, & it must complete before another can proceed.
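The serializing effect of a shared lock file can be demonstrated with two backgrounded commands that append start/end markers to the same file. With the lock, each command's pair of markers always appears together, never interleaved. The temporary file names come from mktemp and are specific to this sketch:

```shell
# Two operations sharing one lock file serialize against each other.
LOCK=$(mktemp)
OUT=$(mktemp)
flock "$LOCK" sh -c "echo A-start >> $OUT; sleep 1; echo A-end >> $OUT" &
flock "$LOCK" sh -c "echo B-start >> $OUT; sleep 1; echo B-end >> $OUT" &
wait
cat "$OUT"    # whichever job ran first, its start/end pair is uninterrupted
rm -f "$LOCK" "$OUT"
```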
We highly recommend the use of lock files whenever you add things to your cron, so that you don't spawn multiple instances of whatever you are running; especially if the command is system or bandwidth intensive. It's very easy in this situation to skip lock files, and then have your system crash or become non-responsive because the script spawned 10, 100 or maybe even 1000 instances. Using lock files in this situation will help prevent that run-away condition from occurring.
Also, you should not assume a given operation will handle file locking for you -- even with things like Yum. It's better to err on the side of caution & invoke the file locking yourself. That way, even if a given application has file locking built into it, that built-in ability simply serves as a backup, in case your file locking fails.
And one last reminder: you should check your file-locked operations from time to time, just in case something happens that prevents the file from unlocking. This is especially important in cases where you are going to cron something, because if the file never unlocks, your cronned command will simply not execute as you would have expected.
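One quick way to make that check is to try taking the lock non-blocking ('-n') with a no-op command: success means nothing currently holds it, failure means a run is still in progress (or stuck). The lock file path below reuses the example name from earlier in this article:

```shell
# Probe whether the example lock file is currently held.
if flock -n /var/lock/EU3K1qEPB6SM true; then
    echo "lock is free"
else
    echo "lock is held -- a previous run may still be going, or may be stuck"
fi
```

If the lock turns out to be held long past the expected runtime of your job, that is your cue to investigate the process holding it.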