Unix: Catching up with Unix errors


Unix errors often seem cryptic and sometimes even obtuse, but they're actually well designed and useful. A little insight into the whys and hows of the most common error messages can help you appreciate not just the messages themselves, but why you keep bumping into them.

From time to time one of my students, when asked on a quiz what a command such as ifconfig does, will answer "displays 'command not found'". "No," I have to tell them, "the ifconfig command isn't displaying that output. Either you misspelled the command or the executable isn't in one of the directories on your search path (i.e., $PATH)." For Unix newbies, these concepts take time to settle into their heads. But I try to press the point. "Type 'hadjtwuxx' or some line of random gibberish on your command line," I tell them. "Do you get the same error? Do you think hadjtwuxx or your random gibberish is a legitimate Unix command?" Eventually, they begin to understand that "command not found" really means just that: the command wasn't found. They're on their way toward understanding common system errors.

Some of the other errors that throw them at first are messages such as "directory not empty" or "bad interpreter". Running into these errors, and then coming to understand why, eventually helps them see that Unix commands and their output make sense if they pay attention to the errors they encounter. In fact, running into problems on the command line is one way to start on a journey that leads to a deeper understanding of how Unix works.

How these errors are captured and reported is a somewhat more interesting story. Most of the errors that you encounter on the command line when working on a Unix system are defined in a file called errno.h. The "h" stands for "header". This is a header file, sometimes referred to as an "include file" -- a basic concept for anyone who works in languages like C, but likely foreign to those whose development efforts are restricted to scripts and aliases. Like other header files that have settled onto Unix systems, errno.h makes it easier for a large set of errors to be handled consistently by a large number of executables. The source code for the executables just has to pull the file into the mix at compilation time with a line like this:

#include <errno.h>
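
To make that concrete, here's a minimal sketch of what including errno.h buys a program: after a failed library call, the global errno variable holds one of the codes defined in the header, and perror() turns it into the familiar wording. (The path used here is just a stand-in for a file that doesn't exist.)

#include <stdio.h>   /* fopen(), perror() */
#include <errno.h>   /* errno and the E* constants */

int main(void) {
    /* Try to open a file that (presumably) doesn't exist. */
    FILE *fp = fopen("/no/such/file", "r");
    if (fp == NULL) {
        if (errno == ENOENT)      /* the code defined in errno.h */
            perror("fopen");      /* prints: fopen: No such file or directory */
        return 1;
    }
    fclose(fp);
    return 0;
}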

Including this file in program source gives the programs a way to understand the various errors that are likely to take place when interacting with the system. It also makes it less likely that you'll see the same problem worded half a dozen different ways depending on what command you were running when the error occurred. In a similar manner to using exit (return code 0) or exit 3 (return code 3) when you're building a script, compilable code might contain a line like this:

return -ENOENT;
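
That negative-return style is the convention inside the kernel and in kernel-flavored code: 0 means success, and a negative errno value identifies the failure. Ordinary user-space calls usually return -1 and set the global errno instead. Here's a small sketch of the kernel-style convention (lookup_entry is a made-up name, used only for illustration):

#include <errno.h>    /* ENOENT */
#include <unistd.h>   /* access() */

/* Kernel-style convention: 0 on success, negative errno on failure.
   lookup_entry() is a hypothetical helper, not a real API. */
static int lookup_entry(const char *path) {
    if (access(path, F_OK) != 0)
        return -ENOENT;
    return 0;
}

int main(void) {
    return lookup_entry("/etc/passwd") == 0 ? 0 : 1;
}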

ENOENT? That string may not mean anything to you, but it will show up in your errno.h file in a line like this:

#define ENOENT 2 /* No such file or directory */

This shows us that ENOENT has a value of 2 and represents the condition in which a file you try to access with a command doesn't exist -- or at least the command you're running can't find it. And the executable might have been built with lines like these that report the problem. In other words, if the error we just captured matches ENOENT (has a value of 2), it's going to print the message shown.

switch (errno) {
case ENOENT:
    fprintf(stderr, "\n%s: command not found\n", ok_input[0]);
    break;
...
}
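
Here's a self-contained version of that pattern you can compile and run. It forces an ENOENT with stat() and then dispatches on errno the way the snippet above does (the path is, again, just a stand-in):

#include <stdio.h>
#include <string.h>     /* strerror() */
#include <errno.h>
#include <sys/stat.h>   /* stat() */

int main(void) {
    struct stat st;
    /* stat() a path that shouldn't exist, then dispatch on errno. */
    if (stat("/no/such/file", &st) == -1) {
        switch (errno) {
        case ENOENT:
            fprintf(stderr, "/no/such/file: No such file or directory\n");
            break;
        default:
            fprintf(stderr, "error %d: %s\n", errno, strerror(errno));
        }
        return 1;
    }
    return 0;
}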

The list of errors in errno.h goes on for a couple of pages, but it's interesting to look through the errors and imagine what you might do if you saw them in response to a command you typed. Some of them are easy to understand -- like "out of memory" (ENOMEM) and "permission denied" (EACCES) -- but others, such as "exec format error" (ENOEXEC), might be hard to generate even if you tried really hard (though, as the sketch following this list shows, it can be done if you go out of your way). I do like the "try again" error (EAGAIN), though it's hard to imagine the conditions under which a Unix command might deliver this suggestion.

#define EPERM    1   /* Operation not permitted */
#define ENOENT   2   /* No such file or directory */
#define ESRCH    3   /* No such process */
#define EINTR    4   /* Interrupted system call */
#define EIO      5   /* I/O error */
#define ENXIO    6   /* No such device or address */
#define E2BIG    7   /* Arg list too long */
#define ENOEXEC  8   /* Exec format error */
#define EBADF    9   /* Bad file number */
#define ECHILD  10   /* No child processes */
#define EAGAIN  11   /* Try again */
#define ENOMEM  12   /* Out of memory */
#define EACCES  13   /* Permission denied */

That's just a small sample of the errors that are defined on a typical Unix system.
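
As promised above, even "exec format error" can be provoked deliberately: ask the kernel to execute a file that carries the execute bit but contains no valid binary and no #! line. A sketch (the filename is made up, and execv() is used rather than execvp() because execvp() would helpfully fall back to running the file with the shell):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>    /* open() */
#include <unistd.h>   /* write(), execv(), unlink() */

int main(void) {
    /* Create an executable file that isn't a valid binary
       and has no #! line. */
    const char *msg = "hello, I am not a program\n";
    int fd = open("not_a_binary", O_CREAT | O_WRONLY | O_TRUNC, 0755);
    if (fd == -1) { perror("open"); return 1; }
    write(fd, msg, strlen(msg));
    close(fd);

    char *argv[] = { "./not_a_binary", NULL };
    execv("./not_a_binary", argv);    /* only returns on failure */
    if (errno == ENOEXEC)
        perror("execv");              /* execv: Exec format error */
    unlink("not_a_binary");
    return 0;
}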

If you look at a complete errno.h file, you're likely to find 125-150 such error conditions defined, along with descriptive strings that explain what error codes like ENOENT, E2BIG, and ENOMEM actually mean. You might not have suspected how many errors you could run into when working on the Unix command line. And these, of course, are the system errors, not those associated with particular applications that you might be hosting.
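
A quick way to see those descriptive strings without reading the header is to ask the C library directly: strerror() maps any code to its string. This little loop prints the lot (the upper bound of 133 is arbitrary, since the exact count varies from system to system; codes past the end just print as "Unknown error"):

#include <stdio.h>
#include <string.h>   /* strerror() */

int main(void) {
    /* Print each errno value alongside its descriptive string. */
    for (int e = 1; e <= 133; e++)
        printf("%3d  %s\n", e, strerror(e));
    return 0;
}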

Let's look at a few of these.

The "operation not permitted" error seems pretty obvious. Kernel routines send EPERM when you're trying to do something that you don't have permission to do -- like trying to change the "finger" information on someone else's account, trying to kill someone else's process when you're not root, or running a command that you don't have execute
permission for.
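
You can trigger it in a line of C. Run as an unprivileged user, this sketch tries to signal PID 1 (init), which you have no right to touch:

#include <stdio.h>
#include <errno.h>
#include <signal.h>   /* kill() */

int main(void) {
    /* As a regular user, you can't signal init (PID 1). */
    if (kill(1, SIGTERM) == -1 && errno == EPERM)
        perror("kill");    /* kill: Operation not permitted */
    return 0;
}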

The "command not found" error can be especially frustrating to Unix newbies when the file they're trying to use is right there in the directory they're sitting in. Unlike Windows, however, even if you're in the same directory as a file, if that directory isn't included literally in your path or referenced as the current (.) directory, you might as well be in /tmp. The shell is going to act as if the file doesn't exist unless you try to run it with ./filename or a full or relative path.

The "no such process" error will show up if you mistype a process ID and type "kill 123456" when you meant "kill 12345".

One of the other things that strains the brains of newbies is when the which command that so agreeably tells them where commands are located can't find the findme file that's right under their noses (again, maybe in the same directory). The which command won't report a file -- whether it's in a directory on your search path or in your current directory -- unless it's executable. Take execute permission away from the ls command and you'd see the same thing: which ls would no longer have anything to tell you.
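
Underneath, that's roughly an execute-permission test. This sketch creates a findme file with no execute bit and then asks the question which effectively asks about every candidate it considers:

#include <stdio.h>
#include <errno.h>
#include <fcntl.h>    /* open() */
#include <unistd.h>   /* access(), unlink() */

int main(void) {
    /* A file that exists but carries no execute permission. */
    int fd = open("findme", O_CREAT | O_WRONLY, 0644);
    if (fd != -1) close(fd);
    /* "Could I execute this?" fails with EACCES. */
    if (access("findme", X_OK) == -1 && errno == EACCES)
        perror("findme");    /* findme: Permission denied */
    unlink("findme");
    return 0;
}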

Broken pipes can be hard to grasp when you're just getting used to pipes, and maybe even when you've used them for years. If you pipe one command into another -- say, grep string file | programx -- and programx exits before grep is finished writing, grep gets hit with a "broken pipe" error. It rarely happens, but I see problems like this every now and then.
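
What's happening underneath: the writer is handed EPIPE (or a SIGPIPE signal) when the read end of the pipe has gone away. A self-contained sketch -- the signal is ignored here so the error shows up in errno rather than killing the process:

#include <stdio.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    signal(SIGPIPE, SIG_IGN);    /* report EPIPE instead of dying */
    if (pipe(fds) == -1) { perror("pipe"); return 1; }
    close(fds[0]);               /* the "reader" exits early */
    if (write(fds[1], "hi\n", 3) == -1 && errno == EPIPE)
        perror("write");         /* write: Broken pipe */
    return 0;
}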

Directory not empty errors are generally encountered when someone uses rmdir to remove a directory that still contains files; rmdir only removes empty directories. The answer, of course, is to use rm -rf instead, but this isn't immediately obvious until you've run into the problem half a dozen times.
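
The underlying call tells the story: rmdir() refuses a directory with anything still in it and hands back ENOTEMPTY (at least on Linux; some systems report EEXIST instead). A sketch with made-up names, cleaned up afterward:

#include <stdio.h>
#include <errno.h>
#include <fcntl.h>      /* open() */
#include <sys/stat.h>   /* mkdir() */
#include <unistd.h>     /* rmdir(), unlink() */

int main(void) {
    mkdir("demo_dir", 0755);
    int fd = open("demo_dir/file", O_CREAT | O_WRONLY, 0644);
    if (fd != -1) close(fd);
    if (rmdir("demo_dir") == -1 && errno == ENOTEMPTY)
        perror("rmdir");        /* rmdir: Directory not empty */
    unlink("demo_dir/file");    /* this is what rm -rf does for you */
    rmdir("demo_dir");
    return 0;
}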

Another problem I've seen Unix newbies run into a lot is when they try to loop through files without fully understanding how the for command works. The command for file in /tmp reads like it should work, but it doesn't do what it sounds like it might. That's because the for command loops through every argument it's given or that its arguments expand into, and /tmp is only /tmp. Change /tmp to /tmp/* or `ls /tmp` and you'll get the looping you were looking for.

New users also often have problems setting up cron jobs. One of the most consistent mistakes that I've seen is putting an * in the first field, thinking that a cron job like * 1 * * * /home/me/runme is going to run at 1 AM rather than every minute from 1:00 to 1:59 AM.

Keeping > and >> straight generally sinks in fairly quickly. After trashing a couple of files by overwriting them with > when they meant to append to them with >>, most users quickly reach the point at which the distinction is clear.
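
If it helps the distinction stick: the shell implements the two operators with different open() flags -- O_TRUNC wipes the file's existing contents first, while O_APPEND preserves them. A sketch of roughly what happens behind each redirection:

#include <fcntl.h>    /* open() and the O_* flags */
#include <unistd.h>   /* close() */

int main(void) {
    /* cmd >  file : truncate any existing contents */
    int trunc = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    /* cmd >> file : keep contents, write at the end */
    int append = open("out.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (trunc != -1) close(trunc);
    if (append != -1) close(append);
    return 0;
}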

The message that I try to promote is that paying attention to the errors that you run into can give you insights into how commands work and can help you to be that much more clever on the command line.
