Probing into your systems with /proc

Credit: flickr / Karl-Martin Skontorp


The /proc file system is not just a collection of odd-looking files that only the kernel understands. It's really something of a peephole into your system, and there are quite a number of useful things you can learn from the files it contains.

So, what do you see when you cd over to /proc? Run ls and the first thing you're likely to notice is the very large group of directories with just numbers for names. These numbers correspond to the process IDs (PIDs) of the processes running on your system -- everything from the init process that started the boot-time ball rolling to the shell you're using right now. And you're likely to see quite a lot of them -- probably several hundred or more.

$ cd /proc
$ ls
1     15878 38   433 5266 579  67   7521 devices
10    1589  39   434 5267 5792 6788 7523 diskstats
10052 16    393  435 5268 58   6793 7525 dma
1021  1623  3956 436 5269 580  6794 7529 driver
10522 16571 3957 437 5270 581  6795 7531 execdomains
10552 16585 3958 438 5271 5810 6796 7533 fb
11    1695  3959 439 5272 582  6797 7535 filesystems
11984 17    3960 44  5273 583  6798 7537 fs

If you were to count the numeric (process) directories, your total should be the same as the response you'd get if you ran the command ps -ef --no-headers | wc -l (ps output without the header line).
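One quick way to make that comparison, counting only the purely numeric directory names:

```shell
# Count the numeric (process) directories in /proc
ls /proc | grep -c '^[0-9][0-9]*$'

# Compare with the number of processes ps reports
ps -ef --no-headers | wc -l
```

The two counts may differ by one or two if processes start or exit between the commands.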

The bulk of these directories will likely be owned by root but, depending on how your system is being used, you'll also see application service accounts (such as oracle in the example below) and usernames among the process owners listed.

# ls -l | more
total 0
dr-xr-xr-x  5 root      root          0 Oct  3  2013 1
dr-xr-xr-x  5 root      root          0 Oct  3  2013 10
dr-xr-xr-x  5 root      root          0 Oct  3  2013 1021
dr-xr-xr-x  5 root      root          0 Oct  3  2013 11
dr-xr-xr-x  5 oracle    oinstall      0 Feb  4 07:11 1167
dr-xr-xr-x  5 root      root          0 Jan 26 11:00 11920
dr-xr-xr-x  5 root      root          0 Mar  7  2014 11923
dr-xr-xr-x  5 gdm       gdm           0 Jan 26 11:01 11950
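Each of these numeric directories holds per-process pseudo files. A handy shortcut for poking around is /proc/self, which is a symlink to the directory of whatever process opens it -- in the example below, the readlink command itself:

```shell
# /proc/self resolves to the PID of the process reading it
readlink /proc/self

# The status file summarizes the process state in readable "name: value" lines
head -3 /proc/self/status
```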

Notice that none of these are files in the same sense as the files we see in ordinary file systems. They don't take up space on disk, and they show up as using 0 bytes even though the cat command will happily display their data for you. Many will have dates and times that correspond to the last time the system was booted (i.e., when the related processes started), while other files in /proc may appear to be updated almost constantly. Only the /proc/kcore file will have a significant size, and it might appear to be huge, though even it isn't really using disk space; its size relates to the RAM on your system.

# ls -l kcore
-r-------- 1 root root 39460016128 Feb  8 09:10 kcore

You'll also see a collection of other files in /proc with names like cpuinfo, key-users and schedstat -- names that provide clues to what these files contain. In fact, you can think of the files in /proc as falling into two categories -- those that represent processes running on your system and those that represent some aspect of the system itself.

So, what are some useful things these interesting pseudo files can tell you?

For one thing, they can tell you how long the system has been up. Check out the /proc/uptime file. This file reports the system uptime, even though that might not be immediately obvious. The number 74216960.58 in the output below probably doesn't look like an uptime report to you. But type "cat uptime" a couple of times in a row and you'll notice that the numbers are constantly changing. It's obviously keeping up with something.

$ cat /proc/uptime;sleep 10;cat /proc/uptime
74216960.58 73912315.63
74216970.58 73912325.61

As you'll note, this file actually contains two numbers. The first is the uptime of the system (as you'd expect from the name) while the second is the amount of time the system has spent idle. The numbers are constantly changing because we're always getting further from the time the system was last booted. After sleeping for ten seconds, the number on the left just happens to be 10 units larger, so it's clear that these numbers are reporting time in seconds.

No problem. A little command line math can turn those seconds into days. If we then compare the result of our calculation with the uptime command output, we'll see the connection between the numbers.

$ expr 74216970 / 60 / 60 / 24
858
$ uptime
 14:30:17 up 858 days, 23:50,  1 user,  load average: 0.08, 0.04, 0.00

Of course, almost no one would want to go through the trouble of calculating uptime with an expr command when the uptime command can tell us what we want to know directly, especially since the expr route means thinking through the sixty-seconds-per-minute, sixty-minutes-per-hour, and 24-hours-per-day conversions.

Think your system is busy? Do a little more math with these numbers and you might see something like this. Notice that I added two zeroes to the end of the idle-time figure so that the answer would represent the percentage of time this system has been idle. Yes, that's 99%. This system is clearly not straining -- at least not most of the time.

$ expr 7391232500 / 74216970
99
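The same arithmetic can be done in one pass with awk, which sidesteps expr's integer-only math. One caveat to keep in mind: on multi-CPU systems the idle figure in /proc/uptime is summed across all CPUs, so the percentage can exceed 100.

```shell
# Report uptime in days and the idle percentage directly from /proc/uptime
awk '{printf "up %.1f days, %.1f%% idle\n", $1/86400, $2/$1*100}' /proc/uptime
```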

This uptime exercise is useful because it reinforces the idea that these "files" are plucking information from the system to update the virtual file content many times a second. Note, though, that the dates and times associated with this file keep up with the current time.

$ ls -l /proc | tail -11
-r--r--r--  1 root      root          0 Feb  9 14:42 stat
-r--r--r--  1 root      root          0 Feb  9 14:42 swaps
dr-xr-xr-x 11 root      root          0 Oct  3  2013 sys
--w-------  1 root      root          0 Feb  9 14:42 sysrq-trigger
dr-xr-xr-x  2 root      root          0 Feb  9 14:42 sysvipc
dr-xr-xr-x  4 root      root          0 Feb  9 14:42 tty
-r--r--r--  1 root      root          0 Feb  9 14:42 uptime
-r--r--r--  1 root      root          0 Feb  9 14:42 version
-r--------  1 root      root          0 Feb  9 14:42 vmcore
-r--r--r--  1 root      root          0 Feb  9 14:42 vmstat
-r--r--r--  1 root      root          0 Feb  9 14:42 zoneinfo

Another file with information that will likely seem familiar is the version file. This file supplies information on your operating system version, much like the output of the uname -a command and undoubtedly tapping the same system resources.

$ cat /proc/version
Linux version 2.6.18-128.el5 (gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)) #1 SMP Wed Dec 17 11:41:38 EST 2008
$ uname -a
Linux 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38 EST 2008 x86_64 x86_64 x86_64 GNU/Linux

Another file -- the cpuinfo file -- supplies fairly extensive information on your system CPUs. While I don't want to insert all 500+ lines into this post, you can see some of the details below. The second command is simply counting up the number of CPUs.

$ head -11 /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU       X5650  @ 2.67GHz
stepping        : 2
cpu MHz         : 2660.126
cache size      : 12288 KB
physical id     : 1
siblings        : 12
core id         : 0
$ grep processor /proc/cpuinfo | wc -l
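The grep/wc pipeline can be collapsed into a single grep -c, and awk can pull out one copy of the model name (assuming, as is usually the case, that all the CPUs are identical):

```shell
# Count logical processors
grep -c '^processor' /proc/cpuinfo

# Show the model name once (assumes all CPUs report the same model)
awk -F': *' '/^model name/ {print $2; exit}' /proc/cpuinfo
```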

The vmstat file provides virtual memory statistics. Want to see what's happening with page swapping? The numbers below represent swapping activity (pages swapped in and out) since the system was booted.

$ grep pswp /proc/vmstat
pswpin 229269
pswpout 316559

If these names look familiar, you may be remembering them from sar output like that shown below.

# sar -W 10 2
Linux 3.14.35-28.38.amzn1.x86_64 (ip-172-30-0-28)       02/10/2016      _x86_64_(1 CPU)
12:17:03 PM  pswpin/s pswpout/s
12:17:13 PM      0.00      0.00
12:17:23 PM      0.00      0.00
Average:         0.00      0.00

We can also look at memory statistics. These details come in very handy when you want a detailed understanding of the memory on your system and how it's being used.

$ more /proc/meminfo
MemTotal:     37037804 kB
MemFree:      18605268 kB
Buffers:        323740 kB
Cached:       14919556 kB
SwapCached:      12068 kB
Active:       13878148 kB
Inactive:      3846048 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:     37037804 kB
LowFree:      18605268 kB
SwapTotal:    16778232 kB
SwapFree:     16309048 kB
Dirty:            9896 kB
Writeback:           0 kB
AnonPages:     2468880 kB
Mapped:        7089292 kB
Slab:           442900 kB
PageTables:     189648 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:  35297132 kB
Committed_AS: 12768916 kB
VmallocTotal: 34359738367 kB
VmallocUsed:    271696 kB
VmallocChunk: 34359466659 kB
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     2048 kB
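Since every line in meminfo follows the same "name: value kB" layout, it's easy to compute derived figures. A small sketch that reports the fraction of RAM that's free (note that MemFree ignores reclaimable buffers and cache, so the true headroom is usually larger):

```shell
# Percent of physical RAM currently free, from MemTotal and MemFree
awk '/^MemTotal/ {t=$2} /^MemFree/ {f=$2} END {printf "%.1f%% free\n", f/t*100}' /proc/meminfo
```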

Want to check on what file system types are supported by your kernel? Take a look at /proc/filesystems.

$ head -11 /proc/filesystems
nodev   sysfs
nodev   rootfs
nodev   bdev
nodev   proc
nodev   cpuset
nodev   binfmt_misc
nodev   debugfs
nodev   securityfs
nodev   sockfs
nodev   usbfs
nodev   pipefs

To view all the mounts used by your system, look at the /proc/mounts file.

$ cat /proc/mounts
rootfs / rootfs rw 0 0
/dev/root / ext3 rw,data=ordered,usrquota 0 0
/dev /dev tmpfs rw 0 0
/proc /proc proc rw 0 0
/sys /sys sysfs rw 0 0
/proc/bus/usb /proc/bus/usb usbfs rw 0 0
devpts /dev/pts devpts rw 0 0
/dev/sda1 /boot ext3 rw,data=ordered 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
/etc/auto.misc /misc autofs rw,fd=6,pgrp=5541,timeout=300,minproto=5,maxproto=5,indirect 0 0
-hosts /net autofs rw,fd=12,pgrp=5541,timeout=300,minproto=5,maxproto=5,indirect 0 0
oracleasmfs /dev/oracleasm oracleasmfs rw 0 0
//windows-server/outgoing /mnt/ActAccts cifs rw,mand,unc=\\windows-server\outgoing,username=xferSvc,uid=0,gid=0,file_mode=02767,dir_mode=0777,rsize=16384,wsize=57344 0 0
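Because each line of /proc/mounts follows the fstab-style layout (device, mount point, type, options, dump, pass), it's easy to answer targeted questions with awk. For example, which file system type backs the root mount:

```shell
# Print the file system type of whatever is mounted at /
awk '$2 == "/" {print $3}' /proc/mounts
```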

The /proc/net directory contains a wealth of network information including data for your network interfaces.

$ ls /proc/net
anycast6       ip_conntrack         netfilter  rt6_stats     tcp6
arp            ip_conntrack_expect  netlink    rt_acct       tr_rif
bonding        ip_mr_cache          netstat    rt_cache      udp
dev            ip_mr_vif            packet     snmp          udp6
dev_mcast      ip_tables_matches    protocols  snmp6         unix
dev_snmp6      ip_tables_names      psched     sockstat      wireless
if_inet6       ip_tables_targets    raw        sockstat6
igmp           ipv6_route           raw6       softnet_stat
igmp6          mcfilter             route      stat
ip6_flowlabel  mcfilter6            rpc        tcp

Examples of some /proc/net data include your arp cache and routing table.

$ cat arp
IP address       HW type     Flags       HW address            Mask     Device
                 0x1         0x2         0a:ee:74:5c:40:bd     *        eth0
                 0x1         0x2         0a:ee:74:5c:40:bd     *        eth0
$ cat route
Iface   Destination     Gateway         Flags   RefCnt  Use     Metric  Mask            MTU     Window  IRTT
eth0    00000000        01001EAC        0003    0       0       0       00000000        0       0       0
eth0    FEA9FEA9        00000000        0005    0       0       0       FFFFFFFF        0       0       0
eth0    00001EAC        00000000        0001    0       0       0       00FFFFFF        0       0       0
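The Destination, Gateway, and Mask columns in the route file are IPv4 addresses written as little-endian hex, so the bytes read back to front. A bash-specific sketch of decoding one, using a made-up value for illustration:

```shell
# Hypothetical raw value from /proc/net/route; little-endian, so the
# bytes 01 00 A8 C0 decode (last pair first) to 192.168.0.1
hex=0100A8C0
printf '%d.%d.%d.%d\n' "0x${hex:6:2}" "0x${hex:4:2}" "0x${hex:2:2}" "0x${hex:0:2}"
```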

For some files in /proc, you'll need to use your superpowers. Here we're looking into some aspects of our host-based firewall.

$ sudo cat /proc/net/ip_tables_names

You can view arp (address resolution protocol) data that your system has collected using the /proc/net/arp file. This is much the same information that you'd see using the arp command.

$ cat /proc/net/arp
IP address       HW type     Flags     HW address           Mask    Device
                 0x1         0x2       00:50:56:B1:2E:01    *       bond0
                 0x1         0x2       A4:BA:88:12:2C:5D    *       bond0
                 0x1         0x2       00:50:56:B3:0E:33    *       bond0
                 0x1         0x2       00:00:0C:07:AC:2A    *       bond0
                 0x1         0x2       00:50:52:B6:32:33    *       bond0

Or maybe you want to look into page faults.

$ grep fault /proc/vmstat
pgfault 2426152809
pgmajfault 79826

You can examine your swap partitions and swap files through the /proc/swaps file.

$ more /proc/swaps
Filename                             Type            Size    Used    Priority
/dev/mapper/VolGroup00-LogVol01      partition       16777208        514200
/swapfile                            file            1024    0       -2

Details about your system's devices are available in the /proc/sys/dev directory. Below, we look at the cdrom and raid devices.

# ls -l /proc/sys/dev/cdrom
total 0
-rw-r--r-- 1 root root 0 Feb  8 17:59 autoclose
-rw-r--r-- 1 root root 0 Feb  8 17:59 autoeject
-rw-r--r-- 1 root root 0 Feb  8 17:59 check_media
-rw-r--r-- 1 root root 0 Feb  8 17:59 debug
-r--r--r-- 1 root root 0 Feb  8 17:59 info
-rw-r--r-- 1 root root 0 Feb  8 17:59 lock
# ls -l /proc/sys/dev/raid
total 0
-rw-r--r-- 1 root root 0 Feb  8 17:59 speed_limit_max
-rw-r--r-- 1 root root 0 Feb  8 17:59 speed_limit_min

Examining the contents of one of these files, we see the maximum speed (RAID rebuild speed) that is set for the device.

# cat /proc/sys/dev/raid/speed_limit_max
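Everything under /proc/sys is a tunable sysctl, so the same setting is reachable by its dotted name with the sysctl command (assuming the md/raid module is loaded on your system):

```shell
# Read the raid rebuild speed ceiling via its sysctl name; writing works the
# same way with root privileges, e.g. sysctl -w dev.raid.speed_limit_max=50000
sysctl dev.raid.speed_limit_max
```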

A lot of the information available through /proc can also be viewed using commands like arp, netstat, and sar. Still, it's useful to be able to pull data from the kernel in one convenient location, and /proc provides a tremendous wealth of stats for anyone who wants to dive deeply into their system.

This tour of /proc and some of the extensive information that it provides was just a taste of the detail available to you. The key to making good use of all this data is deciding what kind of information you want to see and devising scripts or aliases to fetch it from the tremendously detailed files always waiting for you in /proc.



This article is published as part of the IDG Contributor Network.