System Administration

Adding Disk Space To An Ubuntu Server VM


This morning, I worked on an Ubuntu Server 14.04 guest whose disk space had been increased by the VMware host. LVM was configured on the guest during installation, which simplified rolling the extra space into the original partitioning scheme. Below is the set of instructions I used on this live server without any interruption in service, though your results may vary. There are GUI tools that can handle much of this, but I prefer to work in the terminal, as that knowledge tends to be more universal. Working without a GUI is also a requirement on many server systems, and in emergency recovery situations as well.

The Process

I started off by getting a view of the current disk space available on the server. I knew that the host had increased the size of disk /dev/sda from 20GB to approximately 100GB, so I used parted to output the unallocated space from the command line (Listing 1).

Listing 1

$ sudo parted /dev/sda print free
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
        32.3kB  1049kB  1016kB            Free Space
 1      1049kB  256MB   255MB   primary   ext2         boot
        256MB   257MB   1048kB            Free Space
 2      257MB   21.5GB  21.2GB  extended
 5      257MB   21.5GB  21.2GB  logical                lvm
        21.5GB  21.5GB  1049kB            Free Space

I didn’t see that extra 80GB of space anywhere. That’s because I needed to force the Linux kernel to rescan the SCSI disk for changes. This can be done through the /sys virtual file system, which gives access to kernel information, as well as giving some configuration and control access. I used sudo with bash -c so that the redirection of echo’s output would happen with root privileges, avoiding a permission denied error. Also note that the 2:0:0:0 part of the path may be different on your system.

Listing 2

$ sudo bash -c 'echo "1" > /sys/class/scsi_disk/2\:0\:0\:0/device/rescan'
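If you don’t know the 2:0:0:0 (host:channel:target:lun) address on your system, a short loop can cover every SCSI disk the kernel knows about. This is a sketch of my own, not part of the original procedure: it only prints the writes it would make, and you would drop the echo wrapper and run the writes as root to trigger the actual rescans.

```shell
# Dry run: print the rescan trigger for every SCSI disk the kernel knows
# about. Remove the surrounding echo and run as root to actually rescan.
for f in /sys/class/scsi_disk/*/device/rescan; do
    echo "would run: echo 1 > $f"
done
```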

When I ran the parted command from above again, I saw the 80GB (approximately) of additional disk space.

Listing 3

$ sudo parted /dev/sda print free
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
        32.3kB  1049kB  1016kB            Free Space
 1      1049kB  256MB   255MB   primary   ext2         boot
        256MB   257MB   1048kB            Free Space
 2      257MB   21.5GB  21.2GB  extended
 5      257MB   21.5GB  21.2GB  logical                lvm
        21.5GB  107GB   85.9GB            Free Space

Next, I used the unallocated space to create a partition using parted.

Listing 4

$ sudo parted -- /dev/sda mkpart primary 21.5GiB -1s

The two dashes right after the parted call tell parted not to expect any more options, which keeps the -1s from causing an invalid argument error. -1s tells parted to end the partition at the last sector of the disk, using the rest of the unallocated space. Otherwise, I’d have to calculate the end of the partition myself.

Once this was done, I went ahead and ran partprobe to sync the kernel’s partition table with the disk’s. This prevented me from having to reboot the server.

Listing 5

$ sudo partprobe /dev/sda

The next thing to do was to use pvcreate to create a physical volume, which can later be added into the volume group for the drive. Just make sure that your partition number (the 3 at the end of sda3) matches the partition you just created.

Listing 6

$ sudo pvcreate /dev/sda3

In order to add the physical volume to the volume group, I needed to find the volume group’s name.

Listing 7

$ sudo vgdisplay
...snip...
  --- Volume group ---
  VG Name               ustest-vg
...snip...

In my case the volume group was ustest-vg. This will most likely follow the [hostname]-vg naming convention, so change it appropriately for your server.

I was now ready to extend the volume group to include the new physical volume.

Listing 8

$ sudo vgextend ustest-vg /dev/sda3

Next, I had to find the name of the logical volume so I could extend it. Remember that the name of your volume group (ustest-vg) will vary with the hostname. Also notice that you want the LV Path that ends in root, not swap.

Listing 9

$ sudo lvdisplay
...snip...
  --- Logical volume ---
  LV Path                /dev/ustest-vg/root
...snip...

Extending the logical volume has a similar form to extending the volume group.

Listing 10

$ sudo lvextend /dev/ustest-vg/root /dev/sda3

Lastly, before I could start using the new space, the file system needed to be resized.

Listing 11

$ sudo resize2fs /dev/ustest-vg/root

Running df -h should now show the increased space available in that logical volume. In my case this was an increase from about 20GB to approximately 100GB.

Listing 12

$ df -h
...snip...
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/spwebapps1--vg-root   95G   12G   80G  13% /
...snip...
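For reference, the steps in this article condense into the dry-run sketch below. It assumes this article’s names (/dev/sda, new partition /dev/sda3, volume group ustest-vg); by default it only prints each command, and you should only set DRY_RUN=0 and run it as root after checking every step against your own system.

```shell
#!/bin/sh
# Condensed sequence from this article. DRY_RUN=1 (the default) only prints
# each step; DRY_RUN=0 actually runs it (requires root and YOUR device names).
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run sh -c 'echo 1 > /sys/class/scsi_disk/2:0:0:0/device/rescan'  # pick up the resize
run parted /dev/sda print free                      # confirm the new free space
run parted -- /dev/sda mkpart primary 21.5GiB -1s   # partition the free space
run partprobe /dev/sda                              # sync the kernel's table
run pvcreate /dev/sda3                              # new LVM physical volume
run vgextend ustest-vg /dev/sda3                    # grow the volume group
run lvextend /dev/ustest-vg/root /dev/sda3          # grow the logical volume
run resize2fs /dev/ustest-vg/root                   # grow the filesystem
```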


The process of extending LVM disk space is involved, but not difficult once you’ve stepped through it once or twice. LVM also makes the process easier than it would be with traditional partitions on your disks.

Have questions, comments, and/or suggestions? Let me know in the comments section below. Also, check out the Resources section for additional reading on this topic.


  1. The Parted User Manual
  2. How to Increase the size of a Linux LVM by expanding the virtual machine disk
  3. Pluralsight – Resizing Linux drives in VMware | Give me my space part II

Video Tip – Finding Open IP Addresses


Welcome, this is an Innovations Tech Tip. In this tip we’re going to explore a couple of ways to find open IP (Internet Protocol) addresses on your network. You might need this information if you were going to temporarily set a static IP address for a host. Even after you’ve found an open IP though, you still need to take care to avoid IP conflicts if your network uses DHCP (Dynamic Host Configuration Protocol). Please also be aware that one of these techniques uses the nmap network scanning program, which may be against policy in some environments. Even if it’s not against corporate policy, the nmap man page states that “there are administrators who become upset and may complain when their system is scanned. Thus, it is often advisable to request permission before doing even a light scan of a network.”2





The first technique that we’re going to cover is the use of the arping command to tell whether a single address is in use. arping uses ARP (Address Resolution Protocol) packets instead of ICMP (Internet Control Message Protocol) packets. This is significant because many firewalls block ICMP traffic as a security measure, so when using ICMP you’re never sure whether the host is really down or just blocking your pings. ARP pings will almost always work, because ARP packets provide the critical network function of resolving IP addresses to MAC (Media Access Control) addresses. Hosts on an Ethernet network use these resolved MAC addresses, rather than IPs, to communicate. Be aware that ARP pings will not work when you’re not on the same subnet as the host you’re trying to ping, because ARP packets are not routed. See Resource #3 below for more details.

arping has several options, but the three that we’ll be focusing on here are -I, -D, and -c . The -I option specifies the network interface that you want to use. In many cases you might use eth0 as your interface, but I’m using a laptop connected via wireless and my interface is wlan0 . The -D option checks the specified address in DAD (Duplicate Address Detection) mode. Let’s look at an example.

Listing 1

$ arping -I wlan0 -D
ARPING from wlan0
Unicast reply from [D4:4D:D7:64:C6:5F] for [D4:4D:D7:64:C6:5F] 2.094ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)

You can see that I’m pinging (a known router) with the -D option. If no replies are received, DAD mode is considered to have succeeded, and you can be reasonably sure that address is free for use. Listing 2 shows an example of what you would see if the address is not in use.

Listing 2

$ arping -I wlan0 -c 5 -D
ARPING from wlan0
Sent 5 probes (5 broadcast(s))
Received 0 response(s)

Here I’ve picked a different network address that I knew would be unused. I’ve also added the -c option mentioned above so that I could have arping stop after sending 5 requests. Otherwise arping would keep trying until I interrupted it (possibly via the Ctrl-C key combo).
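The single-address checks above can also be wrapped into a small sweep. This is a sketch of my own: the wlan0 interface and the 192.168.1.x range are assumptions for illustration, arping -D exits with status 0 only when no replies come back, and real probes need root, so the script falls back to printing the probes it would send when it can’t run them.

```shell
#!/bin/sh
# Probe a few candidate addresses in DAD mode and report the ones that look
# free. IFACE and the 192.168.1.x range are assumptions -- adjust for your
# network. Without root, arping, and the interface, just print the probes.
IFACE=${IFACE:-wlan0}
probe() {
    addr=$1
    if ! command -v arping >/dev/null 2>&1 \
       || ! ip link show "$IFACE" >/dev/null 2>&1 \
       || [ "$(id -u)" -ne 0 ]; then
        echo "would probe: arping -I $IFACE -c 2 -q -D $addr"
    elif arping -I "$IFACE" -c 2 -q -D "$addr"; then
        echo "$addr appears free"   # no replies received: DAD succeeded
    fi
}
for last in 200 201 202; do
    probe "192.168.1.$last"
done
```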

Armed with this information and a knowledge of any dynamic addressing scheme on my network, I can set a temporary static IP for a host. See Resource #1 for more information on arping.


nmap, which stands for “Network MAPper”, was “designed to rapidly scan large networks…to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics.”2 We’ll be using this to find all of the currently used IP addresses on the network.

nmap has many options and is a very deep utility, and I highly suggest spending some time reading its man page. Of all these options, the only one that we’ll be dealing with in this quick tech tip is -e. The -e option allows you to specify the interface to use when scanning the network. This is similar to the -I option of arping. The example below shows a simple usage.

Listing 3

$ nmap -e wlan0
Starting Nmap 5.21 ( ) at 2011-08-23 11:13 EDT
Nmap scan report for
Host is up (0.033s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE
23/tcp   open  telnet
53/tcp   open  domain
80/tcp   open  http
5000/tcp open  upnp
Nmap scan report for
Host is up (0.00015s latency).
Not shown: 997 closed ports
PORT     STATE SERVICE
111/tcp  open  rpcbind
5900/tcp open  vnc
8080/tcp open  http-proxy
Nmap scan report for
Host is up (0.033s latency).
Not shown: 995 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
111/tcp  open  rpcbind
139/tcp  open  netbios-ssn
445/tcp  open  microsoft-ds
2049/tcp open  nfs
Nmap done: 256 IP addresses (3 hosts up) scanned in 4.22 seconds

The first thing to notice is the notation that I used to specify the subnet mask (/24). If you’re unfamiliar with this CIDR notation, please see Resource #5 below. The next thing to notice is that nmap gives us a lot more information than just which IPs are in use. nmap also shows us things like which ports are open on each host, and which service it thinks is running on each port. As a network administrator you can use this information to get a quick overview of your network, or you can dig deeper into nmap to perform in-depth network audits. In our case we’re just looking for an open IP address to use temporarily, so we can choose one that’s not listed. Again, care needs to be taken when statically setting IPs on a network with DHCP. Have a look at Resource #4 for a more comprehensive guide to using nmap.

That concludes this Tech Tip. Have a look at for other tips, tricks, how-tos, and service offerings available from Innovations Technology Solutions. Thanks, and stay tuned for more from Innovations.


  1. man arping
  2. man nmap
  3. Gerard Beekmans – Ping: ICMP vs. ARP
  4. Network Uptime – James Messer – Secrets of Network Cartography: A Comprehensive Guide to nmap
  5. Bradley Mitchell – CIDR – Classless Inter-Domain Routing

Video Tip – Using Pipes With The sudo Command


Welcome, this is an Innovations Tech Tip. In this tip we’re going to cover how to run a command sequence, such as a pipeline, using sudo which is sometimes also pronounced “pseudo”. It may be tempting to think of the “su” in sudo as standing for “super user” since, especially if you’re an Ubuntu user, you normally use sudo to execute things as root. Something that may surprise you though is that you can use the -u option of sudo to specify a user to run the command as. This is assuming that you have the proper privileges. Have a look at the sudo man and info pages for more interesting options.




Now, if you’ve ever tried to use sudo to run a command sequence such as a pipeline, where each step required superuser privileges, you probably got a Permission denied error. This is because sudo only applies to the first command in the sequence and not the others. There are multiple ways to handle this, but two stand out to me. First, you can use sudo to start a shell (such as bash) with root privileges, and then give that shell the command string. This can be done using the -c option of bash. To illustrate how this works, I’ll start out using sudo to run cat on a file that I created in the /root directory, which I normally wouldn’t have access to.

Listing 1

$ cat /root/example.txt
cat: /root/example.txt: Permission denied
$ sudo cat /root/example.txt
[sudo] password for jwright:
You won't see this text without sudo.

If I try to use sudo with a pipeline to make a compressed backup of the /root/example.txt file, I again get the Permission denied error.

Listing 2

$ sudo cat /root/example.txt | gzip > /root/example.gz
-bash: /root/example.gz: Permission denied

Notice that it’s not the sudo cat that fails, but the unprivileged part of the pipeline: the shell opens /root/example.gz for gzip’s redirected output as the normal user, which causes the error. That’s where our technique of using bash with the -c option comes in.

Listing 3

$ sudo bash -c 'cat /root/example.txt | gzip > /root/example.gz'
$ sudo ls /root/example.gz
/root/example.gz

We can see from the ls command’s output that the compressed file creation succeeded.

The second method is similar to the first in that we’re passing a command string to bash, but we’re doing it in a pipeline via sudo.

Listing 4

$ sudo rm /root/example.gz
$ echo "cat /root/example.txt | gzip > /root/example.gz" | sudo bash
$ sudo ls /root/example.gz
/root/example.gz

Either method works; which one you use is just a matter of personal preference.
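A third pattern worth knowing (my addition, not one of the two methods above) keeps the pipeline unprivileged and uses sudo tee for just the final privileged write. The sketch below targets /tmp so it runs without root; for the example above you would write to /root/example.gz and put sudo in front of tee.

```shell
# Unprivileged demonstration of the tee pattern, writing to /tmp.
echo "example data" | gzip | tee /tmp/example.gz > /dev/null
gunzip -c /tmp/example.gz   # prints: example data

# The privileged form of the same idea would be:
#   cat /root/example.txt | gzip | sudo tee /root/example.gz > /dev/null
```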

That concludes this Tech Tip. Have a look at for other tips, tricks, how-tos, and service offerings available from Innovations Technology Solutions. Thanks, and stay tuned for more quick tips from Innovations.


  1. man bash
  2. man sudo
  3. The Ink Wells – James Cook
  4. Linux Journal – Don Marti – Running Complex Commands with sudo
  5. bash Cookbook – Albing, Vossen, Newham

Shared Library Issues In Linux




Quick Start

If you just want enough information to fix your problem quickly, you can read the How-To section of this post and skip the rest. I would highly recommend reading everything though, as a good understanding of the concepts and commands outlined here will serve you well in the future. We also have Video and Audio included with this post that may be a good quick reference for you. Don’t forget that the man and info pages of your Linux/Unix system can be an invaluable resource as well when you’re trying to solve problems.


To make things easier on you, all of the black command line and script areas are set up so that you can copy the text from them. This does make using the commands easier, but if you’re not already familiar with the concepts presented here, typing the commands yourself and working through why you’re typing them will help you learn more. If you hit problems along the way, take a look at the Troubleshooting section near the end of this post for help.

There are formatting conventions that are used throughout this post that you should be aware of. The following is a list outlining the color and font formats used.

Command Name or Directory Path
Warning or Error
Command Line Snippet With Commands/Options/Arguments
Command Options and Their Arguments Only

Where listings of command options are provided, anything with square brackets around it ("[" and "]") is an argument to the option, and a pipe ("|") means that you can choose one of two alternatives ([4|6] means choose 4 or 6).


This post is geared more toward system administrators than software developers, but anyone can make good use of the information that you’re going to see here. The Resources section holds links to take your study further, even into the developer realm. I’m going to start off by giving you a brief background on shared libraries and some of the rules that apply to their use. Listing 1 shows an example of an error you might see after installing PostgreSQL via a bin installer file. In this post, I’m going to step through some commands and techniques to help you deal with this type of shared library problem. I’ll also work through resolving the error in Listing 1 as an example, and give you some tips and tricks as well as items to help you if you get stuck.

Listing 1

$ ./psql
./psql: error while loading shared libraries: cannot open shared object file: No such file or directory


Background

Shared libraries are one of the many strong design features of Linux, but they can lead to headaches for inexperienced users, and even for experienced users in certain situations. Shared libraries allow software developers to keep the size of their applications’ storage and memory footprints down by using common code from a single location. The glibc library is a good example of this. There are two standardized locations for shared libraries on a Linux system: the /lib and /usr/lib directories. On some distributions /usr/local/lib is included as well, but check the documentation for your specific distribution to be sure. These are not the only locations that you can use for libraries, though, and I’ll talk about how to use other library directories later. According to the Filesystem Hierarchy Standard (FHS), /lib is for the shared libraries and kernel modules required to boot the system and run the binaries in the root filesystem (/bin and /sbin), while /usr/lib holds most of the internal libraries that are not meant to be executed directly by users or shell scripts. The /usr/local/lib directory is not defined in the latest version of the FHS, but where it exists it normally holds libraries that aren’t part of the standard distribution, including libraries that the system administrator has compiled or installed after the initial setup. There are some other directories, like /lib/security, which holds PAM modules, but for our discussion we’ll focus on /lib and /usr/lib.

The counterpart to the dynamically linked (shared) library is the statically linked library. Whereas dynamically linked libraries are loaded and used as they are needed by the applications, statically linked libraries are either built into, or closely associated with, a program at the time it is compiled. A couple of situations where static libraries are used are when you’re trying to work around an odd or outdated library dependency, or when you’re building a self-contained rescue system. Static linking typically makes the resulting application faster to load and more portable, but increases the size (and thus the memory and storage footprint) of the binary. The size cost is also multiplied when more than one program uses the same static library. For instance, one program using a library that is 10 MB in size consumes just 10 MB of memory (1 program x 10 MB), but if you run 10 programs with the same library compiled into them, you end up with 100 MB of memory consumed (10 programs x 10 MB). Also, when programs are statically linked, they can’t take advantage of updates made to the libraries that they depend on; they are locked into whatever version of the library they were compiled with. Programs that depend on dynamically linked libraries refer to a specific file on the Linux file system, so when that file is updated, the program automatically takes advantage of the new features and fixes the next time it loads.

Shared libraries typically have the extension .so, which stands for Shared Object. Library file names are followed by a version numbering scheme which can include major and minor version numbers. A system of symbolic links is used to point the majority of programs to the latest and greatest library version, while still allowing a minority of programs to use older libraries. Listing 2 shows output that I modified to illustrate this point.

Listing 2

$ ls -l | grep libread
lrwxrwxrwx 1 root root     18 2010-03-03 11:11 ->
-rw-r--r-- 1 root root 217188 2009-08-24 19:10
lrwxrwxrwx 1 root root     18 2010-02-02 09:34 ->
-rw-r--r-- 1 root root 225364 2009-09-23 08:16

You can see in the output that there are two versions of libreadline installed side-by-side (5.2 and 6.0). The version numbers are in the form major.minor, so 5 and 6 are major version numbers, with 2 and 0 being minor version numbers. You can usually mix and match libraries with the same major version number and differing minor numbers, but it can be a bad idea to use libraries with different major numbers in place of one another. Major version number changes usually represent significant changes to the interface of the library, which are incompatible with previous versions. Minor version numbers are only changed when an update such as a bug fix is added without significantly changing how the library interacts with the outside world. Another thing that you’ll notice in Listing 2 is that there are links from to, and from to This is so that programs that depend on the 5 or 6 series of the library don’t have to figure out where the newest version is. If an application works with major version 6 of the library, it doesn’t care whether it grabs 6.0, 6.5, or 6.9 as long as it’s compatible, so it just looks at the base name of the library and takes whatever that’s linked to. There are also a couple of other situations that you’re likely to encounter with this linking scheme. The first is that you may see a link file name containing no version numbers at all (e.g. that points to the actual library file ( Also, even though I said that libraries with different major version numbers are risky to mix, there are situations where you will see an earlier major version number (say, linked to a newer version of the library ( This should only happen when your distribution maintainers or system administrators have made sure that nothing will break by doing this. Listing 3 shows an example of the first situation.

Listing 3

$ ls -l | grep ".*so "
lrwxrwxrwx 1 root root 18 2010-02-02 14:48 ->

All things considered, the shared library methodology and numbering scheme do a good job of ensuring that your software can maintain a smaller footprint, make use of the latest and greatest library versions, and still have backwards compatibility with older libraries when needed. With that said, the shared library model isn’t perfect. There are some disadvantages to using shared libraries, but those disadvantages are typically considered to be outweighed by the benefits. One disadvantage is that shared libraries can slow the load time of a program. This is only a problem the first time the library is loaded, though. After that, the library is in memory, and other applications that are launched won’t have to reload it. One of the most potentially dangerous drawbacks of shared libraries is that they can create a central point of failure for your system. If there is a library that a large set of your programs rely on and it gets corrupted, deleted, overwritten, etc., all of those programs will probably break. If any of the programs that were just taken down are needed to boot your Linux system, you’ll be dead in the water and in need of a rescue CD.

While I would argue that dependency chains are not really a “problem”, they can work hardships on a system administrator. A dependency chain happens when one library depends on another library, then that one depends on another, and another, and so on. When dealing with a dependency chain, you may have satisfied all of the first level dependencies, but your program still won’t run. You have to go through and check each library in turn for a dependency chain, and then follow that chain all the way through, filling in the missing dependencies as you go.

One final problem with shared libraries that I’ll mention again is version compatibility. You can end up with two different applications that require different, incompatible versions of the same library. That is the reason for the version numbering system I talked about above. Robust package management systems have also helped ease shared library problems from the user’s perspective, but the problems still exist in certain situations. Any time that you compile and/or install an application or library yourself on your Linux system, you have to keep an eye out for problems, since you don’t have the benefit of a package manager ensuring library compatibility.

 (or, for older a.out binaries, is itself a library, and is responsible for managing the loading of shared libraries in Linux. For the purposes of this post, we’ll be working with; if you need or want to learn more about the older style a.out loading/linking, have a look at the Resources section. The library reads the /etc/ file, a non-human-readable cache that is updated when you run the ldconfig command. When loading shared libraries, determines where to look by first checking the value of the LD_LIBRARY_PATH environment variable, then the contents of the /etc/ cache, and finally the default path of /lib followed by the /usr/lib directory.

The LD_LIBRARY_PATH environment variable is a colon-separated list of directories that preempts all of the other library paths in the dynamic linker’s search order. This means that you can use it to temporarily alter library paths when you’re trying to test a new library before rolling it out to the entire system, or to work around problems. This variable is typically not set by default on Linux distributions, and should not be used as a permanent fix. Use it with care, and give preference to the other library search path configuration methods. A handy thing about the LD_LIBRARY_PATH variable is that, since it’s an environment variable, you can set it on the same line as a command and the new value will only affect that command, not the parent environment. So, you would issue a command line like LD_LIBRARY_PATH="/home/user/lib" ./program to run program and force it to use the experimental shared libraries in /home/user/lib in preference to any others on the system. The shell that you run program in never sees the change to LD_LIBRARY_PATH. Of course you can also use the export command to set this variable, but be careful, because doing that affects every subsequent command run from that shell. One final thing about the LD_LIBRARY_PATH variable is that you don’t have to run ldconfig after changing it. The changes take effect immediately, unlike changes to /lib, /usr/lib, and /etc/ I’ll explain more about ldconfig later.
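The per-command scoping is easy to demonstrate with LD_LIBRARY_PATH itself; the /home/user/lib path here is just a stand-in:

```shell
# The variable is visible to the single command it prefixes...
LD_LIBRARY_PATH="/home/user/lib" sh -c 'echo "child sees: $LD_LIBRARY_PATH"'
# ...but the parent shell itself never sees the change.
echo "parent sees: '${LD_LIBRARY_PATH:-}'"
```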

You can use the library by itself to list which libraries a program depends on. Its behavior is very much like the ldd command that we’ll talk about next, because ldd is actually a wrapper script that adds more sophisticated behavior to In most cases ldd should be your preferred command for listing required shared libraries. In order to use to get a listing of the libraries the ls command depends on, you would type /lib/ --list /bin/ls, swapping the 2 out for whatever major version of the library your system is running. I’ve shown some of the command line options for in Listing 4.

Listing 4

--list                 Lists all library dependencies for the executable
--verify               Verifies that the program is dynamically linked and that the linker can handle it
--library-path [PATH]  Overrides the LD_LIBRARY_PATH environment variable and uses PATH instead

You can start a program directly with using the command line form /lib/ --library-path FULL_LIBRARY_PATH FULL_EXECUTABLE_PATH, where you replace the 2 with whatever version of the library you are using. An example would be /lib/ --library-path /home/user/lib /home/bin/program, which would run program using /home/user/lib as the location to look for required libraries. This should be used for testing purposes only, though, and not as a permanent fix on a production system.

Introducing ldd

The name of the ldd command comes from its function, which is to “List Dynamic Dependencies”. As mentioned in the previous section, by default the ldd command gives you the same output as issuing the command line /lib/ --list FULL_EXECUTABLE_PATH. Each library entry in the output includes a hexadecimal number, which is the load address of the library and can change from run to run. Chances are that system administrators will never need to know this value, but I’ve mentioned it here because some people may be curious. Listing 5 shows a few of the options for ldd that I use the most.

Listing 5

-d --data-relocs      Perform data relocations and report any missing objects
-r --function-relocs  Perform relocations for both data objects and functions, and report any missing objects or functions
-u --unused           Print unused direct dependencies
-v --verbose          Print all information, including e.g. symbol versioning information

Keep in mind that you have to give ldd the full path to the binary/executable for it to work. The only way to work around giving ldd the full path is to use cd to change into the directory where the binary is. Otherwise you get an error like ldd: ./ls: No such file or directory. The only time that you would need to run ldd with root privileges would be if the binary has restrictive permissions placed on it.

As I mentioned in the Background section, you need to be aware of dependency chains when using shared libraries. Just because you’ve run the ldd command on an executable and satisfied all of its top level dependencies doesn’t mean that there aren’t more dependencies lurking underneath. If your program still won’t run, you should check each of the top level libraries to see if any of them have their own library dependencies that are unmet. You continue that process, running ldd on each library in each layer, until you’ve satisfied all of the dependencies.
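Walking a dependency chain by hand gets tedious, so it can be scripted. Below is a sketch that assumes a glibc-style ldd whose output lines look like " => /path (0xaddr)"; /bin/sh is just a convenient example binary.

```shell
#!/bin/sh
# Recursively run ldd on a binary and then on each library it reports,
# printing each "parent -> dependency" edge exactly once.
seen=$(mktemp)
walk() {
    ldd "$1" 2>/dev/null | awk '$3 ~ /^\// {print $3}' | while read -r lib; do
        grep -qx "$lib" "$seen" && continue   # already visited this library
        echo "$lib" >> "$seen"
        echo "$1 -> $lib"
        walk "$lib"                           # follow the chain one level down
    done
}
walk /bin/sh
rm -f "$seen"
```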

Introducing ldconfig

Any time that you make changes to the installed libraries on your system, you’ll want to run the ldconfig command with root privileges to update your library cache. ldconfig rebuilds the /etc/ cache of currently installed libraries based on what it first finds in the directories listed in the /etc/ file, and then in the /lib and /usr/lib directories. The /etc/ file is formatted in binary by ldconfig, so it’s not designed to be human readable and should not be edited by hand. Formatting the file this way makes it more efficient for the system to retrieve the information. The /etc/ file may include a directive that reads include /etc/*.conf, which tells ldconfig to check the /etc/ directory for additional configuration files. This allows the easy addition of configuration files to load third-party shared libraries such as those for MySQL. On some distributions, this include directive may be the only line you find in /etc/
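As an illustration, here’s what a hypothetical drop-in for a self-compiled MySQL might look like. The sketch writes the file to /tmp so it’s harmless to run; the real file would live in /etc/ and be followed by a run of sudo ldconfig.

```shell
# Create a sample drop-in file (one library directory per line).
cat > /tmp/mysql-example.conf <<'EOF'
/usr/local/mysql/lib
EOF
cat /tmp/mysql-example.conf

# Real-world form (requires root):
#   sudo mv /tmp/mysql-example.conf /etc/
#   sudo ldconfig
```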

You often need to run ldconfig manually because a Linux system cannot always know when you have made changes to the currently installed libraries. Many package management systems run ldconfig as part of the installation process, but if you compile and/or install a library without using the package management system, the system software may not know that there is a new library present. The same applies when you remove a shared library.

Listing 6 holds several options for the ldconfig command. This is by no means an exhaustive list, so be sure to check the man page for more information.

Listing 6

-C [file]         Specifies an alternate cache file other than /etc/
-f [file]         Specifies an alternate configuration file other than /etc/
-n                Rebuilds the cache using only directories specified on the command line, skipping the standard directories and those listed in /etc/
-N                Only updates the symbolic links to libraries, skipping the cache rebuilding step
-p --print-cache  Lists the shared library cache; the output usually needs to be piped to the less command because of its length
-v --verbose      Gives output information about version numbers, links created, and directories scanned
-X                Opposite of -N, it rebuilds the library cache and skips updating the links to the libraries

ldconfig is not the only method used to rebuild the library cache. Gentoo handles this task in a slightly different way, which I’ll talk about next.

Introducing env-update

Gentoo takes a slightly different path to updating the cache of installed libraries, which includes the use of the env-update script. env-update reads library path configuration files from the /etc/env.d directory in much the same way that ldconfig reads files from /etc/ld.so.conf.d via the include directive. env-update then creates a set of files within /etc, including /etc/profile.env and /etc/ld.so.conf. After this, env-update runs ldconfig so that it reloads the cache of libraries into the /etc/ld.so.cache file.
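As an illustration, a library path entry under /etc/env.d might look something like the following sketch. The file name 99local is hypothetical; LDPATH is the variable that env-update collects from these files and merges into /etc/ld.so.conf before running ldconfig.

```
# /etc/env.d/99local (hypothetical file name)
# env-update gathers LDPATH values from the files in this directory
LDPATH="/opt/PostgreSQL/8.4/lib"
```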


A Sample Scenario

Hopefully, by the time you’re reading this section, you either have or are beginning to get a pretty good understanding of the commands used when dealing with shared libraries. Now I’m going to take you through a sample scenario of a PostgreSQL installation running on CentOS 5.4 to demonstrate how you would use these commands.

I downloaded a bin installer to use on my CentOS installation instead of the PostgreSQL Yum repository because I wanted to install a specific older version of Postgres outside of the package management system. In most cases you’ll want to use a repository with your package management system, as you’ll get a more integrated installation that can be kept up to date more easily. That’s assuming that your Linux distribution offers a repository mechanism for installing and updating packages, though not all distributions do.

After installing Postgres via the bin file, I take a look around and see that the majority of the PostgreSQL files are in the /opt/PostgreSQL directory. I decide to experiment with the binaries under the pgAdmin3 directory, and so I use the cd command to move to /opt/PostgreSQL/8.4/pgAdmin3/bin. Once I’m there, I try to run the psql command and get the output in Listing 7 (same as Listing 1).

Listing 7

$ ./psql
./psql: error while loading shared libraries: cannot open shared object file: No such file or directory

There might be some of you reading this who will realize that I could have probably avoided the library error in Listing 7 by running the psql command from the /opt/PostgreSQL/8.4/bin directory. While this is true, for the sake of this example I’m going to forge ahead trying to figure out why it won’t run under the pgAdmin3 directory.

The main thing that I take away from the output in Listing 7 is that there is a shared library that the dynamic loader cannot find for psql. To dig just a little bit deeper, I use the ldd command and get the output in Listing 8.

Listing 8

$ ldd ./psql
	=> (0x003fc000)
	=> not found
	=> /usr/lib/ (0x00845000)
	=> /lib/ (0x0054f000)
	=> not found
	=> not found
	=> /usr/lib/ (0x00706000)
	=> /usr/lib/ (0x003d5000)
	=> not found
	=> /lib/ (0x00325000)
	=> /lib/ (0x004a3000)
	=> /lib/ (0x0031f000)
	=> /lib/ (0x0033f000)
	=> /lib/ (0x001d7000)
	=> /lib/ (0x00532000)
	=> /usr/lib/ (0x0079e000)
	=> /lib/ (0x0052d000)
	=> /usr/lib/ (0x006f6000)
	=> /lib/ (0x005ae000)
	=> /lib/ (0x00518000)
	/lib/ (0x001b9000)
	=> /lib/ (0x003bb000)
	=> /lib/ (0x00373000)

Notice that the error given in Listing 7 only shows the first shared library that’s missing. As you can see in Listing 8, that doesn’t mean that other libraries aren’t missing as well.
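You can pull every missing library out of ldd’s output at once by filtering for the “not found” lines. The sketch below runs against captured sample output with hypothetical library names, but the same pipeline works on a real binary via ldd ./psql:

```shell
# Sample ldd output saved to a file; the library names are made up.
# On a real system you would feed in the output of: ldd ./psql
cat > /tmp/ldd-sample.txt << 'EOF'
liba.so.1 => not found
libc.so.6 => /lib/libc.so.6 (0x0054f000)
libb.so.2 => not found
EOF

# Keep only the "not found" lines, then print just the library names
grep 'not found' /tmp/ldd-sample.txt | awk '{print $1}'
```

For the sample data above, this prints liba.so.1 and libb.so.2, one per line, giving you a complete list to hunt down.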

My next step is to see if the missing libraries are already installed somewhere on my system using the find command. If the libraries are not already installed, I’ll have to use the package management system or the Internet to figure out which package(s) I need to install to get them. Listing 9 shows the output from the find command.

Listing 9

$ sudo find / -name 
/opt/PostgreSQL/8.4/lib/
/opt/PostgreSQL/8.4/pgAdmin3/lib/

After looking in both of the directories shown in the output, I notice that all of my other missing libraries are housed within them. If you were just temporarily testing some new features of the psql command, you could use the export command to set the LD_LIBRARY_PATH environment variable as I have in Listing 10.

Listing 10

$ export LD_LIBRARY_PATH="/opt/PostgreSQL/8.4/lib/"
bash-3.2$ ./psql
Password:
psql (8.4.3)
Type "help" for help.

postgres=#

You can see that once I’ve set the LD_LIBRARY_PATH variable, all I have to do is enter my PostgreSQL password and I’m greeted with the psql command line interface. I’ve used the /opt/PostgreSQL/8.4/lib/ library directory instead of the one beneath the pgAdmin3 directory as a matter of preference; in this case both directories include the same required libraries. For a permanent solution, we can add the path via the /etc/ld.so.conf file.
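A related trick: if you only need the library path for a single invocation, you can put the assignment on the same command line instead of exporting it, and the variable never leaks into the rest of your session. The snippet below demonstrates the shell behavior with a stand-in variable name (MYLIBPATH) so that it doesn’t disturb any real LD_LIBRARY_PATH setting:

```shell
# An assignment prefixed to a command is visible only to that command
MYLIBPATH="/opt/PostgreSQL/8.4/lib" sh -c 'echo "child sees: $MYLIBPATH"'

# Back in this shell, the variable was never actually set
echo "parent sees: ${MYLIBPATH:-unset}"
```

The child process sees the path, while the parent shell reports the variable as unset. The same pattern works with LD_LIBRARY_PATH itself, e.g. LD_LIBRARY_PATH="/opt/PostgreSQL/8.4/lib/" ./psql.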

I could just add /opt/PostgreSQL/8.4/lib/ directly to the /etc/ld.so.conf file on its own line, but since the file on my installation has the include /etc/ld.so.conf.d/*.conf directive, I’m going to add a separate conf file instead. In Listing 11 you can see that I’ve echoed the PostgreSQL library path into a file called postgres-i386.conf under the /etc/ld.so.conf.d directory. After checking to make sure the file has the directory in it, I run the ldconfig command to update the library cache.

Listing 11

$ sudo -s
Password:
[root@localhost bin]# echo /opt/PostgreSQL/8.4/lib > /etc/ld.so.conf.d/postgres-i386.conf
[root@localhost bin]# cat /etc/ld.so.conf.d/postgres-i386.conf
/opt/PostgreSQL/8.4/lib
[root@localhost bin]# /sbin/ldconfig
[root@localhost bin]# exit
exit

Make sure that you unset the LD_LIBRARY_PATH variable, though, so that you can verify that it was your configuration file change that fixed the problem and not the environment variable. Issuing a command such as unset LD_LIBRARY_PATH will accomplish this for you.

There are many scenarios beyond the one in this example, but it gives you the concepts used to work through the majority of shared library problems that you’re likely to come up against as a system administrator. If you’re interested in delving more deeply though, there are several links in the Resources section that should help you.

Tips and Tricks

  • I have read that running ldd on an untrusted program can open your system up to a malicious attack. This happens when an executable’s embedded ELF information is crafted in such a way that it will run itself by specifying its own loader. The man pages on the Ubuntu and Red Hat systems that I checked don’t mention anything about this security concern, but you’ll find a very good article by Peteris Krumins in the Resources section of this post. I would suggest at least skimming Peteris’ post so that you’re aware of the security implications of running ldd on unverified code.
  • Although it’s a little bit beyond the scope of this post, you can compile a program from source and manually control which libraries it links to. This is yet another way to work around library compatibility issues. You use the GNU Compiler Collection (gcc) along with its -L and -l options to accomplish this. Have a look at item 13 (the YoLinux tutorial) in the Resources section for an example, and the gcc man page for details on the options.
  • Have a look at the readelf and nm commands if you want a more in-depth look at the internals of the binaries and libraries that you’re working with. readelf shows you some extra information on your ELF files by reading and parsing their internal information, and nm lists the symbols (functions, etc) within an object file.
  • You can temporarily preempt your current set of libraries and their functions with the LD_PRELOAD environment variable and/or the /etc/ld.so.preload file. Once these are set, the dynamic library loader will use the preloaded libraries/functions in preference to the ones that you have cached using ldconfig. This can help you work around shared library problems in a few instances.
  • If you run into a program that has its required library path(s) hard coded into it, you can create symbolic links from each one of the missing libraries to the location that’s expected by the executable. This technique can also help you work around incompatibilities in the naming conventions between what your system software expects, and what libraries are actually named. I talk about using symbolic links in this way a little more in the Troubleshooting section.
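The symbolic link technique from the last tip can be sketched like this. The library names here are hypothetical; the idea is to create the name the program expects as a pointer to a compatible library that you actually have installed:

```shell
# Work in a scratch directory with a stand-in library file
mkdir -p /tmp/liblink-demo
cd /tmp/liblink-demo
touch libexample.so.1.2.3   # the library version actually on disk

# The (hypothetical) program wants libexample.so.1, so create that
# name as a symbolic link pointing at the real file
ln -sf libexample.so.1.2.3 libexample.so.1

ls -l libexample.so.1
```

In real use you would create the link in the library’s own directory (or another directory on the loader’s search path) and run ldconfig afterward so that the cache picks up the new name.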

Helpful Scripts

These scripts are somewhat simplified, and in most cases the same tasks could be accomplished in other ways, but they work to illustrate the concepts. If you use these scripts, make sure you adapt them to your situation. Never run a script or command without understanding what it will do to your system.

The first script shown in Listing 12 can be used to search directory trees for binaries with missing libraries. It makes use of the ldd and find commands to do the bulk of the work, looping through their output. Since I have heavily commented the scripts in Listing 12 and Listing 13, I won’t explain the details of how they work in this text.

Listing 12

#!/bin/bash -

# These variables are designed to be changed if your Linux distro's ldd output
# varies from Red Hat or Ubuntu for some reason
iself="not a dynamic executable" # Used to see if executable is not dynamic
notfound="not.*found"            # Used to see if ldd doesn't find a library

# Step through all of the executable files in the user specified directory
for exe in $(find $1 -type f -perm /111)
do
    # Check to see if ldd can get any information from this executable. It won't
    # if the executable is something like a script or a non-ELF executable.
    if [ -z "$(ldd $exe | grep -i "$iself")" ]
    then
        # Step through each of the lines of output from the ldd command
        # substituting : for a delimiter instead of a space
        for line in $(ldd $exe | tr " " ":")
        do
            # If ldd gives us output with our "not found" variable string in it,
            # we'll need to warn the user that there is a shared library issue
            if [ -n "$(echo "$line" | grep -i "$notfound")" ]
            then
                # Grab the first field, ignoring the words "not" or "found".
                # If we don't do this, we'll end up grabbing a field with a
                # word and not the library name.
                library="$(echo $line | cut -d ":" -f 1)"
                printf "Executable %s is missing shared object %s\n" $exe $library
            fi
        done
    fi
done

When run on the /opt/PostgreSQL directory mentioned above, it finds all of the programs that exhibit our missing library problem. As it stands now, this script will only check the first layer of library dependencies. One way to improve it would be to make the script follow the dependency chain of every library to the end, making sure that there is not a library farther down the chain that is missing. Better yet, you could add a “max-depth” option so that the user could specify how deeply into the dependency chain they wanted the script to check before moving on. A max-depth setting of “0” would allow the user to specify that they wanted the script to follow the dependency chain to the very end.

In Listing 13, I have created a wrapper script that could be used when developing new software, or as a last-ditch effort to work around a really tough shared library problem. It utilizes the shell’s feature of temporarily setting an environment variable for a command by placing the assignment on the same line as the command itself. That way we’re not setting LD_LIBRARY_PATH for the overall environment, which could cause problems for other programs if there are library naming conflicts.

Listing 13

#!/bin/bash -

# Set up the variables to hold the PostgreSQL lib and bin paths. These paths may
# vary on your system, so change them accordingly.
LIB_PATH=/opt/PostgreSQL/8.4/lib               # Postgres library path
BIN_FILE=/opt/PostgreSQL/8.4/pgAdmin3/bin/psql # The binary to run

# Start the specified program with the library path set and have it replace this
# script's process. Note that this will not change LD_LIBRARY_PATH in the
# parent shell.
LD_LIBRARY_PATH="$LIB_PATH" exec "$BIN_FILE" "$@"

I’ve broken the library and binary paths out into variables to make it easier for you to adapt this script for use on your system. This script could easily serve as a template for other wrapper scripts as well, anytime that you need to alter the environment before launching a program. Remember though that this wrapper script should not be used for a permanent solution to your shared library problems unless you have no other choice.

Troubleshooting

In some cases, a program may have been hard coded to look for a specific library on your system in a certain path, thus ignoring your local library settings. In order to fix this problem, you can research what version/path of the library the program is looking for and then create a symbolic link between the expected library location and a compatible library. In some cases you can recompile the program with options set to change how/where it looks for libraries. If the programmer was really kind, they may have included a command line option to set the library location, but this would be the exception rather than the rule when library locations are hard coded.

The ldd command will not work with older style a.out binaries, and will probably give output mentioning “DLL jump” if it encounters one. It’s a good idea not to trust what ldd tells you when you’re running it on these types of binaries because the output is unpredictable and inaccurate. Newer ELF binaries have support for ldd built into them via the compiler, which is why they work.
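One quick sanity check before trusting ldd is to confirm that you’re dealing with an ELF binary at all. The first four bytes of an ELF file are the value 0x7f followed by the letters “ELF”; the file command will report this for you as well, but the raw check below works even on minimal systems:

```shell
# Read the first four bytes of a binary and strip the leading 0x7f
# (octal 177); /bin/sh is an ELF executable on any modern Linux system
magic=$(head -c 4 /bin/sh | tr -d '\177')

if [ "$magic" = "ELF" ]
then
    echo "/bin/sh is an ELF binary"
fi
```

Substitute the path of the binary that you’re investigating for /bin/sh to check it.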

Just because the dynamic linker finds a library doesn’t mean that the library isn’t missing “symbols” (things like functions/subroutines). If this happens, you may be able to match the ldd command output to libraries that are installed, but your program will still have unpredictable behavior (like not starting or crashing) when it tries to access the symbol(s) that are missing. In this case the ldd command’s -d and -r options can give you more information on the missing symbols, and you’ll need to dig deeper into the software developer’s documentation to see if there are compatibility issues with the specific version of the library that you’re running. Remember that you can always use the LD_LIBRARY_PATH variable to temporarily test different versions of the library to see if they fix your problem.

There may be some rare cases where ldconfig cannot determine a library’s type (libc4, libc5, or libc6) from its embedded information. If this happens, you can specify the type manually in the /etc/ld.so.conf file with a directive like dirname=TYPE, where TYPE can be libc4, libc5, or libc6. According to the man page for ldconfig, you can also specify this information directly on the command line to keep the change temporary.

If you have stubborn library problems that you just can’t seem to get a handle on, you might try setting the LD_DEBUG environment variable. Try typing export LD_DEBUG="help" first and then run a command (like ls) so that you can see what options are available. I normally use “all“, but you can be more selective on your choices. The next time that you run a program, you’ll see output that is like a stack trace for the library loading process. You can follow this output through to see where exactly your library problem is occurring. Issue unset LD_DEBUG to disable this debugging output again.
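As a concrete sketch, here is one way to capture the loader’s search trace to a file for later study. This assumes a glibc-based system; dynamic loaders from other C libraries may ignore LD_DEBUG entirely:

```shell
# Run a harmless command with library search debugging enabled.
# The trace is written to stderr, so redirect it to a file.
LD_DEBUG=libs ls / > /dev/null 2> /tmp/ld-debug.log

# Each line of the trace shows a path the loader tried while
# resolving one of the command's shared libraries
head -n 5 /tmp/ld-debug.log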

Conclusion

I hope that this post has armed you with the knowledge that you need to solve any shared library problems that you might come up against. Work through shared library problems step-by-step by determining what library or libraries are needed, finding out whether they’re already installed, installing any missing libraries, and making sure that your Linux distribution can find them, and you should have no trouble fixing most dynamic library issues. If you have any questions, or have any information that should be added to this post, leave a comment or drop me an email. I welcome your feedback.

Resources

  1. IBM developerWorks Article On Shared Libraries By Peter Seebach
  2. Linux Foundation Reference On Statically And Dynamically Linked Libraries (Developer Oriented)
  3. LPIC-1 : Linux Professional Institute Certification Study Guide By Roderick W. Smith
  4. LPIC-1 In Depth By Michael Jang
  5. How-To On Shared Libraries From Linux Online
  6. Stack Overflow Post Giving An Example Of A Shared Library Problem
  7. OpenGuru Post On Shared Library Problem Caused By Not Having /usr/local/lib In /etc/ld.so.conf
  8. An Introduction To ELF Binaries By Eric Youngdale (Linux Journal)
  9. Short Explanation Of How To Tell a.out and ELF Binaries Apart
  10. Post On ldd Arbitrary Code Execution Security Issues By Peteris Krumins
  11. The Text Version Of The Filesystem Hierarchy Standard (Version 2.3)
  12. A Linux Documentation Project gcc Reference Covering Shared Libraries
  13. YoLinux Library Tutorial Including A gcc Linking Example
  14. Article By Johan Petersson Explaining What linux-gate.so.1 Is And Why You’ll Never Find It